This repo is based largely on the repo from Jayden, my former Check Point colleague, and is adapted to suit CloudGuard Geo-Cluster and Auto-Scaling labs. Original repo: https://github.com/jaydenaung/Terraform-AWS-TGW-Lab
In this tutorial, I'll do a step-by-step walk-through of automating an AWS environment consisting of three VPCs and a Transit Gateway, using Terraform.
The lab environment on AWS will consist of the following VPCs:
- Edge VPC - A NAT instance (Linux firewall) and a web server will be deployed into this VPC.
- Spoke VPC 1 - A private web server instance will be deployed into this VPC.
- Spoke VPC 2 - A private web server instance will be deployed into this VPC.
All three web servers are deployed into private subnets and are not directly exposed to the Internet. They are exposed to the Internet via the NAT instance (Linux router).
- AWS Account (and key pair)
- Terraform installed on your laptop. Follow the installation guide on the Terraform website.
- Git
git clone https://github.com/PeterGriekspoor/Terraform-AWS-TGW-Hub-Spoke
This will clone my git repo to your local directory.
In the cloned repo directory:
- Edit variables.tf and update the values accordingly.
For example, replace "yourkey" with the name of the AWS SSH key pair that you should have created on AWS beforehand.
variable "key_name" {
description = "SSH Key Pair"
default = "yourkey"
}
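If you prefer not to edit the file, the same value can also be passed on the command line with Terraform's standard -var flag (stock Terraform behaviour, not specific to this repo):
terraform apply -var="key_name=yourkey"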
In the cloned repo directory:
Execute the following and check everything is in order.
terraform plan
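If this is your first run in the directory, Terraform will tell you the working directory needs to be initialised first; the standard init (it downloads the AWS provider plugins) is:
terraform init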
If you're OK with the plan and it doesn't show any errors, you can apply it.
echo yes | terraform apply
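Equivalently, if you'd rather not pipe "yes" into the command, Terraform's built-in flag for non-interactive applies does the same thing:
terraform apply -auto-approve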
You will see the output from Terraform similar to the excerpt below:
...
aws_instance.web1: Still creating... [10s elapsed]
aws_instance.spoke2_web1: Still creating... [20s elapsed]
aws_instance.spoke1_web1: Still creating... [20s elapsed]
aws_instance.web1: Still creating... [20s elapsed]
aws_instance.vpc_edge_nat: Still creating... [20s elapsed]
aws_instance.spoke1_web1: Creation complete after 22s [id=i-05c0b63e20aa18d20]
aws_instance.vpc_edge_nat: Still creating... [30s elapsed]
aws_instance.web1: Still creating... [30s elapsed]
aws_instance.spoke2_web1: Still creating... [30s elapsed]
aws_instance.spoke2_web1: Creation complete after 32s [id=i-067daf534402246aa]
aws_instance.web1: Creation complete after 32s [id=i-0c8ff000033bcbfd8]
aws_instance.vpc_edge_nat: Still creating... [40s elapsed]
aws_instance.vpc_edge_nat: Creation complete after 42s [id=i-07189773c0ec04805]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
NAT_public_ip = 13.213.211.210
Once you see this output, the Terraform automation has been completed.
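If you lose track of the output values, you can print them again at any time from the same directory with the standard Terraform commands:
terraform output
terraform output NAT_public_ip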
If you take a look at your AWS environment at this point, you'll notice that Terraform has automatically created the following AWS resources (see the AWS network diagram):
- VPCs
- Subnets
- Route tables and respective routes
- Transit Gateway and attachments
- Transit Gateway route table
- EIP
- A NAT Instance and an internal web server in Edge VPC
- Web servers in spoke VPC 1 and spoke VPC 2
- Security Groups
- Internet gateway (IGW)
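You can also double-check the full inventory of resources Terraform now manages straight from the CLI (standard Terraform, nothing lab-specific):
terraform state list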
You will also notice that all VPCs are connected to each other via the Transit Gateway, and all traffic (east-west and north-south) between VPCs will be routed via the NAT instance (Linux firewall).
From the Terraform output above, take note of the NAT instance's public IP - "NAT_public_ip". For me, it's 13.213.211.210; it'll be different in your case.
The NAT instance is pre-configured so that all web servers sitting on internal subnets are exposed to the Internet via NAT instance's elastic IP (Public IP).
The NAT instance is listening on port 80 and is configured to NAT (forward) traffic on port 80 to the web server in the Edge VPC. To access the website, open your browser and go to the NAT instance's public IP on port 80. Note that the NAT instance's static route over eth1 to its private subnet does not survive a reboot!
http://13.213.211.210
You should be able to access the website hosted on the webserver (IP: 10.5.7.20) which is on the private subnet of the Edge VPC.
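As noted above, the static route over eth1 does not survive a reboot. If you do reboot the NAT instance, you'll need to re-add the route by hand; a minimal sketch, assuming the edge private subnet is 10.5.7.0/24 (check the actual CIDR in variables.tf):
sudo ip route add 10.5.7.0/24 dev eth1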
Validate your NAT rules in iptables
sudo iptables -t nat -L -n -v
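For reference, the port forwarding is implemented with iptables DNAT rules of roughly this shape; this is a sketch of what to look for, assuming eth0 is the public interface and the web servers listen on port 80 (the exact rules baked into the instance may differ):
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.5.7.20:80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8081 -j DNAT --to-destination 10.10.1.20:80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8082 -j DNAT --to-destination 10.20.1.20:80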
Validate IP forwarding is set to "1"
sysctl net.ipv4.ip_forward
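If it somehow reads 0, forwarding can be switched back on without a reboot:
sudo sysctl -w net.ipv4.ip_forward=1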
To validate the website from inside the lab, copy-paste your .pem key to the NAT instance using vi or nano, set its permissions, and log in to the web server in spoke VPC 2:
sudo chmod 400 mykey.pem
sudo ssh -i "mykey.pem" ubuntu@10.7.5.20
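An alternative to copying the key onto the NAT instance is SSH agent forwarding; a sketch, assuming you have an ssh-agent running locally, that the login user matches your AMI, and substituting your own NAT public IP:
ssh-add mykey.pem
ssh -A ubuntu@13.213.211.210
ssh ubuntu@10.7.5.20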
The NAT instance is listening on port 8081 and is configured to NAT (forward) traffic on port 8081 to the web server in spoke VPC 1. To access the website, open your browser and go to the NAT instance's public IP on port 8081.
http://13.213.211.210:8081
You should see the website hosted on the web server (internal IP: 10.10.1.20) which is on the private subnet of spoke VPC 1. Take note that the internal web server in the spoke VPC is exposed via the NAT instance in the Edge VPC.
The NAT instance is listening on port 8082 and is configured to NAT (forward) traffic on port 8082 to the web server in spoke VPC 2. To access the website, open your browser and go to the NAT instance's public IP on port 8082.
http://13.213.211.210:8082
You should see the website hosted on the web server (internal IP: 10.20.1.20) which is on the private subnet of spoke VPC 2. Take note that the internal web server in the spoke VPC is exposed via the NAT instance in the Edge VPC.
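If you prefer the terminal, the same checks can be done with curl (substitute your own NAT_public_ip); each request should come back with the page served by the corresponding spoke web server:
curl http://13.213.211.210:8081
curl http://13.213.211.210:8082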
If you're feeling adventurous, you can run the following test besides accessing the websites in the different VPCs. The NAT instance (Linux firewall) is pre-configured so that all traffic between VPCs (east-west traffic) goes through it.
We can test the east-west traffic by accessing one internal web server from another internal web server.
- Log in to the NAT instance via SSH, using the key pair you specified in variables.tf.
- Using the same key pair, jump to the web server in the Edge VPC.
ssh web1 -i yourkey
- From the web server in the Edge VPC, try to SSH to any web server sitting in either spoke 1 or spoke 2 via its internal IP.
ssh 10.20.1.10 -i yourkey
SSH traffic should be routed via the NAT instance, and you should be able to SSH to any of the internal web servers from any internal instance in this lab.
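If you want to see the east-west traffic actually transiting the firewall, run tcpdump on the NAT instance while you make the SSH hop; a quick sketch, assuming eth1 is the internal-facing interface (check with ip addr):
sudo tcpdump -ni eth1 tcp port 22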
Once you're satisfied with your tests, and have finished enjoying my photos of Edinburgh, you can clean up the whole lab environment by simply executing the following command.
echo yes | terraform destroy
Happy Terraform-ing!