## Introduction

These HCL modules work together to create a customizable, secure network infrastructure consisting of two VPCs linked by a VPC peering connection, a NAT gateway, an Internet gateway, EC2 instances, and a Bastion host that allows connectivity from external administrators.

They are designed to be applied in a specific order.

## How to use

The folder structure is as follows:

```
├── environments
│   ├── nonprod
│   │   ├── network
│   │   │   └── plans
│   │   └── servers
│   │       ├── keys
│   │       └── plans
│   └── prod
│       ├── network
│       │   └── plans
│       └── servers
│           ├── keys
│           └── plans
├── modules
│   ├── globalvars
│   ├── network
│   ├── nonprodvars
│   ├── prodvars
│   └── servers
├── routing
│   └── plans
├── s3_state
│   └── plans
└── vpc_peering
    └── plans
```

The modules should be run in the following order; deviating from it may cause unexpected results.

  1. `s3_state` (creates the S3 buckets used by the later modules to store their Terraform state; see the backend sketch after this list)
  2. `nonprod/network` (creates the VPC, subnets, Internet and NAT gateways)
  3. `nonprod/servers` (creates the EC2 instances, security groups, etc.)
  4. `prod/network` (creates the VPC and subnets)
  5. `prod/servers` (creates the EC2 instances, security groups, etc.)
  6. `vpc_peering` (takes VPC information from each environment to configure the peer link)
  7. `routing` (creates route tables and routes, then assigns them to subnets)
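
As a rough illustration of why `s3_state` must run first: each later module keeps its state in one of the buckets it creates. A backend configuration of this shape is typical; the bucket, key, and region values below are placeholders, not the repository's actual settings, which live inside each module.

```hcl
# Hypothetical S3 backend block -- the real bucket, key, and region
# values are defined inside each module in this repository.
terraform {
  backend "s3" {
    bucket = "nonprod-network-state" # placeholder bucket name
    key    = "terraform.tfstate"     # placeholder state object key
    region = "us-east-1"             # placeholder region
  }
}
```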

### Steps to Deploy

  1. Browse to the `s3_state` directory.
  2. Run `terraform init`, `terraform plan`, and `terraform apply` to create the four state buckets: production, nonproduction, vpc peering, and routing.
  3. Browse to the `environments/nonprod/network` directory.
  4. Run `terraform init`, `terraform plan`, and `terraform apply` to create the nonproduction network infrastructure.
  5. Browse to the `environments/nonprod/servers` directory.
  6. Create a keys directory with `mkdir keys`.
  7. Run `ssh-keygen -t rsa -f keys/nonprod-key` to create a key pair for the EC2 instances. Optionally, you may set a passphrase on the private key.
  8. Run `terraform init`, `terraform plan`, and `terraform apply -var-file=default.tfvars` to create the nonproduction EC2 instances. The `-var-file=default.tfvars` argument supplies part of the EC2 instance configuration.
  9. Save the Bastion public IP address printed at the end of the apply operation; it is used later.
  10. Browse to the `environments/prod/network` directory.
  11. Run `terraform init`, `terraform plan`, and `terraform apply` to create the production network infrastructure.
  12. Browse to the `environments/prod/servers` directory.
  13. Create a keys directory with `mkdir keys`.
  14. Run `ssh-keygen -t rsa -f keys/prod-key` to create a key pair for the EC2 instances. Optionally, you may set a passphrase on the private key.
  15. Run `terraform init`, `terraform plan`, and `terraform apply -var-file=default.tfvars` to create the production EC2 instances, again supplying the EC2 configuration via `-var-file=default.tfvars`.
  16. Browse to the `vpc_peering` directory.
  17. Run `terraform init`, `terraform plan`, and `terraform apply` to create the VPC peering connection.
  18. Browse to the `routing` directory.
  19. Run `terraform init`, `terraform plan`, and `terraform apply` to create the route tables and routes across both VPCs.

Note: Custom `.tfvars` files may be used instead, as long as they adhere to the schema of `default.tfvars`. The per-module command sequence is sketched below.
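
For reference, a minimal sketch of the sequence repeated for each `servers` module (shown here for nonprod; the key file name matches the environment being deployed):

```bash
cd environments/nonprod/servers
mkdir keys
ssh-keygen -t rsa -f keys/nonprod-key    # optionally set a passphrase

terraform init                           # download providers, configure state backend
terraform plan -var-file=default.tfvars  # preview the changes
terraform apply -var-file=default.tfvars # create the resources
```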

### Schema

```hcl
config_input = [
  {
    # EC2 Instance Object 1
    "name"    : <EC2 instance name, string value>,
    "type"    : <EC2 instance type, string value>,
    "counter" : <number of EC2 instances, integer value>,
    "az_name" : <Availability Zone name, string value>
  },
  {
    # EC2 Instance Object 2
    "name"    : <EC2 instance name, string value>,
    "type"    : <EC2 instance type, string value>,
    "counter" : <number of EC2 instances, integer value>,
    "az_name" : <Availability Zone name, string value>
  }
]
```

Note: You may add more instance objects as needed; to deploy multiple copies of the same resource, increment that object's `counter` attribute instead of duplicating it. An illustrative `config_input` is shown below.
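
For illustration only, a `config_input` following this schema might look like the example below. The names, instance type, and Availability Zones are made-up values, not the repository's defaults.

```hcl
config_input = [
  {
    # Two identical web servers in the same AZ (counter = 2)
    "name"    : "webserver",
    "type"    : "t2.micro",
    "counter" : 2,
    "az_name" : "us-east-1a"
  },
  {
    # A single database server in a second AZ
    "name"    : "dbserver",
    "type"    : "t2.micro",
    "counter" : 1,
    "az_name" : "us-east-1b"
  }
]
```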

### Steps to access private resources via the Bastion public IP

  1. On an admin machine running OpenSSH, open a terminal window.
  2. Collect the Bastion public IP address (`bastion_pub_ip`) output after deploying the `environments/nonprod/servers` module.
  3. Collect the prod and nonprod private keys you generated and place them in reachable locations.
  4. At the command prompt, run `ssh -i <private key> ec2-user@<bastion_pub_ip> -L <local port>:<private instance IP>:<remote port>` (a worked example follows this list).
  5. An SSH tunnel is now established with the Bastion EC2 instance, through which traffic to the private EC2 instances will traverse (see SSH Tunneling).
  6. To access websites on the nonprod private EC2 instances, set the `-L` values to `<local port>:<private instance IP>:80` and browse to `http://localhost:<local port>` in a web browser.
  7. To access SSH servers on any of the private EC2 instances, set the `-L` values to `<local port>:<private instance IP>:22` and, in a separate terminal window, run `ssh -i <private key> ec2-user@localhost -p <local port>`.
  8. To access the "MySQL server" bonus, set the `-L` values to `<local port>:<private instance IP>:3306` and browse to `http://localhost:<local port>` in a web browser.
  9. Note: this is a simulation in which the MySQL TCP port has been configured as the listening port of a Python HTTP web server module, so browsing to the localhost address mentioned here shows a basic web page listing a directory's contents.
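
As a concrete sketch, suppose the apply output showed a Bastion IP of `203.0.113.10` and a private web server at `10.0.1.25` (both addresses are invented for illustration; substitute your own outputs and key path):

```bash
# Open a tunnel: local port 8080 forwards through the Bastion
# to port 80 on the private instance.
ssh -i keys/nonprod-key ec2-user@203.0.113.10 -L 8080:10.0.1.25:80

# In a second terminal, reach the private web server through the tunnel:
curl http://localhost:8080
```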

### Steps to Clean Up

Destroying the deployed resources requires visiting the modules in reverse: remove the routing and peering configuration first, then each environment's servers and network, and finally the state buckets.

  1. Browse to the `routing` directory.
  2. Run `terraform destroy` to remove the route tables and routes across both VPCs.
  3. Browse to the `vpc_peering` directory.
  4. Run `terraform destroy` to remove the VPC peering connection.
  5. Browse to the `environments/nonprod/servers` directory.
  6. Run `terraform destroy -var-file=default.tfvars` to remove the nonproduction EC2 instances (the same `-var-file` argument used at deploy time).
  7. Browse to the `environments/nonprod/network` directory.
  8. Run `terraform destroy` to remove the nonproduction network infrastructure.
  9. Browse to the `environments/prod/servers` directory.
  10. Run `terraform destroy -var-file=default.tfvars` to remove the production EC2 instances.
  11. Browse to the `environments/prod/network` directory.
  12. Run `terraform destroy` to remove the production network infrastructure.
  13. Browse to the `s3_state` directory.
  14. Run `terraform destroy` to remove the four state buckets: production, nonproduction, vpc peering, and routing.