The working and test environment is a Docker container on Windows 10. The code that builds the image can be found in this repository.
Inside the modules folder are folders named 01 through 05. These build the AWS infrastructure. The infrastructure can be built either by manually provisioning every module or by running the shell script:
- Step into the home directory of the project and run . create-infra.sh - this creates the AWS infrastructure (VPC, subnet, security group, and so on). Before you run it, make sure you export the AWS access key and secret:
  export AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
  export AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
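The script's provisioning pass can be sketched roughly as the loop below. This is only a sketch of what create-infra.sh presumably does: the names `provision_all`, `TF_CMD`, and `MODULES_DIR` are hypothetical, and `TF_CMD` is overridable so the loop can be dry-run.

```shell
# Hypothetical sketch of the provisioning loop inside create-infra.sh.
# TF_CMD and MODULES_DIR are illustrative, overridable knobs (not from the repo).
TF_CMD="${TF_CMD:-terraform}"
MODULES_DIR="${MODULES_DIR:-modules}"

provision_all() {
  # Provision in ascending order: later modules depend on resources
  # (VPC, subnet, security group, ...) created by earlier ones.
  for m in 01 02 03 04 05; do
    for dir in "$MODULES_DIR"/"$m"*; do
      [ -d "$dir" ] || continue
      ( cd "$dir" && $TF_CMD init && $TF_CMD apply -auto-approve ) || return 1
    done
  done
}
```

Keeping the order inside one function makes it easy to bail out on the first failing module instead of applying later modules against half-built dependencies.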
The infrastructure can be destroyed in the following way:
- Step into the home directory of the project and run . destroy-infra.sh - this destroys the AWS infrastructure. Make sure you do not have any instances running on the infrastructure first.
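The "no instances running" precondition can be checked with the AWS CLI before running the destroy script. A minimal sketch, under these assumptions: the helper name `check_no_running_instances` and the `AWS_CMD` override are my own, and the JMESPath query assumes the default describe-instances output shape.

```shell
# Hypothetical pre-destroy check: fail if any EC2 instance is still running.
# AWS_CMD is an illustrative override (defaults to the real aws CLI).
AWS_CMD="${AWS_CMD:-aws}"

check_no_running_instances() {
  count=$($AWS_CMD ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'length(Reservations[].Instances[])' --output text)
  if [ "$count" != "0" ]; then
    echo "still running: $count instance(s); terminate them before destroying" >&2
    return 1
  fi
  echo "no running instances"
}
```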
Prior to provisioning, in order to use S3 storage, the secrets should be stored in Consul:
consul kv put aws/s3a/access_key <ACCESS_KEY>
consul kv put aws/s3a/secret_key <SECRET_KEY>
This way, Terraform will fetch the secrets from Consul. Remember, this is a local Consul, running on the provisioner (the machine you use for provisioning clusters), so if you plan to put Consul on a server, consider additional security measures.
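It is worth verifying that both keys are readable before provisioning, since a missing key only surfaces later as a Terraform error. A small sketch - the helper name `check_s3_secrets` and the `CONSUL_CMD` override are hypothetical, not from the repository:

```shell
# Hypothetical check that both S3 secrets exist in the local Consul KV store.
CONSUL_CMD="${CONSUL_CMD:-consul}"

check_s3_secrets() {
  for key in aws/s3a/access_key aws/s3a/secret_key; do
    # "consul kv get" exits non-zero when the key is absent.
    if ! $CONSUL_CMD kv get "$key" >/dev/null 2>&1; then
      echo "missing Consul secret: $key" >&2
      return 1
    fi
  done
  echo "all S3 secrets present"
}
```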
For every folder in the modules folder, in order from 01 to 05, do the following:
- Load the modules and dependencies:
terraform init
- Show the provisioning plan - what Terraform is planning to create:
terraform plan
- Build the cluster - it is smart to run it with nohup and write the output to a file:
nohup terraform apply -auto-approve > apply.log &
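Since the apply runs detached, it helps to capture the background PID and follow the log. A sketch, assuming the current directory is one of the module folders; the `run_apply` wrapper and the `TF_CMD` override are hypothetical names:

```shell
# Hypothetical wrapper: start terraform apply detached, log to apply.log,
# and print the background PID so it can be waited on or killed later.
TF_CMD="${TF_CMD:-terraform}"

run_apply() {
  nohup $TF_CMD apply -auto-approve > apply.log 2>&1 &
  echo $!
}

# Typical usage (commented out so the sketch is safe to source):
#   pid=$(run_apply)
#   tail -f apply.log    # follow progress; Ctrl-C stops tail, not the apply
#   wait "$pid"          # block until the apply finishes
```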
Alternatively, the infrastructure can be destroyed manually: step into the folders in modules, from 05 down to 01, and in each one run:
terraform destroy -auto-approve
The order is important here, since AWS does not allow removing objects that other objects still depend on.
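The manual teardown can be sketched as a reverse-order loop - presumably what destroy-infra.sh does. The names `destroy_all`, `TF_CMD`, and `MODULES_DIR` are hypothetical, and `TF_CMD` is overridable for a dry run:

```shell
# Hypothetical reverse-order teardown loop: destroy module 05 first and
# 01 last, since AWS rejects deleting a resource that others depend on.
TF_CMD="${TF_CMD:-terraform}"
MODULES_DIR="${MODULES_DIR:-modules}"

destroy_all() {
  for m in 05 04 03 02 01; do
    for dir in "$MODULES_DIR"/"$m"*; do
      [ -d "$dir" ] || continue
      ( cd "$dir" && $TF_CMD destroy -auto-approve ) || return 1
    done
  done
}
```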