- Build a VPC in AWS with two availability zones, each with a public and a private subnet. The private subnets are not reachable from outside but can reach the internet through NAT gateways (see the verification sketch after this list).
- Deploy an EC2 instance whose user data script fully configures a Jenkins installation, complete with a pipeline job. An A record in Route 53 points to the public IP of the Jenkins EC2 instance.
- Build an ECR repository and an EKS cluster that spans availability zones for application HA. The code also deploys a Helm chart to install the NGINX ingress controller, which is how we reach our application.
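After terraform apply, one way to sanity-check the networking is with the AWS CLI. This is a minimal sketch; the commands are standard, but the queries are generic rather than tied to this repo's resource names or tags:

```bash
# List the NAT gateways that came up
aws ec2 describe-nat-gateways --filter Name=state,Values=available \
  --query 'NatGateways[].NatGatewayId'
# Confirm route tables send traffic through a NAT gateway
aws ec2 describe-route-tables \
  --query 'RouteTables[].Routes[?NatGatewayId].[DestinationCidrBlock,NatGatewayId]'
```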
Once the Jenkins instance is configured by the user_data script, you'll find a pipeline already built that performs the following steps (a command-level sketch follows the list):
- Clones the Git repository containing our code
- Builds the Docker image for our application
- Logs into ECR and pushes the image
- Updates the kubeconfig so kubectl can log into our EKS cluster
- Deploys the application to EKS as a Deployment with two replicas, with a Service in front of it and an Ingress pointing to the Service
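At the shell level, those stages boil down to something like the sketch below. REGION, ACCOUNT_ID, CLUSTER_NAME, the repository URL, image name, and manifest path are all illustrative placeholders, not the values baked into the actual job:

```bash
# Placeholder values throughout -- not the pipeline's real names.
git clone https://github.com/REPLACEME/app.git && cd app

# Build the application image
docker build -t app:latest .

# Log into ECR and push the image
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"
docker tag app:latest "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/app:latest"
docker push "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/app:latest"

# Point kubectl at the EKS cluster and deploy
aws eks update-kubeconfig --region "$REGION" --name CLUSTER_NAME
kubectl apply -f k8s/   # Deployment (2 replicas), Service, Ingress
```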
- You must have terraform and awscli installed
- AWS credentials must be configured. In my case I ran aws configure and entered my access key, secret key, and region. A quick sanity check follows this list.
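To confirm both tools are installed and the credentials actually work:

```bash
terraform -version
aws --version
aws sts get-caller-identity   # fails if credentials aren't configured
```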
- Update the SSH key and AWS credential values that show REPLACEME in tf/ec2.yaml with the values for your environment
- Update the Route 53 DNS zone ID in tf/route53.tf to reflect your environment and the FQDN for Jenkins (the zone ID can be looked up as shown below)
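If the zone ID isn't handy, the AWS CLI can look it up; the domain below is just an example:

```bash
# Print the hosted zone ID for a domain
aws route53 list-hosted-zones-by-name --dns-name mwdevops.com \
  --query 'HostedZones[0].Id' --output text
```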
- Navigate to the terraform directory
cd tf
- Initialize Terraform
terraform init
- Apply the code after reviewing the plan that's automatically generated (an explicit plan-then-apply variant is shown below)
terraform apply
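If you prefer to review the plan as an explicit, separate step, the standard Terraform flow works here too:

```bash
terraform plan -out=tfplan   # write the plan to a file and review it
terraform apply tfplan       # apply exactly the plan you reviewed
```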
- SSH to the instance and obtain the admin password
ssh -i SSHKEY ubuntu@jenkins.mwdevops.com
cat /var/lib/jenkins/secrets/initialAdminPassword
- Navigate to the web UI at http://FQDN:8080
- Log into Jenkins with admin and the value from /var/lib/jenkins/secrets/initialAdminPassword
- Build scm-job
- Log into your EKS cluster and run
kubectl get svc
This gives you the load balancer DNS name for your ingress controller. Curling that DNS name with the /test path reaches the application that's been deployed, as shown below.
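For example, assuming the ingress controller was installed into the chart's default ingress-nginx namespace with the default service name (adjust both if your Helm release differs):

```bash
# Grab the ingress controller's load balancer hostname and hit the app
LB=$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl "http://$LB/test"
```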
- To clean up, run the terminate script, first updating it to match the ECR name and region you're using. The ECR repository must be emptied of images before Terraform can destroy it, which is why the script has to be run (a sketch of what it does follows).
cd tf
bash terminate.sh
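For reference, the script's job amounts to something like the sketch below; the repository name and region are placeholders, and the real terminate.sh may differ:

```bash
# Empty the ECR repository (assumes at least one image is present),
# then let Terraform tear everything down.
aws ecr batch-delete-image --repository-name REPLACEME --region REPLACEME \
  --image-ids "$(aws ecr list-images --repository-name REPLACEME \
    --region REPLACEME --query 'imageIds[*]' --output json)"
terraform destroy
```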
The XML job definition I set up in the user data script includes a token that can be used to trigger a build from a GitHub webhook: http://FQDN:8080/job/scmjob/build?token=TOKEN_NAME
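The same trigger can be exercised by hand with curl, using the FQDN and TOKEN_NAME placeholders from above:

```bash
# Trigger the pipeline the same way a GitHub webhook would
curl "http://FQDN:8080/job/scmjob/build?token=TOKEN_NAME"
```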
All of the infrastructure is managed in Terraform.
Setting up monitoring in New Relic for containers running Python involves nesting containers; without a license I didn't attempt this, but I did read the documentation.
- The user data script is not the place for sensitive credentials like AWS access key/secret key pairs. In my research I found a credentials provider plugin for Jenkins (https://plugins.jenkins.io/aws-credentials/), but it's not supported by Configuration as Code (CasC), so I used secret strings instead. In the future I would set up the AWS credentials in the GUI. The SSH key had the same issue. I'd rather do this whole configuration with Ansible than with user_data, but that's beyond the scope here :)