The purpose here is to write Terraform code that provisions the infrastructure shown in the diagram below. It is made up of:
- A VPC that has one private and one public subnet
- Only the public instance has access to the internet. This instance acts as a jump box to the private instance
- The private instance does not have access to the internet. This instance connects to S3 via a VPC Endpoint
- A VPC Endpoint provides private access to S3. Traffic between the private instance and S3:
  - does not go over the open internet
  - does not leave the Amazon network
A VPC Endpoint for S3 is a helpful feature that privately connects your VPC to S3. A private connection from your VPC to AWS services is much more secure than giving your instances internet access or routing their traffic through a NAT device.
The VPC Endpoint for Amazon S3 blog explains how to create an Endpoint via the AWS Management Console.
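As a rough Terraform sketch of the endpoint piece of this design (the resource names, CIDR block, and default region below are illustrative assumptions, not necessarily what this repo uses):

```hcl
# Illustrative sketch only -- names and values are assumptions for the example.
variable "region" {
  default = "eu-west-2"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Route table for the private subnet; note there is no route to an internet gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
}

# Gateway VPC Endpoint for S3.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${var.region}.s3"
  vpc_endpoint_type = "Gateway"
}

# Associating the endpoint with the private route table gives the private
# instance a route to S3 that never traverses the open internet.
resource "aws_vpc_endpoint_route_table_association" "private_s3" {
  vpc_endpoint_id = aws_vpc_endpoint.s3.id
  route_table_id  = aws_route_table.private.id
}
```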
- Terraform v0.12.3 (Code has been tested with this version)
- Create an RSA key (e.g. ~/.ssh/id_rsa) for granting access to the servers that will be provisioned in this repo. Use a tool such as ssh-keygen for this purpose
- Ensure that AWS credentials are available at "~/.aws/credentials" in the following format (the provider sketch after this list shows one way these values can be consumed):
[default]
aws_access_key_id = <KEY>
aws_secret_access_key = <SECRET_KEY>
region = <REGION>
- Ensure that a globally unique S3 bucket exists to be used as the backend for storing the Terraform state file. See the docs here
- Edit values in main.tf as needed, such as the backend bucket name and region:
terraform {
  # It is expected that the bucket, globally unique, already exists
  backend "s3" {
    # you will need a globally unique bucket name
    bucket  = "<BUCKET_NAME>"
    key     = "<KEY>.tfstate"
    region  = "<REGION>"
    encrypt = true
  }
}
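For reference, a minimal sketch of how the credentials file and region from the prerequisites above are typically fed into the AWS provider; the variable names are assumed to mirror variables.tf and the TF_VAR_* exports used later:

```hcl
# Sketch only: variable names are assumptions based on the TF_VAR_* exports below.
variable "region" {}

variable "credentials" {
  default = "~/.aws/credentials"
}

provider "aws" {
  region                  = var.region
  # Newer AWS provider versions use shared_credentials_files (plural) instead.
  shared_credentials_file = var.credentials
}
```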
git clone https://github.com/shazChaudhry/terraform-aws-privateLink.git
cd terraform-aws-privateLink
eval $(ssh-agent)
ssh-add -k ~/.ssh/id_rsa
(See adding an identity key for granting access to servers)
ssh-add -l
(This command should show what keys have been added to the agent)
export TF_INPUT=0
(See Environment Variables link on how to set system variables)
export TF_VAR_producer_public_key=$(ssh-keygen -y -f ~/.ssh/id_rsa)
(This will be required unless producer_public_key in variables.tf is already set)
export TF_VAR_region=eu-west-2
(This value is for London. You may choose your own region instead)
export TF_VAR_credentials=~/.aws/credentials
export TF_VAR_producer_s3_bucket_name=privaelink-202907271837
(This must be a globally unique bucket name)
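The TF_VAR_* exports above populate Terraform input variables declared in variables.tf. As a sketch of how the exported public key is likely consumed (the aws_key_pair resource and its key_name below are assumptions):

```hcl
# Variables assumed to be declared in variables.tf and populated by the exports above.
variable "producer_public_key" {}
variable "producer_s3_bucket_name" {}

# Assumed consumer of the exported public key: a key pair attached to the EC2
# instances so that ~/.ssh/id_rsa can be used for SSH access.
resource "aws_key_pair" "producer" {
  key_name   = "producer-key" # illustrative name
  public_key = var.producer_public_key
}
```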
terraform init
terraform apply -auto-approve
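Output values can also surface the instance IPs directly from the apply step; a sketch, assuming aws_instance resources named producer_public and producer_private (illustrative names, not necessarily those in this repo):

```hcl
# Illustrative outputs; the aws_instance resource names are assumptions.
output "producer_public_ip" {
  value = aws_instance.producer_public.public_ip
}

output "producer_private_ip" {
  value = aws_instance.producer_private.private_ip
}
```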
Once the command above has completed successfully, check the AWS Management Console in your selected region. Make a note of the public and private IPs of the EC2 instances:
ssh -A admin@<PRODUCER_PUBLIC_IP>
(This is the bastion/jump server; you cannot ssh directly to a server in the private subnet. Also, ensure your key is added to the ssh agent)
ssh -A admin@<PRODUCER_PRIVATE_IP>
(This will allow you to ssh to the instance in the private subnet that has a route to S3 via PrivateLink)
export TF_VAR_region=<REGION_SET_ABOVE>
aws s3 ls s3://privaelink-202907271837 --region ${TF_VAR_region}
(The bucket should be empty. From the producer private instance you should be able to get, put, list and delete S3 objects)
touch file.txt
(Create a new blank file that is going to be uploaded to the bucket)
aws s3 cp file.txt s3://privaelink-202907271837/ --region ${TF_VAR_region}
(This command uploads the file created above)
aws s3 ls s3://privaelink-202907271837 --region ${TF_VAR_region}
(This command should list the file that was uploaded previously)
See this reference for common S3 commands: https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html
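The get/put/list/delete access exercised above depends on the IAM permissions attached to the private instance. As an illustrative sketch only (not necessarily the policy this repo applies), a policy document scoped to the demo bucket could look like this:

```hcl
# Illustrative only: a policy document granting the bucket operations used above.
# It would be attached to the private instance via an IAM role / instance profile.
data "aws_iam_policy_document" "producer_s3_access" {
  statement {
    actions = [
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
      "s3:ListBucket",
    ]
    resources = [
      "arn:aws:s3:::privaelink-202907271837",
      "arn:aws:s3:::privaelink-202907271837/*",
    ]
  }
}
```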
terraform destroy -auto-approve
These resources have helped me learn about AWS PrivateLink:
- S3 encryption with KMS Managed Keys
- Import your Custom Key into KMS; Encrypt data with custom key
- AWS Key Management Service. Customer Master Key, Data Key, Envelope Encryption; KnowledgeIndia AWS Tutorials
- Overview of Managing Access
- Gateway VPC Endpoints; includes a good diagram
- Endpoints for Amazon S3
- IAM Policies and Bucket Policies and ACLs blog