AWS Setup for launching Docker Swarm Cluster on 3 nodes (1 master, 2 workers)
Quick Demo
- Create an IAM user and get `ACCESS KEY` & `SECRET ACCESS KEY` on the AWS Console
- Hit `aws configure` and add `ACCESS KEY` & `SECRET ACCESS KEY`
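For reference, Terraform's AWS provider reads these credentials from `~/.aws/credentials`, which `aws configure` writes. A minimal provider block (the one in this repo may differ) would look like:

```hcl
# Minimal sketch; the region and profile values are illustrative, not taken from this repo.
provider "aws" {
  region  = "us-east-1"   # overridden via variables.tf in this setup
  profile = "default"     # the profile written by `aws configure`
}
```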
- Change the region and availability zone in the `variables.tf` file if you wish to launch the setup in another region. Currently it defaults to `us-east-1`.
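The exact variable names in `variables.tf` may differ; a region/availability-zone pair typically looks something like this:

```hcl
# Illustrative only; check variables.tf for the actual variable names and defaults.
variable "region" {
  default = "us-east-1"
}

variable "availability_zone" {
  default = "us-east-1a"
}
```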
- Run the init script, which creates an S3 bucket for storing the Terraform remote state. Change the bucket name in the setup script if needed:
```
./init_aws.sh
```
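The bucket the script provisions is then consumed by an S3 backend block; a sketch of such a block, with placeholder names that must match whatever bucket you configured, is:

```hcl
# Sketch of an S3 backend; bucket and key names here are placeholders.
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"  # must match the bucket created by init_aws.sh
    key    = "swarm/terraform.tfstate"
    region = "us-east-1"
  }
}
```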
- Launch the global resources, which contain the SSH key. Change the key path in `ssh_key.tf`:
```
cd global
terraform apply
```
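The key resource presumably looks something like the sketch below; the resource name and key path are illustrative, only the public key path needs to point at your own key:

```hcl
# Illustrative key pair resource; names and paths are not taken from this repo.
resource "aws_key_pair" "swarm" {
  key_name   = "swarm-key"
  public_key = file("~/.ssh/id_rsa.pub")  # change this path in ssh_key.tf
}
```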
- Launch the VPC. Change `variables.tf` accordingly:
```
cd vpc
terraform apply
```
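The typical values to review before applying are the CIDR ranges; a minimal sketch (the actual variable names in `variables.tf` will differ) is:

```hcl
# Illustrative VPC/subnet sizing; adjust to your own address plan.
variable "vpc_cidr" {
  default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
  default = "10.0.1.0/24"
}
```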
- Launch the nodes. Change `variables.tf` accordingly:
```
cd nodes
terraform apply
```
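The `manager_ip` and `controller_ip` values described below are Terraform outputs; outputs of roughly this shape (the instance resource names here are hypothetical) are what surface them after `terraform apply`:

```hcl
# Hypothetical output wiring; the instance resource names in nodes/ may differ.
output "manager_ip" {
  value = aws_instance.manager.public_ip
}

output "controller_ip" {
  value = aws_instance.controller.public_ip
}
```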
`manager_ip`
- It's the IP of the manager node, which belongs to the swarm launched on boot-up of the nodes.
- Services launched via the Controller UI can be accessed at `manager_ip:port_specified`.
`controller_ip`
- The controller runs Portainer on port 9000, which is a UI over the Docker Engine.
- Hit `controller_ip:9000` and log in.
- Enter `manager_ip:2375` when asked for the Docker endpoint on login.
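For Portainer to reach that endpoint, the manager's security group has to allow inbound traffic on 2375 (and the controller's on 9000). A hedged sketch of such a rule, not taken from this repo and deliberately restricted to the VPC since 2375 is the unencrypted Docker API, is:

```hcl
# Illustrative ingress rule only; the repo's actual security groups may be stricter or broader.
resource "aws_security_group_rule" "docker_api" {
  type              = "ingress"
  from_port         = 2375
  to_port           = 2375
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"]               # keep the API inside the VPC, not the internet
  security_group_id = aws_security_group.manager.id # hypothetical security group reference
}
```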