This repo provides a demo, non-production app that works with HashiCorp Vault.
To deploy it, edit the variables to suit your environment and account details, then follow these steps:
- Clone this repo
- Run terraform init
- Run terraform apply
Once the apply is complete, connect to your Kubernetes environment via your cloud shell and verify the pods are up using kubectl get pods.
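A minimal sketch of the flow above, assuming you run the Terraform CLI locally; the repo URL and directory are placeholders:

```
git clone <this-repo-url>   # placeholder for this repo's URL
cd <repo-directory>         # placeholder for the cloned directory
terraform init              # download providers and modules
terraform apply             # review the plan, then confirm with "yes"
kubectl get pods            # verify the pods once your kubeconfig points at the new cluster
```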
Now:
- Clone this repo into your shell: https://github.com/dawright22/app_stack.git
- Change into the cloned directory and run ./full_stack_deploy.sh, as sketched below
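The sequence in your shell looks roughly like this:

```
git clone https://github.com/dawright22/app_stack.git   # demo application stack
cd app_stack
./full_stack_deploy.sh                                    # run the stack's deploy script
```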
If you are new to Terraform Cloud (TFC), complete this tutorial: https://learn.hashicorp.com/collections/terraform/cloud-get-started
- Fork this repo
- Set up your TFC account and organization
- Create a new workspace and select the "Version control workflow"
- Choose the repo you forked
- Update the variables
- Queue and run the plan
- Once the apply is complete, connect to your Kubernetes environment via your cloud shell and verify the pods are up using kubectl get pods
Retrieve and update the variables in TFC
Review variables.tf for the available inputs
Note: If you change the deployment region and intend to use the Cloud Shell to access the Kubernetes cluster later on, be aware that Cloud Shell is only available in selected regions: https://docs.aws.amazon.com/cloudshell/latest/userguide/faq-list.html#regions-available
Add Environment Variables
Retrieve your keys here: https://console.aws.amazon.com/iam/home?#security_credential
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
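In TFC these are set as environment variables on the workspace's Variables page. If you are instead running Terraform locally, a sketch of exporting the same credentials in your shell (the values below are placeholders):

```
export AWS_ACCESS_KEY_ID=<your-access-key-id>            # placeholder
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>    # placeholder
```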
Connecting to Cloud K8s Environment
Navigate to Elastic Kubernetes Service (EKS), select your newly created cluster, click the Connect button, and connect via the Cloud Shell.
Note: If you do not know your cluster name, refer to TFC's workspace run log.
- Install kubectl (follow the Linux steps): https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
- Run aws eks --region <region> update-kubeconfig --name <cluster_name> to configure your kubeconfig (see the example below)
- Run kubectl get pods and see that Terraform has used Helm to install Vault in the cluster
- Clone this repo into your shell: git clone https://github.com/dawright22/app_stack.git
- cd into the app_stack directory and run ./full_stack_deploy.sh
- Running kubectl get svc will show the IP address to connect to for both the demo application and the Vault UI
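For example, with placeholder values (take the actual region and cluster name from the TFC run log):

```
aws eks --region us-east-1 update-kubeconfig --name my-demo-cluster   # region and cluster name are examples only
kubectl get pods                                                      # Vault pods should be Running
git clone https://github.com/dawright22/app_stack.git
cd app_stack
./full_stack_deploy.sh
kubectl get svc                                                       # note the EXTERNAL-IP column for the Vault and demo app services
```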
A standalone Vault instance, either OSS (default) or Enterprise, demonstrates dynamic user credentials and transit data encryption as a service.
You can connect to the Vault UI and see the enabled secrets engines at http://<EXTERNAL_IP>:8200
Steps to retrieve the external IP:
- Run kubectl get svc in the AWS Cloud Shell
- Connect to the Vault UI and see the enabled secrets engines at http://<EXTERNAL_IP>:8200
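If you prefer to script it, a sketch using kubectl's jsonpath output; the service name is an assumption, so check the actual name in the kubectl get svc output (on AWS, LoadBalancer services usually expose a DNS hostname rather than a raw IP):

```
VAULT_HOST=$(kubectl get svc <vault-service-name> -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://${VAULT_HOST}:8200"
```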
You will need to log in using the root token from the init.json file located at app_stack/vault/init.json to authenticate.
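If jq is available in your shell, one way to pull the root token out of that file (this assumes the standard root_token field produced by vault operator init -format=json):

```
jq -r '.root_token' app_stack/vault/init.json
```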
Execute kubectl get svc transit-app to see the IP address to connect to.
You can connect to the app UI and add or change records at http://<EXTERNAL_IP>:5000
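A quick way to build the app URL from that service (again assuming the load balancer exposes a hostname):

```
APP_HOST=$(kubectl get svc transit-app -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://${APP_HOST}:5000"
```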
To clean up:
- Using the AWS Cloud Shell, run ./cleanup.sh in the app_stack repo
- Using TFC, go to Settings > Destruction and Deletion > Queue destroy plan
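For example, from the AWS Cloud Shell (the TFC destroy is queued from the workspace UI instead):

```
cd app_stack
./cleanup.sh   # run the cleanup script from the app_stack repo
```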