Application Configuration and Secrets

Separating application code from configuration is one of the principles of building a twelve-factor application. It allows the same application to be deployed across multiple environments, such as dev, test, staging, and production, and makes the application more portable.

Kubernetes has native constructs, ConfigMap and Secret, that allow you to decouple configuration artifacts from image content and keep containerized applications portable. In addition, external services such as AWS Parameter Store or HashiCorp Vault can be used to store that information as well.

This chapter will cover how these constructs and services can be used to store configuration information and secrets.

  1. A ConfigMap is a set of key-value pairs. It allows you to decouple configuration artifacts from image content.

  2. A Secret separates sensitive information, such as credentials and keys, from an application.

A ConfigMap is similar to a Secret, but provides a means of working with strings that don’t contain sensitive information.

Make sure you change to the config-secrets directory before running any commands in this chapter.

Prerequisites

This chapter uses a cluster with 3 master nodes and 5 worker nodes as described here: multi-master, multi-node gossip-based cluster.

All configuration files for this chapter are in the config-secrets directory.

Configuration data using Kubernetes ConfigMap

This section will explain how to:

  1. Pass configuration information to a Pod

  2. Define environment variables in a Pod using a ConfigMap

Create a ConfigMap object

Create a ConfigMap:

$ kubectl apply -f ./templates/redis-configmap.yaml
configmap "redis-config" created

redis-configmap.yaml is a standard resource configuration file. It defines the configuration information as:

data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru

The configuration data is stored under the top-level data key. redis-config is an attribute inside this key, and its value defines the configuration information for the Redis pod as key-value pairs.

Get the list of ConfigMaps:

$ kubectl get configmap
NAME           DATA      AGE
redis-config   1         14s

Get more details about the created ConfigMap:

$ kubectl get configmap/redis-config -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    redis-config: |
      maxmemory 2mb
      maxmemory-policy allkeys-lru
  kind: ConfigMap
  metadata:
    creationTimestamp: 2017-10-22T18:38:27Z
    labels:
      k8s-app: redis
    name: redis-config
    namespace: default
    resourceVersion: "302238"
    selfLink: /api/v1/namespaces/default/configmaps/redis-config
    uid: 316309d0-b758-11e7-8c3f-06329c8974cc
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The configuration information is shown as key/value pairs in the data key.

Alternative ways to create ConfigMap

We created a ConfigMap using a resource configuration file. Other ways to create a ConfigMap are listed below:

Note
These ConfigMaps use the exact same name as the one previously created. If you would like to try the commands, you either need to give a different name to the ConfigMap or delete the previously created one using the command kubectl delete -f ./templates/redis-configmap.yaml.
  1. kubectl create configmap <name> --from-literal=<key>=<value>. Multiple --from-literal=<key>=<value> options can be used to define different key/value pairs. For example:

    $ kubectl create configmap redis-config --from-literal=maxmemory=2mb --from-literal=maxmemory-policy=allkeys-lru
    configmap "redis-config" created

    More details about the ConfigMap can be obtained as:

    $ kubectl get configmap/redis-config -o yaml
    apiVersion: v1
    data:
      maxmemory: 2mb
      maxmemory-policy: allkeys-lru
    kind: ConfigMap
    metadata:
      creationTimestamp: 2017-10-22T15:29:31Z
      name: redis-config
      namespace: default
      resourceVersion: "287452"
      selfLink: /api/v1/namespaces/default/configmaps/redis-config
      uid: cccf20b7-b73d-11e7-8c3f-06329c8974cc
  2. kubectl create configmap redis-config --from-file=<properties file> where <properties file> is a property file with key/value pairs. For example, templates/redis-config looks like:

    maxmemory 2mb
    maxmemory-policy allkeys-lru

    And now the ConfigMap can be created as:

    $ kubectl create configmap redis-config --from-file=templates/redis-config
    configmap "redis-config" created

    More details about the ConfigMap can be obtained as:

    $ kubectl get configmap/redis-config -o yaml
    apiVersion: v1
    data:
      redis-config: |
        maxmemory 2mb
        maxmemory-policy allkeys-lru
    kind: ConfigMap
    metadata:
      creationTimestamp: 2017-10-22T15:56:08Z
      name: redis-config
      namespace: default
      resourceVersion: "289533"
      selfLink: /api/v1/namespaces/default/configmaps/redis-config
      uid: 84901162-b741-11e7-8c3f-06329c8974cc

    The filename becomes a key stored in the data section of the ConfigMap. The file contents become the key’s value.

At the end of this section, you’ll have created a ConfigMap redis-config.

Consume in a pod volume

A ConfigMap must be created before it is referenced in a Pod specification (unless you mark the ConfigMap as “optional”). If you reference a ConfigMap that doesn’t exist, the Pod won’t start.

Let’s use the redis-config ConfigMap to create the redis.conf configuration file in the pod redis-pod. The pod maps the ConfigMap to the volume where the configuration resides:

$ kubectl apply -f ./templates/redis-pod.yaml
pod "redis-pod" created

Wait for the pod to run:

$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
redis-pod   1/1       Running   0          12m

Check logs from the pod to verify that Redis has started:

$ kubectl logs redis-pod
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.8.19 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 6
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'
[6] 22 Oct 18:39:45.386 # Server started, Redis version 2.8.19
[6] 22 Oct 18:39:45.386 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
[6] 22 Oct 18:39:45.386 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[6] 22 Oct 18:39:45.386 * The server is now ready to accept connections on port 6379

Validate that the Redis pod picked up the appropriate configuration:

$ kubectl exec redis-pod -it redis-cli
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "2097152"
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
127.0.0.1:6379> quit

You should see the same values that were specified in ./templates/redis-configmap.yaml in the output of the above commands (maxmemory is reported in bytes: 2mb = 2097152).

Now, changing the pod configuration would involve the following steps:

  1. Edit redis-configmap.yaml

  2. Update the ConfigMap using the command: kubectl apply -f templates/redis-configmap.yaml

  3. Wrap the pod in a Deployment (a sketch is shown after this list)

  4. Terminate the pod; the Deployment will restart it and pick up the new configuration
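
A hypothetical Deployment wrapping the same Redis container is sketched below; the names and labels are illustrative (on clusters older than Kubernetes 1.9, the apiVersion would be apps/v1beta1):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        command: ["redis-server", "/redis-master/redis.conf"]
        volumeMounts:
        - name: config
          mountPath: /redis-master
      volumes:
      - name: config
        configMap:
          name: redis-config
          items:
          - key: redis-config
            path: redis.conf

With this in place, deleting the pod with kubectl delete pod -l app=redis causes the Deployment to create a replacement that reads the updated ConfigMap.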

Consume as pod environment variables

The data from a ConfigMap can be used to initialize environment variables in a pod. We’ll use the arungupta/print-hello image to print “Hello World” on the console. The number of times this message is printed is defined by the environment variable COUNT. The value of this variable is defined in the ConfigMap.

Create a pod and use ConfigMap

  1. Create a ConfigMap:

    $ kubectl create configmap hello-config --from-literal=COUNT=2
    configmap "hello-config" created
  2. Get more details about this ConfigMap:

    $ kubectl get configmap/hello-config -o yaml
    apiVersion: v1
    data:
      COUNT: "2"
    kind: ConfigMap
    metadata:
      creationTimestamp: 2017-10-26T21:40:10Z
      name: hello-config
      namespace: default
      resourceVersion: "92516"
      selfLink: /api/v1/namespaces/default/configmaps/hello-config
      uid: 3dacb22f-ba96-11e7-ab9c-123f969a2ce2
  3. Use this ConfigMap to create a pod:

    $ kubectl apply -f templates/app-pod.yaml
    pod "app-pod" created

    The pod configuration file looks like:

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        name: app-pod
      name: app-pod
    spec:
      containers:
      - name: app
        image: arungupta/print-hello:latest
        env:
        - name: COUNT
          valueFrom:
            configMapKeyRef:
              name: hello-config
              key: COUNT
        ports:
        - containerPort: 8080
  4. Observe logs from the pod:

    $ kubectl logs -f app-pod
    npm info it worked if it ends with ok
    npm info using npm@3.10.10
    npm info using node@v6.11.4
    npm info lifecycle webapp@1.0.0~prestart: webapp@1.0.0
    npm info lifecycle webapp@1.0.0~start: webapp@1.0.0
    > webapp@1.0.0 start /usr/src/app
    > node server.js
    Running on http://0.0.0.0:8080
  5. In a new terminal, expose the pod as a Service:

    $ kubectl expose pod app-pod --port=80 --target-port=8080 --name=app
    service "app" exposed
  6. Start Kubernetes proxy:

    kubectl proxy
  7. In a new terminal, access the service as:

    $ curl http://localhost:8001/api/v1/proxy/namespaces/default/services/app/
    printed 2 times

    The pod logs are refreshed as well:

    Hello world 0
    Hello world 1
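
As an aside, if a pod needs every key in a ConfigMap as an environment variable, the envFrom field injects them all at once. A minimal sketch against the same hello-config ConfigMap (the pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod-envfrom
spec:
  containers:
  - name: app
    image: arungupta/print-hello:latest
    # Inject every key/value pair from hello-config as an environment variable
    envFrom:
    - configMapRef:
        name: hello-config
    ports:
    - containerPort: 8080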

Change the ConfigMap and verify pod logs

  1. Edit the ConfigMap:

    $ kubectl edit configmap/hello-config
  2. Change the value of COUNT to 4

  3. Terminate the pod:

    $ kubectl delete pod/app-pod
    pod "app-pod" deleted
  4. Run the pod again:

    kubectl create -f templates/app-pod.yaml
    pod "app-pod" created
  5. Access the service again:

    curl http://localhost:8001/api/v1/proxy/namespaces/default/services/app/
    printed 4 times
  6. Logs from the pod are refreshed:

    Hello world 0
    Hello world 1
    Hello world 2
    Hello world 3

Secrets using Kubernetes Secrets

In this section we will demonstrate how to store secrets in the Kubernetes cluster and then show multiple ways of retrieving those secrets from within a pod.

Create secrets

First, encode the secrets you want to apply. For this example, we will use the username admin and the password password:

echo -n "admin" | base64
echo -n "password" | base64

Both of these values are already written in the file ./templates/secret.yaml. The configuration looks like:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=

You can now insert this secret in the Kubernetes cluster with the following command:

kubectl apply -f ./templates/secret.yaml
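
Alternatively, kubectl can create the same Secret directly from literal values and handle the base64 encoding for you:

kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=password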

The list of created secrets can be seen as:

$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-4cqsx   kubernetes.io/service-account-token   3         8h
mysecret              Opaque                                2         6s

The type of the secret is shown as Opaque; the values themselves are not displayed.

Get more details about the secret:

$ kubectl describe secrets/mysecret
Name:         mysecret
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
password:  8 bytes
username:  5 bytes

Once again, the values of the secret are not shown.

Consume in a pod volume

Deploy the pod:

kubectl apply -f ./templates/pod-secret-volume.yaml

The pod configuration file looks like:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-volume
spec:
  containers:
  - name: pod-secret-volume
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret

Open a shell to the pod to see the secrets:

kubectl exec -it pod-secret-volume /bin/bash
ls /etc/foo
cat /etc/foo/username ; echo
cat /etc/foo/password ; echo

The above commands should print the plain-text values; the decoding is done for you.
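
The values can also be decoded directly from the API server, without starting a pod (use base64 -D on macOS):

kubectl get secret mysecret -o jsonpath="{.data.username}" | base64 -d
kubectl get secret mysecret -o jsonpath="{.data.password}" | base64 -d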

Delete the pod:

kubectl delete -f ./templates/pod-secret-volume.yaml

Consume as pod environment variables

Deploy the pod:

kubectl apply -f ./templates/pod-secret-env.yaml

The pod configuration file looks like:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env
spec:
  containers:
  - name: pod-secret-env
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password
  restartPolicy: Never

Open a shell to the pod to see the secrets:

kubectl exec -it pod-secret-env /bin/bash
echo $SECRET_USERNAME
echo $SECRET_PASSWORD

The above commands illustrate how to see the secret values via environment variables.

Configuration data and Secrets using AWS Parameter Store

Amazon EC2 Systems Manager eases the configuration and management of Amazon EC2 instances and associated resources. One of its features, Parameter Store, provides a centralized location to store, control access to, and easily reference your configuration data, whether plain-text data such as database strings or secrets such as passwords, encrypted through AWS Key Management Service (KMS).

KMS helps you encrypt your sensitive information and protect the security of your keys. Additionally, all calls to Parameter Store are recorded with AWS CloudTrail so that they can be audited. Access to each Parameter Store secret can be scoped with IAM.

Parameter Store allows three types of configuration data to be stored:

  • String

  • List of strings

  • Secure string

This section will show how to create a secure string using AWS CLI and access it in a Pod.

Create KMS Key

  1. Create a new encryption key: https://console.aws.amazon.com/iam/home#/encryptionKeys/

  2. Click on Create key. If you haven’t used the KMS service before, click Get Started.

  3. Specify the Alias as k8s-key

    Click on Next Step.

  4. Take the defaults for Add Tags and click on Next Step.

  5. Select the IAM user(s) and roles that can administer this key through the KMS API

  6. Select the IAM user(s) and roles that can use this key to encrypt and decrypt data from within applications. We’ll use the IAM role that is assigned to the worker nodes in the Kubernetes cluster created by kops.

  7. Preview key policy:

    {
      "Id": "key-consolepolicy-3",
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Enable IAM User Permissions",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::<account-id>:root"
            ]
          },
          "Action": "kms:*",
          "Resource": "*"
        },
        {
          "Sid": "Allow access for Key Administrators",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::<account-id>:user/arun",
              "arn:aws:iam::<account-id>:role/nodes.example.cluster.k8s.local"
            ]
          },
          "Action": [
            "kms:Create*",
            "kms:Describe*",
            "kms:Enable*",
            "kms:List*",
            "kms:Put*",
            "kms:Update*",
            "kms:Revoke*",
            "kms:Disable*",
            "kms:Get*",
            "kms:Delete*",
            "kms:TagResource",
            "kms:UntagResource",
            "kms:ScheduleKeyDeletion",
            "kms:CancelKeyDeletion"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Allow use of the key",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::<account-id>:role/nodes.example.cluster.k8s.local"
            ]
          },
          "Action": [
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey*",
            "kms:DescribeKey"
          ],
          "Resource": "*"
        },
        {
          "Sid": "Allow attachment of persistent resources",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::<account-id>:role/nodes.example.cluster.k8s.local"
            ]
          },
          "Action": [
            "kms:CreateGrant",
            "kms:ListGrants",
            "kms:RevokeGrant"
          ],
          "Resource": "*",
          "Condition": {
            "Bool": {
              "kms:GrantIsForAWSResource": true
            }
          }
        }
      ]
    }
  8. Click on Finish.

  9. Select IAM, Encryption Keys, k8s-key and copy the ARN of the key.
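
If you prefer the CLI over the console, a sketch of the equivalent commands (the key policy and usage permissions from the steps above still need to be attached separately):

# Create the KMS key and capture its key id
KEY_ID=$(aws kms create-key --description "k8s workshop key" \
  --query KeyMetadata.KeyId --output text)
# Create the alias used in this chapter
aws kms create-alias --alias-name alias/k8s-key --target-key-id $KEY_ID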

Update the IAM role

For a Kubernetes cluster created by kops, EC2 worker nodes use an instance profile to allow the EC2 instances to access other AWS services. This role must be updated to allow the worker nodes to read the secrets from Parameter Store.

In the IAM Console, click Roles and type nodes into the search box. Find the nodes.example.cluster.k8s.local role and click it. In the Permissions tab, expand the inline policy for nodes.example.cluster.k8s.local and click Edit policy. Add the ssm:GetParameter permission so the policy looks similar to the one below.

{
    "Version": "2012-10-17",
    "Statement": [
        .
        .
        .
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:<account-id>:parameter/GREETING",
                "arn:aws:ssm:us-east-1:<account-id>:parameter/NAME"
            ]
        }
    ]
}

Create secrets

Only the value of the secure string parameter is encrypted. The name of the parameter, description, and other properties are not encrypted.

  1. A secret in AWS Parameter Store is created as a secure string. Create a secure string:

    $ aws ssm put-parameter \
      --name GREETING \
      --value Hello \
      --type SecureString \
      --key-id arn:aws:kms:us-east-1:<account-id>:key/414a963b-7fe4-4a61-b19f-ea408b9bda3b
    {
        "Version": 1
    }

    This will create a secret in the Parameter Store using the KMS key.

  2. Get the value of the created secret:

    $ aws ssm get-parameter --name GREETING
    {
        "Parameter": {
            "Version": 1,
            "Type": "SecureString",
            "Name": "GREETING",
            "Value": "AQICAHghFIWYznvdUrX6qDhd5xLFHpoaQ5WL1EaHqsbkenfFEwHdqTpU8URwKMf2H9XmMyMgAAAAYzBhBgkqhkiG9w0BBwagVDBSAgEAME0GCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM0jZaUadELhmiCzj4AgEQgCBXVAZzfjac8P2AFrelnLaXb3z7ssZt2q/npxYAdJ9ABQ=="
        }
    }

    By default, the encrypted value of the secret is shown in the output.

  3. The decrypted value of the secret can be obtained with the --with-decryption flag:

    $ aws ssm get-parameter --name GREETING --with-decryption
    {
        "Parameter": {
            "Version": 1,
            "Type": "SecureString",
            "Name": "GREETING",
            "Value": "Hello"
        }
    }
  4. Create another secret:

    $ aws ssm put-parameter \
      --name NAME \
      --value World \
      --type SecureString \
      --key-id arn:aws:kms:us-east-1:<account-id>:key/414a963b-7fe4-4a61-b19f-ea408b9bda3b
    {
        "Version": 1
    }

    These two secrets will be consumed in the Pod.

Consume secrets in a Pod

The directory images/parameter-store-kubernetes contains a Java application that reads secrets from AWS Parameter Store. This application is packaged as a container image and deployed in the cluster as a Pod.

The Pod configuration is shown:

apiVersion: v1
kind: Pod
metadata:
  name: pod-parameter-store
spec:
  containers:
  - name: pod-parameter-store
    image: arungupta/parameter-store-kubernetes:latest
  restartPolicy: Never

Create the Pod:

$ kubectl apply -f templates/pod-parameter-store.yaml
pod "pod-parameter-store" configured

Check the logs of the Pod:

$ kubectl logs pod-parameter-store
parameter store: HelloWorld

This shows that the Java application has been able to read both the NAME and GREETING secrets from AWS Parameter Store.
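
The lookup performed by the application is equivalent to the following CLI calls, which the worker node’s instance profile authorizes:

aws ssm get-parameter --name GREETING --with-decryption --query Parameter.Value --output text
aws ssm get-parameter --name NAME --with-decryption --query Parameter.Value --output text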

Secrets using Vault

HashiCorp Vault is a tool for managing secrets. It secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets.

This section explains how to install and configure Vault on AWS, store secrets, and access them in a Pod. The instructions are inspired by https://github.com/briankassouf/vault-kubernetes-demo.

Create EC2 instance

We need an EC2 instance to host the Vault server. This server needs to be accessible from the Kubernetes cluster.

  1. Create an EC2 instance with a Linux flavor, for example m4.large with Amazon Linux.

    1. Configure the security group to allow inbound traffic on port 8200 (the dev server does not use TLS; additional configuration is required for TLS)

    2. SSH into the machine:

      ssh -i ~/.ssh/arun-us-east1.pem ec2-user@ec2-54-237-223-40.compute-1.amazonaws.com
  2. Note down the private IP address of the instance from the EC2 console. This is needed to start our Vault server.

Start Vault Server on EC2

  1. Download Vault server:

    wget https://releases.hashicorp.com/vault/0.9.0/vault_0.9.0_linux_amd64.zip
  2. Unzip Vault: unzip vault_0.9.0_linux_amd64.zip

  3. Start Vault server:

    [ec2-user@ip-172-31-26-180 ~]$ ./vault server -dev-listen-address=ip-172-31-26-180.ec2.internal:8200 -dev &
    [1] 26687
    [ec2-user@ip-172-31-26-180 ~]$ ==> Vault server configuration:
    
                         Cgo: disabled
             Cluster Address: https://ip-172-31-26-180.ec2.internal:8201
                  Listener 1: tcp (addr: "ip-172-31-26-180.ec2.internal:8200", cluster address: "172.31.26.180:8201", tls: "disabled")
                   Log Level: info
                       Mlock: supported: true, enabled: false
            Redirect Address: http://ip-172-31-26-180.ec2.internal:8200
                     Storage: inmem
                     Version: Vault v0.9.0
                 Version Sha: bdac1854478538052ba5b7ec9a9ec688d35a3335
    
    ==> WARNING: Dev mode is enabled!
    
    In this mode, Vault is completely in-memory and unsealed.
    Vault is configured to only have a single unseal key. The root
    token has already been authenticated with the CLI, so you can
    immediately begin using the Vault CLI.
    
    The only step you need to take is to set the following
    environment variables:
    
        export VAULT_ADDR='http://ip-172-31-26-180.ec2.internal:8200'
    
    The unseal key and root token are reproduced below in case you
    want to seal/unseal the Vault or play with authentication.
    
    Unseal Key: ZBfexpmasu0r4iba+t8tTlm4L5FQJ+JagglEhbfpxkU=
    Root Token: 4e93b3c6-c459-f166-e7e9-6c48044cfdb6
    
    ==> Vault server started! Log data will stream in below:
    
    2017/11/20 03:34:06.457231 [INFO ] core: security barrier not initialized
    2017/11/20 03:34:06.457349 [INFO ] core: security barrier initialized: shares=1 threshold=1
    2017/11/20 03:34:06.457475 [INFO ] core: post-unseal setup starting
    2017/11/20 03:34:06.470532 [INFO ] core: loaded wrapping token key
    2017/11/20 03:34:06.470542 [INFO ] core: successfully setup plugin catalog: plugin-directory=
    2017/11/20 03:34:06.471226 [INFO ] core: successfully mounted backend: type=kv path=secret/
    2017/11/20 03:34:06.471239 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
    2017/11/20 03:34:06.471348 [INFO ] core: successfully mounted backend: type=system path=sys/
    2017/11/20 03:34:06.471530 [INFO ] core: successfully mounted backend: type=identity path=identity/
    2017/11/20 03:34:06.475065 [INFO ] expiration: restoring leases
    2017/11/20 03:34:06.475241 [INFO ] rollback: starting rollback manager
    2017/11/20 03:34:06.475583 [INFO ] expiration: lease restore complete
    2017/11/20 03:34:06.475583 [INFO ] identity: entities restored
    2017/11/20 03:34:06.475628 [INFO ] identity: groups restored
    2017/11/20 03:34:06.475641 [INFO ] core: post-unseal setup complete
    2017/11/20 03:34:06.475778 [INFO ] core: root token generated
    2017/11/20 03:34:06.475782 [INFO ] core: pre-seal teardown starting
    2017/11/20 03:34:06.475783 [INFO ] core: cluster listeners not running
    2017/11/20 03:34:06.475790 [INFO ] rollback: stopping rollback manager
    2017/11/20 03:34:06.475848 [INFO ] core: pre-seal teardown complete
    2017/11/20 03:34:06.475905 [INFO ] core: vault is unsealed
    2017/11/20 03:34:06.475919 [INFO ] core: post-unseal setup starting
    2017/11/20 03:34:06.475965 [INFO ] core: loaded wrapping token key
    2017/11/20 03:34:06.475967 [INFO ] core: successfully setup plugin catalog: plugin-directory=
    2017/11/20 03:34:06.476108 [INFO ] core: successfully mounted backend: type=kv path=secret/
    2017/11/20 03:34:06.476186 [INFO ] core: successfully mounted backend: type=system path=sys/
    2017/11/20 03:34:06.476318 [INFO ] core: successfully mounted backend: type=identity path=identity/
    2017/11/20 03:34:06.476328 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
    2017/11/20 03:34:06.476889 [INFO ] expiration: restoring leases
    2017/11/20 03:34:06.476945 [INFO ] rollback: starting rollback manager
    2017/11/20 03:34:06.477008 [INFO ] identity: entities restored
    2017/11/20 03:34:06.477015 [INFO ] identity: groups restored
    2017/11/20 03:34:06.477022 [INFO ] core: post-unseal setup complete
    2017/11/20 03:34:06.477105 [INFO ] expiration: lease restore complete
  4. Run the command to configure Vault CLI to identify the server:

    export VAULT_ADDR='http://ip-172-31-26-180.ec2.internal:8200'
  5. Check status:

    [ec2-user@ip-172-31-26-180 ~]$ ./vault status
    Type: shamir
    Sealed: false
    Key Shares: 1
    Key Threshold: 1
    Unseal Progress: 0
    Unseal Nonce:
    Version: 0.9.0
    Cluster Name: vault-cluster-01043c83
    Cluster ID: 89ccbeb4-8af1-7dca-77bb-38f39c423a39
    
    High-Availability Enabled: false

Configure Vault CLI on your local machine

  1. Download Vault (the Linux build is shown; pick the build for your operating system from https://releases.hashicorp.com/vault/):

    wget https://releases.hashicorp.com/vault/0.9.0/vault_0.9.0_linux_amd64.zip
  2. Unzip Vault: unzip vault_0.9.0_linux_amd64.zip

  3. Find the public DNS name of the EC2 instance and set up an environment variable:

    export VAULT_ADDR='http://<public-dns-name>:8200'

    For example, this command will look like:

    export VAULT_ADDR='http://ec2-54-237-223-40.compute-1.amazonaws.com:8200'
  4. Verify the status can be seen from your local machine:

    $ vault status
    Type: shamir
    Sealed: false
    Key Shares: 1
    Key Threshold: 1
    Unseal Progress: 0
    Unseal Nonce:
    Version: 0.9.0
    Cluster Name: vault-cluster-01043c83
    Cluster ID: 89ccbeb4-8af1-7dca-77bb-38f39c423a39
    
    High-Availability Enabled: false

Configure Kubernetes Service Account

  1. Create the service account used to verify service account tokens during login:

    $ kubectl create -f templates/vault-reviewer.yaml
    serviceaccount "vault-reviewer" created
  2. Create the RBAC role that will be used by the service account to access the TokenReview API:

    $ kubectl apply -f templates/vault-reviewer-rbac.yaml
    clusterrolebinding "role-tokenreview-binding" created
  3. Create a service account that will be used to login to the auth backend:

    $ kubectl create -f templates/vault-auth.yaml
    serviceaccount "vault-auth" created

Configure Kubernetes Auth backend

The service account token, the Kubernetes API server address, and the certificate used to access the API server are needed in order to configure the Kubernetes auth backend. Let’s get these values.

  1. On the local machine, read the service account token (base64 -D is the macOS flag; use base64 -d on Linux):

    kubectl get secret \
    $(kubectl get serviceaccount vault-reviewer -o jsonpath={.secrets[0].name}) \
    -o jsonpath={.data.token} | base64 -D -
    eyJ . . . reg
  2. Get the API server address:

    $ kubectl config view -o jsonpath='{.clusters[*].cluster.server}'
    https://api-example-cluster-k8s-l-1dt7vk-41321592.us-east-1.elb.amazonaws.com https://192.168.99.100:8443

    This is the address of the API servers currently configured. The first one is for the cluster created by kops; the second one is for the minikube server, if it’s running. The first one is relevant for our case.

  3. Extract the certificate

    1. Find the default secret token:

      $ kubectl get secrets | grep default
      default-token-kvjn9          kubernetes.io/service-account-token   3         4d
    2. Use the default token name to extract the certificate:

      $ kubectl get secrets default-token-kvjn9 -o jsonpath="{.data['ca\.crt']}" | base64 -D > ~/.kube/kops.crt
  4. Now that all the required values are available, configure the Kubernetes auth backend.

    1. Mount the Kubernetes auth backend:

      $ vault auth-enable kubernetes
      Successfully enabled 'kubernetes' at 'kubernetes'!
    2. Configure the auth backend:

      $ vault write auth/kubernetes/config \
        token_reviewer_jwt=<service-account-token>  \
        kubernetes_host=<api-server> \
        kubernetes_ca_cert=@~/.kube/kops.crt

      For example, here is how our command will look:

      $ vault write auth/kubernetes/config \
        token_reviewer_jwt=eyJ . . . reg  \
        kubernetes_host=https://api-example-cluster-k8s-l-1dt7vk-41321592.us-east-1.elb.amazonaws.com \
        kubernetes_ca_cert=@~/.kube/kops.crt
      Success! Data written to: auth/kubernetes/config
  5. Create a role with service account name vault-auth in the default namespace:

    $ vault write auth/kubernetes/role/demo \
      bound_service_account_names=vault-auth \
      bound_service_account_namespaces=default \
      policies=kube-auth \
      period=60s
    Success! Data written to: auth/kubernetes/role/demo
  6. Read the role:

    $ vault read auth/kubernetes/role/demo
    Key                               Value
    ---                               -----
    bound_service_account_names       [vault-auth]
    bound_service_account_namespaces  [default]
    max_ttl                           0
    num_uses                          0
    period                            60
    policies                          [kube-auth]
    ttl                               0
  7. Create a policy for this role (a sketch of the policy file is shown after this list):

    $ vault policy-write kube-auth templates/kube-auth.hcl
    Policy 'kube-auth' written.
  8. Write secrets to Vault:

    $ vault write secret/creds GREETING=Hello NAME=World
    Success! Data written to: secret/creds
  9. Check that this value can be read:

    $ vault read -field=GREETING secret/creds
    Hello
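
templates/kube-auth.hcl (referenced in step 7) is not reproduced in this chapter. A minimal sketch of a policy that permits the read in the last step:

path "secret/creds" {
  capabilities = ["read"]
}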

Deploy a Pod using secrets from Vault

Let’s deploy a Pod that reads secrets from the Vault server. Here is the sequence of events that needs to happen:

  • The Pod needs to know the address of the Vault server. This is passed as the VAULT_ADDR environment variable.

  • The Pod reads the Kubernetes service account token

  • The service account token is passed to the Vault server to retrieve a client token

  • The client token is used to authenticate and read secrets from Vault
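
This flow can be sketched with curl against Vault’s HTTP API (assuming jq is available; the image used below performs the equivalent steps internally):

# Inside the pod: read the mounted service account token
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Exchange the service account token for a Vault client token
CLIENT_TOKEN=$(curl -s --request POST \
  --data "{\"jwt\": \"$JWT\", \"role\": \"demo\"}" \
  $VAULT_ADDR/v1/auth/kubernetes/login | jq -r .auth.client_token)
# Read the secret using the client token
curl -s --header "X-Vault-Token: $CLIENT_TOKEN" $VAULT_ADDR/v1/secret/creds | jq .data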

More details about the Docker image used in the Pod are at https://github.com/arun-gupta/vault-kubernetes.

  1. The Pod configuration file looks like:

    apiVersion: v1
    kind: Pod
    metadata:
      name: vault-kubernetes
    spec:
      containers:
      - name: vault-kubernetes
        image: arungupta/vault-kubernetes:latest
        env:
          - name: VAULT_ADDR
            value: http://ec2-54-237-223-40.compute-1.amazonaws.com:8200
      restartPolicy: Never
  2. Deploy the Pod:

    $ kubectl apply -f templates/pod-vault.yaml
    pod "vault-kubernetes" created
  3. Get the list of Pods:

    $ kubectl get pods --show-all
    NAME               READY     STATUS      RESTARTS   AGE
    vault-kubernetes   0/1       Completed   0          20s
  4. Get logs from the completed Pod:

    $ kubectl logs vault-kubernetes
    Connecting to Vault: http://ec2-54-237-223-40.compute-1.amazonaws.com:8200
    vault: HelloWorld