Encrypting secrets at rest and cluster security guide
smarterclayton committed Jun 24, 2017
1 parent 9da5890 commit 765cef6
Showing 2 changed files with 391 additions and 0 deletions.
205 changes: 205 additions & 0 deletions docs/tasks/administer-cluster/encrypt-data.md
@@ -0,0 +1,205 @@
---
title: Encrypting data at rest
redirect_from:
- "/docs/user-guide/federation/"
- "/docs/user-guide/federation/index.html"
- "/docs/concepts/cluster-administration/multiple-clusters/"
- "/docs/concepts/cluster-administration/multiple-clusters.html"
- "/docs/admin/multi-cluster/"
- "/docs/admin/multi-cluster.html"
---

{% capture overview %}
This page shows how to enable and configure encryption of secret data at rest.
{% endcapture %}

{% capture prerequisites %}

* {% include task-tutorial-prereqs.md %}

* Kubernetes version 1.7.0 or later is required

* Encryption at rest is alpha in 1.7.0, which means it may change without notice. Users may be required to decrypt their data prior to upgrading to 1.8.0.

{% endcapture %}

{% capture steps %}

## Configuration and determining whether encryption at rest is already enabled

The `kube-apiserver` process accepts an argument `--experimental-encryption-provider-config`
that controls how API data is encrypted in etcd. An example configuration
is provided below.
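
For example, to check whether encryption is already configured on an existing cluster, you can look for
this flag on the running API server. A minimal sketch, assuming shell access to a master (the static pod
manifest path is an assumption that varies by installation):

```
# Check the running process; no output means the flag is not set.
ps -ef | grep '[k]ube-apiserver' | grep -o 'experimental-encryption-provider-config=[^ ]*'

# Or, if the API server runs as a static pod:
grep experimental-encryption-provider-config /etc/kubernetes/manifests/kube-apiserver.yaml
```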

## Understanding the encryption at rest configuration

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
    - secrets
    providers:
    - identity: {}
    - aesgcm:
        keys:
        - name: key1
          secret: c2VjcmV0IGlzIHNlY3VyZQ==
        - name: key2
          secret: dGhpcyBpcyBwYXNzd29yZA==
    - aescbc:
        keys:
        - name: key1
          secret: c2VjcmV0IGlzIHNlY3VyZQ==
        - name: key2
          secret: dGhpcyBpcyBwYXNzd29yZA==
    - secretbox:
        keys:
        - name: key1
          secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
```

Each item in the `resources` array is a separate, complete configuration. The
`resources.resources` field is an array of Kubernetes resource names (`resource` or `resource.group`)
that should be encrypted. The `providers` array is an ordered list of the possible encryption
providers. Only one provider type may be specified per entry (`identity` or `aescbc` may be provided,
but not both in the same item).

The first provider in the list is used to encrypt resources going into storage. When reading
resources from storage, the providers are tried in order until one can decrypt the data. If no provider
can read the stored data, an error is returned, which prevents clients from accessing that
resource.

IMPORTANT: If any resource is not readable via the encryption config (because keys were changed),
the only recourse is to delete that entry directly from the underlying etcd. Calls that attempt to
read that resource will fail until it is deleted or a valid decryption key is provided.

### Providers:

* `identity` results in the data being written as-is, without encryption.
  * When placed in the first position, resources are written without encryption as new values are stored (effectively decrypting them on write).
* `secretbox` uses XSalsa20 and Poly1305 to store data at rest.
  * It is fast, but a newer standard and may not be considered acceptable in environments that require high levels of review.
  * It requires a 32 byte key.
* `aescbc` uses AES in CBC mode.
  * It is the recommended choice for encryption at rest but may be slightly slower than `secretbox`.
  * It requires 32 byte keys.
* `aesgcm` uses AES in GCM mode with a randomly assigned nonce.
  * This mode is the fastest, but is not recommended for use except when an automated key rotation scheme is implemented.
  * Because these nonces are small, the secret key must be rotated frequently - at most every 200k writes.
  * It supports 16, 24, or 32 byte keys, but 32 byte keys should always be used.

Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider
is the first provider, the first key is used for encryption.

## Encrypting your data

Create a new encryption config file:

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
    - secrets
    providers:
    - aescbc:
        keys:
        - name: key1
          secret: <BASE 64 ENCODED SECRET>
    - identity: {}
```

To create a new secret, generate a 32 byte random key and base64 encode it. On Linux or Mac OS X, the
following command reads 32 bytes of random data and then base64 encodes it:

```
head -c 32 /dev/urandom | base64
```
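
Alternatively, if `openssl` is available, the following produces an equivalent base64 encoded 32 byte key:

```
openssl rand -base64 32
```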

Place that value in the `secret` field of your new config.

Set the `--experimental-encryption-provider-config` flag on the `kube-apiserver` to point to the location
of the config file, and restart your API server.

IMPORTANT: Your config file contains keys that can decrypt the content in etcd, so you must restrict
permissions on your masters so that only the user that runs the `kube-apiserver` can read it.
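
For example, a minimal sketch of pointing the API server at the config and restricting the file, assuming
a hypothetical location of `/etc/kubernetes/encryption-config.yaml` and that `kube-apiserver` runs as `root`:

```
# Flag added to the existing kube-apiserver invocation (path is an example only):
kube-apiserver ... --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml ...

# Restrict the config file so only that user can read it:
chown root:root /etc/kubernetes/encryption-config.yaml
chmod 600 /etc/kubernetes/encryption-config.yaml
```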

## Verifying that data is encrypted

Data is encrypted when written to etcd. After restarting your `kube-apiserver`, any newly created or
updated secret should be encrypted when stored. To check, you can use the `etcdctl` command line
program to retrieve the contents of your secret.

Create a new secret called `secret1` in the `default` namespace:

```
kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
```

Using the `etcdctl` command line, read that secret out of etcd:

```
ETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/secret1 [...] | hexdump -C
```

where `[...]` must be the additional arguments for connecting to the etcd server. The `hexdump` command
formats the encoded bytes into a readable form. Verify the resulting output is prefixed with
`k8s:enc:aescbc:v1:`, which indicates the `aescbc` provider has encrypted the resulting data.
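
For example, on a cluster where etcd requires TLS client certificates, the full command might look like
the following (the endpoint and certificate paths are assumptions that vary by installation):

```
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/client.crt \
  --key=/etc/kubernetes/pki/etcd/client.key \
  get /kubernetes.io/secrets/default/secret1 | hexdump -C
```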

Verify the secret is correctly decrypted when retrieved via the API, and that its contents match what was
provided above (`mykey: mydata`, stored base64 encoded as `bXlkYXRh`):

```
kubectl get secret secret1 -n default -o yaml
```

## Ensuring all secrets are encrypted

Since secrets are encrypted on write, performing an update on a secret encrypts that content.

```
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```

The command above reads all secrets and then performs an update, which encrypts them with the current
configuration. If an error occurs due to a conflicting write, retry the command. For larger clusters,
you may wish to subdivide the secrets by namespace or script an update, as sketched below.
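
For example, a minimal sketch of updating secrets one namespace at a time:

```
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl get secrets -n "$ns" -o json | kubectl replace -f -
done
```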

## Rotating a decryption key

Changing the secret without incurring downtime requires a multi-step operation, especially in
the presence of a highly available deployment where multiple `kube-apiserver` processes are running.

1. Generate a new key and add it as the second key entry for the current provider on all servers (see the sketch after this list)
2. Restart all `kube-apiserver` processes to ensure each server can decrypt using the new key
3. Make the new key the first entry in the `keys` array so that it is used for encryption in the config
4. Restart all `kube-apiserver` processes to ensure each server now encrypts using the new key
5. Run `kubectl get secrets --all-namespaces -o json | kubectl replace -f -` to re-encrypt all existing secrets with the new key
6. Remove the old decryption key from the config after you back up etcd with the new key in use and update all secrets

With a single `kube-apiserver`, step 2 may be skipped.
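
As a sketch of step 1, the provider section might look like the following while both keys are present
(`key2` is the hypothetical new key); step 3 then moves `key2` to the top of the `keys` list:

```yaml
- aescbc:
    keys:
    - name: key1
      secret: <CURRENT BASE 64 ENCODED SECRET>
    - name: key2
      secret: <NEW BASE 64 ENCODED SECRET>
```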

## Decrypting all data

To disable encryption at rest, place the `identity` provider as the first entry in the config:

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
    - secrets
    providers:
    - identity: {}
    - aescbc:
        keys:
        - name: key1
          secret: <BASE 64 ENCODED SECRET>
```

and restart all `kube-apiserver` processes. Then run the command `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`
to force all secrets to be decrypted.
186 changes: 186 additions & 0 deletions docs/tasks/administer-cluster/securing-a-cluster.md
@@ -0,0 +1,186 @@
---
assignees:
- smarterclayton
title: Securing a Cluster
redirect_from:
- "/docs/admin/cluster-security/"
- "/docs/admin/cluster-security.html"
- "/docs/concepts/cluster-administration/cluster-security/"
- "/docs/concepts/cluster-administration/cluster-security.html"
---

* TOC
{:toc}

This document covers topics related to protecting a cluster from accidental or malicious access
and provides recommendations on overall security.

## Controlling access to the Kubernetes API

As Kubernetes is entirely API driven, controlling and limiting who can access the cluster and what actions
they are allowed to perform is the first line of defense.

### Use Transport Level Security (TLS) for all API traffic

Kubernetes expects that all API communication in the cluster is encrypted by default with TLS, and the
majority of installation methods will allow the necessary certificates to be created and distributed to
the cluster components. Note that some components and installation methods may enable local ports over
HTTP and administrators should familiarize themselves with the settings of each component to identify
potentially unsecured traffic.

### API Authentication

When a cluster is installed, choose an appropriate authentication mechanism for the API servers to use that
matches the common access patterns. For instance, small single-user clusters may wish to use a simple certificate
or static Bearer token approach. Larger clusters may wish to integrate an existing OIDC or LDAP server that
allows users to be subdivided into groups.
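
As an illustration only, an API server configured for both client certificate and OIDC authentication might
be started with flags like these (the file paths and issuer are placeholders, not recommendations):

```
kube-apiserver ... \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes ...
```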

All API clients must be authenticated, even those that are part of the infrastructure like nodes,
proxies, the scheduler, and volume plugins. These clients are typically [service accounts](/docs/admin/service-accounts-admin.md) and are created automatically at cluster startup.

Consult the [authentication reference document](/docs/admin/authentication.md) for more information.

### API Authorization

Once authenticated, every API call is also expected to pass an authorization check. Kubernetes ships an
integrated Role-Based Access Control (RBAC) component that matches an incoming user or group to a
set of permissions bundled into roles. These permissions combine verbs (get, create, delete) with
resources (pods, services, nodes) and can be namespace or cluster scoped. A set of out-of-the-box
roles is provided that offers reasonable default separation of responsibility depending on what actions
a client might want to perform.

As with authentication, simple and broad roles may be appropriate for smaller clusters, but as
more users interact with the cluster, it may become necessary to separate teams into separate
namespaces with more limited roles.

With authorization, it is important to understand how updates on one object may cause actions in
other places. For instance, a user may not be able to create pods directly, but allowing them to
create a deployment (which creates pods on their behalf) will let them create those pods
indirectly. Likewise, deleting a node from the API will result in the pods scheduled to that node
being terminated and recreated on other nodes. The out of the box roles represent a compromise
between flexibility and the common use cases, but more limited roles should be carefully reviewed
to prevent accidental escalation.
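
For example, a minimal sketch of a namespaced role that only allows reading pods, bound to a hypothetical
`dev-team` group (the RBAC API group was `v1beta1` around Kubernetes 1.7; adjust to your cluster version):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: development
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: development
  name: read-pods
subjects:
- kind: Group
  name: dev-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```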

Consult the [authorization reference section](/docs/admin/authorization) for more information.


## Controlling the capabilities of a workload or user at runtime

Authorization in Kubernetes is intentionally high level, focused on coarse actions on resources.
More powerful controls exist as **policies** to limit by use case how those objects act on the
cluster, themselves, and other resources.

### Limiting resource usage on a cluster

[Resource quota](/docs/concepts/policy/resource-quotas.md) limits the number or capacity of
resources granted to a namespace. This is most often used to limit the amount of CPU, memory,
or persistent disk a namespace can allocate, but can also control how many pods, services, or
volumes exist in each namespace.

[Limit ranges](/docs/admin/limitrange) restrict the maximum or minimum size of some of the
resources above, to prevent users from requesting unreasonably high or low values for commonly
reserved resources like memory, or to provide default limits when none are specified.
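
For example, a minimal sketch of a quota for a hypothetical `development` namespace (the limits shown are
arbitrary placeholders):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```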


### Controlling what privileges containers run with

A pod definition contains a [security context](/docs/tasks/configure-pod-container/security-context.md)
that allows it to request access to running as a specific Linux user on a node (like root),
access to run privileged or access the host network, and other controls that would otherwise
allow it to run unfettered on a hosting node. [Pod security policies](/docs/concepts/policy/pod-security-policy.md)
can limit which users or service accounts can provide dangerous security context settings.

In general most application workloads need limited access to host resources and so can run
successfully as a root process (uid 0) without access to host information. However, given
the privileges associated with the root user, it is always recommended that application
containers be written to run as a non-root user, and that administrators who wish to prevent
client applications from escaping their containers use a restrictive pod security policy.
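
For example, a minimal sketch of a pod spec that requests a non-root, unprivileged runtime (the image name
and uid are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-example
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: example.com/myapp:1.0
    securityContext:
      privileged: false
      readOnlyRootFilesystem: true
```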


### Restricting network access

The [network policy](/docs/tasks/administer-cluster/declare-network-policy.md) for a namespace
allows application authors to restrict which pods in other namespaces may access pods and ports
within their namespace. Many of the supported [Kubernetes networking providers](/docs/concepts/cluster-administration/networking.md)
now respect network policy.
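
For example, a minimal sketch of a "default deny" ingress policy for a hypothetical `development`
namespace (assuming a network plugin that enforces network policy; the API group version may differ on
older clusters):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: development
spec:
  podSelector: {}
```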

Quota and limit ranges can also be used to control whether users may request node ports or
load balanced services, which on many clusters can control whether those users' applications
are visible outside of the cluster.

Additional protections may be available that control network rules on a per plugin or per
environment basis, such as per-node firewalls, physically separating cluster nodes to
prevent cross talk, or advanced networking policy.


### Controlling which nodes pods may access

By default there are no limits on which nodes a pod may run on. Kubernetes offers a
[rich set of policies for controlling placement of pods onto nodes](/docs/concepts/configuration/assign-pod-node.md)
that are available to end users. For many clusters, use of these policies to separate workloads
can be a convention that authors adopt or enforce via tooling.

As an administrator, you can use the beta admission plugin `PodNodeSelector` to force pods
within a namespace to default to or require a specific node selector, and if end users cannot
alter namespaces, this can strongly limit the placement of all of the pods in a specific workload.
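
For example, assuming the `PodNodeSelector` admission plugin is enabled, a namespace can be annotated so
that its pods are constrained to matching nodes (the namespace name and label value here are placeholders):

```
kubectl annotate namespace development 'scheduler.alpha.kubernetes.io/node-selector=env=restricted'
```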


## Protecting cluster components from compromise

This section describes some common patterns for protecting clusters from compromise.

### Restrict access to etcd

Write access to the etcd backend for the API is equivalent to gaining root on the entire cluster,
and read access can be used to escalate fairly quickly. Administrators should always use strong
credentials from the API servers to their etcd server, such as mutual auth via TLS client certificates,
and it is often recommended to isolate the etcd servers behind a firewall that only the API servers
may access. It is not recommended to allow other components within the cluster to access the master
etcd instance unless proper security precautions are taken to limit access for those other components
to only keys the API does not use.
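
For example, a minimal sketch of the API server flags used for mutual TLS to etcd (the endpoint and
certificate paths are assumptions that vary by installation):

```
kube-apiserver ... \
  --etcd-servers=https://10.0.0.10:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/etcd/client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/etcd/client.key ...
```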

### Enable audit logging

The [audit logger](/docs/admin/audit/) is an alpha feature that records actions taken by the
API for later analysis in the event of a compromise. It is recommended to enable audit logging
and archive the audit file on a secure server.
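
For example, a minimal sketch of the basic audit log flags (the path and retention values are placeholders):

```
kube-apiserver ... \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100 ...
```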

### Restrict access to alpha or beta features

Alpha and beta Kubernetes features are in active development and may have limitations or bugs
that result in security vulnerabilities. Always assess the value an alpha or beta feature may
provide against the possible risk to your security posture. When in doubt, disable features you
do not use.

### Rotate infrastructure credentials frequently

The shorter the lifetime of a secret or credential, the harder it is for an attacker to make use of it.
Set short lifetimes on certificates and automate their rotation, and prefer authentication providers
that can issue short-lived tokens.

### Review third party integrations before enabling them

Many third party integrations to Kubernetes may alter the security profile of your cluster. When
enabling an integration, always review the permissions that an extension requests before granting
it access. For example, many security integrations may request access to view all secrets on
your cluster, which effectively makes that component a cluster admin. When in doubt,
restrict the integration to functioning in a single namespace if possible.

Components that create pods may also be unexpectedly powerful if they can do so inside namespaces
like the `kube-system` namespace, because those pods can gain access to service account secrets
or run with elevated permissions if those service accounts are granted access to permissive
[pod security policies](/docs/concepts/policy/pod-security-policy.md).

### Encrypt secrets at rest

In general, the etcd database will contain any information accessible via the Kubernetes API
and may grant an attacker significant visibility into the state of your cluster. Always encrypt
your backups using a well reviewed backup and encryption solution, and consider using full disk
encryption where possible.

Kubernetes 1.7 contains an alpha feature that will encrypt `Secret` resources in etcd, preventing
parties that gain access to your etcd backups from viewing the content of those secrets. While
this feature is currently experimental, it may offer an additional level of defense when backups
are not encrypted or an attacker gains read access to etcd.

### Receiving alerts for security updates and reporting vulnerabilities

Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce)
group for emails about security announcements. See the [security reporting](/docs/reference/security.md)
page for more on how to report vulnerabilities.
