- Release Signoff Checklist
- Summary
- Motivation
- Proposal
- Design Details
- Production Readiness Review Questionnaire
- Implementation History
- Drawbacks
- Alternatives
Items marked with (R) are required prior to targeting to a milestone / release.
- (R) Enhancement issue in release milestone, which links to KEP dir in kubernetes/enhancements (not the initial KEP PR)
- (R) KEP approvers have approved the KEP status as `implementable`
- (R) Design details are appropriately documented
- (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
- (R) Graduation criteria is in place
- (R) Production readiness review completed
- Production readiness review approved
- "Implementation History" section is up-to-date for milestone
- User-facing documentation has been created in kubernetes/website, for publication to kubernetes.io
- Supporting documentation e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
This is a proposal to add AppArmor support to the Kubernetes API. This proposal aims to do the bare minimum to clean up the feature from its beta release, without blocking future enhancements.
AppArmor can enable users to run a more secure deployment, and/or provide better auditing and monitoring of their systems. AppArmor should be supported to provide users a simpler alternative to SELinux, or to provide an interface for users that are already maintaining a set of AppArmor profiles.
- Fully document and formally spec the feature support
- Add equivalent API fields to replace the AppArmor annotations, and provide a pod-level field that applies to all containers.
- Deprecate the AppArmor annotations
This KEP proposes the absolute minimum to get AppArmor to GA, therefore all functional enhancements are out of scope, including:
- Defining any standard "Kubernetes branded" AppArmor profiles
- Formally specifying the AppArmor profile format in Kubernetes
- Providing mechanisms for loading profiles from outside of the node
- Changing the semantics around AppArmor support
- Windows support
AppArmor is not available on every Linux distribution. Besides this, container runtimes treat AppArmor as a compile-time feature, which may be disabled as well. The GA API does not change the error handling; it behaves exactly the same as the current error propagation paths.
The AppArmor API will be functionally equivalent to the current beta API, with the enhancement of adding pod-level profiles to match the behavior of seccomp. This includes the Pod API, which specifies what profile the containers run with.
The Pod AppArmor API is generally immutable, except in PodTemplates.
```go
type PodSecurityContext struct {
	...
	// The AppArmor options to use by the containers in this pod.
	// +optional
	AppArmorProfile *AppArmorProfile
	...
}

type SecurityContext struct {
	...
	// The AppArmor options to use by this container. If AppArmor options are
	// provided at both the pod & container level, the container options
	// override the pod options.
	// +optional
	AppArmorProfile *AppArmorProfile
	...
}

// Only one profile source may be set.
// +union
type AppArmorProfile struct {
	// +unionDiscriminator
	Type AppArmorProfileType

	// LocalhostProfile indicates a loaded profile on the node that should be used.
	// The profile must be preconfigured on the node to work.
	// Must match the loaded name of the profile.
	// Must only be set if type is "Localhost".
	// +optional
	LocalhostProfile *string
}

type AppArmorProfileType string

const (
	AppArmorProfileTypeUnconfined     AppArmorProfileType = "Unconfined"
	AppArmorProfileTypeRuntimeDefault AppArmorProfileType = "RuntimeDefault"
	AppArmorProfileTypeLocalhost      AppArmorProfileType = "Localhost"
)
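To make the union semantics concrete, the sketch below mirrors the proposed types locally and validates the rule that `LocalhostProfile` is set if and only if `Type` is `"Localhost"`. This is an illustrative sketch, not the actual API server validation code; the function name is hypothetical.

```go
package main

import "fmt"

// Local mirrors of the proposed API types, for illustration only.
type AppArmorProfileType string

const (
	TypeUnconfined     AppArmorProfileType = "Unconfined"
	TypeRuntimeDefault AppArmorProfileType = "RuntimeDefault"
	TypeLocalhost      AppArmorProfileType = "Localhost"
)

type AppArmorProfile struct {
	Type             AppArmorProfileType
	LocalhostProfile *string
}

// validateAppArmorProfile sketches the union rule: LocalhostProfile must be
// set exactly when Type is "Localhost".
func validateAppArmorProfile(p AppArmorProfile) error {
	switch p.Type {
	case TypeLocalhost:
		if p.LocalhostProfile == nil || *p.LocalhostProfile == "" {
			return fmt.Errorf("localhostProfile is required when type is %q", p.Type)
		}
	case TypeUnconfined, TypeRuntimeDefault:
		if p.LocalhostProfile != nil {
			return fmt.Errorf("localhostProfile must not be set when type is %q", p.Type)
		}
	default:
		return fmt.Errorf("unknown AppArmor profile type %q", p.Type)
	}
	return nil
}

func main() {
	name := "k8s-apparmor-example-deny-write" // hypothetical profile name
	fmt.Println(validateAppArmorProfile(AppArmorProfile{Type: TypeLocalhost, LocalhostProfile: &name}) == nil)
	fmt.Println(validateAppArmorProfile(AppArmorProfile{Type: TypeRuntimeDefault, LocalhostProfile: &name}) == nil)
}
```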
This API makes the options more explicit and leaves room for new profile sources to be added in the future (e.g. Kubernetes predefined profiles or ConfigMap profiles) and for future extensions, such as defining the behavior when a profile cannot be set.
This KEP proposes LocalhostProfile as the only source of user-defined profiles at this point. User-defined profiles are essential for users to realize the full benefits of AppArmor, allowing them to decrease their attack surface based on their own workloads. Only profiles with a specified prefix will be available as Localhost profiles. This prevents profiles meant for other system daemons from being used by Kubernetes, and the prefix will be configurable via the kubelet.
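The kubelet-side prefix check could look like the following sketch. The prefix value and function name are hypothetical; the KEP only states that the prefix will be kubelet-configurable.

```go
package main

import (
	"fmt"
	"strings"
)

// allowedLocalhostProfile sketches the check described above: only localhost
// profiles whose loaded name starts with a kubelet-configured prefix are
// permitted for use by pods.
func allowedLocalhostProfile(profileName, requiredPrefix string) bool {
	return strings.HasPrefix(profileName, requiredPrefix)
}

func main() {
	const prefix = "k8s-apparmor-" // hypothetical kubelet-configured prefix
	fmt.Println(allowedLocalhostProfile("k8s-apparmor-deny-write", prefix)) // true
	fmt.Println(allowedLocalhostProfile("usr.sbin.ntpd", prefix))           // false
}
```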
AppArmor profiles are applied at container creation time. The underlying container runtime only references already loaded profiles by name. Therefore, updating a profile's content requires a manual reload via `apparmor_parser`.
Note that changing profiles is not recommended and may cause containers to fail on the next restart, if the new profile is more restrictive, invalid, or the file is no longer available on the host.
Currently, users have no way to tell whether the profiles on disk have been deleted or modified. This KEP proposes no changes to the existing functionality.
The recommended approach for rolling out changes to AppArmor profiles is to always create new profiles instead of updating existing ones. Create and deploy a new version of the existing Pod Template, changing the profile name to the newly created profile. Once the new version works, delete the former Pod Template. This avoids disruption to in-flight workloads.
The current behavior lacks features to facilitate the maintenance of AppArmor profiles across the cluster, two examples being: 1) the lack of profile synchronization across nodes, and 2) the difficulty of identifying that profiles have been changed on disk or in memory after pods started using them. However, given its current "pseudo-GA" state, we don't want to change it with this KEP. Out-of-tree enhancements like the security-profiles-operator can provide such functionality on top.
The current support relies on profiles being loaded on all cluster nodes where the pods using them may be scheduled. It is also the cluster admin's responsibility to ensure the profiles are correctly saved and synchronised across all nodes. Existing mechanisms like node labels and nodeSelectors can be used to ensure that pods are scheduled on nodes supporting their desired profiles.
We propose maintaining support for a single runtime profile, defined by using `AppArmorProfileTypeRuntimeDefault`. The reasons are:

- No changes to the current behavior. Users are currently not allowed to specify other runtime profiles; the existing API server rejects runtime profile names different than `runtime/default`.
- Most runtimes only support the default profile, although the CRI is flexible enough to allow the kubelet to send other profile names.
- Multiple runtime profiles have never been requested as a feature.

If built-in support for multiple runtime profiles is needed in the future, a new KEP will be created to cover its details.
[X] I/we understand the owners of the involved components may require updates to existing tests to make this code solid enough prior to committing the changes necessary to implement this enhancement.
None
New tests will be added covering the annotation/field conflict cases described under Version Skew Strategy.
Additional integration tests with PodSecurityAdmission will be required.
- TestPodSecurityWebhook: https://github.com/kubernetes/kubernetes/blob/1ded677b2a77a764a0a0adfa58180c3705242c49/test/integration/auth/podsecurity_test.go#L95
AppArmor already has [e2e tests](https://github.com/kubernetes/kubernetes/blob/2f6c4f5eab85d3f15cd80d21f4a0c353a8ceb10b/test/e2e_node/apparmor_test.go), but the tests are guarded by the `[Feature:AppArmor]` tag and not run in the standard test suites. Tests will remain tagged as `[Feature:AppArmor]`, as implemented right now, but they will be migrated to use the new fields API.
There are different scenarios in which applying an AppArmor profile may fail, below are the ones we mapped and their outcome once this KEP is implemented:
| Scenario | API Server Result | Kubelet Result |
|---|---|---|
| 1. Using a localhost or `runtime/default` profile when the container runtime does not support AppArmor. | Pod created | The outcome is container runtime dependent. Containers may 1) fail to start or 2) run normally without having their policies enforced. |
| 2. Using a custom or `runtime/default` profile that restricts actions a container is trying to make. | Pod created | The outcome is workload and AppArmor dependent. Containers may 1) fail to start, 2) misbehave or 3) log violations. |
| 3. Using a custom profile that does not exist on the node. | Pod created | Containers fail to start. Retry respecting RestartPolicy and back-off delay. |
| 4. Using an unsupported runtime profile (e.g. `runtime/default-audit`). | Pod not created | N/A |
| 5. Using a localhost profile with an invalid name. | Pod not created | N/A |
| 6. AppArmor is disabled by the host or the build. | Pod created | Kubelet puts Pod in blocked state. |
Scenario 2 is the expected behavior of using AppArmor and it is included here for completeness.
Scenario 5 represents the case of failing the existing validation, which is defined in the Pod API.
All API skew is resolved in the API server. New Kubelets will only use the AppArmor values specified in the fields, and ignore the annotations.
If no AppArmor annotations or fields are specified, no action is necessary.
If the `AppArmor` feature gate is disabled, then the annotations and fields are cleared (current behavior).

If the pod's OS is `windows`, setting the fields is forbidden and annotations are not copied to the corresponding fields.
If only AppArmor fields are specified, add the corresponding annotations. If these are specified at the Pod level, copy the annotations to each container that does not have annotations already specified. This ensures that the fields are enforced even if the node version trails the API version (see Version Skew Strategy).
If only AppArmor annotations are specified, copy the values into the corresponding fields. This ensures that existing applications continue to enforce AppArmor, and prevents the kubelet from needing to resolve annotations & fields. If the annotation is empty, then the `runtime/default` profile will be used by the CRI container runtime. If a localhost profile is specified, then container runtimes will strip the `localhost/` prefix, too. This will be covered by e2e tests during the GA promotion.
If both AppArmor annotations and fields are specified, the values MUST match. This will be enforced in API validation.
If a Pod with a container specifies an AppArmor profile by field/annotation, the container only inherits the pod-level field/annotation if neither is set at the container level.
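The skew-handling rules above can be sketched as follows. The types and helper names are local stand-ins for the real API server defaulting and validation code, assuming the beta annotation value format (`runtime/default`, `localhost/<name>`, `unconfined`):

```go
package main

import (
	"fmt"
	"strings"
)

// Local stand-in for the proposed field type.
type AppArmorProfile struct {
	Type             string  // "Unconfined", "RuntimeDefault", or "Localhost"
	LocalhostProfile *string // set only for "Localhost"
}

// profileFromAnnotation converts a beta annotation value into the proposed field.
func profileFromAnnotation(v string) *AppArmorProfile {
	switch {
	case v == "unconfined":
		return &AppArmorProfile{Type: "Unconfined"}
	case v == "runtime/default":
		return &AppArmorProfile{Type: "RuntimeDefault"}
	case strings.HasPrefix(v, "localhost/"):
		name := strings.TrimPrefix(v, "localhost/")
		return &AppArmorProfile{Type: "Localhost", LocalhostProfile: &name}
	}
	return nil
}

func equalProfiles(a, b *AppArmorProfile) bool {
	if a == nil || b == nil {
		return a == b
	}
	if a.Type != b.Type {
		return false
	}
	if (a.LocalhostProfile == nil) != (b.LocalhostProfile == nil) {
		return false
	}
	return a.LocalhostProfile == nil || *a.LocalhostProfile == *b.LocalhostProfile
}

// resolveSkew sketches the rules: annotation-only is copied to the field,
// field-only is kept, and a mismatch between the two is rejected.
func resolveSkew(annotation string, field *AppArmorProfile) (*AppArmorProfile, error) {
	fromAnn := profileFromAnnotation(annotation)
	switch {
	case field == nil:
		return fromAnn, nil // annotation only (or neither): copy to field
	case fromAnn == nil:
		return field, nil // field only: keep it
	case !equalProfiles(field, fromAnn):
		return nil, fmt.Errorf("AppArmor annotation %q conflicts with field", annotation)
	}
	return field, nil
}

func main() {
	p, _ := resolveSkew("runtime/default", nil)
	fmt.Println(p.Type) // RuntimeDefault
	_, err := resolveSkew("unconfined", &AppArmorProfile{Type: "RuntimeDefault"})
	fmt.Println(err != nil) // true
}
```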
To raise awareness of annotation usage (in case of old automation), a warning mechanism will be used to highlight that support will be dropped in v1.30. The mechanisms being considered are audit annotations, annotations on the object, events, or a warning as described in KEP #1693.
PodSecurityPolicy support has been removed in Kubernetes 1.25. Due to this, the equivalent functionality for annotations such as the following will not be supported:

- `apparmor.security.beta.kubernetes.io/defaultProfileName`
- `apparmor.security.beta.kubernetes.io/allowedProfileNames`

To provide this functionality, users will need to implement equivalent rules in admission control, by injecting or validating a `LocalhostProfile` field for Pods or individual containers.
The AppArmor fields on a pod are immutable, which also applies to the annotation.
When an Ephemeral Container is added, it will follow the same rules for using or overriding the pod's AppArmor profile. Ephemeral containers will never sync with an AppArmor annotation.
PodTemplates (e.g. ReplicaSets, Deployments, StatefulSets, etc.) will be ignored. The field/annotation resolution will happen on template instantiation.
To raise awareness of existing controllers using the AppArmor annotations that need to be migrated, a warning mechanism will be used to highlight that support will be dropped in v1.30.
The mechanisms being considered are audit annotations, annotations on the object, events, or a warning as described in KEP #1693.
The API server will continue to reject annotations with runtime profiles different than `runtime/default`, to maintain the existing behavior. Violations lead to the error message: `Invalid value: "runtime/profile-name": must be a valid AppArmor profile`
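The rejection described above can be sketched as follows; the function name is illustrative, not the actual API server code, and it assumes the beta annotation value format:

```go
package main

import (
	"fmt"
	"strings"
)

// validateAppArmorAnnotationValue sketches the existing check: only an empty
// value, "unconfined", the "runtime/default" runtime profile, or a
// "localhost/"-prefixed profile reference are accepted.
func validateAppArmorAnnotationValue(v string) error {
	switch {
	case v == "", v == "unconfined", v == "runtime/default":
		return nil
	case strings.HasPrefix(v, "localhost/"):
		return nil
	default:
		return fmt.Errorf("Invalid value: %q: must be a valid AppArmor profile", v)
	}
}

func main() {
	fmt.Println(validateAppArmorAnnotationValue("runtime/default") == nil)      // true
	fmt.Println(validateAppArmorAnnotationValue("runtime/profile-name") == nil) // false
}
```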
Nodes do not currently support in-place upgrades, so pods will be recreated on node upgrade and downgrade. No special handling or consideration is needed to support this.
On the API server side, we've already taken version skew in HA clusters into account. The same precautions make upgrade & downgrade handling a non-issue.
Since we support up to 2 minor releases of version skew between the master and node, annotations must continue to be supported and backfilled for at least 2 versions past the initial implementation. If this feature is implemented in v1.27, I propose v1.30 as a target for removal of the old behavior. Specifically, annotation support will be removed from the kubelet after this period, and fields will no longer be copied to annotations for older kubelet versions. However, annotations submitted to the API server will continue to be copied to fields indefinitely, as was done with seccomp.
The changes brought to the Kubelet by this KEP will ensure backwards compatibility in a similar way the changes above define it at API Server level. Therefore, the AppArmor profiles will be applied following the priority order:
- Container-specific field.
- Container-specific annotation.
- Pod-wide field.
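The priority order above can be sketched with annotation-style string values for brevity; the helper name is illustrative, not kubelet code:

```go
package main

import "fmt"

// effectiveProfile sketches the kubelet's resolution order: container-specific
// field, then container-specific annotation, then pod-wide field. Empty string
// means "not set".
func effectiveProfile(containerField, containerAnnotation, podField string) string {
	if containerField != "" {
		return containerField
	}
	if containerAnnotation != "" {
		return containerAnnotation
	}
	return podField
}

func main() {
	fmt.Println(effectiveProfile("", "localhost/audit", "runtime/default")) // localhost/audit
	fmt.Println(effectiveProfile("", "", "runtime/default"))                // runtime/default
}
```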
This section is excluded, as it is the subject of the entire proposal.
- Feature gate
  - Feature gate name: `AppArmor`
  - Components depending on the feature gate:
    - kube-apiserver
    - kubelet

N/A - the feature is already enabled by default since Kubernetes v1.4.
Yes, it works in the same way as before moving the feature to GA. However, the GA related changes are backwards compatible, and the API supports rollback of the Kubernetes API Server as described in the Version Skew Strategy.
N/A - the feature is already enabled by default since Kubernetes v1.4.
N/A - the feature is already enabled by default since Kubernetes v1.4.
The Version Skew Strategy section covers this point. Running workloads should have no impact as the Kubelet will support either the existing annotations or the new fields introduced by this KEP.
Clusters upgrading while using beta AppArmor annotations will want to ensure that profiles on upgraded nodes are loaded at the Kubelet's specified path prefix. Containers of Pods loading AppArmor profiles will fail to start if they attempt to load non-Kubernetes profiles.
Monitoring the below metrics can help identify these issues:
- `started_containers_errors_total`
- `started_pods_errors_total`
Automated tests will cover the scenarios with and without the changes proposed in this KEP. As defined under Version Skew Strategy, we assume the cluster may have kubelets with older versions (without this KEP's changes), so this will be covered as part of the new tests.
Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
The promotion of AppArmor to GA would deprecate the beta annotations as described in the Version Skew Strategy.
The feature is built into the kubelet and API server components. No metric is planned at this moment. The way to determine usage is by checking whether the pods/containers have an `AppArmorProfile` set.
Pod events will provide details of profiles being successfully applied to specific containers.
- Events
  - Event Reason: `AppArmor`
N/A
What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
N/A
Are there any missing metrics that would be useful to have to improve observability of this feature?
N/A
This KEP adds no new dependencies.
NO
NO
NO
NO
Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
NO
Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
NO
No
No, it will only add an `AppArmorProfile` field to existing types.
No
New container-level and pod-level fields.
Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
No
Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
No
No impact to running workloads.
No impact is foreseen to running workloads based on the nature of the changes brought by this KEP, although some general errors and failures are described under Failure and Fallback Strategy.
N/A
- 2016-07-25: AppArmor design proposal
- 2016-09-26: AppArmor beta release with v1.4
- 2020-01-10: Initial KEP
- 2020-08-24: Major rework and sync with seccomp
- 2021-04-25: PSP mentions
- 2022-05-07: Rework, removal of PSP mentions
Promoting AppArmor as-is to GA may be seen as "blessing" the current functionality, and make it harder to make some of the enhancements listed under Non-Goals. Since the current behavior is unguarded, I think we already need to treat the behavior as GA.
N/A