KubeArmor OCI Hooks Design

Note

This Design implementation was done under the LFX mentorship program by Akshay Gaikwad

Context

Overview

Advantages of OCI Hooks

How do KubeArmor OCI hooks work?

Steps to Set Up KubeArmor OCI hook

OCI Hooks Limitations

Other options we’ve considered

Pod Informers

Fanotify

Overview

KubeArmor currently mounts the container runtime's Unix domain socket inside its container. Exposing the CRI socket is considered dangerous: access to the socket gives full control over container management, meaning containers can be created or deleted at will. Mounting CRI sockets in containers can therefore lead to security issues and is generally avoided; some policy enforcers detect and disallow mounting container sockets.

This is a proposal to use OCI hooks instead of a CRI socket mount to overcome these security concerns. OCI hooks allow receiving container events without communicating with the container runtime sockets. They can be used to get events such as container runtime created, container created, and container stopped for all containers managed by the runtime. The OCI runtime-spec explains the OCI hooks configuration.

Advantages of OCI Hooks

OCI hooks help improve the security posture of KubeArmor. They remove the need to mount CRI sockets, and the KubeArmor pod's access to the host's PID namespace can be eliminated. The KubeArmor OCI hook runs directly on the host and can be used to obtain container process IDs from the host. The hook can also read container configuration files, which provide additional information about the container, for example the container's AppArmor profile.

How do KubeArmor OCI hooks work?

The OCI hook binary is invoked by the container runtime as per the configured actions. Per the OCI standard, the binary given in the hook's path must be an executable. The binary configured in the KubeArmor hook path acts as an agent for the KubeArmor daemon running on the node: it relays container events to KubeArmor over a socket.

Once the hook is successfully configured for the CRI engine, the engine takes care of calling it whenever a container operation is performed. The KubeArmor hook then gathers the required information about the container and sends it to the KubeArmor container hook listener. The process is illustrated in the diagram below.

OCI hooks must be set up successfully on the host machine in order to work.
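
As a rough illustration of this flow, here is a minimal sketch, in Go, of what such a hook binary could look like. This is not KubeArmor's actual implementation: the struct fields follow the OCI runtime state document that runtimes pass on the hook's stdin, while the listener socket path /var/run/kubearmor/hook.sock and the JSON relay format are illustrative assumptions.

// Hypothetical sketch of an OCI hook binary that relays container events
// to a KubeArmor listener over a Unix domain socket.
package main

import (
    "encoding/json"
    "io"
    "net"
    "os"
)

// ociState mirrors the OCI runtime state document that the container
// runtime writes to the hook's stdin (id, pid, bundle path, annotations).
type ociState struct {
    ID          string            `json:"id"`
    Pid         int               `json:"pid"`
    Bundle      string            `json:"bundle"`
    Annotations map[string]string `json:"annotations,omitempty"`
}

func main() {
    // Read the container state passed by the runtime on stdin.
    raw, err := io.ReadAll(os.Stdin)
    if err != nil {
        os.Exit(1)
    }

    var st ociState
    if err := json.Unmarshal(raw, &st); err != nil {
        os.Exit(1)
    }

    // Forward the event to the KubeArmor hook listener. The socket path is
    // an assumption for this sketch; the hook stage (createContainer vs
    // poststop) could be distinguished via the "args" configured for the hook.
    conn, err := net.Dial("unix", "/var/run/kubearmor/hook.sock")
    if err != nil {
        os.Exit(1)
    }
    defer conn.Close()

    if err := json.NewEncoder(conn).Encode(st); err != nil {
        os.Exit(1)
    }
}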

Steps to Set Up KubeArmor OCI hook

Configuring OCI hooks depends on the container runtime being used. Container runtimes like Runc, CRI-O, and Containerd allow configuring OCI hooks. The steps for CRI-O and Containerd are described below.

CRI-O

Create a hook JSON file inside the directory /usr/share/containers/oci/hooks.d/ (the default hook directory path). This file contains the hook configuration.

{
  "version": "1.0.0",
  "hook": {
    "path": "/provide/kube-armor/hook/path"
  },
  "when": {
    "always": true
  },
  "stages": ["createContainer", "poststop"]
}

Containerd

Unlike CRI-O, the Containerd runtime does not support OCI hooks by default. The Containerd repo has an open issue for adding hooks support in the spec. Another discussion is going on about NRI, which works similarly to hooks for container lifecycle events. The NRI repo is still in draft mode; however, in the future KubeArmor may be able to take advantage of it.

Containerd uses Runc underneath as the low-level container runtime. Due to the lack of OCI hook support in Containerd, we instead configure OCI hooks in the Runc configuration, which works similarly and gets us the necessary container events. The steps are explained below.

Create a Runc base spec JSON file using the following command (the spec is written to stdout, so redirect it to a file):

ctr oci spec > base-spec.json

Add the following hook configuration to the generated spec file.

  "hooks": {
      "createContainer": [
          {
    	"path": "/provide/kube-armor/hook/path",
    	"args": []
          }
       ],
      "poststop": [
         {
    	"path": “/provide/kube-armor/hook/path",
    	"args": []
         }
       ]
  }

Copy the above file to the path /etc/containerd/base-spec.json.

Edit /etc/containerd/config.toml and set the following under the Runc runtime plugin section plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc:

base_runtime_spec = "/etc/containerd/base-spec.json"
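
For reference, the relevant fragment of /etc/containerd/config.toml would then look roughly like this (assuming containerd config version 2 with the default runc v2 runtime type; merge it into your existing runc runtime section):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  base_runtime_spec = "/etc/containerd/base-spec.json"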

Finally, restart the Containerd service to start using the hooks.

systemctl restart containerd

Note: Restarting the Containerd service does not delete the existing containers. Privileged access on the host is required to restart the service.

OCI Hooks Limitations

OCI hooks for Docker are not natively supported, and configuring them via Runc hooks does not work in that case. In addition, we do not have access to containers that were created before the hook was set up.

Other options we’ve considered

As alternatives to OCI hooks for getting container events, we considered the following options. Each has its own pros and cons compared to OCI hooks.

Pod Informers

Kubernetes Pod Informers can monitor pod events in the cluster and thereby indirectly observe container create and stop events, providing the container IDs to the user. However, container IDs alone are not enough for KubeArmor: it needs additional information such as the AppArmor profile and the mount and process namespace details, which can only be retrieved by connecting to the runtime engine via its sockets. We want to eliminate socket mounting for the security reasons described above.
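
For context, a minimal Pod Informer sketch using client-go might look like the following. This is an illustration rather than KubeArmor code; the resync period and the event handling shown are assumptions.

// Hypothetical sketch: watching pod events with a client-go shared informer
// to derive container create/delete events from pod status.
package main

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
)

func main() {
    // In-cluster configuration, since the watcher runs inside a pod.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    factory := informers.NewSharedInformerFactory(client, 30*time.Second)
    podInformer := factory.Core().V1().Pods().Informer()

    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        UpdateFunc: func(_, newObj interface{}) {
            pod, ok := newObj.(*corev1.Pod)
            if !ok {
                return
            }
            for _, cs := range pod.Status.ContainerStatuses {
                // ContainerID has the form "<runtime>://<id>"; only the ID is
                // available here, not the AppArmor profile or namespace details.
                if cs.State.Running != nil {
                    fmt.Printf("pod %s/%s container running: %s\n",
                        pod.Namespace, pod.Name, cs.ContainerID)
                }
            }
        },
        DeleteFunc: func(obj interface{}) {
            if pod, ok := obj.(*corev1.Pod); ok {
                fmt.Printf("pod %s/%s deleted\n", pod.Namespace, pod.Name)
            }
        },
    })

    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop)
    factory.WaitForCacheSync(stop)
    select {}
}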

Fanotify

The Linux fanotify API can be used to monitor Linux filesystem events, from which container events can be derived indirectly. However, it requires the KubeArmor container to run in the host PID namespace (hostPID=true).
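
A rough sketch of the fanotify approach is shown below. Again this is an illustration, not KubeArmor code; it assumes golang.org/x/sys/unix, root privileges (CAP_SYS_ADMIN), and a little-endian host.

// Hypothetical sketch: observing file-open events on the host with fanotify.
// Container activity can then be inferred by mapping the reported PID to a
// container via /proc/<pid>/cgroup.
package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "os"

    "golang.org/x/sys/unix"
)

func main() {
    // Create a fanotify notification group.
    fd, err := unix.FanotifyInit(unix.FAN_CLASS_NOTIF|unix.FAN_CLOEXEC, unix.O_RDONLY)
    if err != nil {
        fmt.Fprintln(os.Stderr, "fanotify_init:", err)
        os.Exit(1)
    }
    defer unix.Close(fd)

    // Watch open events on the whole mount containing "/".
    if err := unix.FanotifyMark(fd, unix.FAN_MARK_ADD|unix.FAN_MARK_MOUNT,
        unix.FAN_OPEN, unix.AT_FDCWD, "/"); err != nil {
        fmt.Fprintln(os.Stderr, "fanotify_mark:", err)
        os.Exit(1)
    }

    buf := make([]byte, 4096)
    var meta unix.FanotifyEventMetadata
    for {
        n, err := unix.Read(fd, buf)
        if err != nil {
            continue
        }
        rd := bytes.NewReader(buf[:n])
        // Each event starts with a fixed-size metadata record
        // (little-endian assumed here for brevity).
        for rd.Len() >= binary.Size(meta) {
            if err := binary.Read(rd, binary.LittleEndian, &meta); err != nil {
                break
            }
            // meta.Pid is the host PID of the process that opened a file.
            fmt.Printf("open event from pid %d\n", meta.Pid)
            if meta.Fd >= 0 {
                unix.Close(int(meta.Fd)) // release the event's file descriptor
            }
        }
    }
}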
