Define the differences between the networking stack #849

25 changes: 22 additions & 3 deletions SPEC.md

The CNI protocol is based on execution of binaries invoked by the container runtime. CNI defines the protocol between the plugin binary and the runtime.

A CNI plugin is responsible for configuring a container's network interface in some manner. Exactly what that entails is intentionally left undefined: plugins *may provide very different functionality*, but they all speak the CNI protocol.

Plugins fall into three broad categories:
- "CNI Provider plugins": runtime-centric tools that a container runtime calls as a black box to set up networking for a containerized process. They orchestrate the entire process of attaching a container to a network for a given platform (e.g. Kubernetes), including calling other helper plugins, typically by integrating with a particular container networking implementation. The most commonly known CNI options (such as calico, antrea, flannel, ovn-kubernetes, multus, azure-cni, eks-cni, gke-cni and so on) all fall into this category.
- "Local" plugins, which implement one-off functionality for a very specific, local container networking mode. The most common example is the host-device plugin, which attaches a host device to the network namespace of a container.
- CNI helper plugins: these understand the semantics of the CNI API, and know how to play a specific role in the lifecycle of a CNI call, but aren't invoked directly by a container runtime (see the sketch after this list).
  - For example, the `host-local` IPAM plugin is used by well-known CNI providers to manage IP address ranges and to pick IP addresses on a given host in a cluster. It doesn't itself attach a container to a network, but it does the work of managing IPs so that higher-level plugins can coordinate that attachment.
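
To make the provider/helper relationship concrete, the following is a minimal, hypothetical sketch (not the reference implementation) of how a provider plugin might delegate address allocation to the `host-local` helper: it locates the helper binary via the `CNI_PATH` directories supplied by the runtime, re-uses its own stdin configuration and `CNI_*` environment, and reads the helper's JSON result from stdout. Function names and error handling here are illustrative only.

```go
// Hypothetical sketch: an interface-creating plugin delegating IP allocation
// to the host-local IPAM helper. Error handling and result parsing are
// deliberately simplified.
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
	"os/exec"
	"path/filepath"
)

// delegateIPAM re-invokes the helper named by ipamType (e.g. "host-local")
// with the same stdin configuration and CNI_* environment this plugin
// received, and returns the helper's stdout (a JSON result holding the IPs).
func delegateIPAM(ipamType string, stdinConf []byte) ([]byte, error) {
	// CNI_PATH lists the directories the runtime searches for plugin binaries.
	for _, dir := range filepath.SplitList(os.Getenv("CNI_PATH")) {
		bin := filepath.Join(dir, ipamType)
		if _, err := os.Stat(bin); err != nil {
			continue
		}
		cmd := exec.Command(bin)
		cmd.Env = os.Environ() // pass CNI_COMMAND, CNI_CONTAINERID, ... through
		cmd.Stdin = bytes.NewReader(stdinConf)
		var out bytes.Buffer
		cmd.Stdout = &out
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return nil, fmt.Errorf("ipam helper %q failed: %w", ipamType, err)
		}
		return out.Bytes(), nil
	}
	return nil, fmt.Errorf("ipam helper %q not found in CNI_PATH", ipamType)
}

func main() {
	conf, err := io.ReadAll(os.Stdin) // network configuration from the runtime
	if err != nil {
		os.Exit(1)
	}
	ipResult, err := delegateIPAM("host-local", conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// ... create the interface, assign the IPs from ipResult, emit a result ...
	_ = ipResult
}
```

A real provider would then merge the returned addresses into the interface it creates and into the result it reports back to its caller.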

To illustrate this, we can envision a CNI ADD call happening from a container runtime like so:
- First, the container runtime calls CNI ADD on a vendor plugin (such as calico, antrea, cilium, or flannel)
- Next, the vendor plugin calls CNI ADD on an IPAM helper plugin in order to decide on an IP address for that container
- Next, the vendor plugin completes whatever needs to be done to set up routing to and from this IP address using its underlying dataplane technologies
- Finally, the vendor plugin returns its result, and control, to the calling container runtime

Thus, some CNI plugins take responsibility for configuring a container's network interface in some manner, while other plugins (like the IPAM plugin) take responsibility for supporting that attachment: in the IPAM example, the plugin doesn't attach any networking devices, but it does the essential task of finding an IP address which *can* then be assigned by another CNI plugin.
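
For the runtime's side of this flow, the sketch below shows roughly how a caller could issue the initial ADD using the `libcni` Go package from this repository. The file paths, container ID, and namespace path are hypothetical, and the exact function and field names should be checked against the `libcni` documentation rather than taken as authoritative.

```go
// Rough sketch of a runtime issuing CNI ADD via the libcni helper package.
package main

import (
	"context"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Where the runtime expects plugin binaries (the vendor plugin and any helpers).
	cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	// The network configuration, typically installed by the CNI provider.
	netconf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-example.conflist")
	if err != nil {
		panic(err)
	}

	// Runtime-supplied attachment parameters: which container, which network
	// namespace, and what to call the interface inside it.
	rt := &libcni.RuntimeConf{
		ContainerID: "example-container",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
	}

	// Step 1 of the flow above: the runtime calls ADD on the vendor plugin.
	// Steps 2 and 3 (IPAM delegation, dataplane setup) happen inside that plugin.
	res, err := cni.AddNetworkList(context.Background(), netconf, rt)
	if err != nil {
		panic(err)
	}
	// Final step: the vendor plugin's result comes back to the runtime.
	_ = res.Print() // prints the JSON result to stdout
}
```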


We might also categorize CNI plugins in terms of whether or not they create network interfaces (the example after this list shows both kinds used together):

* "Interface" plugins, which create a network interface inside the container and ensure it has connectivity.
* "Chained" plugins, which adjust the configuration of an already-created interface (but may need to create more interfaces to do so).

The runtime passes parameters to the plugin via environment variables and configuration. It supplies configuration via stdin. The plugin returns
a [result](#Section-5-Result-Types) on stdout on success, or an error on stderr if the operation fails. Configuration and results are encoded in JSON.
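
The following is a bare, hypothetical plugin skeleton illustrating that exchange: parameters arrive in `CNI_*` environment variables, the configuration arrives as JSON on stdin, and a JSON result goes to stdout on success, with failures reported on stderr and a non-zero exit. The result shown is a stripped-down subset of the real result schema.

```go
// Minimal, hypothetical CNI plugin skeleton illustrating the wire protocol.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

type netConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

func main() {
	stdin, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to read config:", err)
		os.Exit(1)
	}

	var conf netConf
	if err := json.Unmarshal(stdin, &conf); err != nil {
		fmt.Fprintln(os.Stderr, "invalid network configuration:", err)
		os.Exit(1)
	}

	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		// A real plugin would configure CNI_IFNAME inside CNI_NETNS here.
		// This skeleton just echoes a bare result so the exchange is visible.
		result := map[string]interface{}{
			"cniVersion": conf.CNIVersion,
			"interfaces": []map[string]string{
				{"name": os.Getenv("CNI_IFNAME"), "sandbox": os.Getenv("CNI_NETNS")},
			},
		}
		if err := json.NewEncoder(os.Stdout).Encode(result); err != nil {
			os.Exit(1)
		}
	case "DEL", "CHECK", "VERSION":
		// Omitted for brevity; a real plugin must handle these as well.
	default:
		fmt.Fprintln(os.Stderr, "unknown CNI_COMMAND")
		os.Exit(1)
	}
}
```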