Meta

  • Name: kpack donation to CNB
  • Start Date: 2022-06-21
  • Author(s): Juan Bustamante
  • Status: Approved
  • RFC Pull Request: rfcs#235
  • CNB Pull Request: (leave blank)
  • CNB Issue: N/A
  • Supersedes: (put "N/A" unless this replaces an existing RFC, then link to that RFC)

Summary

This RFC proposes the donation of the open-source project kpack into the Cloud Native Buildpacks Community Organization as a vendor neutral staging ground under the CNB governance umbrella. Once the project is deemed sufficiently mature, the project will be moved under the Cloud Native Buildpacks Organization.

Following the process defined in the Buildpack Community RFC, the following table presents the criteria used to evaluate the project.

| Criteria | Evidence |
| --- | --- |
| The project must be a tool, platform, or integration related to Cloud Native Buildpacks. | See the Motivation section |
| The project must be open source and licensed under Apache 2.0. | See License |
| All external dependencies must be listed with licensing info and be permissively licensed under an Apache 2.0-compatible license. | See the report generated using go-licenses |
| It must follow the Cloud Native Computing Foundation Code of Conduct. | See Code of conduct |
| The project must enable DCO sign-off for all commits. | See Sign-off process |
| The project must be open to contributions and have a public issue tracker. | See public issue tracker |
| The project must have a governance document that clearly defines the project maintainers and how they are elected. Each project may choose to define its own governance model as long as it is clearly documented and allows project maintainers to be elected from the community. | See Governance |
| The list of project maintainers must be publicly available and controlled through a GitHub team. | See Maintainers |
| The project must use a CODEOWNERS file to define the maintainers for each repository. The CODEOWNERS file should reference the GitHub team that controls the list of maintainers. | See CODEOWNERS file |
| All project contributors must be members of the Buildpacks community organization. | See the Team Roles section and People in the CNB community organization |
| The project must be actively maintained (i.e. issues and pull requests must be addressed regularly, approved pull requests must be merged or updated in a timely manner, etc.). | See issues and pull requests |
| There should be visible automated testing for all repositories that are part of the project. | See codecov |
| The project maintainers must conform to a set of best-effort SLOs around patching critical CVEs when applicable to the project. | |
| The project has a CONTRIBUTING.md file: a guide to how contributors should submit patches and the expectations around code review. | See Contributing |
| The project has a DEVELOPMENT.md file: a guide to how contributors should develop the project. | See Development |
| The project has an ADOPTERS.md file: a list of adopters of the project. | See Adopters |
| The project has a VERSIONING.md file: a guide to how versioning is done for the project. | See Versioning |
| The project has a RELEASE.md file: a guide to how releases are done for the project. | See Release |
| The project has a SECURITY.md file: a guide to how security vulnerabilities should be reported. | See Security Pull Request |

Definitions

  • Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
  • Kpack is a VMware-led open-source project that utilizes Kubernetes primitives to build OCI images as a platform implementation of Cloud Native Buildpacks.
  • A Kubernetes native application is an application designed to run on Kubernetes platforms, managed by Kubernetes APIs and kubectl tooling and cohesively deployed on Kubernetes as a single object.

Motivation

Why should we do this?

It will benefit the CNB project by adding a tool that supports an out-of-the-box Kubernetes integration, which is part of the CNB roadmap goals.

It will show the community that the project supports multiple implementers of the platform interface specification, increasing confidence in the flexibility of the specifications maintained by the CNB project.

It will help the CNB community (over 550 members in the Slack channel) grow by bringing the kpack community into the CNB space.

CNB is part of the Cloud Native Computing Foundation, an open source, vendor-neutral hub of cloud native computing projects. Including kpack under this umbrella will provide more opportunities for the community:

  • Increased adoption: users looking to use buildpacks on Kubernetes will find a tool supported and maintained by the CNB team.
  • Improved efficiency: ensuring that the roadmaps of the two projects are closely aligned will make it easier to coordinate efforts between both communities.

What use cases does it support?

kpack adds support for operators by providing declarative Kubernetes resources (for example, images, builders, or stacks) that monitor the underlying builder's buildpacks or stacks for security patches and rebuild the OCI image when changes are detected, allowing platforms to roll out new versions of applications when vulnerabilities are fixed.

How does kpack support the goals and use cases of the project?

The CNB project turns application source code into OCI-compliant container images; in order to do that, it defines a platform-to-buildpack contract that guarantees interoperability between different implementers.

The CNB project embraces modern container standards, and Kubernetes has become the industry standard for automating deployment, scaling, and management of containerized applications.

kpack fits well in that direction: it implements the platform interface specification, and because it is a Kubernetes-native application, its community possesses deep knowledge that can provide valuable feedback to the CNB project.

Is there functionality in kpack that is already provided by the project?

pack and kpack offer similar functionality (both tools implement the platform interface specification), but they do so in two non-overlapping contexts: while pack targets developers and local builds, kpack is a Kubernetes-native implementation that manages containerization at scale on day 2.

Is kpack integrated with another service or technology that is widely used?

As mentioned earlier, kpack implements the platform interface specification on Kubernetes, which is nowadays the standard for automating deployment, scaling, and management of containerized applications.

What it is

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. The Kubernetes API can be extended in different ways; one of them is using custom resources, where a custom resource represents a customization of a particular Kubernetes installation.

kpack extends Kubernetes using custom resources and utilizes unprivileged Kubernetes primitives to provide builds of OCI images as a platform implementation of Cloud Native Buildpacks. This means that kpack takes the CNB-defined concepts (image, builder, stack, etc.) and bakes them into the Kubernetes extension model, using custom resources and exposing a declarative API for interacting with them.

The declarative API enforces a separation of responsibilities. Operators declare the configuration for a CNB image or define which buildpacks or stacks must be used, and kpack, using its custom controller, takes care of the heavy lifting, keeping the state of the custom objects in sync with the declared desired state.

How it Works

As mentioned before, kpack uses the custom resource extension point to provide the capabilities of building OCI images as a platform implementation of Cloud Native Buildpacks.

These custom resources have a common definition similar to this:

apiVersion: kpack.io/v1alpha2
kind: [ClusterStack|ClusterStore|Image|Builder|Build]
metadata:
  name: [unique name]

The apiVersion key specifies which version of the Kubernetes API is used to create the object, in this case kpack.io/v1alpha2.

The kind key specifies what kind of object we want to create, for example: ClusterStack, ClusterStore, Image, Builder, or Build.

The metadata key is used to define the data that can uniquely identify the object. A name, common to all the custom resources, identifies the object.

Some of the custom resources implemented by kpack are described in the next sections; for the complete reference, check the kpack documentation site.

ClusterStack

This resource is an abstraction to group a build image and a run image required to build the application source code.

Let's see an example of a ClusterStack definition:

apiVersion: kpack.io/v1alpha2
kind: ClusterStack
metadata:
  name: base
spec:
 id: "io.buildpacks.stacks.bionic"
 buildImage:
   image: "my-buildpack-repo/build:cnb"
 runImage:
   image: "my-buildpack-repo/run:cnb"

The spec key is used to define the desired state of the ClusterStack, and the keys available under spec match the values expected in a CNB stack definition:

  • id: The id of the stack.
  • buildImage.image: The build image of the stack.
  • runImage.image: The run image of the stack.

ClusterStore

Creates a repository of buildpacks packaged as OCI artifacts to be used during a build.

Let's see an example of a ClusterStore definition:

apiVersion: kpack.io/v1alpha2
kind: ClusterStore
metadata:
  name: my-cluster-store
spec:
  sources:
    - image: foo.com/my-buildpack-repo/buildpack-1@sha256:sha123
    - image: foo.com/my-buildpack-repo/buildpack-2@sha256:sha345
    - image: foo.com/my-buildpack-repo/builder:base

The spec key is used to define the desired state of the ClusterStore:

  • sources: List of buildpackage images to make available in the ClusterStore. Each image is an object with the key image.

As a side note, the ClusterStore resource will be deprecated in favor of a new Buildpack resource in the near future, according to the following RFC.

Builder or ClusterBuilder

Creates a CNB builder image that contains all the components necessary to execute a build.

An example of the Builder definition is as follows:

apiVersion: kpack.io/v1alpha2
kind: Builder
metadata:
  name: my-builder
spec:
  tag: foo.com/sample/builder
  stack:
    name: base
    kind: ClusterStack
  store:
    name: my-cluster-store
    kind: ClusterStore
  order:
    - group:
      - id: my-buildpack-repo/buildpack-1
    - group:
      - id: my-buildpack-repo/buildpack-2

It's important to note that a ClusterStack and a ClusterStore are required to create a Builder.

The spec key is used to define the desired state of the Builder; its main keys (tag, stack, store, and order) mirror the values shown in the example above.

The ClusterBuilder resource is almost identical to a Builder, but it is a cluster-scoped resource that can be referenced by an Image in any namespace.
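For illustration, here is a minimal ClusterBuilder sketch that mirrors the Builder example above; the tag and resource names are placeholders, and depending on the kpack version additional fields (such as a service account reference) may also be required:

apiVersion: kpack.io/v1alpha2
kind: ClusterBuilder
metadata:
  name: my-cluster-builder   # cluster scoped, so no namespace is set
spec:
  tag: foo.com/sample/cluster-builder   # placeholder registry location for the builder image
  stack:
    name: base
    kind: ClusterStack
  store:
    name: my-cluster-store
    kind: ClusterStore
  order:
    - group:
        - id: my-buildpack-repo/buildpack-1
    - group:
        - id: my-buildpack-repo/buildpack-2

Because it is cluster scoped, the same ClusterBuilder can be shared by Images across namespaces.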

Build

Custom resource responsible for scheduling and running a single build.

An example of a Build definition is as follows:

apiVersion: kpack.io/v1alpha2
kind: Build
metadata:
  name: sample-build
spec:
  tags:
    - sample/image
  builder:
    image: foo.com/sample/builder
  projectDescriptorPath: path/to/project.toml
  source:
    git:
      url: https://github.com/my-account/sample-app.git
      revision: main

The spec key is used to define the desired state of the Build:

  • tags: A list of tags to build. At least one tag is required.
  • builder.image: The tag of the Cloud Native Buildpacks builder image to use in the build.
  • source: The source location that will be the input to the build.
  • projectDescriptorPath: Path to the project descriptor file, relative to the source root directory or to subPath if set (see the sketch below).
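As a hypothetical illustration of the subPath option mentioned above, and assuming the application lives in an apps/sample-app subdirectory of the sample repository, the source block of a Build could look like this:

  source:
    git:
      url: https://github.com/my-account/sample-app.git
      revision: main
    subPath: apps/sample-app   # hypothetical subdirectory; projectDescriptorPath is resolved relative to it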

Image

Provides a configuration to build and maintain an OCI image utilizing CNB.

An example of an Image definition is as follows:

apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app-image
  namespace: default
spec:
  tag: foo.com/my-app-repo/my-app-image
  builder:
    name: my-builder
    kind: Builder
  source:
    git:
      url: https://github.com/my-account/sample-app.git
      revision: 82cb521d636b282340378d80a6307a08e3d4a4c4

The spec key is used to define the desired state of the Image:

  • tag: The image tag.
  • builder: Configuration of the builder resource that the image builds will use (a Builder or, as in the sketch below, a ClusterBuilder).
  • source: The source code that will be monitored and built into images.
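Since a ClusterBuilder can be referenced by an Image in any namespace, a minimal sketch of the builder block pointing at the hypothetical ClusterBuilder from the earlier example would be:

  builder:
    name: my-cluster-builder   # hypothetical cluster-scoped builder from the ClusterBuilder sketch above
    kind: ClusterBuilder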

Contributors

Contributions to kpack during the 2019-2022 period can be summarized as follows:

pie showData
    title Pull Requests Open or Closed
    "VMWare or Pivotal" : 438
    "Others" : 37

Migration

Repositories

The suggested strategy for migrating kpack's git repositories to CNB is to use GitHub's repository transfer feature.

The following table shows the candidate repositories to be transferred:

| Origin Repo | Description | Owner | Destination Repo | Owner |
| --- | --- | --- | --- | --- |
| https://github.com/pivotal/kpack | kpack source code | Pivotal | https://github.com/buildpacks-community/kpack | CNB Technical Oversight Committee |
| https://github.com/vmware-tanzu/kpack-cli | kpack CLI | VMware | https://github.com/buildpacks-community/kpack-cli | CNB Technical Oversight Committee |
| https://github.com/vmware-tanzu/homebrew-kpack-cli | Homebrew tap for the kpack CLI | VMware | https://github.com/buildpacks-community/homebrew-kpack-cli | CNB Technical Oversight Committee |

For each repository:

  • The owner or an admin user must follow the steps described in the GitHub documentation and transfer the repository to the Cloud Native Buildpacks organization.
  • A member of the TOC team in CNB must accept the donation of the repository. The name of the destination repository will be the one described in the table above.

CI / CD Pipelines

kpack's CI/CD pipelines were rebuilt to use GitHub Actions. In order for kpack to run Windows acceptance tests, it requires a Kubernetes cluster with Windows nodes. The hardware requirements are specified in the following section.

Hardware requirements

The minimal hardware requirements to request from the CNCF to recreate the CI/CD pipelines are:

Kubernetes clusters

Build cluster

  • Linux nodes
    • 1 amd64 node / 2 CPU / 8GB memory / 50GB ephemeral disk storage
  • Windows nodes
    • 1 amd64 node / 4 CPU / 16GB memory / 100GB ephemeral disk storage
  • At least 100 GB of storage in a public OCI registry

Documentation

kpack's documentation is currently hosted in the main code repository; after migrating to CNB, the documentation will be published on the Cloud Native Buildpacks site.

CNB already mentions kpack in its documentation, specifically in the Tools section. The proposal is to:

  • Create a new folder named kpack inside the Tools section of the docs repository
  • Copy kpack's documentation into this newly created folder
  • Update the references and all the required elements to format the documentation according to the CNB site's conventions

Governance

Team roles

Based on the CNB governance policy and the fact that kpack is a platform implementation of Cloud Native Buildpacks, it will be added under the responsibility of the CNB Platform Team.

How do we migrate roles and responsibilities into the CNB governance process?

Currently, the CNB Platform Team already has a team lead assigned and, by definition, each team can have only one team lead. In order to provide the current kpack team with the same accountability for the migrated repositories, the proposal is to follow the guidelines described in the Component Maintainer Role RFC.

The kpack maintainers who will be nominated as component maintainers in CNB are:

| Name | GitHub account | Organization |
| --- | --- | --- |
| Matthew McNew | @matthewmcnew | VMware |
| Tom Kennedy | @tomkennedy513 | VMware |
| Daniel Chen | @chenbh | VMware |
| Juan Bustamante | @jjbustamante | VMware |

Also, these members are willing to become more involved with CNB projects and to become Platform maintainers in the near future.

Outside VMware, the following contributors have expressed their desire to become kpack component maintainers:

| Name | GitHub account | Organization |
| --- | --- | --- |
| Sambhav Kothari | @samj1912 | Bloomberg |
| Aidan Delaney | @AidanDelaney | Bloomberg |

RFC process

Once the migration is completed, kpack will follow the RFC process and RFC template established in the CNB project for any new RFC created in the project.

Existing RFCs
  • Open: Currently there are fewer than 10 open RFCs (some of them opened 2 years ago) in the kpack repository.

    • The proposal is to ask the kpack maintainers to:
      • Triage those RFCs and update their status before the donation.
      • Coordinate the announcement of the donation with the RFC authors and explain to them the post-migration strategy (next section).
    • After the donation, any open RFCs in the kpack repository should be closed.
    • The RFC authors should then create new RFCs in the CNB RFC repository and follow the CNB RFC process.
  • Closed: For historical purposes, we will keep those RFCs in the repository.

Slack channel

The proposals are:

  • Platform maintainers will have to request or create a new Slack channel named buildpacks-kpack, which will be defined as the preferred channel to be used.
  • kpack maintainers should include a notification about the new channel in the announcement of the donation.

Risks

  • So far, the main company behind kpack is VMware; a reduction in investment from VMware would create a problem, and the CNB project would have to either sunset kpack or find investment from the community.
  • It's not clear how to handle the budget required to finance the infrastructure needed to rebuild the CI/CD pipelines on CNCF CNB infrastructure.
  • Any legal requirements from the CNCF that must be fulfilled before accepting the project into the CNB ecosystem still need to be evaluated.

Drawbacks

Why should we not do this?

  • If the CNB team expects to implement a different kind of integration with Kubernetes, then accepting the donation of kpack could conflict with that strategy.
  • kpack is another component to maintain, one that requires additional context and expertise in Kubernetes.

Alternatives

  • What other designs have been considered?

    • VMware could continue to control the project, but that would not help increase adoption because kpack would remain a single-vendor-driven project.
    • VMware could donate kpack to the Continuous Delivery Foundation, but CNB presents a natural home for kpack (it is an implementation of the platform specification).
    • VMware could create a new CNCF project and move all kpack resources to it, but in that case it would need to start as a sandbox project, for example.
  • Why is this proposal the best?

kpack is a mature Kubernetes-native tool that leverages buildpacks and is used in production environments. The project's maintainers and contributors possess valuable technical and user context, derived from developing kpack and integrating feedback from users utilizing CNB concepts when presented as part of Kubernetes resources.

  • What is the impact of not doing this?

The CNB community would have to develop any integration with the cloud native ecosystem from scratch to satisfy the project's goals.

Prior Art

  • Guidelines for accepting component-level contributions RFC #143
  • Component Maintainer Role RFC #234
  • Proposal to move CNCF slack RFC #198

Unresolved Questions

See the Risks section.

Spec. Changes (OPTIONAL)

None