
Tekton Triggers Development Guide

Getting started

  1. Ramp up on kubernetes and CRDs
  2. Ramp up on Tekton Pipelines
  3. Create a GitHub account
  4. Setup GitHub access via SSH
  5. Create and checkout a repo fork
  6. Set up your shell environment
  7. Install requirements
  8. Set up a Kubernetes cluster
  9. Configure kubectl to use your cluster
  10. Set up a docker repository you can push to
  11. Install Tekton Pipelines
  12. Install Tekton Triggers
  13. Iterate!

Ramp up

Welcome to the project!! You may find these resources helpful to ramp up on some of the technology this project is built on.

Ramp up on CRDs

This project extends Kubernetes (aka k8s) with Custom Resource Definitions (CRDs). To find out more:

Ramp up on Tekton Pipelines

Checkout your fork

The Go tools require that you clone the repository to the src/github.com/tektoncd/triggers directory in your GOPATH.

To check out this repository:

  1. Create your own fork of this repo
  2. Clone it to your machine:
mkdir -p ${GOPATH}/src/github.com/tektoncd
cd ${GOPATH}/src/github.com/tektoncd
git clone git@github.com:${YOUR_GITHUB_USERNAME}/triggers.git
cd triggers
git remote add upstream git@github.com:tektoncd/triggers.git
git remote set-url --push upstream no_push

Adding the upstream remote sets you up nicely for regularly syncing your fork.
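With the upstream remote in place, a typical sync before starting new work looks like this (a minimal sketch, assuming the default branch is main):

# update your fork's main branch from upstream
git fetch upstream
git checkout main
git rebase upstream/main
git push origin main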

Requirements

You must install these tools:

  1. go: The language Tekton Pipelines is built in
  2. git: For source control
  3. ko: For development. ko version v0.1 or higher is required for triggers to work correctly.
  4. kubectl: For interacting with your kube cluster

Your $GOPATH setting is critical for ko apply to function properly: a successful run will typically involve building and pushing images, not just configuring Kubernetes resources.
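A quick way to confirm the required tools are installed and on your PATH (a minimal sketch; exact version output will vary):

go version
git --version
ko version
kubectl version --client
go env GOPATH   # should print the GOPATH you cloned the repo into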

Kubernetes cluster

Docker for Desktop using an edge version has been proven to work for both developing and running Pipelines. The recommended configuration is:

  • Kubernetes version 1.21 or later
  • 4 vCPU nodes (n1-standard-4)
  • Node autoscaling, up to 3 nodes
  • API scopes for cloud-platform

To set up a cluster with GKE:

  1. Install required tools and set up a GCP project. You may find it useful to save the ID of the project in an environment variable (e.g. PROJECT_ID).

  2. Create a GKE cluster (with --cluster-version=latest but you can use any version 1.21 or later):

    export PROJECT_ID=my-gcp-project
    export CLUSTER_NAME=mycoolcluster
    
    gcloud container clusters create $CLUSTER_NAME \
     --enable-autoscaling \
     --min-nodes=1 \
     --max-nodes=3 \
     --scopes=cloud-platform \
     --enable-basic-auth \
     --no-issue-client-certificate \
     --project=$PROJECT_ID \
     --region=us-central1 \
     --machine-type=n1-standard-4 \
     --image-type=cos \
     --num-nodes=1 \
     --cluster-version=latest

    Note that the --scopes argument to gcloud container clusters create controls what GCP resources the cluster's default service account has access to; for example, to give the default service account full access to your GCR registry, you can add storage-full to your --scopes arg.

  3. Grant cluster-admin permissions to the current user:

    kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)

Environment Setup

To run your controllers with ko you'll need to set these environment variables (we recommend adding them to your .bashrc):

  1. GOPATH: If you don't have one, simply pick a directory and add export GOPATH=...
  2. $GOPATH/bin on PATH: This is so that tooling installed via go get will work properly.
  3. KO_DOCKER_REPO: The docker repository to which developer images should be pushed (e.g. gcr.io/[gcloud-project]). You can also run a local registry and set KO_DOCKER_REPO to reference the registry (e.g. at localhost:5000/myimages); see the local-registry sketch after the .bashrc example below.

.bashrc example:

export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
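If you would rather push to a local registry instead of gcr.io, a minimal sketch (assuming Docker is installed locally and port 5000 is free):

# start a throwaway local registry
docker run -d -p 5000:5000 --name registry registry:2
# point ko at the local registry
export KO_DOCKER_REPO='localhost:5000/myimages'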

Make sure to configure authentication for your KO_DOCKER_REPO if required. To be able to push images to gcr.io/<project>, you need to run this once:

gcloud auth configure-docker

The user you are using to interact with your k8s cluster must be a cluster admin to create role bindings:

# Using gcloud to get your current user
USER=$(gcloud config get-value core/account)
# Make that user a cluster admin
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="${USER}"

Install Pipelines

To install Tekton Pipelines you can either install an official release or build and install it from a Pipelines checkout; one common option is shown below.
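For example, applying the latest official release (a sketch; check the Tekton Pipelines installation docs for the current URL and a version compatible with your cluster):

kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml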

Iterating

While iterating on the project, you may need to do the following (a combined sketch of a typical loop follows this list):

  1. Install/Run Pipelines

  2. Install/Run Triggers

  3. Verify it's working by looking at the logs

  4. Update your (external) dependencies with: ./hack/update-deps.sh

    Running dep ensure manually will pull in a bunch of scripts that have been deleted here

  5. Update your type definitions with: ./hack/update-codegen.sh

  6. Update the documentation with: ./hack/update-docs.sh

  7. Add new CRD types

  8. Add and run tests
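Putting a few of those steps together, a typical edit-build-test loop might look like this (a sketch, assuming KO_DOCKER_REPO is already configured):

./hack/update-codegen.sh   # regenerate generated code after changing types
go test ./pkg/...          # run unit tests locally
ko apply -f config/        # rebuild images and redeploy the controllers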

To make changes to these CRDs, you will probably interact with

Install Triggers

You can stand up a version of this controller on-cluster (to your kubectl config current-context):

ko apply -f config/
ko apply -f config/interceptors
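After applying, you can sanity-check that the deployments came up; the labels below are the same ones used by the log commands later in this doc:

kubectl -n tekton-pipelines get pods -l app=tekton-triggers-controller
kubectl -n tekton-pipelines get pods -l app=tekton-triggers-webhook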

Redeploy controller

As you make changes to the code, you can redeploy your controller with:

ko apply -f config/controller.yaml

Tear it down

You can clean up everything with:

ko delete -f config/

Accessing logs

To look at the controller logs, run:

kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-triggers-controller -o name)

To look at the webhook logs, run:

kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-triggers-webhook -o name)
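To stream logs while you iterate, the same commands work with kubectl's -f (follow) flag:

kubectl -n tekton-pipelines logs -f $(kubectl -n tekton-pipelines get pods -l app=tekton-triggers-controller -o name)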

Adding new types

If you need to add a new CRD type, you will need to add:

  1. A yaml definition in config/
  2. Add the type to the cluster roles in:

Adding feature gated API fields

We've introduced a feature-flag called enable-api-fields to the config-feature-flags.yaml file deployed as part of our releases.

This field can be configured to be either alpha or stable. This field is documented as part of our install docs.
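For example, to flip the gate to alpha on a development cluster you can patch the ConfigMap directly (a sketch; the ConfigMap name and namespace here are assumptions taken from config/config-feature-flags.yaml, so verify them in your checkout):

# assumes the feature-flags ConfigMap is named feature-flags-triggers in the tekton-pipelines namespace
kubectl patch configmap feature-flags-triggers -n tekton-pipelines \
  --type merge -p '{"data":{"enable-api-fields":"alpha"}}'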

For developers adding new features to Triggers' CRDs we've got a couple of helpful tools to make gating those features simpler and to provide a consistent testing experience.

Guarding Features with Feature Gates

Writing new features is made trickier when you need to support both the existing stable behaviour as well as your new alpha behaviour.

In the reconciler or sink code, you can guard your new features with an if statement such as the following:

alphaAPIEnabled := config.FromContextOrDefaults(ctx).FeatureFlags.EnableAPIFields == "alpha"
if alphaAPIEnabled {
  // new feature code goes here
} else {
  // existing stable code goes here
}

Notice that you'll need a context object to be passed into your function for this to work. When writing new features keep in mind that you might need to include this in your new function signatures.

Guarding Validations with Feature Gates

Just because your application code might be correctly observing the feature gate flag doesn't mean you're done yet! When a user submits a Tekton resource it's validated by a validation webhook. Here too you'll need to ensure your new features aren't accidentally accepted when the feature gate suggests they shouldn't be. We've got a helper function, ValidateEnabledAPIFields, to make validating the current feature gate easier. Use it like this:

requiredVersion := config.AlphaAPIFields
// errs is an instance of *apis.FieldError, a common type in our validation code
errs = errs.Also(ValidateEnabledAPIFields(ctx, "your feature name", requiredVersion))

If the user's cluster isn't configured with the required feature gate it'll return an error like this:

<your feature> requires "enable-api-fields" feature gate to be "alpha" but it is "stable"

Unit Testing with Feature Gates

Any new code you write that uses the ctx context variable is trivially unit tested with different feature gate settings. You should make sure to unit test your code both with and without a feature gate enabled to make sure it's properly guarded. See the following for an example of a unit test that sets the feature gate to test behaviour:

ctx, err := test.FeatureFlagsToContext(context.Background(), map[string]string{
	"enable-api-fields": "alpha",
})
if err != nil {
	t.Fatalf("unexpected error initializing feature flags: %v", err)
}

if err := ts.TestThing(ctx); err != nil {
	t.Errorf("unexpected error with alpha feature gate enabled: %v", err)
}

Integration Tests

For integration tests we provide the requireGate function which should be passed to the setup function used by tests:

c, namespace := setup(ctx, t, requireGate("enable-api-fields", "alpha"))

This will skip your integration test if the feature gate is not set to alpha, with a clear message explaining why it was skipped.

Note: As with running example YAMLs, you have to manually set the enable-api-fields flag to alpha in your test cluster to see your alpha integration tests run. When the flag in your cluster is alpha, all integration tests are executed, both stable and alpha. Setting the feature flag to stable will exclude alpha tests.