Developing

Getting started

  1. Create a GitHub account
  2. Set up GitHub access via SSH
  3. Create and check out a repo fork
  4. Set up your shell environment
  5. Install requirements
  6. Set up a Kubernetes cluster
  7. Configure kubectl to use your cluster
  8. Set up a Docker repository you can push to

Then you can iterate (including running the controllers with ko).

Check out your fork

The Go tools require that you clone the repository to the src/github.com/knative/build-pipeline directory in your GOPATH.

To check out this repository:

  1. Create your own fork of this repo
  2. Clone it to your machine:

```shell
mkdir -p ${GOPATH}/src/github.com/knative
cd ${GOPATH}/src/github.com/knative
git clone git@github.com:${YOUR_GITHUB_USERNAME}/build-pipeline.git
cd build-pipeline
git remote add upstream git@github.com:knative/build-pipeline.git
git remote set-url --push upstream no_push
```

Adding the upstream remote sets you up nicely for regularly syncing your fork.
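For example, a periodic sync might look like this (a sketch; adjust the branch name if upstream's default branch differs):

```shell
# Fetch the latest changes from knative/build-pipeline
git fetch upstream
# Rebase your local branch onto upstream's (assumes the default branch is master)
git checkout master
git rebase upstream/master
```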

Requirements

You must install these tools (a quick version check is sketched after this list):

  1. go: The language the Pipeline CRD is built in
  2. git: For source control
  3. dep: For managing external Go dependencies. Install dep v0.5.0 or greater.
  4. ko: For development.
  5. kubectl: For interacting with your kube cluster
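Once these are installed, you can confirm they are all on your PATH (a minimal sketch; version output formats vary by release):

```shell
go version
git --version
dep version
kubectl version --client
command -v ko  # older ko releases may not have a version subcommand
```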

Kubernetes cluster

To setup a cluster with GKE:

  1. Install required tools and set up your GCP project. (You may find it useful to save the project ID in an environment variable, e.g. PROJECT_ID.)
  2. Create a GKE cluster for knative

Note that the --scopes argument to gcloud container clusters create controls what GCP resources the cluster's default service account has access to; for example, to give the default service account full access to your GCR registry, you can add storage-full to your --scopes arg.
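For example, a cluster creation command might look like the following (a sketch only; the cluster name and zone are placeholders, and the full set of flags you need may differ):

```shell
# Hypothetical example: pick your own name/zone and adjust scopes as needed
gcloud container clusters create knative-dev \
  --zone=us-central1-a \
  --scopes=cloud-platform,storage-full
```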

Environment Setup

To run your controllers with ko you'll need to set these environment variables (we recommend adding them to your .bashrc):

  1. GOPATH: If you don't have one, simply pick a directory and add export GOPATH=...
  2. $GOPATH/bin on PATH: This is so that tooling installed via go get will work properly.
  3. KO_DOCKER_REPO: The docker repository to which developer images should be pushed (e.g. gcr.io/[gcloud-project]).

.bashrc example:

```shell
export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
```

Make sure to configure authentication for your KO_DOCKER_REPO if required. To be able to push images to gcr.io/<project>, you need to run this once:

```shell
gcloud auth configure-docker
```
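If your KO_DOCKER_REPO points at a registry other than gcr.io, the equivalent step is usually a plain registry login (a sketch, assuming a Docker Hub style registry):

```shell
# Assumption: your registry accepts standard Docker credentials
docker login
```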

The user you are using to interact with your k8s cluster must be a cluster admin to create role bindings:

```shell
# Using gcloud to get your current user
USER=$(gcloud config get-value core/account)
# Make that user a cluster admin
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="${USER}"
```

Iterating

While iterating on the project, you may need to:

  1. Install/Run everything

  2. Verify it's working by looking at the logs

  3. Update your (external) dependencies with: ./hack/update-deps.sh.

    Running dep ensure manually will pull in a bunch of scripts that have been deleted here

  4. Update your type definitions with: ./hack/update-codegen.sh.

  5. Add new CRD types

  6. Add and run tests

To make changes to these CRDs, you will probably interact with the CRD type definitions and the reconciler code.

Install Pipeline

You can stand up a version of this controller on-cluster (deploying to whatever your kubectl config current-context points at), including knative/build (which is wrapped by Task):

```shell
ko apply -f config/
kubectl apply -f ./third_party/config/build/release.yaml
```
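To verify the install, you can check that the controller and webhook pods reach Running (the namespace matches the log commands further below):

```shell
kubectl -n knative-build-pipeline get pods
```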

Redeploy controller

As you make changes to the code, you can redeploy your controller with:

```shell
ko apply -f config/controller.yaml
```
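If you want to confirm the rollout, you can watch the controller pod get replaced (a sketch using the controller's app label):

```shell
kubectl -n knative-build-pipeline get pods -l app=build-pipeline-controller -w
```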

Tear it down

You can clean up everything with:

```shell
ko delete -f config/
kubectl delete -f ./third_party/config/build/release.yaml
```

Accessing logs

To look at the controller logs, run:

```shell
kubectl -n knative-build-pipeline logs $(kubectl -n knative-build-pipeline get pods -l app=build-pipeline-controller -o name)
```

To look at the webhook logs, run:

```shell
kubectl -n knative-build-pipeline logs $(kubectl -n knative-build-pipeline get pods -l app=build-pipeline-webhook -o name)
```
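While iterating, it is often handier to stream logs than to fetch a snapshot; kubectl logs -f does this. For example, for the controller:

```shell
kubectl -n knative-build-pipeline logs -f $(kubectl -n knative-build-pipeline get pods -l app=build-pipeline-controller -o name)
```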

Adding new types

If you need to add a new CRD type, you will need to add:

  1. A YAML definition in config/
  2. The type to the cluster roles in 200-clusterrole.yaml