- Create a GitHub account
- Set up GitHub access via SSH
- Create and checkout a repo fork
- Set up your shell environment
- Install requirements
- Set up a kubernetes cluster
- Configure kubectl to use your cluster
- Set up a docker repository you can push to
Then you can iterate (including running the controllers with `ko`).
The Go tools require that you clone the repository to the `src/github.com/knative/build-pipeline` directory in your `GOPATH`.
To check out this repository:
- Create your own fork of this repo
- Clone it to your machine:
mkdir -p ${GOPATH}/src/github.com/knative
cd ${GOPATH}/src/github.com/knative
git clone git@github.com:${YOUR_GITHUB_USERNAME}/build-pipeline.git
cd build-pipeline
git remote add upstream git@github.com:knative/build-pipeline.git
git remote set-url --push upstream no_push
Adding the `upstream` remote sets you up nicely for regularly syncing your fork.
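With those remotes in place, a typical sync might look like the following sketch (the `master` branch name is an assumption; use whatever default branch your fork tracks):

```shell
# Fetch the latest upstream changes, rebase your local branch on top of them,
# then push the result to your fork (origin).
git checkout master
git fetch upstream
git rebase upstream/master
git push origin master
```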
You must install these tools:
- `go`: The language the Pipeline CRD is built in
- `git`: For source control
- `dep`: For managing external Go dependencies. Please install dep v0.5.0 or greater.
- `ko`: For development.
- `kubectl`: For interacting with your kube cluster
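If you want to confirm everything is installed and on your `PATH`, a quick sanity check might look like this (version output will vary):

```shell
# Each of these should print a version (or a path) rather than "command not found".
go version
git version
dep version
command -v ko        # checks that the ko binary is on your PATH
kubectl version --client
```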
To set up a cluster with GKE:
- Install the required tools and set up your GCP project. You may find it useful to save the ID of the project in an environment variable (e.g. `PROJECT_ID`).
- Create a GKE cluster for knative.
Note that the `--scopes` argument to `gcloud container clusters create` controls what GCP resources the cluster's default service account has access to; for example, to give the default service account full access to your GCR registry, you can add `storage-full` to your `--scopes` arg.
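As a rough sketch, cluster creation might look like the following; the cluster name, zone, and exact scopes are placeholders rather than project requirements:

```shell
# Example only: adjust the name, zone, and scopes for your own project.
gcloud container clusters create knative-dev \
  --zone=us-central1-a \
  --scopes=cloud-platform,storage-full
```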
To run your controllers with `ko` you'll need to set these environment variables (we recommend adding them to your `.bashrc`):
- `GOPATH`: If you don't have one, simply pick a directory and add `export GOPATH=...`
- `$GOPATH/bin` on `PATH`: This is so that tooling installed via `go get` will work properly.
- `KO_DOCKER_REPO`: The docker repository to which developer images should be pushed (e.g. `gcr.io/[gcloud-project]`).
`.bashrc` example:
export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
Make sure to configure authentication for your `KO_DOCKER_REPO` if required. To be able to push images to `gcr.io/<project>`, you need to run this once:
gcloud auth configure-docker
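If you want to verify that pushes to your registry work before involving `ko`, one option is to push a small throwaway image (the image name here is only an example):

```shell
# Pull a tiny public image, retag it into your repository, and push it.
docker pull hello-world
docker tag hello-world ${KO_DOCKER_REPO}/hello-world
docker push ${KO_DOCKER_REPO}/hello-world
```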
The user you are using to interact with your k8s cluster must be a cluster admin to create role bindings:
# Using gcloud to get your current user
USER=$(gcloud config get-value core/account)
# Make that user a cluster admin
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user="${USER}"
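To confirm the binding took effect, a check along these lines should work (a sketch, not a required step):

```shell
# The binding should exist, and you should now be allowed to create cluster-scoped resources.
kubectl get clusterrolebinding cluster-admin-binding
kubectl auth can-i create clusterrolebindings
```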
While iterating on the project, you may need to:
- Verify it's working by looking at the logs
- Update your (external) dependencies with: `./hack/update-deps.sh`. (Running `dep ensure` manually will pull in a number of scripts that this script removes.)
- Update your type definitions with: `./hack/update-codegen.sh`.
To make changes to these CRDs, you will probably interact with:
- The CRD type definitions in `./pkg/apis/pipeline/alpha1`
- The controllers in `./pkg/controller`
- The clients in `./pkg/client` (these are generated by `./hack/update-codegen.sh`)
You can stand up a version of this controller on-cluster (to your `kubectl config current-context`), including `knative/build` (which is wrapped by `Task`):
ko apply -f config/
kubectl apply -f ./third_party/config/build/release.yaml
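After applying, you can check that the controller and webhook come up; the namespace below matches the one used for the log commands later in this doc:

```shell
# Wait for the controller and webhook pods to reach Running status.
kubectl get pods -n knative-build-pipeline
# The knative/build components land in their own namespace; you can see everything with:
kubectl get pods --all-namespaces | grep knative
```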
As you make changes to the code, you can redeploy your controller with:
ko apply -f config/controller.yaml
You can clean up everything with:
ko delete -f config/
kubectl delete -f ./third_party/config/build/release.yaml
To look at the controller logs, run:
kubectl -n knative-build-pipeline logs $(kubectl -n knative-build-pipeline get pods -l app=build-pipeline-controller -o name)
To look at the webhook logs, run:
kubectl -n knative-build-pipeline logs $(kubectl -n knative-build-pipeline get pods -l app=build-pipeline-webhook -o name)
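To follow either log stream continuously while you iterate, you can add `-f`, e.g.:

```shell
kubectl -n knative-build-pipeline logs -f $(kubectl -n knative-build-pipeline get pods -l app=build-pipeline-controller -o name)
```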
If you need to add a new CRD type, you will need to add:
- A yaml definition in `config/`
- The new type to the cluster roles in `200-clusterrole.yaml`