This document will walk you through deploying your own Prow instance to a new Kubernetes cluster. If you encounter difficulties, please open an issue so that we can make this process easier.
Prow runs in any kubernetes cluster. Our `tackle` utility helps deploy it correctly, or you can perform each of the steps manually.
Both approaches are focused on Kubernetes Engine but should work on any kubernetes distro with no or minimal changes.
Before using `tackle` or deploying prow manually, ensure you have created a
GitHub account for prow to use. Prow will ignore most GitHub events generated
by this account, so it is important that this account be separate from any users or
automation you wish to interact with prow. For example, you still need to do
this even if you're just setting up a prow instance to work against your own
personal repos.
- Ensure the bot user has the following permissions
  - Write access to the repos you plan on handling
  - Owner access (and org membership) for the orgs you plan on handling (note it is possible to handle specific repos in an org without this)
- Create a personal access token for the GitHub bot account, adding the following scopes (more details here)
  - Must have the `public_repo` and `repo:status` scopes
  - Add the `repo` scope if you plan on handling private repos
  - Add the `admin:org_hook` scope if you plan on handling a GitHub org
- Set this token aside for later (we'll assume you wrote it to a file on your workstation at `/path/to/oauth/secret`)
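For example, to write the token to the path assumed above, a minimal sketch (the token value is a placeholder) might look like:

# hypothetical example: save the bot token where later steps expect it
printf '%s' "<your-github-bot-token>" > /path/to/oauth/secret  # printf avoids a trailing newline
chmod 0600 /path/to/oauth/secret                               # keep the token readable only by you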
Prow's `tackle` utility walks you through deploying a new instance of prow in a couple of minutes. Try it out!
You need a few things:
- The `bazel` build tool, installed and working
- The prow `tackle` utility. It is recommended to use it by running `bazel run //prow/cmd/tackle` from the `test-infra` directory; alternatively, you can install it by running `go get -u k8s.io/test-infra/prow/cmd/tackle` (in that case you would also need Go installed and working).
- Optionally, credentials to a Kubernetes cluster (otherwise, `tackle` will help you create one)
To install prow, run the following from the `test-infra` directory and follow the on-screen instructions:
# Ideally use https://bazel.build, alternatively try:
# go get -u k8s.io/test-infra/prow/cmd/tackle && tackle
bazel run //prow/cmd/tackle
This will help you through the following steps:
- Choosing a kubectl context (and creating a cluster / getting its credentials if necessary)
- Deploying prow into that cluster
- Configuring GitHub to send prow webhooks for your repos. This is where you'll provide the absolute `/path/to/oauth/secret`
See the Next Steps section after running this utility.
If you do not want to use the `tackle` utility above, here is the manual set of commands that `tackle` would run for you.
Prow runs in a kubernetes cluster, so first figure out which cluster you want to deploy prow into. If you already have a cluster, skip to the next step.
You can use the GCP cloud console to set up a project and create a new Kubernetes Engine cluster.
I'm assuming that the `PROJECT` and `ZONE` environment variables are set.
export PROJECT=your-project
export ZONE=us-west1-a
Run the following to create the cluster. This will also set up `kubectl` to point to the new cluster.
gcloud container --project "${PROJECT}" clusters create prow \
--zone "${ZONE}" --machine-type n1-standard-4 --num-nodes 2
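To confirm that `kubectl` now points at the new cluster, you can run something like the following sketch (the exact context name depends on your project and zone):

kubectl config current-context   # should show a context like gke_<project>_<zone>_prow
kubectl get nodes                # should list the 2 nodes created above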
As of 1.8, Kubernetes uses Role-Based Access Control (“RBAC”) to drive authorization decisions, allowing `cluster-admin` to dynamically configure policies.
To create cluster resources you need to grant a user the `cluster-admin` role in all namespaces for the cluster.
For Prow on GCP, you can use the following command.
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)
For Prow on other platforms, the following command will likely work.
kubectl create clusterrolebinding cluster-admin-binding-"${USER}" \
--clusterrole=cluster-admin --user="${USER}"
On some platforms the `USER` variable may not map correctly to the user in-cluster. If you see an error of the following form, this is likely the case.
Error from server (Forbidden): error when creating
"prow/cluster/starter.yaml": roles.rbac.authorization.k8s.io "<account>" is
forbidden: attempt to grant extra privileges:
[PolicyRule{Resources:["pods/log"], APIGroups:[""], Verbs:["get"]}
PolicyRule{Resources:["prowjobs"], APIGroups:["prow.k8s.io"], Verbs:["get"]}
PolicyRule{Resources:["prowjobs"], APIGroups:["prow.k8s.io"], Verbs:["list"]}] user=&{<CLUSTER_USER>
[system:authenticated] map[]}...
Run the previous command, substituting `USER` with `CLUSTER_USER` from the error message above, to solve this issue.
kubectl create clusterrolebinding cluster-admin-binding-"<CLUSTER_USER>" \
--clusterrole=cluster-admin --user="<CLUSTER_USER>"
There are relevant docs on Kubernetes Authentication that may help if neither of the above works.
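As a quick sanity check that the binding took effect, you can ask the API server whether your user now has unrestricted access (a sketch using the standard `kubectl auth can-i` subcommand):

kubectl auth can-i '*' '*' --all-namespaces   # should print "yes" for a cluster-admin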
You will need two secrets to talk to GitHub. The `hmac-token` is the token that you give to GitHub for validating webhooks. Generate it using any reasonable randomness generator, e.g. `openssl rand -hex 20`.
# openssl rand -hex 20 > /path/to/hook/secret
kubectl create secret generic hmac-token --from-file=hmac=/path/to/hook/secret
The `oauth-token` is the OAuth2 token you created above for the GitHub bot account.
# https://github.com/settings/tokens
kubectl create secret generic oauth-token --from-file=oauth=/path/to/oauth/secret
Run the following command to deploy a basic set of prow components.
kubectl apply -f prow/cluster/starter.yaml
After a moment, the cluster components will be running.
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deck         2         2         2            2           1m
hook         2         2         2            2           1m
horologium   1         1         1            1           1m
plank        1         1         1            1           1m
sinker       1         1         1            1           1m
tide         1         1         1            1           1m
Find out your external address. It might take a couple minutes for the IP to show up.
$ kubectl get ingress ing
NAME      HOSTS     ADDRESS          PORTS     AGE
ing       *         an.ip.addr.ess   80        3m
Go to that address in a web browser and verify that the "echo-test" job has a green check-mark next to it. At this point you have a prow cluster that is ready to start receiving GitHub events!
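If you prefer the command line, a rough equivalent check is to confirm deck answers over HTTP at the ingress address (a sketch; it assumes deck serves its dashboard at the root path):

curl -s -o /dev/null -w '%{http_code}\n' http://an.ip.addr.ess/   # expect 200 once deck is serving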
Configure GitHub to send your prow instance `application/json` webhooks for specific repos and/or whole orgs.
You can do this with the `add-hook` utility:
# Note /path/to/hook/secret and /path/to/oauth/secret from earlier secrets step
# Note the an.ip.addr.ess from the previous ingress step
# Ideally use https://bazel.build, alternatively try:
# go get -u k8s.io/test-infra/experiment/add-hook && add-hook
bazel run //experiment/add-hook -- \
--hmac-path=/path/to/hook/secret \
--github-token-path=/path/to/oauth/secret \
--hook-url http://an.ip.addr.ess/hook \
--repo my-org/my-repo \
--repo my-whole-org \
--confirm=false # Remove =false to actually add hook
Now go to your org or repo and click Settings -> Webhooks.
Look for the `http://an.ip.addr.ess/hook` you added above.
A green check mark (for a ping event, if you click edit and view the details of the event) suggests everything is working!
You can click Add webhook on the Webhooks page to add the hook manually if you do not want to use the `add-hook` utility.
You now have a working Prow cluster (Woohoo!), but it isn't doing anything interesting yet. This section will help you configure your first plugin and job, and complete any additional setup that your instance may need.
Create a file called `plugins.yaml` and add the following to it:
plugins:
  YOUR_ORG/YOUR_REPO:
  - size
Replace `YOUR_ORG/YOUR_REPO:` with the appropriate values. If you want, you can instead just say `YOUR_ORG:` and the plugin will run for every repo in the org.
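For instance, an org-wide version of the same file might look like this (`YOUR_ORG` is a placeholder):

plugins:
  YOUR_ORG:
  - size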
Next, create an empty file called `config.yaml`:
touch config.yaml
Run the following to test the files, replacing the paths as necessary:
bazel run //prow/cmd/checkconfig -- --plugin-config=path/to/plugins.yaml --config-path=path/to/config.yaml
There should be no errors. You can run this as a part of your presubmit testing so that any errors are caught before you try to update.
Now run the following to update the configmap, replacing the path as necessary:
kubectl create configmap plugins \
--from-file=plugins.yaml=path/to/plugins.yaml --dry-run -o yaml \
| kubectl replace configmap plugins -f -
We added a make rule to do this for us:
get-cluster-credentials:
	gcloud container clusters get-credentials "$(CLUSTER)" --project="$(PROJECT)" --zone="$(ZONE)"

update-plugins: get-cluster-credentials
	kubectl create configmap plugins --from-file=plugins.yaml=plugins.yaml --dry-run -o yaml | kubectl replace configmap plugins -f -
Now when you open a PR, it will automatically be labelled with a `size/*` label. When you make a change to the plugin config and push it with `make update-plugins`, you do not need to redeploy any of your cluster components. They will pick up the change within a few minutes.
Add the following to `config.yaml`:
prowjob_namespace: default
pod_namespace: test-pods
By doing so, we keep prowjobs in the `default` namespace and test pods in the `test-pods` namespace.
You can also choose other names. Remember to update the RBAC roles and rolebindings afterwards.
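If the namespace you pick for `pod_namespace` does not already exist in the cluster, you would create it with something like the following sketch (the starter deployment may already create `test-pods` for you):

kubectl create namespace test-pods   # skip if starter.yaml already created it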
Note: If you set or update the `prowjob_namespace` or `pod_namespace` fields after deploying the prow components, you will need to redeploy them so that they pick up the change.
When configuring Prow jobs to use the Pod utilities with `decorate: true`, job metadata, logs, and artifacts will be uploaded to a GCS bucket in order to persist results from tests and allow the job overview page to load those results at a later point. In order to run these jobs, you need to set up a GCS bucket for job outputs. If your Prow deployment is targeted at an open source community, it is strongly suggested to make this bucket world-readable.
To configure the bucket, follow these steps:
- provision a new service account for interaction with the bucket
- create the bucket
- (optionally) expose the bucket contents to the world
- grant access to admin the bucket for the service account
- serialize a key for the service account
- upload the key to a `Secret` under the `service-account.json` key
- edit the `plank` configuration for `default_decoration_config.gcs_credentials_secret` to point to the `Secret` above
After downloading the `gcloud` tool and authenticating, the following script will execute the above steps for you:
gcloud iam service-accounts create prow-gcs-publisher # step 1
identifier="$( gcloud iam service-accounts list --filter 'name:prow-gcs-publisher' --format 'value(email)' )"
gsutil mb gs://prow-artifacts/ # step 2
gsutil iam ch allUsers:objectViewer gs://prow-artifacts # step 3
gsutil iam ch "serviceAccount:${identifier}:objectAdmin" gs://prow-artifacts # step 4
gcloud iam service-accounts keys create --iam-account "${identifier}" service-account.json # step 5
kubectl -n test-pods create secret generic gcs-credentials --from-file=service-account.json # step 6
Before we can update plank's `default_decoration_config`, we'll need to retrieve the version of plank using the following:
$ kubectl get pod -lapp=plank -o jsonpath='{.items[0].spec.containers[0].image}' | cut -d: -f2
v20191108-08fbf64ac
Then, we can use that tag to retrieve the corresponding utility images in `default_decoration_config` in `config.yaml`:
For more information on how the pod utility images for prow are versioned, see autobump.
plank:
  default_decoration_config:
    utility_images: # using the tag we identified above
      clonerefs: "gcr.io/k8s-prow/clonerefs:v20191108-08fbf64ac"
      initupload: "gcr.io/k8s-prow/initupload:v20191108-08fbf64ac"
      entrypoint: "gcr.io/k8s-prow/entrypoint:v20191108-08fbf64ac"
      sidecar: "gcr.io/k8s-prow/sidecar:v20191108-08fbf64ac"
    gcs_configuration:
      bucket: prow-artifacts # the bucket we just made
      path_strategy: explicit
    gcs_credentials_secret: gcs-credentials # the secret we just made
Add the following to `config.yaml`:
periodics:
- interval: 10m
  name: echo-test
  decorate: true
  spec:
    containers:
    - image: alpine
      command: ["/bin/date"]
postsubmits:
  YOUR_ORG/YOUR_REPO:
  - name: test-postsubmit
    decorate: true
    spec:
      containers:
      - image: alpine
        command: ["/bin/printenv"]
presubmits:
  YOUR_ORG/YOUR_REPO:
  - name: test-presubmit
    decorate: true
    always_run: true
    skip_report: true
    spec:
      containers:
      - image: alpine
        command: ["/bin/printenv"]
Again, run the following to test the files, replacing the paths as necessary:
bazel run //prow/cmd/checkconfig -- --plugin-config=path/to/plugins.yaml --config-path=path/to/config.yaml
Now run the following to update the configmap.
kubectl create configmap config \
--from-file=config.yaml=path/to/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
We use a make rule:
update-config: get-cluster-credentials
	kubectl create configmap config --from-file=config.yaml=config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
Presubmits and postsubmits are triggered by the `trigger` plugin. Be sure to enable that plugin by adding it to the list you created in the last section.
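For example, the `plugins.yaml` from earlier would grow to something like this:

plugins:
  YOUR_ORG/YOUR_REPO:
  - size
  - trigger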
Now when you open a PR it will automatically run the presubmit that you added to this file. You can see it on your prow dashboard. Once you are happy that it is stable, switch `skip_report` to `false`. Then, it will post a status on the PR. When you make a change to the config and push it with `make update-config`, you do not need to redeploy any of your cluster components. They will pick up the change within a few minutes.
When you push or merge a new change to the git repo, the postsubmit job will run.
For more information on the job environment, see jobs.md
You may choose to run test pods in a separate cluster entirely. This is a good practice to keep testing isolated from Prow's service components and secrets. It can also be used to furcate job execution to different clusters.
One can use a Kubernetes `kubeconfig` file (i.e. a `Config` object) to instruct Prow components to use the build cluster(s).
All contexts in the `kubeconfig` are used as build clusters, and the `InClusterConfig` (or `current-context`) is the default.
Create a secret containing a `kubeconfig` like this:
apiVersion: v1
clusters:
- name: default
  cluster:
    certificate-authority-data: fake-ca-data-default
    server: https://1.2.3.4
- name: other
  cluster:
    certificate-authority-data: fake-ca-data-other
    server: https://5.6.7.8
contexts:
- name: default
  context:
    cluster: default
    user: default
- name: other
  context:
    cluster: other
    user: other
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    token: fake-token-default
- name: other
  user:
    token: fake-token-other
Use gencred to create the `kubeconfig` file (and credentials) for accessing the cluster(s):
NOTE: `gencred` will merge new entries into the specified `output` file on successive invocations by default.
Create a default cluster context (if one does not already exist):
NOTE: If executing `gencred` with `bazel` like below, ensure `--output` is an absolute path.
bazel run //gencred -- \
--context=<kube-context> \
--name=default \
--output=/tmp/kubeconfig.yaml \
--serviceaccount
Create one or more build cluster contexts:
NOTE: The `current-context` of the existing `kubeconfig` will be preserved.
bazel run //gencred -- \
--context=<kube-context> \
--name=other \
--output=/tmp/kubeconfig.yaml \
--serviceaccount
Create a secret containing the `kubeconfig.yaml` in the cluster:
kubectl --context=<kube-context> create secret generic kubeconfig --from-file=config=/tmp/kubeconfig.yaml
Mount this secret into the prow components that need it (at minimum: `plank`, `sinker`, and `deck`) and set the `--kubeconfig` flag to the location you mount it at. For instance, you will need to merge the following into the plank deployment:
spec:
  containers:
  - name: plank
    args:
    - --kubeconfig=/etc/kubeconfig/config # basename matches --from-file key
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubeconfig
      readOnly: true
  volumes:
  - name: kubeconfig
    secret:
      defaultMode: 0644
      secretName: kubeconfig # example above contains a `config` key
Configure jobs to use the non-default cluster with the `cluster:` field.
The above example `kubeconfig.yaml` defines two clusters, `default` and `other`, to schedule jobs, which we can use as follows:
periodics:
- name: cluster-unspecified
  # cluster:
  interval: 10m
  decorate: true
  spec:
    containers:
    - image: alpine
      command: ["/bin/date"]
- name: cluster-default
  cluster: default
  interval: 10m
  decorate: true
  spec:
    containers:
    - image: alpine
      command: ["/bin/date"]
- name: cluster-other
  cluster: other
  interval: 10m
  decorate: true
  spec:
    containers:
    - image: alpine
      command: ["/bin/date"]
This results in:
- The `cluster-unspecified` and `cluster-default` jobs run in the `default` cluster.
- The `cluster-other` job runs in the `other` cluster.
See gencred for more details about how to create/update `kubeconfig.yaml`.
PRs satisfying a set of predefined criteria can be configured to be automatically merged by Tide.
Tide can be enabled by modifying `config.yaml`.
See how to configure tide for more details.
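As a rough illustration only (not a complete Tide setup), a minimal query in `config.yaml` could look like the sketch below; treat the tide configuration docs as authoritative for the available fields, and note that `YOUR_ORG/YOUR_REPO` and the label names are placeholders:

tide:
  queries:
  - repos:
    - YOUR_ORG/YOUR_REPO
    labels:
    - lgtm
    - approved
  merge_method:
    YOUR_ORG/YOUR_REPO: squash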
GitHub OAuth is required for PR Status and for the rerun button on Prow Status.
To enable these features, follow the instructions in github_oauth_setup.md.
Use cert-manager for automatic Let's Encrypt integration. If you already have a cert, then follow the official docs to set up HTTPS termination. Promote your ingress IP to a static IP. On GKE, run:
gcloud compute addresses create [ADDRESS_NAME] --addresses [IP_ADDRESS] --region [REGION]
Point the DNS record for your domain at that ingress IP. The convention for naming is `prow.org.io`, but of course that's not a requirement.
Then, install cert-manager as described in its readme. You don't need to run it in a separate namespace.