Add build and release for csi proxy to gcs bucket #53
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jingxu97. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Add a prow job to publish csi-proxy binary (csi-proxy PR to add cloudbuild.yaml: kubernetes-csi/csi-proxy#53)
This PR is ready for review. PTAL. Thanks!
Force-pushed from c22ffe8 to 6039148.
cc @verult
build/cloudbuild.yaml (outdated)
@@ -0,0 +1,17 @@
# Build the module.
steps:
- name: gcr.io/cloud-builders/go:debian
Do we want to use the k8s build images that have specific versions of golang, like in https://github.com/kubernetes-csi/csi-release-tools/pull/87/files#diff-e5fc65747fb9417968a3295dc1ac5111R26?
I think go:debian gets the latest version of Go (GoogleCloudPlatform/cloud-builders#132). Based on that PR's comments, it seems it also does not use a specific version of Go. @pohly, could you help with this?
Please have a look at kubernetes-csi/csi-release-tools#86 (comment) and the PRs linked from there.
In a nutshell, the approach there is to install the right version of the Go toolchain first and then use the normal build rules for Windows binaries.
The relevant code is https://github.com/pohly/csi-release-tools/blob/bd416901d4b7cad7bc70a39f508b68e2a9fff1f6/cloudbuild.sh, which sources release-tools/prow.sh and then invokes image building with run_with_go "${CSI_PROW_GO_VERSION_BUILD}" make push-multiarch.
You could do something similar and use run_with_go "${CSI_PROW_GO_VERSION_BUILD}" make build to produce the csi-proxy.exe.
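For illustration, a csi-proxy cloudbuild.sh along those lines might look roughly like the following. This is only a sketch: it assumes csi-release-tools is vendored under release-tools/ so that prow.sh provides run_with_go and CSI_PROW_GO_VERSION_BUILD, and that cloudbuild.yaml points its build step at this script.

#! /bin/bash
set -ex

# prow.sh defines run_with_go, which installs the pinned Go version
# before running the given command.
. release-tools/prow.sh

# Build csi-proxy.exe with the repo's normal make target, using the pinned toolchain.
run_with_go "${CSI_PROW_GO_VERSION_BUILD}" make build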
Got it, thanks! I updated it.
build/cloudbuild.yaml (outdated)
set -ex
# Split ${_GIT_TAG} on "-"; everything after the first "-" becomes the REV passed to make.
IFS=- read str1 rev <<< "${_GIT_TAG}"
make build REV=${rev}
# Append the full git tag to the published binary's name.
cp bin/csi-proxy.exe bin/csi-proxy${_GIT_TAG}.exe
We may want to consider enhancing this later with the logic in https://github.com/kubernetes-csi/csi-release-tools/pull/87/files#diff-841c5d7c9fb7c0ba0fb2f452298ed526R147 to support "canary" images.
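One possible shape for that kind of logic, as a sketch only (not the exact csi-release-tools code): a build from master publishes a "canary" name, release branches get a branch-specific canary name, and tags are published as-is.

# Sketch: pick the publish suffix from the ref being built.
if [ "${_PULL_BASE_REF}" = "master" ]; then
  suffix="canary"
elif [[ "${_PULL_BASE_REF}" == release-* ]]; then
  suffix="${_PULL_BASE_REF#release-}-canary"   # e.g. release-0.2 becomes 0.2-canary
else
  suffix="${_PULL_BASE_REF}"                   # a tag such as v0.1.0
fi
cp bin/csi-proxy.exe "bin/csi-proxy-${suffix}.exe"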
got it.
build/cloudbuild.yaml (outdated)
_GIT_TAG: '12345'
artifacts:
  objects:
    location: 'gs://k8s-artifacts-csi'
This is the "prod" bucket which I believe we should only push to by promoting from staging. Is there a staging bucket we can use instead?
Earlier, I confirmed with bartsmykla@ that there is no such process yet.
Correct, there's no GCS promotion process yet. I talk about it a little here: kubernetes/release#964 (comment)
I wrote gh2gcs very recently, which supports uploading GitHub releases to a GCS bucket.
See the usage here: kubernetes/release#1311
So if you publish release tarballs to GitHub, you'll be able to do something like this to upload to GCS:
$ time gh2gcs --log-level debug --org kubernetes-csi --repo csi-proxy --bucket k8s-artifacts-csi --release-dir release --tags v0.0.0,v0.0.1,v0.0.2
If you run the command without the --tags flag, it'll iterate over all releases and upload them.
You could add a step in the cloudbuild.yaml to do this.
Eventually (hopefully very soon), I'm going to update the tool to accept a YAML config, so you can add a config file somewhere that looks roughly like this:
- org: foo
  repo: bar
  tags: []
  gcsBucket: bucket-name
  releaseDir: release-dir
Let me know what you think!
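For illustration, the extra cloudbuild.yaml step mentioned above might look roughly like this. It is only a sketch: gcr.io/example/gh2gcs is a hypothetical placeholder for whatever builder image ends up providing the gh2gcs binary, and the flags simply mirror the command shown above.

- name: gcr.io/example/gh2gcs   # hypothetical image that has gh2gcs installed
  args:
  - --org=kubernetes-csi
  - --repo=csi-proxy
  - --bucket=k8s-artifacts-csi
  - --release-dir=release
  # leaving out --tags uploads every GitHub release, as described above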
@justaugustus thank you very much for providing this tool. For now, I will follow the build steps of the other CSI repos; later we will look at how to improve this with your tool.
I couldn't quite follow the discussion, but is the suggestion to put CI images under a subfolder called "dev/ci", and then "promote" releases to "release/"? That way it's easy to distinguish a production-ready image from a dev image.
@pohly @msau42 @justaugustus PR is updated. PTAL. Thanks!
cloudbuild.yaml (outdated)
_PULL_BASE_REF: 'master'
artifacts:
  objects:
    location: 'gs://disksize-bucket'
Is this location correct?
Oh, sorry, that's from my local testing. Fixed it.
/lgtm
cloudbuild.yaml (outdated)
_PULL_BASE_REF: 'master'
artifacts:
  objects:
    location: 'gs://k8s-artifacts-csi'
Sorry for the late change, but for the container repo we decided not to use a storage-csi image repo and instead have one "k8s-sig-storage" repo for all SIG-related projects. Do we want to follow a similar naming scheme here?
I searched a bit and only found k8s-artifacts-cni and k8s-artifacts-prod in use, so overall there are very few such binary packages, and the artifact name is very close to the binary itself. So I am OK with keeping the csi name.
artifacts:
  objects:
    location: 'gs://k8s-artifacts-csi/dev'
    paths: 'bin/csi-proxy-${_PULL_BASE_REF}.exe'
Ah, sorry, one more thought: do we want a "csi-proxy/bin/" directory in case we have more than one binary to put here in the future?
Do you mean the GCS bucket location as gs://k8s-artifacts-csi/csi-proxy/bin, or putting it in paths as csi-proxy/bin/csi-proxy-${_PULL_BASE_REF}.exe?
For the bucket, if there are other binaries, won't they have different names anyway? For the paths here, I am not sure adding csi-proxy is useful.
Either way. I think it's better organized when each binary is in its own directory, especially once we have many versions stored.
Got it. But I thought paths is the local path where the file is stored and location is the remote destination where the file will be uploaded, so we should add it to location?
How about gs://k8s-artifacts-csi/csi-proxy/dev?
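In cloudbuild.yaml terms, that suggestion would make the artifacts section look roughly like this (a sketch of the layout being proposed, not the final merged config):

artifacts:
  objects:
    # per-binary directory in the bucket, with dev builds under dev/
    location: 'gs://k8s-artifacts-csi/csi-proxy/dev'
    paths: 'bin/csi-proxy-${_PULL_BASE_REF}.exe'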
/lgtm
This PR adds building and publishing the csi-proxy binary to the GCS bucket k8s-artifacts-csi.
The prow job for publishing the binary in test-infra is here: kubernetes/test-infra#17683