e2e-kubeadm: Allow pinning Kubernetes version. #2761

Merged (1 commit into kubernetes:master on Jun 16, 2017)

Conversation

pipejakob (Contributor):

Add support for kubeadm e2e jobs to specify a version string to use that ultimately gets passed as `kubeadm init --kubernetes-version <version>`. This is important for e2e jobs running against release branches, instead of always using `latest`.
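
As a rough illustration of the plumbing (a sketch, not the actual diff; the version value and deployer flag below are examples): the per-job version setting is forwarded to kubetest's kubernetes-anywhere deployer, which ultimately runs `kubeadm init` with it.

```python
# Sketch: how a pinned version flows to kubetest (values illustrative).
# kubetest's kubernetes-anywhere deployer ultimately invokes
# `kubeadm init --kubernetes-version <version>` on the master node.
k8s_version = 'latest-1.6'  # hypothetical per-job setting

kubetest_args = ['--deployment=kubernetes-anywhere']
if k8s_version:
    kubetest_args.append(
        '--kubernetes-anywhere-kubernetes-version=%s' % k8s_version)

print(' '.join(kubetest_args))
```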

@k8s-ci-robot added the `cncf-cla: yes` label (indicates the PR's author has signed the CNCF CLA) on May 15, 2017
@pipejakob (Contributor, Author):

/assign @krzyzacy

@pipejakob (Contributor, Author):

Not sure if I'm accumulating too much kubeadm-specific logic in the kubernetes_e2e.py scenario. At any point, feel free to push back and I'll fork it off into its own scenario.

@krzyzacy (Member):

Yeah, it sounds sane to move all the kubeadm stuff to its own scenario, or into kubetest/anywhere.go?

@pipejakob (Contributor, Author):

Sure. Just two questions:

  1. Does this PR itself look okay? This (in addition to Fix e2e-kubeadm release-1.6 job. #2765) should bring the kubeadm release-1.6 job back to green, and this can become the basis of converting the relevant parts of kubernetes_e2e.py to a separate kubeadm_e2e.py scenario. If it's not too much trouble, I'd like to get this change in first, and then follow it up with a PR to extract all of the kubeadm logic to its own scenario instead of conflating the two.
  2. Is there documentation on the significance of the different layers in test-infra, or guidelines around what functionality should go into each layer? I've been trying to just observe and follow suit for what belongs in kubetest vs. scenarios vs. jobs/config.json vs. env files vs. prow's config file, etc., but I still don't feel like I have a good handle on where something brand new should go, if I don't already have several examples setting a precedent. (This may be a @fejta question)


```python
k8s_version = mode.job_env().get('KUBEADM_KUBERNETES_VERSION')
if k8s_version:
    opt += ' --kubernetes-anywhere-kubernetes-version %s' % k8s_version
```
Member (review comment):

update test :-)
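
For illustration, a minimal unit test along those lines might look like this (a sketch; `FakeMode` and `build_opt` are hypothetical stand-ins for the scenario's mode object and flag-building code):

```python
import unittest

class FakeMode(object):
    """Hypothetical stand-in for the scenario's local mode object."""
    def __init__(self, env):
        self._env = env
    def job_env(self):
        return self._env

def build_opt(mode):
    # Mirrors the logic under review.
    opt = ''
    k8s_version = mode.job_env().get('KUBEADM_KUBERNETES_VERSION')
    if k8s_version:
        opt += ' --kubernetes-anywhere-kubernetes-version %s' % k8s_version
    return opt

class KubeadmVersionTest(unittest.TestCase):
    def test_version_is_forwarded(self):
        mode = FakeMode({'KUBEADM_KUBERNETES_VERSION': 'latest-1.6'})
        self.assertIn('--kubernetes-anywhere-kubernetes-version latest-1.6',
                      build_opt(mode))

    def test_no_version_adds_nothing(self):
        self.assertEqual('', build_opt(FakeMode({})))

if __name__ == '__main__':
    unittest.main()
```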

```python
def start(self, args):
    """Runs e2e-runner.sh after setting env and installing prereqs."""
    print >>sys.stderr, 'starts with local mode'

def job_env(self):
```
Member (review comment):

maybe this should be a property
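
That is, something like the following sketch (assuming the Python 2 style the scenarios use; `LocalMode` is simplified here):

```python
class LocalMode(object):
    """Simplified sketch of the scenario's local mode."""
    def __init__(self):
        self._job_env = {}

    @property
    def job_env(self):
        # Read-only attribute access instead of a method call:
        # callers write mode.job_env.get(...) not mode.job_env().get(...).
        return self._job_env
```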

```python
@@ -377,6 +382,11 @@ def main(args):
        ' --kubernetes-anywhere-phase2-provider kubeadm' \
        ' --kubernetes-anywhere-cluster %s' \
        ' --kubernetes-anywhere-kubeadm-version %s' % (cluster, version)

    k8s_version = mode.job_env().get('KUBEADM_KUBERNETES_VERSION')
```
Member (review comment):

docker mode does not have job_env(), though... you might want to check that they're mutually exclusive?
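
A sketch of such a guard (hypothetical; how the scenario actually distinguishes local from docker mode may differ):

```python
import os

def pinned_kubeadm_version(mode):
    """Hypothetical guard: only local mode carries a job_env()."""
    if hasattr(mode, 'job_env'):  # local mode defines job_env()
        return mode.job_env().get('KUBEADM_KUBERNETES_VERSION')
    # Docker mode: fail fast instead of silently ignoring the pin.
    if os.environ.get('KUBEADM_KUBERNETES_VERSION'):
        raise ValueError(
            'KUBEADM_KUBERNETES_VERSION is only supported in local mode')
    return None
```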

@krzyzacy (Member) commented May 17, 2017:

It seems all you want for kubeadm is to add a bunch of flags to kubetest. You could create the scenario and pass those flags through config.json, like `kubernetes_kubeadm.py --kubeadm-k8s-version=latest-1.6` (see the sketch below); plus, none of your kubeadm jobs need to support docker mode.

But sgtm if you want to get this in first and refactor in a separate PR :-)
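
A sketch of what such a per-job entry might look like (hypothetical job name and flags, shown as the Python dict equivalent of a jobs/config.json entry):

```python
# Hypothetical jobs/config.json entry for a dedicated kubeadm scenario.
job_config = {
    'ci-kubernetes-e2e-kubeadm-gce-1-6': {
        'scenario': 'kubernetes_kubeadm',
        'args': [
            '--kubeadm-k8s-version=latest-1.6',
        ],
    },
}
```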

@pipejakob (Contributor, Author):

Looks like I need to convert my JENKINS_PUBLISHED_VERSION settings to --extract settings. Working on that now, and will retest.

@pipejakob force-pushed the pin-k8s-version branch 3 times, most recently from 4672969 to 04c768d, on June 15, 2017
@pipejakob (Contributor, Author):

Okay, I've converted this functionality from an environment variable in the .env files into a new flag instead, which simplifies the implementation and tests, and keeps more of the per-job settings bundled together in jobs/config.json. Care to take another look?

@pipejakob (Contributor, Author):

(Rebased)

@krzyzacy (Member):

lg overall, please fix the presubmit

@pipejakob (Contributor, Author):

Sweet! Yes, this should fix both the release-1.6 and release-1.7 CI tests.

@krzyzacy (Member):

It might also conflict with #3084; let's see who gets in first.

@pipejakob (Contributor, Author):

The race is on. I'll go distract Erick while the presubmits finish.

@pipejakob (Contributor, Author):

I lost the race :-(. Rebasing now.

@krzyzacy (Member):

cc @caesarxuchao: this should fix kubeadm in 1.7, which you asked about earlier.

```python
@@ -473,6 +479,9 @@ def create_parser():
    parser.add_argument(
        '--kubeadm', choices=['ci', 'periodic', 'pull'])
    parser.add_argument(
        '--kubeadm-kubernetes-version', default=None,
        help='Version of Kubernetes to use while '
             'testing kubeadm (e.g. latest, latest-1.6, 1.6.4, etc)')
```
@krzyzacy (Member) commented Jun 16, 2017:

You can just pass in `--kubernetes-anywhere-kubernetes-version=x.x.x` from config.json; the rest should be automatic, which means you don't need to create a new flag here.
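
Per that suggestion, the job entry would carry the kubetest flag verbatim, something like this (hypothetical job name and values, shown as a Python dict):

```python
# Hypothetical: config.json passes the kubetest flag straight through,
# so the scenario needs no kubeadm-specific flag of its own.
job_config = {
    'ci-kubernetes-e2e-kubeadm-gce': {
        'scenario': 'kubernetes_e2e',
        'args': [
            '--kubeadm=periodic',
            '--kubernetes-anywhere-kubernetes-version=latest-1.6',
        ],
    },
}
```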

Contributor Author (review comment):

I did consider this after I saw #3082. I have been intentionally trying to keep the scenario's options kubeadm-based (and keep kubernetes-anywhere an implementation detail under the covers). It's probably worth redoing the kubetest args to be consistent with these in the future, since kubeadm is the only thing we test using kubernetes-anywhere.

Member (review comment):

I would probably add a presubmit check saying --kubeadm-kubernetes-version must also set --kubeadm, rather than complicate the logic here, since this goes against #3082.
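
A sketch of such a presubmit check (hypothetical; assumes the jobs/config.json layout of job name mapping to scenario and args):

```python
import json

def check_kubeadm_flags(config_path='jobs/config.json'):
    """Hypothetical presubmit: pinning a kubeadm version requires --kubeadm."""
    with open(config_path) as f:
        jobs = json.load(f)
    for name, spec in jobs.items():
        args = spec.get('args', [])
        pins = any(a.startswith('--kubeadm-kubernetes-version')
                   for a in args)
        uses_kubeadm = any(a.startswith('--kubeadm=') for a in args)
        if pins and not uses_kubeadm:
            raise ValueError(
                '%s sets --kubeadm-kubernetes-version without --kubeadm'
                % name)
```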

Contributor Author (review comment):

I'm a little confused. What logic did you want removed from here? Or are you saying that we should add logic to make sure that if someone specifies --kubeadm-kubernetes-version and forgets to set --kubeadm, we should have an error, but that new logic should go into a presubmit test instead of a runtime check in the scenario?

@pipejakob (Contributor, Author):

Rebased again, and fixed a bug where all of the kubernetes-anywhere flags had leading spaces, which made kubetest reject them.
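
For illustration, a reconstruction of that kind of bug (simplified; not the exact code from this PR):

```python
# The option fragments were built with leading spaces for concatenation:
opt = ' --kubernetes-anywhere-kubernetes-version latest-1.6'

# Buggy: passing the fragment as one argv element keeps the space, so the
# flag parser sees ' --kubernetes-anywhere-...' and rejects it.
bad_args = [opt]

# Fixed: strip/split before handing the arguments to kubetest.
good_args = opt.split()
assert good_args[0] == '--kubernetes-anywhere-kubernetes-version'
```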

@pipejakob (Contributor, Author):

Okay, I don't think that failure was because of me:

```
W0616 21:45:08.638] unexpected pipe read status: (error: 22): Invalid argument
W0616 21:45:08.638] Server presumed dead. Now printing '/root/.cache/bazel/_bazel_root/27a95fe2fc68aa1195ddf1a4bbaaea62/server/jvm.out':
```

/retest

Add support for kubeadm e2e jobs to specify a version string to use that
ultimately gets passed as "kubeadm init --kubernetes-version <version>".
This is important for e2e jobs running against release branches, instead
of always using "latest".
@pipejakob (Contributor, Author):

@krzyzacy and I chatted offline. I've pulled out the new kubeadm flag in favor of just using the kubetest kubernetes-anywhere flag instead. I'm still testing the jobs locally, but please have a look!

@krzyzacy (Member):

/lgtm

@k8s-ci-robot added the `lgtm` label ("Looks good to me"; indicates that a PR is ready to be merged) on Jun 16, 2017
@krzyzacy (Member):

hope it will fix things :-)

@krzyzacy merged commit 59792dc into kubernetes:master on Jun 16, 2017
@pipejakob (Contributor, Author):

Well I was going to finish testing before merging, but I guess we'll roll forward if any problems are found. 😄
