chore(ci)!: Update tested Kubernetes versions #7798

Merged — @spencergilbert merged 6 commits into vectordotdev:master from update-kube-e2e on Jun 11, 2021
Conversation

@spencergilbert (Contributor) commented Jun 8, 2021

Signed-off-by: Spencer Gilbert <spencer.gilbert@gmail.com>

While Vector still runs on Kubernetes 1.14.x, as far as I can tell that version is no longer offered or supported by any major cloud provider. Right now #6564 is failing CI because we use a kubectl command (`rollout restart`) that isn't supported in 1.14.x.

This PR does the following:

  • Drops 1.14.10

I'd also suggest considering an adjustment to our documentation around required/supported Kubernetes versions. We'll also need to update and include more versions in a follow-up.
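
For context, here is a minimal sketch of the kind of workflow change this implies. This is not the PR's actual diff: the job layout, script path, and the specific patch versions below are illustrative assumptions; the real matrix lives in `.github/workflows/k8s_e2e.yml`.

```yaml
# Illustrative sketch only — not the actual contents of .github/workflows/k8s_e2e.yml.
# Version numbers and the test entry point are assumptions for illustration.
jobs:
  k8s-e2e:
    runs-on: ubuntu-20.04
    strategy:
      matrix:
        kubernetes_version:
          # 1.14.10 dropped: `kubectl rollout restart` requires kubectl >= 1.15
          - "v1.19.11"
          - "v1.18.19"
          - "v1.17.17"
          - "v1.16.15"
          - "v1.15.12"
    steps:
      - uses: actions/checkout@v2
      - name: Run Kubernetes E2E suite
        # Hypothetical entry point; the real workflow invokes Vector's own scripts.
        run: ./scripts/k8s-e2e.sh
        env:
          KUBERNETES_VERSION: ${{ matrix.kubernetes_version }}
```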

Signed-off-by: Spencer Gilbert <spencer.gilbert@gmail.com>
@spencergilbert spencergilbert requested review from a team, binarylogic and jszwedko June 8, 2021 22:19
@spencergilbert spencergilbert self-assigned this Jun 8, 2021
@spencergilbert spencergilbert added the `ci-condition: k8s e2e all targets`, `ci-condition: k8s e2e tests enable`, `domain: ci`, and `platform: kubernetes` labels on Jun 8, 2021
@spencergilbert spencergilbert requested a review from zsherman June 8, 2021 22:22
@jszwedko (Member) left a comment


Should we update the docs with this change too? 👍 on supporting only the lowest available version on major cloud providers.

Review thread on .github/workflows/k8s_e2e.yml (outdated, resolved)
@spencergilbert (Contributor, Author)
Should we update the docs with this change too? 👍 on supporting only the lowest available version on major cloud providers.

I wasn't sure if we wanted to. The chart still works on 1.14; we're just no longer testing it. I think it's a good idea to tighten the scope of supported versions, but that's also a larger discussion and potentially impactful to users/customers.

@StephenWakely (Contributor)

I'd say we need to update the docs. If we aren't testing it, we aren't supporting it.

@spencergilbert (Contributor, Author)

I'd say we need to update the docs. If we aren't testing it, we aren't supporting it.

I'd defer to the group/leadership, but there are definitely some semantics here: Vector is still compatible with 1.14; we're just not actively testing it, and we make a best effort not to break existing compatibility. Perhaps we could adopt language similar to minikube's:

minikube follows the Kubernetes Version and Version Skew Support Policy, so we guarantee support for the latest build for the last 3 minor Kubernetes releases. When practical, minikube aims to support older releases as well so that users can emulate legacy environments.

@jszwedko (Member) commented Jun 9, 2021

I'd say we need to update the docs. If we aren't testing it, we aren't supporting it.

I'm inclined to agree with this. If we aren't testing it, we aren't really supporting it.

My preference would be to support the lowest common denominator version for the major cloud providers.

I do think we should update the docs with this PR.

@spencergilbert (Contributor, Author)

I'd say we need to update the docs. If we aren't testing it, we aren't supporting it.

I'm inclined to agree with this. If we aren't testing it, we aren't really supporting it.

👍

My preference would be to support the lowest common denominator version for the major cloud providers.

I think we can figure this out in a follow-up issue/RFC.

I do think we should update the docs with this PR.

I'll just update the docs to be >=1.15 in this PR... though, do we need to consider this a breaking change?

Signed-off-by: Spencer Gilbert <spencer.gilbert@gmail.com>
@jszwedko (Member) commented Jun 9, 2021

I'll just update the docs to be >=1.15 in this PR... though, do we need to consider this a breaking change?

Yeah, this is a good question. I guess technically it should be a breaking change. I'd like to have us define a policy for what is breaking and not. I'll make a note of this for next quarter.

@spencergilbert (Contributor, Author)

I'll update it to be a breaking change, and I'll fix these patch versions. I went a little too new with the patches; I think the failing ones aren't released until next week 😅

@spencergilbert spencergilbert changed the title chore(ci): Update tested Kubernetes versions chore(ci)!: Update tested Kubernetes versions Jun 9, 2021
@spencergilbert spencergilbert changed the title chore(ci)!: Update tested Kubernetes versions chore!(ci): Update tested Kubernetes versions Jun 9, 2021
Signed-off-by: Spencer Gilbert <spencer.gilbert@gmail.com>
@spencergilbert (Contributor, Author)

Tested the two failed jobs locally and they passed, so I'm guessing they're just flaky? Or maybe it's related to the runner environment.

@spencergilbert spencergilbert changed the title chore!(ci): Update tested Kubernetes versions chore(ci)!: Update tested Kubernetes versions Jun 10, 2021
@StephenWakely (Contributor)

Tested the two failed jobs locally and they passed, so I'm guessing they're just flaky?

I have a hunch about that flaky test. It's unrelated to this PR: because the affinity pod is created in the same namespace as the test pod, the test sometimes picks up logs from the affinity pod instead. I'll get a fix in.

@spencergilbert (Contributor, Author)

Tested the two failed jobs locally and they passed, so I'm guessing they're just flaky?

I have a hunch about that flaky test. It's unrelated to this PR: because the affinity pod is created in the same namespace as the test pod, the test sometimes picks up logs from the affinity pod instead. I'll get a fix in.

Talked a bit over Slack: we have two sources of flakiness here, one CRI-O related and one in our tests that @StephenWakely will fix.
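
To make the namespace point concrete, here is a hypothetical manifest sketch (not from this PR, and not necessarily how the eventual fix works): if the affinity pod lives in its own namespace, log collection scoped to the test namespace can no longer pick up its output by accident. All names below are made up.

```yaml
# Hypothetical illustration only — names are invented and this is not the actual fix.
apiVersion: v1
kind: Namespace
metadata:
  name: affinity-helper            # separate namespace just for the affinity pod
---
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
  namespace: affinity-helper       # not the namespace the test tails logs from
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2  # placeholder container for the sketch
```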

Signed-off-by: Spencer Gilbert <spencer.gilbert@gmail.com>
Signed-off-by: Spencer Gilbert <spencer.gilbert@gmail.com>
Signed-off-by: Spencer Gilbert <spencer.gilbert@gmail.com>
@spencergilbert spencergilbert merged commit 331c3a4 into vectordotdev:master Jun 11, 2021
@spencergilbert spencergilbert deleted the update-kube-e2e branch June 11, 2021 17:53