
Per-Provider Post-Up Setup #2143

Closed · zmerlynn opened this issue Mar 6, 2017 · 12 comments
Labels: area/kubetest, lifecycle/rotten

zmerlynn (Member) commented Mar 6, 2017

In the kops testing, we're currently disabling NodePort testing, cf. kubernetes/kops#775. I just added the -slow tests as well, and there are more of these issues. Essentially, all of the providers do some tiny amount of post-deployment setup to "ready" the cluster for e2e testing, but because kops is outside the normal kube-up framework, all kops deployments are as a customer would have them.

I see a couple of options here:

  1. We add a function in the kubetest kops deployment that handles the post-deployment configuration (a rough sketch follows this list). This could call a kops function, but, likely as not, we'd just thunk out to the AWS CLI, or pull in the AWS SDK to kubetest (yay?).
  2. We find a better way to test these things that would actually work in an as-deployed environment.
  3. We tag anything that needs special cluster setup and filter the tag out for kops testing. This has obvious coverage issues, but lets us filter the tag without specially calling out NodePort.
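
For concreteness, here's a minimal sketch of what (1) could look like, assuming we just thunk out to the AWS CLI as described above. The type and method names, the security group field, and the NodePort firewall rule are purely illustrative placeholders, not kubetest's actual deployer interface:

```go
// Hypothetical sketch only: a post-deployment hook on a kops-style deployer
// that shells out to the AWS CLI to "ready" the cluster for e2e testing.
package main

import (
	"fmt"
	"os/exec"
)

// kopsDeployer stands in for the kubetest kops deployment; nodeSecurityGroup
// is assumed to be discoverable from the cluster (via kops or the AWS API).
type kopsDeployer struct {
	nodeSecurityGroup string
}

// PostUp is the proposed per-provider hook. As an example, it opens the
// default NodePort range (30000-32767) on the node security group, the sort
// of small post-deployment step kube-up providers already do before e2e runs.
func (k *kopsDeployer) PostUp() error {
	cmd := exec.Command("aws", "ec2", "authorize-security-group-ingress",
		"--group-id", k.nodeSecurityGroup,
		"--protocol", "tcp",
		"--port", "30000-32767",
		"--cidr", "0.0.0.0/0")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("post-up setup failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Placeholder security group ID for illustration.
	d := &kopsDeployer{nodeSecurityGroup: "sg-0123456789abcdef0"}
	if err := d.PostUp(); err != nil {
		fmt.Println(err)
	}
}
```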

Thoughts from @kubernetes/sig-testing-misc?

zmerlynn (Member, Author) commented Mar 6, 2017

cc @justinsb

spiffxp (Member) commented Mar 6, 2017

In the abstract, post-up seems like a valid hook. But I question why this wouldn't just be a modification to the existing deployer's "up".

I am a firm believer that kubetest -test should "just work" without any additional configuration save for credentials, at least for conformance tests, so as to support the testing of clusters deployed with tools other than kube-up/kops/kube-anywhere.

zmerlynn (Member, Author) commented Mar 6, 2017

@spiffxp: Then you're seemingly opting for (2) above, which is to say "tests should work on the default setup"?

zmerlynn (Member, Author) commented Mar 6, 2017

To be fair, this is my general desire as well - and I believe everything tagged with [Conformance] actually obeys this. I'd almost rather flip this on its head and first implement (3) with a [SpecialSetup] tag, then rewrite the tests if possible (i.e. (3) then (2)).
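
To make that concrete, a tagged spec could look something like the sketch below. `[SpecialSetup]` is just the placeholder name from above, and the filtering side would reuse the existing regex-over-test-names mechanism (e.g. a ginkgo skip regex of `\[SpecialSetup\]` for kops jobs):

```go
// Illustrative only: an e2e spec carrying a hypothetical [SpecialSetup] tag
// in its name, so deployments without extra post-up configuration can skip
// it via the usual name-regex filtering (at the cost of coverage).
package e2e

import (
	. "github.com/onsi/ginkgo"
)

var _ = Describe("Services", func() {
	It("should be reachable via NodePort [SpecialSetup]", func() {
		// Test body would assume the provider-specific firewall/post-up
		// setup has already been done (e.g. NodePort range opened).
	})
})
```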

spiffxp (Member) commented Mar 6, 2017

Yeah, I guess I'm opting for 2. Or asking that we find a better way than [TestTags:ParsedViaRegex] to classify these things.

Varying levels of fidelity here:

  • allow tests to introspect the cluster for its capabilities, and test appropriately?
  • allow tests to compare the cluster's cloud provider against a hardcoded list of capabilities? (this seems roughly where we're at re: tests that require ssh; rough sketch at the end of this comment)
  • same as above, but s/cloud-provider/e2e-deployer, so it's passed in rather than introspected
  • create a general-purpose tag that indicates non-conformant/non-standard setup (we tried using Feature: for this, but it's grown past that scope now)
  • mumbles something about feature flags

I'm open to whatever solves kops's immediate need, just pointing out it's not unique.
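
As a strawman for the "hardcoded capability list per provider" bullet above, something like the following; all of the names here are hypothetical, and this only mirrors the spirit of the existing provider-based skips rather than any actual framework API:

```go
// Hypothetical sketch of a per-provider capability table that tests could
// consult before exercising something like NodePort reachability. Keyed on
// cloud provider here; keying on the e2e deployer instead would be the
// "passed vs. introspected" variant also mentioned above.
package e2e

type capability string

const (
	capNodePortReachable capability = "nodeport-reachable"
	capSSHToNodes        capability = "ssh-to-nodes"
)

var providerCapabilities = map[string][]capability{
	"gce": {capNodePortReachable, capSSHToNodes},
	"aws": {capSSHToNodes}, // e.g. a kops cluster left as a customer would have it
}

// hasCapability reports whether the given provider is known to support a
// capability; tests would skip themselves when it returns false.
func hasCapability(provider string, want capability) bool {
	for _, c := range providerCapabilities[provider] {
		if c == want {
			return true
		}
	}
	return false
}
```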

zmerlynn (Member, Author) commented Mar 6, 2017

> mumbles something about feature flags

Heh, cf. kubernetes/kubernetes#2953, which we haven't implemented.

bgrant0607 (Member) commented Mar 9, 2017

FWIW, I made some comments about feature flags in the layers doc:
https://github.com/kubernetes/community/blob/master/contributors/devel/architectural-roadmap.md

fejta (Contributor) commented Aug 14, 2017

/assign @zmerlynn

I believe you added this logic to kubetest? Can we close if so?

fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 3, 2018
spiffxp (Member) commented Jan 3, 2018

/area kubetest

fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 8, 2018
fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
