
Replatform e2e on to ginkgo v2 #4897

Closed
randomvariable opened this issue Jul 8, 2021 · 20 comments · Fixed by #6906
Labels
  • area/testing: Issues or PRs related to testing
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.
Milestone: v1.3

@randomvariable
Member

User Story

As a developer, I want my infrastructure cleaned up properly after an aborted test run.

Detailed Description

Ginkgo v2 looks like it will solve a lot of problems when it comes to testing on actual infrastructure. One of the main benefits is that AfterEach will be executed when tests are aborted, whereas today only AfterSuite is. With the current behaviour in AWS, this leaves a lot of resources behind when consuming the CAPI test framework, because we don't know which namespaces the framework has created; we normally store them in a node-level map so that AfterSuite can iterate through them and clean up the clusters.

As replatforming onto v2 (which isn't released yet) will be a breaking change, this is very much a v0.5.0 thing.
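
For illustration, a minimal sketch of the pattern this enables, assuming Ginkgo v2's interrupt behaviour; the client wiring and namespace handling below are simplified stand-ins, not the actual CAPI e2e code:

```go
package e2e_test

import (
	"context"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

var (
	ctx       = context.Background()
	k8sClient client.Client // assumed to be initialised by the suite setup
)

var _ = Describe("Workload cluster lifecycle", func() {
	var namespace *corev1.Namespace

	BeforeEach(func() {
		// Create a per-spec namespace instead of recording it in a suite-level map.
		namespace = &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: "capi-e2e-"},
		}
		Expect(k8sClient.Create(ctx, namespace)).To(Succeed())
	})

	// In Ginkgo v2 this AfterEach also runs when the run is aborted, so the
	// namespace (and the clusters inside it) gets cleaned up without relying
	// on AfterSuite iterating over a map of namespaces.
	AfterEach(func() {
		Expect(k8sClient.Delete(ctx, namespace)).To(Succeed())
	})

	It("creates and deletes a workload cluster", func() {
		// ... test body exercising the CAPI test framework ...
	})
})
```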

/milestone next

Anything else you would like to add:

[Miscellaneous information that will assist in solving the issue.]

/kind feature
/area testing
/priority important-longterm

@k8s-ci-robot
Contributor

@randomvariable: You must be a member of the kubernetes-sigs/cluster-api-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Cluster API Maintainers and have them propose you as an additional delegate for this responsibility.

In response to this:

/milestone next

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. area/testing Issues or PRs related to testing priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jul 8, 2021
@randomvariable
Member Author

@nobody4t
Contributor

This looks interesting. But judging from the robot's messages, some confirmation from the maintainers is necessary.
Please respond. Thanks.

@vincepri
Member

/milestone Next

@k8s-ci-robot k8s-ci-robot added this to the Next milestone Jul 28, 2021
@fabriziopandini
Member

We should also consider https://github.com/kubernetes-sigs/e2e-framework/

@jsturtevant
Contributor

I found the following in the e2e logs today and came here to file an issue, but it looks like this will cover it 👍

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce (a small number of) breaking changes.
To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md
To comment, chime in at https://github.com/onsi/ginkgo/issues/711

  You are using a custom reporter. Support for custom reporters will likely be removed in V2. Most users were using them to generate junit or teamcity reports and this functionality will be merged into the core reporter. In addition, Ginkgo 2.0 will support emitting a JSON-formatted report that users can then manipulate to generate custom reports.
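
For reference, a rough sketch of what replacing a custom JUnit reporter might look like under Ginkgo v2, assuming its ReportAfterSuite node and reporters package (the v2 CLI's --junit-report flag covers the same need); illustrative only, not the actual suite code:

```go
package e2e_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/reporters"
)

// Instead of registering a custom reporter, Ginkgo v2 lets the suite turn the
// final report into JUnit XML after all specs have run.
var _ = ReportAfterSuite("write junit report", func(report Report) {
	if err := reporters.GenerateJUnitReport(report, "junit.e2e.xml"); err != nil {
		GinkgoWriter.Printf("failed to write JUnit report: %v\n", err)
	}
})
```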

@nobody4t
Contributor

I have spent some time investigating the e2e-framework. Its functionality is very similar to what Ginkgo/Gomega provides, and it is pure native Go.
But it is at a very early stage: if we use this framework, we will have to take care of its possible bugs. It is not as mature as Ginkgo.
The latest Ginkgo provides more great features that will make our testing life easier.

I would like to hear more from you guys.
@fabriziopandini @vincepri @randomvariable
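
For context, a rough sketch of how a test is written with e2e-framework, driven by plain go test rather than Ginkgo; the package paths and setup below follow sigs.k8s.io/e2e-framework's documented pattern and are illustrative only:

```go
package e2e_test

import (
	"context"
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/features"
)

var testenv env.Environment

func TestMain(m *testing.M) {
	// The environment wires setup/teardown around the standard testing package.
	testenv = env.New()
	os.Exit(testenv.Run(m))
}

func TestClusterProvisioning(t *testing.T) {
	// A Feature groups assessment steps, playing a role similar to a Ginkgo spec.
	f := features.New("provision workload cluster").
		Assess("cluster becomes ready", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			// ... assertions against the cluster via cfg would go here ...
			return ctx
		}).
		Feature()

	testenv.Test(t, f)
}
```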

@fabriziopandini
Member

I'm +1 for investigating the e2e-framework, because AFAIK this is where Kubernetes is going (cc @vladimirvivien), but this should be discussed in the office hours, given that many providers are using the CAPI e2e tests in their current form.

@vincepri
Member

Should we separate ginkgo v2 upgrade from exploring the k8s e2e framework?

@fabriziopandini
Member

This is a possible way forward, but as far as I understand it won't fix #2955 (even though it will mitigate the problems we have today with cleanup in case of errors).

@vladimirvivien

Of course I am biased, but I think e2e-framework can help here.
I understand that e2e-framework is currently nascent, but I believe we can get it to where CAPI needs it to be.

(cc @ShwethaKumbla )

@vladimirvivien

@vincepri and @fabriziopandini
I would love to better understand the pain points. Can you point me to somewhere in the code that you think could be done better with the new framework, and I can work backwards from there?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2021
@vladimirvivien

/remove-lifecycle stale

Keeping this open if it's still relevant.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 24, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 22, 2022
@vladimirvivien

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 28, 2022
@vladimirvivien

/remove-lifecycle stale

Keeping this alive, as the upcoming release of e2e-framework will have features that make it easy to support testing of CRD development, including CAPI.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 28, 2022
@fabriziopandini fabriziopandini added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@fabriziopandini fabriziopandini removed this from the Next milestone Jul 29, 2022
@fabriziopandini fabriziopandini removed the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@fabriziopandini
Member

/triage accepted

As discussed in the CAPI office hours meeting, @mboersma will send an email to the list announcing the timeline for switching to Ginkgo v2.

@k8s-ci-robot k8s-ci-robot added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@fabriziopandini fabriziopandini added this to the v1.3 milestone Jul 29, 2022
@mboersma
Contributor

mboersma commented Aug 2, 2022

https://groups.google.com/g/kubernetes-sig-cluster-lifecycle/c/5wf5ogFUeYI
