Run/fix DynamicPV tests in CI & remove redundant tests #189
One more consideration is test time: the tests are slow, since every test involves creating an FSx volume (for pre-provisioned tests the test itself calls the AWS API; for dynamic tests the driver calls the AWS API). We don't need to run both the dynamic and pre-provisioned variation of every test. Maybe run all dynamic cases and just one pre-provisioned case.
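The "all dynamic cases plus one pre-provisioned case" idea above could be expressed as a simple filter over the test patterns the suite runs. A minimal self-contained sketch (the `testPattern` type and names here are hypothetical stand-ins for the upstream storage framework's test patterns, not the real API):

```go
package main

import "fmt"

// testPattern loosely mirrors the idea of an upstream storage test
// pattern: each test case runs once per volume-provisioning mode.
type testPattern struct {
	name    string
	volType string // "DynamicPV" or "PreprovisionedPV"
}

// selectPatterns keeps every DynamicPV case but only the first
// PreprovisionedPV case, so most of the slow FSx volume creations
// happen on the dynamic path only.
func selectPatterns(all []testPattern) []testPattern {
	var out []testPattern
	keptPre := false
	for _, p := range all {
		switch p.volType {
		case "DynamicPV":
			out = append(out, p)
		case "PreprovisionedPV":
			if !keptPre {
				out = append(out, p)
				keptPre = true
			}
		}
	}
	return out
}

func main() {
	all := []testPattern{
		{"subPath", "DynamicPV"},
		{"subPath", "PreprovisionedPV"},
		{"volumes", "DynamicPV"},
		{"volumes", "PreprovisionedPV"},
	}
	for _, p := range selectPatterns(all) {
		fmt.Println(p.name, p.volType)
	}
}
```

In the real suite this filtering would happen wherever the driver registers which patterns it supports, rather than in a standalone function like this.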
Also, we need to debug why a specific subPath test fails: https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/Makefile#L51
/lifecycle stale
/lifecycle frozen

Now that CI is fixed, we should try to untangle the dynamic PV tests.
/kind bug
What happened?
Old TODO: https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/tests/e2e/conformance_test.go#L136
IIRC we are skipping the DynamicPV tests because of an issue in the framework where it would wait only 5 minutes for a volume to be created, but that issue was solved a long time ago.
Running them would allow us to refactor or remove some of the custom tests (for example, 2 of the 3 here: https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/tests/e2e/dynamic_provisioning_test.go#L55) and maybe reduce some of the code duplicated from upstream, like the driver struct: https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/tests/e2e/driver/driver.go#L23
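The duplication point above is about implementing a shared test-driver interface instead of copying its definition into this repo. As a self-contained illustration of that shape (the `TestDriver` interface and method names here are hypothetical simplifications, not upstream's real `storageframework` API):

```go
package main

import "fmt"

// TestDriver is a minimal stand-in for the interface the upstream
// storage test framework defines; suites are written against it,
// so drivers only need to implement it, not redefine it.
type TestDriver interface {
	GetDriverInfo() string
}

// fsxDriver implements the shared interface directly, so generic
// suites can exercise it without repo-local copies of the plumbing.
type fsxDriver struct{ name string }

func (d *fsxDriver) GetDriverInfo() string { return d.name }

// runSuite represents any upstream suite that only depends on the
// interface, never on a concrete driver struct.
func runSuite(d TestDriver) string {
	return "running suite against " + d.GetDriverInfo()
}

func main() {
	fmt.Println(runSuite(&fsxDriver{name: "fsx.csi.aws.com"}))
}
```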
(Obviously this is blocked by CI being 100% broken at the moment.)
What did you expect to happen?
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?:
Environment
kubectl version: