Testing strategy of Helm Chart #18127
Comments
Sure, great initiative. I can help a little bit :)
Comment from my/Airflow side (and maybe more from @ephraimbuddy and @kaxil and @jedcunningham who implemented most of the tests there): Yes, definitely. Both a values schema and unit tests on rendering are super-helpful to keep our Helm Chart "in check". Re unit tests, we currently have > 350 of them. One thing I want to look at shortly is speeding them up a bit (this is the only problem we have now, they are rather slow, but I believe it should be possible to speed them up a lot). And knowing that other - friendly :) - projects might want to use the approach, we might prioritise it a bit and share our experience. We use pytest for all kinds of tests and we think it is a great test harness that can be used in various contexts. We tried other solutions for the helm rendering before (helm-unittest), but it was extremely unfriendly: the declarative way of describing and asserting the tests was very difficult to maintain and very few people were even somewhat fluent in it. One thing we get with pytest is that most of our contributors are Pythonistas, and the way we do the rendering tests, they look very much like any other Python tests, so it's extremely easy to ask even casual contributors to add tests for any contribution they provide, and they are able to do it very quickly without too much learning.
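Not the Airflow harness itself, just a minimal sketch of what this style of pytest rendering test can look like, assuming helm is on PATH and PyYAML is installed; the chart path, release name, and asserted fields are illustrative assumptions:

```python
# Sketch of the rendering-test idea: shell out to `helm template`, parse the
# YAML, and assert on the rendered manifests. Chart path and asserted fields
# are placeholders, not the actual Superset chart layout.
import subprocess
import yaml


def render_chart(values=None, chart_dir="helm/superset"):
    """Render the chart with `helm template` and return the parsed manifests."""
    cmd = ["helm", "template", "release-name", chart_dir]
    for key, value in (values or {}).items():
        cmd += ["--set", f"{key}={value}"]
    output = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return [doc for doc in yaml.safe_load_all(output) if doc]


def test_deployment_uses_overridden_image_tag():
    manifests = render_chart({"image.tag": "2.0.0"})
    deployments = [m for m in manifests if m.get("kind") == "Deployment"]
    assert deployments, "the chart should render at least one Deployment"
    image = deployments[0]["spec"]["template"]["spec"]["containers"][0]["image"]
    assert image.endswith(":2.0.0")
```

The whole assertion surface is plain Python and YAML, which is what keeps the entry barrier low for casual contributors.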
@potiuk It would be great to share a solution and achieve a synergistic effect in developing the test mechanisms in this area! Thank you very much for sharing your in-depth test-management experience here. We don't have to step into the same puddle twice if we can avoid it.
@wiktor2200 what kind of proper testing rules do you mean? Could you give examples so that we can understand each other well about the approach to testing?
For a good start, it would be great to have:
Great, this is the approach I was thinking about in the beginning - a schema for values.yaml.
To protect against a misconfiguration like #17712, pytest would need some kind of expected-values list (which would probably need updating over time) - I don't know about the actual implementation yet, it will probably become clearer while planning this :)
One more comment on that - in Airflow we have > 350 unit tests, but we also have "kubernetes integration tests". Those also run regularly in our CI and they consist of:
We also run separate "upgrade" tests. We update the chart and upgrade it after installing, just to make sure that the upgrade scenario also works (we had some problems with hooks and migration jobs in the past). That is a really small number of tests, but they give us confidence that we have not done something disastrous by accident. I can heartily recommend Kind for this kind of testing; it's also great as a development environment when you would like to reproduce those kinds of tests locally.
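A hedged sketch of what such an install-then-upgrade smoke test could look like in the same pytest style, assuming a Kind cluster already exists (e.g. created with `kind create cluster` in an earlier CI step); the chart path and release name are placeholders:

```python
# Install the chart from the PR branch, then run `upgrade --install` against
# the same release so hooks and migration jobs are exercised the way an
# existing user would hit them during an upgrade.
import subprocess

CHART_DIR = "helm/superset"   # placeholder path
RELEASE = "superset-smoke"    # placeholder release name


def helm(*args):
    subprocess.run(["helm", *args], check=True)


def test_install_then_upgrade():
    helm("install", RELEASE, CHART_DIR, "--wait", "--timeout", "10m")
    try:
        helm("upgrade", "--install", RELEASE, CHART_DIR, "--wait", "--timeout", "10m")
    finally:
        helm("uninstall", RELEASE)
```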
I like the idea of integration testing via e.g. Kind. We (in Superset) already have an ephemeral test environment on top of AWS ECS & AWS Fargate ( https://preset.io/blog/2021-4-28-ephemeral-test-environments-in-superset/ ). So I am thinking about testing it on an AWS EKS cluster by creating a dedicated namespace for each PR. Then the issue of funding arises, but that is in line with the sponsorship discussion about test environments. Do you have an opinion on which approach is better in this regard? @robdiciuccio, as a contributor to #13189, could you provide some feedback on that?
Fully agree.
Yeah, I strongly suggest a combination of both unit tests and integration tests for the Helm Chart. Jed and I had talked about speeding up the pytest-based unit-test framework we have by grouping/caching the manifest files for the same input; the caveat, however, is that if we run the tests in parallel, that cache will be per thread. As Jarek pointed out, we will share our findings and optimizations here, as I think more projects can benefit from a pytest-backed Helm unit-test framework.
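Not the actual Airflow implementation, but a sketch of the grouping/caching idea: memoize the `helm template` output keyed by the input values, so tests that render the chart with identical inputs pay for the slow helm call only once (and, as noted above, each parallel worker keeps its own cache).

```python
# A cached variant of the render helper sketched earlier. lru_cache needs
# hashable arguments, so the values dict is serialised to a canonical JSON
# string before it is used as the cache key.
import functools
import json
import subprocess


@functools.lru_cache(maxsize=None)
def _render_cached(values_json, chart_dir):
    cmd = ["helm", "template", "release-name", chart_dir]
    for key, value in json.loads(values_json).items():
        cmd += ["--set", f"{key}={value}"]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


def render_chart(values=None, chart_dir="helm/superset"):
    return _render_cached(json.dumps(values or {}, sort_keys=True), chart_dir)
```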
I don't think we need to use external services, as we can use Kind on GitHub Actions. We have test environments running on Fargate because we want them to be available for longer than the duration of a single build.
I agree. The Helm Chart testing framework written in Python is very convenient and has a low entry barrier for projects that already use Python; for a Golang project the trade-off may look different. It is also worth mentioning the discussion we had at Airflow when we decided to migrate from tests written in YAML to the new Python framework: apache/airflow#11657
From the current discussion I see:
Given the above, I think I can start working on values.schema.json. However, I am still very eager to hear all the other points of view, even contraindications for this approach.
Beware of issues like apache/airflow#20539 when using external schema definitions, where offline or air-gapped installations don't work. We solved it by bringing those definitions into the repo itself: apache/airflow#20544
@craig-rueda As the maintainer of the Chart, can you give input here based on your experience and previous problems, so that we address them well? In fact, this is about how to make the maintainer's life easier. I will have some time to work on this once we agree on the direction.
This issue was moved to a discussion.
You can continue the conversation there.
Is your feature request related to a problem? Please describe.
I would like to discuss the testing strategy for the Helm Chart for Apache Superset, to improve the developer experience and avoid issues like #17920.
I think DevEx tweaks in this area could improve development efficiency, which will allow us to address the numerous issues that exist for the Helm Chart. Then we will be able to obtain the confidence necessary for publication in an artifact repository as an official artifact.
Describe the solution you'd like
I see numerous solutions in this area. I am afraid that I do not have enough experience in the project to see all the required aspects, so I strongly encourage comments from experienced people, both users and developers. Comments and additional requirements are welcome!
First of all, it would be great to provide a values.schema.json file ( https://helm.sh/docs/topics/charts/#schema-files ). It's about writing a JSON Schema for the values.yaml file. I already have a draft in that area. This will improve the developer experience, but most of all end users will no longer be able to apply a wrongly formatted or mis-indented values.yaml file.
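For illustration, a small fragment of what such a values.schema.json could contain; the keys shown are generic examples, not the actual Superset values layout:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" },
        "pullPolicy": { "enum": ["Always", "IfNotPresent", "Never"] }
      }
    }
  }
}
```

With a schema in place, helm install, helm upgrade, helm lint and helm template validate the supplied values against it automatically.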
Secondly, I am thinking about providing unit tests for rendering the Helm Chart: use pytest to render the Helm manifests and use it as the test framework, e.g. Apache Airflow uses that approach.
Describe alternatives you've considered
We might also use kubeval for testing the schema format, but this would be yet another tool when we already have a few testing frameworks in place. A large number of different frameworks raises the entry threshold and lowers DevEx by forcing constant context switching.
We might also use conftest, but again, it is one more testing framework which does not bring much more value than pytest.
We might also run integration tests in CI, e.g. in minikube or, like the test environments, on AWS. This can be a great topic, but such tests are long and slow and thus cover a limited number of scenarios (while providing very real validation of correctness). I think we should start with something smaller.
Additional context
@wiktor2200 @nytai @mvoitko might be interested in this area, as they are involved in the development of our Helm Chart.
@mik-laj @potiuk may want to share their thoughts based on the experience of Apache Airflow (another Apache Foundation project).