Need to add integration test to the regular CI runs, not just integration_validation #1974

Closed
yihuaf opened this issue May 27, 2023 · 7 comments · Fixed by #2012
yihuaf commented May 27, 2023

@utam0k @YJDoc2 Would you please double-check my thinking here?

This was triggered by #1967 failing the integration test because of the extra error logging I added. Upon investigation, I think the integration test is either not triggered correctly, or we should add the integration test to the main CI workflow.

Currently, the integration_test_validation.yaml is only triggered when the integration test code is changed. The workflow runs both runc and youki to make sure that the integration tests run for both. This makes sense because we want to make sure we don't introduce breaking changes, so the same validation tests can be used for runc as well.
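For reference, a path-filtered trigger of this kind looks roughly like the sketch below. This is a minimal sketch; the workflow name and paths are assumptions, not copied from the repo:

```yaml
# Hypothetical sketch of a path-filtered trigger: the workflow only
# runs when the integration test code itself changes.
name: integration test validation
on:
  pull_request:
    paths:
      - "tests/**"   # assumed location of the integration test code

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... build the test suite, then run it against both runc and youki
```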

However, other than on this occasion, the integration tests are not triggered at all. This means code changes to youki and/or libcontainer alone do not trigger the integration tests. This is bad because we can easily break the integration tests without knowing.

To resolve this, we should either:

  1. Add the integration tests to main.yaml and use the same trigger. We should likely run the integration tests as a separate job so that we can parallelize them with the existing tests (unit tests, clippy, etc.).
  2. OR, add a new workflow that specifically triggers the integration tests.

I would prefer (1) so that we can share the same trigger.
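For illustration, option (1) could look something like the following. This is a minimal sketch, assuming main.yaml keeps its existing trigger; the job names and the integration test command are hypothetical placeholders:

```yaml
# Hypothetical addition to main.yaml: the integration tests get their own
# job, so they run in parallel with the existing unit test / clippy jobs.
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: cargo test --workspace

  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # hypothetical command; whatever builds youki and runs the suite
      - run: make integration-test
```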

@YJDoc2
Copy link
Collaborator

YJDoc2 commented May 29, 2023

Hey @yihuaf, the original thought was that, given the Rust tests are still under development, we only run them when the tests themselves are changed. Otherwise, since we have to keep the tests the same as the Go tests (the actual opencontainers tests), as long as youki passes those, it should be able to pass the Rust ones as well.

I'm not particularly against running all of the tests for all PRs; if we do so, we should run them in parallel as you have suggested.

I think we need to rethink and re-write our whole CI eventually...

@yihuaf
Copy link
Collaborator Author

yihuaf commented May 29, 2023

It looks like we should add the integration tests to the normal CI run, but in parallel. That avoids the case where an actual code change breaks the integration tests. The integration tests are not complete, but they are stable with what we have currently, so we should proceed.

> I think we need to rethink and re-write our whole CI eventually...

Do you have some ideas on what areas we should improve or rewrite? What would you change if you had unlimited time and resources?

@utam0k
Copy link
Collaborator

utam0k commented May 30, 2023

Shouldn't it be added to e2e.yaml? As long as CI has the binary, we can implement it with containerd/kubernetes e2e jobs, etc.
What about testing the test itself with runc if there are changes in integration_test itself?

@yihuaf
Copy link
Collaborator Author

yihuaf commented May 30, 2023

> What about testing the test itself with runc if there are changes in integration_test itself?

This already happens with the integration_validate.yaml workflow, which only triggers when the integration tests are changed.

> Shouldn't it be added to e2e.yaml? As long as CI has the binary, we can implement it with containerd/kubernetes e2e jobs, etc.

We can. I was not sure if you wanted to reserve e2e for containerd/k8s/other actual running platforms. Or we can implement it as a separate workflow. Do you prefer that we add it to e2e instead?

yihuaf changed the title from "The integration test trigger is incorrect" to "Need to add integration test to the regular CI runs, not just integration_validation" on May 31, 2023
@utam0k
Copy link
Collaborator

utam0k commented May 31, 2023

> This already happens with the integration_validate.yaml workflow, which only triggers when the integration tests are changed.

My point was to move the testing part of youki from that workflow to e2e.

  • integration_validate.yaml guarantees the operation of the integration test itself; this uses runc.
  • e2e.yaml runs the integration tests using youki.

It is natural to include our integration_test in e2e; I was going to do that regardless of this issue, because it would be more efficient to upload the youki binary as an artifact and test it in parallel with each type of e2e test.
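A minimal sketch of that artifact pattern, using actions/upload-artifact and actions/download-artifact; the job names, build command, and test steps are placeholders, not the actual workflow:

```yaml
# Hypothetical e2e.yaml layout: build youki once, upload the binary as
# an artifact, and let each e2e suite download it and run in parallel.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: cargo build --release
      - uses: actions/upload-artifact@v3
        with:
          name: youki
          path: target/release/youki

  oci-integration-tests:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: youki
      - run: chmod +x ./youki   # artifacts drop the executable bit
      # ... run the rust integration tests against ./youki
  # containerd / kubernetes e2e jobs would follow the same pattern
```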

@YJDoc2
Copy link
Collaborator

YJDoc2 commented Jun 5, 2023

Hey @yihuaf, I agree with @utam0k that running the Rust e2e tests on youki itself should be done in the same workflow as the rest of e2e. As for running them on runc for validation when they themselves are changed, we can do that in a separate workflow such as integration_validate.yaml. Also, for improving the CI, I'll add some points that I have in mind later. We can handle them in a separate issue, or take them up with this one, whichever is convenient.

@YJDoc2
Copy link
Collaborator

YJDoc2 commented Jun 5, 2023

Things in the CI that (I think) might benefit from changes:

main.yaml:

  • Currently we use a path filter here to run linting and formatting checks only when code is changed. This was once a good idea, but now I think it is making things needlessly complex. Given that format and lint checking is not that expensive in CI (it takes about 3 minutes, including a clean build), we can just always run it. This would also save us from missing any lints when the path filters are set up incorrectly.
  • If we still want to retain the path-filter approach, I think at least switching to tj-actions/changed-files instead of dorny/paths-filter would do us good (see the sketch after this list). The changed-files action lets us provide specific filename patterns to filter the changes, and also provides various other info in its output.
  • Move the OCI integration tests from here to the e2e workflow. This is in accordance with utam0k's suggestion of keeping all e2e tests in a single workflow so we can reuse the youki build.
  • Currently the coverage job simply uploads the coverage data to codecov, which then reports the diff in coverage due to the PR. I don't think this is very useful, as the only time it really shows any changes is when we add unit tests or code. It might be better if we had more info, such as per-file or per-crate coverage, so we know where we need to focus improvements. The exact details need more thought.
  • Unify the dependency install. We install youki's system dependencies through either apt-get install or by running ./.github/scripts/dependency.sh. Pick one approach and use only that, so it is easier to maintain and update.
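As referenced in the path-filter bullet above, a minimal sketch of the tj-actions/changed-files approach; the file patterns and the clippy invocation are assumptions:

```yaml
# Hypothetical sketch: gate the lint step on whether Rust sources changed,
# using tj-actions/changed-files instead of dorny/paths-filter.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - id: changed
        uses: tj-actions/changed-files@v35
        with:
          files: |
            **/*.rs
            **/Cargo.toml
      - if: steps.changed.outputs.any_changed == 'true'
        run: cargo clippy --all-targets -- -D warnings
```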

release.yml:

  • I think the actual cargo publish part of this is broken, and the feature tests are commented out as well.
  • We also don't have any way to test this, so it only gets run when a new release happens. It would be great if we could make it trigger-based and add a "dry-run" option so it doesn't actually push anything to crates.io; that way we can validate any changes to it (a sketch follows). (I think it would be fine if it creates a draft release on each run, as long as we remember to delete it afterwards.)
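A sketch of that trigger-based, dry-run-capable setup; the input name and publish steps are assumptions, not the current release.yml:

```yaml
# Hypothetical release.yml trigger: manually dispatched, with a dry-run
# switch so changes to the workflow can be validated without publishing.
on:
  workflow_dispatch:
    inputs:
      dry-run:
        description: "Validate the release without pushing to crates.io"
        type: boolean
        default: true

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: publish (dry run)
        if: ${{ inputs.dry-run }}
        run: cargo publish --dry-run
      - name: publish (real)
        if: ${{ !inputs.dry-run }}
        run: cargo publish
```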

podman_tests.yaml:

  • Currently this is disabled, as the tests are failing, and it is supposed to be cron-based anyway. We should check which tests are currently failing and enable/update it accordingly.

integration_tests_validation.yaml:

  • As discussed above, split this into runc and youki validation, move the youki validation into e2e, and always run it for each PR.
  • The same dorny/paths-filter update as in main.yaml.

benchmark_execution_time.yaml:

  • Currently this is broken, but it seems easily fixable (?).
  • We should try making the results visible in GitHub comments or similar. Running this on each PR might not be useful, but maybe make it comment-triggered plus cron-based (for master); a sketch follows this list. We can also use this to keep track of and compare benchmarks across runtimes such as youki, runc, and podman.
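As mentioned in the last bullet, a sketch of the combined comment-based + cron trigger; the trigger phrase and cron schedule are assumptions:

```yaml
# Hypothetical triggers for benchmark_execution_time.yaml: a nightly cron
# run for master, plus an on-demand run when a PR comment asks for it.
on:
  schedule:
    - cron: "0 3 * * *"   # nightly, runs against the default branch
  issue_comment:
    types: [created]

jobs:
  benchmark:
    # for comments, only react to PRs whose comment contains the phrase
    if: >-
      github.event_name == 'schedule' ||
      (github.event.issue.pull_request &&
       contains(github.event.comment.body, '/benchmark'))
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... run the benchmarks and post the results back as a PR comment
```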


yihuaf mentioned this issue Jun 6, 2023