Fix and add test for build_image.sh: Make it invariant to arguments order #2226
Conversation
Codecov Report
@@ Coverage Diff @@
## master #2226 +/- ##
=======================================
Coverage 71.41% 71.41%
=======================================
Files 73 73
Lines 3348 3348
Branches 57 57
=======================================
Hits 2391 2391
Misses 954 954
Partials 3 3 📣 We’re building smart automated test selection to slash your CI/CD build times. Learn more |
@fabridamicelli Thanks for the PR!
Could you please consolidate the workflow file here with the other one you created?
You could call it Docker CI.
Ideally, we would want a single script to be run which will call a bunch of tests.
@agunapal
PS: I believe we should have a battery of usage examples (like the one here) that we consider to be fundamental, and we want to guarantee that every newly released image leads to a container that successfully runs those out of the box.
@fabridamicelli Please feel free to split the below items over one or multiple PRs as you see fit. Here is the benchmark config we run. You don't need to follow this exactly. I would say one example with ResNet, one with HF Transformers, one with Accelerated Transformers, and one with ONNX or TensorRT would be a great start. I would also recommend designing it in a way that makes it easy for you to extend the tests with number of workers, batch_size, etc.
Great, thank you for the hints. In the following PRs I will start putting something simple but extensible together, so that we can add more examples to the test battery step by step.
LGTM.
Just one minor nitpick about organizing the test files.
IMGS_FILE="test_images.json"
docker images --no-trunc --format "{{json .}}" | jq '{"repo": .Repository, "tag": .Tag, "digest": .ID}' | jq -s > "${IMGS_FILE}"

python <<EOF
Nit: Would it be better to put this python code in a separate file and invoke it here, or maybe implement the entire test in python?
Thanks @namannandan. Hi @fabridamicelli, this is perhaps another thing to consider for future PRs on adding more tests. It would be great if these could be implemented using pytest. You can take a look at this directory to see examples of how this is being done for the non-docker examples. We don't have an example with docker.
Thank you for reviewing and for the feedback!
@namannandan
I understand the point about mixing python and bash code in one script. My reasoning for having everything in one script is to have an atomic component (1 test -> 1 file), and the mixture will exist anyway if we call things from python with subprocess.run. Still, we could refactor it as you suggest and have it all in python.
@agunapal
Regarding pytest: I take the point and will check the other existing tests to follow the standards more closely, so we can refactor things step by step and iterate on the design of the docker tests, making them easy to extend with more tests and usage examples. I still feel it makes sense to keep the docker tests under docker/ so that all docker things are in one place.
Description
The current parsing of arguments in
docker/build_image.sh
is order dependent, which makes it easy to make mistakes. I found this out by trying to build an image with GPU support and a custom tag like so:
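(The exact command isn't preserved in this thread; reconstructed from the description below, it was along these lines:)

# Custom tag passed first via -t, then -g for the GPU image:
./build_image.sh -t orga/repo:custom-tag -g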
This produces an image with the right GPU NVIDIA base image, but with the following wrong tag:
pytorch/torchserve:latest-gpu
The correct tag should be
orga/repo:custom-tag
This happens because the argument parsing overrides DOCKER_TAG like so:
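A simplified sketch of the order-dependent pattern (illustrative; the real script sets additional variables such as the base image):

while [[ $# -gt 0 ]]; do
  case "$1" in
    -t|--tag)
      DOCKER_TAG="$2"
      shift 2
      ;;
    -g|--gpu)
      MACHINE=gpu
      # Unconditionally resets the tag, clobbering any value set earlier by -t:
      DOCKER_TAG="pytorch/torchserve:latest-gpu"
      shift 1
      ;;
    *)
      shift
      ;;
  esac
done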
Therefore, passing -g as the last argument (in particular, after -t) leads to the custom tag (-t) being ignored.
This PR fixes the argument parsing in build_image.sh to make it invariant to argument order and adds a test for it.
Type of change
Feature/Issue validation/testing
The fix works by checking whether a custom tag was passed and letting that override the
DOCKER_TAG
value, like so:
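A minimal sketch of the idea (variable names like CUSTOM_TAG are illustrative, not necessarily the ones used in the script):

CUSTOM_TAG=false
while [[ $# -gt 0 ]]; do
  case "$1" in
    -t|--tag)
      CUSTOM_TAG_VALUE="$2"
      CUSTOM_TAG=true
      shift 2
      ;;
    -g|--gpu)
      MACHINE=gpu
      DOCKER_TAG="pytorch/torchserve:latest-gpu"
      shift 1
      ;;
    *)
      shift
      ;;
  esac
done

# After parsing: a user-supplied tag always wins, regardless of flag order.
if [ "$CUSTOM_TAG" = true ]; then
  DOCKER_TAG="$CUSTOM_TAG_VALUE"
fi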
So now running the script with the flags in any order is equivalent:
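(Illustrative commands, using the tag from the example above; the long-form variants assume the -t|--tag and -g|--gpu aliases:)

./build_image.sh -t orga/repo:custom-tag -g
./build_image.sh -g -t orga/repo:custom-tag
./build_image.sh --gpu --tag orga/repo:custom-tag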
and they all build an image with tag
orga/repo:custom-tag
and the same digest.
Checklist: