Issue 113 keys multi build and 103 slim JDKs #121
Conversation
Fix #113 by creating multistage builds that first download keys. Also took the opportunity to reorder Dockerfiles to reduce complexity and size. Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: Greg Wilkins <gregw@webtide.com>
@md5 this works OK... although it still fetches the keys once per image for a build with a clean local repository. But subsequent builds are good.
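As a rough illustration of that local caching behaviour (directory names and tags here are hypothetical, not necessarily this repo's layout), building one variant first warms the cache for the shared stages, and later variants reuse it:

# First build pays the key-fetch / download / verify cost and caches those stages.
docker build -t jetty:9.4-jdk13 9.4-jdk13/
# Later variants share the same first stages, so Docker reuses the cached layers.
docker build -t jetty:9.4-jre11-slim 9.4-jre11-slim/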
Signed-off-by: Greg Wilkins <gregw@webtide.com>
Cleaned up jetty-home usage. Tested the approach for slim JDKs by adding another multi stage to do the validation, since gpg is not available in slim builds. Signed-off-by: Greg Wilkins <gregw@webtide.com>
I have also extended/validated the approach to use another multistage intermediary to download and validate jetty. This allows for slim images that do not have gpg available, so this fixes #103 as well. The Dockerfiles for 9.4 now have a lot in common, so it may be that we are best to generate them rather than copy/paste.
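A minimal sketch of that pattern, assuming illustrative base tags, a placeholder key list, and paths that may differ from the actual Dockerfiles in this PR:

# Stage 1: a full JDK image that has gpg and curl, used only to fetch and verify.
FROM openjdk:13-jdk AS fetch
ENV JETTY_VERSION 9.4.26.v20200117
ENV JETTY_TGZ_URL https://repo1.maven.org/maven2/org/eclipse/jetty/jetty-home/$JETTY_VERSION/jetty-home-$JETTY_VERSION.tar.gz
# Placeholder fingerprint; the real Dockerfiles list the actual Jetty release keys.
ENV JETTY_GPG_KEYS 0000000000000000000000000000000000000000
RUN set -xe; \
	export GNUPGHOME=/jetty-keys; \
	mkdir -p "$GNUPGHOME" /jetty-home; \
	gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys $JETTY_GPG_KEYS; \
	curl -fL "$JETTY_TGZ_URL" -o /tmp/jetty.tar.gz; \
	curl -fL "$JETTY_TGZ_URL.asc" -o /tmp/jetty.tar.gz.asc; \
	gpg --batch --verify /tmp/jetty.tar.gz.asc /tmp/jetty.tar.gz; \
	tar -xzf /tmp/jetty.tar.gz -C /jetty-home --strip-components=1

# Stage 2: the slim runtime image; it needs neither gpg nor curl, it only copies the verified files.
FROM openjdk:13-slim
ENV JETTY_HOME /usr/local/jetty
COPY --from=fetch /jetty-home $JETTY_HOME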
Signed-off-by: Greg Wilkins <gregw@webtide.com>
@tianon @md5 with this PR the 9.4 images (jre8, jre11, jre11-slim, jdk13, jdk13-slim) now vary by only a single line in the Dockerfile and all share the same first two stages of the multistage build.
LGTM, definitely faster.
Signed-off-by: Greg Wilkins <gregw@webtide.com>
Also ran update of 9.2 and 9.3. Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: Greg Wilkins <gregw@webtide.com>
The update.sh script will copy the Dockerfile-9.4 template and modify it as needed. Signed-off-by: Greg Wilkins <gregw@webtide.com>
echo '# DO NOT EDIT. Edit Dockerfile-9.4 and use update.sh' > "$path"/Dockerfile
cat Dockerfile-9.4 >> "$path"/Dockerfile
sed -ri 's/^(FROM openjdk:)LABEL/\1'"$label"'/; ' "$path/Dockerfile"
fi
sed -ri 's/^(ENV JETTY_VERSION) .*/\1 '"$fullVersion"'/; ' "$path/Dockerfile"
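To make the substitution concrete, here is a small standalone demonstration of what the two sed expressions do to the template lines (the label and version values are made up for the example):

label=11-jre-slim
fullVersion=9.4.26.v20200117
# A two-line stand-in for the Dockerfile-9.4 template.
printf 'FROM openjdk:LABEL\nENV JETTY_VERSION unset\n' > /tmp/Dockerfile
sed -ri 's/^(FROM openjdk:)LABEL/\1'"$label"'/; ' /tmp/Dockerfile
sed -ri 's/^(ENV JETTY_VERSION) .*/\1 '"$fullVersion"'/; ' /tmp/Dockerfile
cat /tmp/Dockerfile
# FROM openjdk:11-jre-slim
# ENV JETTY_VERSION 9.4.26.v20200117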
I wonder, would it be possible to change the "templating" of the Dockerfile to use build args?
The benefit would be having the Jetty full version changeable in a single place (Makefile, Travis config) without the need to modify all Dockerfiles for Jetty 9.4 for all JVM variants. Also, the version would be explicit, and there would be no dependency on parsing Maven repo metadata for the latest version string starting with 9.4.
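A rough sketch of what the suggested build-arg approach could look like (argument names and default values are assumptions, not anything this repo currently defines):

# Select the openjdk variant and the Jetty version at build time instead of editing files.
ARG LABEL=13-jdk
FROM openjdk:${LABEL}
ARG JETTY_VERSION=9.4.26.v20200117
ENV JETTY_VERSION ${JETTY_VERSION}
# ... rest of the common Jetty setup ...

# A single place (Makefile / Travis config) then drives every variant, e.g.:
# docker build --build-arg LABEL=11-jre-slim --build-arg JETTY_VERSION=9.4.26.v20200117 -t jetty:9.4-jre11-slim .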
I would think so... but the practice appears to be to have all three variant Dockerfiles checked in and a script to generate them if need be.
So I've kept it this way for now (plus I don't really know Makefiles well enough). But if I can be told it is OK to parameterize, and helped with the Makefile...
The new multi-stage build doesn't fit either of the guidelines in the Official Images FAQ: https://github.com/docker-library/faq/#multi-stage-builds. Also note that the problem this is trying to solve is non-existent on the Official Image build servers: https://github.com/docker-library/faq/#openpgp--gnupg-keys-and-verification. See also docker-library/php#666 for how to add the same resiliency to builds on TravisCI, and see the pgp-happy-eyeballs approach.
@yosifkit I do not see the pgp-happy-eyeballs approach as viable. Firstly, it depends on a script loaded from a private repository, which is worse than our initially proposed solution of depending on an image that was built by the jetty project and contained all the required pgp signatures. It is also only a fix for Travis and will not help local builds.
Signed-off-by: Greg Wilkins <gregw@webtide.com>
I have updated this PR to use only a single multi-stage build, to be more in line with the guidelines. Ideally we should just be able to build the jdk13 image and then have all the other variants import from that.
Signed-off-by: Greg Wilkins <gregw@webtide.com>
See #122 for an alternative approach where the other 9.4 variants are based on the 9.4-jdk13 official image. I have a slight concern that this might not work with the official image library, as it appears to want to build each image from a tag... perhaps it will be OK if we order the generated stackbrew file well?
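For readers following along, a variant Dockerfile in that alternative approach would presumably look roughly like this (tags and paths are my assumptions about #122, not its actual contents):

# Take the already-built and verified distribution from the official jdk13 image...
FROM jetty:9.4-jdk13 AS jetty-base
# ...and lay it onto a slim runtime base, so no key fetching happens in this Dockerfile at all.
FROM openjdk:13-slim
COPY --from=jetty-base /usr/local/jetty /usr/local/jetty
COPY --from=jetty-base /var/lib/jetty /var/lib/jetty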
I have added my comment there.
This is too much to grasp at once, but since it works, that's fine, I guess.
Let's summarize what I see in the current changes:
Number one still won't fit the Official Images guidelines (https://github.com/docker-library/faq/#multi-stage-builds), and if replicating the setup in a single stage, the key fetching would look something like:

RUN set -xe; \
	mkdir /jetty-keys; \
	export GNUPGHOME=/jetty-keys; \
	for key in $JETTY_GPG_KEYS; do \
		for server in \
			ha.pool.sks-keyservers.net \
			p80.pool.sks-keyservers.net:80 \
			ipv4.pool.sks-keyservers.net \
			pgp.mit.edu \
		; do \
			if gpg --batch --keyserver "$server" --recv-keys "$key"; then \
				break; \
			fi; \
		done; \
	done; \
# count that we have all the keys?
# or just let the verify fail so that we see a key fully failed on all servers
	mkdir -p "$JETTY_HOME"; \
	cd "$JETTY_HOME"; \
	curl -fL "$JETTY_TGZ_URL" -o jetty.tar.gz; \
	curl -fL "$JETTY_TGZ_URL.asc" -o jetty.tar.gz.asc; \
	gpg --batch --verify jetty.tar.gz.asc jetty.tar.gz \
	...

Number two is a clever solution for Docker cache sharing in local builds but won't affect TravisCI (since it builds them in parallel). On the official images build infrastructure, "we don't have a clean way to preserve the cache for the intermediate stages", so the layers will get deleted when we clean up images that are "ripe". The practical implication is that since the build cache for these untagged stages could be deleted at any time, we will end up spending extra time rebuilding them and users will pull "new" images that are unchanged. See https://github.com/docker-library/faq/#multi-stage-builds and docker-library/official-images#5929 (comment).
Signed-off-by: Greg Wilkins <gregw@webtide.com>
@yosifkit I like that loop you suggested and have changed this PR to use that approach, even with the staged Dockerfile. I have, however, left this a multi-stage Dockerfile because the slim images simply do not have gpg or curl, so I need the full image to do the fetch and verify anyway.

So I'm happy with this approach, except with regard to what you say about "untagged stages could be deleted at any time". I'm trying to get my head around the implications of this. I understand that these build files do create untagged images... but then we copy out of them all the files we need, and they do not form a layer that is needed by the final image. So I don't see why a rebuild would be necessary if those untagged images are deleted? I am able to delete all my untagged images and still run the images locally.

If this is not acceptable, then I think I'm stumped how to make this work? Perhaps we could break this into two different projects: 1) build the main 9.2, 9.3 and 9.4 images; 2) build the jdk and slim variants based off the main images?
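For what it's worth, the local check described above is roughly the following (the tag name is hypothetical):

# Delete all dangling/untagged images, which includes the intermediate build stages...
docker image prune -f
# ...and the tagged final image still starts, since it contains only the last stage's layers.
docker run --rm -d -p 8080:8080 jetty:9.4-jre11-slim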
OK - I'm going to merge this in a few hours unless there are objections. |
Dang! This merge has broken the architectures for the official images... more work to do!