
Issue 113 keys multi build and 103 slim JDKs #121

Merged
merged 12 commits into appropriate:master on Dec 15, 2019

Conversation

@gregw (Contributor) commented Nov 24, 2019

Fix #113 by creating multistage builds that first download keys.
Also took opportunity to reorder Dockerfiles to reduce complexity and size.

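For context, a minimal sketch of the idea, assuming a JETTY_GPG_KEYS variable holding the release-signing key IDs (stage names, tags, and paths here are illustrative, not the exact Dockerfiles in this PR):

# stage 0: fetch the signing keys once; keyserver flakiness is contained here
FROM openjdk:11-jre as jetty-keys
# placeholder key IDs; the real Dockerfiles define the actual list
ENV JETTY_GPG_KEYS="KEYID1 KEYID2"
ENV GNUPGHOME=/jetty-keys
RUN mkdir -p "$GNUPGHOME" && chmod 700 "$GNUPGHOME" \
	&& for key in $JETTY_GPG_KEYS; do \
		gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
	done

# final stage: copy the fetched keyring instead of hitting the network again
FROM openjdk:11-jre
COPY --from=jetty-keys /jetty-keys /jetty-keys
# ... download jetty.tar.gz and gpg --verify it against the copied keyring ...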

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw requested review from md5 and olamy on November 24, 2019 at 22:41
@gregw (Contributor Author) commented Nov 24, 2019

@md5 this works OK... although it still fetches the keys once per image for a build with a clean local repository. But subsequent builds are good.

Signed-off-by: Greg Wilkins <gregw@webtide.com>
Cleaned up jetty-home usage
Tested the approach for slim JDKs by adding another multi-stage step to do the validation, since gpg is not available in slim builds

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw changed the title from "Issue 113 keys multi build" to "Issue 113 keys multi build and 103 slim JDKs" on Nov 24, 2019
@gregw (Contributor Author) commented Nov 24, 2019

I have also extended and validated the approach to use another multi-stage intermediary to download and validate Jetty. This allows for slim images that do not have gpg available, so this fixes #103 as well.

The Dockerfiles for 9.4 now have a lot in common, so we may be better off generating them rather than copy/pasting.
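Roughly, the shape this takes for a slim variant (image tags and paths are illustrative): the download and gpg verification run in a full image, and the slim final stage only copies the verified tree.

# intermediary with curl and gpg available: download and verify the distribution
FROM openjdk:13-jdk as jetty-base
# ... curl the jetty tarball, gpg --verify it, unpack into /usr/local/jetty ...

# slim final image: needs neither gpg nor curl, just the verified files
FROM openjdk:13-jdk-slim
COPY --from=jetty-base /usr/local/jetty /usr/local/jetty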

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor Author) commented Nov 24, 2019

@tianon @md5 with this PR the 9.4 images (jre8, jre11, jre11-slim, jdk13, jdk13-slim) now vary by only a single line in the Dockerfile and all share the same first two stages of the multi-stage build.
Surely it would be better to use a Makefile dependency to create a common/local 9.4 image and then generate each Java variant from that? Unfortunately I don't have the Makefile-fu to do this.
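The dependency being described amounts to something like the following shell sequence (the --target stage name, local tags, and directory layout are all assumptions, not part of this PR): build the shared stage once, then let each variant reuse the cache.

# build the common 9.4 stage once so it lands in the local build cache
docker build --target jetty-base -t jetty-local:9.4-base 9.4/jdk13
# each variant build then reuses the cached stage instead of refetching keys
docker build -t jetty:9.4-jre11 9.4/jre11
docker build -t jetty:9.4-jre11-slim 9.4/jre11-slim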

@olamy left a comment


LGTM, definitely faster.

Signed-off-by: Greg Wilkins <gregw@webtide.com>
Also ran update of 9.2 and 9.3

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor Author) commented Nov 25, 2019

Hmm, the update script was (a) out of date and (b) had not been run for 9.2 and 9.3 (oops, I'd been doing it manually). So this PR is now getting rather large. I have a review already, but would like another opinion from @md5 and/or @tianon.

The update.sh script will copy the Dockerfile-9.4 template
and modify it as needed.

Signed-off-by: Greg Wilkins <gregw@webtide.com>
echo '# DO NOT EDIT. Edit Dockerfile-9.4 and use update.sh' > "$path"/Dockerfile
cat Dockerfile-9.4 >> "$path"/Dockerfile
sed -ri 's/^(FROM openjdk:).*/\1'"$label"'/; ' "$path/Dockerfile"
fi
sed -ri 's/^(ENV JETTY_VERSION) .*/\1 '"$fullVersion"'/; ' "$path/Dockerfile"
Contributor


I wonder, would it be possible to change the "templating" of the Dockerfile to use build args?
The benefit would be having the Jetty full version changeable in a single place (Makefile, Travis config) without the need to modify all the Dockerfiles for all JVM variants of Jetty 9.4. Also, the version would be explicit, and there would be no dependency on parsing Maven repo metadata for the latest 9.4.x version string.
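A sketch of what that build-args templating could look like (not what this PR adopts; the arg names are illustrative):

# one Dockerfile for all variants, parameterized at build time
ARG JDK=13-jdk
FROM openjdk:${JDK}
# build args declared before FROM go out of scope, so redeclare here
ARG JETTY_VERSION
ENV JETTY_VERSION=${JETTY_VERSION}
# ... fetch and verify jetty ${JETTY_VERSION} as the existing RUN step does ...

# invoked from a single place (Makefile, .travis.yml), e.g.:
#   docker build --build-arg JDK=11-jre-slim --build-arg JETTY_VERSION="$fullVersion" .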

Contributor Author


I would think so... but the practice appears to be to have all three variant Dockerfiles checked in and a script to generate them if need be.

So I've kept it this way for now (plus I don't really know make well enough). But if I can be told it is OK to parameterize, and helped with the Makefile...

@yosifkit commented:

The new multi-stage build doesn't fit either of the guidelines in the Official Images FAQ: https://github.com/docker-library/faq/#multi-stage-builds.

Also note that the problem this is trying to solve is non-existent on the Official Images build servers: https://github.com/docker-library/faq/#openpgp--gnupg-keys-and-verification. See also docker-library/php#666 for how to add the same resiliency to builds on TravisCI, and see hack-my-builds.sh for the details of how that is accomplished.

@gregw (Contributor Author) commented Nov 27, 2019

@yosifkit I do not see the pgp-happy-eyeballs approach as viable. Firstly, it depends on a script loaded from a private repository, which is worse than our initial proposed solution of depending on an image built by the Jetty project that contained all the required PGP signatures. Secondly, it only fixes Travis and will not help local builds.

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor Author) commented Nov 27, 2019

I have updated this PR to use only a single multi-stage build, to be more in line with the guidelines.
I still do not like the fact that we essentially build the Jetty 9.4 base image multiple times, even though it is identical in all the variants.

Ideally we should just be able to build the jdk13 image and then have all the other variants import from that.
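What "import from that" would look like, roughly (this is the shape explored in #122 below; tags and paths are illustrative):

# hypothetical slim variant built after the jdk13 image already exists
FROM jetty:9.4-jdk13 as donor

FROM openjdk:13-jdk-slim
COPY --from=donor /usr/local/jetty /usr/local/jetty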

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor Author) commented Nov 27, 2019

See #122 for an alternative approach where the other 9.4 variants are based on the 9.4-jdk13 official image. I have a slight concern that this might not work with the official image library, as it appears to want to build each image from a tag... perhaps it will be OK if we order the generated stackbrew file correctly?

@gregw (Contributor Author) commented Nov 27, 2019

@yosifkit @vutny can you have a look at #122 as an alternative?

@vutny (Contributor) commented Dec 5, 2019

I have added my comment there.

@gregw (Contributor Author) commented Dec 6, 2019

As the #122 approach does not play nicely with Travis, it is my intention to merge this PR unless there are objections accompanied by a concrete proposal for something better: @md5, @vutny, @yosifkit?

@vutny (Contributor) commented Dec 6, 2019

This is too much stuff here to grasp at once, but since it works, that's fine, I guess.

@yosifkit commented Dec 7, 2019

Let's summarize what I see in the current changes:

  1. 9.2-jre8 and 9.3-jre8 have been split into a 2-stage build: stage 0 downloads a list of gpg keys; stage 1 copies in those keys and verifies/installs Jetty.
  2. 9.4-* images have also been split into a 2-stage build: stage 0 is always from openjdk:13-jdk and downloads the gpg keys and verifies/installs Jetty (identical across the 9.4 Dockerfiles); stage 1 copies Jetty from the previous stage.

Number one still won't fit in the Official Images (https://github.com/docker-library/faq/#multi-stage-builds), and if replicating the setup in hack-my-builds is not viable, then I recommend a retry loop like the one Node uses. So something like this:

RUN set -xe; \
	mkdir /jetty-keys; \
	export GNUPGHOME=/jetty-keys; \
	for key in $JETTY_GPG_KEYS; do \
		for server in \
			ha.pool.sks-keyservers.net \
			p80.pool.sks-keyservers.net:80 \
			ipv4.pool.sks-keyservers.net \
			pgp.mit.edu \
		; do \
			if gpg --batch --keyserver "$server" --recv-keys "$key"; then \
				break; \
			fi; \
		done; \
	done; \
	# count that we have all the keys?
	# or just let the verify fail so that we see a key fully failed on all servers
	mkdir -p "$JETTY_HOME"; \
	cd "$JETTY_HOME"; \
	curl -fL "$JETTY_TGZ_URL" -o jetty.tar.gz; \
	curl -fL "$JETTY_TGZ_URL.asc" -o jetty.tar.gz.asc; \
	gpg --batch --verify jetty.tar.gz.asc jetty.tar.gz \
	...

Number two is a clever solution for docker cache sharing in local builds but won't affect TravisCI (since it builds them in parallel). On the official images build infrastructure, "we don't have a clean way to preserve the cache for the intermediate stages", so the layers will get deleted when we clean up images that are "ripe". The practical implication of this is that since the build cache for these untagged stages could be deleted at any time, we will end up spending extra time rebuilding them and users will pull "new" images that are unchanged.

See https://github.com/docker-library/faq/#multi-stage-builds and docker-library/official-images#5929 (comment).

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor Author) commented Dec 8, 2019

@yosifkit I like that loop you suggested and have changed this PR to use that approach, even within the staged Dockerfile.
I've also modified 9.2 and 9.3 to work the same way as 9.4.

I have, however, left this as a multi-stage Dockerfile, because the slim images simply do not have gpg or curl, so I need the full image to do the fetch and verify anyway.

So I'm happy with this approach except with regard to what you say about "untagged stages could be deleted at any time". I'm trying to get my head around the implications of this. I understand that these builds do create untagged images... but we then copy out of them all the files we need, and they do not form a layer that is needed by the final image. So I don't see why a rebuild would be necessary if those untagged images are deleted? I am able to delete all my untagged images and still run the images locally.
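That local observation can be reproduced along these lines (a sketch; the tag and directory names are assumed):

docker build -t jetty:9.4-jre11-slim 9.4/jre11-slim  # leaves untagged stage images behind
docker image prune -f                                # removes the dangling (untagged) images
docker run --rm jetty:9.4-jre11-slim                 # the final image still runs
# the catch: a later rebuild must re-run the pruned stages from scratch,
# which is the rebuild cost described above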

If this is not acceptable, then I think I'm stumped as to how to make this work.

Perhaps we could break this into two different projects: 1) build the main 9.2, 9.3 and 9.4 images; 2) build the jdk and slim variants based off the main images?

@gregw (Contributor Author) commented Dec 12, 2019

OK - I'm going to merge this in a few hours unless there are objections.

@gregw merged commit a66c862 into appropriate:master on Dec 15, 2019
@gregw (Contributor Author) commented Dec 16, 2019

Dang! This merge has broken the architectures for the official images... more work to do!

@gregw (Contributor Author) commented Dec 19, 2019
