chore: speedup docker build time #6787
Conversation
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files

```
@@            Coverage Diff             @@
##           unstable    #6787    +/-   ##
==========================================
  Coverage     62.19%   62.20%
==========================================
  Files           571      571
  Lines         60021    60021
  Branches       1973     1976     +3
==========================================
+ Hits          37333    37334     +1
+ Misses        22645    22644     -1
  Partials         43       43
```
need to confirm this does not increase the image size
I added an experiment using a lodestar base builder image. This reduces the need to either use a big image or add extra tooling via `apk` every run.
docker/builder/Dockerfile (Outdated)

```
@@ -0,0 +1,2 @@
FROM node:20-alpine
RUN apk update && apk add --no-cache g++ make python3 && rm -rf /var/cache/apk/*
```
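Presumably the idea is to build this base image once and point the main Dockerfile's build stage at it; a rough sketch of how that would look (the `lodestar-builder` tag is an assumption for illustration, not an actual published image):

```bash
# Build (or pull) the base builder image once; it already contains g++/make/python3
docker build -t lodestar-builder:node-20 ./docker/builder
```

```dockerfile
# The main Dockerfile's build stage would then start from it instead of node:20-alpine,
# so no `apk add` step needs to run on every build
FROM lodestar-builder:node-20 AS build
```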
What was the argument against using `node:20` as builder image?
For some unclear reason it fails to build native libs in the second stage. Might be the original reason for using `alpine`.
Also this one would be smaller and offers more opportunity for further tweaks down the road.
I am really not sure it's worth having a separate builder image just for these dependencies, as the layer will always be retrieved from the local cache after the first local build, and even when it is executed it only took 11 seconds on my machine.
The issue I see with using a separate builder image is that right now we automatically use the latest 20.x version, which is good for getting security updates in. There is also extra maintenance to keep this up to date if it's not part of the CI, although we could build this image before running the main docker build.
Moved back to a single `node` image.
```bash
yarn config set yarn-offline-mirror .yarn/yarn-offline-mirror
mv ~/.yarnrc ./ # `yarn config` commands are always global, make `.yarnrc` specific to this project
```
What if someone has settings in their global yarn config? Can't we just commit a `.yarnrc` to the project with this setting in it?
If we do that, the offline mirror will always be enabled. Not sure if we want that by default.
What's the disadvantage other than the storage increase, which is not a lot?

```bash
> du -sh .yarn
396M    .yarn
```

The advice to move the global yarn config into the current folder is not ideal; what if there are other settings in this file? It means the global yarn config was effectively deleted.

If we want to support the offline mirror we would just add a `.yarnrc` in git:

```
yarn-offline-mirror ".yarn/yarn-offline-mirror"
yarn-offline-mirror-pruning true
```
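For context, this is roughly how the offline mirror would be consumed once populated; a minimal sketch using yarn classic (the `--prefer-offline`/`--offline` flags are standard yarn v1 options, shown here as an assumption about the intended usage):

```bash
# The first install while online fills .yarn/yarn-offline-mirror with package tarballs
yarn install

# Later installs can resolve from the mirror, avoiding registry round-trips
yarn install --prefer-offline   # use the mirror, fall back to the network if a tarball is missing
yarn install --offline          # fail instead of hitting the network at all
```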
I don't think this is correct. By default docker will only create an image for the platform it's built on. To create multi-arch images you need to pass `--platform`.
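For reference, a minimal sketch of the difference (tags and the platform list are illustrative):

```bash
# Default `docker build` targets only the host platform
docker build -t lodestar:local .

# Multi-arch images need buildx with an explicit --platform list
# (typically combined with --push, since multi-platform results cannot be
# loaded into the classic local image store with --load)
docker buildx build --platform linux/amd64,linux/arm64 -t lodestar:local .
```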
In a separate Dockerfile only used for local dev, yes.
```
ARG COMMIT
WORKDIR /usr/app
RUN apk update && apk add --no-cache g++ make python3 py3-setuptools && rm -rf /var/cache/apk/*
```
> Using regular node images for build layers; this removes the need to add extra tooling via apk every run

This layer will be retrieved from cache after the first run. Rather than reducing build time, a good argument for using the full `node:22` image as base is that we reduce the likelihood that system deps are missing. But overall, this might increase the build time in the CI, as the `node:22` image is not cached there and, afaik, is quite large to download.
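To make the trade-off concrete, a sketch of the two build-stage bases being compared (stage name and exact tags are illustrative, not taken from the PR diff):

```dockerfile
# Variant A (alpine base): native-build tooling added via apk
# (cheap once the layer is cached, but an extra step on cold builds)
FROM node:22-alpine AS build
RUN apk update && apk add --no-cache g++ make python3 py3-setuptools && rm -rf /var/cache/apk/*

# Variant B (full node image as base): per the PR description it already ships
# the tooling needed to build native libs, at the cost of a much larger download
# when the image is not already cached (e.g. in CI)
# FROM node:22 AS build
```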
I would expect that if download time is an issue for CI, there would be ways to improve this via some caching.
The main goal of those changes is to speed up docker builds during dev cycles.
> The main goal of those changes is to speed up docker builds during dev cycles.

It's not clear to me then why we need to use a full `node:22`, as based on my observations the apk install layer will always be retrieved from cache after the first run.
Hmm can't reproduce what I believe I was seeing during the interop indeed.
Did some testing with the offline mirror vs. without; the build time seems to be roughly the same. The reason, I would assume, is that I have all packages cached anyway, so there shouldn't be a difference.

> Hmm can't reproduce what I believe I was seeing during the interop indeed.

Maybe it still does some checks against the yarn registry? There were definitely request timeouts I noticed during the interop which we might be able to avoid by using an offline mirror.
Doesn't appear to provide value.
Motivation

Speed up docker builds.

Description

Tweak the current Docker build process to improve build speeds. This is achieved by:

- Using regular node images for build layers; this removes the need to add extra tooling via `apk` every run.
  - Using regular node images comes at a higher download cost (image is bigger) but is cached.
  - `slim` images do not ship with the necessary tooling.

Note that the current Docker build creates a multi-platform Docker image (`arm64` and `amd64`). We can probably improve things further by having a dedicated Dockerfile for dev usage.

Results
Before
1st run: 5m 22s
Subsequent runs, after code change: 4m 52s
After
1st run: 3m 16s
Subsequent runs, after code change: 2m 32s