Cache Docker Image #147
Conversation
This checks for an existing cache, keyed on the Docker image definition and the dependencies. If a cache is available, we download and restore it. If there is not, we run the tests, then save and upload the image to the cache for next time.
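For context, a minimal sketch of what such a cache step can look like, assuming actions/cache; the step id, path, and hashed files here are assumptions, not necessarily this PR's exact values:

```yaml
# Sketch only: the id, image name, and key inputs may differ from the PR.
- name: Cache Docker image
  id: cache-docker
  uses: actions/cache@v3
  with:
    path: /tmp/docker-save
    # A hit means neither the image definition nor the gems changed.
    key: docker-save-${{ hashFiles('Dockerfile', 'Gemfile.lock') }}
```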
479e9ea to 8189e59
```yaml
# Save the built image into the shared cache directory so actions/cache
# can upload it at the end of the run, even when the tests failed.
run: |
  mkdir -p /tmp/docker-save && docker save app:test -o /tmp/docker-save/snapshot.tar && ls -lh /tmp/docker-save
if: always() && steps.cache-docker.outputs.cache-hit != 'true'
```
Do we want to update the cache even if there's a test failure? Feels like it would be better to limit the cache to images that pass.
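For reference, restricting the save step to passing runs would mean dropping always() from the condition; a sketch:

```yaml
# success() is also the default when no status-check function is given,
# so this step would be skipped as soon as an earlier step fails.
if: success() && steps.cache-docker.outputs.cache-hit != 'true'
```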
I think there may be value in sticking with the initial approach.
I'm imagining the scenario where a pull request adds a new dependency to one of the earlier Docker layers, and the CI job fails because of the application tests, which are sourced from one of the last Docker layers. Given that such a PR is likely to generate another CI run that only changes the application layer, there is value in having a cached dependency layer on hand.
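To illustrate that scenario with a hypothetical Dockerfile (not necessarily this project's actual one): the dependency layers come before the application layer, so a cached image still lets Docker skip the expensive bundle install when only application code changes.

```dockerfile
FROM ruby:3.2

# Dependency layers: only rebuilt when the gems change.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Application layer: rebuilt on every code change, but cheap.
COPY . .
```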
Yep, that's exactly the scenario I envisaged.
I guess I was worried about broken stuff sitting in the cache and causing bugs (but I guess we should trust Docker). Is the cache isolated to a branch? Will a cache from a branch replace the main cache (and therefore mean everyone not on that branch will have slower builds)? Don't feel super strongly either way, though.
Build time starts at 5m 56s and reduces to 2m 31s. Thank you for adding this!
This work is being copied across from our Rails Template. It was added after this project was created [1]. We are already using Docker Compose in CI on this project. However, for context, there is another pending change on the Rails Template [2] that includes a screenshot of how caching can behave.

The approach here is that we create a shared area on disk called /tmp/docker-save and load it in at the start of every test run. This makes a past Docker image for this app available to the build context. Docker can then use that image and its layers to optimise the build steps, opting for the cached version rather than rebuilding from scratch.

If the cache is not hit (meaning the Dockerfile or Gemfile.lock changed), a new image is added to the cache at the end of the run. In that situation the build will still take as long as it does now, but this should be much less frequent.

[1] dxw/rails-template#147
[2] dxw/rails-template#213
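A sketch of the restore side described here, run at the start of the job. The step id and image name follow the earlier sketch and are assumptions; the project may wire the cached image in via Docker Compose's cache_from rather than docker build directly.

```yaml
- name: Restore cached Docker image
  if: steps.cache-docker.outputs.cache-hit == 'true'
  run: docker load -i /tmp/docker-save/snapshot.tar

- name: Build image
  # --cache-from lets the build reuse layers from the restored image
  # instead of rebuilding every step from scratch.
  run: docker build --cache-from app:test -t app:test .
```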