
Kaniko doesn't cache in Multistage when copying files from a modified stage #1244

Closed
thaituan opened this issue May 7, 2020 · 7 comments · Fixed by #2559
Labels: area/caching, help wanted, kind/bug, priority/p3
Comments


thaituan commented May 7, 2020

Actual behavior
In a multistage build, caching does not happen when copying a file from a stage that has changed to another stage, even when the change does not affect the copied file.
This seems to have started after the fix for issue #589.

Expected behavior
Layers that haven't changed in the previous stage should still be cached in the next stage.

To Reproduce
Steps to reproduce the behavior:

  1. See https://github.com/thaituan/kaniko-cache
  2. Build docker image with cache on
  3. Update scripts in package.json (without changing dependencies or devDependencies)
    Example: "test": "echo \"Error: no test specified\" && exit 10" -> "test": "echo \"Error: no test specified\" && exit 20"
  4. Re-build docker image with cache on

Additional Information

FROM node:10-slim AS deps
COPY package.json /tmp
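# Produce a stripped package.json containing only dependencies/devDependencies,
# so /tmp/package.deps.json changes only when the deps themselves change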
RUN cat /tmp/package.json | xargs -0 -i% node -pe 'o=(%);JSON.stringify({dependencies:o.dependencies,devDependencies:o.devDependencies})' > /tmp/package.deps.json

FROM node:10-slim
WORKDIR /app
COPY --from=deps /tmp/package.deps.json ./package.json
COPY ["yarn.lock", "./"]
RUN yarn install

The COPY --from=deps /tmp/package.deps.json ./package.json layer is supposed to stay cached as long as dependencies and devDependencies are unchanged.
Building with the docker command works as expected, but kaniko does not cache it.

Description | Yes/No
Please check if this is a new feature you are proposing | No
Please check if the build works in docker but not in kaniko | Yes
Please check if this error is seen when you use --cache flag | Yes
Please check if your dockerfile is a multistage dockerfile | Yes
@thaituan thaituan changed the title Kaniko doesn't cache in multistage when that stage has changed Kaniko not cache in multistage when copying files from a modified stage May 7, 2020
@thaituan thaituan changed the title Kaniko not cache in multistage when copying files from a modified stage Kaniko doesn't cache in Multistage when copying files from a modified stage May 7, 2020
@tejal29 tejal29 added kind/bug Something isn't working priority/p3 agreed that this would be good to have, but no one is available at the moment. area/caching For all bugs related to cache issues labels May 7, 2020

tejal29 commented May 7, 2020

@thaituan The way kaniko caching works right now, it also mixes the cache key of the deps stage into the composite cache key for the COPY command. You can make a change here

compositeKey = s.populateCopyCmdCompositeKey(command, v.From(), compositeKey)

and that should fix your issue.
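
For context, here is a minimal sketch of the behavior described above (illustrative Go only, not kaniko's actual code; the CompositeKey type and the digest value are made up for this example). It shows how mixing the digest of the whole source stage into a COPY --from layer's key invalidates that layer even when the copied file is byte-identical.

package main

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// CompositeKey is an illustrative stand-in for kaniko's composite cache key:
// an ordered list of ingredients hashed into the final layer cache key.
type CompositeKey struct {
	keys []string
}

// AddKey appends one ingredient to the composite key.
func (c *CompositeKey) AddKey(k string) { c.keys = append(c.keys, k) }

// Hash collapses all ingredients into a single cache key.
func (c *CompositeKey) Hash() string {
	sum := sha256.Sum256([]byte(strings.Join(c.keys, "\n")))
	return fmt.Sprintf("%x", sum)
}

func main() {
	// Digest of the *whole* deps stage; it changes whenever anything in that
	// stage changes, e.g. editing the "scripts" field of package.json.
	depsStageDigest := "sha256:aaaa"

	key := &CompositeKey{}
	key.AddKey("COPY --from=deps /tmp/package.deps.json ./package.json")
	// The step under discussion: mixing the source stage's digest into the
	// key changes it even when the copied file itself did not change.
	key.AddKey(depsStageDigest)

	fmt.Println("cache key:", key.Hash())
}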


thaituan commented May 8, 2020

@tejal29 thanks for your response. I see that this line:

compositeKey = s.populateCopyCmdCompositeKey(command, v.From(), compositeKey)

and this line:

compositeKey = s.populateCopyCmdCompositeKey(command, v.From(), compositeKey)

are the same call in two different cases. It looks like the cache key of the deps stage is being generated incorrectly. Is there a way to create a cache key that works the same way as docker build?
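
For comparison, docker invalidates a COPY layer based on a checksum of the copied files themselves rather than on the source stage as a whole, which is why the docker build stays cached in this scenario. A minimal sketch of that content-based idea (illustrative Go, not docker's or kaniko's actual code):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// contentKey derives a cache key only from the bytes of the files being
// copied, so the key is stable as long as those files are unchanged,
// regardless of what else happened in the source stage.
func contentKey(paths ...string) (string, error) {
	h := sha256.New()
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return "", err
		}
		if _, err := io.Copy(h, f); err != nil {
			f.Close()
			return "", err
		}
		f.Close()
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	key, err := contentKey("package.deps.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("content-based cache key:", key)
}

With a key like this, editing the scripts field of package.json leaves package.deps.json, and therefore the key, unchanged.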


tejal29 commented May 8, 2020

We need to remove it from both cases, i.e.

  • when a cached command is used
  • when a regular command is used.

Please let me know if you need any more pointers.

@tejal29 tejal29 added the help wanted Looking for a volunteer! label May 8, 2020

thaituan commented May 8, 2020

@tejal29
Thank you for the help.
I tried removing this block:

switch v := command.(type) {
case *commands.CopyCommand:
	compositeKey = s.populateCopyCmdCompositeKey(command, v.From(), compositeKey)
case *commands.CachingCopyCommand:
	compositeKey = s.populateCopyCmdCompositeKey(command, v.From(), compositeKey)
}

and it looks like that solved my problem. Please take a look when you have time.

@HubertBos

Hello

We ran into a very similar situation today, where a multistage build produced a new binary in the first stage but the second stage still used a cached layer.


gsaraf commented Jun 14, 2021

Not quite the same thing, but related: when using the executor:latest image with the --cache-copy-layers flag, kaniko was using cached versions of files copied from previous stages even when they had changed. When we removed that flag, it no longer reproduced.

@massimeddu-sj

Hi, now that #2065 is merged, it would be great if this could be solved, so it would be safe to re-enable the cache for copy layers in multistage images.

For example, we have a few images that were heavily affected by not being able to use the copy-layer cache; it increased their build time by around 100%.

@tejal29 @thaituan any chance to see a fix for this issue?

Thank you!
