USER command should not require user or group IDs to exist #477
@hrobertson would you be so kind as to test the build with this modification? Add a FROM scratch at the top of the Dockerfile.
Please post your result, thanks a lot; it would help to find the error!
@miguelitoq76 I assume you didn't mean to have two FROMs?
In case you really did mean to have the two FROMs:
I have done some further investigation and the behaviour is even more erroneous than I first thought! Here I build three images containing the kaniko executor binary: one pulls the binary into a scratch base, the next into an alpine base, and the third additionally creates a user within the image.
I build the three separate images:
And here I build the usertest3 image using kaniko inside each of those images.
kaniko binary in scratch
kaniko binary in alpine
As you can see, kaniko is erroneously looking for the /etc/passwd file in the environment in which kaniko is running, not in the filesystem of the image it's building! Note that it shouldn't look for a passwd file at all when numeric IDs are specified rather than names.
kaniko binary in alpine with an existing user:
Until the first RUN directive causes
It looks like this is also an issue when using
Has anyone been able to work around this?
I encountered the same problem with a Dockerfile which has relevant lines like:
and it fails with the same problem as above.
I think it is a regression introduced somewhere between 3 Oct and 15 Nov. I use kaniko invoked from skaffold: skaffold v0.18 works fine (it uses kaniko v0.4 from 3 Oct) but v0.19 fails (it uses kaniko @ commit 0c29413 from Nov 15). skaffold v0.20 with kaniko v0.7 fails as well with the same problem. Maybe this helps to pinpoint the problem.
…s for chowning purposes, otherwise /etc/passwd isn't found, or wouldn't have the users/groups from the base image
I think it's just this line kaniko/pkg/util/command_util.go Line 309 in b0b36ed
Since all the code does is get a User object and then return the UID (which would match the userStr we looked up in the first place), I feel the correct behavior would be something like:
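A sketch of that idea follows. This is my own illustration of the suggestion, not the commenter's actual snippet or kaniko's code: if the USER value is numeric, return it directly and skip any passwd lookup; only names need resolving. The function name and signature are assumptions.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// resolveUserGroup splits a Dockerfile USER value of the form "user[:group]".
// Numeric IDs are returned as-is, mirroring docker build, which neither
// requires such users/groups to exist nor creates them.
// Hypothetical sketch, not kaniko's actual implementation.
func resolveUserGroup(userStr string) (uid, gid string, needsLookup bool) {
	parts := strings.SplitN(userStr, ":", 2)
	uid = parts[0]
	if len(parts) == 2 {
		gid = parts[1]
	}
	_, uidErr := strconv.Atoi(uid)
	_, gidErr := strconv.Atoi(gid)
	// Only a non-numeric user or group name requires consulting the
	// image's /etc/passwd. (As noted above, HOME and similar variables
	// won't be set automatically for bare numeric IDs.)
	needsLookup = uidErr != nil || (gid != "" && gidErr != nil)
	return uid, gid, needsLookup
}

func main() {
	uid, gid, lookup := resolveUserGroup("1000:1000")
	fmt.Println(uid, gid, lookup) // prints "1000 1000 false"
}
```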
edit: playing around with this a little bit, one issue seems to be that environment variables like HOME won't be set automatically if you use a numeric user ID, but that can be worked around if needed.
Maybe unrelated, but using
It happened to me with the cache enabled; as soon as I disabled it, it showed
Same as @mamoit: building first without USER and with the cache enabled, then using USER, will trigger this.
For me, a USER command in a cached layer produces the already-mentioned error message. Is there some kind of workaround for this?
This happens for us as well and stops us from using Kaniko.
Any update on this?
I think I was able to work around this just now by adding a minimal /etc/passwd to my container before the USER instruction. Dockerfile:

```dockerfile
ARG UPSTREAM_IMAGE=prom/alertmanager
ARG UPSTREAM_TAG=v0.15.3
FROM $UPSTREAM_IMAGE:$UPSTREAM_TAG as upstream

FROM scratch
LABEL project=<my-project>
COPY --from=upstream /bin/alertmanager /bin/alertmanager
COPY --from=upstream /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=upstream /etc/nsswitch.conf /etc/nsswitch.conf
# Work around Kaniko bug.
COPY --chown=0:0 passwd /etc/passwd
EXPOSE 9093
USER 4191718
ENTRYPOINT ["/bin/alertmanager"]
CMD [ "--config.file=/etc/alertmanager/config.yml", \
     "--storage.path=/alertmanager" ]
```

passwd:
Kaniko build log:
(This Dockerfile takes Prometheus' alertmanager, copies the binary and some other important files into a scratch container, and forces it to run as a non-root user. Stripping down third-party images like this is part of the security process where I work.) This is a clunky workaround and I'd still appreciate this bug being fixed.
Using gcr.io/kaniko-project/executor:debug allowed me to work around this issue.
When using executor:latest
When using executor:debug
Suffering from this as well on
With the latest version:
Using
@hrobertson, I tried reproducing error cases failure 1 and failure 2. I don't see this error anymore with gcr.io/kaniko-project/executor:latest.
Closing this, since it's a year old and I was not able to reproduce it.
Based on what is this issue closed?
This issue has still not been resolved, despite being closed. |
That's why we cannot centralize it as the builder engine in our pipeline systems: we want the same DX between a developer's local env and the pipeline build env. Any news?
Nope. If anybody reading this is interested in digging into this and submitting a fix I'll be happy to review it. |
I believe this comment still holds, pointing to where the issue is and a suggested way to resolve it: #477 (comment)
Great, sounds like we have an integration test Dockerfile that fails before the change, and an idea of where to make the change to make the test pass. All we need now is someone to propose the change 😇 |
OK, let me save you! Is the contributing guide simple?
Same issue with the kaniko image gcr.io/kaniko-project/executor:v1.8.1-debug
Can anyone in the thread here confirm if this is still an issue w/ the latest version of Kaniko - |
When I attempt the repro Dockerfile(s) in the initial post I am seeing build success w/ the latest version of Kaniko
Yes this looks like it's fixed. Thanks for flagging that up @aaron-prindle . |
It is common for a Dockerfile to contain a USER directive of the form USER 1000:1000. docker build does not require a user and group with those IDs to exist, nor does it create them. However, Kaniko errors. Interestingly, while creating a minimal Dockerfile to demonstrate this, I discovered that Kaniko errors in two different ways:
Working example using Docker:
Kaniko failure mode 1
Kaniko failure mode 2:
Note the addition of the RUN directive and the different error message from Kaniko.
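The reporter's repro Dockerfiles are not preserved in this copy of the thread. Based on the description (a numeric USER, with a RUN directive distinguishing the two failure modes), a minimal reconstruction might look like this; the base image choice is my own assumption:

```dockerfile
# Failure mode 1: numeric USER with no matching passwd entry
FROM alpine
USER 1000:1000

# Failure mode 2: same, but with a RUN directive before USER
# FROM alpine
# RUN true
# USER 1000:1000
```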