Rootless buildah/stable image not working #3053
Comments
This means the host user namespace is not large enough to include buildah inside of the container. Inside the container you have only 65000 UIDs, but the container wants to start with UID 100000. |
Ok I got it to work, but it is not pretty.
We are going to need container storage that is not mounted on fuse-overlayfs, since fuse-overlayfs will not work on top of fuse-overlayfs. We mount the volume into the podman container and add the /dev/fuse device so that we can use fuse-overlayfs inside of the container. Otherwise we could use the vfs storage driver.
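The exact commands are not shown in the comment; a minimal sketch of such an invocation, where the host path, mount point, and image tag are assumptions, could look like:

# host directory for container storage, mounted where the in-container user keeps its
# storage, plus /dev/fuse so fuse-overlayfs can be used inside the container
mkdir -p "$PWD/container-storage"
podman run -it --device /dev/fuse \
    -v "$PWD/container-storage":/home/build/.local/share/containers:Z \
    quay.io/buildah/stable:latest /bin/bash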
Notice that I am logged in as root; this is because I need to modify the /etc/subuid and /etc/subgid files to use a smaller range, since my container has only 65k UIDs to use. I pick UID 2000 and then the next 50000 UIDs.
I also want to chown the home directory, including the volume I mounted in, so that it is owned by the buildah user.
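A sketch of those root-side steps, assuming the unprivileged user is named build (as in the buildah/stable image) with its home at /home/build:

# as root inside the container: give the user a 50000-wide UID/GID range starting at 2000
echo "build:2000:50000" > /etc/subuid
echo "build:2000:50000" > /etc/subgid
# make the home directory (and the volume mounted under it) owned by that user
chown -R build:build /home/build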
Now I switch to the buildah user and create the Containerfile.
Now I want to run buildah bud, but I have to use --isolation=chroot; otherwise buildah will try to create devices, which I am not allowed to do in a rootless environment.
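Put together, the non-root steps could look roughly like this (the Containerfile contents and the image tag are placeholders, not taken from the comment):

# switch to the unprivileged user (interactive)
su - build
# in that shell, create a trivial Containerfile
printf 'FROM registry.fedoraproject.org/fedora-minimal\nRUN echo hello\n' > Containerfile
# chroot isolation avoids creating device nodes, which a rootless user cannot do
buildah bud --isolation=chroot -t test -f Containerfile .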
With that, it works. We could make this much easier if we modified the default range of UIDs inside the buildah/stable container and defaulted to isolation=chroot for rootless users. |
I did the following and it appeared to work:

FROM quay.io/buildah/stable:v1.19.6
RUN echo "build:2000:50000" > /etc/subuid
RUN echo "build:2000:50000" > /etc/subgid

Should the buildah/stable image be updated to have these values? |
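For reference, building and running such a customized image might look like the following; the tag and flags here are illustrative, not taken from the comment:

# build the customized image from the Containerfile above, then run it as the build user
podman build -t buildah-rootless .
podman run --rm -it --device /dev/fuse --user build buildah-rootless /bin/bash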
Seems reasonable to me. |
Yes, although this should be exposed in containers.conf. |
Okay, let me test this on OpenShift since that's my "real" use case. |
My custom image is still not working. I am not sure what is different from buildah/stable here. My image is not built FROM buildah/stable because I have my own base image. I am still running it with podman: You can pull it from there if you like. The dockerfile is here and the base image's dockerfile is here
Environment dump doesn't show any obvious problem
You can see the uidmap warnings are printed, and indeed when I run
What else could I check? |
Check to see if newuidmap and newgidmap have the setfcap flags set inside of your container. Sometimes you have to reinstall the shadow-utils package. Here is the Containerfile we use to build buildah/stable: https://github.com/containers/buildah/blob/master/contrib/buildahimage/stable/Dockerfile |
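A quick way to check this; getcap comes from the libcap tools, and the reinstall step assumes a Fedora-based image such as buildah/stable:

# the file capabilities should include cap_setuid / cap_setgid
getcap /usr/bin/newuidmap /usr/bin/newgidmap
# if nothing is printed, reinstalling shadow-utils usually restores them
dnf -y reinstall shadow-utils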
Wow, I already had … I'm glad I opened an issue instead of continuing to bang my head against this. Thank you so much. |
Should I leave this open for the change proposed above in #3053 (comment)? |
Better yet open a PR to fix it. |
my pleasure! |
Thank you for this document; I have struggled with the same issue for days on my own image in which I wanted to use buildah. I only got it to work by picking apart the buildah image on quay.io ... and finding the odd uid range of 2000:50000. When I did this, it worked for me in my own image. Then I googled that (2000:50000) to see if there is any information why this is important... and it led immediately to this issue. Kind suggestion: update the documentation for this please. There are many articles out there talking about buildah within a container... and none of them mention this absolutely critical bit of information. |
@aaabdallah Thanks for the digging and discovery. I've created #3119 so we can clean this up. |
Well... I only wasted 3 hours of my life trying to figure out why my rootless buildah bud builds inside a container stopped working between buildah 1.15 and 1.19 when I finally found this issue. Doing the |
I'm trying to initiate CI builds from GitLab with my Kubernetes runner. I'm facing similar issues to those described above and am not really sure where to turn yet. Firstly, my cluster is backed by CRI-O on openSUSE Kubic. The first error I observe is:

$ buildah bud --format docker -f $CI_PROJECT_DIR/$CONTAINER_ROOT/Containerfile -t $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG $CONTAINER_ROOT
Error: error writing "0 0 4294967295\n" to /proc/25/uid_map: write /proc/25/uid_map: operation not permitted
level=error msg="error writing \"0 0 4294967295\\n\" to /proc/25/uid_map: write /proc/25/uid_map: operation not permitted"
level=error msg="(unable to determine exit status)"
Cleaning up file based variables
00:00
ERROR: Job failed: command terminated with exit code 1

This somewhat makes sense, as our buildah stable Containerfile doesn't have a … So to further debug this I drop into a temporary pod on Kubernetes.
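The exact command is not included in the comment; dropping into a throwaway pod typically looks something like this, with the image tag and pod name as placeholders:

# start a temporary interactive pod from the buildah/stable image and remove it on exit
kubectl run buildah-bud --rm -it --image=quay.io/buildah/stable:v1.19.6 -- /bin/bash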
I'm greeted with:

If you don't see a command prompt, try pressing enter.

As the build user:

[root@buildah-bud /]# su build
[build@buildah-bud /]$ buildah pull docker.io/busybox
WARN[0000] Error loading container config when searching for local runtime: no such file or directory
ERRO[0000] failed to setup From and Bud flags: failed to get container config: no such file or directory
ERRO[0000] exit status 1

and, as expected (I think), with the root user:

[root@buildah-bud /]# buildah pull busybox
ERRO[0000] error writing "0 0 4294967295\n" to /proc/67/uid_map: write /proc/67/uid_map: operation not permitted
Error: error writing "0 0 4294967295\n" to /proc/67/uid_map: write /proc/67/uid_map: operation not permitted
ERRO[0000] (unable to determine exit status)

Am I missing something? Kubernetes runs as root. FWIW, with the above hack applied, buildah --log-level debug shows:
DEBU[0000] running [buildah-in-a-user-namespace --log-level debug] with environment [SHELL=/bin/bash KUBERNETES_SERVICE_PORT_HTTPS=443 WHOAMI_SERVICE_PORT_HTTP=80 KUBERNETES_SERVICE_PORT=443 HOSTNAME=buildah-bud WHOAMI_SERVICE_HOST=10.42.39.229 DISTTAG=f34container PWD=/ LOGNAME=build container=oci HOME=/home/build LANG=C.UTF-8 KUBERNETES_PORT_443_TCP=tcp://10.42.0.1:443 WHOAMI_PORT_80_TCP=tcp://10.42.39.229:80 WHOAMI_SERVICE_PORT=80 BUILDAH_ISOLATION=chroot TERM=xterm WHOAMI_PORT_80_TCP_PROTO=tcp USER=build SHLVL=2 WHOAMI_PORT_80_TCP_PORT=80 KUBERNETES_PORT_443_TCP_PROTO=tcp KUBERNETES_PORT_443_TCP_ADDR=10.42.0.1 KUBERNETES_SERVICE_HOST=10.42.0.1 KUBERNETES_PORT=tcp://10.42.0.1:443 KUBERNETES_PORT_443_TCP_PORT=443 WHOAMI_PORT=tcp://10.42.39.229:80 PATH=/home/build/.local/bin:/home/build/bin:/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin WHOAMI_PORT_80_TCP_ADDR=10.42.39.229 _=/usr/bin/buildah TMPDIR=/var/tmp _CONTAINERS_USERNS_CONFIGURED=1], UID map [{ContainerID:0 HostID:1000 Size:1} {ContainerID:1 HostID:2000 Size:50000}], and GID map [{ContainerID:0 HostID:1000 Size:1} {ContainerID:1 HostID:2000 Size:50000}] |
@rhatdan I know you and @umohnani8 have been digging around a lot in this space as of late, any tips? |
As @anthr76 already pointed out in #3053 (comment), I got the same issue when testing buildah on an OKD 4.7 cluster. The solution was to add |
Because we want the image to be used by both |
FWIW @koceg you can always use … Some other helpful doc: |
runAsUser is useful in this scenario, though limited in GitLab CI since you're pinning all users of the image to the UID of build. Likely making your own image is best in this scenario? I still have to circle back to this and research further on the K8s end. I've been experimenting with user namespaces on CRI-O. |
I'm hitting this as well using Tekton on OpenShift (OpenShift Pipelines Operator on OpenShift 4.7).
Is there a build image I can plug in to fix this? |
@nnachefski The URL @anthr76 shared is working for me on OCP 4.8 - have you given it a try? Also, as he commented, I need to learn more about it overall.
Working slimmer image:
|
@nnachefski I have the same problem, with OKD v4.8.0-0.okd-2021-11-14-052418 (k8s v1.21.2+9e8f924-1555), Tekton installed via the "Red Hat OpenShift Pipelines" operator (currently v15.21).
Did you find a solution to the problem?
[UPDATE]
[UPDATE 2] Gave
Added to the
|
Description

I cannot run buildah as the non-root user in a podman container locally. It fails to run setuid/setgid. The issue does not happen if I use the default root user.

Sorry for this issue I see you get a lot; however, I have gone through a number of issues and documentation over the last couple of days and had no luck.

Rolling back to version 1.16.2 fixes the warnings (error running new{g,u}idmap) printed at the top; that warning is introduced in version 1.17.0 and later. But the final error is the same. I tried version 1.14.8 since I noticed it is used in this rootless tutorial. It seemed to go through the whole Containerfile before failing (rather than failing after failing to write the first layer), but it failed the same in the end. I also tried podman with --runtime crun from here, but that didn't fix it either (a sketch of such an invocation appears after this report).

Describe the results you received:

Describe the results you expected:
A successful build

Output of rpm -q buildah or apt list buildah:
(inside container)

Output of buildah version:
(inside container)

Output of podman version if reporting a podman build issue:
(outside container, because I used podman to run it. No podman inside container)

Output of cat /etc/*release:
(inside container)

Output of uname -a:
(inside container)

Output of cat /etc/containers/storage.conf:
(inside container)

Other debug output
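The reporter's exact podman command is not included above; a rootless invocation along the lines described in the report (the mount, device, user, and isolation flags are assumptions, not the reporter's actual command) would be roughly:

# run buildah as the non-root build user inside a buildah/stable container via podman + crun
podman run --rm -it --runtime crun --device /dev/fuse \
    --user build -v "$PWD":/src:Z \
    quay.io/buildah/stable:v1.19.6 \
    buildah bud --isolation=chroot -t test /src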