Tracker for support for nested containers #282
What is also important is that the reference to
Why not use additional stores for this? The latest containers-common sets up Podman and Buildah to automatically look for an additional store in /usr/lib/containers/storage. If images are pulled into this store, then Podman will use it as a read-only store and /var/lib/containers/storage as a read/write store.
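As a sketch, the additional-store setup described above is configured in storage.conf; the store path below is the one mentioned in the comment, and exact option support depends on your containers-storage version:

```toml
# /etc/containers/storage.conf (sketch, assuming a containers-storage
# version that supports additional image stores)
[storage]
driver = "overlay"

[storage.options]
# Read-only store(s) searched after the main read/write store at
# /var/lib/containers/storage.
additionalimagestores = [
  "/usr/lib/containers/storage",
]
```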
I think using the vfs backend is a bad idea, btw, at least if you run non-readonly containers, because the vfs driver cannot use overlayfs for the container upper layer. The ideal approach would be to use the overlayfs backend with composefs enabled, because then there will be no whiteout files in the container storage (they are all inside the composefs blob in the storage).
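A hedged sketch of what enabling composefs under the overlay backend might look like; the `use_composefs` option has been experimental in containers-storage and its name and availability are version-dependent, so verify it against your containers-storage.conf(5) documentation:

```toml
# /etc/containers/storage.conf (sketch; option availability is
# version-dependent and has been experimental)
[storage]
driver = "overlay"

[storage.options.overlay]
# Keep layer metadata (including whiteouts) in a composefs blob instead
# of materializing whiteout files in the layer directories.
use_composefs = "true"
```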
Adding in CAP_SYS_ADMIN seems to allow this to work?
Here is a little test I did to make this work:

```shell
$ cat /tmp/Containerfile
$ podman build -t bootc --cap-add SYS_ADMIN /tmp
$ podman run -ti --cap-add SYS_ADMIN bootc podman images
```

In order to use overlay within a container you need to run the container with CAP_SYS_ADMIN or play with rootless containers.
We're having a realtime conversation about this, and I think there's general agreement that if the problem is that

I still have an open uncertainty about whiteouts, which I agree with Alex would be much better fixed by composefs, avoiding the need for metadata written directly into the container image filesystem at all.
Cross-building from an arm M2 machine for x86_64 (after adding
This builds fine from arm M2 machine:
This fails from my arm M2 machine:
and here's the weird error:
Thx for progressing on this! I would feel better with some automated CI test cases that mimic the actual use case as a smoke test: a container image with whiteouts (!!!) referenced using a sha digest in the Containerfile. Then bootc the resulting image and ensure that the image referenced with the same digest as in the Containerfile comes up and works correctly. And to add an additional requirement: building these images has to work on OpenShift in a CI/CD pipeline without cluster-admin privileges.
The issue seems to be that podman without CAP_SYS_ADMIN falls back to setting up a user namespace with a single mapping. I am talking to @giuseppe about whether or not this is required, or how we could work around it. For now this will work fine with CAP_SYS_ADMIN added to the build. I don't see any issues with the whiteouts being stored in the images, as they normally are on a host. Running containers inside containers is blocked by overlay-on-overlay, but I don't think that is an issue we would see here.
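For context on how whiteouts are "stored in the images": in an OCI/Docker layer tarball a deletion is recorded as a plain empty file whose name is the deleted entry's name prefixed with `.wh.` (this is the image-format convention, distinct from the 0:0 character device overlayfs uses at runtime). A minimal illustration, with `app` and `config.json` as made-up names:

```shell
# Build a fake layer tar in which /app/config.json is marked deleted.
# The .wh. prefix is the OCI image-spec whiteout convention; the paths
# here are hypothetical, for illustration only.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/layer/app"
touch "$tmp/layer/app/.wh.config.json"
tar -cf "$tmp/layer.tar" -C "$tmp/layer" .
tar -tf "$tmp/layer.tar" | grep '\.wh\.'   # the whiteout entry survives in the tar
rm -rf "$tmp"
```

Note that such `.wh.` entries are ordinary tar members, which is why they round-trip through image push/pull without any special privileges.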
When we configure the user namespace we don't know what command is going to be executed by Podman, so we don't check for that combination (and possibly we need also

I think it is correct this way, because even if you pull the images in that environment, you won't be able to use them until you gain
Also relevant is ostreedev/ostree#2722
This relates to containers/bootc#128, but isn't quite the same thing. Let's use this as a tracker for supporting "nesting" container images.

We should ideally support something like this:

Where `somecontainer.container` is a podman systemd unit that also uses:

The reason I mentioned `--storage-driver=vfs` is to avoid overlayfs and nested whiteouts... I think as of recent overlayfs this is supported at runtime, but... I can't make a whiteout in a default `podman run` invocation; I think the device cgroup may be coming into play?

Even if we could make the whiteout, I think we'd run into problems because there's no standard for nesting them at the OCI level. Also xref https://www.spinics.net/lists/linux-unionfs/msg11253.html
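To make the runtime side of the whiteout question concrete: an overlayfs whiteout is a character device with device number 0:0, so creating one requires `mknod`, which is exactly what CAP_MKNOD and the device cgroup can deny inside a container. A small probe (the temp path is arbitrary; whether the `mknod` succeeds depends entirely on the privileges of the environment it runs in):

```shell
# Probe whether this environment can create an overlayfs-style whiteout
# (a 0:0 character device). This typically fails inside a default
# unprivileged container, matching the behavior described above.
tmp=$(mktemp -d)
if mknod "$tmp/whiteout" c 0 0 2>/dev/null; then
  echo "whiteout device created"
else
  echo "mknod denied (missing CAP_MKNOD or device cgroup policy)"
fi
rm -rf "$tmp"
```

Either outcome is informative: the denial branch is what a default `podman run` hits, which is consistent with the device cgroup suspicion above.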