We have a list of composefs issues but in this one I wanted to try to explicitly write down my goals/requirements for where we need to go with composefs and the c/storage and podman integration to be what I'd consider "done" (or realistically, "v1").
I was trying to lay things out a bit over in this issue on the composefs side, but it probably makes sense to track from this project in a more top-down fashion.
In a nutshell:
Must have an efficient mechanism to verify the integrity of a complete image before launching a container (`podman run <image>`) from it; corruption includes e.g. changing the `config.json` for an image, its set of layers, and of course the layer content. It's OK to default to lazy verification for layer content. The podman `use_composefs=true` path doesn't use composefs for metadata, though in some quick testing we do seem to be verifying the digest of the config against the manifest, which is great.
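For context, a minimal sketch of how the composefs backend gets enabled in containers-storage today, assuming the overlay driver; the exact option placement may differ across containers/storage versions:

```sh
# Assumption: storage.conf lives at /etc/containers/storage.conf and the overlay
# driver is in use; option names/placement may vary by containers/storage version.
cat >> /etc/containers/storage.conf <<'EOF'
[storage.options.overlay]
# Build a composefs image per layer, with fsverity on the backing objects
# where the underlying filesystem supports it.
use_composefs = "true"
EOF
```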
At the same time, to state the obvious: Must have the baseline OCI features expected today around signatures, mirroring, pulling, and pushing. In particular, it must support storing individual layers, and layers that haven't changed across an image update should not need to be re-pulled.
Must also have `podman image fsck $image`, which does upfront (non-lazy) verification, so that a corrupted image can be re-pulled from the network. Related to the above: efficient verification of individual layers, avoiding re-fetching all layers for one corrupted file.
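Until such a command exists, a rough approximation of non-lazy verification is just forcing a full read of the backing files, since the kernel checks fsverity Merkle data as pages are read; the store path below is an assumption about where the composefs backend keeps its objects:

```sh
# Sketch only: force a full read of every backing file so the kernel verifies
# fsverity-protected data as it goes (reads of corrupted pages fail with EIO).
# The path is illustrative; the real layout depends on the c/storage backend.
STORE=/var/lib/containers/storage/overlay
find "$STORE" -type f -print0 |
  while IFS= read -r -d '' f; do
    cat "$f" > /dev/null || echo "verification failed: $f"
  done
```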
Must be able to hard-require composefs+fsverity when starting a container image with `podman run`, and there must be a clear, documented way to attach a signature to that image (whether sigstore or another scheme); that signature can then be verified, resulting in an OCI bundle that is fully fsverity-enabled and has its trust chained from that signature verification.
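For comparison, a sketch of what pull-time signature enforcement looks like today via containers-policy.json (registry name and key path are placeholders); the missing piece this requirement asks for is chaining that verification through to the fsverity-enabled bundle used at run time:

```sh
# Illustrative policy requiring sigstore signatures for one registry.
cat > /etc/containers/policy.json <<'EOF'
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "registry.example.com": [
        { "type": "sigstoreSigned", "keyPath": "/etc/pki/containers/cosign.pub" }
      ]
    }
  }
}
EOF
```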
Ideally: Maintain sync between the booted host container image and app container images (containers/composefs#332 cuts against this, but we really should share what we can)... this may end up with bootc images having two fsverity computations (one plain, one excluding the UKI).
Additional desires/nice-to-have:
Downstream: https://issues.redhat.com/browse/RHELBU-2799