Add repositories files to image tarballs #526
Nevermind, this just has the image digest, not the layers.
Some more context: containers/skopeo#425
Minor correction: I believe it's the digest of the topmost layer, so essentially a layer digest, but I don't think it will help much with the optimization PR. Maybe the pusher can avoid calculating the digest of the topmost layer if the `repositories` file is present, but that's it.
That seems odd, but v1 images were basically linked lists, where each layer could be a complete image and referenced its parent image, so I can believe it. I'd be surprised if that's the case, but I'd like to see how this works.
The python implementation to extract this value is here. If you follow it, you'll see where the value is extracted from. The above impl is specifically for v1 images, so it's possible the value is different for v2 & manifest lists.
Just took a look at this and it's not clear to me how to proceed. That digest value isn't present in the schema 2 manifest or config.

In order to replicate docker's (or containerregistry's) behavior, we'd need to convert schema 2 to schema 1, then schema 1 to v1, then take the top layer's value. Converting from schema 2 to schema 1 is nontrivial, but I've done it before. I don't have any idea how to convert from schema 1 to v1 (that was deprecated way before I started caring about container trivia).

How is this value actually used? Why do we need it? We could use the digest of the top layer from the schema 2 manifest, but I suspect that wouldn't work out.
Yeah, you're right. It's not as simple as just putting the digest of the topmost layer in the `repositories` file. So I'm guessing it should be possible to generate this digest without going through the v2_2 -> v2 -> v1 conversion.

As for how it's currently used, I found a bunch of cases in our internal codebase that load the docker image built by rules_docker as a v1 tarball. I have replaced a few, but there might be a bunch more. The one that's going to be tricky to fix is the python containerregistry tests that test the v1 -> v2 compatibility layer. So I was hoping to just generate this file, if it's simple enough, instead of locating all the tests & negotiating fixes with their owners.
So do we need to be able to read and generate v1 tarballs? There are two tarball formats here, and we generate/read the more "modern" one in ggcr. I'm not sure how much we have to do to unblock bazelbuild/rules_docker#580, and I'm not sure what the easy path forward is (some changes here, some changes in the python implementation, changing callers, etc.). Summarizing the differences:

crane

Let's take a look at what crane produces:
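As a sketch (my addition, not from the original thread), this is roughly how that tarball format is produced with a recent version of ggcr; the image name and output path are just examples:

```go
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// Example reference; any pullable image works.
	ref, err := name.ParseReference("ubuntu:latest")
	if err != nil {
		log.Fatal(err)
	}
	// Fetch the image from the registry.
	img, err := remote.Image(ref)
	if err != nil {
		log.Fatal(err)
	}
	// Write the "modern" tarball: manifest.json, the config file,
	// and the gzipped layers. This is the format `crane pull` emits.
	if err := tarball.WriteToFile("ubuntu.tar", ref, img); err != nil {
		log.Fatal(err)
	}
}
```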
There's a `manifest.json` file that points to the config, where we pulled the image from, and the layers. Note that these values point to files within the tarball, not necessarily their digests.
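A hypothetical `manifest.json` of that shape (file names and tags are placeholders, not the original values):

```json
[
  {
    "Config": "<config-file-name>",
    "RepoTags": ["ubuntu:latest"],
    "Layers": ["<layer-file-name>.tar.gz"]
  }
]
```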
The config file is the normal config, from the registry:
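A trimmed, hypothetical config for illustration; note that `rootfs.diff_ids` are digests of the uncompressed layers:

```json
{
  "architecture": "amd64",
  "os": "linux",
  "config": {
    "Cmd": ["/bin/sh"]
  },
  "rootfs": {
    "type": "layers",
    "diff_ids": ["sha256:<uncompressed-layer-digest>"]
  },
  "history": [
    {"created_by": "/bin/sh -c #(nop) ADD file:<...> in /"}
  ]
}
```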
We save the layer in its gzipped form.
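One consequence, sketched with ggcr's API (again my addition): a layer's `Digest` is computed over the gzipped blob as stored in the tarball, while its `DiffID` is computed over the uncompressed tar and matches the config's `diff_ids`:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// Load the tarball written earlier (path is an example);
	// a nil tag works for single-image tarballs.
	img, err := tarball.ImageFromPath("ubuntu.tar", nil)
	if err != nil {
		log.Fatal(err)
	}
	layers, err := img.Layers()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range layers {
		d, err := l.Digest() // digest of the gzipped blob
		if err != nil {
			log.Fatal(err)
		}
		id, err := l.DiffID() // digest of the uncompressed tar
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(d, id)
	}
}
```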
docker

There's a lot more stuff here:
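As a reconstruction (not the original listing), a `docker save` tarball typically contains something like this, with one directory per layer:

```
manifest.json
repositories
<config-digest>.json
<layer-id>/VERSION
<layer-id>/json
<layer-id>/layer.tar
```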
There is a similar `manifest.json`, in the same format as above.
The config file has the same contents, just a different name. The "Layers" entries point to a layer.tar inside a per-layer directory.

If we look at the `repositories` file, its contents map the image name and tag to an id, and that id names the directory holding the topmost layer's layer.tar.
Create a `repositories` file for image tarballs generated by the `v1.tarball` package. Image tarballs generated by `docker save` include this file, and our internal repo requires it because this file is part of the v1 & v2 schema. This is currently blocking bazelbuild/rules_docker#580.

This file basically appears to be a map from the image name to the tag to the digest of the topmost layer. For the image `l.gcr.io/google/bazel@sha256:97bfeed0303cae14af7e8f66aad6c13f00b2b33081c59d0f4258717b8b94efec`, the `repositories` file looks like: