WIP: Adds OCI image support #2758
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: claudiubelu. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
What's the use case? Also, we're generally loath to add dependencies (since kind is embedded in tools / test suites / ...), though these seem fairly light.
My main goal at this point is to be able to have official kube-proxy images for Windows (kubernetes/kubernetes#109939). However, the current image building process won't work for that. In k/k, the images are first built with However, this might not help me accomplish my goal at this point. I see in
Ahhh. We also have to consider Kubernetes'
This is probably a remnant from re-tagging / pushing them?
I think we could take advantage of caching in buildx and build repeatedly with different output settings to produce the tars, or else we could fetch tars (
Also possible
containerd supports OCI images: https://github.com/containerd/containerd/blob/b9bffd1f38c7e85f433c22d0968cffb196ede000/images/archive/importer.go#L126 . It even goes further: if it's not an OCI image, it converts it to OCI.
This issue is a bit old, so my comment may not be relevant. This is a standard flow I use in my daily tinkering:
I very rarely have a problem with that. The OCI format works perfectly well. Sometimes (very rarely) I need to resort to squashing the image to a single layer (--squash-all) to sort out problems with permissions (I work in a rootless context most of the time) and/or to reduce the overall size of the image. However, I do work on Linux, so that's not quite the same context.

As a side note: I would love to see kind being able to consume images directly from the docker/podman cache without the interim step of saving the image as a tarball or pushing it to a repo, even though I can easily set such a repo up on the same machine and keep pushing images there. I find that equally (in)convenient compared to using the image dump; no real improvement in usability. What would help here a little is to enrich kind with the option to pull images straight from the cache. I guess it would be best to do this explicitly, by expressing the intent with an extra parameter in the CLI. What I would like to avoid is guessing the source of an image. We already have such a prompt on podman when using short image names with multiple repos in the config, and it can become a source of confusion very quickly.

There was some work done on this PR and I think it could be used as a starting point. Alternatively, we could start from scratch and define a new set of requirements for such a feature, as I don't think it's only about OCI layout support; that support is already available, I believe. It is more about taking out that extra interim translation (build -> cache -> archive -> cluster). It would be better still to get to the point where we have full transparency: the caching layer of the image builder is shared with the cluster, i.e. it is mounted directly there so that no transformation is required. This could be pretty difficult to achieve, though. I could try to use my spare time to help push it a bit further, unless somebody tried that before, analysed such a flow, and found there is no point in going there.

Just to be clear: I love simplicity, so if it's not simple, I'd rather shoot these extra commands and settle for a reliable flow than implement a monster that would be impossible to manage. @BenTheElder: do you think what I'm suggesting here makes sense?
this is a very complex and brittle feature. at least in docker, e.g., the on-disk image storage is expressly not documented, before we even get into remote daemons. I'm pretty sure this is also the case for podman (and podman also has nascent remote support now), not to mention builders like buildx. image layers are only officially exposed in these tools by way of pushing to a registry, or by way of exporting a complete image or images.
This PR is actually totally unrelated to loading images into a cluster; it is about the images built by Kubernetes's build system, which are currently always docker tarballs. We need to handle them specially when building node images, not quite in the same way as loading at runtime.
yes, difficult, non-portable, and depending on internal details of other tools 😬 and somewhere we still have to create content ingestible by containerd, so we're still probably going to be converting to a tarball somewhere in the pipeline ...
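To make concrete what producing containerd-ingestible content involves, here is a minimal Python sketch of building the `index.json` an OCI layout uses to reference a manifest by digest. The media-type strings and the `ref.name` annotation key come from the OCI image spec; the manifest contents and the tag are made up for illustration:

```python
import hashlib
import json

# Media types defined by the OCI image spec.
OCI_MANIFEST_TYPE = "application/vnd.oci.image.manifest.v1+json"
OCI_INDEX_TYPE = "application/vnd.oci.image.index.v1+json"

def descriptor_for(manifest_bytes: bytes, ref_name: str) -> dict:
    """Build an OCI content descriptor for a serialized image manifest.

    Content in an OCI layout is addressed by digest, so the descriptor's
    digest must be the sha256 of the exact manifest bytes.
    """
    return {
        "mediaType": OCI_MANIFEST_TYPE,
        "digest": "sha256:" + hashlib.sha256(manifest_bytes).hexdigest(),
        "size": len(manifest_bytes),
        # The ref.name annotation is how an OCI layout records tags.
        "annotations": {"org.opencontainers.image.ref.name": ref_name},
    }

def oci_index(descriptors: list) -> bytes:
    """Serialize an index.json referencing the given manifest descriptors."""
    return json.dumps({
        "schemaVersion": 2,
        "mediaType": OCI_INDEX_TYPE,
        "manifests": descriptors,
    }, indent=2).encode()

# Hypothetical single-image archive (manifest body and tag are placeholders).
manifest = json.dumps({"schemaVersion": 2, "config": {}, "layers": []}).encode()
index = oci_index([descriptor_for(manifest, "registry.k8s.io/kube-proxy:dev")])
print(index.decode())
```

The point of the sketch is the digest step: unlike a `docker save` tarball, whose `manifest.json` references layers by file path, OCI content is addressed by hash, so any conversion has to serialize the manifest first and hash those exact bytes.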
@claudiubelu: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@claudiubelu: The following tests failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
I think this is defunct for now; we can revisit later if needed.