Platform Request: kubevirt #1126

Closed
9 of 12 tasks
rmohr opened this issue Mar 17, 2022 · 45 comments

@rmohr
Member

rmohr commented Mar 17, 2022

In order to implement support for a new cloud platform in Fedora CoreOS, we need to know several things about the platform. Please try to answer as many questions as you can.

  • Why is the platform important? Who uses it?

KubeVirt is an extension to Kubernetes which allows managing VMs side-by-side with container workloads. It recently entered the CNCF incubation phase (https://www.cncf.io/projects/kubevirt/) and is used in various virtualization products based on Kubernetes from a number of well-known vendors.

KubeVirt aims to be as feature-rich as solutions like OpenStack or oVirt, allowing the whole infrastructure stack to converge on pure k8s, with unified API paradigms and simpler-to-manage stacks when working with k8s-based infrastructure.

  • What is the official name of the platform? Is there a short name that's commonly used in client API implementations?

KubeVirt is the official name; where needed, the lowercase kubevirt is used. In the Kubernetes API, KubeVirt has its own kubevirt.io group name (kubevirt.io/v1/namespaces/mynamespace/virtualmachineinstances/myvm), so from a technical perspective the short name in the Kubernetes world is kubevirt.io.
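
For illustration, the resource is addressed through the usual Kubernetes tooling (a minimal sketch; mynamespace is just the placeholder from the path above):

# List the resource types registered under the kubevirt.io API group
kubectl api-resources --api-group=kubevirt.io

# List VirtualMachineInstances in the placeholder namespace
kubectl get virtualmachineinstances -n mynamespace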

  • How can the OS retrieve instance userdata? What happens if no userdata is provided?

KubeVirt supports a broad range of boot config sources:

  • cloud-init: NoCloud (like e.g. oVirt) and ConfigDrive (like OpenStack)
  • ignition: via ConfigDrive

If no user-data is present, the VM has to be configured manually via SSH, VNC, or the like.
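
As a rough sketch of the ignition-via-ConfigDrive path (assumptions: a cluster with KubeVirt installed, a hypothetical containerDisk at quay.io/myorg/myimage, and a stub Ignition config as the user-data):

kubectl apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: myvm
spec:
  domain:
    devices:
      disks:
      - name: rootdisk
        disk:
          bus: virtio
      - name: configdrive
        disk:
          bus: virtio
    resources:
      requests:
        memory: 2Gi
  volumes:
  - name: rootdisk
    containerDisk:
      image: quay.io/myorg/myimage
  - name: configdrive
    cloudInitConfigDrive:
      userData: |
        {"ignition": {"version": "3.3.0"}}
EOF

With cloud-init instead of ignition, the same cloudInitConfigDrive (or a cloudInitNoCloud) volume carries the cloud-init user-data.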

  • Does the platform provide a way to configure SSH keys for the instance? How can the OS retrieve them? What happens if none are provided?
  • cloud-init (user-data or metadata)
  • ignition
  • qemu-guest-agent-exec

If no SSH keys are given, people can access the VMs via the console or VNC to do the initial setup.
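
For example, a public key can be handed to KubeVirt as a Secret and referenced from the VMI for guest-agent propagation (a sketch; all names are hypothetical):

# Create a Secret holding the public key
kubectl create secret generic my-pub-key --from-file=key1=$HOME/.ssh/id_ed25519.pub
# The VMI then references it roughly like this (fragment under spec:):
#   accessCredentials:
#   - sshPublicKey:
#       source:
#         secret:
#           secretName: my-pub-key
#       propagationMethod:
#         qemuGuestAgent:
#           users:
#           - core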

  • How can the OS retrieve network configuration? Is DHCP sufficient, or is there some other network-accessible metadata service?

DHCP is sufficient. cloud-init network config v1 can be used (and is automatically populated if cloud-init is used for bringing in user-data).

  • In particular, how can the OS retrieve the system hostname?
  • DHCP options
  • cloud-init
  • ignition
  • Does the platform require the OS to have a specific console configuration?

It is helpful if a console is provided on the first virtio-serial device. It is not mandatory for a properly working VM, but it is very common for our users to connect to this console via kubectl virt console {myvm} to debug the VM. VNC consoles are also popular: we connect a small VGA device to a QEMU VNC server by default, and users can opt out of the VGA device.
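
For reference, the typical commands (assuming the standalone virtctl binary or the kubectl krew plugin is installed, and a VMI named myvm):

virtctl console myvm   # serial console on the first virtio-serial device
virtctl vnc myvm       # VNC session against the default VGA device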

  • Is there a mechanism for the OS to report to the platform that it has successfully booted? Is the mechanism required?

There exist a few ways to indicate readiness, all optional:

  • tcp probes
  • http probes
  • exec probes (based on qemu-guest-agent)
  • Does the platform have an agent that runs inside the instance? Is it required? What does it do? What language is it implemented in, and where is the source code repository?

We support the qemu-guest-agent and recommend it (it gives an overall better integration experience, also for services building on top, since there is first-class API support for retrieving e.g. IP information of additional devices which can be used for routing, ...). We also support SSH key injection and readiness probes based on the guest agent.

  • How are VM images uploaded to the platform and published to other users? Is there an API? What disk image format is expected?

We have containerDisks which basically are qcow2 files wrapped in containers and pushed to arbitrary container registries.

A very simple example to create one would be a Dockerfile like this:

FROM scratch
# 107:107 is the qemu user/group that KubeVirt expects to own the image under /disk/
ADD --chown=107:107 my.qcow2 /disk/

which can be built and pushed like this:

podman build . -t quay.io/myorg/myimage
podman push quay.io/myorg/myimage

The containerDisks can then be imported and used in different KubeVirt-enabled clusters in various ways.
A non-exhaustive list:

  • Importing into PVCs for mutable boot disks (see the sketch below)
  • Direct usage by referencing the container directly on VMIs for ephemeral boot disks (like on pods or containers in general)

ContainerDisks can be hosted on private and public registries and freely mirrored, and integrity can be ensured by referencing the container digests.
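
To illustrate the PVC-import path from the list above, a sketch using a CDI DataVolume (assumes the Containerized Data Importer is installed; the image name is the hypothetical one from the Dockerfile example):

kubectl apply -f - <<'EOF'
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: myimage-root
spec:
  source:
    registry:
      url: "docker://quay.io/myorg/myimage"
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
EOF

The resulting PVC can then be attached to a VM as a persistent, mutable root disk, while a direct containerDisk reference stays ephemeral.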

  • Are there any other platform quirks we should know about?

KubeVirt is to a certain degree bound to the network model of k8s. In k8s every pod gets a different IP/MAC on each pod start; the CNIs (container network interface plugins) are responsible for this. As long as the VMs are ephemeral, the IP assignment to the VM works perfectly. It gets a little more tricky with persistent root disks: there the guests very often don't identify eth0 again after a reboot because of the changed MAC address, and DHCP is not performed. We have ways to work around this with different network models, but if the guest can handle it, that gives the best user experience.

  • Relationship to the openstack platform

KubeVirt is compatible with the openstack images. For visibility and discoverability, for both technical processes and users, it would be helpful to have a containerDisk for kubevirt published and documented, as well as having it listed in the release and stream JSON files, with its own sections and platform entries. I would however prefer to keep the openstack ignition ID in the guest.

A new kubevirt platform ID is introduced in the following PRs:

@dustymabe
Member

We have containerDisks which basically are qcow2 files wrapped in containers and pushed to arbitrary container registries.

Is it really a requirement to wrap the qcow2 in a container? I can see how it would be useful in some cases, but most VM disk images are available as a qcow2 or raw disk image. Can Kubevirt not import a qcow2 directly (even if it then wrapped it in a container and stored it in a local registry)?

@rmohr
Member Author

rmohr commented Mar 17, 2022

We have containerDisks which basically are qcow2 files wrapped in containers and pushed to arbitrary container registries.

Is it really a requirement to wrap the qcow2 in a container? I can see how it would be useful in some cases, but most VM disk images are available as a qcow2 or raw disk image. Can Kubevirt not import a qcow2 directly (even if it then wrapped it in a container and stored it in a local registry)?

Since it is fully k8s-native, it is the preferred and general way to exchange images in kubevirt and make them available. Container registries in public and private clusters are the common denominator, which allows us unified delivery, auditing, and mirroring flows.

A sub-project in KubeVirt also supports importing various sources, including qcow2 over HTTP. None of these are optimal though, and they have their own downsides (for HTTP import, for example, people would have to provide shasums directly to ensure integrity).

The most compatible way is container registries accessible from a cluster. Think about it like the global AMI store. The kubevirt community also started creating a general containerdisk store on quay (https://quay.io/organization/containerdisks). It is backed by tooling which scrapes release sites to pick up newly released images and make them available in a unified way. I think this is a great example of where the strength of the containerDisk shows: once I know where a disk is, it removes all variation in how to identify and verify it and how to detect and find updates.
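
For example, a consumer can resolve a published containerdisk to its digest and pin that in the VMI spec (a sketch; the fedora repository under the containerdisks org is used for illustration):

# Resolve the current digest of a community containerdisk
skopeo inspect docker://quay.io/containerdisks/fedora:latest | jq -r .Digest
# ...then reference it immutably as quay.io/containerdisks/fedora@sha256:<digest>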

@dustymabe
Member

In my opinion if a containerdisk is required then we at least need to create a new artifact for this (i.e. we can't just ship the openstack qcow2 like we do today and go on with life), so we probably need a new platform.

I guess an alternative is that we still just ship the openstack qcow2, but we document how to create a containerdisk out of it and then interact with kubevirt to install it.

@rmohr
Member Author

rmohr commented Mar 17, 2022

In my opinion if a containerdisk is required then we at least need to create a new artifact for this (i.e. we can't just ship the openstack qcow2 like we do today and go on with life), so we probably need a new platform.

I guess an alternative is that we still just ship the openstack qcow2, but we document how to create a containerdisk out of it and then interact with kubevirt to install it.

Yes, having a containerdisk is one of my top priorities. coreos/coreos-assembler#2750 has the create and publish flow already (target locations and credentials to push there are of course not there).

@jlebon
Member

jlebon commented Mar 17, 2022

Thanks for filing this @rmohr! I think the additional answers prompted by the template are helpful.

My 2c on this is: let's just create a new platform ID and artifact for it. This means duplicating some code in Ignition and Afterburn, but long-term would be cleaner. Some random points:

  • The OpenStack handling in Ignition is really not great because it can't differentiate between the config drive and metadata server cases, which leads to race conditions. If in KubeVirt we know we'll always get a config drive, then that eliminates any ambiguity.
  • Globbing this with the openstack platform further dilutes the value of the platform ID as an API. This is already a concern with OCP bare metal IPI, though thankfully they're moving to metal.
  • Looking at the answers above, we may have to add KubeVirt-specific logic to Afterburn anyway.
  • An independent artifact means a better UX for users. We can also publish the container to e.g. quay.io like we do currently with the oscontainer.

@rmohr
Member Author

rmohr commented Mar 17, 2022

  • The OpenStack handling in Ignition is really not great because it can't differentiate between the config drive and metadata server cases, which leads to race conditions. If in KubeVirt we know we'll always get a config drive, then that eliminates any ambiguity.
  • Globbing this with the openstack platform further dilutes the value of the platform ID as an API. This is already a concern with OCP bare metal IPI, though thankfully they're moving to metal.
  • Looking at the answers above, we may have to add KubeVirt-specific logic to Afterburn anyway.

That all sounds reasonable to me (but it is not a must for kubevirt; we don't have the race issue since we only have the config drive). One question though: for hypershift we want to have an RHCOS image for OpenShift 4.11. If we were to introduce a new ignition platform ID, would that delay releasing RHCOS images? If so, could I introduce the kubevirt ID as a follow-up and rely on the openstack ID until then?

Since I am new to this area, I am pointing to coreos/coreos-assembler@f0e6c52 again to show exactly what I mean, to make sure we are talking about the same thing (basically the kubevirt platform, but the openstack ignition ID). :)

  • An independent artifact means a better UX for users. We can also publish the container to e.g. quay.io like we do currently with the oscontainer.

That sounds great!

@lucab
Contributor

lucab commented Mar 17, 2022

I think there have been some mixups in some of those answers, but reading through it I think that the environment looks like this:

  • network is auto-configured via DHCP
  • hostname is provided by DHCP
  • there is no metadata service on a link-local endpoint
  • instance userdata is provided through a config-drive (openstack compatible)
  • the VM does not need to actively signal to the underlying infra upon boot

@rmohr is that a correct summary?

@miabbott
Member

miabbott commented Mar 17, 2022

I'm not convinced we need to define an entirely new platform for the KubeVirt use case.

My understanding is that the containerdisk format is just a transport mechanism for the OpenStack qcow2 disk image. End users will ultimately be booting guest VMs in OpenStack, so they won't require additional support from the likes of Ignition or Afterburn to bootstrap the VM. (They can use existing support for OpenStack in Ignition/Afterburn.)

Shouldn't it be fine to just wrap the OpenStack qcow2 image in a container format without any additional changes? What else is needed to support the use case of delivering the container image?

(Edit: This comment was sitting in my browser before I saw the new comments above...so this question might be moot)

@rmohr
Member Author

rmohr commented Mar 17, 2022

I think there have been some mixups in some of those answers, but reading through it I think that the environment looks like this:

  • network is auto-configured via DHCP

Yes

  • hostname is provided by DHCP

Yes

  • there is no metadata service on a link-local endpoint

Yes

  • instance userdata is provided through a config-drive (openstack compatible)

Yes, but if you use cloud-init, the hostname will be provided by the platform metadata over these drives too (basically in addition to DHCP, at the same time; hence the overlap).

  • the VM does not need to actively signal to the underlying infra upon boot

Correct.

@rmohr is that a correct summary?

In principle, yes. I think the mixup comes from the fact that in some scenarios we send the same info over multiple channels at the same time :)

@rmohr
Member Author

rmohr commented Mar 17, 2022

End users will ultimately be booting guest VMs in OpenStack

@miabbott just to clarify: kubevirt has nothing to do with OpenStack. It is 100% built on kubernetes. It is not an "abstraction layer" from k8s to openstack.

@miabbott
Member

End users will ultimately be booting guest VMs in OpenStack

@miabbott just to clarify: kubevirt has nothing to do with OpenStack. It is 100% built on kubernetes. It is not an "abstraction layer" from k8s to openstack.

Understood. So it's conceivable that we may want to produce multiple containerdisks for different virt platforms in the future?

@bgilbert
Contributor

It sounds like this is really a new platform, which happens to provide OpenStack-compatible configuration mechanisms. We already support other platforms that made a similar choice. If so, I agree with @jlebon that we should define a new Ignition platform ID rather than trying to reuse openstack. We could conceivably need to implement KubeVirt-specific behavior in the future, and it'd be good to avoid bolting that into the OpenStack providers.

We should not ship a kubevirt provider in stream metadata that uses the openstack platform ID internally. That would be confusing and we don't do it on any other platforms. If we don't want to add a new platform ID, it'd be better to add KubeVirt to stream metadata as an additional openstack artifact than to try to split the difference. Adding a new provider to Ignition and Afterburn should be doable for 4.11, so I'd be inclined to avoid short-term workarounds as well.

@rmohr
Member Author

rmohr commented Mar 17, 2022

It sounds like this is really a new platform, which happens to provide OpenStack-compatible configuration mechanisms. We already support other platforms that made a similar choice.

Yes that describes it perfectly.

If so, I agree with @jlebon that we should define a new Ignition platform ID rather than trying to reuse openstack. We could conceivably need to implement KubeVirt-specific behavior in the future, and it'd be good to avoid bolting that into the OpenStack providers.

We should not ship a kubevirt provider in stream metadata that uses the openstack platform ID internally. That would be confusing and we don't do it on any other platforms. If we don't want to add a new platform ID, it'd be better to add KubeVirt to stream metadata as an additional openstack artifact than to try to split the difference.

The main issue with that is probably that we then provide an image as part of the openstack platform which effectively can't be consumed by openstack :)

Adding a new provider to Ignition and Afterburn should be doable for 4.11, so I'd be inclined to avoid short-term workarounds as well.

Could you point me roughly to the locations where these tools would need to be extended?

@bgilbert
Contributor

If we don't want to add a new platform ID, it'd be better to add KubeVirt to stream metadata as an additional openstack artifact than to try to split the difference.

The main issue with that is probably that we then provide an image as part of the openstack platform which effectively can't be consumed by openstack :)

Yeah, that's fair.

Adding a new provider to Ignition and Afterburn should be doable for 4.11, so I'd be inclined to avoid short-term workarounds as well.

Could you point me roughly to the locations where these tools would need to be extended?

For Ignition, you can probably do something similar to coreos/ignition@1f710f7 (and also add docs in supported-platforms.md). For Afterburn, you can do something like coreos/afterburn@542ee1b.

@rmohr
Member Author

rmohr commented Mar 18, 2022

For Ignition, you can probably do something similar to coreos/ignition@1f710f7 (and also add docs in supported-platforms.md). For Afterburn, you can do something like coreos/afterburn@542ee1b.

Thanks, I will check that out. One last question: If openstack adds new features which you pick up, it may have to be duplicated for kubevirt. Is this a concern?

@rmohr
Member Author

rmohr commented Mar 18, 2022

For Ignition, you can probably do something similar to coreos/ignition@1f710f7 (and also add docs in supported-platforms.md). For Afterburn, you can do something like coreos/afterburn@542ee1b.

@bgilbert done. Now I need to figure out how to test it all together :)

@rmohr
Member Author

rmohr commented Mar 18, 2022

@bgilbert done. Now I need to figure out how to test it all together :)

Updated the PRs with test results.

@rmohr
Member Author

rmohr commented Mar 18, 2022

End users will ultimately be booting guest VMs in OpenStack

@miabbott just to clarify: kubevirt has nothing to do with OpenStack. It is 100% built on kubernetes. It is not an "abstraction layer" from k8s to openstack.

Understood. So it's conceivable that we may want to produce multiple containerdisks for different virt platforms in the future?

@miabbott I am not sure I understand that question. Could you elaborate?

@miabbott
Member

Understood. So it's conceivable that we may want to produce multiple containerdisks for different virt platforms in the future?

@miabbott I am not sure I understand that question. Could you elaborate?

I think I was confusing myself with some of the details from the original RFE that specifically mentioned shipping the OpenStack qcow2 in the containerdisk format. So I was making the assumption we'd have different containerdisks for different hypervisors/virt platforms (e.g. vSphere, RHEV, OpenStack, etc.).

Looking at the implementation in coreos/coreos-assembler#2750 and the docs in https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk-workflow-example, it's more clear that the qcow2 shipped in the containerdisk is generic.

Regardless, I think you are getting the right information from others on this thread, so please continue to do the good work :)

@bgilbert
Contributor

If openstack adds new features which you pick up, it may have to be duplicated for kubevirt. Is this a concern?

Not for now. We have some existing code duplication which this will make slightly worse, but that's a problem for another day.

@bgilbert
Contributor

@rmohr We'll also need to update stream-metadata-rust to add the container-disk link.

@rmohr
Member Author

rmohr commented Mar 25, 2022

@rmohr We'll also need to update stream-metadata-rust to add the container-disk link.

Opened a PR: coreos/stream-metadata-rust#24. Thanks.

@bgilbert
Contributor

And I just realized that docs should be updated too, and in particular the stream metadata rationale.

@gursewak1997
Member

We now have https://quay.io/repository/fedora/fedora-coreos-kubevirt to upload our KubeVirt images.
Issue: https://pagure.io/releng/issue/11375

@qinqon

qinqon commented Apr 10, 2023

Is this going to be backported to 4.13?

@cverna
Member

cverna commented Apr 11, 2023

Considering this is a new artefact, we are currently not planning to backport it to 4.13.

dustymabe added a commit to dustymabe/fedora-coreos-pipeline that referenced this issue Apr 19, 2023
For Fedora CoreOS we'll ship to quay.io/fedora/fedora-coreos-kubevirt.
See coreos/fedora-coreos-tracker#1126 (comment)
dustymabe added a commit to dustymabe/fedora-coreos-pipeline that referenced this issue May 1, 2023
For Fedora CoreOS we'll ship to quay.io/fedora/fedora-coreos-kubevirt.
See coreos/fedora-coreos-tracker#1126 (comment)
dustymabe added a commit to coreos/fedora-coreos-pipeline that referenced this issue May 3, 2023
For Fedora CoreOS we'll ship to quay.io/fedora/fedora-coreos-kubevirt.
See coreos/fedora-coreos-tracker#1126 (comment)
@dustymabe
Member

The kubevirt artifact was added to the FCOS pipeline to build in coreos/fedora-coreos-pipeline#860

@qinqon

qinqon commented May 4, 2023

The kubevirt artifact was added to the FCOS pipeline to build in coreos/fedora-coreos-pipeline#860

@dustymabe is this the last step to have a kubevirt FCOS at https://quay.io/repository/fedora/fedora-coreos-kubevirt?

@dustymabe
Member

It's there! The images are getting pushed as new builds come in. For example, quay.io/fedora/fedora-coreos-kubevirt:testing-devel got pushed this morning, but that's the devel stream. The prod streams will exist when we do our next round of releases.

I'll comment here when that happens.

@dustymabe dustymabe added status/pending-testing-release Fixed upstream. Waiting on a testing release. status/pending-stable-release Fixed upstream and in testing. Waiting on stable release. status/pending-next-release Fixed upstream. Waiting on a next release. labels May 4, 2023
@qinqon

qinqon commented May 4, 2023

It's there! The images are getting pushed as new builds come in. For example, quay.io/fedora/fedora-coreos-kubevirt:testing-devel got pushed this morning, but that's the devel stream. The prod streams will exist when we do our next round of releases.

I'll comment here when that happens.

Thanks! I am already testing the testing-devel.

@dustymabe
Member

Docs landed in coreos/fedora-coreos-docs#528

@qinqon

qinqon commented May 5, 2023

It's there! The images are getting pushed as new builds come in. For example, quay.io/fedora/fedora-coreos-kubevirt:testing-devel got pushed this morning, but that's the devel stream. The prod streams will exist when we do our next round of releases.

I'll comment here when that happens.

@dustymabe, I have been testing testing-devel and rawhide, and it looks like neither of them has the qemu-guest-agent, which is quite useful for kubevirt. Do you know if they can be built with it installed and activated?

The one I was using for testing got it correctly configured:

[core@worker1 ~]$ systemctl status qemu-guest-agent
● qemu-guest-agent.service - QEMU Guest Agent
     Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; enabled;>
     Active: active (running) since Fri 2023-05-05 07:35:19 UTC; 29s ago
   Main PID: 1373 (qemu-ga)
      Tasks: 2 (limit: 2392)
     Memory: 1.9M
        CPU: 8ms
     CGroup: /system.slice/qemu-guest-agent.service
             └─1373 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio->

May 05 07:35:19 localhost systemd[1]: Started QEMU Guest Agent.

@cgwalters
Member

In CoreOS we only have one image across all platforms, so if we added the qemu guest agent it would appear everywhere. This has come up a few times, see #74 as well as coreos/afterburn#458

You could probably comment on #74 with what specific functionality you see missing.

@qinqon

qinqon commented May 5, 2023

In CoreOS we only have one image across all platforms, so if we added the qemu guest agent it would appear everywhere. This has come up a few times, see #74 as well as coreos/afterburn#458

You could probably comment on #74 with what specific functionality you see missing.

But why is qemu-guest-agent present in https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20230303.3.0/x86_64/fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2.xz?

@cgwalters
Member

But why is qemu-guest-agent present in https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20230303.3.0/x86_64/fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2.xz?

Well, you made me double check, but:

walters@toolbox /v/s/w/m/fcos> curl -L https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20230303.3.0/x86_64/fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2.xz | xz -d > fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  628M  100  628M    0     0  37.3M      0  0:00:16  0:00:16 --:--:-- 36.4M
fedora-coreos-38.20230414.3.0-qemu.x86_64.qcow2
walters@toolbox /v/s/w/m/fcos> cosa run --qemu-image fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2 
Fedora CoreOS 37.20230303.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

Last login: Fri May  5 12:42:36 2023
[core@cosa-devsh ~]$ rpm -qa|grep -i qemu
[core@cosa-devsh ~]$ 

But again the thing that's really important to understand here is that for us what's in the disk images (qcow2, AMI) is 95% just a "shell" around the container image which is what we use for OS updates. (Yes, today FCOS uses ostree native but it's helpful to still think of the OS content this way)

IOW you can do:

$ podman run --pull=always --rm -ti quay.io/fedora/fedora-coreos:stable
bash-5.2# rpm -qa|grep -i qemu
bash-5.2# 

And whatever you see there is the same stuff exactly that is in the disk image when you boot it.

@qinqon

qinqon commented May 5, 2023

But why is qemu-guest-agent present in https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20230303.3.0/x86_64/fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2.xz?

Well, you made me double check, but:

walters@toolbox /v/s/w/m/fcos> curl -L https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20230303.3.0/x86_64/fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2.xz | xz -d > fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  628M  100  628M    0     0  37.3M      0  0:00:16  0:00:16 --:--:-- 36.4M
fedora-coreos-38.20230414.3.0-qemu.x86_64.qcow2
walters@toolbox /v/s/w/m/fcos> cosa run --qemu-image fedora-coreos-37.20230303.3.0-qemu.x86_64.qcow2 
Fedora CoreOS 37.20230303.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

Last login: Fri May  5 12:42:36 2023
[core@cosa-devsh ~]$ rpm -qa|grep -i qemu
[core@cosa-devsh ~]$ 

But again the thing that's really important to understand here is that for us what's in the disk images (qcow2, AMI) is 95% just a "shell" around the container image which is what we use for OS updates. (Yes, today FCOS uses ostree native but it's helpful to still think of the OS content this way)

IOW you can do:

$ podman run --pull=always --rm -ti quay.io/fedora/fedora-coreos:stable
bash-5.2# rpm -qa|grep -i qemu
bash-5.2# 

And whatever you see there is the same stuff exactly that is in the disk image when you boot it.

I think I cooked the qemu-guest-agent in, pushed it to my quay.io/ellorent repo and forgot about it, sorry about this.

@dustymabe
Member

The fix for this went into next stream release 38.20230514.1.0. Please try out the new release and report issues.

@dustymabe
Member

The fix for this went into testing stream release 38.20230514.2.0. Please try out the new release and report issues.

@dustymabe
Member

The fix for this went into stable stream release 38.20230430.3.1.

@dustymabe dustymabe removed status/pending-testing-release Fixed upstream. Waiting on a testing release. status/pending-stable-release Fixed upstream and in testing. Waiting on stable release. status/pending-next-release Fixed upstream. Waiting on a next release. labels May 16, 2023