reenable paused live migration #777

Conversation

Contributor

@SeanMooney SeanMooney commented May 30, 2024

This commit unskips paused live migration for the local storage job
Related: OSPRH-7198
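
For context, the scenario being re-enabled pauses a server and then live-migrates it while paused, expecting it to land on the destination host still in PAUSED state. A minimal sketch of that flow with the openstack CLI (server name hypothetical):

# pause the instance, then live-migrate it while paused
openstack server pause test-vm
openstack server migrate --live-migration --block-migration test-vm
# after the migration completes, the status should still be PAUSED
openstack server show test-vm -f value -c status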


Build failed (check pipeline). Post "recheck" (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/ba0fc244b04449d1bd9d60211970c1a3

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 18m 58s
✔️ nova-operator-kuttl SUCCESS in 46m 46s
❌ nova-operator-tempest-multinode FAILURE in 2h 26m 30s
✔️ nova-operator-tempest-multinode-ceph SUCCESS in 2h 58m 58s (non-voting)

Contributor

@bogdando bogdando left a comment


/lgtm

Contributor

openshift-ci bot commented Jun 11, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bogdando, SeanMooney

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [SeanMooney,bogdando]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

raukadah added commits to raukadah/tcib that referenced this pull request (Jun 18 – Jun 23, 2024)
openstack-k8s-operators/ci-framework#1892 adds a
meta content provider job to test opendev and github changes together.

This PR replaces the tcib content provider with the meta content provider,
allowing users to test content from different sources.

Depends-On: https://review.opendev.org/c/openstack/nova/+/921687
Depends-On: openstack-k8s-operators/nova-operator#777
Depends-On: openstack-k8s-operators/ci-framework#1892
Depends-On: openstack-k8s-operators/edpm-image-builder#28

Signed-off-by: Chandan Kumar <raukadah@gmail.com>
raukadah added a commit to raukadah/tcib that referenced this pull request Jun 24, 2024
Contributor

openshift-ci bot commented Jul 10, 2024

New changes are detected. LGTM label has been removed.


Build failed (check pipeline). Post "recheck" (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/2cd4749291e84815a7200ab0fa31b532

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 48m 52s
✔️ nova-operator-kuttl SUCCESS in 44m 57s
❌ nova-operator-tempest-multinode FAILURE in 1h 17m 47s
❌ nova-operator-tempest-multinode-ceph FAILURE in 1h 28m 17s


Build failed (check pipeline). Post "recheck" (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/6d5f26cb3c7b41fe8c4ede0d3f2408de

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 18m 43s
✔️ nova-operator-kuttl SUCCESS in 44m 54s
❌ nova-operator-tempest-multinode FAILURE in 1h 25m 28s
✔️ nova-operator-tempest-multinode-ceph SUCCESS in 2h 32m 20s


Build failed (check pipeline). Post "recheck" (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/b3a5861f54724210a9f69892541def9b

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 45m 06s
✔️ nova-operator-kuttl SUCCESS in 47m 24s
❌ nova-operator-tempest-multinode FAILURE in 1h 22m 43s
✔️ nova-operator-tempest-multinode-ceph SUCCESS in 2h 56m 54s

@SeanMooney
Contributor Author

check-rdo tempest did not run


Build failed (check pipeline). Post "recheck" (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/49bddecf00d244869f75e73b5dd89689

✔️ openstack-k8s-operators-content-provider SUCCESS in 3h 12m 21s
✔️ nova-operator-kuttl SUCCESS in 47m 17s
❌ nova-operator-tempest-multinode FAILURE in 2h 00m 37s
✔️ nova-operator-tempest-multinode-ceph SUCCESS in 2h 54m 11s

@gibizer
Contributor

gibizer commented Jul 12, 2024

https://logserver.rdoproject.org/77/777/79ac6ffd12c34a20dc60c1e96b645c0d38b2eff7/github-check/nova-operator-tempest-multinode/4868a83/controller/ci-framework-data/tests/test_operator/tempest-tests/stestr_results.html

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/api/compute/admin/test_live_migration.py", line 161, in test_live_block_migration_paused
    self._test_live_migration(state='PAUSED')
  File "/usr/lib/python3.9/site-packages/tempest/api/compute/admin/test_live_migration.py", line 141, in _test_live_migration
    self._live_migrate(server_id, source_host, state, volume_backed)
  File "/usr/lib/python3.9/site-packages/tempest/api/compute/admin/test_live_migration.py", line 80, in _live_migrate
    waiters.wait_for_server_status(self.servers_client, server_id, state)
  File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 80, in wait_for_server_status
    raise exceptions.BuildErrorException(details, server_id=server_id)
tempest.exceptions.BuildErrorException: Server 7b7edae6-f878-4756-a858-86f836d02c33 failed to build and is in ERROR status
Details:
Body: {"os-migrateLive": {"host": "compute-2.ci-rdo.local", "block_migration": "auto"}}
    Response - Headers: {'date': 'Thu, 11 Jul 2024 22:10:44 GMT', 'server': 'Apache', 'content-length': '0', 'openstack-api-version': 'compute 2.25', 'x-openstack-nova-api-version': '2.25', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-590602d0-1d3d-4ac9-b000-cdd98a2400b4', 'x-compute-request-id': 'req-590602d0-1d3d-4ac9-b000-cdd98a2400b4', 'content-type': 'application/json', 'set-cookie': '0dc6017b143850df8350099417b4ec9f=0f09f76195e02870d3607f0512d6feaf; path=/; HttpOnly; Secure; SameSite=None', 'connection': 'close', 'status': '202', 'content-location': 'https://nova-public-openstack.apps-crc.testing/v2.1/servers/7b7edae6-f878-4756-a858-86f836d02c33/action'}
❯ grep req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 sosreport-compute-*/**/messages | grep ERROR
sosreport-compute-1-2024-07-11-rujznhs/var/log/messages:Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.946 2 ERROR nova.virt.libvirt.driver [None req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 3de117d20e5b45c1ba86c8136c841ef3 eab438b17a2a4575a3abff1f78afc4ff - - default default] [instance: 7b7edae6-f878-4756-a858-86f836d02c33] Live Migration failure: operation failed: domain is not running: libvirt.libvirtError: operation failed: domain is not running#033[00m
sosreport-compute-1-2024-07-11-rujznhs/var/log/messages:Jul 11 18:11:06 np0004787595 nova_compute[129891]: 2024-07-11 22:11:06.965 2 ERROR nova.compute.manager [None req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 3de117d20e5b45c1ba86c8136c841ef3 eab438b17a2a4575a3abff1f78afc4ff - - default default] [instance: 7b7edae6-f878-4756-a858-86f836d02c33] Post live migration at destination compute-2.ci-rdo.local failed: nova.exception_Remote.InstanceNotFound_Remote: Instance 7b7edae6-f878-4756-a858-86f836d02c33 could not be found.
sosreport-compute-2-2024-07-11-hiwjsys/var/log/messages:Jul 11 18:11:06 np0004787596 nova_compute[129411]: 2024-07-11 22:11:06.756 2 ERROR nova.compute.manager [None req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 3de117d20e5b45c1ba86c8136c841ef3 eab438b17a2a4575a3abff1f78afc4ff - - default default] [instance: 7b7edae6-f878-4756-a858-86f836d02c33] Unexpected error during post live migration at destination host.: nova.exception.InstanceNotFound: Instance 7b7edae6-f878-4756-a858-86f836d02c33 could not be found.#033[00m
sosreport-compute-2-2024-07-11-hiwjsys/var/log/messages:Jul 11 18:11:06 np0004787596 nova_compute[129411]: 2024-07-11 22:11:06.956 2 ERROR oslo_messaging.rpc.server [None req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 3de117d20e5b45c1ba86c8136c841ef3 eab438b17a2a4575a3abff1f78afc4ff - - default default] Exception during message handling: nova.exception.InstanceNotFound: Instance 7b7edae6-f878-4756-a858-86f836d02c33 could not be found.

it seems qemu core dumped during the migration:

Jul 11 18:10:59 np0004787595 virtqemud[129507]: QEMU_MONITOR_RECV_EVENT: mon=0x7f91e00095d0 event={"timestamp": {"seconds": 1720735859, "microseconds": 360014}, "event": "MIGRATION", "data": {"status": "device"}}
Jul 11 18:10:59 np0004787595 virtqemud[129507]: QEMU_MONITOR_RECV_EVENT: mon=0x7f91e00095d0 event={"timestamp": {"seconds": 1720735859, "microseconds": 360555}, "event": "MIGRATION_PASS", "data": {"pass": 3}}
Jul 11 18:10:59 np0004787595 systemd[1]: Created slice Slice /system/systemd-coredump.
Jul 11 18:10:59 np0004787595 systemd[1]: Started Process Core Dump (PID 137551/UID 0).
Jul 11 18:10:59 np0004787595 systemd-coredump[137552]: Resource limits disable core dumping for process 137492 (qemu-kvm).
Jul 11 18:10:59 np0004787595 systemd-coredump[137552]: Process 137492 (qemu-kvm) of user 107 dumped core.
Jul 11 18:10:59 np0004787595 systemd[1]: systemd-coredump@0-137551-0.service: Deactivated successfully.
Jul 11 18:10:59 np0004787595 kernel: device tap38ce0ca4-3a left promiscuous mode
Jul 11 18:10:59 np0004787595 NetworkManager[5945]: <info>  [1720735859.5564] device (tap38ce0ca4-3a): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00057|binding|INFO|Releasing lport 38ce0ca4-3ade-46f8-99b8-78ba6843ed37 from this chassis (sb_readonly=0)
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00058|binding|INFO|Setting lport 38ce0ca4-3ade-46f8-99b8-78ba6843ed37 down in Southbound
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00059|binding|INFO|Removing iface tap38ce0ca4-3a ovn-installed in OVS
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jul 11 18:10:59 np0004787595 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jul 11 18:10:59 np0004787595 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Consumed 1.881s CPU time.
Jul 11 18:10:59 np0004787595 systemd-machined[104609]: Machine qemu-6-instance-0000000c terminated.
Jul 11 18:10:59 np0004787595 virtqemud[129507]: QEMU_MONITOR_CLOSE: mon=0x7f91e00095d0
Jul 11 18:10:59 np0004787595 kernel: device tap38ce0ca4-3a entered promiscuous mode
Jul 11 18:10:59 np0004787595 virtqemud[129507]: delete device: 'tap38ce0ca4-3a'
Jul 11 18:10:59 np0004787595 kernel: device tap38ce0ca4-3a left promiscuous mode
Jul 11 18:10:59 np0004787595 NetworkManager[5945]: <info>  [1720735859.8294] manager: (tap38ce0ca4-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/23)
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00060|binding|INFO|Claiming lport 38ce0ca4-3ade-46f8-99b8-78ba6843ed37 for this chassis.
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00061|binding|INFO|38ce0ca4-3ade-46f8-99b8-78ba6843ed37: Claiming fa:16:3e:41:60:15 10.100.0.8
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00062|binding|INFO|Setting lport 38ce0ca4-3ade-46f8-99b8-78ba6843ed37 ovn-installed in OVS
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00063|if_status|INFO|Not setting lport 38ce0ca4-3ade-46f8-99b8-78ba6843ed37 down as sb is readonly
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jul 11 18:10:59 np0004787595 virtnodedevd[129675]: The interface 'tap38ce0ca4-3a' was removed before we could query it.
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jul 11 18:10:59 np0004787595 virtqemud[129507]: operation failed: domain is not running
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jul 11 18:10:59 np0004787595 ovn_controller[75600]: 2024-07-11T22:10:59Z|00064|binding|INFO|Releasing lport 38ce0ca4-3ade-46f8-99b8-78ba6843ed37 from this chassis (sb_readonly=0)
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.876 2 DEBUG nova.virt.libvirt.guest [None req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 3de117d20e5b45c1ba86c8136c841ef3 eab438b17a2a4575a3abff1f78afc4ff - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.876 2 INFO nova.virt.libvirt.driver [None req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 3de117d20e5b45c1ba86c8136c841ef3 eab438b17a2a4575a3abff1f78afc4ff - - default default] [instance: 7b7edae6-f878-4756-a858-86f836d02c33] Migration operation has completed#033[00m
Jul 11 18:10:59 np0004787595 nova_compute[129891]: 2024-07-11 22:10:59.877 2 INFO nova.compute.manager [None req-590602d0-1d3d-4ac9-b000-cdd98a2400b4 3de117d20e5b45c1ba86c8136c841ef3 eab438b17a2a4575a3abff1f78afc4ff - - default default] [instance: 7b7edae6-f878-4756-a858-86f836d02c33] _post_live_migration() is started..#033[00m

from the qemu log:

char device redirected to /dev/pts/1 (label charserial0)
2024-07-11T22:10:31.275160Z qemu-kvm: warning: Deprecated CPU topology (considered invalid): Unsupported clusters parameter mustn't be specified as 1
warning: old compression is deprecated; use multifd compression methods instead
warning: old compression is deprecated; use multifd compression methods instead
warning: old compression is deprecated; use multifd compression methods instead
warning: block migration is deprecated; use blockdev-mirror with NBD instead
2024-07-11 22:10:39.607+0000: Domain id=6 is tainted: custom-monitor
2024-07-11 22:10:59.046+0000: initiating migration
qemu-kvm: ../block.c:6979: int bdrv_inactivate_recurse(BlockDriverState *): Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
2024-07-11 22:10:59.810+0000: shutting down, reason=crashed

@gibizer
Contributor

gibizer commented Jul 12, 2024

❯ cat sosreport-compute-1-2024-07-11-rujznhs/sos_commands/coredump/coredumpctl_dump
           PID: 137492 (qemu-kvm)
           UID: 107 (qemu)
           GID: 107 (qemu)
        Signal: 6 (ABRT)
     Timestamp: Thu 2024-07-11 22:10:59 UTC (41min ago)
  Command Line: /usr/libexec/qemu-kvm -name guest=instance-0000000c,debug-threads=on -S -object $'{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-6-instance-0000000c/master-key.aes"}' -machine pc-q35-rhel9.4.0,usb=off,dump-guest-core=off,memory-backend=pc.ram,hpet=off,acpi=on -accel kvm -cpu Nehalem,x2apic=on,hypervisor=on,vme=on -m size=131072k -object $'{"qom-type":"memory-backend-ram","id":"pc.ram","size":134217728}' -overcommit mem-lock=off -smp 1,sockets=1,dies=1,clusters=1,cores=1,threads=1 -uuid 7b7edae6-f878-4756-a858-86f836d02c33 -smbios $'type=1,manufacturer=RDO,product=OpenStack Compute,version=27.4.0-0.20240709131752.f732f84.el9,serial=7b7edae6-f878-4756-a858-86f836d02c33,uuid=7b7edae6-f878-4756-a858-86f836d02c33,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=30,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on -device $'{"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"}' -device $'{"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"}' -device $'{"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"}' -device $'{"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"}' -device $'{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"}' -device $'{"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x2.0x5"}' -device $'{"driver":"pcie-root-port","port":22,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x2.0x6"}' -device $'{"driver":"pcie-root-port","port":23,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x2.0x7"}' -device $'{"driver":"pcie-root-port","port":24,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x3"}' -device $'{"driver":"pcie-root-port","port":25,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x3.0x1"}' -device $'{"driver":"pcie-root-port","port":26,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x3.0x2"}' -device $'{"driver":"pcie-root-port","port":27,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x3.0x3"}' -device $'{"driver":"pcie-root-port","port":28,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x3.0x4"}' -device $'{"driver":"pcie-root-port","port":29,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x3.0x5"}' -device $'{"driver":"pcie-root-port","port":30,"chassis":15,"id":"pci.15","bus":"pcie.0","addr":"0x3.0x6"}' -device $'{"driver":"pcie-root-port","port":31,"chassis":16,"id":"pci.16","bus":"pcie.0","addr":"0x3.0x7"}' -device $'{"driver":"pcie-root-port","port":32,"chassis":17,"id":"pci.17","bus":"pcie.0","multifunction":true,"addr":"0x4"}' -device $'{"driver":"pcie-root-port","port":33,"chassis":18,"id":"pci.18","bus":"pcie.0","addr":"0x4.0x1"}' -device $'{"driver":"pcie-root-port","port":34,"chassis":19,"id":"pci.19","bus":"pcie.0","addr":"0x4.0x2"}' -device $'{"driver":"pcie-root-port","port":35,"chassis":20,"id":"pci.20","bus":"pcie.0","addr":"0x4.0x3"}' -device $'{"driver":"pcie-root-port","port":36,"chassis":21,"id":"pci.21","bus":"pcie.0","addr":"0x4.0x4"}' -device $'{"driver":"pcie-root-port","port":37,"chassis":22,"id":"pci.22","bus":"pcie.0","addr":"0x4.0x5"}' -device $'{"driver":"pcie-root-port","port":38,"chassis":23,"id":"pci.23","bus":"pcie.0","addr":"0x4.0x6"}' 
-device $'{"driver":"pcie-root-port","port":39,"chassis":24,"id":"pci.24","bus":"pcie.0","addr":"0x4.0x7"}' -device $'{"driver":"pcie-root-port","port":40,"chassis":25,"id":"pci.25","bus":"pcie.0","addr":"0x5"}' -device $'{"driver":"pcie-pci-bridge","id":"pci.26","bus":"pci.1","addr":"0x0"}' -device $'{"driver":"piix3-usb-uhci","id":"usb","bus":"pci.26","addr":"0x1"}' -blockdev $'{"driver":"file","filename":"/var/lib/nova/instances/_base/e27bc3abb019f3161ac0ff3371e6526de499cf64","node-name":"libvirt-3-storage","read-only":true,"cache":{"direct":true,"no-flush":false}}' -blockdev $'{"driver":"file","filename":"/var/lib/nova/instances/7b7edae6-f878-4756-a858-86f836d02c33/disk","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap","cache":{"direct":true,"no-flush":false}}' -blockdev $'{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":"libvirt-3-storage"}' -device $'{"driver":"virtio-blk-pci","bus":"pci.3","addr":"0x0","drive":"libvirt-2-format","id":"virtio-disk0","bootindex":1,"write-cache":"on"}' -blockdev $'{"driver":"file","filename":"/var/lib/nova/instances/7b7edae6-f878-4756-a858-86f836d02c33/disk.config","node-name":"libvirt-1-storage","read-only":true,"cache":{"direct":true,"no-flush":false}}' -device $'{"driver":"ide-cd","bus":"ide.0","drive":"libvirt-1-storage","id":"sata0-0-0","write-cache":"on"}' -netdev $'{"type":"tap","fd":"33","vhost":true,"vhostfd":"35","id":"hostnet0"}' -device $'{"driver":"virtio-net-pci","rx_queue_size":512,"host_mtu":1442,"netdev":"hostnet0","id":"net0","mac":"fa:16:3e:41:60:15","bus":"pci.2","addr":"0x0"}' -add-fd set=0,fd=32,opaque=serial0-log -chardev pty,id=charserial0,logfile=/dev/fdset/0,logappend=on -device $'{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' -device $'{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' -audiodev $'{"id":"audio1","driver":"none"}' -object $'{"qom-type":"tls-creds-x509","id":"vnc-tls-creds0","dir":"/etc/pki/qemu","endpoint":"server","verify-peer":true}' -vnc $'[::]:1,tls-creds=vnc-tls-creds0,audiodev=audio1' -device $'{"driver":"virtio-vga","id":"video0","max_outputs":1,"bus":"pcie.0","addr":"0x1"}' -global ICH9-LPC.noreboot=off -watchdog-action reset -incoming defer -device $'{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.4","addr":"0x0"}' -object $'{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' -device $'{"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.5","addr":"0x0"}' -device $'{"driver":"vmcoreinfo"}' -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
    Executable: /usr/libexec/qemu-kvm
 Control Group: /machine.slice/machine-qemu\x2d6\x2dinstance\x2d0000000c.scope/libvirt/emulator
          Unit: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope
         Slice: machine.slice
       Boot ID: 3b58a1c097fa4911871895351c43e367
    Machine ID: a18064704e5e8f1cdaa60a2f4b5c8b03
      Hostname: compute-1.ci-rdo.local
       Storage: none
       Message: Process 137492 (qemu-kvm) of user 107 dumped core.
Coredump entry has no core attached (neither internally in the journal nor externally on disk).
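
Incidentally, the journal above also noted "Resource limits disable core dumping for process 137492 (qemu-kvm)", which is why no core is attached here. For a local reproducer, libvirt's qemu.conf exposes a max_core setting to capture full cores from qemu; a sketch, assuming stock (non-containerized) file paths:

# let libvirt-spawned qemu processes write unlimited-size cores
sudo sed -i 's/^#\?max_core.*/max_core = "unlimited"/' /etc/libvirt/qemu.conf
# restart the modular qemu driver daemon so new domains pick up the limit
sudo systemctl restart virtqemud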

@gibizer
Contributor

gibizer commented Jul 16, 2024

It seems this is a known issue in qemu: it cannot support paused live migration without unpausing the instance. We enabled back-and-forth migration in this job, so we cannot re-enable paused live migration in it until https://issues.redhat.com/browse/RHEL-48801 is fixed.
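
In the meantime the scenario presumably stays on the job's tempest exclude list; a sketch of the equivalent tempest filter (regex hypothetical, the actual job configuration may express this differently):

# keep the paused block live-migration scenario out of the local-storage run
tempest run --exclude-regex 'test_live_block_migration_paused'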

@gibizer gibizer closed this Jul 16, 2024