
core: use manged_path for running VMs with MBS devices #104

Merged: bennyz merged 4 commits into oVirt:master from mbs-run-path on Jun 2, 2022

Conversation

bennyz
Member

@bennyz bennyz commented Feb 28, 2022

No description provided.

@muliby-lb

I'm happy to confirm that the latest drop passed a sniff test with Lightbits LightOS! One VM, one volume, so not exactly comprehensive testing yet, but a great step forward!
(screenshot attached in the original comment)

@bennyz bennyz changed the title WIP core: use run_path for running VMs with MBS devices WIP core: use maanged_path for running VMs with MBS devices Apr 20, 2022
@bennyz bennyz changed the title WIP core: use maanged_path for running VMs with MBS devices WIP core: use manged_path for running VMs with MBS devices Apr 20, 2022
@muliby-lb

Hi @bennyz, any updates on when this one will be merged or what else is needed before it can be merged?

@bennyz bennyz force-pushed the mbs-run-path branch 2 times, most recently from 3e592fb to 837e8ce on May 9, 2022 13:43
Send the storage domain id to vdsm when attaching an MBS volume. This
will be useful in case we need to distinguish between two volumes with
the same id (live storage migration for example).

Signed-off-by: Benny Zlotnik <bzlotnik@redhat.com>
Since multiple volumes with the same id can be attached to the same host
(live storage migration for example), we now send the sd_id as well to
be able to distinguish between them.

Signed-off-by: Benny Zlotnik <bzlotnik@redhat.com>
@bennyz bennyz changed the title WIP core: use manged_path for running VMs with MBS devices core: use manged_path for running VMs with MBS devices May 19, 2022
@bennyz bennyz requested a review from sleviim May 19, 2022 12:41
When copying from a traditional domain to an MBS domain we create a new
ManagedBlockStorageDisk that is based on the source disk with updated
parameters.

Currently createManagedBlockDiskFromDiskImage adds the target
Storage Domain to the target's disk list, however, this list already
contains the source domain, which makes the disk present on two Storage
Domains.

This patch overrides the original disk's Storage Domain list.
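
A minimal sketch of that override, assuming the ovirt-engine disk model (DiskImage, Guid and the getStorageIds/setStorageIds accessors come from the engine code base; the helper itself is hypothetical):

```java
import java.util.ArrayList;

import org.ovirt.engine.core.common.businessentities.storage.DiskImage;
import org.ovirt.engine.core.compat.Guid;

class DiskDomainFixup {
    // Hypothetical helper: place the disk on the target MBS domain only.
    static void overrideStorageDomain(DiskImage disk, Guid targetSdId) {
        ArrayList<Guid> storageIds = new ArrayList<>();
        storageIds.add(targetSdId);
        // Overwrite rather than append: disk.getStorageIds().add(targetSdId)
        // would keep the source domain in the list, making the disk appear
        // present on two Storage Domains at once.
        disk.setStorageIds(storageIds);
    }
}
```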
@bennyz bennyz requested a review from didib as a code owner May 26, 2022 08:07
@bennyz
Member Author

bennyz commented May 26, 2022

/ost

@nirs nirs added this to the ovirt-4.5.1 milestone May 26, 2022
Member

@nirs nirs left a comment


If it is possible, I would split this into 2 PRs:

  1. Use managed_path
  2. Specify sd_id

The first should be enough to test this feature. The second is needed so someone can add LSM in the future.

    path = (String) managedBlockStorageDisk.getDevice().get(DeviceInfoReturn.MANAGED_PATH);
} else {
    path = (String) managedBlockStorageDisk.getDevice().get(DeviceInfoReturn.PATH);
}
Member

How does this work when we upgrade the cluster version while the VM is running?

The VM will start using "path". After the upgrade, if we migrate the VM, we will use "managed_path", and the migration will likely fail.

Can we always use "managed_path"?

Member Author

What about older cluster levels, if we do live migration in a 4.6 cluster for example?

Member

Migration to/from a 4.6 cluster will not work with managed_path. But since this is tech preview we don't need to care about it.

If we want to avoid this issue we can use managed_path only for lightos, and continue to use path for rbd and mpath. But this means we carry two ways to work with MBS storage, which is bad. I prefer to break this now and not later.

Member

Now it is clear: we use managed_path for cluster level 4.7, and path for older cluster levels, since vdsm in older versions did not support managed_path.
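
A minimal sketch of that selection logic, assuming the engine's compat API (Version.greaterOrEquals exists in ovirt-engine; the v4_7 constant and the surrounding variables are taken on faith here):

```java
import org.ovirt.engine.core.compat.Version;

// Sketch: choose the device path reported by vdsm according to the
// cluster compatibility level. managed_path is only reported by vdsm
// for cluster level 4.7 and later; older levels fall back to path.
String path;
if (cluster.getCompatibilityVersion().greaterOrEquals(Version.v4_7)) {
    path = (String) managedBlockStorageDisk.getDevice().get(DeviceInfoReturn.MANAGED_PATH);
} else {
    path = (String) managedBlockStorageDisk.getDevice().get(DeviceInfoReturn.PATH);
}
```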

             // For rbd we need to pass the entire path since we rely on more than a single
             // variable e.g: /dev/rbd/<pool-name>/<vol-name>
-            metadata = Collections.singletonMap("RBD", path);
+            metadata.put("RBD", path);
Member

So before, we started with an empty map and then replaced it with Collections.singletonMap?
Using metadata looks good, but is this change needed to support managed_path, or a cleanup that we can do later?

Member Author

Unfortunately it is not a cleanup; singletonMap was incorrect because it's immutable, which would prevent setting metadata.put("managed", "true"); later on.
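
A self-contained demonstration of that immutability, using only plain JDK collections (the paths and values are placeholders):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SingletonMapDemo {
    public static void main(String[] args) {
        // Collections.singletonMap returns an immutable single-entry map.
        Map<String, String> metadata = Collections.singletonMap("RBD", "/dev/rbd/pool/vol");
        try {
            metadata.put("managed", "true"); // always fails on a singletonMap
        } catch (UnsupportedOperationException e) {
            System.out.println("singletonMap rejects put()");
        }

        // A mutable HashMap allows adding the extra keys later on.
        Map<String, String> mutable = new HashMap<>();
        mutable.put("RBD", "/dev/rbd/pool/vol");
        mutable.put("managed", "true");
        System.out.println(mutable); // {RBD=/dev/rbd/pool/vol, managed=true}
    }
}
```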

"GUID", (String)attachment.get(DeviceInfoReturn.SCSI_WWN),
"managed", "true"
);
metadata.put("GUID", (String)attachment.get(DeviceInfoReturn.SCSI_WWN));
Member

This is also nicer, but is this required to get this change merged?

         }

+        metadata.put("managed", "true");
Member

Isn't this the only required change?

@@ -142,7 +142,9 @@ public VDSReturnValue attachManagedBlockStorageDisk(ManagedBlockStorageDisk disk

Member

Change makes sense - but it is required only to allow LSM, which is not supported yet.
Can we defer this change so we can merge the lightos support sooner?

Until we do this change, all disks will use a blank uuid. Adding the sd id is likely
wanted also in 4.5.1, but we can add it after the basics are merged, no?

Member Author

The vdsm PR included these changes so I tested them all together; I can separate it now since it's merged in vdsm.
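
For context, a hypothetical sketch of what passing the sd_id alongside the volume could look like on the engine side (the parameters class name appears in the hunk below, but this constructor signature is invented for illustration):

```java
// Hypothetical illustration only; the real constructor may differ.
// The engine now sends the storage domain id together with the disk id,
// so vdsm can tell apart two attachments of the same volume id
// (e.g. source and destination during live storage migration).
AttachManagedBlockStorageVolumeVDSCommandParameters params =
        new AttachManagedBlockStorageVolumeVDSCommandParameters(
                vds.getId(),                    // host the VM runs on
                disk.getId(),                   // volume id, may repeat across domains
                disk.getStorageIds().get(0));   // sd_id, previously a blank Guid
```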

@@ -76,6 +76,7 @@ private VDSReturnValue detachVolume() {
AttachManagedBlockStorageVolumeVDSCommandParameters params =
Member

This is similar to the previous commit adding sd_id, no?

@bennyz bennyz merged commit 0e5d532 into oVirt:master Jun 2, 2022
@bennyz bennyz deleted the mbs-run-path branch June 2, 2022 07:24