feat: add the ability to clone to non-shared storage on different nodes #178
Conversation
Awesome! Thanks for the PR @wbpascal, I'll review it in the next few days. In the meantime, would you mind addressing the linter issue(s)?
Great work @wbpascal! I like the comments in the code, and the way you structured it. The provider's code base is not the best to work with, but that's what we have.
I can't really test this change on multiple nodes, as I don't have a clustered PVE in my lab, but my test on a single node has failed. Can you please take a look at my comments on the PR?
proxmox/virtual_environment_vm.go (Outdated)
@@ -163,6 +164,39 @@ func (c *VirtualEnvironmentClient) GetVMStatus(ctx context.Context, nodeName str
	return resBody.Data, nil
}

// MigrateVMAsync migrates a virtual machine.
Suggested change:
- // MigrateVMAsync migrates a virtual machine.
+ // MigrateVM migrates a virtual machine.
@@ -373,6 +376,13 @@ func resourceVirtualEnvironmentVM() *schema.Resource {
	ForceNew: true,
	Default:  dvResourceVirtualEnvironmentVMCloneNodeName,
},
mkResourceVirtualEnvironmentVMCloneTempDatastoreID: {
Could you please add the new argument to the doc?
https://github.com/bpg/terraform-provider-proxmox/blob/main/docs/resources/virtual_environment_vm.md
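(For reference, a minimal sketch of how such an argument is typically declared with the Terraform plugin SDK; the description text and the default value below are assumptions for illustration, only the argument itself comes from this PR.)

package vm

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// cloneTempDatastoreIDSchema sketches the new temp_datastore_id clone argument.
// The description and default here are illustrative guesses, not the merged code.
func cloneTempDatastoreIDSchema() *schema.Schema {
	return &schema.Schema{
		Type:        schema.TypeString,
		Description: "The identifier of a temporary datastore on the source node used for the intermediate clone",
		Optional:    true,
		ForceNew:    true,
		Default:     "",
	}
}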
@@ -30,6 +31,22 @@ func (c *VirtualEnvironmentClient) DeleteDatastoreFile(ctx context.Context, node
	return nil
}

// ListDatastoreFiles gets status information for a given datastore.
Suggested change:
- // ListDatastoreFiles gets status information for a given datastore.
+ // GetDatastoreStatus gets status information for a given datastore.
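(Aside: the renamed method is the status lookup the new clone logic keys off, since the response tells whether a datastore is shared. A rough, self-contained sketch of such a lookup against the documented /nodes/{node}/storage/{storage}/status endpoint follows; the struct fields and the token-based auth are simplified assumptions, not the provider's actual client code.)

package datastores

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// DatastoreStatus mirrors a subset of the status response; Proxmox tends to
// encode booleans such as "shared" as 0/1 integers.
type DatastoreStatus struct {
	Shared int   `json:"shared"`
	Total  int64 `json:"total"`
	Used   int64 `json:"used"`
	Avail  int64 `json:"avail"`
}

// GetDatastoreStatus fetches status information for a given datastore on a node.
func GetDatastoreStatus(ctx context.Context, client *http.Client, endpoint, token, nodeName, datastoreID string) (*DatastoreStatus, error) {
	reqURL := fmt.Sprintf("%s/api2/json/nodes/%s/storage/%s/status",
		endpoint, url.PathEscape(nodeName), url.PathEscape(datastoreID))

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, reqURL, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "PVEAPIToken="+token)

	res, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()

	// The Proxmox API wraps payloads in a "data" envelope.
	var body struct {
		Data *DatastoreStatus `json:"data"`
	}
	if err := json.NewDecoder(res.Body).Decode(&body); err != nil {
		return nil, err
	}

	return body.Data, nil
}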
@@ -1163,10 +1174,52 @@ func resourceVirtualEnvironmentVMCreateClone(ctx context.Context, d *schema.Reso

	cloneTimeout := d.Get(mkResourceVirtualEnvironmentVMTimeoutClone).(int)

	if cloneNodeName != "" && cloneNodeName != nodeName {
		datastoreStatus, err := veClient.GetDatastoreStatus(ctx, cloneNodeName, cloneDatastoreID)
If cloneNodeName is empty, the veClient.GetDatastoreStatus call will fail. I would leave the if statement that was here before and put the logic under it. This will also allow removing the duplicate condition on lines 1189 and 1196, i.e.:

if cloneNodeName != "" && cloneNodeName != nodeName {
	...
	if cloneDatastoreShared {
		// simple clone here
	} else {
		// clone to temp then migrate
	}
} else {
	// simple clone on the same node
}
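(Filled in with the helpers and variables visible in this diff, the suggested structure could read roughly as below; the Shared field and the exact CloneVM/MigrateVM signatures are assumptions, not the final code.)

if cloneNodeName != "" && cloneNodeName != nodeName {
	// Cross-node clone: check whether the target datastore is shared.
	datastoreStatus, err := veClient.GetDatastoreStatus(ctx, cloneNodeName, cloneDatastoreID)
	if err != nil {
		return diag.FromErr(err)
	}
	cloneDatastoreShared := datastoreStatus.Shared

	if cloneDatastoreShared {
		// Simple clone: Proxmox can place the clone on the target node
		// directly (cloneBody is assumed to carry the target node here).
		err = veClient.CloneVM(ctx, cloneNodeName, cloneVMID, cloneRetries, cloneBody, cloneTimeout)
		if err != nil {
			return diag.FromErr(err)
		}
	} else {
		// Clone to temp, then migrate: the clone API refuses non-shared
		// target storage on another node, so create the clone on the
		// source node first (cloneBody without a target node)...
		err = veClient.CloneVM(ctx, cloneNodeName, cloneVMID, cloneRetries, cloneBody, cloneTimeout)
		if err != nil {
			return diag.FromErr(err)
		}

		// ...then move the new VM and its disks to the target node, with
		// migrateBody carrying the target storage as shown further down.
		err = veClient.MigrateVM(ctx, cloneNodeName, vmID, migrateBody, cloneTimeout)
		if err != nil {
			return diag.FromErr(err)
		}
	}
} else {
	// Simple clone on the same node, as before.
	err := veClient.CloneVM(ctx, nodeName, cloneVMID, cloneRetries, cloneBody, cloneTimeout)
	if err != nil {
		return diag.FromErr(err)
	}
}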
	// Temporarily clone to local node
	err = veClient.CloneVM(ctx, cloneNodeName, cloneVMID, cloneRetries, cloneBody, cloneTimeout)

	if err != nil {
Just a small style thing: I would keep the error check right on the next line after the call that produced the error, here and on line 1178.
I think there is already a linting rule for that, but I have most of them disabled at the moment. You know, the original code is not super great.
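(Concretely, the requested style, shown on one of the calls from this diff:)

err = veClient.CloneVM(ctx, cloneNodeName, cloneVMID, cloneRetries, cloneBody, cloneTimeout)
if err != nil { // checked immediately on the next line, no blank line in between
	return diag.FromErr(err)
}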
Thanks for the review @bpg! This kind of opened another can of worms, because I remembered that a source VM may use multiple different datastores, of which only some may be shared while others may not. To properly decide whether the workaround needs to be used, all datastores used by the source VM therefore have to be iterated over. It is also probably in our interest not to move any disks to different storages until we absolutely have to (e.g. disks that are already on a shared storage), so I modified the workaround to not use specific target datastores where possible. This should work in most cases, except in some where local datastores may differ between nodes; I have documented this case and how to work around it. This also means that the …
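(A rough sketch of the check described above; vmDiskDatastoreIDs, i.e. the datastores referenced by the source VM's disks, and the Shared field are placeholders for whatever the final code actually uses.)

needsWorkaround := false

for _, diskDatastoreID := range vmDiskDatastoreIDs {
	datastoreStatus, err := veClient.GetDatastoreStatus(ctx, cloneNodeName, diskDatastoreID)
	if err != nil {
		return diag.FromErr(err)
	}

	// A single disk on non-shared storage is enough to make a plain
	// cross-node clone fail, so the clone-then-migrate workaround is needed.
	if !datastoreStatus.Shared {
		needsWorkaround = true
		break
	}
}

if needsWorkaround {
	// Clone on the source node first, then migrate; disks that already sit
	// on shared storage stay where they are, so no explicit target
	// datastore is passed unless the user configured one.
}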
		migrateBody.TargetStorage = &cloneDatastoreID
	}

	err = veClient.MigrateVM(ctx, cloneNodeName, vmID, migrateBody, cloneTimeout)
My local linter says that this is an ineffectual assignment to err. I don't get this, because if I understand this linter error correctly, it indicates that err is definitely overwritten after this assignment before it is read the next time (the if on line 1232). I can't see how this is possible. Can you please help me with this, @bpg?
This means the value of err is not used/checked. Just add

if err != nil {
	return diag.FromErr(err)
}

on line 1228, and it should be good to go.
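(Put together with the MigrateVM call from the diff above, the corrected tail of that branch would then read roughly:)

err = veClient.MigrateVM(ctx, cloneNodeName, vmID, migrateBody, cloneTimeout)
if err != nil {
	return diag.FromErr(err)
}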
I think I have fixed every outstanding issue now. Could you please take another look when you have time, @bpg?
Introduction
As mentioned in various forum posts, e.g. on the Proxmox forums (https://forum.proxmox.com/threads/500-cant-clone-to-non-shared-storage-local.49078/#post-229727), the Proxmox API currently does not support using the clone API to clone a VM to a different node if the target datastore is non-shared. The linked post also mentions a workaround for this limitation: first clone the VM to the local node (with a non-shared datastore) and then migrate it to the target node and datastore using the migrate API.
I have added this workaround to the provider under the specific condition that the user wants to clone a VM to a different node than the source node and the target datastore is non-shared.
Because we first have to clone the VM on the source node, I have also added a new argument temp_datastore_id to the clone block. With this argument, the user can specify the local datastore on the source node to which the VM is cloned in the first step, before the migration.

Please be aware that this is my first time writing code in Go, so I am happy about any ideas and suggestions for improvements.
Community Note
Relates OR Closes #0000