keys must be loaded to remove top-level vdev #10939
Comments
You could start by checking what error code is being returned, e.g. with strace.
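Something along these lines would show the errno (pool and vdev names below are placeholders, not from this report):

```sh
# Trace only the ioctl syscalls issued by zpool and surface the failing
# return code; "tank" and "mirror-1" are hypothetical names.
sudo strace -f -e trace=ioctl zpool remove tank mirror-1 2>&1 | tail -n 20
```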
Looks like EACCES, but there's also some output that confuses me (because I truthfully am not at all experienced with strace output).
Am I just stupid and need to have all datasets mounted (keys loaded) to be able to remove a vdev? That doesn't make a lot of sense to me, given what I think I know about how ZFS works, but... idfk 😭
WELP. Turns out, that is indeed the issue. I mounted every dataset and re-ran the vdev remove, and it works fine.
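Roughly this sequence, with placeholder names standing in for my actual pool:

```sh
# Mount every dataset, loading encryption keys in the process (-l),
# then retry the top-level vdev removal. Names are placeholders.
sudo zfs mount -l -a
sudo zpool remove tank mirror-1
```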
Ah, right. The keys have to be loaded so that we can reset the ZIL logs, so that we don't write to the device that's being removed. If you didn't want to mount the filesystems, you could load the keys directly instead.
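Something like this, sketched with placeholder names (keys get loaded, nothing gets mounted):

```sh
# Load keys for every encrypted dataset without mounting anything,
# retry the removal, then optionally unload the keys afterwards.
sudo zfs load-key -a
sudo zpool remove tank mirror-1
sudo zfs unload-key -a
```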
I was going to look at updating the docs to include that requirement for a top-level removal, but ultimately wasn't sure whether it was a keys-loaded requirement or a mounted one. Appreciate the help, Matthew! I'll open an MR this weekend to include that requirement in the man pages. Agree with you, too, that the error message could definitely be improved.
This change is in relation to openzfs#10939
The error returned by `zpool remove` when the encryption keys aren't loaded isn't very helpful. Furthermore, the man pages make no mention that the keys need to be loaded. This change doesn't resolve the error message, but it does update the man page to mention this requirement.

Authored-by: grodik <pat@litke.dev>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #10939
Closes #10948
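For anyone double-checking the shipped docs later, the added note should be findable with something like this (a sketch; the zpool man pages were split into per-subcommand pages in later releases, hence the fallback):

```sh
# Search the zpool-remove man page (or the monolithic zpool page on
# older installs) for the keys-loaded requirement.
man zpool-remove 2>/dev/null | grep -i -A2 "key" \
  || man zpool | grep -i -A2 "key"
```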
System information
Describe the problem you're observing
I have a pool of mirrored drives. I am unable to remove any of the mirrors due to zpool reporting "permission denied". Yes, I am running the remove as root.
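A quick way to check whether any encryption keys are unloaded is the standard keystatus property ("tank" is a placeholder pool name; "unavailable" means the key is not loaded):

```sh
# List the key status of every dataset in the pool.
zfs get -r -t filesystem,volume keystatus tank
```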
Describe how to reproduce the problem
Honestly, I have no idea. I created a pool with mirrors and removed them successfully prior to creating this pool. All ashifts are 12 (and it would puke a different error anyway).
The vdevs all have matched disks within themselves (same make and model per vdev), but the vdevs are 3, 4, and 8 TB in size. All ashifts were set to 12 when the mirrors were added to the pool. They're all WD Reds. It's a dual-socket SuperMicro X10DRi-T4+ with 2x E5-2640 v3 and 128 GB of RAM (DDR4 @ 2133).
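The per-vdev ashift can be double-checked from the cached pool config, for what it's worth (placeholder pool name; zdb output formatting varies by version):

```sh
# Print the cached pool configuration and pull out each vdev's ashift.
zdb -C tank | grep ashift
```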
Include any warning/errors/backtraces from the system logs
Nothing from the kernel, just zfs refusing to listen to the bigger hammer 😢
What can I do to help further debug this issue?