Does ZFS not use GUIDs for drives (cache/spare)? #2155
Comments
I was able to fix it by issuing an import with -d /dev/mapper, scrubbing now.
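For context, the recovery step mentioned above is a plain `zpool import` pointed at the /dev/mapper directory. A minimal sketch; the pool name `tank` is a placeholder, not from this thread:

```sh
# Export the pool if it is partially imported, then re-import it,
# telling ZFS to scan /dev/mapper for the member devices.
zpool export tank
zpool import -d /dev/mapper tank

# Verify the pool came back with the expected device names, then scrub.
zpool status tank
zpool scrub tank
```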
@prometheanfire GUIDs are used for both cache and spare devices, and they can be used to remove them. To find their GUIDs, run
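The command referenced above is truncated in this copy of the thread. As a sketch of two ways to list vdev GUIDs on current OpenZFS (not necessarily the command the comment originally named; `tank` is a placeholder pool name):

```sh
# Show the pool layout with vdev GUIDs instead of device names.
zpool status -g tank

# Or dump the label of a specific device; the label includes its guid.
zdb -l /dev/mapper/cryptb1
```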
Thanks, it's fixed now, but I'll keep that in mind for next time (hopefully there is no next time). My question then is: even if the disks were reordered, why did ZFS not use the correct disks in the correct places? It had access to all the correct disks and could have matched each one to its function by GUID.
@prometheanfire The issue is that we need to open the devices by path under Linux. There are provisions to store an alternate path, so we could store the by-id path, but currently we don't. We could also try to automatically reconstruct the pool; right now that only happens manually, when you use the
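The sentence above is cut off in this copy. In practice, the manual workaround on ZFS on Linux is to re-import the pool from a directory of stable device names so the vdev labels record paths that survive reordering. A sketch, again assuming a pool named `tank`:

```sh
# Re-import using persistent /dev/disk/by-id names so the recorded
# device paths no longer depend on enumeration order at boot.
zpool export tank
zpool import -d /dev/disk/by-id tank
```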
I'm closing this issue out because the original problem was resolved. However, we should open a new issue to get the alternate paths implemented in the vdev label. This would improve our resilience when devices are reordered.
I added what are now cryptb1 and cryptc1, which caused the pool to break on import; the drive ordering got completely scrambled.
I don't know of a way to remove the cache devices.
The spare and cache devices are showing up both as spare/cache AND inside the raidz3 somehow...
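For reference on the "no way to remove the cache devices" point: cache and spare devices can normally be detached with `zpool remove`, and per the comment above the device argument can be the GUID reported for it. A sketch with placeholder values (`tank` and the GUID are not from this thread):

```sh
# List the pool with vdev GUIDs, then remove a cache or spare device by GUID.
zpool status -g tank
zpool remove tank 1234567890123456789   # placeholder GUID
```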