ZFS: create volumes with more than 8k blocksize #128
Can you please add something like
Yes. However, even with this property you will have to manually calculate the desired volblocksize and set the mentioned property accordingly.
Thanks. This will help a lot.
I have no problem with the calculation. Can you put it in the documentation for others?
I did a test with a 32G VHD and got:
Why do you add (33,561,640 − 32×2^20) = 7,208 KiB? If you add bytes, does this mean I can't use these volumes as native Proxmox volumes for disaster recovery?
For some reason ZFS does not round the volsize by itself, so LINSTOR has to do it. In doing so, I had to add a check for this new property. This will be fixed in the next release. Until then, please use
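For illustration, the rounding LINSTOR has to perform can be sketched as follows (a minimal sketch, not LINSTOR's actual code; the function name is hypothetical):

```python
def round_up_volsize(volsize_bytes: int, volblocksize: int) -> int:
    """Round a requested volsize up to the next multiple of volblocksize,
    since ZFS requires volsize to be a multiple of the block size."""
    # Ceiling division, then scale back up to bytes.
    return -(-volsize_bytes // volblocksize) * volblocksize

# A 32 GiB request with 32 KiB blocks needs no rounding:
print(round_up_volsize(32 * 2**30, 32 * 2**10))  # 34359738368
# A single extra byte pushes the volsize up by one full block:
print(round_up_volsize(32 * 2**30 + 1, 32 * 2**10))  # 34359771136
```

This rounding only ever grows the volume, which is why the created zvol can end up slightly larger than the size originally requested.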
I am not sure what you mean by this?
I meant: if you increase the size, you do this for LINSTOR/DRBD metadata. Where do you put the metadata? If something goes wrong with LINSTOR on upgrade, a defective database, or anything else, I can still use the zvol natively with zfs rename and by patching the vm*.conf.
If DRBD is using internal metadata, DRBD writes it at the end of the device, as stated in the docs.
Thanks for the link. I normally use two Intel Optane devices as ZIL with underlying LVM. So I could use them as a metadata store, too?
That should work. In LINSTOR, currently only DRBD with internal metadata and the LUKS layer need additional space for metadata (although LUKS requires a constant 16 MB, which should be fine with ordinary blocksizes :) )
Yep, sounds like a good idea.
Now I did this with a workaround from #176 and LINBIT/linstor-client/issues/42:
Here is the result:
Could somebody from Proxmox (@Fabian-Gruenbichler?) add this to the Proxmox docs for other people?
We have a (not-yet-updated and thus not-yet-merged) patch for our docs for the general 'raidz + zvol => high space usage overhead with default settings' issue, which we will include in our reference documentation at some point. I don't think we'll add LINSTOR-specific hints to our documentation, as that integration and plugin are not developed by us.
The ZFS block size can be specified with the following property setter:
linstor-server/satellite/src/main/java/com/linbit/linstor/storage/utils/ZfsCommands.java (line 69 in ddc51b7)
If you have a ZFS RAIDZ pool with ashift=12 and more than 3 (+ parity) HDDs, the blocksize should be more than 8k.
Here is a thread which describes the problem:
https://forum.proxmox.com/threads/zfs-replica-2x-larger-than-original.49801/
So please add -o volblocksize= when creating the volume.
If you have x data + parity HDDs, then
blocksize = 2^floor(log2(x)) * 2^ashift
For example, with 16 disks in RAIDZ3 and ashift=12: x = 16 − 3 = 13, so
floor(log2(13)) = 3 and
blocksize = 2^3 * 2^12 = 32k
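The formula above can be written as a small helper (a sketch; the function name is illustrative, not part of LINSTOR or ZFS):

```python
import math

def raidz_volblocksize(total_disks: int, parity: int, ashift: int) -> int:
    """blocksize = 2^floor(log2(x)) * 2^ashift, where x is the number
    of data disks (total disks minus parity disks)."""
    x = total_disks - parity
    return (1 << math.floor(math.log2(x))) << ashift

# 16 disks, RAIDZ3 (3 parity), ashift=12 -> x = 13, floor(log2(13)) = 3
print(raidz_volblocksize(16, 3, 12))  # 32768 bytes = 32k
```

The rounding down to a power of two keeps the block size evenly divisible across the data disks' 2^ashift sectors, which is what avoids the space overhead described in the linked forum thread.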