ZOL 0.8.1 does not appear to honor recordsize=1M on zfs recv #9347
Comments
No. The recordsize of existing data is preserved in the send stream itself. That's why the option has no effect on data that already exists on the sending side.
Can you cite this? You likely misinterpreted it, and/or this is a documentation issue.
From ' man zfs ' (for 0.8.1):

zfs receive [-Fhnsuv] [-d|-e] [-o origin=snapshot] [-o property=value] [-x property] filesystem

If -o property=value or -x property is specified, it applies to the effective value of the property.

-o property=value
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as through zfs inherit property was run on any descendant datasets that have this property set on the sending system.
The -o option may be specified multiple times, for different properties. An error results if the same property is specified in multiple -o or -x options.
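So, as I read it, something like the following (pool and dataset names here are just illustrations) should apply the property at receive time; whether it rewrites existing records is exactly the question in this issue:

# zfs send -R tank/data@snap | zfs recv -o recordsize=1M backup/data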
"The receiving system must have the large_blocks pool feature enabled as well." I can't see that as a listed feature flag on your pool |
@kneutron Do I understand correctly that you are expecting zfs recv -o recordsize=1M to rewrite all of the incoming data into 1M records?
OK to close
@ahrens Is this still true of ZFS 2.x+? I was looking at this OpenZFS documentation, which seemed to suggest the recordsize could be changed on receive.
In my case, I'm replicating a large dataset with recordsize 128K, whose files are probably better suited to 1M, to a new dataset whose recordsize is 1M, using zfs send / zfs recv. Thanks!
Good to know. Is there an example command of how to use zfs send -L for this?
I tried this on a VM, and I think the results show that for existing data, the block size carried in the send stream wins. The key takeaway I observed with zfs send -L into a dataset received with -o recordsize=1M is that only data written after the receive picks up the 1M recordsize. In actual practice on a 2.2.2 ZFS system, I also inspected one of my files created originally on a 128K recordsize dataset, and it still shows 128K blocks after replication into a 1M dataset. This corroborates @ahrens and @amotin 's assertions that recordsize only applies to newly written data. Here's what I tried on my toy dataset (the commands below are a sketch; pool and file names are illustrative). Setting up the pools:
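# truncate -s 1G /tmp/src.img /tmp/dst.img
# zpool create srcpool /tmp/src.img
# zpool create dstpool /tmp/dst.img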
Then setting up the source data with 128K and 1M files:
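(Two source datasets, one per recordsize; file names are illustrative.)

# zfs create -o recordsize=128K srcpool/rs128k
# zfs create -o recordsize=1M srcpool/rs1m
# dd if=/dev/urandom of=/srcpool/rs128k/file-8m bs=1M count=8
# dd if=/dev/urandom of=/srcpool/rs1m/file-8m bs=1M count=8
# zfs snapshot -r srcpool@send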
Then inspecting the source objects in the 128K dataset:
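(The object number is the file's inode; the ' dblk ' field in the zdb output is the block size, 128K here.)

# zdb -dddd srcpool/rs128k $(stat -c %i /srcpool/rs128k/file-8m)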
Then migrating the source data to the 1M dataset with zfs send -L | zfs recv -o recordsize=1M:
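# zfs send -L srcpool/rs128k@send | zfs recv -o recordsize=1M dstpool/recv-rs128k
# zdb -dddd dstpool/recv-rs128k $(stat -c %i /dstpool/recv-rs128k/file-8m)

In my runs the received copy still showed dblk=128K, despite recordsize=1M being set on the receiving dataset.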
Then I generated some fresh objects directly in the 1M dataset for comparison (again, names illustrative):
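# dd if=/dev/urandom of=/dstpool/recv-rs128k/fresh-8m bs=1M count=8
# zdb -dddd dstpool/recv-rs128k $(stat -c %i /dstpool/recv-rs128k/fresh-8m)

The fresh file showed dblk=1M, i.e. the new recordsize only applied to newly written data.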
System information
Type | Version/Name
Linux | Ubuntu 19.04
Distribution Name | Ubuntu Disco
Distribution Version | 19.04
Linux Kernel | 5.0.0-27-generic #28-Ubuntu SMP
Architecture | x86_64
ZFS Version | zfs-0.8.1-1, zfs-kmod-0.8.1-1 ( from ' zpool version ' )
SPL Version | 0.8.1-1 ( from ' modinfo spl | grep -iw version ' )
Describe the problem you're observing
Backing up a 6x2TB-disk mirror to a single USB3 drive (before converting it to a RAIDZ2), I observed more write operations than expected for 1M records. Writes/sec should be roughly equivalent to MB/sec written, but they are sometimes higher (1,000+/sec) when observed using ' zpool iostat '.
Note - I had to manually copy and delete a few datasets before doing ' zfs send ' to the 6TB drive, due to " cannot receive: invalid stream (bad magic number) " errors; these were detected with an additional '-v' on send/recv and worked around. The source pool was created on Ubuntu 14.04 LTS and has not been upgraded yet:
# zpool upgrade
This system supports ZFS pool feature flags.
All pools are formatted using feature flags.
Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.
POOL FEATURE
zseatera2
multi_vdev_crash_dump
large_dnode
sha512
skein
edonr
userobj_accounting
encryption
project_quota
device_removal
obsolete_counts
zpool_checkpoint
spacemap_v2
allocation_classes
resilver_defer
bookmark_v2
Note - Destination pool was created under ZFS 0.8.1 and has all features enabled by default.
Describe how to reproduce the problem
(date; time zfs send -LecvvR zseatera2@Sat | pv -t -r -b -W -i 2 -B 200M | zfs recv -svv -o recordsize=1024k zwd6t/from-zseatera2; date) 2>~/zfs-send-errs.txt
^ should translate ALL incoming datasets to 1M recordsize regardless, according to ' man zfs '
The pool is still copying over at the time of posting and the properties appear to be set, but the I/O observed via ' zpool iostat -k 5 ' would seem to indicate smaller record sizes being written with send/recv, whereas when the data was copied over manually, the desired recordsize and the associated I/O matched up.
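One way to check what record size was actually written for a given received file (the file path here is just an example) is zdb, using the file's inode as the object number; the ' dblk ' field in the output is the block size used for that object:

# zdb -dddd zwd6t/from-zseatera2 $(stat -c %i /zwd6t/from-zseatera2/somefile)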
Include any warning/errors/backtraces from the system logs