
Introduce vdev properties #11711

Merged (2 commits) on Nov 30, 2021

Conversation

allanjude (Contributor)

Signed-off-by: Allan Jude <allan@klarasystems.com>

Motivation and Context

As discussed in the first March 2021 OpenZFS Leadership Meeting, this is a work-in-progress copy of my vdev properties feature.

It is also discussed further here on the developers mailing list and was originally presented at the OpenZFS Developers Summit 2018 hackathon where a working group expanded on the idea and won 2nd prize, and again in 2019 as it matured.

Description

Allows the user to set and get properties and user properties on individual vdevs.
The properties are stored in the per-vdev ZAP that was added as part of the device removal feature.

Set a user property:
zpool set systems.klara:myproperty="value" poolname vdevname

Read a property (number of bytes written to this vdev since import):
zpool get write_bytes poolname vdevname

How Has This Been Tested?

This is still a WIP, and it has only been tested manually. A set of tests still needs to be added.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

@ahrens ahrens requested a review from mmaybee March 9, 2021 04:01
@behlendorf behlendorf added the Status: Code Review Needed Ready for review and testing label Mar 12, 2021
@behlendorf behlendorf added Status: Work in Progress Not yet ready for general review and removed Status: Code Review Needed Ready for review and testing labels Mar 26, 2021
@allanjude (Contributor, Author)

Maybe I need some help reading the test output. For the CentOS 8 and Stream 8 runs, I don't see which test is failing.

@behlendorf (Contributor)

Yeah, I don't blame you; it's definitely not obvious. From what I can tell, none of the tests unexpectedly failed. However, the Linux builders did appear to hit an issue when trying to unload the kmods. That caused the ZTS to hit its timeout and get reported as a failure.

From the CentOS 8 stdout log

command timed out: 4200 seconds without output running ['runurl', 'https://raw.githubusercontent.com/openzfs/zfs-buildbot/master/scripts/bb-test-zfstests.sh'], attempting to kill
process killed by signal 15
program finished with exit code -1
elapsedTime=21506.004557

And in its console log

[16738.287722] ZFS: Unloaded module v2.0.0-rc1_476_g3a0fc66b1 (DEBUG mode)
[16957.936164] INFO: task rmmod:686842 blocked for more than 120 seconds.
[16957.937264]       Tainted: P           OE    --------- -  - 4.18.0-240.15.1.el8_3.x86_64 #1
[16957.938576] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[16957.939902] rmmod           D    0 686842 686824 0x00004080
[16957.940858] Call Trace:
[16957.941326]  __schedule+0x2a6/0x700
[16957.941917]  ? radix_tree_node_ctor+0x40/0x40
[16957.942682]  schedule+0x38/0xa0
[16957.943259]  schedule_timeout+0x246/0x2f0
[16957.943945]  ? kernfs_put+0xe5/0x190
[16957.944544]  wait_for_completion+0x11f/0x190
[16957.945351]  ? wake_up_q+0x80/0x80
[16957.945960]  mod_kobject_put+0x4c/0x70
[16957.946598]  free_module+0x17/0x1a0
[16957.947229]  __x64_sys_delete_module+0x1bb/0x280
[16957.948020]  do_syscall_64+0x5b/0x1a0
[16957.948624]  entry_SYSCALL_64_after_hwframe+0x65/0xca

It looks to me like it was caused by a missing zfs_kobj_fini() in zfs_sysfs_fini().
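To illustrate the failure mode, here is a toy Python model (not the real kernel API; the class and function names are hypothetical) of why one missing fini call can hang rmmod: module unload waits until every kobject the module registered has been released, so a single leaked reference means the wait never completes.

```python
# Toy model of kobject teardown during module unload. A kobject that
# never has fini() called on it keeps a reference held, and the
# unload path (which waits for all references to drop) blocks forever.

class KObj:
    def __init__(self, name):
        self.name = name
        self.released = False

    def fini(self):
        # Counterpart of a hypothetical init; drops the held reference.
        self.released = True

def module_can_unload(kobjs):
    # Models rmmod's wait_for_completion(): only succeeds once every
    # registered kobject has been released.
    return all(k.released for k in kobjs)
```

With two registered kobjects, forgetting to fini one leaves `module_can_unload` false, which is the shape of the hang in the console log above.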

@mmaybee (Contributor) left a comment


Made a pass over the changes and made a few suggestions and comments.

Review comments on: cmd/zhack/zhack.c, cmd/zpool/zpool_main.c, include/sys/fs/zfs.h, lib/libzfs/libzfs_pool.c, module/zcommon/zpool_prop.c, module/zfs/vdev.c
@allanjude (Contributor, Author)

Do we need to change some of the ASSERT()s to >=?

VERIFY3(spa->spa_dspace > spa->spa_nonallocating_dspace) failed (1006632960 > 1006632960)
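As a small sketch of what the question is about (a Python model, not the real VERIFY3 macro): a strict greater-than check trips exactly when the two sides are equal, as in the failure above, while a relaxed >= check would pass.

```python
# Model of the difference between VERIFY3(a > b) and a relaxed a >= b
# check, using the equal values from the reported failure.

def verify_gt(a, b):
    if not (a > b):
        raise AssertionError(f"VERIFY3({a} > {b}) failed")

def verify_ge(a, b):
    if not (a >= b):
        raise AssertionError(f"VERIFY3({a} >= {b}) failed")
```

`verify_ge(1006632960, 1006632960)` passes, while `verify_gt` with the same values raises, which matches the failure reported above.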

@mmaybee (Contributor)

mmaybee commented Jun 11, 2021

Huh, maybe... what was the context of that assertion failure? I thought we only made that check when we were about to set a drive as "non-allocating", and I would have always expected to see some "slop" in the spa_dspace...

@allanjude allanjude changed the title WIP: vdev properties Introduce vdev properties Aug 17, 2021
@mmaybee mmaybee added Status: Code Review Needed Ready for review and testing and removed Status: Work in Progress Not yet ready for general review labels Aug 17, 2021
@mmaybee mmaybee requested review from a user, behlendorf and ahrens August 17, 2021 16:45
Review comments on: module/zfs/vdev_removal.c, module/zfs/vdev.c, cmd/zpool/zpool_main.c, module/zcommon/zpool_prop.c
@allanjude allanjude force-pushed the upstream_vdev_properties branch 2 times, most recently from 429fa74 to 6b52382 Compare October 7, 2021 01:06
@ahrens (Member) left a comment


Thanks for your work on this. I'll take a look at the rest of the kernel code too, hopefully later this week.

Review comments on: cmd/zhack/zhack.c, cmd/zpool/zpool_main.c, module/zcommon/zpool_prop.c, module/zfs/spa_misc.c, module/zfs/vdev.c, module/zfs/vdev_removal.c
@ahrens (Member)

ahrens commented Oct 24, 2021

You'll also want to fix the checkstyle errors.

Review comment on: module/zfs/vdev_removal.c
@jebsolutions

Path value can be set (but not validated) and can make the pool not mount on next boot. The automatic "Where is this vdev" logic seems to be broken.

[root@fbsd13beta ~]# zpool get path zroot da0p4
NAME PROPERTY VALUE SOURCE
da0p4 path /dev/da0p4 -
[root@fbsd13beta ~]# zpool set path="Asdf" zroot da0p4

reboot

FreeBSD console error message on next boot:
Mounting from zfs:root/ROOT/default failed with error 22; retrying for 3 more seconds
Mounting from zfs:root/ROOT/default failed with error 22
Loader variables:
vfs.root.mountfrom=zfs:zroot/default

If I boot from the FreeBSD 13.0 ISO and manually import the pool twice, the first import fails and the second works. I ignored the warnings about read-only, etc. This seemed to correct the bad "path" parameter, and I can reboot and everything is good again.

The "path seems to have moved, let's fix it" logic is no longer working with ZFS head plus this PR. I am not sure whether this exposes a pre-existing bug in FreeBSD 13.0 OpenZFS or one introduced by this PR.

@allanjude (Contributor, Author)

> Path value can be set (but not validated) and can make the pool not mount on next boot. The automatic "Where is this vdev" logic seems to be broken. […]

This is fixed: setting the vdev path now returns EINVAL if the path does not start with /dev/, matching the logic in vdev_geom.c that was causing the boot failure.
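The shape of that check can be sketched as follows (a Python model; the actual fix is in the C code, and the function name here is hypothetical): any candidate path not beginning with "/dev/" is rejected with EINVAL.

```python
import errno

def validate_vdev_path(path):
    # Illustrative model of the validation described above: a vdev
    # path must start with "/dev/"; anything else is rejected with
    # EINVAL instead of being stored and breaking the next boot.
    if not path.startswith("/dev/"):
        return errno.EINVAL
    return 0
```

Under this model, the "Asdf" value from the bug report is rejected, while "/dev/da0p4" is accepted.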

@allanjude allanjude force-pushed the upstream_vdev_properties branch 2 times, most recently from e99d3f3 to 8a5bd9b Compare November 27, 2021 18:14
@debdrup (Contributor) left a comment


I can't really comment on anything else, other than to say I look forward to it landing. :)

man/man7/vdevprops.7 Outdated Show resolved Hide resolved
man/man7/vdevprops.7 Show resolved Hide resolved
@ahrens (Member)

ahrens commented Nov 29, 2021

In the example output here: #11711 (comment) the order of vdevs for all-vdevs has children before parents, which is the opposite of zpool list -v. Do we care about the ordering in this case?

@mmaybee mmaybee added Status: Accepted Ready to integrate (reviewed, tested) and removed Status: Code Review Needed Ready for review and testing labels Nov 30, 2021
Signed-off-by: Allan Jude <allan@klarasystems.com>
Add properties, similar to pool properties, to each vdev.
This makes use of the existing per-vdev ZAP that was added as
part of device evacuation/removal.

A large number of read-only properties are exposed as well, covering many of the members of struct vdev_t that provide useful statistics.

Add support for properties.vdev in SYSFS

Add support for read-only "removing" vdev property
Add the "allocation" property that can be set to "off"
Refactor space management for non-allocating devices
Add ability to remove user vdev property by setting value to ""

Signed-off-by: Allan Jude <allan@klarasystems.com>
Signed-off-by: Mark Maybee <mark.maybee@delphix.com>
@allanjude (Contributor, Author)

> In the example output here: #11711 (comment) the order of vdevs for all-vdevs has children before parents, which is the opposite of zpool list -v. Do we care about the ordering in this case?

for_each_vdev_cb() walks the vdev list and calls the callback for the children before the parent. I do find that annoying.
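The traversal order can be modeled in a few lines of Python (a sketch of the behavior described, not the actual C implementation): the walk recurses into children before invoking the callback on the parent, i.e. a post-order walk.

```python
# Model of a child-first (post-order) vdev walk: the callback fires
# for each child subtree before it fires for the parent, which is why
# leaf vdevs appear ahead of their parents in the all-vdevs output.

def for_each_vdev(vdev, cb):
    for child in vdev.get("children", []):
        for_each_vdev(child, cb)
    cb(vdev["name"])
```

For a pool shaped like root -> mirror-0 -> (da0, da1), this visits da0, da1, mirror-0, then root, which is the reverse of the parent-first order `zpool list -v` prints.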

@mmaybee mmaybee merged commit 2a673e7 into openzfs:master Nov 30, 2021
sdimitro pushed a commit to sdimitro/zfs that referenced this pull request May 23, 2022
Add properties, similar to pool properties, to each vdev.
This makes use of the existing per-vdev ZAP that was added as
part of device evacuation/removal.

A large number of read-only properties are exposed, covering many of the members of struct vdev_t that provide useful statistics.

Adds support for read-only "removing" vdev property.
Adds the "allocating" property that defaults to "on" and
can be set to "off" to prevent future allocations from that
top-level vdev.

Supports user-defined vdev properties.
Includes support for properties.vdev in SYSFS.

Co-authored-by: Allan Jude <allan@klarasystems.com>
Co-authored-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Signed-off-by: Allan Jude <allan@klarasystems.com>
Closes openzfs#11711
nicman23 pushed a commit to nicman23/zfs that referenced this pull request Aug 22, 2022
nicman23 pushed a commit to nicman23/zfs that referenced this pull request Aug 22, 2022
Labels
Status: Accepted Ready to integrate (reviewed, tested)

7 participants