
Add physical device size to SIZE column in 'zpool list -v' #13106

Merged
merged 1 commit into openzfs:master on Mar 9, 2022

Conversation

akashb-22
Contributor

Add physical device size/capacity only for physical devices in
'zpool list -v' instead of displaying "-" in the SIZE column.
This would make it easier to see the individual device capacity and
to determine which spares are large enough to replace which devices.

Signed-off-by: Akash B <akash-b@hpe.com>
Closes #12561

Motivation and Context

Add the physical device size to 'zpool list -v' so the individual device sizes are visible.

Description

Generally, the idea is to report vdev_psize to userspace. We considered reusing the existing vs_space field from the vdev_stat_t structure to carry the vdev_psize values for leaf/child vdevs, since the field is otherwise unused by them. However, vs_space is used and referenced in multiple places for different purposes (e.g., determining top-level vdevs in print_list_stats and print_iostat_default), so overloading it could have unintended consequences. Instead, we added a new member, vs_pspace, to the vdev_stat_t structure for this purpose. With this, the existing top-level logic and other code paths are undisturbed: vs_pspace is reported for leaf vdevs (including top-level physical devices) and vs_space for top-level vdevs.
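
To make that concrete, here is a minimal sketch of the shape of the change. The names vs_space, vs_pspace, and vdev_psize are from the patch, but the surrounding layout and the exact fill site (I'd expect the vdev stats path in module/zfs/vdev.c, e.g. vdev_get_stats_ex()) are my approximation, not the verbatim diff:

/* Illustrative only: the real vdev_stat_t (include/sys/fs/zfs.h)
 * has many more members and a fixed ABI ordering. */
typedef struct vdev_stat {
	/* ... existing members ... */
	uint64_t	vs_space;	/* allocatable space (top-level vdevs) */
	uint64_t	vs_pspace;	/* physical device size (leaf vdevs)   */
	/* ... */
} vdev_stat_t;

/* Kernel side, when filling in the stats for userspace: */
vs->vs_pspace = vd->vdev_psize;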

How Has This Been Tested?

  1. Manually tested by creating various zpool configurations with different drive sizes.
  2. Ran ZTS/zloop tests with the fix.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)


@akashb-22
Contributor Author

akashb-22 commented Feb 15, 2022

---- sample output ----

truncate -s 1G file{1..3}
truncate -s 2G file{4..5}
truncate -s 4G file{6..8}
truncate -s 8G file{9..12}
truncate -s 10G file{13..14}
truncate -s 5G file{15..19}
[root@rocky6x-kvm draid]# zpool list -v
NAME                                SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0                          44.2G   214K  44.2G        -         -     0%     0%  1.00x    ONLINE  -
  /root/test/files/draid/file1        1G      0   960M        -         -     0%  0.00%      -    ONLINE
  /root/test/files/draid/file2        1G      0   960M        -         -     0%  0.00%      -    ONLINE
  /root/test/files/draid/file3        1G      0   960M        -         -     0%  0.00%      -    ONLINE
  mirror-3                         1.88G      0  1.88G        -         -     0%  0.00%      -    ONLINE
    /root/test/files/draid/file4      2G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file5      2G      -      -        -         -      -      -      -    ONLINE
  raidz1-4                         11.5G      0  11.5G        -         -     0%  0.00%      -    ONLINE
    /root/test/files/draid/file6      4G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file7      4G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file8      4G      -      -        -         -      -      -      -    ONLINE
  draid1:1d:4c:1s-5                23.5G      0  23.5G        -         -     0%  0.00%      -    ONLINE
    /root/test/files/draid/file9      8G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file10     8G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file11     8G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file12     8G      -      -        -         -      -      -      -    ONLINE
special                                -      -      -        -         -      -      -      -  -
  mirror-6                         4.50G   214K  4.50G        -         -     0%  0.00%      -    ONLINE
    /root/test/files/draid/file15     5G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file16     5G      -      -        -         -      -      -      -    ONLINE
logs                                   -      -      -        -         -      -      -      -  -
  mirror-7                         4.50G      0  4.50G        -         -     0%  0.00%      -    ONLINE
    /root/test/files/draid/file17     5G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file18     5G      -      -        -         -      -      -      -    ONLINE
cache                                  -      -      -        -         -      -      -      -  -
  /root/test/files/draid/file19       5G      0  5.00G        -         -     0%  0.00%      -    ONLINE
spare                                  -      -      -        -         -      -      -      -  -
  /root/test/files/draid/file13      10G      -      -        -         -      -      -      -     AVAIL
  /root/test/files/draid/file14      10G      -      -        -         -      -      -      -     AVAIL
  draid1-5-0                          8G      -      -        -         -      -      -      -     AVAIL
[root@rocky6x-kvm draid]# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0  44.2G   220K  44.2G        -         -     0%     0%  1.00x    ONLINE  -
[root@rocky6x-kvm draid]# zfs list
NAME             USED  AVAIL     REFER  MOUNTPOINT
pool-oss0        183K  23.3G       24K  /pool-oss0
pool-oss0/ost0    24K  23.3G       24K  /mnt/ost0

@behlendorf added the Status: Code Review Needed (Ready for review and testing) label Feb 15, 2022
@akashb-22
Contributor Author

@behlendorf @tonynguien
Considering the pool_checkpoint/checkpoint_lun_expsz test case and the expandsize feature: with this patch we now see the actual device size as seen by ZFS, whether or not the device has been expanded. The EXPANDSZ column still reports the potential expansion size of the vdevs and the pool. Below is the size reporting seen during a zpool expand with this patch.

truncate -s 1G /root/test/files/draid/file{1..3}
zpool create pool-oss0 -f /root/test/files/draid/file1 /root/test/files/draid/file2 /root/test/files/draid/file3
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool list -v
NAME                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0                       2.81G   136K  2.81G        -         -     0%     0%  1.00x    ONLINE  -
  /root/test/files/draid/file1     1G  46.5K   960M        -         -     0%  0.00%      -    ONLINE
  /root/test/files/draid/file2     1G  47.5K   960M        -         -     0%  0.00%      -    ONLINE
  /root/test/files/draid/file3     1G  42.5K   960M        -         -     0%  0.00%      -    ONLINE
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool export pool-oss0
[root@rocky6x-kvm draid]# truncate -s 4G /root/test/files/draid/file{1..3}
[root@rocky6x-kvm draid]# zpool import pool-oss0 -d.
[root@rocky6x-kvm draid]# zpool list -v
NAME                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0                       2.81G   180K  2.81G        -        9G     0%     0%  1.00x    ONLINE  -
  /root/test/files/draid/file1     4G    61K   960M        -        3G     0%  0.00%      -    ONLINE
  /root/test/files/draid/file2     4G    62K   960M        -        3G     0%  0.00%      -    ONLINE
  /root/test/files/draid/file3     4G    57K   960M        -        3G     0%  0.00%      -    ONLINE
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool online -e pool-oss0 /root/test/files/draid/file1
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool list -v
NAME                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0                       5.81G   194K  5.81G        -        6G     0%     0%  1.00x    ONLINE  -
  /root/test/files/draid/file1     4G  65.5K  3.94G        -         -     0%  0.00%      -    ONLINE
  /root/test/files/draid/file2     4G  66.5K   960M        -        3G     0%  0.00%      -    ONLINE
  /root/test/files/draid/file3     4G  61.5K   960M        -        3G     0%  0.00%      -    ONLINE
[root@rocky6x-kvm draid]#

RAIDZ

[root@rocky6x-kvm draid]# truncate -s 1G /root/test/files/draid/file{1..3}
[root@rocky6x-kvm draid]# zpool create pool-oss0 raidz1 /root/test/files/draid/file{1..3}
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool list -v
NAME                               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0                         2.75G   232K  2.75G        -         -     0%     0%  1.00x    ONLINE  -
  raidz1-0                        2.75G   232K  2.75G        -         -     0%  0.00%      -    ONLINE
    /root/test/files/draid/file1     1G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file2     1G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file3     1G      -      -        -         -      -      -      -    ONLINE
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool export pool-oss0
[root@rocky6x-kvm draid]# truncate -s 4G /root/test/files/draid/file{1..3}
[root@rocky6x-kvm draid]# zpool import pool-oss0 -d.
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool list -v
NAME                               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0                         2.75G   256K  2.75G        -        9G     0%     0%  1.00x    ONLINE  -
  raidz1-0                        2.75G   256K  2.75G        -        9G     0%  0.00%      -    ONLINE
    /root/test/files/draid/file1     4G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file2     4G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file3     4G      -      -        -         -      -      -      -    ONLINE
[root@rocky6x-kvm draid]#
[root@rocky6x-kvm draid]# zpool online -e pool-oss0 /root/test/files/draid/file1
[root@rocky6x-kvm draid]# zpool list -v
NAME                               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-oss0                         11.8G   277K  11.7G        -         -     0%     0%  1.00x    ONLINE  -
  raidz1-0                        11.8G   277K  11.7G        -         -     0%  0.00%      -    ONLINE
    /root/test/files/draid/file1     4G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file2     4G      -      -        -         -      -      -      -    ONLINE
    /root/test/files/draid/file3     4G      -      -        -         -      -      -      -    ONLINE
[root@rocky6x-kvm draid]#

I'm not sure whether the way [SIZE, EXPANDSZ] is reported with this patch is acceptable. But in all cases we see the actual device size reported for the physical devices, which is what we want.
If this looks fine, I will need to modify the pool_checkpoint/checkpoint_lun_expsz test case accordingly.

@behlendorf
Contributor

Generally speaking I like this change; not providing the physical size as part of the zpool list output always struck me as a bit of an oversight. We are overloading what the SIZE column indicates: usable capacity for top-level vdevs, physical size for leaf vdevs. However, that doesn't seem unreasonable given the zpool list man page only says it reports "health status and space usage". Alternatively we could add a PSIZE column, although I think I prefer using the SIZE column.

@akashb-22
Contributor Author

akashb-22 commented Feb 24, 2022

For most people using mirror, raidz, or draid vdevs in their pools, this change has no impact other than displaying the device size instead of "-". Since the SIZE field is unused by child/leaf vdevs anyway, it is more meaningful to report the device size there than "-". I'm not sure a separate PSIZE field would be better: we would see a lot of "-" entries in both the SIZE and PSIZE columns for vdevs where they don't apply, and expanding the zpool list header with a PSIZE column could affect people scraping this output.
My suggestion is to report psize in the SIZE column, which keeps the output simple and easy to understand.

This change slightly affects how SIZE is reported in three cases:

  1. A pool with striped vdevs (top-level physical devices) now reports psize instead of vs_space. There is not much difference between them.
  2. If a device/vdev has been physically expanded (but not yet expanded with zpool online -e), the actual physical device size is reported. The EXPANDSZ column still reports the possible device expansion size.
  3. Special devices (special, slog, cache, spare) now also report the device size for the applicable vdevs (physical devices).

I have attached these cases above.

@nabijaczleweli
Contributor

nabijaczleweli commented Feb 24, 2022

Uuh, is it the usable capacity? This is what I get on one of my pools:

NAME                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
filling                                25.5T  6.52T  18.9T        -       64M     1%    25%  1.00x    ONLINE  -
  mirror-0                             3.64T   444G  3.21T        -       64M     9%  11.9%      -    ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2L4RR      -      -      -        -       64M      -      -      -    ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2MHYR      -      -      -        -       64M      -      -      -    ONLINE
  raidz1-1                             21.8T  6.09T  15.7T        -         -     0%  27.9%      -    ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKT237K      -      -      -        -         -      -      -      -    ONLINE
    ata-HGST_HUS728T8TALE6L4_VDGY075D      -      -      -        -         -      -      -      -    ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKVRRJK      -      -      -        -         -      -      -      -    ONLINE

ata-HGST_HUS728T8TALE6L4_* are 8T, so the usable capacity of three of them in raidz1 is 16T? (Likewise, ALLOC and FREE are also inflated by a factor of 3/2.) This is in contrast with the mirror vdev, which is 2x4T and, well, accurate. Or am I misunderstanding what "usable capacity" means here?

@behlendorf
Contributor

"Usable" capacity was poor phrasing. It's the amount of space which can be "allocated" from the perspective of the pool. This space doesn't include the vdev labels, and it will be further rounded down to the last full metaslab. For dRAID vdev's some additional space per disk is also reserved. It does include the parity space, which is why your pool reports >16TB.

> zpool with striped vdevs (toplevel physical device) will now be reporting psize instead of vs_space. However, there is not much difference between them.

While minor, it seems like we should be able to get rid of this discrepancy. What if we did something like only report vs_pspace when vs_space == 0? Are there other cases this wouldn't cover?

> If any device/vdev is physically expanded (but the device is not yet expanded with zpool online -e) the actual physical device size will be reported. However, we do have the EXPANDSZ column to report the possible device expansion size.

My feeling is this is actually desirable since it makes it easy to spot different sized drives in a top-level vdev.

> Any special (special, slog, cache, spare) devices would also be now reporting the device size for the applicable vdevs (physical devices).

This seems reasonable to me.

@akashb-22
Contributor Author

akashb-22 commented Feb 24, 2022

boolean_t toplevel = (vs->vs_space != 0);
Initially, my thought was to report vs_space when (toplevel == 1), as you suggested above.
Later, I intentionally modified it to (toplevel && (vs->vs_pspace == 0)).
This is because, for a zpool with striped vdevs (each also a top-level vdev), vs_space would be reported instead of psize. vs_space in this case is slightly less than psize, which is inconsistent with other devices of the same size in other top-level vdevs.
We also want to report the exact physical device size for all physical devices in the zpool, even when the device is a top-level vdev.
For example (below is not the output we want to see):

truncate -s 1G file{1..5}
[root@localhost files]# zpool list -v
NAME                              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                         4.62G   175K  4.62G        -         -     0%     0%  1.00x    ONLINE  -
  /root/test/test/files/file1     960M  47.5K   960M        -         -     0%  0.00%      -    ONLINE
  /root/test/test/files/file2     960M  48.5K   960M        -         -     0%  0.00%      -    ONLINE
  raidz1-2                       2.75G    79K  2.75G        -         -     0%  0.00%      -    ONLINE
    /root/test/test/files/file3     1G      -      -        -         -      -      -      -    ONLINE
    /root/test/test/files/file4     1G      -      -        -         -      -      -      -    ONLINE
    /root/test/test/files/file5     1G      -      -        -         -      -      -      -    ONLINE

However, it's also possible to report psize when vs_pspace != 0 and report vs_space for the rest. That would mean we always report psize for a physical device. I'm not sure if that will be fine?
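
For reference, a sketch of what that final selection would look like (hypothetical snippet, not the verbatim patch; the real printing logic lives in cmd/zpool/zpool_main.c):

uint64_t size;
boolean_t toplevel = (vs->vs_space != 0);

if (vs->vs_pspace != 0)
	size = vs->vs_pspace;	/* physical device: report its psize */
else if (toplevel)
	size = vs->vs_space;	/* interior top-level vdev: allocatable space */
else
	size = 0;		/* neither applies: printed as "-" */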

@akashb-22
Contributor Author

Change: report psize if it's a physical device, else report vs_space.


@ghoshdipak left a comment


Looks good to me

Add physical device size/capacity only for physical devices in
'zpool list -v' instead of displaying "-" in the SIZE column.
This would make it easier to see the individual device capacity and
to determine which spares are large enough to replace which devices.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Dipak Ghosh <dipak.ghosh@hpe.com>
Signed-off-by: Akash B <akash-b@hpe.com>
Closes openzfs#12561
@behlendorf added the Status: Accepted (Ready to integrate: reviewed, tested) label and removed the Status: Code Review Needed (Ready for review and testing) label Mar 4, 2022
@behlendorf merged commit 1282274 into openzfs:master Mar 9, 2022
This pull request was subsequently referenced by downstream commits, each carrying the same commit message and sign-offs as the merged commit above (Closes openzfs#12561, Closes openzfs#13106):

nicman23 pushed 2 commits to nicman23/zfs (Aug 22, 2022)
andrewc12 pushed 12 commits to andrewc12/openzfs (Aug 30 and Sep 23, 2022)
lundman pushed a commit to openzfsonwindows/openzfs (Sep 2, 2022)
tonyhutter pushed a commit to tonyhutter/zfs (Sep 15, 2022)
Labels
Status: Accepted (Ready to integrate: reviewed, tested)
Development

Successfully merging this pull request may close these issues.

zpool list -v should show SIZE for spares
5 participants