Fix the ZFS checksum error histograms with larger record sizes
My analysis in PR openzfs#14716 was incorrect.  Each histogram bucket
counts the incorrect bits at one position in a 64-bit word, summed over
the entire record.  A 2k record already contains 256 64-bit words, so
the 8-bit buckets can overflow for record sizes above 2k.  To forestall
that, saturate each bucket at 255.  That should still get the point
across: either all bits are equally wrong, or just a couple are.

Sponsored-by:	Axcient
Signed-off-by:	Alan Somers <asomers@gmail.com>
asomers committed Jul 10, 2023
1 parent ca960ce commit 1f661a3
module/zfs/zfs_fm.c (1 addition, 1 deletion)
@@ -790,7 +790,7 @@ update_histogram(uint64_t value_arg, uint8_t *hist, uint32_t *count)
 	/* We store the bits in big-endian (largest-first) order */
 	for (i = 0; i < 64; i++) {
 		if (value & (1ull << i)) {
-			hist[63 - i]++;
+			hist[63 - i] = MAX(hist[63 - i], (uint8_t)(hist[63 - i] + 1));
 			++bits;
 		}
 	}
