CKSUM and WRITE errors with 2.2.1 stable, when vdevs are atop LUKS #15533
A good number of commits in that range overlap the state of 2.2. It's a shame you probably can't bisect, since the data is already in peril. Is there any way you could do that?
If I had a solid reproducer or spare hardware to set an experiment up, I would most certainly run it over here.
Was anything logged by the kernel or zed when the write errors happened? Please clarify "backup machine": is that a machine/pool you're sending streams to? Or just another unrelated computer?
The system log should have the type of errors; can you isolate them and post them here?
Yes, please stand by.... Here is a sample of zpool events + journalctl logs from that time:
The media server experienced the failures first. After a day of running the same software on the backup machine (it's just a Borg backup server, no ZFS send or receive), I decided it had to be software instead of hardware. Both machines have ECC memory.
I can confirm 2.2.0 as released in this repository does not have the data corruption issue.
2.2.0 or 2.2.1? That's a relief if 2.2.0, as that's what went into 14-RELEASE, right?
That's what I build and what I am running right now.
Tonight I'll try to upgrade my backup server to 2.2.1. Wish me luck.
I will report on the results after several days of testing.
I just ran into this after upgrading to zfs 2.2.1 (immediately after reboot). I'm also on Fedora and running zfs on top of LUKS. I'm seeing write errors, but not any checksum errors, and I'm pretty sure it's not a hardware issue. None of the drives are reporting SMART errors and each vdev is showing a similar number of errors per drive despite being connected via two different paths (1 drive via LSI HBA, 1 drive via native Intel SATA). I'm going to try downgrading to zfs 2.2.0 to see if that helps. Unfortunately, I can't downgrade further than that because I've already enabled the new zpool features.
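For anyone else on Fedora who wants to try the same downgrade, a rough sketch of building the 2.2.0 RPMs from source (assuming the standard OpenZFS make targets; exact steps and build dependencies may differ on your system):

```sh
# Build OpenZFS 2.2.0 RPMs from source on Fedora (sketch; see the OpenZFS
# "Custom Packages" docs for the authoritative steps and build dependencies).
git clone --branch zfs-2.2.0 https://github.com/openzfs/zfs.git
cd zfs
./autogen.sh
./configure
make -j1 rpm-utils rpm-kmod      # builds the userland and kmod packages
sudo dnf install ./*.rpm         # then reload the module or reboot
```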
So far, after downgrading to 2.2.0 (had to build the RPMs from source for Fedora 39), the issue seems to have disappeared. Also, I just noticed I'm also using a SLOG and L2ARC like @Rudd-O. So it looks like both our setups have these things in common: Fedora + kernel 6.5 + LUKS encrypted disks + striped mirrors + SLOG + L2ARC + ECC memory. In case it matters at all, it doesn't look like I've ever (intentionally or inadvertently) used block cloning:
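For reference, whether block cloning has ever been used on a pool can presumably be checked via the bclone* pool properties introduced in 2.2 (the pool name below is a placeholder):

```sh
# Block-cloning usage counters; all zeros means cloning was never used.
# "tank" is a placeholder pool name.
zpool get bcloneused,bclonesaved,bcloneratio tank
```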
Same here,
With this bug in 2.2.1 and the block cloning bug in 2.2.0 I guess I'll continue putting off upgrading to 2.2.x and Fedora 39/FreeBSD 14. Both of these are the most serious bugs I've personally noticed making it into a released zfs version, and it happened two releases in a row. Could 2.2.2 be made into a small bug fixing release instead of a normal release so there's more confidence in getting a trustworthy 2.2.x version?
Are you also running ZFS on top of LUKS? Asking since I see /dev/mapper/ devices.
I'm deffo LUKS but the copy paste above from our friends who have repro'd the bug doesn't seem like it's LUKS. Gotta say that my heart almost came out thru my esophagus when I got Alertmanager alerts about various drives in several machines popping off. If anyone is interested, I'm using https://github.com/Rudd-O/zfs-stats-exporter plus Node Exporter, and the following alerting rules for ZFS:
None of my drives tripped the SMART rules:
Maybe I'm missing something, but everyone who confirmed the bug in here is running ZFS on top of LUKS. blind-oracle hasn't confirmed it, but given that his device paths reside in /dev/mapper I would guess he is as well.
Maybe some funky interaction with device-mapper?
@broizter Yes, it's running on top of LUKS since it's much faster than built-in encryption. So yeah, might be some device-mapper related bug which is absent in 2.2.0
I had the same issue using ZFS 2.2.1 with LUKS, Linux 6.6.2.
Yep. 2.2.1 has that problem too (kernel 6.5). Reverting to 2.2.0 now.
So we know that master at the commit in the description and 2.2.1 both share the issue.
Same WRITE error issue on my laptop with two single-disk zpools on LUKS.
Running zfs
Same here. Important data points!
Me as well. All of my LUKS volumes are formatted as LUKS2 with a 4 KiB sector size (including the ones backing SLOG and L2ARC).
In my case all devices are native 4k (SLOG/ARC and spinning disks), so probably it does not matter much.
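If you want to double-check what sector size a given LUKS2 volume was formatted with, something like this should show it (the device path is a placeholder):

```sh
# The LUKS2 header's data-segment section reports the sector size in bytes.
# /dev/sdX1 is a placeholder for the LUKS partition.
sudo cryptsetup luksDump /dev/sdX1 | grep -i sector
```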
@Rudd-O I'd rename the issue; it's more like write errors than data corruption, I think. At least downgrading to 2.2.0 and doing …
Simplifies our code a lot, so we don't have to wait for each and reassemble them. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588 (cherry picked from commit 72fd834)
Before 4.5 (specifically, torvalds/linux@ddc58f2), head and tail pages in a compound page were refcounted separately. This means that using the head page without taking a reference to it could see it cleaned up later before we're finished with it. Specifically, bio_add_page() would take a reference, and drop its reference after the bio completion callback returns. If the zio is executed immediately from the completion callback, this is usually ok, as any data is referenced through the tail page referenced by the ABD, and so becomes "live" that way. If there's a delay in zio execution (high load, error injection), then the head page can be freed, along with any dirty flags or other indicators that the underlying memory is used. Later, when the zio completes and that memory is accessed, it's either unmapped and an unhandled fault takes down the entire system, or it is mapped and we end up messing around in someone else's memory. Both of these are very bad. The solution on these older kernels is to take a reference to the head page when we use it, and release it when we're done. There's not really a sensible way under our current structure to do this; the "best" would be to keep a list of head page references in the ABD, and release them when the ABD is freed. Since this additional overhead is totally unnecessary on 4.5+, where head and tail pages share refcounts, I've opted to simply not use the compound head in ABD page iteration there. This is theoretically less efficient (though cleaning up head page references would add overhead), but it's safe, and we still get the other benefits of not mapping pages before adding them to a bio and not mis-splitting pages. There doesn't appear to be an obvious symbol name or config option we can match on to discover this behaviour in configure (and the mm/page APIs have changed a lot since then anyway), so I've gone with a simple version check. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588 (cherry picked from commit c6be6ce)
I still encountered spurious WRITE errors adding a mirror on a LUKS-partitioned device, both with zfs_vdev_disk_classic set to 0 and to 1.
On 6.8.10-asahi NixOS, ZFS 2.2.4, MacBook Air M2: zfs_vdev_disk_classic=0 and zfs_vdev_disk_classic=1 both result in several hundred zio error=5 type=2 with a LUKS2 header while trying to install. LUKS1 results in no errors.
I encountered this on Linux 6.1 with ZFS 2.2.4 after replacing a disk in a mirror. I tried a ton of different ZFS versions and the different kernel module parameters from this issue, and they did not help. Then, I noticed that LUKS on the new disk had defaulted to 4k sectors rather than 512: After reformatting the LUKS volume with 512 byte sectors, resilvering completed without errors and a scrub looks to be going well also, so this seems to have resolved it for me. This pool does have ashift=12, for what it's worth.
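In case it helps anyone else, the reformat step was presumably something along these lines (destructive, so only on a disk that is being replaced/resilvered anyway; the device path is a placeholder):

```sh
# WARNING: luksFormat destroys any existing data on the target partition.
# /dev/sdX1 is a placeholder; --sector-size 512 forces 512-byte LUKS sectors.
sudo cryptsetup luksFormat --type luks2 --sector-size 512 /dev/sdX1
```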
For what it's worth, I'm using ZFS 2.2.4 and Linux 6.6 and I have more than a dozen ZFS pools with ashift=12 across several machines, the vast majority of which using LUKS2 (with 4K LUKS sectors, despite 512-byte sectors in the underlying physical device) and I've never encountered these errors (I'm only subscribed to this issue because I find them concerning). I run scrubs weekly.
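(For completeness, a weekly scrub can be scheduled with a simple cron entry or systemd timer; the pool name below is a placeholder:)

```sh
# /etc/cron.d/zfs-scrub — scrub "tank" every Sunday at 03:00.
0 3 * * 0 root /usr/sbin/zpool scrub tank
```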
That's an odd case and any fs on that might not be too happy. Thanks for the report.
@sempervictus Agreed. I think at some point LUKS must have changed how it decides default sector size, or perhaps there is some difference between my disks that affects that. Both of my disks advertise 512 byte sectors, but they are SSDs from different manufacturers (Samsung 990 Pro and Sabrent Rocket 4.0 Plus).
@ryantrinkle SSDs under the hood use some 4k-16k NAND page size, so 512 is just a catch-all default. Using 4k with them is very much OK and probably LUKS tries to be smart here (if SSD -> set 4k).
Please see here for a debugging patch that I hope will reveal more info about what's going on: #15646 (comment) (if possible, I would prefer to keep discussion going in #15646, so it's all in one place).
Before 5.4 we have to do a little math. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588
The regular ABD iterators yield data buffers, so they have to map and unmap pages into kernel memory. If the caller only wants to count chunks, or can use page pointers directly, then the map/unmap is just unnecessary overhead. This adds abd_iterate_page_func, which yields unmapped struct page instead. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588
This is just renaming the existing functions we're about to replace and grouping them together to make the next commits easier to follow. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588
Light reshuffle to make it a bit more linear to read and get rid of a bunch of args that aren't needed in all cases. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588
This is just setting up for the next couple of commits, which will add a new IO function and a parameter to select it. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588
This commit tackles a number of issues in the way BIOs (`struct bio`) are constructed for submission to the Linux block layer.

The kernel has a hard upper limit on the number of pages/segments that can be added to a BIO, as well as a separate limit for each device (related to its queue depth and other scheduling characteristics). ZFS counts the number of memory pages in the request ABD (`abd_nr_pages_off()`), and then uses that as the number of segments to put into the BIO, up to the hard upper limit. If it requires more than the limit, it will create multiple BIOs.

Leaving aside the fact that the page count method is wrong (see below), not limiting to the device segment max means that the device driver will need to split the BIO in half. This alone is not necessarily a problem, but it interacts with another issue to cause a much larger problem.

The kernel function to add a segment to a BIO (`bio_add_page()`) takes a `struct page` pointer, and offset+len within it. `struct page` can represent a run of contiguous memory pages (known as a "compound page"). It can be of arbitrary length.

The ZFS functions that count ABD pages and load them into the BIO (`abd_nr_pages_off()`, `bio_map()` and `abd_bio_map_off()`) will never consider a page to be more than `PAGE_SIZE` (4K), even if the `struct page` is for multiple pages. In this case, it will load the same `struct page` into the BIO multiple times, with the offset adjusted each time. With a sufficiently large ABD, this can easily lead to the BIO being entirely filled much earlier than it could have been. This also further contributes to the problem caused by the incorrect segment limit calculation, as it's much easier to go past the device limit, and so require a split. Again, this is not a problem on its own.

The logic for "never submit more than `PAGE_SIZE`" is actually a little more subtle. It will actually never submit a buffer that crosses a 4K page boundary. In practice, this is fine, as most ABDs are scattered, that is, a list of complete 4K pages, and so are loaded in as such. Linear ABDs are typically allocated from slabs, and for small sizes they are frequently not aligned to page boundaries. For example, a 12K allocation can span four pages, eg:

    -- 4K -- -- 4K -- -- 4K -- -- 4K --
    |        |        |        |        |
       :## ######## ######## ######:      [1K, 4K, 4K, 3K]

Such an allocation would be loaded into a BIO as you see: [1K, 4K, 4K, 3K]

This tends not to be a problem in practice, because even if the BIO were filled and needed to be split, each half would still have either a start or end aligned to the logical block size of the device (assuming 4K at least).

---

In ideal circumstances, these shortcomings don't cause any particular problems. It's when they start to interact with other ZFS features that things get interesting.

Aggregation will create a "gang" ABD, which is simply a list of other ABDs. Iterating over a gang ABD is just iterating over each ABD within it in turn. Because the segments are simply loaded in order, we can end up with uneven segments either side of the "gap" between the two ABDs. For example, two 12K ABDs might be aggregated and then loaded as: [1K, 4K, 4K, 3K, 2K, 4K, 4K, 2K]

Should a split occur, each individual BIO can end up either having a start or end offset that is not aligned to the logical block size, which some drivers (eg SCSI) will reject.
However, this tends not to happen because the default aggregation limit usually keeps the BIO small enough to not require more than one split, and most pages are actually full 4K pages, so hitting an uneven gap is very rare anyway.

If the pool is under particular memory pressure, then an IO can be broken down into a "gang block", a 512-byte block composed of a header and up to three block pointers. Each points to a fragment of the original write, or in turn, another gang block, breaking the original data up over and over until space can be found in the pool for each of them. Each gang header is a separate 512-byte memory allocation from a slab that needs to be written down to disk. When the gang header is added to the BIO, it's a single 512-byte segment.

Pulling all this together, consider a large aggregated write of gang blocks. This results in a BIO containing lots of 512-byte segments. Given our tendency to overfill the BIO, a split is likely, and most possible split points will yield a pair of BIOs that are misaligned. Drivers that care, like the SCSI driver, will reject them.

---

This commit is a substantial refactor and rewrite of much of `vdev_disk` to sort all this out.

`vdev_bio_max_segs()` now returns the ideal maximum size for the device, if available. There's also a tuneable `zfs_vdev_disk_max_segs` to override this, to assist with testing.

We scan the ABD up front to count the number of pages within it, and to confirm that if we submitted all those pages to one or more BIOs, it could be split at any point without creating a misaligned BIO. If the pages in the BIO are not usable (as in any of the above situations), the ABD is linearised, and then checked again. This is the same technique used in `vdev_geom` on FreeBSD, adjusted for Linux's variable page size and allocator quirks.

`vbio_t` is a cleanup and enhancement of the old `dio_request_t`. The idea is simply that it can hold all the state needed to create, submit and return multiple BIOs, including all the refcounts, the ABD copy if it was needed, and so on. Apart from what I hope is a clearer interface, the major difference is that because we know how many BIOs we'll need up front, we don't need the old overflow logic that would grow the BIO array, throw away all the old work and restart. We can get it right from the start.

Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588
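Since the commit text above mentions the `zfs_vdev_disk_max_segs` tuneable, on builds that carry this change it should be adjustable like any other zfs module parameter, e.g.:

```sh
# Cap the number of segments per BIO for testing (0 = use the device default).
# Only present on ZFS builds that include the vdev_disk rewrite.
echo 16 | sudo tee /sys/module/zfs/parameters/zfs_vdev_disk_max_segs
```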
This makes the submission method selectable at module load time via the `zfs_vdev_disk_classic` parameter, allowing this change to be backported to 2.2 safely, and disabled in favour of the "classic" submission method if new problems come up. Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Closes openzfs#15533 Closes openzfs#15588
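In practice that means the classic path can be re-selected with a modprobe option, along these lines:

```sh
# Fall back to the classic BIO submission code (1 = classic, 0 = new path).
# The parameter is read at module load, so set it before zfs.ko is loaded.
echo "options zfs zfs_vdev_disk_classic=1" | sudo tee /etc/modprobe.d/zfs-vdev-disk.conf
# then rebuild the initramfs if zfs is loaded from it, and reboot
```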
This reverts commit bd7a02c which can trigger an unlikely existing bio alignment issue on Linux. This change is good, but the underlying issue it exposes needs to be resolved before this can be re-applied. Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Issue openzfs#15533 Issue openzfs#16631
It seems our notion of "properly" aligned IO was incomplete. In particular, dm-crypt does its own splitting, and assumes that a logical block will never cross an order-0 page boundary (ie, the physical page size, not compound size). This effectively means that it needs to be possible to split a BIO at any page or block size boundary and have it work correctly. This updates the alignment check function to enforce these rules (to the extent possible). Our response to misaligned data is to make some new allocation that is properly aligned, and copy the data into it. It turns out that linearising (via abd_borrow_buf()) is not enough, because we allocate eg 4K blocks from a general purpose slab, and so may receive (or already have) a 4K block that crosses pages. So instead, we allocate a new ABD, which is guaranteed to be aligned properly to block sizes, and then copy everything into it, and back out on the way back. Sponsored-by: Klara, Inc. Sponsored-by: Wasabi Technology, Inc. Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Reviewed-by: Alexander Motin <mav@FreeBSD.org> Reviewed-by: Tony Hutter <hutter2@llnl.gov> Signed-off-by: Rob Norris <rob.norris@klarasystems.com> Closes #16687 #16631 #15646 #15533 #14533
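One quick way to see whether a dm-crypt device is exposing a larger logical block size than its backing disk (the situation described above) is to compare what the kernel reports for each; the device names are placeholders:

```sh
# A dm-crypt mapping formatted with 4K LUKS sectors reports 4096 here even if
# the underlying disk reports 512. dm-0 / sda are placeholder device names.
cat /sys/block/dm-0/queue/logical_block_size
cat /sys/block/sda/queue/logical_block_size
```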
I build and regularly test ZFS from the master branch. A few days ago I built and tested the commit specified in the headline of this issue, deploying it to three machines.
On two of them (the ones that had mirrored pools), a data corruption issue arose where many WRITE errors (hundreds) would accumulate when deleting snapshots, but no CKSUM errors took place, nor was there evidence that hardware was the issue. I tried a scrub, and that just made the problem worse.
Initially I assumed I had gotten extremely unlucky and hardware was dying, because two mirrors of one leg were experiencing the issue, but none of the drives of the other leg were -- so I decided it was best to be safe and attach a third mirror drive to the first leg (that was $200, oof). Since I had no more drive bays, I popped the new drive into a USB port (USB 2.0!) and attached it to the first leg.
During the resilvering process, the third drive also began experiencing WRITE errors, and the first CKSUM errors.
I tried different kernels (6.4, 6.5 from Fedora) to no avail. The error was present either way. zpool clear was followed by a few errors whenever disks were written to, and hundreds of errors whenever snapshots were deleted (I have zfs-auto-snapshot running in the background).
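(For reference, clearing the counters and watching for new errors looks roughly like this; the pool name is a placeholder:)

```sh
# Reset error counters, then watch whether new WRITE/CKSUM errors accumulate.
# "tank" is a placeholder pool name.
sudo zpool clear tank
zpool status -v tank
sudo zpool events -f      # follow new ZFS events as they arrive
```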
Then, my backup machine began experiencing the same WRITE errors. I can't have this backup die on me, especially now that I have actual data corruption on the big data file server.
At this point I concluded there must be some serious issue with the code, and decided to downgrade all machines to a known-good build. After downgrading the most severely affected machine (whose logs are above) to my build of e47e9bb, everything appears nominal and the resilvering is progressing without issues. Deleting snapshots also is no longer causing issues.
Nonetheless, I have forever lost what appears to be "who knows what" metadata, and of course lost four days trying to resilver unsuccessfully:
In conclusion, something added between e47e9bb..786641d is causing this issue.
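If anyone with a disposable test pool wants to narrow that range down, the standard approach would be a git bisect between those two commits, e.g.:

```sh
# Bisect the e47e9bb..786641d range from above: bad commit first, then good.
git bisect start 786641d e47e9bb
# build + install + test at each step, then mark the result:
git bisect good    # or: git bisect bad
# when finished:
git bisect reset
```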