Issuing "zpool trim" locks up zfs and makes pool importable only in RO mode #16056
Comments
Probably SMR is to blame.
@IvanVolosyuk Why do you think so?
ZFS on SMR drives has a lot of issues; it is generally recommended to avoid those drives. Search Google for 'zfs smr' for the full picture, e.g. https://www.tspi.at/2022/11/26/whynosmrforzfs.html. SMR drives live their own life, and it might be that it wasn't the change of ZFS version that caused it to start working again, but just the passage of time and the drive's internal SMR logic making progress.

It might be useful to repeat swapping between ZFS 2.2.2 and 2.2.3 several times to see if it is indeed related to some change there. This is critical information - how easily repeatable the problem is.

When the pool fails to import, it might actually be waiting for the drive to respond to the issued commands, so it might simply take a long time for the drive to respond. Look at the dmesg logs for disk command failures / timeouts.

There are two changes in 2.2.3 which look relevant:
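To check for the disk command failures / timeouts mentioned above, something like the following can help. This is a hedged sketch: the grep pattern is my assumption of common libata/SCSI failure strings, not something taken from this thread.

```shell
# Sketch: scan kernel messages for disk command failures/timeouts around the
# hang. The pattern is an assumption covering common libata/SCSI error text.
scan_disk_errors() {
    grep -iE 'ata[0-9]+.*(timeout|failed|error)|I/O error|task abort' "$@"
}

# Typical use on the affected machine (may need root for full dmesg):
#   dmesg | scan_disk_errors
```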
I have tried reproducing the issue on non-SMR drives but I am not able to do so. I agree with what @IvanVolosyuk has suggested. Trying 2.2.3 without the two commits mentioned might help narrow down the issue.
@IvanVolosyuk @usaleem-ix I don't have experience with tracking down bugs. What should I do before I try it again to make sure useful information is gathered, and where would it most likely be logged?
I actually make a point of keeping an eye on these drives by watching CPU and IO activity, especially during writes. I also listen to them and touch them to see what's going on. SMR is implemented differently in different drives, and across manufacturers it seems. In this particular case I'm absolutely sure there was no activity that I might have missed. After issuing the TRIM I heard the heads park as ZFS locked up. During the following import attempts, the drive was not trying to read anything past the point where the import process hung.

There might be some interaction between what ZFS tried to do to make the drive complete a TRIM and the drive's firmware. I suspect the subsequent imports did not succeed because it might have looked like the drive was in the middle of an active TRIM operation (or something else) when it actually wasn't. Version 2.2.3 refused to proceed, while version 2.2.2 just imported and completed the pending TRIM. Now, was it something in the firmware that was left in a weird state, or was it something with the file system on disk?

I'm aware of the drawbacks of SMR. I'm only using such disks for sequential writes of mostly big files, and have found that TRIM helps a lot once the filesystem has had time to open up all kinds of inconvenient holes due to file deletions / writes / deletions... Drive-managed SMR is supposed to be completely transparent; I don't get how ZFS could have a problem with this particular technology (excluding use cases where SMR is practically unusable regardless of the file system).
I would start by trying to consistently reproduce with ZFS 2.2.3. Capturing stack traces of the zfs kernel threads and of the stuck zpool import can be useful for investigating a possible deadlock. Reverting those two commits and trying ZFS 2.2.3 without them can also help further narrow down the problem.
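For the stack-capture step, a minimal sketch along these lines can work. The process names and the SysRq alternative are my additions, not from this thread, and both approaches need root:

```shell
# Print the kernel stack of each given pid via procfs (needs root;
# /proc/<pid>/stack is usually hidden from unprivileged users).
dump_stacks() {
    for pid in "$@"; do
        echo "=== pid $pid ==="
        cat "/proc/$pid/stack" 2>/dev/null || echo "(unreadable: need root)"
    done
}

# While the import hangs (process/thread names are assumptions):
#   dump_stacks $(pgrep -x zpool) $(pgrep -f 'txg_sync|z_trim')
# Alternatively, dump every blocked task's stack into the kernel log:
#   echo w > /proc/sysrq-trigger && dmesg | tail -n 200
```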
Will do. It will be a few days though; I don't have a place to dump the drive's contents right now. If anyone has a WD Elements or MyPassport 2.5" 5TB drive they don't mind messing with, please go ahead in the meantime. This particular one is a MyPassport WD50NDZW-11BCSS1, the slower 4800 RPM variant; some Elements and MyPassport units have a 5400 RPM variant of this drive. If it's a problem with ZFS, it should reproduce with other drives as long as they are trimmable, and possibly only with HDDs not SSDs, but who knows.

BTW, I ran that line and learned that zsh and bash apparently work differently when it comes to the "read" part. zsh seems to ignore the "-n 1 z", maybe. It's unclear to me if "read" is a builtin or standalone in either case.
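On the `read` question: it is a shell builtin in both bash and zsh, but the option letters differ - bash reads N characters with `-n N`, while zsh's builtin uses `-k N` for a character count, which would explain the one-liner misbehaving under zsh. A small check, run explicitly under bash:

```shell
# bash builtin: read exactly one character into `c` (no waiting for Enter).
# Running via `bash -c` pins the bash semantics regardless of the outer shell.
answer=$(printf 'yes\n' | bash -c 'read -r -n 1 c; echo "$c"')
echo "got: $answer"   # prints "got: y"
```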
I think I hit the same problem as @gititdone876. It appears the problem started somewhere in a recent version (I'm not 100% sure which one, as I was doing updates over the last few days). Currently running:
First I started getting kernel-reported hangs for

Trying to narrow down the issue, I got the system to hang countless times. Since the error was pointing to an SSD, I just replaced the SSD with a similar one, doing
Looking at the kernel log the dreaded error is back:
It appears that something is truly broken in the structure of the data, as I replaced the disk in question and the error still persists. None of the disks are SMRs. The HDD is
@IvanVolosyuk Interesting find, I need to look around for a spare SSD that isn't one of these models. Linking the source for posterity: https://github.com/torvalds/linux/blob/f2f80ac809875855ac843f9e5e7480604b5cbff5/drivers/ata/libata-core.c#L4208 I downgraded ZFS to 2.2.2 and the same
Looking at the git release history, the trim performance improvement was introduced in 2.2.3 (#15843). Excuse my stupid questions, but would the ZIL contain the broken discard request in this case, or does downgrading to 2.2.2 and seeing the same behavior prove that it's not the new trim changes? Edit:
and the result is the same as above. I wonder if 2.2.3 somehow damaged the on-disk structure. I cannot try 2.1.x as this pool is upgraded with

Currently, I came back to
There might be two separate bugs here: one from the original poster and one from you. The change was indeed in 2.2.3, so if it is reproducible before that, it should be something else. You should probably create a separate bug for your issue, as your case looks pretty different from the original poster's.
Turns out I have logs for the

I'm not a coder, but I've been looking around some kernel code and learning more about ZFS since this happened, trying to understand how the Linux storage stack is organized and how ZFS grafts itself into it. USB-attached drives too. If someone can point me to a good resource (preferably with pictures) detailing how things are organized, that would be awesome. I've started watching this - ZFS TRIM Explained by Brian Behlendorf. It turns out the

Should I include the logs in this comment, or edit my original post?
Probably doesn't matter too much; personally I'd add to the original post with a line "edit 2024-04-xx adding logs as requested in [link to comment]". But I don't think it matters so much as long as we can all find it!
Excellent; being able to compare will be gold. For what it's worth,

Incidentally, 2.2.3 (via 08fd5cc via #15843) does introduce buggy TRIM error handling on some kernel versions. I've spent the morning chasing that down and am just finishing up my testing; I should be posting a PR within the next 24h. However, if I'm right, it won't "fix" issues with TRIM itself; rather, it'll just stop the pool hanging, so I'm not sure it actually properly resolves this issue. I only mention it to say that I'm working on it :)
@robn @usaleem-ix Added logs in original post. |
@gititdone876 thank you. "oopsie" :) So, I'm not totally sure what's happening, because this particular kernel version (6.7) is responding in a way I can't quite see my way through. Specifically,

If you can be bothered reproducing it again on 6.7+2.2.3, I'd love it if you could post the contents of the following two files just after the TRIM errors are thrown (while the pool is stuck, if you like).
The first will show whether the OpenZFS TRIM subsystem is receiving the errors at all. If so, then it shouldn't be hanging, though it might be the earlier failure causing the hang (that is, this issue might not be entirely about TRIM). If it's not getting the errors though, #16070 should fix that part (#16081 for 2.2). Even if that's not it, I'd still like to see it again with that patch applied - it reorganises the whole error response path, so it shouldn't be possible for anything to fall through the cracks.

Meanwhile, "crash your pool, apply these patches, then try to crash it again" is a lot to ask, so I understand if you would rather not. Avoiding TRIM until 2.2.4 is out (which will have all these fixes) is also a good option from here, and then we can go around again and see if there's more to all this. Thanks for the info, it is helping a lot!
This is what
The FW version makes an appearance, I think. I have a Seagate SMR disk that shows zeros for DISC-GRAN and DISC-MAX.

BTW, I had to manually change SCSI settings to be able to trim a very similar WD 5TB disk with

I did trim this pool on the machine with

I'm keeping an eye on this while I learn more. Let me know if I can provide any more relevant information.
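The SCSI setting mentioned is usually the sysfs `provisioning_mode` knob for USB/UAS-attached disks whose bridge misreports discard support. A hedged sketch; the device address `0:0:0:0` and the `/dev/sdX` name are assumptions, not values from this thread:

```shell
# Build the sysfs path for a SCSI disk's provisioning mode ("H:C:T:L" address).
provisioning_path() {
    echo "/sys/class/scsi_disk/$1/provisioning_mode"
}

# As root, for the right address (find it with `lsscsi` or
# `ls /sys/class/scsi_disk`):
#   echo unmap > "$(provisioning_path 0:0:0:0)"
# Then re-check what the kernel advertises; DISC-GRAN/DISC-MAX should become
# non-zero once unmap (TRIM) is enabled:
#   lsblk -D /dev/sdX
```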
System information
Describe the problem you're observing
After zfs updated from 2.2.2 to 2.2.3, a pool which had `autotrim=on` locked up zfs on import. There was `iowait` activity and zfs could not be killed. Hard shutdown. I somehow managed to turn off `autotrim`, scrubbed with no issues and used the pool for a while until I accidentally issued a manual `zpool trim`, which led to the same problem. After restart, `zpool import` would not import the pool, would not exit, and there was no `iowait` activity, but I still had to do a hard shutdown. This was on Manjaro with kernel `6.7.7-1`. Tried on a different machine with EndeavourOS, kernel `6.6.23-1-lts` and `zfs-2.2.3`: same thing. I tried to import read-only and that worked, but several combinations of kernels `6.6.23-1-lts` and `6.8.2-arch2-1` with `zfs-2.2.3` and `latest` from git would have the exact same problem - `zpool import` does not exit, does not import. While the pool was imported as read-only, `zpool get all` would show `capacity 0%`, which is not true. Files were readable, including partial ones that were being written during the hang.

I built `zfs-2.2.2`, and running it with kernel `6.6.23-1-lts` on the EndeavourOS machine solved the problem. As soon as I ran `zpool import`, the `TRIM` operation that had failed previously went ahead and finished successfully, and the pool is now importable and writable with `zfs-2.2.3` and kernel `6.7.7-1` on the Manjaro system. Scrubbed with no errors. I'm guessing the issue is with `zfs-2.2.3`, not a specific kernel.

The pool is on a single external USB SMR drive - WD50NDZW-11BCSS1
Describe how to reproduce the problem
Issue a `zpool trim`, or have `autotrim=on`, with an SMR HDD connected via USB and `zfs-2.2.3`.
EDIT: 2024-04-08 Here come the logs
Some of the errors might be identical, sorry about that. I just put everything between the problematic `zpool trim` and the eventual resolution in one file in chronological order. I don't remember when I tried importing as normal and when as read-only. Check the output of `zpool history -i` for an overview.

oopsie.log
zpool_history.log
zpool_get_all.log