Let zfs mount all tolerate in-progress mounts #8881
Conversation
Signed-off-by: Don Brady <don.brady@delphix.com>
Codecov Report
@@            Coverage Diff            @@
##           master    #8881     +/-  ##
=========================================
+ Coverage   78.65%   78.76%    +0.1%
=========================================
  Files         382      382
  Lines      117791   117785       -6
=========================================
+ Hits        92652    92771     +119
+ Misses      25139    25014     -125
Continue to review full report at Codecov.
I think including this additional resiliency makes good sense, but I'd like to better understand why you were seeing concurrent mounts at all. `zfs mount -a` should only be run once, by the zfs-mount service. Is it conflicting with an unrelated `zfs mount` in your environment?
As I understand it, the reason we're seeing these racing mounts is that both the zfs-mount.target and zfs-share.target are allowed to execute concurrently after the import.

@aerusso I was hoping to get your thoughts on the best way to handle this from a systemd perspective. We used to run the …
The zfs-mount service can unexpectedly fail to start when zfs encounters a mount that is in progress. This service uses `zfs mount -a`, which has a window between the time it checks if the dataset was mounted and when the actual mount (via the `mount.zfs` binary) occurs. The reason for the racing mounts is that both zfs-mount.target and zfs-share.target are allowed to execute concurrently after the import. This is more of an issue with the relatively recent addition of parallel mounting, and we should consider serializing the mount and share targets.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: Allan Jude <allanjude@freebsd.org>
Signed-off-by: Don Brady <don.brady@delphix.com>
Closes openzfs#8881
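The race described in the commit message can be sketched in miniature. This is an illustrative Python model, not the libzfs C code: the in-memory `mounted` set and `do_mount` stand in for the real mounted-check and the `mount.zfs` invocation, and the window between the check and the mount is the one the commit message describes.

```python
import threading

# Toy model of the `zfs mount -a` race: each worker checks whether a
# dataset is mounted, and only then performs the mount. Between the
# check and the mount there is a window in which another worker can
# mount first, making the second mount attempt fail.
mounted = set()
state_lock = threading.Lock()
errors = []


def do_mount(dataset):
    """Stand-in for invoking mount.zfs; fails if someone got there first."""
    with state_lock:
        if dataset in mounted:
            raise OSError("mount is in progress or already done")
        mounted.add(dataset)


def mount_all(datasets):
    """Stand-in for `zfs mount -a`: check, then mount."""
    for ds in datasets:
        if ds in mounted:        # check ...
            continue
        try:
            do_mount(ds)         # ... then mount: the race window is here
        except OSError as exc:
            errors.append((ds, exc))


threads = [
    threading.Thread(target=mount_all, args=(["tank/a", "tank/b"],))
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Without tolerance for in-progress mounts, `errors` can be non-empty
# even though every dataset ends up mounted exactly once.
```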
This reverts commit a9cd8bf, which introduced a segfault when running `zfs mount -a` multiple times when there are mountpoints which are not empty. This segfault is now seen frequently by the CI after the mount code was updated to directly call mount(2). The original reason this logic was added is described in openzfs#8881. Since then the systemd `zfs-share.target` has been updated to run "After" the `zfs-mount.service`, which should avoid this issue.

Reviewed-by: Don Brady <don.brady@delphix.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes openzfs#9560
Closes openzfs#10364
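The ordering fix referred to in the revert commit amounts to a unit-ordering directive along these lines. This is an illustrative drop-in fragment; the exact unit names and file location in a given OpenZFS packaging may differ:

```
# Illustrative drop-in, e.g. /etc/systemd/system/zfs-share.service.d/order.conf
# Ensures sharing only starts once all datasets have been mounted,
# removing the concurrent `zfs mount` that triggered the race.
[Unit]
After=zfs-mount.service
```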
Motivation and Context
The `zfs-mount` service can unexpectedly fail to start when zfs encounters a mount that is in progress. This service uses `zfs mount -a`, which has a window between the time it checks if the dataset was mounted and when the actual mount (via the `mount.zfs` binary) occurs.
Description
A simple way to demonstrate this in-progress mount window:
The suggested solution is to check whether the failure was due to an in-progress mount and, if so, not treat it as an error.
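The suggested tolerance can be sketched as follows. This is illustrative Python rather than the actual libzfs change; `is_mounted` and `do_mount` are hypothetical stand-ins for the libzfs mounted-check and the underlying mount(2) call:

```python
import errno


def mount_dataset(dataset, is_mounted, do_mount):
    """Attempt to mount `dataset`, tolerating a concurrent in-progress mount.

    `is_mounted` and `do_mount` are stand-ins for the libzfs mounted-check
    and the actual mount(2) invocation. Returns 0 on success.
    """
    if is_mounted(dataset):
        return 0
    try:
        do_mount(dataset)
    except OSError as exc:
        # Another mounter beat us to it: EBUSY from mount(2), combined with
        # the dataset now being mounted, means the work is done, not failed.
        if exc.errno == errno.EBUSY and is_mounted(dataset):
            return 0
        raise
    return 0
```

The key design point is that the failure is re-checked against the current mount state: a bare EBUSY is still an error unless the dataset actually ended up mounted.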
How Has This Been Tested?
Manually tested the simple case described above to confirm `zfs mount -a` is working as intended. Also tested with a suite of automated tests that was routinely encountering this error; a `journalctl` audit confirms that the `zfs-mount` service no longer fails to start when it encounters an in-progress mount.
Types of changes
Checklist:
All commit messages are properly formatted and contain `Signed-off-by`.