
Make txg_wait_synced conditional in zfsvfs_teardown #9115

Merged: 1 commit into openzfs:master, Aug 15, 2019

Conversation

@PaulZ-98 (Contributor) commented Aug 2, 2019

The call to txg_wait_synced in zfsvfs_teardown should
be made conditional on the objset having dirty data.
This can prevent unnecessary txg_wait_synced during
some unmount operations.

Signed-off-by: Paul Zuchowski <pzuchowski@datto.com>

The function zfsvfs_teardown only skips txg_wait_synced if the objset is read-only. We can also avoid txg_wait_synced if the objset has no dirty data. Detect if the objset is dirty and don't call txg_wait_synced if it is not.

Motivation and Context

Applications that perform lots of sync tasks are already forcing txg_wait_synced. If the application also limits ZFS mounts by unmounting datasets after use, avoiding txg_wait_synced during unmount can help performance.

Description

Add to the conditional test for txg_wait_synced in zfsvfs_teardown.
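
For context, the gating logic changes roughly as follows (a sketch of the intent, not the verbatim diff; os and zfsvfs stand in for the objset and filesystem handles available in zfsvfs_teardown):

	/* Before: wait for sync unless the objset is read-only. */
	if (!zfs_is_readonly(zfsvfs))
		txg_wait_synced(dmu_objset_pool(os), 0);

	/* After: additionally require that some in-flight txg is dirty. */
	if (!zfs_is_readonly(zfsvfs) && os_dirty)
		txg_wait_synced(dmu_objset_pool(os), 0);

Here os_dirty is computed by checking dmu_objset_is_dirty() for each of the TXG_SIZE in-flight txgs, as shown in the review discussion below.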

How Has This Been Tested?

Ran the zfs test suite.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (a change to man pages or other documentation)


@ahrens added the Status: Code Review Needed (Ready for review and testing) and Type: Performance (Performance improvement or performance problem) labels Aug 5, 2019
for (int t = 0; t < TXG_SIZE; t++) {
	os_dirty = dmu_objset_is_dirty(os, t);
	if (os_dirty)
		break;
}
@ahrens (Member) commented on this code:
[nit] I think this would be slightly easier to read as:

if (dmu_objset_is_dirty(...)) {
  os_dirty = B_TRUE;
  break;
}

@PaulZ-98 (Contributor, Author) replied:
Thanks Matt, I made that change.
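
With that revision, the dirty-data scan takes the shape Matt suggested (a sketch assembled from the snippets above; the merged commit e2b31b5 is authoritative):

	boolean_t os_dirty = B_FALSE;

	for (int t = 0; t < TXG_SIZE; t++) {
		if (dmu_objset_is_dirty(os, t)) {
			os_dirty = B_TRUE;
			break;
		}
	}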

@ahrens requested a review from grwilson August 5, 2019 18:31
@codecov (bot) commented Aug 6, 2019

Codecov Report

Merging #9115 into master will increase coverage by 0.02%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master    #9115      +/-   ##
==========================================
+ Coverage   79.18%   79.21%   +0.02%     
==========================================
  Files         400      400              
  Lines      121692   121656      -36     
==========================================
+ Hits        96360    96367       +7     
+ Misses      25332    25289      -43
Flag      Coverage Δ
#kernel   79.72% <100%> (+0.04%) ⬆️
#user     67.4%  <ø>    (+0.35%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update c81f179...16e8c2d.

@behlendorf added the Status: Accepted (Ready to integrate: reviewed, tested) label and removed the Status: Code Review Needed (Ready for review and testing) label Aug 14, 2019
@behlendorf merged commit e2b31b5 into openzfs:master Aug 15, 2019
tonyhutter pushed a commit to tonyhutter/zfs that referenced this pull request Dec 24, 2019
The call to txg_wait_synced in zfsvfs_teardown should
be made conditional on the objset having dirty data.
This can prevent unnecessary txg_wait_synced during
some unmount operations.

Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Zuchowski <pzuchowski@datto.com>
Closes openzfs#9115
tonyhutter pushed a commit to tonyhutter/zfs that referenced this pull request Dec 27, 2019
tonyhutter pushed a commit that referenced this pull request Jan 23, 2020
allanjude pushed a commit to KlaraSystems/zfs that referenced this pull request Apr 28, 2020

Signed-off-by: Bryant G. Ly <bly@catalogicsoftware.com>

Conflicts:
	module/zfs/zfs_vfsops.c
asomers added a commit to asomers/zfs that referenced this pull request Jun 14, 2024
This applies the same change in openzfs#9115 to FreeBSD.  This was actually the
old behavior in FreeBSD 12; it only regressed when FreeBSD support was
added to OpenZFS.  As far as I can tell, the timeline went like this:

* Illumos's zfsvfs_teardown used an unconditional txg_wait_synced
* Illumos added the dirty data check [^4]
* FreeBSD merged in Illumos's conditional check [^3]
* OpenZFS forked from Illumos
* OpenZFS removed the dirty data check in openzfs#7795 [^5]
* @mattmacy forked the OpenZFS repo and began to add FreeBSD support
* OpenZFS PR openzfs#9115[^1] recreated the same dirty data check that Illumos
  used, in slightly different form.  At this point the OpenZFS repo did
  not yet have multi-OS support.
* Matt Macy merged in FreeBSD support in openzfs#8987 [^2], but it was based
  on slightly outdated OpenZFS code.

In my local testing, this vastly improves the reboot speed of a server
with a large pool that has 1000 datasets and is resilvering an HDD.

[^1]: openzfs#9115
[^2]: openzfs#8987
[^3]: freebsd/freebsd-src@10b9d77
[^4]: illumos/illumos-gate@5aaeed5
[^5]: openzfs#7795

Sponsored by:	Axcient
Signed-off-by:	Alan Somers <asomers@gmail.com>
behlendorf pushed a commit that referenced this pull request Aug 9, 2024
Reviewed-by: Allan Jude <allan@klarasystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Alan Somers <asomers@gmail.com>
Closes #16268
lundman pushed a commit to openzfsonwindows/openzfs that referenced this pull request Sep 4, 2024