zfs destroy is failing #47

Closed
datacore-rm opened this issue Oct 1, 2021 · 7 comments

Comments

@datacore-rm

zfs destroy is failing most of the time. It fails in dsl_destroy_head_check_impl(), which expects the hold count to be 0, but it is 1, as in #9.

[screenshot attached]
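
For reference, a paraphrased sketch of the check that trips (based on dsl_destroy_head_check_impl() in upstream dsl_destroy.c; simplified, not verbatim):

    /*
     * Simplified paraphrase of the failing check: the destroy is refused
     * with EBUSY while any long hold remains on the dataset.
     */
    if (zfs_refcount_count(&ds->ds_longholds) != expected_holds)
            return (SET_ERROR(EBUSY));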


datacore-rm commented Oct 5, 2021

This issue is the same as the one you have documented in zfs_windows_unmount(). The 'ds_longholds' ref count is taken during zfs mount and released during zfs unmount. In the successful case, zfs.exe finds the mount point and sends the unmount ioctl followed by the destroy ioctl. In the failing case, zfs.exe cannot find the mount point, so it sends only the destroy command and the ref count is never dropped.
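
A minimal illustration of that flow (helper names are placeholders, not the actual zfs.exe code):

    /*
     * Illustration only -- placeholder helper names, not the actual
     * zfs.exe code.  Destroy only succeeds when the unmount ioctl runs
     * first and drops the long hold taken at mount time.
     */
    if (mountpoint_found(dataset)) {
            send_unmount_ioctl(dataset);   /* kernel releases ds_longholds */
            send_destroy_ioctl(dataset);   /* holds == 0, check passes */
    } else {
            send_destroy_ioctl(dataset);   /* hold never dropped -> EBUSY ("dataset is busy") */
    }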

But compared to zfsin, the function zfs_remove_driveletter() is still called from zfs_windows_unmount(). If that call is commented out, this "zfs destroy" functionality works fine, as it does in zfsin:

	// Delete mountpoints for our volume manually.
	// Query the mountmgr for mountpoints and delete them
	// until no mountpoint is left. Because we are not satisfied
	// with mountmgr's work, it gets offended and doesn't
	// automatically create mountpoints for our volume after we
	// deleted them manually. But as long as we recheck that in
	// mount and create the points manually (if necessary),
	// that should hopefully be OK.

	// We used to loop here and keep deleting anything we find,
	// but we are only allowed to remove symlinks; anything
	// else and MountMgr ignores the device.
@datacore-rm

To reproduce, create and destroy the same dataset twice:

zpool create -f testpool PHYSICALDRIVE1
zfs create testpool/fs1
zfs create testpool/fs1/fs2
zfs destroy testpool/fs1/fs2
zfs create testpool/fs1/fs2
zfs destroy testpool/fs1/fs2

error: cannot destroy testpool/fs1/fs2: dataset is busy.


lundman commented Oct 8, 2021

After you unmount the dataset, the directory/reparse point is still there (and can't be deleted), so that is going to be a problem....


lundman commented Oct 8, 2021

Ah, amusingly, when we go to delete it, it calls zfs_lookup(), and zfs_find_dvp_vp() notices it is a reparse point and returns that status. Looks like I should demote it from a reparse point before I can delete it.
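
Roughly the check being hit (illustrative, not the exact driver code):

    /*
     * Illustrative only: while resolving the path, a znode carrying the
     * ZFS_REPARSE flag short-circuits the lookup and the caller is told
     * to follow the reparse point instead of opening the directory.
     */
    if (zp->z_pflags & ZFS_REPARSE)
            return (STATUS_REPARSE);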


lundman commented Oct 8, 2021

OK, so in the zfs_lookup() case (from a CreateFile()) we need to pay attention to FILE_OPEN_REPARSE_POINT, so that you can open the actual item instead of always being forwarded.
Then I added a DeleteReparsePoint() which opens the reparse point (using FILE_OPEN_REPARSE_POINT), removes the ZFS_REPARSE flag, and deletes the ZPL_SA_SYMLINK data, turning the mountpoint reparse point back into a regular directory.
Since that calls into the filesystem with FSCTL_DELETE_REPARSE_POINT, we now handle that IRP as well, calling delete_reparse_point(), which does the flag removal and SA delete.
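
For context, a sketch of how a userland helper along those lines could look, using the Win32-level equivalents (FILE_FLAG_OPEN_REPARSE_POINT plus DeviceIoControl with FSCTL_DELETE_REPARSE_POINT); the function name and error handling are illustrative, not the actual zfs.exe DeleteReparsePoint():

    #include <windows.h>
    #include <winioctl.h>

    /*
     * Illustrative sketch only -- not the actual DeleteReparsePoint() in
     * zfs.exe.  Opens the reparse point itself rather than following it,
     * then asks the filesystem to remove it, which the driver now handles
     * by clearing ZFS_REPARSE and deleting the SA data.
     */
    static BOOL
    delete_reparse_point_sketch(LPCWSTR path, ULONG reparse_tag)
    {
            REPARSE_GUID_DATA_BUFFER rgdb = { 0 };
            DWORD bytes = 0;
            BOOL ok;
            HANDLE h;

            /* Open the reparse point itself; BACKUP_SEMANTICS is needed
             * to open a directory handle. */
            h = CreateFileW(path, GENERIC_WRITE, 0, NULL, OPEN_EXISTING,
                FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS, NULL);
            if (h == INVALID_HANDLE_VALUE)
                    return (FALSE);

            /* Note: some filesystems expect only the 8-byte
             * REPARSE_DATA_BUFFER header for Microsoft tags. */
            rgdb.ReparseTag = reparse_tag;  /* tag used for the mountpoint */
            ok = DeviceIoControl(h, FSCTL_DELETE_REPARSE_POINT, &rgdb,
                REPARSE_GUID_DATA_BUFFER_HEADER_SIZE, NULL, 0, &bytes, NULL);
            CloseHandle(h);
            return (ok);
    }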

Finally, ioc_unmount() then calls the reparse-point deletion after unmount, which leaves a regular directory that can (and will) be deleted.

This part works.

However, the directory cannot be removed ("not empty"), I believe due to vp->use_count being 1. I need to track this down.


lundman commented Oct 11, 2021

OK, part one: 3e83da0

It still has the "is busy" problem you already mentioned.


lundman commented Oct 12, 2021

As you mentioned, with 8c4a594 commented out, I can create/destroy multiple times. It is not an area I am familiar with.
