log flood when deleting the volume when clone volume exist #236

Open
pawanpraka1 opened this issue Nov 11, 2020 · 1 comment
Labels: backlog (Will be picked up as a roadmap item.), refactoring (Modifying existing code.)

@pawanpraka1 (Contributor)

When a clone volume exists and we delete the source volume, the daemonset gets flooded with error logs saying:

E1110 11:34:19.572435       1 zfs_util.go:597] zfs: could not destroy snapshot for the clone vol zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377 snap pvc-728b13d6-9a46-467f-a35f-782abd07f377 err exit status 1
E1110 11:34:19.572475       1 volume.go:251] error syncing 'openebs/pvc-728b13d6-9a46-467f-a35f-782abd07f377': exit status 1, requeuing
I1110 11:34:49.563328       1 volume.go:136] Got update event for ZV zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377
I1110 11:34:49.563379       1 zfs_util.go:592] destroying snapshot pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377 for the clone zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377
E1110 11:34:49.572847       1 zfs_util.go:670] zfs: could not destroy snapshot pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377 cmd [destroy zfspv-pool/pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377] error: cannot destroy 'zfspv-pool/pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377': snapshot has dependent clones
use '-R' to destroy the following datasets:
zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377
E1110 11:34:49.572878       1 zfs_util.go:597] zfs: could not destroy snapshot for the clone vol zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377 snap pvc-728b13d6-9a46-467f-a35f-782abd07f377 err exit status 1
E1110 11:34:49.572921       1 volume.go:251] error syncing 'openebs/pvc-728b13d6-9a46-467f-a35f-782abd07f377': exit status 1, requeuing

What is happening here is that, since a clone volume is present, the destroy fails because of that clone. The volume mgmt keeps retrying the delete and keeps failing with the same error until the clone volume is deleted, which floods the log with unnecessary error messages.
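One way to quiet this loop would be to check the snapshot's `clones` property before attempting the destroy, and treat a still-present clone as a wait condition rather than an error. A minimal sketch follows; this is not the project's actual code, the helper names are hypothetical, and it assumes the `zfs` CLI is available on the node:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasDependentClones reports whether the snapshot still has clones, using the
// standard `zfs get clones` property (a value of "-" means no clones).
func hasDependentClones(snapshot string) (bool, error) {
	out, err := exec.Command("zfs", "get", "-H", "-o", "value", "clones", snapshot).Output()
	if err != nil {
		return false, err
	}
	val := strings.TrimSpace(string(out))
	return val != "" && val != "-", nil
}

// destroySnapshotIfUnused destroys the snapshot only when nothing depends on
// it; otherwise it signals "retry later" without treating this as an error,
// so the caller can requeue quietly instead of logging on every sync.
func destroySnapshotIfUnused(snapshot string) (retryLater bool, err error) {
	busy, err := hasDependentClones(snapshot)
	if err != nil {
		return false, err
	}
	if busy {
		return true, nil // dependent clone still present: wait, do not log an error
	}
	return false, exec.Command("zfs", "destroy", snapshot).Run()
}

func main() {
	retry, err := destroySnapshotIfUnused("zfspv-pool/pvc-src@pvc-clone-snap")
	fmt.Println(retry, err)
}
```

With a check like this, the controller could requeue the volume without emitting an error line on every reconcile while the clone exists.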

@pawanpraka1 pawanpraka1 added this to the v1.1.0 milestone Nov 11, 2020
@pawanpraka1 pawanpraka1 modified the milestones: v1.1.0, v1.2.0 Nov 23, 2020
@pawanpraka1 pawanpraka1 removed this from the v1.2.0 milestone Dec 14, 2020
@Abhinandan-Purkait Abhinandan-Purkait added the backlog, refactoring, and enhancement labels and removed the openforce and enhancement labels Jun 6, 2024
@avishnu (Member) commented Sep 24, 2024

Should get resolved with PR #350, which prevents volume deletion if a snapshot is present.
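For illustration only (this is not the actual change in #350; the function names are hypothetical and it shells out to the `zfs` CLI), such a guard could look roughly like:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listSnapshots returns the snapshots directly under the given dataset,
// using `zfs list -t snapshot` limited to depth 1.
func listSnapshots(dataset string) ([]string, error) {
	out, err := exec.Command("zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-d", "1", dataset).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

// canDeleteVolume reports whether the dataset has no snapshots and is
// therefore safe to destroy, so the delete never reaches the
// "snapshot has dependent clones" failure loop.
func canDeleteVolume(dataset string) (bool, error) {
	snaps, err := listSnapshots(dataset)
	if err != nil {
		return false, err
	}
	return len(snaps) == 0, nil
}

func main() {
	ok, err := canDeleteVolume("zfspv-pool/pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031")
	fmt.Println(ok, err)
}
```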

@avishnu avishnu added this to the v4.2 milestone Sep 24, 2024