This repository has been archived by the owner on Mar 26, 2020. It is now read-only.

parallel volume deletion requests fail with device or resource busy for some of the PVs #1479

Open
atinmu opened this issue Jan 14, 2019 · 1 comment
Labels: brick-multiplexing-issue (tracker label to capture all issues related to the brick multiplexing feature), bug, priority: high

Comments

atinmu commented Jan 14, 2019

On a GCS deployment (with brick multiplexing enabled), when PV deletion requests are sent in parallel, some of the volume delete requests fail with a 'device or resource busy' error, which means the LV unmount failed because the underlying brick might still be holding a reference to the mount. Surprisingly, though, running ls on the brick path returned a 'no such file or directory' error.
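
One way to cross-check whether some process (for example the multiplexed brick process) still has files open under the LV mount is to walk /proc and look for fds resolving under the mount point. A minimal sketch only; the mount path is a placeholder to be replaced with the brick LV mount that fails to unmount:

```python
#!/usr/bin/env python3
"""Sketch: list processes holding open files under a mount point.

The mount path below is a placeholder, not taken from the report;
substitute the brick LV mount that fails with EBUSY.
"""
import os

MOUNT = "/path/to/brick/mount"  # placeholder path

def holders(mount):
    """Return (pid, path) pairs for fds that resolve under `mount`."""
    found = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            fds = os.listdir(fd_dir)
        except (PermissionError, FileNotFoundError):
            continue  # process exited or not readable without privileges
        for fd in fds:
            try:
                target = os.readlink(os.path.join(fd_dir, fd))
            except OSError:
                continue
            if target.startswith(mount):
                found.append((pid, target))
                break
    return found

if __name__ == "__main__":
    for pid, path in holders(MOUNT):
        print(f"pid {pid} holds {path}")
```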

Unfortunately I couldn't persist the logs from when this issue was seen, but it is fairly reproducible. All we need to do is scale to 100 PVCs (with smart volumes) on a GD2 cluster and start sending parallel volume stop/delete requests. On a GCS deployment, sending parallel PV delete requests from kube reproduces the same problem.
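
A minimal sketch of the parallel-delete step, assuming the PVCs are named pvc-1 through pvc-100 in a gcs namespace (both names are illustrative, not from the report); it simply fires kubectl delete pvc calls concurrently so the requests overlap on the server side:

```python
#!/usr/bin/env python3
"""Sketch: send parallel PVC delete requests to a GCS/kube cluster.

PVC names and namespace are assumptions for illustration; adjust them
to match the PVCs created during the scale step.
"""
import subprocess
from concurrent.futures import ThreadPoolExecutor

NAMESPACE = "gcs"                            # assumed namespace
PVCS = [f"pvc-{i}" for i in range(1, 101)]   # assumed PVC names

def delete_pvc(name):
    # --wait=false returns immediately so the delete requests overlap
    return subprocess.run(
        ["kubectl", "-n", NAMESPACE, "delete", "pvc", name, "--wait=false"],
        capture_output=True, text=True,
    )

with ThreadPoolExecutor(max_workers=20) as pool:
    for result in pool.map(delete_pvc, PVCS):
        pvc_name = result.args[-2]
        print(pvc_name, result.returncode, result.stderr.strip() or "ok")
```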

atinmu added the bug, priority: high, GCS/1.0 (issue is blocker for Gluster for Container Storage), and brick-multiplexing-issue labels on Jan 14, 2019
atinmu commented Jan 17, 2019

Taking this out of the GCS/1.0 tag, considering we're not going to make brick multiplexing a default option in the GCS/1.0 release.

atinmu removed the GCS/1.0 (issue is blocker for Gluster for Container Storage) label on Jan 17, 2019