Pod using a CephFS volume has two mount points after minion reboot #778
Comments
@Madhu-1
@jianglingxia please add steps to reproduce this issue
1) Reboot the minion.
2) The pod was on minion 172.20.0.2, then rescheduled to 172.20.0.4:
172.20.0.2 NotReady 71d v1.13.6
nginx4-1-trrdw 0/1 Terminating 0 18h 172.20.0.2
3) The mount path on the minion: minion 172.20.0.2 no longer has the pod, but because the minion rebooted, the CSI plugin restarted, and the driver's mount-cache function may have remounted the volume, so the minion has an extra mount path. Log file created at: 2020/02/22 15:32:48
Yes, #282 is buggy and was removed in the master branch.
The mount-cache will automatically remount the path when the CSI container restarts, and unmount it if the Kubernetes API server has deleted the pod on that node in the meantime.
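The remount-then-cleanup idea described above can be sketched in shell. This is an assumed illustration of the logic, not ceph-csi's actual implementation; the PVC names, paths, and the `live_pvcs` list are all made-up examples.

```shell
# Sketch (assumed logic, not ceph-csi's code): on plugin restart, walk the
# cached mount entries, remount volumes whose pods still exist, and unmount
# those whose pods the API server deleted while the plugin was down.
cached_entries="pvc-1 /var/lib/kubelet/pvc-1/mount
pvc-2 /var/lib/kubelet/pvc-2/mount"
live_pvcs="pvc-2"          # volumes still referenced by live pods (hypothetical)
actions=""
while read -r pvc target; do
  if printf '%s\n' $live_pvcs | grep -qx "$pvc"; then
    actions="$actions remount:$pvc"    # pod still exists: keep the remount
  else
    actions="$actions umount:$pvc"     # pod deleted: clean up the stale mount
  fi
done <<EOF
$cached_entries
EOF
echo "$actions"
```

The bug discussed in this issue is the failure mode of the second branch: if the cleanup never runs, the remounted path survives on the old node as a stale mount point.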
@huaizong this shouldn't be an issue anymore with the latest versions. Closing this for now. Please feel free to reopen if required.
Describe the bug
The application pod was first on minion 172.20.0.3, using a CephFS volume, with its mount point on 172.20.0.3.
Then I rebooted minion 172.20.0.3, and the pod was automatically rescheduled.
The new mount point is on 172.20.0.2,
but the original mount point on 172.20.0.3 still exists. Maybe it needs to be unmounted?
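One way to confirm the stale mount described above is to list CephFS entries in the node's mount table. The snippet below parses a made-up `/proc/mounts` sample so it is self-contained; on the rebooted minion you would read `/proc/mounts` itself (the server address and paths here are hypothetical).

```shell
# Parse a sample mount table and print the target paths of "ceph" mounts.
# On a real node: awk '$3 == "ceph" { print $2 }' /proc/mounts
sample_mounts='10.0.0.1:6789:/vol /var/lib/kubelet/pvc-1/mount ceph rw 0 0
/dev/sda1 / ext4 rw 0 0'
ceph_mounts=$(printf '%s\n' "$sample_mounts" | awk '$3 == "ceph" { print $2 }')
echo "$ceph_mounts"
```

Any path printed on the old minion after the pod has moved is a candidate stale mount that `umount` should remove.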
Environment details