remount old mount point when csi plugin unexpect exit #282

Merged · 10 commits merged into ceph:csi-v1.0 · Apr 2, 2019

Conversation

huaizong

Issues: #217, #91

Goal

We try to solve the following problem: when the CSI plugin exits unexpectedly, a pod using a CephFS PV cannot recover automatically, because the mount relation is lost until the pod is killed and rescheduled to another node. We think this is a problem the CSI plugin can do more about: if the plugin remounts the old paths on restart, a pod that exits and restarts can keep using its old mount path and recover automatically.

Non-goal

A pod should still exit and restart when the CSI plugin pod exits and the mount point is lost; if the pod does not exit, it will keep getting "transport endpoint is not connected" errors.

Implementation logic

csi-plugin start (a sketch of this recovery pass follows the list):

1. Load all MountCacheEntry records from the node-local cache dir.
2. Check whether the volID still exists in the cluster; if not, ignore this entry; if yes, continue.
3. Check whether the stagingPath exists; if yes, remount the volume there.
4. Check whether each targetPath exists; if yes, bind-mount it back onto the staging path.
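
A minimal sketch of that recovery pass in Go (the MountCacheEntry fields and the volumeExists / mountVolume / bindMount helpers are illustrative stand-ins for the driver's real logic, not the exact identifiers in this PR):

```go
package cephfs

import "os"

// MountCacheEntry is an illustrative shape for one persisted cache record.
type MountCacheEntry struct {
	VolID       string
	StagingPath string
	TargetPaths []string          // pod bind-mount paths
	ReadOnly    bool
	Secrets     map[string]string // ceph secret needed to remount
}

// Stand-ins for real driver logic (assumptions, not actual ceph-csi functions):
func volumeExists(volID string) bool           { return true } // would query the Ceph cluster
func mountVolume(e MountCacheEntry) error      { return nil }  // ceph-fuse / kernel CephFS mount
func bindMount(src, dst string, ro bool) error { return nil }  // mount --bind

func pathExists(p string) bool {
	_, err := os.Stat(p)
	return err == nil
}

// remountCachedVolumes replays the persisted mount cache after the plugin
// restarts, following steps 1-4 above.
func remountCachedVolumes(entries []MountCacheEntry) {
	for _, e := range entries {
		if !volumeExists(e.VolID) {
			continue // volume was deleted while the plugin was down; skip entry
		}
		if pathExists(e.StagingPath) {
			if err := mountVolume(e); err != nil {
				continue // keep the entry and let a later pass retry
			}
		}
		for _, target := range e.TargetPaths {
			if pathExists(target) {
				_ = bindMount(e.StagingPath, target, e.ReadOnly)
			}
		}
	}
}
```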

NodeServer (a sketch of the persistence side follows the list):

1. NodeStageVolume: add a MountCacheEntry (including the readonly attribute and the Ceph secret) to the local dir.
2. NodePublishVolume: add the pod's bind-mount path to the MountCacheEntry and persist it to the local dir.
3. NodeUnpublishVolume: remove the pod's bind-mount path from the MountCacheEntry and persist it to the local dir.
4. NodeUnstageVolume: remove the MountCacheEntry from the local dir.
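
A sketch of the persistence side, assuming one JSON file per volume ID under the cache dir (the file layout is an assumption; MountCacheEntry is the type from the sketch above):

```go
package cephfs

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// persistEntry writes (or overwrites) the cache record for one volume. It
// would be called from NodeStageVolume, and again from NodePublishVolume /
// NodeUnpublishVolume whenever the target-path list changes.
func persistEntry(cacheDir string, e MountCacheEntry) error {
	data, err := json.Marshal(e)
	if err != nil {
		return err
	}
	// Write to a temp file and rename, so a plugin crash mid-write cannot
	// leave a corrupt record behind.
	tmp := filepath.Join(cacheDir, e.VolID+".json.tmp")
	if err := os.WriteFile(tmp, data, 0600); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(cacheDir, e.VolID+".json"))
}

// removeEntry deletes the record; it would be called from NodeUnstageVolume.
func removeEntry(cacheDir, volID string) error {
	return os.Remove(filepath.Join(cacheDir, volID+".json"))
}
```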

@huaizong (Author)

@rootfs @gman0

@humblec (Collaborator) commented Mar 26, 2019

@huaizong First of all, thanks for the PR. One question I have here is on the below:

> 3. Check whether the stagingPath exists; if yes, remount the volume there.
> 4. Check whether each targetPath exists; if yes, bind-mount it back onto the staging path.

To make sure I understand correctly: at the time of a plugin restart we are reconstructing the existing mount with a minimal operation, which is a "remount". Is just retrying it as a "remount" good enough, or do we need to remount it with an explicit rw option? One other thought: is there a chance that we remount a staging volume without an active pod on the node?

@huaizong (Author)

> First of all, thanks for the PR. One question I have here is on the below:
>
> 3. Check whether the stagingPath exists; if yes, remount the volume there.
> 4. Check whether each targetPath exists; if yes, bind-mount it back onto the staging path.
>
> To make sure I understand correctly: at the time of a plugin restart we are reconstructing the existing mount with a minimal operation, which is a "remount".

Yes. The CSI plugin may restart after an unexpected exit and then needs to remount the existing mount paths; otherwise pods using the PV lose their connection to the mount path.

> Is just retrying it as a "remount" good enough, or do we need to remount it with an explicit rw option?

The staging path is simply remounted; each pod targetPath is then bind-mounted with the rw option (see the sketch below).
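
For illustration, a bind mount followed by an explicit ro/rw remount looks roughly like this at the syscall level (a sketch using golang.org/x/sys/unix, not the driver's actual mount helper):

```go
package cephfs

import "golang.org/x/sys/unix"

// bindMountTarget bind-mounts the staged volume into the pod's target path,
// then remounts the bind to apply the read-only / read-write mode explicitly,
// since a plain bind mount initially inherits the source's flags.
func bindMountTarget(stagingPath, targetPath string, readOnly bool) error {
	if err := unix.Mount(stagingPath, targetPath, "", unix.MS_BIND, ""); err != nil {
		return err
	}
	flags := uintptr(unix.MS_BIND | unix.MS_REMOUNT)
	if readOnly {
		flags |= unix.MS_RDONLY
	}
	return unix.Mount("", targetPath, "", flags, "")
}
```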

> One other thought: is there a chance that we remount a staging volume without an active pod on the node?

Kubernetes calls NodeUnstageVolume when there is no active pod on the node, and that cleans the mount cache entry, so after a restart the plugin will not remount a staging path that has no active pod on the node.

@gman0 (Contributor) commented Mar 26, 2019

@huaizong the idea itself is fine, but there are issues with the implementation that need to be fixed before we can get this merged.

@gman0 (Contributor) commented Mar 26, 2019

Another thing - just putting it out there, it can be done after the code reviews - the deployment manifests need to be updated to mount the emptyDir by default, as well as the documentation, which should mention the possibility of caching mount info.

2. Support a user-defined cache dir.
3. If mountcachedir is not defined, disable the mount cache.
@huaizong (Author)

> Another thing - just putting it out there, it can be done after the code reviews - the deployment manifests need to be updated to mount the emptyDir by default, as well as the documentation, which should mention the possibility of caching mount info.

This PR now supports a user-defined dir for saving mount cache info, and the deployment manifests have been updated to mount the emptyDir by default. (A sketch of the option follows.)
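
Sketched as a command-line option (the flag name and wiring are assumptions about how the cache dir is exposed, not the PR's exact code):

```go
package main

import "flag"

// If --mountcachedir is left empty, the mount cache is disabled entirely:
// nothing is persisted and no remount pass runs on startup.
var mountCacheDir = flag.String("mountcachedir", "",
	"mount info cache save dir (empty disables the mount cache)")

func main() {
	flag.Parse()
	if *mountCacheDir != "" {
		// Start the driver with the mount cache enabled. In the deployment
		// manifests this dir is typically backed by an emptyDir volume
		// mounted into the plugin container.
	}
	// ... rest of driver startup
}
```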

@huaizong (Author)

@humblec @gman0 PTAL

@gman0 (Contributor) commented Mar 27, 2019

@huaizong my apologies, I'm busy today... will continue with the review tomorrow.

@huaizong (Author)

@gman0 @Madhu-1 @humblec

This PR is useful for scenarios that use ceph-fuse; there is no obvious benefit for kernel-mount scenarios. So the question is whether it is worth adding this complexity to support the ceph-fuse scenario.

@humblec (Collaborator) commented Apr 1, 2019

@huaizong indeed this is useful. I have a couple of quick comments. PTAL.

@humblec (Collaborator) commented Apr 2, 2019

@huaizong Thanks!! LGTM.

@gman0 can you take a final look at this PR? This is indeed good to have!

@gman0 (Contributor) left a comment

LGTM, thanks @huaizong for the work, and thank you guys for helping with the review.

@gman0 gman0 merged commit d0d5da8 into ceph:csi-v1.0 Apr 2, 2019
wilmardo pushed a commit to wilmardo/ceph-csi that referenced this pull request Jul 29, 2019
…n-exit-v2

remount old mount point when csi plugin unexpect exit
@jianglingxia

Hi all,
Has the "Transport endpoint is not connected" problem been resolved?
I am using ceph-csi driver v1.2.0 with Kubernetes v1.13. When a pod uses a CephFS PVC and the cephfsplugin container of the daemonset is stopped, running df -h in the pod's container fails with: Transport endpoint is not connected.
I have to restart the pod to resolve the problem. Is there a method that resolves it without restarting the pod? Thanks, all.

df -h

df: `/mnt': Transport endpoint is not connected
Filesystem Size Used Avail Use% Mounted on
rootfs 745G 50G 695G 7% /
overlay 745G 50G 695G 7% /
tmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /sys/fs/cgroup

@Madhu-1 (Collaborator) commented Jan 7, 2020

If you are using the cephfs fuse client, this issue is present.

@Madhu-1 (Collaborator) commented Jan 7, 2020

@jianglingxia why have you stopped the daemonset pods? They should be running as long as the storage is needed.

@jianglingxia

I tested the power-down / power-on scenario: the DaemonSet and StatefulSet restart, so the problem can occur. Another scenario is upgrading the ceph-csi driver: application pods using CephFS via ceph-fuse will also hit the problem.

> If you are using the cephfs fuse client, this issue is present.

Can the problem be resolved now, or does the community plan to resolve it?

@Madhu-1 (Collaborator) commented Jan 7, 2020

> I tested the power-down / power-on scenario: the DaemonSet and StatefulSet restart, so the problem can occur. Another scenario is upgrading the ceph-csi driver: application pods using CephFS via ceph-fuse will also hit the problem.

We recommend you use the cephfs kernel client; if you use the cephfs fuse client, you need to drain the node to use the PVC again.

The upgrade is covered, I think, in doc #770.

> Can the problem be resolved now, or does the community plan to resolve it?

Yes, we will fix this issue, but there is no ETA yet.

@jianglingxia

The daemonset was not upgraded, but the csi-cephfsplugin container was restarted, like this:

ae33cf5ab2d2 ed6f186ec08a "/usr/local/bin/cephc" 21 hours ago Up 21 hours k8s_csi-cephfsplugin_ceph1-csi-cephfsplugin-czl9r_default_3fd90197-32d4-11ea-9d2e-744aa4028226_0

[root@paas-controller-172-20-0-3:/home/ubuntu]$ docker restart ae33cf5ab2d2
ae33cf5ab2d2

The mount path on the minion is:
ceph-fuse 1.0G 0 1.0G 0% /paasdata/docker/plugins/kubernetes.io/csi/pv/pvc-8bdd24b9-3383-11ea-8500-744aa4028242/globalmount

The app nginx11-1-dp6lh uses the CephFS volume pvc-8bdd24b9-3383-11ea-8500-744aa4028242:
nginx11-1-dp6lh 1/1 Running 0 9m2s 100.100.0.9 172.20.0.3

But inside the container of the pod named nginx11-1-dp6lh, the error is:
df -h
df: `/test': Transport endpoint is not connected
Filesystem Size Used Avail Use% Mounted on
rootfs 745G 50G 695G 7% /
overlay 745G 50G 695G 7% /
tmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/mapper/ncl-paasdata 745G 50G 695G 7% /dev/termination-log
/dev/mapper/ncl-paasdata 745G 50G 695G 7% /etc/resolv.conf
/dev/mapper/ncl-paasdata 745G 50G 695G 7% /etc/hostname

ls -ih /

ls: cannot access /test: Transport endpoint is not connected

Why can the pod's /test mount path no longer be read or written, and why must the app container be restarted? Thanks for your reply!

@jianglingxia

@Madhu-1
If this problem needs to be fixed, does it require changes to the CSI driver or to the Kubernetes code, and how should that be done? Thanks.
The problem may leave some apps not running.

@Madhu-1 (Collaborator) commented Jan 14, 2020

> The daemonset was not upgraded, but the csi-cephfsplugin container was restarted [...] why must the app container be restarted? Thanks for your reply!

@jianglingxia please open a separate issue with reproducer steps

Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this pull request Feb 10, 2020
PR ceph#282 introduces the mount cache to solve the cephfs fuse mount issue when the cephfs plugin pod restarts. This is not working as intended. This PR removes the code for maintainability.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
mergify bot pushed a commit that referenced this pull request Feb 11, 2020
PR #282 introduces the mount cache to solve the cephfs fuse mount issue when the cephfs plugin pod restarts. This is not working as intended. This PR removes the code for maintainability.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/ceph-csi that referenced this pull request Apr 15, 2024
Sync 'odf/devel' into ds/sync/release-4.16