CIFS Mount not performed despite logs indicating "mount succeeded" #443
Comments
So in the end the volume is bound to a plain directory on the agent node instead of an SMB mount? |
I enabled maximum-verbosity CIFS logging and compared it with a manual mount. The two are awfully similar, and no error is thrown. The dmesg for the CSI mount:
Except for the fact that a Samba session already exists when using a manual mount (I guess some kind of persistence): Manual mount
I wanted to check whether the Samba share was somehow being magically unmounted, so I cleared dmesg and ran the umount, and behold:
Which looks very much like the logs from the CSI mount (starting at 1060127.117858). (It's a bit hard to see on GitHub, but I have them side by side in VSCode.) Here's
(/dst is my NFS share.) So now I have to find out why my share is being unmounted... |
Sorry, I think I'm narrowing down my issue.
This indicates a host mount path that is not valid for microk8s. More on that tomorrow. |
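A quick way to confirm which root directory the node's kubelet actually uses (and therefore which paths the CSI node plugin must mount) is to inspect the kubelet command line. This is a sketch, not from the thread; the `kubelet_rootdir` helper is hypothetical, and on microk8s the expected result is the snap path rather than the default `/var/lib/kubelet`:

```shell
# Hypothetical helper: extract --root-dir from a kubelet command line.
# Prints nothing if the flag is absent (i.e. the default root dir is used).
kubelet_rootdir() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^--root-dir=//p'
}

# Feed it the running kubelet's args (empty if no kubelet process is found).
kubelet_rootdir "$(ps -o args= -C kubelet 2>/dev/null)"
```

If this prints `/var/snap/microk8s/common/var/lib/kubelet`, the CSI node DaemonSet's hostPath volumes need to use that prefix too.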
Got it to work by changing the mount points to match the microk8s paths:

```yaml
[...]
        - name: smb
          image: mcr.microsoft.com/k8s/csi/smb-csi:latest
          imagePullPolicy: IfNotPresent
          args:
            - "--v=5"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(KUBE_NODE_NAME)"
            - "--metrics-address=0.0.0.0:29645"
          ports:
            - containerPort: 29643
              name: healthz
              protocol: TCP
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 30
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /var/snap/microk8s/common/var/lib/kubelet/
              mountPropagation: Bidirectional
              name: mountpoint-dir
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 20Mi
      volumes:
        - hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet/plugins/smb.csi.k8s.io
            type: DirectoryOrCreate
          name: socket-dir
        - hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet/
            type: DirectoryOrCreate
          name: mountpoint-dir
        - hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet/plugins_registry/
            type: DirectoryOrCreate
          name: registration-dir
```

Perhaps a quick mention in the docs might be warranted? |
I had exactly the same problem and spent the whole day trying to figure it out. What's worse, as mentioned, is that there are no errors. If anyone is using Helm charts, this can be fixed by creating a custom values file with:

```yaml
linux:
  kubelet: /var/snap/microk8s/common/var/lib/kubelet
```

and then deploying with:

```shell
helm install --create-namespace --namespace smb-system csi-driver-smb csi-driver-smb/csi-driver-smb --version v1.7.0 -f ./csi-driver-smb-custom.yaml
```

Also worth mentioning that |
maybe someone could contribute a doc on https://microk8s.io/docs, similar to https://microk8s.io/docs/nfs |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
EDIT:
Jump to this comment to understand the issue.
Basically I'm running microk8s and the mount points proposed in csi-smb-node.yaml need to be adjusted.
What happened:
I'm trying to mount a Samba share as a PV/PVC.
The logs of csi-smb-node (container smb) indicate that everything should be OK:
However, the staging mount is not actually performed on the node:
(Note how the NFS volume mounted properly, for the same pod/container)
Also, mounting the share manually (with `sudo mount -t cifs`) using the options provided in the logs (plus credentials) works fine and performs the mount. The folder in the container is created but remains empty, and points to /dev/sdb2 instead of the correct mount. My NFS mount works fine, so I'd expect my config to be OK too.
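The gap described here (driver logs say the mount succeeded, but the node shows nothing) can be checked directly on the node. A diagnostic sketch, with no paths specific to this setup: `/proc/mounts` reflects the current mount namespace, so a mount performed in another namespace (for example inside a driver container whose target path is not shared with the host) will not appear in it.

```shell
# List CIFS mounts the kernel knows about in this mount namespace.
findmnt -t cifs || echo "no cifs mounts visible in this mount namespace"

# /proc/mounts is the ground truth for the current namespace; an entry
# missing here means the mount never propagated to this namespace.
grep cifs /proc/mounts || true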
What you expected to happen:
A mounted volume, or a failure thrown from here: https://github.com/kubernetes/mount-utils/blob/6e81bcc03fc8c22aa460c3c4bd32a7ad602abd6c/mount_linux.go#L144
How to reproduce it:
I'm assuming it's a problem with my system, given that the logs indicate a correct mount. But I just can't figure out why it's not actually mounting.
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`): 1.23
- Kernel (`uname -a`): 5.4.0-42-generic