Persistent Volumes permission issue for 2+ node clusters #12360
Comments
/kind support |
Hi @albert-salman, thanks for reporting your issue with minikube! To help me debug the issue you're experiencing, could you please provide me with a list of instructions to follow to allow me to reproduce this issue? Thanks! |
Hi @spowelljr
I have a suspicion that the PVC is not correctly mapped on the node that starts the initContainer's temporary container. I am using the bitnami/elasticstack file master-statefulset.yaml.
|
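For context, the initContainer pattern under discussion (an init container that chowns the volume before the main container starts) usually looks roughly like the sketch below; the container name, image, IDs and paths here are illustrative and are not copied from the bitnami chart:

initContainers:
  - name: volume-permissions            # illustrative name
    image: busybox:1.36                 # any small image that has chown
    command:
      - sh
      - -c
      - chown -R 1001:1001 /bitnami/elasticsearch/data   # fix ownership before the main container starts
    securityContext:
      runAsUser: 0                      # must run as root to be allowed to chown
    volumeMounts:
      - name: data                      # assumed to match the volumeClaimTemplates entry
        mountPath: /bitnami/elasticsearch/data

On a single-node cluster this step is usually redundant, because minikube's provisioner creates the backing directory world-writable (777), as shown later in this thread.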
I think my problem is related. On a 2-node cluster, the spec.template.spec.securityContext.fsGroup=65534 setting is not respected and the volume's group ownership is not changed; it remains root:root.
How to reproduce:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: ubuntu
  name: ubuntu
spec:
  replicas: 1
  serviceName: ubuntu-headless
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
      containers:
        - image: ubuntu:jammy-20211122
          name: ubuntu
          command: ["/bin/sleep", "365d"]
          volumeMounts:
            - mountPath: /d01
              name: bigdisk
  volumeClaimTemplates:
    - metadata:
        name: bigdisk
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5G
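To see whether fsGroup actually took effect, one can exec into the pod once it is running and inspect the mount; a minimal check, assuming the StatefulSet above was applied unchanged:

$ kubectl exec ubuntu-0 -- ls -ld /d01
$ kubectl exec ubuntu-0 -- touch /d01/write-test   # fails with "Permission denied" if the group was not applied and the directory is not world-writable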
Minikube version: … Minikube was started as a 2-node cluster. |
@albert-salman Do you see the problem if you use the fsGroup security context instead of initContainers? I do. BTW I think using initContainers just for setting permissions is a bit of overkill. |
On a single-node minikube cluster, the setting of spec.template.spec.securityContext.fsGroup=65534 results in these volume permissions:
nobody@ubuntu-0:/$ ls -ald /d01
drwxrwxrwx 2 root root 6 Dec 6 02:25 /d01
nobody@ubuntu-0:/$
Expected: root:nobody. @albert-salman can you confirm this? |
@albert-salman ping! |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale |
/remove-lifecycle stale |
Hi all, I have a similar issue with minikube and I think it is related to the hostpath provisioner.
$ minikube start --nodes=4 --cpus=3
$ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"archive", BuildDate:"2021-07-22T00:00:00Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
I use a security context for the pod and a dynamic PVC claim:
securityContext:
  runAsNonRoot: true
  runAsUser: 1001
Same issue when accessing the volume: access denied. Checking different docs, PRs and code, I found out that the hostpath provisioner doesn't actually support multi-node; minikube runs its standard storage-provisioner controller, enabled as a default addon.
And it indeed provisions the host path on the master node:
$ minikube ssh -n minikube
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ls -la /tmp/hostpath-provisioner/
total 8
drwxr-xr-x 3 root root 4096 May 11 06:20 .
drwxrwxrwt 13 root root 300 May 11 06:18 ..
drwxr-xr-x 5 root root 4096 May 11 06:24 default
$ ls -la /tmp/hostpath-provisioner/default/
total 20
drwxr-xr-x 5 root root 4096 May 11 06:24 .
drwxr-xr-x 3 root root 4096 May 11 06:20 ..
drwxrwxrwx 2 root root 4096 May 11 06:24 mongod-data-minikube-cfg-0
drwxrwxrwx 2 root root 4096 May 11 06:24 mongod-data-minikube-rs0-0
drwxrwxrwx 2 root root 4096 May 11 06:20 pmm-storage-pmm-0 but on other nodes it provisions with $ minikube ssh -n minikube-m03
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ls -la /tmp/hostpath-provisioner/default/
total 12
drwxr-xr-x 3 root root 4096 May 11 06:25 .
drwxr-xr-x 3 root root 4096 May 11 06:25 ..
drwxr-xr-x 2 root root 4096 May 11 06:25 mongod-data-minikube-rs0-0 And thus as all other nodes has Looks like a bug in provisioner code. |
So my theory on what happens: the provisioner runs only on the master node and responds to Kubernetes with a correct PV for the PVC, but it creates the backing directory only locally on the master node. When the scheduler places the pod on some other node, kubelet finds the PV and attaches its host path to the container running under Docker, and Docker creates the missing path itself with default root:root ownership, just like a bind mount to a non-existent directory:
$ ls -la /tmp
total 16
drwxrwxrwt 13 root root 300 May 11 06:19 .
drwxr-xr-x 18 root root 480 May 11 06:18 ..
drwxrwxrwt 2 root root 40 May 11 06:18 .ICE-unix
drwxrwxrwt 2 root root 40 May 11 06:18 .Test-unix
drwxrwxrwt 2 root root 40 May 11 06:18 .X11-unix
drwxrwxrwt 2 root root 40 May 11 06:18 .XIM-unix
drwxrwxrwt 2 root root 40 May 11 06:18 .font-unix
drwxr-xr-x 2 root root 40 May 11 06:18 gvisor
-rw-r--r-- 1 docker docker 79 May 11 06:18 h.2529
-rw-r--r-- 1 docker docker 126 May 11 06:18 h.2574
drwxr-xr-x 3 root root 4096 May 11 06:25 hostpath-provisioner
drwxr-xr-x 2 root root 4096 May 11 06:18 hostpath_pv
drwx------ 3 root root 60 May 11 06:18 systemd-private-4b03b55590774907b3007965a5da2612-systemd-logind.service-SctAZg
drwx------ 3 root root 60 May 11 06:18 systemd-private-4b03b55590774907b3007965a5da2612-systemd-resolved.service-bjwybg
drwx------ 3 root root 60 May 11 06:18 systemd-private-4b03b55590774907b3007965a5da2612-systemd-timesyncd.service-XtzY7f
$ docker run -it --rm -v /tmp/new/dir:/tmp/dir busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
50e8d59317eb: Pull complete
Digest: sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
Status: Downloaded newer image for busybox:latest
/ # ls -la /tmp/
total 8
drwxrwxrwt 1 root root 4096 May 11 08:45 .
drwxr-xr-x 1 root root 4096 May 11 08:45 ..
drwxr-xr-x 2 root root 40 May 11 08:45 dir
/ # ls -la /tmp/dir/
total 4
drwxr-xr-x 2 root root 40 May 11 08:45 .
drwxrwxrwt 1 root root 4096 May 11 08:45 ..
/ # stat /tmp/dir
File: /tmp/dir
Size: 40 Blocks: 0 IO Block: 4096 directory
Device: 24h/36d Inode: 151276 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-05-11 08:45:50.933769106 +0000
Modify: 2022-05-11 08:45:42.786726156 +0000
Change: 2022-05-11 08:45:42.786726156 +0000
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
/ # exit
$ ls -la /tmp/new/dir/
total 0
drwxr-xr-x 2 root root 40 May 11 08:45 .
drwxr-xr-x 3 root root 60 May 11 08:45 ..
$ stat /tmp/new/dir
File: /tmp/new/dir
Size: 40 Blocks: 0 IO Block: 4096 directory
Device: 24h/36d Inode: 151276 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-05-11 08:46:12.110882020 +0000
Modify: 2022-05-11 08:45:42.786726156 +0000
Change: 2022-05-11 08:45:42.786726156 +0000
Birth: - |
That theory is confirmed by a couple of facts observed when starting a pod that then fails with access denied. So I think the current hostpath provisioner doesn't account for nodes somehow: SelectedNode is not used in Provision. |
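For reference, a node-aware hostpath provisioner that does use SelectedNode typically pins the PV it creates to that node via PV node affinity, roughly like the sketch below (illustrative names and values, not output from minikube's provisioner):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example                         # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  hostPath:
    path: /tmp/hostpath-provisioner/default/pvc-example
  nodeAffinity:                             # keeps pods using this PV on the node that holds the directory
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube-m03              # the node passed to Provision as SelectedNode

Without this, the scheduler is free to place the pod on any node, while the directory exists only where the provisioner created it.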
/remove-lifecycle stale |
/triage discuss |
So it looks like the hostpath provisioner in minikube doesn't support multi-node at all and can provision only on the master node. There are other host-path provisioners that support multi-node.
I tried the legacy kubevirt hostpath provisioner and it works for me:
$ minikube start --nodes=4 --cpus=3
😄 minikube v1.24.0 on Fedora 35
...
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ minikube addons disable storage-provisioner
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌑 "The 'storage-provisioner' addon is disabled
$ kubectl delete storageclass standard
storageclass.storage.k8s.io "standard" deleted
$ kubectl apply -f kubevirt-hostpath-provisioner.yaml
storageclass.storage.k8s.io/standard created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-hostpath-provisioner created
clusterrole.rbac.authorization.k8s.io/kubevirt-hostpath-provisioner created
serviceaccount/kubevirt-hostpath-provisioner-admin created
daemonset.apps/kubevirt-hostpath-provisioner created
Here is kubevirt-hostpath-provisioner.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubevirt.io/hostpath-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: kubevirt-hostpath-provisioner-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kubevirt-hostpath-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubevirt-hostpath-provisioner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubevirt-hostpath-provisioner-admin
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubevirt-hostpath-provisioner
  labels:
    k8s-app: kubevirt-hostpath-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kubevirt-hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: kubevirt-hostpath-provisioner
    spec:
      serviceAccountName: kubevirt-hostpath-provisioner-admin
      containers:
        - name: kubevirt-hostpath-provisioner
          image: quay.io/kubevirt/hostpath-provisioner
          imagePullPolicy: Always
          env:
            - name: USE_NAMING_PREFIX
              value: "false" # change to true, to have the name of the pvc be part of the directory
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /tmp/hostpath-provisioner
          volumeMounts:
            - name: pv-volume # root dir where your bind mounts will be on the node
              mountPath: /tmp/hostpath-provisioner/
      #nodeSelector:
      #  name: xxxxxx
      volumes:
        - name: pv-volume
          hostPath:
            path: /tmp/hostpath-provisioner/
|
$ kubectl get pods -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system etcd-minikube 1/1 Running 0 9m45s 192.168.39.100 minikube <none> <none>
kube-system kindnet-7kzq4 1/1 Running 0 9m33s 192.168.39.100 minikube <none> <none>
kube-system kindnet-8kpws 1/1 Running 0 8m26s 192.168.39.235 minikube-m04 <none> <none>
kube-system kindnet-nvczc 1/1 Running 0 8m54s 192.168.39.177 minikube-m03 <none> <none>
kube-system kindnet-tnhh6 1/1 Running 0 9m22s 192.168.39.33 minikube-m02 <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 9m45s 192.168.39.100 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 0 9m45s 192.168.39.100 minikube <none> <none>
kube-system kube-proxy-5hbth 1/1 Running 0 8m26s 192.168.39.235 minikube-m04 <none> <none>
kube-system kube-proxy-gfhz9 1/1 Running 0 9m22s 192.168.39.33 minikube-m02 <none> <none>
kube-system kube-proxy-m7wmm 1/1 Running 0 8m54s 192.168.39.177 minikube-m03 <none> <none>
kube-system kube-proxy-r29lm 1/1 Running 0 9m33s 192.168.39.100 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 0 9m47s 192.168.39.100 minikube <none> <none>
kube-system kubevirt-hostpath-provisioner-5lqbc 1/1 Running 0 7m20s 10.244.0.3 minikube <none> <none>
kube-system kubevirt-hostpath-provisioner-g5z7k 1/1 Running 0 7m20s 10.244.1.2 minikube-m02 <none> <none>
kube-system kubevirt-hostpath-provisioner-gxgpm 1/1 Running 0 7m20s 10.244.2.2 minikube-m03 <none> <none>
kube-system kubevirt-hostpath-provisioner-s6nwx 1/1 Running 0 7m20s 10.244.3.2 minikube-m04 <none> <none>
A deployment with a different securityContext now works for me as well, and the directories are created on the right nodes with 777 permissions. |
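To double-check that each node now gets its own directories, one can loop over the nodes (a small sketch reusing the paths above; node names assume the 4-node cluster from this comment):

$ for n in minikube minikube-m02 minikube-m03 minikube-m04; do echo "== $n =="; minikube ssh -n "$n" "ls -la /tmp/hostpath-provisioner/"; done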
@denisok Thanks for the links to the alternative hostpath provisioners. |
https://github.com/kubernetes-csi/csi-driver-host-path could be a good solution option, as it is already available in the addons. But it also has the bug with permissions. I tried it.
I think there might be a bug in the csi-hostpath-driver addon as well, and the way to go might be to update it or the documentation. I can also try to fix it myself if someone confirms that this is the correct path and what the approach should be: documenting it, automatically detecting which addons to enable for multi-node, or maybe there is some reason not to have that by default. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale |
/remove-lifecycle stale |
@denisok thank you for the detailed debugging and investigation, nice work here. minikube comes with a very basic in-house storage provisioner and I wouldn't be surprised if it doesn't cover corner cases or multi-node. If it helps, here is the link to the code in these two files.
I would love it if you or anyone made a PR to either add the functionality to cover this use case or add a tutorial on our website on how to use it with this workaround. Also adding @presztak. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten |
Fix:
minikube addons disable storage-provisioner
minikube addons disable default-storageclass
minikube addons enable volumesnapshots
minikube addons enable csi-hostpath-driver
kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
As suggested here: #15829 (review). It would be good to have this automatic for multi-node. |
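After running these commands, it is worth confirming that the CSI hostpath class is now the default and that new claims bind to it (ordinary kubectl checks, not from the original comment):

$ kubectl get storageclass        # csi-hostpath-sc should be marked (default)
$ kubectl get pvc -A              # the STORAGECLASS column of new claims should show csi-hostpath-sc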
The csi-hostpath-driver workaround above doesn't work for me. Folders are created with 750 permissions and as such only privileged containers can access them. |
@denisok I confirm your fix resolved the issues I was seeing; before the fix, neither worked. Many thanks. I agree it should be automatic for multi-node. |
Hello,
If the minikube cluster has 2+ nodes, initContainers cannot set volume permissions inside PVCs and pods fail to access the data there. Single-node clusters do not have this issue.