
Statically provisioned PV/PVC don't set group ownership when fsGroup is defined #1365

Closed
jbehrends opened this issue Aug 25, 2022 · 10 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@jbehrends
Contributor

/kind bug

What happened?
When following the "static-provisioning" example/method of using the aws-ebs-csi-driver, setting securityContext.fsGroup in your pod or deployment does NOT configure group ownership of the EBS volume mount. Setting fsGroup when using the "dynamic-provisioning" example DOES set the group ownership of the EBS volume correctly. This is an issue because our deployments run with non-root UIDs/GIDs, and as a result the process running in the container can't write to the EBS volume mount, which is owned by root:root.
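For illustration, a minimal sketch of the kind of pod spec involved (names, image, and the GID value are illustrative, not taken from the examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:
    fsGroup: 1000                  # GID expected to own the volume mount and its files
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim       # PVC bound to the statically provisioned PV
```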

What you expected to happen?
Setting "securityContext.fsGroup" to a specific GID value should result in the volume mount and the files contained in the mount having the same GID applied to them.

How to reproduce it (as minimally and precisely as possible)?

  1. Create an EBS volume in the AWS console.
  2. Create a PV using the example provided here
  3. Create a PVC using the example provided here
  4. Use the pod example provided here, add "securityContext.fsGroup" with a value of 1000, then create the pod.

The resulting pod will have a volume mount with ownership root:root when it SHOULD be root:1000.
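For reference, the static-provisioning example boils down to a PV/PVC pair of roughly this shape (names, size, and volume ID are illustrative); note that nothing in it sets an fsType, which turns out to matter in the comments below:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # EBS volume created in step 1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: test-pv
```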

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version): EKS 1.21
  • Driver version: 1.11.2
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 25, 2022
@ConnorJC3
Contributor

Kubelet is responsible for managing fsGroup permissions (there is an optional feature in k8s 1.22+ for the driver to manage it, but we don't opt in). The driver does not interfere in any way or otherwise set permissions on volumes. This bug should be reported to k/k.

@joshbranham

@jbehrends were you able to confirm any behavior here? I am seeing the same thing, specifically when using a raw awsElasticBlockStore volume on a Deployment together with fsGroup. Previously this all worked fine with the in-tree plugin, but now with aws-ebs-csi-driver I am hitting permission issues.
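For context, a raw awsElasticBlockStore volume is declared inline in the pod template, roughly like this (volume ID illustrative):

```yaml
volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0
      fsType: ext4   # optional in this API; the in-tree docs say it is implied to be ext4 if unspecified
```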

@ConnorJC3
Contributor

@joshbranham @jbehrends did some further digging on this, and I don't think it's a kubelet bug but rather an intentional feature. The kubelet refuses to apply the fsGroup if the volume doesn't specify an fsType, and the static provisioning examples don't (see https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/csi/csi_mounter.go#L405-L436). If this is what is affecting you, you'll see the following in your kubelet log:

I0830 20:43:01.185770    7805 csi_mounter.go:407] kubernetes.io/csi: mounter.SetupAt WARNING: skipping fsGroup, fsType not provided

I tested the static provisioning example with spec.csi.fsType: ext4 set on the PV, and it applied the fsGroup as expected. It's probably best practice to set the fsType explicitly anyway, so perhaps the driver docs should be updated to include that in the static provisioning example.
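Concretely, that is a one-line addition to the example PV (volume ID illustrative; the rest mirrors the static-provisioning example):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com
    fsType: ext4                          # the added line; with it, kubelet applies fsGroup
    volumeHandle: vol-0123456789abcdef0
```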

@joshbranham

I can confirm setting fsType on the volume and fsGroup on the securityContext did fix my issue.

@jbehrends
Contributor Author

Sorry, hadn't gotten a chance to test this until now. I've also verified that setting "spec.csi.fsType" to ext4 fixes the issue.

@jbehrends
Contributor Author

Opened a PR to update the static-provisioning example as @ConnorJC3 had suggested.

@wmesard
Contributor

wmesard commented Sep 8, 2022

> @joshbranham @jbehrends did some further digging on this, and I don't think it's a kubelet bug but rather an intentional feature. The kubelet refuses to apply the fsGroup if the volume doesn't specify an fsType, and the static provisioning examples don't (see https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/csi/csi_mounter.go#L405-L436).

It's certainly the current behavior, but is it intentional? The volume is already mounted at the point where supportsFSGroup() is called. Instead of looking at the spec, couldn't csiMountMgr.SetUpAt() look at the actual volume to get its fsType?

It seems weird/wrong to force the user to specify an otherwise-optional parameter when the information is right at hand.

@gnufied
Contributor

gnufied commented Sep 8, 2022

EBS driver deployments could also default to FSGroupPolicy: File in the CSIDriver object, which would remove all the guesswork from the kubelet and always result in a chown and chmod of files. That is what we are doing in OpenShift (https://github.com/openshift/aws-ebs-csi-driver-operator/blob/master/assets/csidriver.yaml#L11), which makes this a non-issue.
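For anyone wanting to do the same, it is a one-field change in the CSIDriver object; a sketch (the other fields should mirror your driver's stock manifest, so check before copying):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: ebs.csi.aws.com
spec:
  attachRequired: true
  podInfoOnMount: false
  fsGroupPolicy: File   # kubelet applies fsGroup ownership regardless of fsType
```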

@ConnorJC3
Contributor

@wmesard my understanding is that the external-provisioner sidecar is responsible for the "default fstype" magic. Thus, from the Kubernetes side the volume really doesn't have an fsType set; the driver only sees one because the external-provisioner slips one in when it calls the driver.
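To make that concrete: the sidecar is commonly deployed with a default-fstype argument, as in the fragment below (image tag illustrative; I believe the external-provisioner flag is --default-fstype, but verify against your deployment):

```yaml
containers:
  - name: csi-provisioner
    image: k8s.gcr.io/sig-storage/csi-provisioner:v3.2.1
    args:
      - --csi-address=$(ADDRESS)
      - --default-fstype=ext4   # the fsType "slipped in" for dynamically provisioned volumes
```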

@gnufied good catch, opening a PR for that.

@torredil
Member

Closing this now since #1377 has been merged.
