
Adjust created PV owner to be the same as container user's UID #443

Closed
Diegunio opened this issue Mar 6, 2024 · 5 comments
Labels: kind/enhancement, lifecycle/rotten

Comments


Diegunio commented Mar 6, 2024

I have deployed the hostpath-provisioner-operator and it works as it should. However, when I create a pod that has a volume attached and runs as a specific UID, the owner of the provisioned folder/files is set to root, which means the user in the container has no access to the directory and the pod ends up being killed.

I would like the owner of the directory/files to be set automatically to the UID/GID that the container runs as. Such a function exists in https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
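To make the request concrete: the idea is that the directory the provisioner creates on the host ends up owned by the UID/GID the pod declares, for example through a pod-level securityContext like the following sketch (the 1001 values are purely illustrative and simply match the StatefulSet below):

securityContext:
  runAsUser: 1001    # UID the container process runs as (illustrative value)
  runAsGroup: 1001   # primary GID of the container process (illustrative value)
  fsGroup: 1001      # group that should own the mounted volume, as in the example below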

An example of the improper behavior is this yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb7
  labels:
    name: mongodb7
spec:
  serviceName: "mongodb-server"
  replicas: 1
  selector:
    matchLabels:
      app: mongodb7
  template:
    metadata:
      labels:
        app: mongodb7
    spec:
      containers:
        - name: mongodb-server
          image: (mongodb7-image)
          imagePullPolicy: Always
          ports:
            - name: mongodb-port
              containerPort: 27017
              protocol: TCP
          volumeMounts:
            - mountPath: /data/db
              name: data
            - mountPath: /var/log
              name: audit
      securityContext:
        fsGroup: 1001
      imagePullSecrets:
        - name: dockercfg-secret
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: hostpath-csi
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 250Gi
    - metadata:
        name: audit
      spec:
        storageClassName: hostpath-csi
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi

After the StatefulSet creates the pod, the owner of the directory on the filesystem is root, not mongodb/1001, which the container requires in order to read the database data.

Structure of my node filesystem where the PV lives:

drwxr-xr-x. 3 root root system_u:object_r:container_file_t:s0     17 Mar  6 05:50 data
	drwxr-x---. 6 root root system_u:object_r:container_file_t:s0 4096 Mar  6 07:59 csi
		drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c20,c36 6 Mar  6 07:33 pvc-bcac971b-6150-408f-858a-19565e92a5d

awels commented Mar 21, 2024

Hi, so this should already be working through #189; in fact, we have tests to ensure it works. In particular, can you check that your CSIDriver resource has the proper fsGroupPolicy set (if you use the operator it should already be set)? Also note that in your example you are using RWX for the data directory, which I assume is just a copy-and-paste error, since hpp doesn't support RWX at all and it will fail to bind the PVC to a PV.

If there are any permission issues with the chmod that is executed, the pods of the daemonset will indicate what the problem is. I believe the csi-provisioner container is the one that calls the chmod.
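To make the RWX point concrete, the data claim template would need ReadWriteOnce for hpp to bind it; a sketch based on the claim in the report, with everything else left unchanged:

volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: hostpath-csi
      accessModes:
        - ReadWriteOnce   # hpp does not support ReadWriteMany
      resources:
        requests:
          storage: 250Gi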

@kubevirt-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label on Jun 19, 2024
@kubevirt-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 19, 2024
@kubevirt-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

@kubevirt-bot

@kubevirt-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
