create cephfs pvc with error 'Operation not permitted' #1818
Comments
I came across the same issue. The CephFS user is able to create subvolumegroups and subvolumes manually, but it fails on the provisioner. A user with full admin rights works without problems. I couldn't find where the call to RADOS is made, so I couldn't tell which permission is missing or which action causes the problem. |
@deadjoker @sgissi these are the capabilities we require for the user in a Ceph cluster for Ceph CSI to perform its actions: https://github.com/ceph/ceph-csi/blob/master/docs/capabilities.md . If you still face issues even after granting these permissions, please report back! |
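For reference, the provisioner-side user documented in capabilities.md is created along these lines; the caps below are an approximation from memory and the client name is a placeholder, so treat the linked document as authoritative:

```sh
# Approximate provisioner user from docs/capabilities.md (the linked doc is authoritative).
# The client name is a placeholder.
ceph auth get-or-create client.cephfs-provisioner \
  mon 'allow r' \
  mgr 'allow rw' \
  osd 'allow rw tag cephfs metadata=*'
```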
@humblec I followed these docs and still get this error. |
Thanks @deadjoker for confirming the setup. @yati1998, are we missing any capabilities in the doc? |
Hi @deadjoker, |
Hi @Yuggupta27
Should I use a new ceph id with capability of
and create a |
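A minimal sketch of what wiring such a dedicated Ceph id into the provisioner secret could look like; the secret name, namespace, user names and key values are all assumptions, and the adminID/userID keys follow the convention discussed later in this thread:

```sh
# Sketch only: names and values below are assumptions for illustration.
kubectl create secret generic csi-cephfs-secret \
  --namespace=ceph-csi \
  --from-literal=adminID=admin \
  --from-literal=adminKey="$(ceph auth get-key client.admin)" \
  --from-literal=userID=k8sfs \
  --from-literal=userKey="$(ceph auth get-key client.k8sfs)"
```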
@deadjoker Did you manage to get the issue resolved? I ran into exactly the same error and I'm not sure yet how to resolve it. |
@alamsyahho I have not resolved this issue yet; I'm using the admin account instead. |
Understood. I will probably have to use the admin account for csi-cephfs as well then. Thanks for your reply. |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions. |
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation. |
This is still very valid.
Trying to provision a CephFS subvolumegroup doesn't work using csi-cephfs-provisioner. However, if I tell the StorageClass to use admin, it works, so something is either missing from these caps or the code does something different when admin is used. Update: the csi-cephfs-provisioner is able to create subvolume groups.
|
Weirdly enough, this still fails if I give the csi-cephfs-provisioner client the same caps as admin, but it works if I use the admin client.
|
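For context, giving a dedicated client the same caps as client.admin, as described above, can be done roughly like this; the client name is an assumption:

```sh
# Inspect what client.admin actually has, then mirror those caps onto the CSI user.
# The client name below is an assumption for illustration.
ceph auth get client.admin
ceph auth caps client.csi-cephfs-provisioner \
  mon 'allow *' mgr 'allow *' osd 'allow *' mds 'allow *'
```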
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions. |
I still wasn't able to solve the problem; I simply worked around it using |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions. |
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation. |
@deadjoker, the Ceph capability requirements you provided from that link have to be used in the userID section of the secret, and apply to static provisioning only. The example there explains the meaning of the userID and adminID sections. If you expect dynamic provisioning behaviour, you have to provide an admin user account, for reasons that are not well documented. I faced this issue in past months: only the client.admin user worked. When I created another admin user, say "client.admin123", with the same capabilities, it didn't work. A few posts are related to this problem, this one for example. Recently, users at work asked us to provide dynamic provisioning for our K8s/Ceph environments, so I tried again this evening with an up-to-date config:
I created an alternative admin account again with the same caps as client.admin, inserted those credentials under adminID, and it now works with the alternative admin user! Here is the user definition and caps for information: client.admink8s. This is very insecure: we do not want to expose an admin token in the clear in Kubernetes, as we don't use protected secrets yet. At the very least, it would be appreciated not to require write capabilities on the monitors. Can the development team clarify in the docs directory the minimal caps for an "admin" user for dynamic provisioning, or explain why it has to be a full admin with write caps for the Ceph mons? @humblec? I will also look at the code and the detailed Ceph caps in the next days. Thanks a lot, |
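A sketch of the alternative-admin workaround described above, using the client.admink8s name from the comment; the caps simply mirror client.admin, which is exactly the over-privileging being objected to:

```sh
# Workaround sketch only: an alternative admin user mirroring client.admin's caps.
# Intentionally over-privileged, which is the concern raised in this comment.
ceph auth get-or-create client.admink8s \
  mon 'allow *' mgr 'allow *' osd 'allow *' mds 'allow *'
ceph auth get-key client.admink8s   # this value goes into the adminKey field of the secret
```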
Hi guys, I encountered this problem too, but I have resolved it. I found the doc in ceph-csi/docs/capabilities.md. Here is the change: apiVersion: v1 #Required for dynamically provisioned volumes |
@drummerglen It's not resolved. Your "solution" is exactly what everyone else did to work around the problem, nothing new. It's even written in the original post. |
It's not a solution/resolution to run as admin/superuser/god-mode, it's just a temporary work-around. |
@Raboo Oops, sorry, I didn't read every comment. May I ask if any version has resolved this issue? |
@drummerglen No, I don't think so. It seems very hard to figure out why this is happening, and it probably doesn't affect the majority of users. |
@Raboo My Ceph cluster was deployed by cephadm running on Docker. I have no idea if that is the problem. |
Hi, as of today the issue has still not been resolved. Is there any fix in progress, or does nobody really care about this issue? It is very concerning that we need to expose our Ceph superuser credentials to the ceph-csi client; a slight human or backend error might jeopardize the whole Ceph cluster. |
Hi, I am unsure if the issue is the same but you might want to look at #2687. |
Describe the bug
I deployed ceph-csi in Kubernetes and use CephFS to provide PVCs.
PVC creation fails when I use a normal Ceph user but succeeds if I use the admin Ceph user.
Environment details
Mounter used for mounting PVC (for cephfs its fuse or kernel, for rbd its krbd or rbd-nbd): kernel

Steps to reproduce
Steps to reproduce the behavior:
ceph auth caps client.k8sfs mon 'allow r' mgr 'allow rw' mds 'allow rw' osd 'allow rw tag cephfs *=*'
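The remaining reproduction steps are implied rather than spelled out; roughly, with resource names assumed:

```sh
# Approximate remaining steps (file and resource names are assumptions, not from the report).
ceph auth get-key client.k8sfs                      # key placed in the CSI secret
kubectl apply -f secret.yaml -f storageclass.yaml   # secret and CephFS StorageClass using client.k8sfs
kubectl apply -f pvc.yaml                           # PVC referencing that StorageClass
kubectl get pvc                                     # stays Pending; provisioner logs "Operation not permitted"
```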
Actual results
Expected behavior
PVC should be created successfully and bound to a PV.
Logs
If the issue is in PVC creation, deletion, or cloning, please attach complete logs
of the containers below.
provisioner pod.
Additional context
The Ceph user 'k8sfs' caps:
This user is able to create subvolumes and subvolumegroups as well.
The 'csi' subvolumegroup is created when I use the admin keyring in ceph-csi.
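For reference, manually exercising those operations with the k8sfs credentials looks something like the following; the filesystem name is an assumption:

```sh
# Manual check that client.k8sfs can create subvolumegroups/subvolumes outside the CSI driver.
# The filesystem name "cephfs" is an assumption.
ceph --id k8sfs fs subvolumegroup create cephfs csi
ceph --id k8sfs fs subvolume create cephfs testvol --group_name csi
```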