cephfs volume creation error: setfattr: Operation not supported on k8s node #99
Might this be related?
I have Ceph on Debian 9 (12.2.8 -> luminous)
I don't think that's related, and the plugin defaults to the FUSE driver anyway. Could you please attach […]
@rootfs could you please have a look?
@compilenix it looks […]
@rootfs well, the provisioner doesn't need setfattr anyway, so I don't think that's the issue here; rather, the plugin is complaining about […]
setfattr is called here in the provisioner; the error message is:
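For reference, the quota attribute the provisioner sets can be sketched like this (the volume path and size are placeholders; the command is echoed for illustration rather than run against a live CephFS mount):

```shell
# ceph.quota.max_bytes is the xattr named in the error message above.
ATTR="ceph.quota.max_bytes"
BYTES=$((1 * 1024 * 1024 * 1024))  # 1 GiB quota (placeholder size)
echo "setfattr -n $ATTR -v $BYTES /csi-volumes/csi-vol-example"
```

When this xattr cannot be set, the error reported by setfattr is the "Operation not supported" seen in this issue.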
@compilenix what is your ceph release? @batrick any idea why setfattr failed with "operation not supported" on a cephfs quota?
I have Ceph on Debian 9 (12.2.8 -> luminous)
Correct, setfattr is not installed in the provisioner container/image. I've installed it manually to see if it makes any difference, using […]
@compilenix can you turn on mds logging and post the mds log?
@compilenix and could you also please try mounting the volume manually? After you get that error, check the logs to see the mount command, and just copy-paste that along with the […]
so the command would be:
and also try […]
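A manual mount for comparison can be sketched as follows (monitor address, credentials, and mount point are all placeholders; the commands are echoed for illustration):

```shell
# Placeholders standing in for the real cluster details.
MON="192.168.0.1:6789"
MNT="/mnt/cephfs-test"
# FUSE client:
echo "ceph-fuse -m $MON --id admin $MNT"
# Kernel client, for comparison:
echo "mount -t ceph $MON:/ $MNT -o name=admin,secret=<key>"
```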
the mds log level was set to 3 during this command:
using […]
This directory has many subfolders like this:
a directory named […]; the new file contributed to used space on the cephfs data pool (which was completely empty up until now).
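To check whether a quota actually got applied to such a directory, the xattrs can be read back with getfattr (the path is a placeholder; commands echoed for illustration):

```shell
# Hypothetical path to one of the directories created by the provisioner.
DIR="/mnt/cephfs/volumes/csi-vol-example"
echo "getfattr -n ceph.quota.max_bytes $DIR"
echo "getfattr -n ceph.quota.max_files $DIR"
```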
@compilenix can you setfattr on that directory?
No.
mds log at this time:
So the problem only occurs with the kernel client? What version of the kernel is being used? Kernel quota management is not supported until Mimic and the 4.17 kernel: http://docs.ceph.com/docs/master/cephfs/kernel-features/
No, the problem occurs with the fuse client. The kernel is at version 4.15.0 (Ubuntu 18.04). Is there an option not to define a quota? That would work just fine for me.
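The kernel-version requirement mentioned above can be checked with a small sketch (the version string is hard-coded here to match this report; in practice it would come from `uname -r`):

```shell
# Kernel CephFS quota support requires >= 4.17 (per the docs linked above).
KVER="4.15.0"          # placeholder; normally: KVER=$(uname -r)
MAJOR=${KVER%%.*}
REST=${KVER#*.}
MINOR=${REST%%.*}
if [ "$MAJOR" -lt 4 ] || { [ "$MAJOR" -eq 4 ] && [ "$MINOR" -lt 17 ]; }; then
  echo "kernel $KVER: CephFS quotas not supported by the kernel client"
else
  echo "kernel $KVER: CephFS quotas supported"
fi
```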
@compilenix do you use the same cephcsi plugin container from quay.io? I am not sure if the cephfs-fuse is up to date, but I'll check. A bit off topic: the attribute setting here and below appears only applicable to the new kernel mounter or cephfs-fuse. I believe we need some if-else here: if it is a kernel mounter, we should avoid setting them, since an old kernel mounter will fail to mount later. @gman0
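The if-else suggested above can be sketched in shell terms like this (the mounter value is a placeholder; in the real plugin it would come from the StorageClass parameters):

```shell
# Only set quota xattrs for mounters that understand them.
MOUNTER="kernel"   # placeholder; "kernel" or "fuse"
if [ "$MOUNTER" = "kernel" ]; then
  echo "skipping ceph.quota.* xattrs for the kernel mounter"
else
  echo "setting ceph.quota.max_bytes via setfattr"
fi
```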
let's see if #100 fixes this issue
@compilenix I merged #100, please try the new cephfs plugin image
I've used this image url: quay.io/cephcsi/cephfsplugin:v0.3.0
Sure, I've updated the yml files to include csi-cephfsplugin.txt (see attached; I've excluded the rbac config). It does not seem to make a difference.
This was not an issue before; the kernel client would just ignore the quota.
@compilenix @rootfs I tried to reproduce this issue with a Ceph Luminous cluster (it's 12.2.4, but regardless) and it is indeed failing with the aforementioned error message. There seems to be an incompatibility between a Ceph Luminous cluster and the Ceph Mimic FUSE driver when setting attributes. It's also worth noting that the kernel client does not exhibit this issue and works as expected.
@gman0 which OS and kernel version did you use? |
@compilenix I've used ceph-container, tested on hosts:
and
with results in both cases as I've described in the previous message
@gman0 can we close this issue, or is it still present?
Hi all,
1. Using the kernel client:
mount -t ceph 178.178.178.189:1091,178.178.178.19:1091,178.178.178.188:1091:/ /home/jlx -o name=admin,secret=AQA0whldex5NJhAAnLkp5U9Iwh+69lz9zbMhMg==,mds_namespace=cephfs
mount error 22 = Invalid argument
I found that adding the mds_namespace parameter makes the mount fail.
2. Using ceph-fuse […]
cc @ajarr |
Can anyone help me with this problem? I found that when I exec the setfattr command in the csi-cephfsplugin container, it prints "Operation not supported", but on my ceph cluster nodes and k8s nodes the same command runs correctly. Why? Thanks all!
setfattr -n ceph.quota.max_bytes -v 2073741824 csi-vol-097f0e23-a221-11e9-8c5a-fa163e58264b-creating
[root@node-7:/usr/bin]$ docker ps
docker exec -it 6be1813bad0d /bin/sh
CC @poornimag can you help? |
I made a private change to ignore the error from the "setfattr" operation, which is not supported by my kernel version, and volume create/mount is not impacted.
Yeah, today I also wanted to use the method you described. Is the problem that the ceph-fuse version in the CSI driver is not compatible with my ceph cluster version? Why has this problem not been resolved and merged on GitHub? Thanks all. In the CSI driver, the ceph-fuse version is […] but my ceph cluster is […]
I have fixed the problem: the ceph client version in the CSI driver container was not compatible with my ceph cluster version. I changed the CSI driver container's ceph client version from v1.14 to v1.12, and then the PV and volume could be dynamically created and the pod using the PVC claim runs. Thanks to Madhu-1.
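Checking for this kind of client/cluster mismatch can be sketched like this (the pod name is taken from earlier in this thread; the commands are echoed for illustration since they need a live cluster):

```shell
# Compare the ceph client inside the plugin container with the cluster.
# A mimic/nautilus client against a luminous cluster can trigger
# "Operation not supported" on quota xattrs.
echo "kubectl exec -it csi-cephfsplugin-provisioner-0 -- ceph --version"
echo "ceph versions   # run against the cluster itself"
```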
…-to-release-4.10 [release-4.10] Bug 2098562: rbd: create token and use it for vault SA
…tSA.createToken Pull-Request ceph#99 backported a commit that applies cleanly, but causes a build failure:
internal/kms/vault_sa.go:313:12: kms.createToken undefined (type *VaultTenantSA has no field or method createToken)
internal/kms/vault_sa.go:344:12: undefined: vaultTenantSA
In recent versions, `vaultTenantSA` is used, but release-4.10 is stuck on the old naming of `VaultTenantSA`. The unexporting was done in upstream ceph#2750, which includes more changes than what we want in a backport.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
I'm hoping you can help me with this, though it seems not to be directly caused by ceph-csi itself.
xattrs in general are supported and enabled by the filesystem (EXT4) on the node (tried with setfattr -n user.foo -v bar foobar). I've tried the setfattr command on the node and within the container. It does not work in either place.
The k8s node is running on Ubuntu 18.04.1 LTS.
Container (kubectl exec csi-cephfsplugin-provisioner-0 -i -t -- sh -il):
Node:
Here are the logs:
logs-from-csi-cephfsplugin-attacher-in-csi-cephfsplugin-attacher-0.txt
logs-from-csi-provisioner-in-csi-cephfsplugin-provisioner-0.txt
logs-from-driver-registrar-in-csi-cephfsplugin-t94m4.txt
logs-from-csi-cephfsplugin-in-csi-cephfsplugin-t94m4.txt
Greetings and sorry for bothering you again 😞