What happened: csi-driver-smb doesn't work on AL2023 nodes and doesn't produce any useful error. I installed a new Kubernetes cluster on AWS with version 1.30. The standard image for 1.30 is AL2023, as the old AL2 image is deprecated. On AL2023 the container doesn't mount the SMB path properly: it shows the share's top-level content, but when you try to cd into a directory on the SMB share, it fails with either "Required key not available" or "sh: cd: can't cd to XXXXXX/: No error information", depending on the container image you are using.
On AL2 nodes this doesn't happen. I assume it's some kind of SELinux or other container-isolation issue, but I'm not sure how to debug it.
What you expected to happen:
Being able to read files from the server and write files to it, using Kerberos, on both AL2023 and AL2 nodes.
How to reproduce it:
Spawn a cluster with two nodes, one running AL2 and one running AL2023. Create a Secret containing a token, then create a PV and a PVC. Use the following mount options for the PV:
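The mount options referenced above did not survive in this report. For orientation, a minimal Secret-backed PV/PVC of the shape this step describes might look as follows (server path, names, and capacity are hypothetical; the Kerberos-specific mountOptions are left as a placeholder, not the reporter's actual values):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-smb
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions: []  # the reporter's Kerberos mount options go here (not preserved in this issue)
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smb-server.example.com/share##pv-smb  # must be unique per volume
    volumeAttributes:
      source: //smb-server.example.com/share  # hypothetical server/share
    nodeStageSecretRef:
      name: smbcreds  # the Secret created above
      namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-smb
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # bind statically to the PV above
  volumeName: pv-smb
  resources:
    requests:
      storage: 10Gi
```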
Spawn two Pods (for example the example nginx Pod from this repo) with a nodeSelector, one pinned to the AL2 node and one to the AL2023 node.
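A sketch of one of the two Pods (the label key/value is hypothetical; any label that distinguishes the AL2 and AL2023 node groups works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-al2023
spec:
  nodeSelector:
    nodepool: al2023  # hypothetical label on the AL2023 node
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: smb
          mountPath: /mnt/smb
  volumes:
    - name: smb
      persistentVolumeClaim:
        claimName: pvc-smb  # the PVC from the previous step
```

The second Pod is identical apart from its name and a nodeSelector targeting the AL2 node.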
Exec into each Pod and cd into the mounted root directory. On AL2023 this works and you can see the folders, but cd-ing into any subfolder fails with one of the errors above.
On AL2 everything works as expected.
Anything else we need to know?:
We are using Kerberos and think the problem is Kerberos-related. The directory is usable both on the node and inside the smb container of csi-driver-smb, because both have the correct ticket mounted under /var/lib/kubelet/kerberos. The Pods consuming the mount provided by csi-driver-smb do not have the ticket, but that is true on both AL2 and AL2023, and it still works on AL2.
Environment:
Kubernetes version (use kubectl version): v1.30.2-eks-db838b0
OS (e.g. from /etc/os-release):
NAME="Amazon Linux"
VERSION="2023"
ID="amzn"
ID_LIKE="fedora"
VERSION_ID="2023"
PLATFORM_ID="platform:al2023"
PRETTY_NAME="Amazon Linux 2023.5.20240701"
Kernel (e.g. uname -a): Linux ip-10-32XXXXXXXXXXXX 6.1.94-99.176.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 18 14:57:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Install tools: krb5-workstation cifs-utils cronie
Others (output of klist):
Ticket cache: FILE:/var/lib/kubelet/kerberos/krb5cc_0
Default principal: windows_XXXX@XXXXXXXX
Valid starting     Expires            Service principal
08/13/24 09:52:54  08/13/24 19:52:54  krbtgt/XX.XXX.XX@XX.XXX.XX
        renew until 08/20/24 09:52:54
08/13/24 09:52:55  08/13/24 19:52:54  cifs/sdfs0XX.XXX.XX@XX.XXX.XX
        renew until 08/20/24 09:52:54
08/13/24 10:00:24  08/13/24 19:52:54  cifs/sdfs0XX.XXX.XX@
        renew until 08/20/24 09:52:54
CSI Driver version: registry.k8s.io/sig-storage/smbplugin:v1.15.0
Ticket server: cifs/sdfsXX.XXX.XX@XX.XXX.XX