[k3s-upgrade] k3s binary missing container_runtime_exec_t context type after upgrade on selinux systems #379
Comments
The best current workaround for systemd-based systems is to add a drop-in that performs restorecon on the binary just prior to start, e.g.
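A minimal sketch of such a drop-in, assuming the unit is named k3s.service and the binary lives at /usr/local/bin/k3s (both may differ on your system):
mkdir -p /etc/systemd/system/k3s.service.d
cat <<'EOF' >/etc/systemd/system/k3s.service.d/restorecon.conf
[Service]
# relabel the binary from policy just before each (re)start; the leading "-" ignores failures
ExecStartPre=-/sbin/restorecon -v /usr/local/bin/k3s
EOF
systemctl daemon-reload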
Based on our design discussion this morning, we should try to set the context label of the file we install to match the context label of the file it replaces. Standard SELinux tooling is very contextual, which means that when it is run in a container it needs some amount of bind-mounting/config from the host to work correctly. But given that SELinux context labels are stored as extended attributes on the filesystem, we just need some tooling that can help us set the correct attributes:
# replace the "copy" line with something like these lines
# getfilecon/setfilecon here stand in for whatever tooling reads and writes the security.selinux xattr
K3S_CONTEXT=$(getfilecon "$K3S_HOST_BINARY" 2>/dev/null)
cp -vf /opt/k3s "$K3S_HOST_BINARY"
if [ -n "${K3S_CONTEXT}" ]; then
  setfilecon "${K3S_CONTEXT}" "$K3S_HOST_BINARY"
fi
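For comparison, a runnable sketch of the same idea using commands commonly available on an SELinux-enabled host (stat from coreutils and chcon); the K3S_HOST_BINARY value here is an assumption:
K3S_HOST_BINARY=/usr/local/bin/k3s                         # assumed install location
K3S_CONTEXT=$(stat -c %C "$K3S_HOST_BINARY" 2>/dev/null)   # full context, e.g. user:role:type:level
cp -vf /opt/k3s "$K3S_HOST_BINARY"
if [ -n "$K3S_CONTEXT" ] && [ "$K3S_CONTEXT" != "?" ]; then
  chcon "$K3S_CONTEXT" "$K3S_HOST_BINARY"                  # put the saved label back on the new binary
fi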
When this is done and ready for QA to test, Shylaja should test it. This is for our system-upgrade-controller testing story with RHEL/CentOS environments where SELinux is enabled. It is needed for RKE2 GA and should be tested first via RKE2, then k3s.
Validated that the correct context labels are set before and after install on both RKE2 and k3s with SELinux in Enforcing mode, and after a k3s upgrade from v1.18.9+k3s1 to v1.19.1+k3s1. (Before/after label output omitted.)
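The label check itself can be done with ls -Z; the user/role fields may vary by system, but the type should be container_runtime_exec_t, e.g.:
ls -Z /usr/local/bin/k3s
# system_u:object_r:container_runtime_exec_t:s0 /usr/local/bin/k3s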
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Describe the bug:
After upgrading k3s via rancher/k3s-upgrade, the type portion of the context label for /usr/local/bin/k3s reverts to bin_t (or something other than container_runtime_exec_t). When the k3s process is restarted by the supervisor process (typically systemd), it then cascades the incorrect labels via domain/file transitions, causing new/recreated pods to fail to operate correctly.
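The mislabel can be seen on the file and, after a restart, on the process domain derived from it (inspection sketch, assuming the default install path):
ls -Z /usr/local/bin/k3s   # file type reverted to bin_t after the upgrade
ps -eZ | grep -w k3s       # the k3s process domain follows from the file's (wrong) type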
Steps To Reproduce:
1. Install k3s with the following environment already exported:
INSTALL_K3S_CHANNEL=stable
K3S_KUBECONFIG_MODE=0644
K3S_SELINUX=true
K3S_TOKEN=centos/7
curl -fsSL https://get.k3s.io | sh
2. Deploy the system-upgrade-controller:
curl -fsSL https://raw.githubusercontent.com/rancher/system-upgrade-controller/master/manifests/system-upgrade-controller.yaml | kubectl apply -f-
3. Apply the example k3s upgrade plan:
curl -fsSL https://raw.githubusercontent.com/rancher/system-upgrade-controller/master/examples/k3s-upgrade.yaml | kubectl apply -f-
4. Label the nodes to trigger the upgrade:
kubectl label node --all k3s-upgrade=true
Expected behavior:
Post-reboot, new/recreated pods start correctly.
Actual behavior:
Post-reboot, new/recreated pods are in CrashLoopBackOff. Pods that were in place prior to the upgrade remain unaffected because they were started with the correct labels/transitions from the previous k3s/containerd process.
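A quick way to observe this (output shape illustrative; pod names will vary):
kubectl get pods -A | grep CrashLoopBackOff   # only pods created or recreated after the upgrade show up here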
Additional context / logs: