operator-sdk run bundle with security-context-config restricted fails to spawn registry pod due to runAsNonRoot #6430
Comments
We would need help in implementing this feature; @everettraven would be able to guide the implementation. Please feel free to assign yourself. Thank you.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@varshaprasad96 Using operator-sdk version v1.32.
I'm also still running into this :/
/reopen
@kaovilai: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@kaovilai There is a flag for this; have you tried running with that flag?
@acornett21 This issue is about a bug in the functionality of that flag: it does not set the runAsNonRoot security context on the registry pod.
@weshayutin said it didn't work on v1.33.0.
Checking the master branch code, nothing is setting the runAsNonRoot security context on the registry pod.
Consider my team unblocked; the flag is working on the latest GA, 1.34.2.
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Bug Report
What did you do?
Ran the following to deploy my Ansible-based operator (the operator type does not matter) into my k8s cluster:
$ operator-sdk run bundle --security-context-config restricted kuberegistry.blub.tld/test/ansible-operator-dev/test1-bundle:v0.0.1
What did you expect to see?
What did you see instead? Under which circumstances?
The deployment fails immediately: the registry pod is rejected by the restricted PodSecurity admission because runAsNonRoot is not set.
Environment
Operator type:
N/A
Kubernetes cluster type:
Talos Linux 1.3.x with K8s 1.26.3 and PodSecurity set to restricted.
$ operator-sdk version
operator-sdk version: "v1.28.0", commit: "484013d1865c35df2bc5dfea0ab6ea6b434adefa", kubernetes version: "1.26.0", go version: "go1.19.6", GOOS: "linux", GOARCH: "amd64"
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:40:17Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:12Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
Possible Solution
As per the comment starting at line 244, the reasoning for not applying the runAsNonRoot container security context is that older OpenShift and Kubernetes versions did not support all of the needed options. By now we can probably assume that we no longer need to support 1.19.x, or possibly even those older OpenShift versions. Alternatively, we could add detection modes for OpenShift/legacy Kubernetes, or enhance the switch so that the user can choose the appropriate security level.
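For reference, here is a minimal sketch of the security context a pod needs in order to pass the restricted PodSecurity profile, based on the upstream Pod Security Standards rather than the exact fields operator-sdk sets today; the pod name, container name, and image below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: example-registry-pod            # placeholder name
spec:
  securityContext:
    runAsNonRoot: true                   # required by the restricted profile
    seccompProfile:
      type: RuntimeDefault               # RuntimeDefault (or Localhost) is required
  containers:
    - name: registry                     # placeholder container name
      image: quay.io/example/bundle-registry:v0.0.1   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false  # must be explicitly false
        capabilities:
          drop:
            - ALL                        # restricted requires dropping ALL capabilities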
Additional context
Together with @everettraven on Slack (https://kubernetes.slack.com/archives/C0181L6JYQ2/p1683845500758729), we identified the following points:
Temporary Workaround via Kyverno Policy
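The original policy is not reproduced in this thread. As a rough illustration only, a Kyverno mutate policy along the following lines could patch the required security context onto pods in the namespace where run bundle creates the registry pod until the flag is fixed; the policy name, the target namespace, and the match on all pods are assumptions and should be narrowed to the registry pod's actual labels:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: patch-registry-pod-security-context   # hypothetical policy name
spec:
  rules:
    - name: add-restricted-security-context
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - operators                    # assumption: namespace used by `operator-sdk run bundle`
      mutate:
        patchStrategicMerge:
          spec:
            securityContext:
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
            containers:
              # conditional anchor: apply to every container in the matched pod
              - (name): "*"
                securityContext:
                  allowPrivilegeEscalation: false
                  capabilities:
                    drop:
                      - ALL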