MicroShift 4.15+ support with helm charts #745

Open · wants to merge 4 commits into base: main
18 additions & 0 deletions — deployments/helm/nvidia-device-plugin/templates/role-binding.yml

@@ -14,4 +14,22 @@ roleRef:
  kind: ClusterRole
  name: {{ include "nvidia-device-plugin.fullname" . }}-role
  apiGroup: rbac.authorization.k8s.io
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1/SecurityContextConstraints" }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "nvidia-device-plugin.fullname" . }}-role-binding
  namespace: {{ include "nvidia-device-plugin.namespace" . }}
  labels:
    {{- include "nvidia-device-plugin.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "nvidia-device-plugin.fullname" . }}-role
subjects:
- kind: ServiceAccount
  name: {{ include "nvidia-device-plugin.fullname" . }}-service-account
  namespace: {{ include "nvidia-device-plugin.namespace" . }}
{{- end }}
{{- end }}
32 additions & 0 deletions — deployments/helm/nvidia-device-plugin/templates/role.yml

@@ -10,9 +10,41 @@ rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1/SecurityContextConstraints" }}
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
{{- end }}
Comment on lines +13 to +22

Member:

Does it make sense to shift this to a named template to use here and below? Not a blocker though.

Author:

Hi @elezar, apologies for the late reply; I'm back to this thread.
Yes, that looks like a good approach to follow. Should we include it as part of this PR, or can I do it as a follow-up?
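For reference, the named-template refactor discussed above could look roughly like this. This is a hypothetical sketch, not part of the PR: the helper name `nvidia-device-plugin.sccRule` and its placement in `_helpers.tpl` are assumptions.

```yaml
{{/* _helpers.tpl — hypothetical helper holding the shared SCC rule */}}
{{- define "nvidia-device-plugin.sccRule" -}}
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
{{- end -}}
```

Both the ClusterRole and the Role could then render the same rule with `{{ include "nvidia-device-plugin.sccRule" . }}`, keeping the two rule lists from drifting apart.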

{{- if and .Values.gfd.enabled .Values.nfd.enableNodeFeatureApi }}
- apiGroups: ["nfd.k8s-sigs.io"]
  resources: ["nodefeatures"]
  verbs: ["get", "list", "watch", "create", "update"]
{{- end }}
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1/SecurityContextConstraints" }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role

Member:

Could we create a list consisting of ClusterRole and Role and loop over these to construct both? (Note that for the default case we would only construct a ClusterRole.)

I'm happy to do these as a follow-up.

Author:

With MicroShift we also need a Role object to unblock security issues such as these:

[root@dell-prt7875-01 k8s-device-plugin]# helm upgrade -i nvdp deployments/helm/nvidia-device-plugin/     --version=0.15.0     --namespace nvidia-device-plugin     --create-namespace     --set-file config.map.config=/tmp/dp-example-config0.yaml
Release "nvdp" does not exist. Installing it now.
W0819 10:53:58.235559 3075925 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (containers "nvidia-device-plugin-init", "nvidia-device-plugin-sidecar", "nvidia-device-plugin-ctr" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "nvidia-device-plugin-init", "nvidia-device-plugin-sidecar", "nvidia-device-plugin-ctr" must set securityContext.capabilities.drop=["ALL"]; containers "nvidia-device-plugin-sidecar", "nvidia-device-plugin-ctr" must not include "SYS_ADMIN" in securityContext.capabilities.add), restricted volume types (volumes "device-plugin", "mps-root", "mps-shm", "cdi-root" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "nvidia-device-plugin-init", "nvidia-device-plugin-sidecar", "nvidia-device-plugin-ctr" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "nvidia-device-plugin-init", "nvidia-device-plugin-sidecar", "nvidia-device-plugin-ctr" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
W0819 10:53:58.236480 3075925 warnings.go:70] would violate PodSecurity "restricted:v1.24": privileged (containers "mps-control-daemon-mounts", "mps-control-daemon-ctr" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "mps-control-daemon-mounts", "mps-control-daemon-init", "mps-control-daemon-sidecar", "mps-control-daemon-ctr" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "mps-control-daemon-mounts", "mps-control-daemon-init", "mps-control-daemon-sidecar", "mps-control-daemon-ctr" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "mps-root", "mps-shm" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "mps-control-daemon-mounts", "mps-control-daemon-init", "mps-control-daemon-sidecar", "mps-control-daemon-ctr" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "mps-control-daemon-mounts", "mps-control-daemon-init", "mps-control-daemon-sidecar", "mps-control-daemon-ctr" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
NAME: nvdp
LAST DEPLOYED: Mon Aug 19 10:53:58 2024
NAMESPACE: nvidia-device-plugin
STATUS: deployed
REVISION: 1
TEST SUITE: None

Apologies, but I'm not sure I see the need for the loop over a list of ClusterRoles and Roles, as it would be quite short IMHO.

Could you please clarify this point a bit?
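For what it's worth, the loop suggested above could be sketched as follows. This is a hypothetical illustration, not part of the PR: it always emits a ClusterRole and appends a namespaced Role only when SCCs are available; the shared rules would still need to come from somewhere, e.g. a named template.

```yaml
{{- $kinds := list "ClusterRole" -}}
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1/SecurityContextConstraints" -}}
{{- $kinds = append $kinds "Role" -}}
{{- end -}}
{{- range $kinds }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: {{ . }}
metadata:
  name: {{ include "nvidia-device-plugin.fullname" $ }}-role
  {{- if eq . "Role" }}
  namespace: {{ include "nvidia-device-plugin.namespace" $ }}
  {{- end }}
  labels:
    {{- include "nvidia-device-plugin.labels" $ | nindent 4 }}
rules:
  # shared rules would go here (e.g. via a named template)
{{- end }}
```

Note that inside the `range`, `.` is the kind string, so the root context must be reached via `$`.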

metadata:
  name: {{ include "nvidia-device-plugin.fullname" . }}-role
  namespace: {{ include "nvidia-device-plugin.namespace" . }}
  labels:
    {{- include "nvidia-device-plugin.labels" . | nindent 4 }}
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
{{- end }}
{{- end }}
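One way to sanity-check the gating above locally (a hypothetical invocation, assuming a checkout of the chart): `helm template` accepts an `--api-versions` flag, which lets you simulate an OpenShift/MicroShift-like API surface and confirm which RBAC objects render.

```shell
# Render the chart as if SecurityContextConstraints were available,
# then list the RBAC object kinds that get emitted. Without the
# --api-versions flag, only the cluster-scoped objects should appear.
helm template nvdp deployments/helm/nvidia-device-plugin \
  --namespace nvidia-device-plugin \
  --api-versions "security.openshift.io/v1/SecurityContextConstraints" \
  | grep -E '^kind: (ClusterRole|ClusterRoleBinding|Role|RoleBinding)$'
```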
1 addition & 1 deletion — deployments/helm/nvidia-device-plugin/values.yaml

@@ -149,4 +149,4 @@ mps:
  # be created. This includes a daemon-specific /dev/shm and pipe and log
  # directories.
  # Pipe directories will be created at {{ mps.root }}/{{ .ResourceName }}
  root: "/run/nvidia/mps"
  root: "/run/nvidia/mps"

Member:

This seems like an oversight due to the removal of the explicit value.

Author:

Sorry for this typo. In the end, it is probably better to remove this file from the PR. What do you think?