Flux unnecessarily fetching all secrets in the cluster #512
This issue looks relevant: kubernetes-sigs/controller-runtime#550 (comment)
@mac-chaffee On a quick search around the code (including the extract you shared), we tend to do a Get with a namespaced name, which should restrict the requirements to the namespace level. The audit logs may provide further insights as to how this can be overcome; it would be great if you could share the logs here. A good way to identify RBAC permissions that are missing is to export the audit logs and run them through audit2rbac.
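For illustration, a namespaced Get through the controller-runtime client looks roughly like this. This is a minimal sketch rather than the helm-controller's actual code; the function and variable names are made up:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getValuesSecret fetches a single, named secret. The request itself is
// namespaced, so on the surface it should only need RBAC for that namespace.
func getValuesSecret(ctx context.Context, c client.Client, ns, name string) (*corev1.Secret, error) {
	var secret corev1.Secret
	key := types.NamespacedName{Namespace: ns, Name: name}
	if err := c.Get(ctx, key, &secret); err != nil {
		return nil, err
	}
	return &secret, nil
}
```

The catch, as the next comment points out, is that the Manager's default client answers this Get from a shared cache, and filling that cache is what triggers the cluster-wide list.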
The caching happens at the cluster scope regardless of whether the Get itself is namespaced. When creating the Manager here (Line 113 in a214763), no namespace is specified (EDIT: unless you restrict Flux to a single namespace rather than a few specific namespaces), which means all secrets in the cluster are fetched and cached: https://github.com/kubernetes-sigs/controller-runtime/blob/1e4d87c9f9e15e4a58bb81909dd787f30ede7693/pkg/manager/manager.go#L188-L194

More info here: kubernetes-sigs/controller-runtime#1249
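As a sketch of what that means at the controller-runtime level, using the `Options.Namespace` field that existed in the controller-runtime versions current at the time of this issue (newer releases configure this through the cache options instead):

```go
package example

import (
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
)

func newManager(scheme *runtime.Scheme, watchNamespace string) (ctrl.Manager, error) {
	// With Namespace left empty, the shared cache backs every Get/List with
	// cluster-wide list+watch calls, which is why un-namespaced secret
	// operations show up in the audit log. Setting it restricts the cache,
	// and therefore the RBAC needed, to a single namespace.
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:    scheme,
		Namespace: watchNamespace, // "" = all namespaces
	})
}
```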
@mac-chaffee I tested this and could only reproduce this behaviour when the controller was started watching all namespaces (the default):

```json
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"list","resourceType":"secrets","resourceNs":"flux-system","decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"list","resourceType":"secrets","resourceNs":null,"decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"update","resourceType":"secrets","resourceNs":"flux-system","decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"watch","resourceType":"secrets","resourceNs":null,"decision":"allow"}
```

Note that the second and fourth operations do not have a namespace associated with them. This is the expected behaviour, as in this mode Flux is operating at the cluster level. However, by starting the controller restricted to the flux-system namespace, every operation becomes namespaced:

```json
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"create","resourceType":"secrets","resourceNs":"flux-system","decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"list","resourceType":"secrets","resourceNs":"flux-system","decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"update","resourceType":"secrets","resourceNs":"flux-system","decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"watch","resourceType":"secrets","resourceNs":"flux-system","decision":"allow"}
```

Can you confirm what value you are running the controller with?
I have the controller running with the default configuration, watching all namespaces. Ideally, there would be three deployment options:

- cluster-wide
- restricted to a single namespace
- restricted to a set of specific namespaces
I've been testing out that third deployment model and it appears to work with the exception of the helm-controller fetching secrets.
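For that third model, controller-runtime does expose a way to scope the cache to a list of namespaces rather than one-or-all: the `MultiNamespacedCacheBuilder` available in controller-runtime versions of that era (since replaced by per-namespace cache configuration). The sketch below shows what such an option could look like if it were wired into the controller; it is not something the helm-controller exposes today:

```go
package example

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

// newScopedManager builds a Manager whose cache only lists and watches the
// given namespaces, so the controller's RBAC could be limited to Roles in
// those namespaces instead of a ClusterRole on secrets.
func newScopedManager(namespaces []string) (ctrl.Manager, error) {
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
	})
}
```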
As of now, we may not be able to completely remove the cluster-wide `list` of secrets. To limit the permissions used for a specific HelmRelease, use the `spec.serviceAccountName` field to impersonate a tenant service account.

The controller service account would still do a cluster-wide `list`:

```json
{"serviceaccount":"system:serviceaccount:flux-system:helm-controller","verb":"list","resourceType":"secrets","resourceNs":null,"decision":"allow"}
```

The secrets needed for that specific HelmRelease are then accessed with the tenant's service account:

```json
{"serviceaccount":"system:serviceaccount:flux-system:tenant","verb":"create","resourceType":"secrets","resourceNs":"tenant-ns","decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:tenant","verb":"list","resourceType":"secrets","resourceNs":"tenant-ns","decision":"allow"}
{"serviceaccount":"system:serviceaccount:flux-system:tenant","verb":"update","resourceType":"secrets","resourceNs":"tenant-ns","decision":"allow"}
```

You may also be interested in enforcing control-plane-level boundaries via admission controllers, as we show in our end-to-end example for multi-tenancy: https://github.com/fluxcd/flux2-multi-tenancy.
I've noticed that when you specify `valuesFrom` in a HelmRelease, the helm-controller won't just fetch that one secret; it will `list` all secrets in the whole cluster instead.

I understand Flux is meant to be installed cluster-wide on the assumption that you can use Flux to install stuff in any namespace, but I don't have any intention of doing that in e.g. "kube-system", so I've deployed Flux with custom permissions that only allow it to read secrets in specific namespaces. So that `list` operation fails for me.

I believe the `list` operation is triggered by the client-go caching mechanism. The helm-controller's code doesn't perform a list itself: helm-controller/controllers/helmrelease_controller.go, Line 556 in 5a7d0f8.

But maybe the cache config of the controller-runtime client has the answer? https://github.com/kubernetes-sigs/controller-runtime/blob/f0351217e9e026533aece74d054b23cae5441659/pkg/client/client.go#L79
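One way to test that theory (and a possible mitigation) would be to read the secret through the Manager's uncached reader (`mgr.GetAPIReader()`), which issues a single namespaced GET instead of priming a cluster-wide informer. A sketch under that assumption; the reconciler type shown here is hypothetical, not the helm-controller's actual code:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type reconciler struct {
	// Client is the default cached client from the Manager: the first Get of
	// a Secret starts an informer that lists and watches secrets cluster-wide.
	Client client.Client
	// APIReader (from mgr.GetAPIReader()) bypasses the cache and only needs
	// `get` on the one secret referenced by valuesFrom.
	APIReader client.Reader
}

func (r *reconciler) readValuesSecret(ctx context.Context, ns, name string) (*corev1.Secret, error) {
	var secret corev1.Secret
	err := r.APIReader.Get(ctx, types.NamespacedName{Namespace: ns, Name: name}, &secret)
	return &secret, err
}
```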