
Ramen controllers are caching secret/configmaps from all namespaces #1434

Open

akalenyu (Contributor) opened this issue Jun 2, 2024 · 1 comment
akalenyu commented Jun 2, 2024

Initially sparked by a casual conversation with @nirs about a bug around secret objects,
I took a peek at the source code, and it does indeed look like some controllers are caching all configmaps and secrets.
This both:

  • Aggressively increases memory usage (think: a production cluster with an unbounded number of ConfigMaps/Secrets)
  • Poses a security threat, since Ramen gets access to all user-sensitive data in the cluster - rbac practice reference

This can be solved by caching only the secrets/configmaps in the ramen namespace.
From a quick read, ramen only cares about those anyway:

func (r *DRClusterReconciler) drClusterSecretMapFunc(ctx context.Context, obj client.Object) []reconcile.Request {
	if obj.GetNamespace() != RamenOperatorNamespace() {
		return []reconcile.Request{}
	}
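Since the mapper above already discards everything outside the operator namespace, the cache itself could be restricted at manager setup time so those objects never enter memory at all. A minimal sketch of what that could look like, assuming controller-runtime >= v0.16 (`cache.Options.ByObject` with per-type `Namespaces`); the namespace name and the surrounding manager setup here are illustrative, not Ramen's actual code:

```go
// Sketch: restrict the manager cache so Secrets and ConfigMaps are only
// cached from the operator's namespace, instead of cluster-wide.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Cache: cache.Options{
		ByObject: map[client.Object]cache.ByObject{
			// Only cache these types in the ramen namespace (illustrative name).
			&corev1.Secret{}:    {Namespaces: map[string]cache.Config{"ramen-system": {}}},
			&corev1.ConfigMap{}: {Namespaces: map[string]cache.Config{"ramen-system": {}}},
		},
	},
})
if err != nil {
	// handle error
}
_ = mgr
```

With this in place, informers for Secrets/ConfigMaps list and watch only the one namespace, so both the memory footprint and the required RBAC scope shrink accordingly.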

This can be demonstrated in one of the default drenv environments (I used test/envs/regional-dr-kubevirt.yaml):

for i in {1..200}; do kubectl --context dr1 create cm test-cm-$i -n default --from-file=../manifests/largedatafile.txt ; done

$ kubectl get pods --context dr1 -n ramen-system -w
ramen-dr-cluster-operator-896d8c9f6-krbtd   2/2     Running             0             11s
ramen-dr-cluster-operator-896d8c9f6-krbtd   1/2     OOMKilled           0             34s
ramen-dr-cluster-operator-896d8c9f6-krbtd   1/2     Running             1 (1s ago)    35s
ramen-dr-cluster-operator-896d8c9f6-krbtd   2/2     Running             1 (7s ago)    41s
ramen-dr-cluster-operator-896d8c9f6-krbtd   1/2     OOMKilled           1 (31s ago)   65s

You may have to kill the existing ramen pod first.
You can also watch the memory usage grow with minikube addons enable metrics-server --profile dr1
and then kubectl --context dr1 top pod -n ramen-system

See also

akalenyu (Contributor, Author)

akalenyu commented Jun 2, 2024

/assign akalenyu
