
docs: Add best practice for reducing memory usage #2545

Merged
merged 1 commit into argoproj:master from bestpractice_memory on Jan 27, 2023

Conversation

@blkperl (Contributor) commented Jan 27, 2023

Signed-off-by: William Van Hevelingen <william.vanhevelingen@acquia.com>

Checklist:

  • Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this is a chore.
  • The title of the PR is (a) conventional with a list of types and scopes found here, (b) states what changed, and (c) suffixes the related issue number. E.g. "fix(controller): Updates such and such. Fixes #1234".
  • I've signed my commits with DCO
  • I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
  • My builds are green. Try syncing with master if they are not.
  • My organization is added to USERS.md.

@sonarcloud (bot) commented Jan 27, 2023

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

Coverage: no coverage information
Duplication: 3.6%

@github-actions (bot)

Go Published Test Results

1 834 tests: 1 834 ✔️ passed, 0 💤 skipped, 0 failed
105 suites · 1 file · 2m 32s ⏱️

Results for commit 36c46fd.

@codecov (bot) commented Jan 27, 2023

Codecov Report

Base: 81.66% // Head: 81.66% // No change to project coverage 👍

Coverage data is based on head (36c46fd) compared to base (373c4a0).
Patch has no changes to coverable lines.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #2545   +/-   ##
=======================================
  Coverage   81.66%   81.66%           
=======================================
  Files         126      126           
  Lines       19147    19147           
=======================================
  Hits        15636    15636           
  Misses       2717     2717           
  Partials      794      794           


☔ View full report at Codecov.

@github-actions (bot)

E2E Tests Published Test Results

2 files · 2 suites · 1h 37m 21s ⏱️
95 tests: 90 ✔️ passed, 3 💤 skipped, 2 failed
192 runs: 184 ✔️ passed, 6 💤 skipped, 2 failed

For more details on these failures, see this check.

Results for commit 36c46fd.

@zachaller zachaller merged commit b60f69d into argoproj:master Jan 27, 2023
@blkperl blkperl deleted the bestpractice_memory branch January 27, 2023 15:33
@blkperl (Author) commented Jan 27, 2023

@zachaller Thanks for the review! Let me know if you have questions or suggestions on changing the wording.

We could also mention that operators may wish to do the same for Deployments, since the Argo Rollouts controller watches all ReplicaSets in the cluster. One way to avoid this is to use NewCache, which I don't think the controller uses. I can open a new issue for this if you think that makes sense. The tigera operator uses NewCache with a filter for NetworkPolicies if you are looking for an example.
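
For illustration, a minimal sketch of that NewCache-with-filter pattern, assuming a recent sigs.k8s.io/controller-runtime release (older releases expose SelectorsByObject instead of ByObject). The rollouts-managed label key and the newFilteredCache helper are hypothetical and are not code from the controller or the tigera operator:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/labels"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/cache"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // newFilteredCache builds a controller-runtime cache that only lists and
    // watches ReplicaSets carrying a (hypothetical) rollouts-managed label,
    // rather than every ReplicaSet in the cluster.
    func newFilteredCache() (cache.Cache, error) {
        cfg := ctrl.GetConfigOrDie()
        return cache.New(cfg, cache.Options{
            ByObject: map[client.Object]cache.ByObject{
                &appsv1.ReplicaSet{}: {
                    Label: labels.SelectorFromSet(labels.Set{"rollouts-managed": "true"}),
                },
            },
        })
    }

Objects that fall outside the configured selector are never stored in the informer's local store, which is where the memory savings come from.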

@zachaller (Collaborator)

Yeah, I would love a new issue for looking into using that cache to get an informer that is more limited in scope than, say, all ReplicaSets, to reduce memory usage. Rollouts was created before a lot of the kubebuilder stuff, so quite a few things are done a little lower level, but I took a quick peek at that cache package and it looks like I can still get an informer off of it. I would like to spend time looking into that and would use the issue to help track it.
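
As a companion to the sketch above, a hedged example of getting a scoped informer off such a cache; Start, WaitForCacheSync, and GetInformer are part of the controller-runtime cache.Cache interface, and newFilteredCache is the hypothetical helper from the previous sketch:

    package main

    import (
        "context"
        "log"

        appsv1 "k8s.io/api/apps/v1"
    )

    func main() {
        ctx := context.Background()
        c, err := newFilteredCache() // hypothetical helper from the sketch above
        if err != nil {
            log.Fatal(err)
        }

        // Start populating the cache in the background; Start blocks until ctx is cancelled.
        go func() {
            if err := c.Start(ctx); err != nil {
                log.Fatal(err)
            }
        }()
        if !c.WaitForCacheSync(ctx) {
            log.Fatal("cache failed to sync")
        }

        // The informer is scoped by the label selector configured on the cache,
        // so it only lists and watches the matching ReplicaSets.
        informer, err := c.GetInformer(ctx, &appsv1.ReplicaSet{})
        if err != nil {
            log.Fatal(err)
        }
        _ = informer // wire event handlers / a workqueue here
    }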

@zachaller (Collaborator) commented Jan 27, 2023

I think Rollouts might need to do something like this instead of the cache:

kubeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(
    kubeClient,
    resyncDuration,
    kubeinformers.WithNamespace(namespace),
    kubeinformers.WithTweakListOptions(func(options *metav1.ListOptions) {
        options.LabelSelector = "rollouts-managed=managed-key"
    }),
)

The key is WithTweakListOptions, and we would also probably need to add some kind of managed-by label to the ReplicaSets.

We might even be able to just use LabelSelector="rollouts-pod-template-hash", meaning the selector only requires the key to be present and matches any value.
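
As a hedged illustration of that idea: a bare key in a Kubernetes label selector means "the key exists, with any value", and ReplicaSets created by Argo Rollouts already carry rollouts-pod-template-hash, so the tweak above could become:

    kubeinformers.WithTweakListOptions(func(options *metav1.ListOptions) {
        // A bare key selects every object that carries the label, whatever its value.
        options.LabelSelector = "rollouts-pod-template-hash"
    })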

@blkperl (Author) commented Jan 31, 2023

Thanks, I filed #2552
