
[Enhancement]: Regularly monitor peak and resident memory usage of the provider #32289

Closed
breathingdust opened this issue Jun 29, 2023 · 4 comments
Assignees
Labels
  • engineering-initiative: Maintainer driven proposals to improve operation, or development experience of the provider.
  • enhancement: Requests to existing resources that expand the functionality or scope.
  • github_actions: Pull requests that update Github_actions code.
  • prioritized: Part of the maintainer teams immediate focus. To be addressed within the current quarter.
  • provider: Pertains to the provider itself, rather than any interaction with AWS.

Comments

@breathingdust
Member

breathingdust commented Jun 29, 2023

Description

Recent issue #31722 demonstrated that the growing size of the provider, combined with the recent addition of resources with deep and complex schemas, has significantly increased the peak memory requirements of using the provider. This has put pressure on memory limits in practitioners' infrastructure deployment pipelines and has resulted in OOM errors when they make changes to their infrastructure.

While we work on optimizing memory requirements, we should also begin to actively monitor the memory footprint of the provider as it evolves. This is likely to consist of a scheduled GitHub Action that obtains a memory profile of the running provider via an acceptance test and visualizes the results via pprof.

We should also define thresholds for what is considered an acceptable vs. an anomalous increase. This should allow us to respond more proactively to provider changes which result in outsized memory requirements.

Affected Resource(s) and/or Data Source(s)

All

References

Would you like to implement a fix?

None

@breathingdust breathingdust added enhancement Requests to existing resources that expand the functionality or scope. needs-triage Waiting for first response or review from a maintainer. labels Jun 29, 2023
@github-actions

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@breathingdust breathingdust added the github_actions Pull requests that update Github_actions code label Jun 29, 2023
@github-actions github-actions bot removed the needs-triage Waiting for first response or review from a maintainer. label Jun 29, 2023
@breathingdust breathingdust added provider Pertains to the provider itself, rather than any interaction with AWS. needs-triage Waiting for first response or review from a maintainer. and removed needs-triage Waiting for first response or review from a maintainer. labels Jun 29, 2023
@gavinelder

@breathingdust can #26742 please be added as a reference? This is the part of the provider in which we have observed the largest increase in memory consumption.

@breathingdust breathingdust added engineering-initiative Maintainer driven proposals to improve operation, or development experience of the provider. prioritized Part of the maintainer teams immediate focus. To be addressed within the current quarter. labels Jul 25, 2023
@YakDriver YakDriver self-assigned this Aug 14, 2023
@YakDriver
Member

YakDriver commented Sep 20, 2023

We are regularly monitoring memory now and have managed to reduce memory usage to an extent. For future travelers, here is a summary.

What we've done

For this issue, we have set up regular monitoring and measured the following:

Performance changes from latest version (v5.17.0) to main
Allocated memory: 1.7% (5744.1MB to 5841.2MB) (Worse)  
In-use memory: 9.4% (24.0MB to 26.2MB) (Worse) (wide-fluctuations normal)
CPU time: 4.8% (29.0s to 30.4s) (Worse)

What remains to be done

See #33553.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 21, 2023