
need k8s.pod.memory.working_set_memory_limit_utilization metric #33562

Open · halleystar opened this issue Jun 14, 2024 · 4 comments

Labels: discussion needed (Community discussion needed), enhancement (New feature or request), receiver/kubeletstats, Stale

Comments

@halleystar

Component(s)

receiver/kubeletstats

Is your feature request related to a problem? Please describe.

We need k8s.pod.memory.working_set_memory_limit_utilization, because Kubernetes pod OOM behavior is determined by working_set_memory / memory limit, but only k8s.pod.memory_limit_utilization is available today.
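
A minimal kubeletstats receiver configuration sketch for context (endpoint and interval values are illustrative): `k8s.pod.memory_limit_utilization` is the existing optional metric, while the commented-out `k8s.pod.memory.working_set_memory_limit_utilization` is the hypothetical metric this issue asks for and is not currently implemented.

```yaml
receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: serviceAccount
    endpoint: ${env:K8S_NODE_NAME}:10250
    metrics:
      # Existing optional metric: pod memory usage relative to the configured memory limit
      k8s.pod.memory_limit_utilization:
        enabled: true
      # Requested (hypothetical, not implemented): working_set relative to the memory limit,
      # which is the ratio the OOM killer effectively acts on
      # k8s.pod.memory.working_set_memory_limit_utilization:
      #   enabled: true
```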

Describe the solution you'd like

need k8s.pod.memory.working_set_memory_limit_utilization metric

Describe alternatives you've considered

need k8s.pod.memory.working_set_memory_limit_utilization metric

Additional context

No response

halleystar added the enhancement (New feature or request) and needs triage (New item requiring triage) labels on Jun 14, 2024
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

halleystar changed the title from "need k8s.pod.memory.working_set_memory_limit_utilization" to "need k8s.pod.memory.working_set_memory_limit_utilization metric" on Jun 14, 2024
@ChrsMark
Member

Hey @halleystar! Isn't the container's memory_working_set the crucial one here instead of the Pod's one? The OOM killer is triggered based on the container's memory_working_set metric I think, right?

Since we have multiple memory metrics, I guess it would make sense to have their `*memory*utilization` equivalents. This one also has some good justification, but I would love others' opinions here as well.

We should also consider discussing this as part of open-telemetry/semantic-conventions#1032.

@halleystar
Author

Yes, you are right, it should be the container's memory_working_set. I think we should add the corresponding utilization metric at the same time; otherwise we would have to expose the container's memory request and limit ourselves and compute the usage ratio from them to observe whether an OOM is about to occur.
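
If the container-level working set is the signal that matters, a similar sketch (the limit/request utilization metric names are assumed to match the current kubeletstats documentation; the working-set variant remains hypothetical and is not implemented):

```yaml
receivers:
  kubeletstats:
    metrics:
      # Existing optional container-level utilization metrics (usage vs. limit / request)
      k8s.container.memory_limit_utilization:
        enabled: true
      k8s.container.memory_request_utilization:
        enabled: true
      # Proposed (hypothetical, not implemented): container working_set relative to the limit
      # k8s.container.memory.working_set_memory_limit_utilization:
      #   enabled: true
```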

Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Aug 21, 2024