Add available memory state for hostmetrics receiver #7417

Open
rogercoll opened this issue Jan 28, 2022 · 34 comments
Labels
enhancement (New feature or request) · receiver/hostmetrics · waiting-for:semantic-conventions (Waiting on something on semantic-conventions to be stabilized)

Comments

@rogercoll
Contributor

rogercoll commented Jan 28, 2022

The current Linux memory states provided by the hostmetricsreceiver might not be the most appropriate ones for an end user. The available memory states for Linux are:

[buffered, cached, free, slab_reclaimable, slab_unreclaimable, used]

All of those values are retrieved with gopsutil, which reads them from the /proc/meminfo file.

A user might think that the free value represents the amount of memory that the system can still use, but free (MemFree) is only a count of how many pages are free in the buddy allocator; there are many ways pages can be returned to the buddy allocator when memory is needed. An estimate of the memory that is actually available is provided by the Linux kernel (>3.16) as MemAvailable in /proc/meminfo.

Can we get the MemAvailable field from the current states?

No, it is an independent field that cannot be computed from the current values. Gopsutil provides this value regardless of the kernel version, as it can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the "low" watermarks from /proc/zoneinfo.
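
For reference, a minimal sketch using gopsutil v3 (the library the scraper relies on); this is illustrative only, not the receiver's actual code:

```go
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/v3/mem"
)

func main() {
	// On Linux, gopsutil parses /proc/meminfo (and falls back to an estimate
	// based on /proc/zoneinfo watermarks on kernels without MemAvailable).
	vm, err := mem.VirtualMemory()
	if err != nil {
		panic(err)
	}
	// Free only counts pages sitting in the buddy allocator; Available is the
	// kernel's estimate of memory usable by applications without swapping.
	fmt.Printf("total:     %d bytes\n", vm.Total)
	fmt.Printf("free:      %d bytes\n", vm.Free)
	fmt.Printf("available: %d bytes\n", vm.Available)
	fmt.Printf("used:      %d bytes (total - free - buffers - cached)\n", vm.Used)
}
```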

Repercussion

The user will be able to know the actual available memory of the system.

Regarding the sum(state) == total assumption: it is already broken, and quite difficult to achieve, because some of the memory state values overlap with others, for example the cached state. The assumption would be defined as:

total = buffered + cached + free + used + slab_reclaimable + slab_unreclaimable

But the used value is computed by gopsutil as: total - free - buffered - cached

Then:

total = buffered + cached + free + (total - free - buffered - cached) + slab_reclaimable + slab_unreclaimable

total = total + slab_reclaimable + slab_unreclaimable

The sum(state) will always be higher than the total.
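
As an illustrative example (hypothetical numbers): on a 16 GiB host with 0.5 GiB of slab_reclaimable and 0.5 GiB of slab_unreclaimable, sum(state) = 16 GiB + 1 GiB = 17 GiB, roughly 6% above the real total.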

(The following proposals do not fix the sum assumption; they only expose more memory information to the user.)

Proposal 1

Add a new memory state for Linux to provide the available memory. State name: available

Proposal 2

Change the used state to: total - available
This proposal requires adding the memory.limit metric, as we would no longer be able to compute the total memory from the states. Currently, we can get the total amount of memory with the following PromQL query: scalar(sum without (state) (system_memory_usage{state =~ "free|used|cached|buffered"}))

@jpkrohling
Member

cc @dmitryax

@bogdandrutu
Member

/cc @dashpole I think you are using this as well

@quentinmit
Contributor

I don't think either of those proposals addresses the problem of the values not adding up. I'd like to add proposal 3: fix the set of states to exactly cover the memory on the system. If there are states that double-count memory, they should be separate metrics. As inspiration, here are the non-overlapping states from Munin and how it calculates them from /proc/meminfo:

apps (MemTotal-MemFree-Buffers-Cached-Slab-PageTables-Percpu-SwapCached)
buffers (Buffers)
swap (SwapTotal-SwapFree)
cached (Cached-Shmem)
free (MemFree)
shmem (Shmem)
slab_cache (Slab)
swap_cache (SwapCached)
page_tables (PageTables)
per_cpu (Percpu)
vmalloc_used (VmallocUsed)

(I'm not sure if it makes sense to include swap; I'm guessing that with the current set of states we're not including swap? You could leave that out and I think the remaining states would represent total physical memory.)

I think the main difference between this and proposal 2 is that here we're changing the definition of used (what Munin calls apps) instead of changing the definition of free.
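
For illustration, a minimal Go sketch (independent of gopsutil and the receiver) that derives the non-overlapping buckets listed above straight from /proc/meminfo; fields a kernel doesn't expose (e.g. Percpu on older kernels) simply read as zero here:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readMeminfo parses /proc/meminfo, converting the kB values to bytes.
func readMeminfo() (map[string]uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info := make(map[string]uint64)
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Lines look like "MemTotal:       16384256 kB".
		fields := strings.Fields(s.Text())
		if len(fields) < 2 {
			continue
		}
		kb, err := strconv.ParseUint(fields[1], 10, 64)
		if err != nil {
			continue
		}
		info[strings.TrimSuffix(fields[0], ":")] = kb * 1024
	}
	return info, s.Err()
}

func main() {
	m, err := readMeminfo()
	if err != nil {
		panic(err)
	}
	// Munin-style non-overlapping buckets (swap-related ones omitted).
	apps := m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"] -
		m["Slab"] - m["PageTables"] - m["Percpu"] - m["SwapCached"]
	fmt.Println("apps:       ", apps)
	fmt.Println("buffers:    ", m["Buffers"])
	fmt.Println("cached:     ", m["Cached"]-m["Shmem"])
	fmt.Println("free:       ", m["MemFree"])
	fmt.Println("shmem:      ", m["Shmem"])
	fmt.Println("slab_cache: ", m["Slab"])
	fmt.Println("swap_cache: ", m["SwapCached"])
	fmt.Println("page_tables:", m["PageTables"])
	fmt.Println("per_cpu:    ", m["Percpu"])
}
```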

@rogercoll
Contributor Author

@quentinmit I really like your proposal to fix the sum assumption: it provides much more information to the user, and we avoid having overlapping states.

I tested it without the swap states, as they are covered by the paging scraper, and the sum of the usage values is correct. Nonetheless, I have a couple of concerns:

  • The scraper relies on gopsutil for those metrics, and the library does not currently provide the percpu state (I can do the PR)
  • What about the "available" memory state that the kernel provides (MemAvailable)? (available != total - apps)

In my opinion, the available (or non-available) memory state is a very useful metric when defining alerts. The used (or apps) metric should not be understood as non-available memory; the latter is usually quite a bit higher.

@rogercoll
Contributor Author

rogercoll commented Apr 26, 2022

Proposal 3

Another possible solution would be to add a new memory metric just for this purpose, as making any change to the current states would break the sum assumption.

The system memory scraper would generate the following metrics:

- system.memory.usage: state attributes
- system.memory.utilization: disabled by default, state attributes
- system.memory.available: no state attributes

I would appreciate any opinions on this.

@rogercoll
Contributor Author

Whether you use the used state or total - available memory can be truly significant for alerting systems. I ran a small test to check the average difference between the two values while stressing the memory:

[figure: chart comparing used with total - available during the memory stress test]

The average difference is around 7-8%. By providing both values we let users choose whichever works best for them.

@TylerHelmuth
Member

Another possible solution would be to add a new memory metric only for this purpose

Is this the same as your proposed solution 1?

I like the idea of adding a new metric; it allows those who currently rely on used to keep using it, while providing a more accurate measure for those who want it.

@rogercoll
Contributor Author

@TylerHelmuth No, thanks for pointing that out. It would be a new proposal, let me update the comment.

Proposals 1 and 2 are based on the "state" attribute used in the usage and utilization metrics. Proposal 3 doesn't modify the state attribute or the current metrics; instead, it adds a new one.

@dmitryax
Member

Adding another metric system.memory.available may be a good solution to one of the problems described in this issue. But it's still unclear how to solve the total mismatch issue, and adding this metric may also confuse end users wondering what the difference is between system.memory.available and system.memory.usage.

Potentially, we can solve both problems by introducing the concept of optional metric attribute values in the metrics builder. The attribute values could be enabled or disabled in metadata.yaml (defaults) and in user configs. If an attribute value is disabled, data points with that attribute value won't be emitted.

With this functionality added to the metrics builder, we would have the following sets of state attribute values in this receiver (a rough sketch follows the list below):

  • Enabled by default: buffered, cached, free and used. Sums up to a correct total value.
  • Disabled by default: slab_reclaimable, slab_unreclaimable and available. Optional values that user can enable.
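
A purely hypothetical metadata.yaml sketch of what that could look like (the per-value enabled flag does not exist in the metrics builder today, and the surrounding structure only loosely follows the current attribute definitions):

```yaml
attributes:
  state:
    description: Breakdown of memory usage by type.
    type: string
    enum:
      - buffered            # enabled by default
      - cached              # enabled by default
      - free                # enabled by default
      - used                # enabled by default
      - slab_reclaimable    # proposed: disabled by default
      - slab_unreclaimable  # proposed: disabled by default
      - available           # proposed: disabled by default
```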

What do you think?

@bogdandrutu @djaglowski please share your opinion on the additional feature to the metrics builder

@djaglowski
Member

I think we should aim for a design that adheres to the specification's aggregation expectations (i.e., data points sum to a meaningful total). Supporting metrics that violate these expectations seems like the wrong approach to me.

There seem to be two layers to the problem here:

  1. What is the correct data model for this situation?
  2. How do we migrate from a flawed data model to the correct one?

I don't have a strong opinion on the first question, but in regards to the second, I think we basically have to decide whether to fix it, or to deprecate and replace it. The right approach depends on the stability level. In my opinion, a per-metric stability level is necessary in order to have a clear approach (e.g. alpha -> beta -> stable -> deprecated). If this metric is less than stable, then we should fix it. If it is "stable" but realized to be fundamentally flawed in its design, we should deprecate it and add one or more new metrics that amount to a more accurate model.

@dmitryax
Member

dmitryax commented May 11, 2022

I think we should aim for a design that adheres to the specification's aggregation expectations (ie. data points sum to a meaningful total). Supporting metrics that violate these expectations seems like the wrong approach to me.

That's true, we should always aim to have meaningful total values. By proposing optional attribute values, I mean that the sum of the attribute values enabled by default should represent the most reasonable total value. At the same time, I can think of use cases where a user wants to disable some attribute values, or enable other optional ones, and build their own meaningful sums from them. In that case, moving slab_reclaimable and slab_unreclaimable to a different metric in order to get a correct total value would significantly reduce the flexibility of using them, because today they are part of the same memory usage metric.

@quentinmit
Copy link
Contributor

I think it's okay to keep SlabReclaimable and SlabUnreclaimable in the regular summable metric; their sum is just Slab. So you can include either slab, or slab_reclaimable and slab_unreclaimable, but not both. The other states are the problematic ones.

@github-actions
Contributor

github-actions bot commented Nov 9, 2022

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Nov 9, 2022
@lootek
Contributor

lootek commented Dec 16, 2022

I think this issue is still valid and worth addressing (@dmitryax), as we're clearly missing a system.memory.available metric.

So I'd like to reopen the discussion, since it seems to me that adding system.memory.available is what all of us agree on (especially if it's just a matter of exposing a value gopsutil already provides).

As for fixing the flawed sum, I like this approach:

With this functionality added to the metrics builder, we would have the following sets of state attribute values in this receiver:

  • Enabled by default: buffered, cached, free and used. Sums up to a correct total value.
  • Disabled by default: slab_reclaimable, slab_unreclaimable and available. Optional values that user can enable.


@lootek
Contributor

lootek commented Feb 15, 2023

@fatsheep9146 maybe I could take this one and try implementing those changes?

@fatsheep9146
Contributor

@lootek It's yours!

@lootek
Contributor

lootek commented Feb 15, 2023

@fatsheep9146 That's great, thanks! I'll follow up soon :)

@github-actions github-actions bot added the Stale label Aug 1, 2023
@dmitryax dmitryax removed the Stale label Aug 1, 2023
@ItsLastDay
Contributor

Hi folks. I'm interested in having available memory. Below I summarize my understanding of what has been done, which decisions have been made and which are still pending approval, and what the next steps could be.

What is done

Mainly, discussions on two issues:
A. The values of the system.memory.usage metric's state attribute do not sum to the total memory of the system.
B. How to start exposing the Linux kernel's available memory to users?

My impression is that these issues can be solved independently. Hence, I'll describe them in detail separately.

Issue A: sum over state is bad

The problematic states are slab_reclaimable and slab_unreclaimable: [1]. Because of them, the sum over the state attribute is higher than the total memory. The "slab" is a term internal to the Linux kernel: [2].

dmitryax@ proposed having the slab_... states disabled by default. lootek@ implemented this proposal in #19149.

The implementation sparked more discussion: e.g. #19149 (comment) by djaglowski@. He pointed out that if we write down all the mathematical relationships between the memory states, we'll be able to converge faster on a correct data model.
The implementation was rejected (or abandoned - either way, it was not merged).

As a result, another proposal took precedence: introduce a new metric system.memory.slab, and deprecate the slab states from system.memory.usage. rogercoll@ created an issue to add such a metric to the specification first: open-telemetry/semantic-conventions#531. The issue has had no comments since March.

Issue B: expose available memory from the Linux kernel

rogercoll@ showed that available is really a different state: [3]. He also showed that available is unique to the Linux kernel: [4].

dmitryax@ proposed to add a separate metric system.linux.memory.available. He asked other maintainers for agreement before starting the implementation. This is where the story ends so far.

Next steps

Long-term

Based on the above, I think the main step is to understand the correct data model (quoting djaglowski@'s #7417 (comment)), i.e. to:

  1. Understand for which OSes (operating systems) we want to provide the system.memory.usage metric
  2. Write down the memory states that we want to track, independent of the OS
    • currently we use gopsutil's nomenclature: code
    • my opinion on this: we really care about free and used. Everything else (like buffered) is OS-specific and can go into OS-specific metrics. However, I don't know whether it's common practice in OTel semconv to split metrics by OS.
  3. Write down the mathematical relationships between those states, like in [receiver/hostmetrics] Add available memory state (optionally) #19149 (comment)
  4. Look at the actual OSes from step 1. See how our states map to theirs. Iterate on our model further.

IMO, this will help us build a long-term solution that is robust to library changes and OS changes. However, such a solution might be hard to use: e.g. users would query system.memory.usage for general memory usage, and then query system.linux.memory.some_linux_specific_metric for details. I'm not sure I'm qualified enough to reason about this, because I haven't talked to any OTel customers. Tagging @dmitryax @braydonk from the system semantic conventions WG.

Short-term

Alternatively, if we want to build a short-term solution, which still looks quite appealing, the next steps are:

For issue A:

  1. Approve Proposal to add system.memory.slab  semantic-conventions#531
  2. Implement the approved specification
  3. Deprecate the slab-related states in system.memory.usage

For issue B:

  1. Create a semconv proposal, similar to Proposal to add system.memory.slab  semantic-conventions#531, to add system.linux.memory.available
  2. Approve the proposal
  3. Implement it

As a result, both issues would be solved.

[1] #7417 (comment)
[2] open-telemetry/semantic-conventions#531
[3] #7417 (comment)
[4] #19149 (comment)

@ItsLastDay
Contributor

ItsLastDay commented Aug 15, 2023

I propose to walk the "short-term" path. I created open-telemetry/semantic-conventions#257 to add system.linux.memory.available.

@ItsLastDay
Contributor

open-telemetry/semantic-conventions#257 is done. Next step: I'll implement system.linux.memory.available in hostmetricsreceiver.

codeboten pushed a commit that referenced this issue Nov 23, 2023
…ric (#27247)

Implement a new metric `system.linux.memory.available`,
which I defined in
open-telemetry/semantic-conventions#257.

**Link to tracking Issue:**
#7417

**Testing:** added a check to an existing unit test. I had to refactor
the test a bit: it assumed that a certain metric will be at position 0,
which is not true now.

**Documentation:** Added note in `metadata.yaml`

---------

Co-authored-by: Curtis Robert <92119472+crobert-1@users.noreply.github.com>
@ItsLastDay
Contributor

ItsLastDay commented Nov 27, 2023

I implemented all steps for "Issue B" from #7417 (comment).

So, answering the original question from this bug - "Can we get the MemAvailable field from the current states?" - the answer is now yes. I propose to close this bug as done.

Feel free to open a separate thread for "Issue A" from #7417 (comment). I see that open-telemetry/semantic-conventions#531 is still open, and IMO it's not critical.
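
For anyone who wants to consume it: as far as I can tell the new metric is optional (disabled by default), so it has to be enabled explicitly in the memory scraper config. A minimal sketch, assuming a collector-contrib release that includes #27247:

```yaml
receivers:
  hostmetrics:
    scrapers:
      memory:
        metrics:
          system.linux.memory.available:
            enabled: true
```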


@github-actions github-actions bot added the Stale label Jan 29, 2024
@quentinmit
Contributor

Feel free to open a separate thread for "Issue A" from #7417 (comment). I see that open-telemetry/semantic-conventions#531 is still open, and IMO it's not critical.

The linked semconv issue doesn't address this bug - it doesn't fix the free state to represent something meaningful, and it doesn't propose fixing anything about the existing metric. I think we should keep using this issue for that, given that most of the comments are about it.

@github-actions github-actions bot removed the Stale label Jan 30, 2024

@rogercoll
Contributor Author

I would keep this issue open too; the existing memory metrics won't be aligned with the semantic conventions until the slab state is replaced.

SemConv progress update: open-telemetry/semantic-conventions#1078


@github-actions github-actions bot added the Stale label Jul 29, 2024
@rogercoll
Contributor Author

/unstale The system.linux.memory.slab metric will be available in the next semconv release: open-telemetry/semantic-conventions#1078

@github-actions github-actions bot removed the Stale label Jul 30, 2024

@github-actions github-actions bot added the Stale label Sep 30, 2024
@mx-psi mx-psi added the waiting-for:semantic-conventions Waiting on something on semantic-conventions to be stabilized label Nov 20, 2024
@github-actions github-actions bot removed the Stale label Nov 21, 2024