Podman stats showing wrong memory usage #1642
Comments
Can you reproduce it with any image? How have you created the container?
@giuseppe Yes, so far all images are affected (the postgres image, the prometheus image, ...); the containers have been created using native podman commands.
That is expected output. The file you've just created is in the memory cache, and the kernel accounts for that in the reported usage. I've tried your same command on cgroup v1, and the cgroup reports the following usage:
This memory is reclaimed if the container needs more; in fact, you can see it is only cache:
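To see how much of the reported figure is reclaimable cache, one can read the group's memory.stat directly. A minimal Go sketch, assuming a hypothetical cgroup v1 path (adjust it to the container's actual memory cgroup, e.g. under machine.slice/libpod-<ID>.scope):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Hypothetical path; substitute the container's real memory cgroup.
	f, err := os.Open("/sys/fs/cgroup/memory/mygroup/memory.stat")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Parse "key value" lines into a map.
	stats := map[string]uint64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		if v, err := strconv.ParseUint(fields[1], 10, 64); err == nil {
			stats[fields[0]] = v
		}
	}

	// total_cache is reclaimable page cache; total_rss is the anonymous
	// memory the processes actually need.
	fmt.Printf("cache: %d bytes, rss: %d bytes\n", stats["total_cache"], stats["total_rss"])
}
```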
You can give the kernel a hint about releasing a file with fadvise; e.g. I've tried the following C program:

```c
#include <fcntl.h>

int main() {
    /* Tell the kernel the cached pages of fd 1 are no longer needed. */
    return posix_fadvise(1, 0, 0, POSIX_FADV_DONTNEED) ? 1 : 0;
}
```

and from the container:
and after a while:
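For completeness, the same hint can be given from Go via golang.org/x/sys/unix; a minimal sketch equivalent to the C program above, assuming the target file is passed as the first argument:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: fadvise FILE")
		os.Exit(1)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Offset 0 and length 0 cover the whole file, matching the C example.
	if err := unix.Fadvise(int(f.Fd()), 0, 0, unix.FADV_DONTNEED); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```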
I am closing the issue since Podman is just reporting the information it gets from the kernel, but feel free to comment further.
@giuseppe Thank you very much for your help. I wrongly assumed that podman had the same behavior as docker. This is the excerpt from the documentation:
So in your opinion @giuseppe, what would be the best way to monitor the actual memory usage?
Thanks for the additional info, I'll take another look and compare with Docker.
@giuseppe I just want to add that I checked the Docker source code and they have the following function:

```go
// calculateMemUsageUnixNoCache calculate memory usage of the container.
// Cache is intentionally excluded to avoid misinterpretation of the output.
//
// On cgroup v1 host, the result is `mem.Usage - mem.Stats["total_inactive_file"]`.
// On cgroup v2 host, the result is `mem.Usage - mem.Stats["inactive_file"]`.
//
// This definition is consistent with cadvisor and containerd/CRI.
// * https://github.com/google/cadvisor/commit/307d1b1cb320fef66fab02db749f07a459245451
// * https://github.com/containerd/cri/commit/6b8846cdf8b8c98c1d965313d66bc8489166059a
//
// On Docker 19.03 and older, the result was `mem.Usage - mem.Stats["cache"]`.
// See https://github.com/moby/moby/issues/40727 for the background.
func calculateMemUsageUnixNoCache(mem types.MemoryStats) float64 {
	// cgroup v1
	if v, isCgroup1 := mem.Stats["total_inactive_file"]; isCgroup1 && v < mem.Usage {
		return float64(mem.Usage - v)
	}
	// cgroup v2
	if v := mem.Stats["inactive_file"]; v < mem.Usage {
		return float64(mem.Usage - v)
	}
	return float64(mem.Usage)
}
```

This is the actual link for the file: https://github.com/docker/cli/blob/master/cli/command/container/stats_helpers.go
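The formula above can be exercised in isolation. A small self-contained sketch, with a local struct standing in for Docker's types.MemoryStats and invented numbers:

```go
package main

import "fmt"

// memoryStats is a stand-in for Docker's types.MemoryStats, just enough to
// run the calculation.
type memoryStats struct {
	Usage uint64
	Stats map[string]uint64
}

func memUsageNoCache(mem memoryStats) float64 {
	// cgroup v1: subtract total_inactive_file when it is present
	if v, ok := mem.Stats["total_inactive_file"]; ok && v < mem.Usage {
		return float64(mem.Usage - v)
	}
	// cgroup v2: subtract inactive_file
	if v := mem.Stats["inactive_file"]; v < mem.Usage {
		return float64(mem.Usage - v)
	}
	return float64(mem.Usage)
}

func main() {
	// Invented numbers: 1 GiB reported usage, 900 MiB of it inactive page cache.
	mem := memoryStats{
		Usage: 1 << 30,
		Stats: map[string]uint64{"total_inactive_file": 900 << 20},
	}
	fmt.Printf("working set: %.0f bytes\n", memUsageNoCache(mem))
}
```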
calculate the memory usage on cgroup v1 using the same logic as cgroup v2. Since there is no single "anon" field, calculate the memory usage by summing the two fields "total_active_anon" and "total_inactive_anon".

Closes: containers#1642

[NO NEW TESTS NEEDED]

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Opened a PR: #1643
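A minimal sketch of that cgroup v1 fallback, not the actual project code; the memory.stat values below are invented and assumed to be already parsed into a map:

```go
package main

import "fmt"

// anonUsageV1 applies the cgroup v1 fallback: with no single "anon" counter,
// the anonymous usage is the sum of total_active_anon and total_inactive_anon.
func anonUsageV1(stats map[string]uint64) uint64 {
	return stats["total_active_anon"] + stats["total_inactive_anon"]
}

func main() {
	// Invented memory.stat values.
	stats := map[string]uint64{
		"total_active_anon":   40 << 20,  // 40 MiB
		"total_inactive_anon": 8 << 20,   // 8 MiB
		"total_cache":         900 << 20, // page cache, now excluded from the report
	}
	fmt.Println("anon usage:", anonUsageV1(stats), "bytes")
}
```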
We should probably match Docker's behaviour. Thanks @abdelaziz-ouhammou for diagnosing this.
Hi @giuseppe, since the issue was fixed recently, I assume it exists in the earlier podman 3.4.2 version?
Issue Description
Steps to reproduce the issue
Example:
Describe the results you received
When running `podman stats`, it shows a high memory utilisation for the container; the process is normally using 48 MB.
The only way to fix this is to manually run the following command on the host:
Describe the results you expected
I expect podman stats to be accurate for monitoring the memory usage of containers, but running a backup inside the container or any I/O operation messes up the output.
podman info output
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
No response