
[v0.51] cgroups: fix memory usage on cgroup v1 #2157

Merged

Conversation

giuseppe
Member

Calculate the memory usage on cgroup v1 using the same logic as cgroup v2.

Since there is no single "anon" field, calculate the memory usage by summing the two fields "total_active_anon" and "total_inactive_anon".

Closes: #1642

Closes: https://issues.redhat.com/browse/RHEL-16376

[NO NEW TESTS NEEDED]

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
(cherry picked from commit 1a9d45c)
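
For context, here is a minimal sketch (in Go, and not the actual c/common patch) of the approach described above: since cgroup v1 exposes no single "anon" counter, the anonymous memory usage is computed by summing the total_active_anon and total_inactive_anon fields of memory.stat. The anonUsageV1 helper name and the memory.stat path used below are illustrative assumptions only.

```go
// A minimal sketch of summing anonymous memory usage on cgroup v1,
// assuming the controller is mounted at /sys/fs/cgroup/memory.
// This is not the actual c/common implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// anonUsageV1 reads a cgroup v1 memory.stat file and returns
// total_active_anon + total_inactive_anon in bytes.
func anonUsageV1(statPath string) (uint64, error) {
	f, err := os.Open(statPath)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	var usage uint64
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each memory.stat line is "<key> <value>".
		fields := strings.Fields(scanner.Text())
		if len(fields) != 2 {
			continue
		}
		switch fields[0] {
		case "total_active_anon", "total_inactive_anon":
			v, err := strconv.ParseUint(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			usage += v
		}
	}
	return usage, scanner.Err()
}

func main() {
	// Example path for the root memory cgroup on a cgroup v1 host.
	usage, err := anonUsageV1("/sys/fs/cgroup/memory/memory.stat")
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Printf("anon memory usage: %d bytes\n", usage)
}
```

On cgroup v2 the equivalent number comes directly from the single "anon" field in memory.stat, which is why the two code paths can share the same logic.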

Contributor

openshift-ci bot commented Sep 10, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: giuseppe

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@rhatdan
Member

rhatdan commented Sep 10, 2024

LGTM

@TomSweeneyRedHat
Member

I've put the merge hold on this, as this branch is too far back. We need this fix in c/common v0.57 and Podman v4.9-rhel. Then please also verify that it's in the Podman v5.2-rhel branch and c/common v0.60, @giuseppe.

@giuseppe
Member Author

Both versions already have this change.

@TomSweeneyRedHat
Member

I've removed the do-not-merge/hold label. I've verified that the fix is in the v0.57 branch, which we need for Podman v4.9-rhel as @giuseppe pointed out. We still need this backport on this branch of c/common so it can be used by Podman v4.4.1-rhel for https://issues.redhat.com/browse/ACCELFIX-285 and https://issues.redhat.com/browse/ACCELFIX-286.

The error that we're running into here is the same as #2154.

@TomSweeneyRedHat
Member

@giuseppe #2154 has merged. I think if you rebase, your test failure will go away. Once this is merged, I'll spin a new release and merge it into the Podman v4.4 branch, and then we can kill off a number of Jira cards at once.

@TomSweeneyRedHat
Member

Partially Fixes: https://issues.redhat.com/browse/RHEL-59127

@rhatdan
Member

rhatdan commented Sep 18, 2024

/lgtm

@giuseppe force-pushed the fix-memory-usage-cgroup-v-0.51 branch from 042ed6e to 00ff6a8 on September 18, 2024 at 14:10
@openshift-ci bot removed the lgtm label on Sep 18, 2024
@TomSweeneyRedHat
Member

LGTM

@TomSweeneyRedHat
Member

/lgtm

@TomSweeneyRedHat
Member

OK, I'm losing my mind a bit here. When I opened this up, it was all happy green test buttons. I added an LGTM and then a /lgtm, and then it showed a failure on a netavark network test that I've not seen before. @edsantiago, do we have tests that run after a /lgtm? Anyway, I've rerun the failed tests in the hope that it was a flake.

@edsantiago
Member

Weird. I see tide in the merge pool and also a failed f36 test. Let's see what tide does...

@TomSweeneyRedHat
Member

And after restarting, I hit a genuine network flake, so I restarted again.

@edsantiago merged commit e6b2edd into containers:v0.51 on Sep 18, 2024
5 checks passed
@edsantiago
Member

All green, but still in the tide pool. I haven't reviewed, but I'll trust y'all. Merged.

@TomSweeneyRedHat
Member

TY @edsantiago! I'll create the release tomorrow; time to get off this contraption now.
