
Rook should be cgroup aware #13815

Closed

uhthomas (Contributor) opened this issue Feb 26, 2024 · 0 comments · Fixed by #13816
uhthomas commented Feb 26, 2024

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:

Go is not cgroup aware, which means Rook will be throttled in containerised Linux environments such as Kubernetes.
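For illustration, a minimal sketch of the mismatch (assuming a container using cgroup v2, where the enforced quota is exposed at `/sys/fs/cgroup/cpu.max`; that path is an assumption and only exists in "unified" mode):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strings"
)

func main() {
	// Go derives GOMAXPROCS from the host's visible thread count, not the
	// cgroup quota, so both values track the machine, not the container.
	fmt.Println("NumCPU:    ", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// Under cgroup v2, the enforced CPU quota is exposed as "<quota> <period>",
	// e.g. "200000 100000" for a 2-CPU limit.
	if b, err := os.ReadFile("/sys/fs/cgroup/cpu.max"); err == nil {
		fmt.Println("cpu.max:   ", strings.TrimSpace(string(b)))
	}
}
```

On a quota-limited container, GOMAXPROCS exceeds what `cpu.max` allows, and the extra runnable threads burn through the CFS quota and get throttled.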

The image below shows two halves: the first with a 13600K (20 threads) and the second with an EPYC 7763 (128 threads). The first half throttles at about 5ms, and the second at about 40ms.

[Image: CPU throttle time, 13600K (first half) vs EPYC 7763 (second half)]

Expected behavior:

Rook should not throttle when it has sufficient CPU. automaxprocs can be used to set GOMAXPROCS to match the container's CPU quota automatically.
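A minimal sketch of the fix using `go.uber.org/automaxprocs`, which adjusts GOMAXPROCS at init time from the container's cgroup CPU quota:

```go
package main

import (
	"fmt"
	"runtime"

	// Blank import for its side effect: at startup it reads the container's
	// cgroup CPU quota and caps GOMAXPROCS to match.
	_ "go.uber.org/automaxprocs"
)

func main() {
	// With a CPU limit of 2, this prints 2 even on a 128-thread EPYC host,
	// so the Go scheduler no longer overruns the quota and gets throttled.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```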

How to reproduce it (minimal and precise):

Set CPU limits for the operator and observe that the container throttles, as in the sketch below.
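A hypothetical fragment of the operator Deployment spec (container and Deployment names may vary by install method):

```yaml
# rook-ceph-operator Deployment fragment: with a 500m limit on a many-core
# node, stock Go sets GOMAXPROCS to the node's thread count, and the CFS
# quota throttles the container.
spec:
  template:
    spec:
      containers:
        - name: rook-ceph-operator
          resources:
            requests:
              cpu: 100m
            limits:
              cpu: 500m
```

Throttling can then be observed in the cAdvisor metric `container_cpu_cfs_throttled_seconds_total` for the operator pod.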

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary

Logs to submit:

  • Operator's logs, if necessary

  • Crashing pod(s) logs, if necessary

    To get logs, use kubectl -n <namespace> logs <pod name>
    When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI.
    Read GitHub documentation if you need help.

Cluster Status to submit:

  • Output of kubectl commands, if necessary

    To get the health of the cluster, use kubectl rook-ceph health
    To get the status of the cluster, use kubectl rook-ceph ceph status
    For more details, see the Rook kubectl Plugin

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod):
  • Storage backend version (e.g. for ceph do ceph -v):
  • Kubernetes version (use kubectl version):
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
uhthomas added the bug label Feb 26, 2024
uhthomas added a commit to uhthomas/rook that referenced this issue Feb 26, 2024
Go is not cgroup aware and by default will set GOMAXPROCS to the number
of available threads, regardless of whether it is within the allocated
quota. This behaviour causes a high amount of CPU throttling and degraded
application performance.

Fixes: rook#13815

Signed-off-by: Thomas Way <thomas@6f.io>
mergify bot pushed a commit that referenced this issue Feb 26, 2024
Go is not cgroup aware and by default will set GOMAXPROCS to the number
of available threads, regardless of whether it is within the allocated
quota. This behaviour causes a high amount of CPU throttling and degraded
application performance.

Fixes: #13815

Signed-off-by: Thomas Way <thomas@6f.io>
(cherry picked from commit 97e3e69)

# Conflicts:
#	go.sum
travisn pushed a commit that referenced this issue Feb 26, 2024
Go is not cgroup aware and by default will set GOMAXPROCS to the number
of available threads, regardless of whether it is within the allocated
quota. This behaviour causes a high amount of CPU throttling and degraded
application performance.

Fixes: #13815

Signed-off-by: Thomas Way <thomas@6f.io>
(cherry picked from commit 97e3e69)