
Automatically set GOMAXPROCS in sharedmain #1583

Closed
julz opened this issue Aug 5, 2020 · 4 comments · Fixed by #1585

julz commented Aug 5, 2020

Expected Behavior

GOMAXPROCS for controllers is correctly set when deployed in a container with a cpu limit.

Actual Behavior

Container GOMAXPROCS are set as if they have access to all the cpus on the host

Steps to Reproduce the Problem

  1. Deploy e.g. knative serving
  2. Check GOMAXPROCS of e.g. autoscaler (which has cpu limit of 300m = less than 1 cpu)
  3. See that GOMAXPROCS is set to number of CPUs on host :-( (on my Minikube with 10 CPUs, it's set to 10).

Some context/other threads:

Proposed solution:

We could use something like https://github.com/uber-go/automaxprocs to fix this. Or we could explicitly set the GOMAXPROCS env variable in our yamls to match the configured quotas, but that won't work well if operators change the limits. Or we could wait for upstream Go to fix this (see linked issue above), but it looks like that fix is no longer landing soon.

I personally think https://github.com/uber-go/automaxprocs seems like a reasonable approach here, since it copes better with someone using the operator to bump CPU quotas, but would like to see if anyone has concerns with that before PRing :-)

/assign @vagababov @markusthoemmes mattmoor

@vagababov
Contributor

well, since you assigned it to me — I guess I'll just do it? 😃
Using automaxprocs until Go itself is fixed seems reasonable enough.

@julz
Member Author

julz commented Aug 5, 2020

ha! well I have a PR ready; I was assigning so people could tell me it's a bad idea before I send it :)

@julz
Member Author

julz commented Aug 5, 2020

Pushed the PR :) #1585

@vagababov
Contributor

💭


4 participants