
Document resource requests for core components #2085

Closed
minrk opened this issue Mar 8, 2021 · 3 comments · Fixed by #2226

Comments


minrk commented Mar 8, 2021

We used to set default resource requests on most pods, but after #2034 we no longer set any. This means it falls to documentation to cover resource requests and limits on our core pods: what reasonable values might be, and when they are a good idea.

Tasks:

  • explain a bit of background about resource reservations and when they are useful
  • identify all components that might want resource requests
  • identify reasonable values for key pods (mainly hub and proxy) and ranges based on load (e.g. using mybinder.org as a reference point for high load)
@consideRatio

While writing the changelog for 1.0.0, I realize we must address this issue, as #2034 stopped setting default requests on containers.

I'm still quite ambivalent about this in general, but I think not having them set adds quite a bit of complexity, which at least needs to be mitigated by guiding documentation and suggestions.


consideRatio commented May 22, 2021

In #2034 the default requests were removed, which prompts some action points:

  • Documentation about the new resource requests/limits added/updated
  • Default values re-considered again before we lock something in.
    • The key motivation for the choices to be clarified

Overview of pods and their requests

| Configuration | Pod | cpu/memory requests before 1.0.0 | Note |
|---|---|---|---|
| `hub.resources` | hub | 200m, 510Mi | JupyterHub and KubeSpawner run here. Can manage with small resources, but could peak up to 1 CPU during very heavy load of simultaneous users starting and stopping servers. |
| `proxy.chp.resources` | proxy | 200m, 510Mi | The container runs configurable-http-proxy. Will require small amounts of resources. |
| `proxy.traefik.resources` | autohttps | - | The container performs TLS termination only. Will require small amounts of resources. |
| `proxy.secretSync.resources` | autohttps | - | The sidecar container is a watchdog, watching a file for changes and updating a k8s Secret with those changes. Will require minimal resources. |
| `scheduling.userScheduler.resources` | user-scheduler | 50m, 256Mi | The container runs a kube-scheduler binary with custom configuration to schedule the user pods. Will require a small amount of resources. |
| `scheduling.userPlaceholder.resources` | user-placeholder | - | This is an explicit override of the default behavior of reusing the values in `singleuser.cpu\|memory.guarantee\|limit`. It can be useful to increase this to a multiple of a typical real user's requests if you want fewer user-placeholder pods, reducing pod scheduling complexity. |
| `prePuller.resources` | hook- / continuous-image-puller | 0, 0 | This pod's containers all run echo or pause commands as a trick to pull the images. Will require minimal resources. |
| `prePuller.hook.resources` | hook-image-awaiter | 0, 0 | The container just polls the k8s api-server. Will require minimal resources. |
| `singleuser.cpu\|memory.guarantee\|limit` | jupyter-username | 0, 1G | The configuration syntax is different because it is native to the Spawner base class rather than Kubernetes. It is commonly useful to guarantee a certain amount of memory, rather than CPU, to help users share CPU with each other. |
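As a sketch only (not a recommendation), the pre-1.0.0 request values from the table could be restored explicitly via the chart's values.yaml; all numbers here are the old defaults, not tuned suggestions:

```yaml
# Sketch: explicitly setting the pre-1.0.0 request values via Helm values.
# These numbers are the old defaults from the table above, not recommendations.
hub:
  resources:
    requests:
      cpu: 200m
      memory: 510Mi
proxy:
  chp:
    resources:
      requests:
        cpu: 200m
        memory: 510Mi
scheduling:
  userScheduler:
    resources:
      requests:
        cpu: 50m
        memory: 256Mi
singleuser:
  memory:
    guarantee: 1G  # Spawner-native syntax, becomes the pod's memory request
```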

Deliberations

Should we ever set limits?

  • Answer: No

I see no benefit, only drawbacks.

A drawback: Having a limit will cause a resource request to be assumed to match the limit if none is explicitly provided. This can cause very high requests if the request is unset.
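A minimal sketch of that pitfall (hypothetical manifest fragment, not from this thread): a container spec that sets only a limit gets an implicit request equal to that limit:

```yaml
# Hypothetical container spec: only a memory limit is set.
resources:
  limits:
    memory: 1Gi
# With no requests section, Kubernetes defaults requests.memory to the
# limit (1Gi), so the pod reserves far more than it may actually need.
```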

Should we ever set requests?

  • Answer: ?

I see a benefit, and am vague about the drawbacks.

Considering this will involve understanding that:

  • LimitRange resources can influence requests/limits by:
    • requiring them to be within a min/max range
    • requiring them to have a certain ratio between them
    • providing default values when omitted
  • 0m CPU requests are in practice 2m.
  • When you run a Pod on a Node, the Pod itself takes an amount of system resources. These resources are additional to the resources needed to run the container(s) inside the Pod.

  • PodOverhead can make pods get a little extra resources on top of what they request. This shouldn't influence our discussion much, as we can't count on it being set.
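The LimitRange behaviors listed above can be sketched in a single manifest (all names and values here are hypothetical):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limit-range   # hypothetical name
spec:
  limits:
    - type: Container
      min:
        cpu: 10m              # requests/limits must be at least this
      max:
        cpu: "2"              # ... and at most this
      maxLimitRequestRatio:
        cpu: "4"              # limit may be at most 4x the request
      defaultRequest:
        cpu: 100m             # request applied when a request is omitted
      default:
        cpu: 500m             # limit applied when a limit is omitted
```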

@manics
Copy link
Member

manics commented May 22, 2021

This is nice to have for 1.0.0, but my feeling is it's not a blocker.

If you're still concerned then how about we document the previous limits, either in the docs as a placeholder for a more detailed explanation or in the release notes?
