Pods per node? Any recommendation? #6287
Hi all, I would like to know if there is any recommendation about the limit of pods per node. I was reading some performance reports on the Kubernetes site, and all the tests used 30 pods per node. I just want to know if someone has experience running nodes with 100 pods or even more.
Many thanks for any advice.
Comments
@jeremyeder Thoughts?
Depends what the pods are doing. If one pod can saturate your physical resources, then all you can run is one pod. If the pods are just sitting there running 'sleep', then you can obviously go a lot higher. We've got tests running with active storage and network I/O in the 100-200 pods per node range, with good results. Kube and OpenShift both default to 40 pods as a maximum because some of the communication between Kube and Docker is still undergoing optimization. Back to my original point, though: any pod limit assumes your hardware can actually support the work being done by those pods and still remain within your business/SLA rules.
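(Editor's note for readers: one quick way to see the maximum a given node is currently configured for is to inspect its reported capacity. A minimal sketch, assuming the `oc` client is installed and using a hypothetical node name:)

```sh
# "node1.example.com" is a placeholder; pick a real name from `oc get nodes`.
# The Capacity section of the output includes a "pods" line with the node's configured maximum.
oc describe node node1.example.com | grep -A 3 'Capacity'
```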
Hi @jeremyeder, I'm asking because we are experiencing deadlocks in the Docker daemon on nodes where the pod limit is 100, so we think there are some performance limitations that are not well documented. We also see a lot of REST requests to Docker from the Kubernetes node.
@roldancer ah, ok. Do you have any more information about that? Yes, the REST traffic between Docker and Kube is what I was referring to, though we really haven't seen the issue at 100 pods. What kind of hardware do you have, and what version of Origin are you running? And how many nodes?
@roldancer what exact kernel version are you using?
Hi @jeremyeder, here is the information about our node; by the way, we are using OSE 3.1. The node has 16 CPUs and 128 GB of memory.
$ more /etc/redhat-release
$ uname -a
$ more /proc/cpuinfo
The deadlock issue is currently being worked on in https://bugzilla.redhat.com/show_bug.cgi?id=1292481. If you'd like, you can subscribe to the bugzilla, but I'll make sure to post a note here as well once we have verified a fix; we're currently testing it. We should know more next week; until then, please use the latest RHEL 7.1.z kernel.
Hi @jeremyeder. Right now @roldancer can't access the BZ you posted. It would be much appreciated if you could update this issue as soon as there is any news on BZ 1292481.
@jeremyeder, what's the way to configure the max pods per node? Thanks. Update: I found that setting max-pods under kubeletArguments in the node configuration appears to work, but I failed to find any official documentation about it; is that the right way anyway?
That's the right way to do it, yep. If you're using openshift-ansible, we also support setting kubeletArguments during the install phase.
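(For anyone landing here later, a minimal sketch of what that configuration looks like, assuming the OSE 3.x node-config.yaml format; the value "100" is illustrative, not a recommendation, and the path may vary by install method:)

```yaml
# /etc/origin/node/node-config.yaml
# kubeletArguments passes flags straight through to the kubelet;
# each key maps to a list of string values.
kubeletArguments:
  max-pods:
    - "100"
```

With openshift-ansible, the equivalent is usually expressed as an inventory variable, e.g. openshift_node_kubelet_args={'max-pods': ['100']} (variable name per the 3.x playbooks; check your installer version). The node service (e.g. origin-node) must be restarted for the change to take effect.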
|
This fix is in kernel 327.10 or higher.
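(For readers checking their own nodes: assuming "327.10" refers to the RHEL 7 kernel-3.10.0-327.10.x series, you can compare against the running kernel like so:)

```sh
# Prints the running kernel release, e.g. 3.10.0-327.el7.x86_64;
# anything at 3.10.0-327.10 or later should contain the fix.
uname -r
```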
Closing due to age. |