[FEATURE] Resources requests and limits for each node #491
Comments
Hi @konradmalik, I'd love to see this, and even more I'd love to accept a PR that makes it possible.
I'm sure @louiznk would love to help achieve this :)
Ha, honestly I did not expect such problems. But I'll definitely investigate and, who knows, maybe succeed on at least part of it. So I think there are 2 parts to this feature:

1. Limiting the resources (CPU and memory) of the Docker containers that act as k3d nodes.
2. Making the k3s inside those containers aware of these limits, so that they are reported on the node.

Am I wrong somewhere?
Hi @konradmalik, I tried a hack on cAdvisor (for the memory limit only). It works, but changing the behavior of cAdvisor can have too many side effects.
Thank you for the clarification @louiznk 👍. I need this functionality in k3d, so I'll definitely investigate further and post back here as soon as I have some ideas or a working demo.
Leaving this here: k3s-io/k3s#3005 (https://twitter.com/_AkihiroSuda_/status/1366689973672402945)
This is more of a suggestion/question. I can start working on it and open a PR some time in the future if there is a need or if it would be useful to have. Reactions are welcome 😉
Basically, the feature in question is being able to specify requests and limits (in terms of CPU and memory) for each Docker container that runs a k3d node, similar to what is possible with docker-compose, for example.
The first iteration would be to specify servers and agents separately, regardless of their number. So no custom setups like 2 CPUs for server 1 and 3 CPUs for server 2, only things like 1 CPU for all servers and 3 CPUs for all agents. Granular control could be implemented later, or by adding nodes one by one with different specs.
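For context, per-container limits like these correspond to Docker's standard resource flags; a minimal illustration (values and image tag are only examples):

```bash
# Plain Docker equivalent of "1 CPU / 2 GiB of RAM for a node container"
docker run -d --cpus=1 --memory=2g --privileged rancher/k3s:latest server
```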
Specification of these limits would naturally go into the yaml config and the CLI flags, defaulting to no limits.
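Purely as a sketch of the idea, such a section of the k3d yaml config could look roughly like the following (the field names are made up for illustration and are not an existing k3d schema):

```yaml
# hypothetical k3d config excerpt; these keys do not exist today
servers: 1
agents: 2
resourceLimits:
  servers:
    cpu: "1"
    memory: 2g
  agents:
    cpu: "3"
    memory: 4g
```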
As another step (after the first implementation, because I'm not sure how to do this or even whether it is possible), this info could somehow be propagated to the k3s running inside the containers. This is also related to the multi-cluster setup. I'm not sure if I'm right, but the last time I checked, one of the limitations of k3d vs k3s was that
kubectl describe node
gives no info on the resources actually available to the node, so nodes can easily be overprovisioned. Maybe the provided limits could be used to force the k3s inside the container to acknowledge those resources? Not sure, this one is just an idea, something to investigate. What do you think about the limits implementation?
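For the first part (limiting the node containers themselves), k3d creates its node containers through the Docker API, so the limits could plausibly be passed via the container's HostConfig resources at creation time. Below is a minimal, hedged sketch using the Docker Go SDK; the container name, image tag, and values are illustrative, and the exact ContainerCreate signature differs slightly between SDK versions:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Per-node resource limits; Memory is in bytes, NanoCPUs is CPUs * 1e9.
	limits := container.Resources{
		Memory:   2 * 1024 * 1024 * 1024, // 2 GiB
		NanoCPUs: 1_000_000_000,          // 1 CPU
	}

	resp, err := cli.ContainerCreate(ctx,
		&container.Config{
			Image: "rancher/k3s:latest", // illustrative tag
			Cmd:   []string{"agent"},
		},
		&container.HostConfig{
			Privileged: true, // k3d node containers run privileged
			Resources:  limits,
		},
		nil,                   // networking config
		nil,                   // platform (newer SDK versions only)
		"k3d-example-agent-0", // hypothetical node container name
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("created node container:", resp.ID)
}
```

The second part, getting k3s inside the container to report these limits in kubectl describe node, is exactly what the cAdvisor/cgroup discussion above and the linked k3s issue are about, so it is left out of this sketch.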