Customize cpu/Memory resourceRequirements/Limit on mover pods #707
Comments
Hello team, is there any way to advance this issue? It is highly rated by our customer.
I'll try to see if we can prioritize this in the v0.9 timeframe.
Since this seems to be getting some attention, I'll add some thoughts... As we look at setting requests and limits, there are significant implications of doing so.

In the case of requests, I think it's a net benefit, since it provides a way for the user to effectively reserve both CPU and memory for the mover pods so that they can be scheduled to a node where they will run efficiently and not risk being OOM-killed. The downside is that this makes it harder to schedule the mover pods, given that they may already be restricted due to storage topology. A resource request may make them unschedulable.

In the case of limits, while it makes sense for us to add them by bringing in the appropriate Go struct from kube, I don't think I would recommend that end users set them. Limiting CPU will cause replication to take longer, with basically no upside, assuming resource requests for important workloads are properly set. Setting limits on memory will cause the replication to fail if the limit is hit. Unfortunately, we have no way to quantify the expected memory consumption of a given mover (looking at you, Restic), so even using this as a safety valve can be problematic.
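To make the proposal concrete, the natural approach is to surface the standard Kubernetes `ResourceRequirements` struct on the mover spec. The sketch below is illustrative only: the field name `moverResources` and its placement are assumptions for this issue, not the merged API.

```yaml
# Hypothetical sketch: exposing core/v1 ResourceRequirements on a
# ReplicationSource. The "moverResources" field name is an assumption.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: my-source
spec:
  sourcePVC: my-data
  trigger:
    schedule: "*/15 * * * *"
  restic:
    repository: restic-config
    copyMethod: Snapshot
    moverResources:
      requests:
        cpu: 500m
        memory: 256Mi
      # Limits are representable in the struct but, per the discussion
      # above, generally not recommended: CPU limits slow replication and
      # memory limits can kill a mover whose footprint is hard to predict.
      limits:
        memory: 1Gi
```

Reusing the upstream struct keeps the API familiar to anyone who has set `resources` on a regular pod spec.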
Resolves: backube#707 Signed-off-by: Tesshu Flower <tflower@redhat.com>
We can't use volsync without this, due to memory LimitRanges forced on our namespace. @tesshuflower I see that #1072 was merged, but the changes are still not available in the latest helm chart release. Could you please advise on an ETA for when it will be available in the helm chart? Thanks.
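For context on why a namespace-level constraint blocks the movers entirely: a `LimitRange` applies resource defaults and caps to every container in the namespace, so pods that cannot declare compatible resources may be rejected or given unsuitable defaults. A minimal example with illustrative values:

```yaml
# A namespace LimitRange like this constrains every container in the
# namespace; values here are illustrative, not from the reporter's cluster.
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - type: Container
    default:          # limit applied when a container sets none
      memory: 512Mi
    defaultRequest:   # request applied when a container sets none
      memory: 256Mi
    max:
      memory: 1Gi
```

Without a way to set the mover pods' own requests and limits, the admission-time defaults from such a LimitRange are the only values the movers can get, whether or not they fit the workload.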
@igor-nikiforov good timing, as we are actually trying to get our v0.9.0 release (which will contain this feature) done - I'm hoping this will happen sometime today, possibly tomorrow.
Describe the feature you'd like to have.
Ability to customize the mover pods' cpu/memory resourceRequirements/resourceLimits away from the defaults.
What is the value to the end user? (why is it a priority?)
Ability to scale pod resources to the workload of the pod, i.e. very large or very small.
How will we know we have a good solution? (acceptance criteria)
cpu/memory resourceRequirements/resourceLimits can be set
Additional context