Scheduler: support the ability to automatically assign replicas evenly #4805
Comments
If the weights are set to the same value, I understand that's the effect. I also understand that sometimes the number of replicas is not divisible by the number of clusters; in this case, some clusters must end up with one more replica than the others.
For general scenarios, we can only achieve the closest possible approximation of an even assignment. This is an unchangeable fact.
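For example (illustrative numbers only, not taken from this thread): splitting 10 replicas across three selected clusters can never be perfectly even. Roughly as it would appear in the `spec.clusters` field of the resulting ResourceBinding, the assignment has to give one cluster the extra replica:

```yaml
# Illustrative sketch: 10 replicas across member1/member2/member3.
# 10 = 3 + 3 + 3 with a remainder of 1, so one cluster receives 4.
spec:
  clusters:
    - name: member1
      replicas: 4
    - name: member2
      replicas: 3
    - name: member3
      replicas: 3
```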
How about describing it in detail at a community meeting?
Given the plausibility of this feature, and the fact that implementing it is not very complicated, how about we take this requirement on as an OSPP project? @RainbowMango @whitewindmills
If the user specified this strategy, will it ignore the result of …?
@Vacant2333
Hello, I wonder when the new strategy would behave differently from what we already get when we use the following policy (thanks for your answer @whitewindmills):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  #...
  placement:
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
```
@Vacant2333
Hope it helps you.
@whitewindmills I got it. If this feature is not added to OSPP, I would like to implement it. I'm watching karmada-scheduler for now.
Hi @Vacant2333, we are going to add this task to OSPP 2024. You can join in the discussion and review.
/assign
@whitewindmills explained the reason for introducing a new replica allocation method at #4805 (comment). I'd like to hear your opinions on the following questions:
@XiShanYongYe-Chang @chaunceyjiang @whitewindmills What are your thoughts?
I prefer to keep it as it is.
Why? Can you explain it in more detail?
@RainbowMango
@XiShanYongYe-Chang @chaunceyjiang What do you think?
I think a new policy can be added to represent the average assignment. The biggest difference between it and the StaticWeight policy is that replicas are allocated with the available resources taken into account.
@RainbowMango, what are your opinions? Anyway, PR #5225 is waiting for you to push it forward.
My opinion on this feature is that we can try to enhance the legacy feature. I think it's a mistake that we let it behave the way it currently does.
Speaking of the use case mentioned on this issue: I believe this is a reasonable use case, but more commonly, replicas are not evenly distributed across clusters, because some clusters serve as primary clusters while others act as backup clusters. In that case, an even assignment is not what the user wants.
I agree with you.
I have just reviewed the code related to skipping spread constraints and available resources in the current static weight assignment.
In conclusion, I believe that enhancing it is feasible.
Hi, as discussed with @whitewindmills, @XiShanYongYe-Chang, and @ipsum-0320 in a temporary meeting, we need to revisit the original design of this feature.
In my opinion, currently the use case of …
/reopen
@whitewindmills: Reopened this issue.
What would you like to be added:
Background
We want to introduce a new replica assignment strategy in the scheduler that assigns the target replicas evenly across the currently selected clusters.
Explanation
After going through the filtering, prioritization, and selection phases, three clusters (`member1`, `member2`, `member3`) were selected. We will automatically assign 9 replicas equally among these three clusters; the result we expect is `[{member1: 3}, {member2: 3}, {member3: 3}]`.

Why is this needed:
User Story
As developers, we have a deployment with 2 replicas that needs to be deployed with high availability across AZs. We hope Karmada can schedule it to two AZs and ensure that there is a replica in each AZ.
Our PropagationPolicy might look like this:
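For illustration, a sketch of what such a policy could look like, assuming a zone spread constraint and the `AvailableReplicas` dynamic weight; the resource names (`ha-app`, `ha-app-propagation`) and the exact constraint values are assumptions, not taken from the original report:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: ha-app-propagation            # hypothetical policy name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: ha-app                    # hypothetical Deployment with 2 replicas
  placement:
    spreadConstraints:
      - spreadByField: zone           # spread the selected clusters across AZs
        minGroups: 2
        maxGroups: 2
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        dynamicWeight: AvailableReplicas   # weight clusters by their free capacity
```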
But unfortunately, the strategy `AvailableReplicas` does not guarantee that our replicas are evenly assigned.

Any ideas?
We can introduce a new replica assignment strategy like `AvailableReplicas`; maybe we can name it `AverageReplicas`. It is essentially different from static weight assignment, because it does not support spread constraints and is mandatory: when assigning replicas, it does not consider whether the cluster can place that many replicas.
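For illustration only, a sketch of how the proposed strategy might be expressed in a policy, assuming it were exposed as a new `replicaDivisionPreference` value; `AverageReplicas` is only the suggested name, and this API shape is hypothetical, not merged:

```yaml
# Hypothetical sketch: AverageReplicas is the proposed (not merged) strategy name,
# and exposing it via replicaDivisionPreference is only one possible API shape.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  placement:
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: AverageReplicas   # split replicas evenly across selected clusters
```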