Enhance behavioral control of Karmada propagation policies #4559
Comments
The label (…) — if we introduce a similar thing on PropagationPolicy/OverridePolicy/ResourceBinding, we need to explain its behavior in more detail. For example, if we are going to introduce a …

Thanks @tpiperatgod, this sounds like a reasonable feature; we can continue the discussion here.
Does such sequential startup mean that the next cluster cannot start until the StatefulSet application in the first cluster is ready?
Okay, let me illustrate with a slightly more exaggerated example. Say my current multi-cluster environment consists of 3 member clusters, and I need to deploy a StatefulSet application across clusters that must satisfy a minimum number of healthy replicas: when the application has 7 replicas in total, the minimum number of replicas it needs to keep running normally is 5, i.e., the smallest odd number greater than half of the total replica count.

Suppose the current distribution of the application is:

member01: 3
member02: 4

Now assume a new round of scheduling happens, and the distribution afterwards becomes:

member01: 3
member03: 4

As you can see, all replicas on member02 are migrated to member03. If the resources are propagated directly according to the new scheduling result, only the 3 replicas on member01 are running until the replicas on member03 become ready, which does not meet the minimum number of replicas the application needs to run normally.
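The arithmetic behind this example can be sketched as follows. This is a minimal illustration, not Karmada code; the function and cluster names are made up for the example above.

```python
def min_healthy_replicas(total: int) -> int:
    """Smallest odd number strictly greater than half of `total`."""
    n = total // 2 + 1
    return n if n % 2 == 1 else n + 1


def is_safe(ready_by_cluster: dict, desired_total: int) -> bool:
    """True if the currently-ready replicas still satisfy the minimum."""
    return sum(ready_by_cluster.values()) >= min_healthy_replicas(desired_total)


# For 7 desired replicas the minimum is 5.
# Mid-migration, only member01's 3 replicas are ready -> unsafe.
mid_migration = {"member01": 3}
# Once member03's replicas are ready, the constraint holds again.
after_migration = {"member01": 3, "member03": 4}
```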
Thanks for the clarification.
For example, in a propagation path like the following: PropagationPolicy -> ResourceBinding -> Work(cluster01) / Work(cluster02). Assume I now set a field like … The user's app operator can then fetch these two Works and control the propagation of the app to the specified cluster by cleaning up the label …

There is currently no mature solution for this, so if this feature request makes sense, we can continue to refine it.
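The label-gating idea above can be sketched like this. The label key and the in-memory Work shape are assumptions invented for illustration, not Karmada's actual API; the point is only that a Work is held back while a suspension label is present and released when an external operator removes it.

```python
# Hypothetical suspension label; Karmada's real label (if any) may differ.
SUSPEND_LABEL = "example.io/propagation-suspended"


def should_apply(work: dict) -> bool:
    """A Work's manifest is applied to the member cluster only once the
    suspension label has been removed."""
    return SUSPEND_LABEL not in work.get("labels", {})


def release(work: dict) -> None:
    """Operator-side step: clear the label so propagation proceeds."""
    work.get("labels", {}).pop(SUSPEND_LABEL, None)


work = {"cluster": "cluster01", "labels": {SUSPEND_LABEL: "true"}}
# should_apply(work) is False until the operator calls release(work).
```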
Wow, this is concerning, given the label …
You're right, this is just an example; we can use other labels instead.
Hi @tpiperatgod, your issue topic is the same as #1567 and #4688. Users need to specify a policy (the specific method is still under discussion) to suspend the Work, and the user controls when and how to cancel the suspension. We have recently been pushing this requirement forward; could you participate in the discussion of these two issues?
Hi @tpiperatgod, do you have any more detailed steps on this?
Hi @tpiperatgod, I raised a proposal for resource propagation suspension: #5118. Would you like to have a look?
What would you like to be added:
Some controllers (endpointslice/mcs) can control the behavior of Work resources by labeling them.
Is it possible to extend this ability to PropagationPolicy and OverridePolicy, e.g. by adding a new field, label, annotation, etc. to the Policy resources that sets the above label?
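One way to picture the request is a hypothetical field on the policy spec whose entries are copied onto every Work the policy produces. The `workLabels` field name, the label key, and the dict-based resource shapes below are all illustrative assumptions, not existing Karmada API.

```python
def works_from_policy(policy: dict, clusters: list) -> list:
    """Build one Work per target cluster, copying a hypothetical
    `spec.workLabels` map from the policy onto each Work."""
    extra = policy.get("spec", {}).get("workLabels", {})  # hypothetical field
    return [
        {"cluster": c, "labels": dict(extra)}  # each Work inherits the labels
        for c in clusters
    ]


policy = {"spec": {"workLabels": {"example.io/propagation-suspended": "true"}}}
works = works_from_policy(policy, ["cluster01", "cluster02"])
# Both generated Works now carry the suspension label for an external
# controller to act on.
```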
Why is this needed:
In the design of the Karmada propagation policy, a ResourceBinding is created from the PropagationPolicy, which in turn creates Works that apply the resources from the control plane to the corresponding member clusters.
The ResourceBinding holds the result of the Karmada framework's scheduling of the resource. In this process, we want to obtain the scheduling result first, and then control the propagation behavior of the resource based on that result combined with custom logic.
For example, for a StatefulSet application, we would like the application to start in a specific order across the different clusters, not in an arbitrary manner or the default order of the current Karmada scheduling.
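The ordered startup described above could be driven by a loop like the following sketch: clusters are released in a fixed sequence, and propagation to the next cluster begins only once every earlier cluster reports its StatefulSet ready. The function and cluster names are assumptions for illustration.

```python
def next_cluster_to_release(order: list, ready: set):
    """Return the first cluster in `order` whose replicas are not yet ready.
    Because all preceding clusters are ready, propagation to this cluster
    may begin now; returns None when every cluster is ready."""
    for cluster in order:
        if cluster not in ready:
            return cluster
    return None


order = ["member01", "member02", "member03"]
# Nothing ready yet -> release member01 first; once member01 is ready,
# member02 follows, and so on.
```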