native retain replicas should look at observedObject's labels #5072
Comments
Hi @a7i, thanks, I had it wrong. |
Oh, hold on guys. The desired object should be the one after overriding by the COP, which means it should already carry the annotation. Can you help check whether the annotation is in the manifest of the relevant Work object? |
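For reference, a minimal sketch of checking this from code with the Kubernetes dynamic client (the kubeconfig path, execution-space namespace, and Work name below are hypothetical placeholders, not values from this thread; `kubectl get work -o yaml` against the execution-space namespace works just as well):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path, namespace, and Work name are placeholders for your setup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/karmada/karmada-apiserver.config")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	workGVR := schema.GroupVersionResource{Group: "work.karmada.io", Version: "v1alpha1", Resource: "works"}
	work, err := client.Resource(workGVR).Namespace("karmada-es-member1").
		Get(context.TODO(), "sample-deployment-placeholder", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Each entry in spec.workload.manifests is the desired (post-override) object,
	// so a label added by the ClusterOverridePolicy should show up here.
	manifests, _, err := unstructured.NestedSlice(work.Object, "spec", "workload", "manifests")
	if err != nil {
		panic(err)
	}
	for _, m := range manifests {
		obj := unstructured.Unstructured{Object: m.(map[string]interface{})}
		fmt.Printf("%s/%s retain-replicas label: %q\n", obj.GetKind(), obj.GetName(),
			obj.GetLabels()["resourcetemplate.karmada.io/retain-replicas"])
	}
}
```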
It is certainly in the Work object:
What's the significance of |
It has nothing to do with the annotation. The desired object that |
I don't know why it doesn't work on your side, but I just made a test with the Get Started Example.

Step 1: Create a similar ClusterOverridePolicy based on the one in this issue description:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterOverridePolicy
metadata:
  labels:
    role: retain-replicas
  name: retain-replicas
spec:
  overrideRules:
  - overriders:
      labelsOverrider:
      - operator: add
        value:
          resourcetemplate.karmada.io/retain-replicas: "true"
    targetCluster:
      clusterNames:
      - member1
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
```

Step 2: update the replicas of the sample Deployment on
|
@RainbowMango I believe you need to modify the replicas on the karmada apiserver to reproduce my issue.
I will try to get a reproducible setup this week. |
Yeah, I did another test that modified the replicas on the Karmada apiserver, and the retention still works.
|
Hi @RainbowMango, here are some reproducible steps to create the retain issue we are seeing, using an HPA. In short --
The expected behavior at this point is that the source cluster deployment will have x replicas, as controlled by the original HPA, and the member cluster deployment (with retain) will have y replicas, controlled by the member cluster HPA (which has been updated by the OP). The observed behavior is that both the source and member cluster deployments are constantly flipping between x and y replica counts. (Screenshot: src on top, member1 on bottom; x=2, y=4.) |
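For anyone trying to capture that flip-flopping, a minimal polling sketch follows; the kubeconfig paths, namespace, and deployment name are placeholders, not values taken from this report:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset from a kubeconfig path (paths used below are placeholders).
func newClient(kubeconfig string) *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

// specReplicas reads .spec.replicas of a Deployment, defaulting to 1 when unset.
func specReplicas(cs *kubernetes.Clientset, ns, name string) int32 {
	d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if d.Spec.Replicas == nil {
		return 1
	}
	return *d.Spec.Replicas
}

func main() {
	karmada := newClient("/path/to/karmada-apiserver.config") // Karmada control plane
	member := newClient("/path/to/member1.config")            // member cluster

	// Log both replica counts every 10s so the oscillation shows up side by side.
	for i := 0; i < 30; i++ {
		fmt.Printf("%s  karmada spec.replicas=%d  member1 spec.replicas=%d\n",
			time.Now().Format(time.RFC3339),
			specReplicas(karmada, "default", "sample-deployment"),
			specReplicas(member, "default", "sample-deployment"))
		time.Sleep(10 * time.Second)
	}
}
```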
Hi @Chase-Marino, I tried to reproduce it with the environment launched by
S1: create a namespace named retain-test
S2: Apply deployments/hpa/pp
At this point, I can see the HPA is up and running and has successfully scaled the deployment on the member cluster to 2 replicas:
Note that the replica count of the Deployment on Karmada is still 1:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    resourcetemplate.karmada.io/retain-replicas: "true"
    role: retain-replicas
  name: sample-deployment
  namespace: retain-test
spec:
  replicas: 1
  # ...
status:
  availableReplicas: 2
  observedGeneration: 2
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
```

S3: apply the OP
Now, I can see the replicas have been scaled to
That doesn't happen on my side. So, if I remember correctly, in your environment you use the host cluster's kube-apiserver as the Karmada API server; please confirm whether that is true. In addition, can you help check the options of |
Maybe you can try to do it again with the environment launched by |
I created a COP to add the retain-replicas label in hopes that the member cluster replicas will be retained. This label is not in the Karmada resource template.
What happened:
native retainWorkloadReplicas looks at desiredObject labels
karmada/pkg/resourceinterpreter/default/native/retain.go
Lines 146 to 151 in 0bc96a2
What you expected to happen:
I expect it to look at observedObject labels
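As an illustration of the requested behavior (a sketch of the idea only, not the actual retain.go implementation), a retention function keyed off the observed object's labels could look roughly like this, using the resourcetemplate.karmada.io/retain-replicas label from this thread:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

const retainReplicasLabel = "resourcetemplate.karmada.io/retain-replicas"

// retainWorkloadReplicas sketches the behavior requested in this issue:
// decide based on the *observed* object's labels (what actually runs in the
// member cluster) rather than the desired object's labels, and when retention
// is requested, copy the member's current spec.replicas into the desired object.
func retainWorkloadReplicas(desired, observed *unstructured.Unstructured) (*unstructured.Unstructured, error) {
	if observed.GetLabels()[retainReplicasLabel] != "true" {
		return desired, nil
	}
	replicas, found, err := unstructured.NestedInt64(observed.Object, "spec", "replicas")
	if err != nil || !found {
		return nil, fmt.Errorf("failed to read spec.replicas from observed object: %v", err)
	}
	if err := unstructured.SetNestedField(desired.Object, replicas, "spec", "replicas"); err != nil {
		return nil, err
	}
	return desired, nil
}

func main() {
	desired := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apps/v1", "kind": "Deployment",
		"metadata": map[string]interface{}{"name": "sample-deployment"},
		"spec":     map[string]interface{}{"replicas": int64(1)},
	}}
	observed := desired.DeepCopy()
	observed.SetLabels(map[string]string{retainReplicasLabel: "true"}) // label present only on the member's copy
	_ = unstructured.SetNestedField(observed.Object, int64(4), "spec", "replicas")

	result, err := retainWorkloadReplicas(desired, observed)
	if err != nil {
		panic(err)
	}
	replicas, _, _ := unstructured.NestedInt64(result.Object, "spec", "replicas")
	fmt.Println("retained replicas:", replicas) // prints 4
}
```

With a check like this, a label that exists only on the member cluster's copy, or is added per-cluster by a COP, would still trigger retention even though the resource template on Karmada never carries it.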
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Karmada version (kubectl-karmada version or karmadactl version):