Describe the feature
Allow specifying separate scaler configs for the canary and the primary scaler.
What problem are you trying to solve?
During a new deployment, the canary is spawned with the same number of min pods as the primary, which exhausts the DB connection pool; the new service never becomes ready, and this blocks further deployments.
Proposed solution
What do you want to happen? Add any considered drawbacks.
We want to use separate scaler configs for the canary and the primary, so that only 5-10% of the min pods run during the canary analysis and the rollout to the primary starts only after the analysis succeeds (see the sketch below).
Drawbacks - none that we are aware of right now.
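A purely hypothetical sketch of what this could look like on the Canary resource, assuming a new field such as primaryScalerReplicas; the field name, resource names, and replica counts are all illustrative and not part of the current API:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-service                    # illustrative name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  autoscalerRef:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    name: my-service                  # canary HPA, sized at ~5-10% of the primary
    # hypothetical: a separate replica range for the primary scaler, so that
    # promotion does not shrink the primary to the canary's min pods
    primaryScalerReplicas:
      minReplicas: 100
      maxReplicas: 300
  # service and analysis sections omitted for brevity
```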
Any alternatives you've considered?
We tried disabling autoscalerRef and configuring and managing two HPAs ourselves, but once the canary analysis completed successfully, Flagger brought the primary HPA down to the canary HPA's min pods. Example: the primary deployment currently has 100 desired pods and the canary HPA's min pods is set to 50; after the canary analysis, Flagger resets the desired pods of the primary HPA to the canary HPA's min pods instead of rolling out to the primary HPA's desired number of pods.
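For reference, a minimal sketch of the two HPAs we managed ourselves, using standard autoscaling/v2 objects; the names, replica counts, and metric are illustrative:

```yaml
# HPA for the canary (target) deployment - only relevant during canary analysis
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 50
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
---
# HPA for the primary deployment - sized for production traffic
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-primary
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service-primary
  minReplicas: 100
  maxReplicas: 300
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```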
Is there another way to solve this problem that isn't as good a solution?
N/A
I can see this feature mentioned in the official docs:
The autoscaler reference is optional, when specified, Flagger will pause the traffic increase while the target and primary deployments are scaled up or down. HPA can help reduce the resource usage during the canary analysis. When the autoscaler reference is specified, any changes made to the autoscaler are only made active in the primary autoscaler when a rollout for the deployment starts and completes successfully. Optionally, you can create two HPAs, one for canary and one for the primary to update the HPA without doing a new rollout. As the canary deployment will be scaled to 0, the HPA on the canary will be inactive.
Ref - https://docs.flagger.app/usage/how-it-works
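For context, the setup described in the docs is a single autoscalerRef on the Canary spec pointing at the canary HPA, roughly as below (a minimal sketch based on the docs; names are illustrative). Per the quote above, a second HPA for the primary deployment can optionally be created to tune it without triggering a new rollout.

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-service
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  # single autoscaler reference: on promotion, the generated my-service-primary
  # HPA is reconciled from this HPA's spec (see Root Cause below)
  autoscalerRef:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    name: my-service
  # service and analysis sections omitted for brevity
```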
Root Cause - This happens because Flagger copies the spec of the canary HPA into the primary HPA.