disable replicas reconciliation when annotation is present #736
Conversation
You have only changed the mutators, which only apply when the objects already exist in the cluster.
If you create the APIManager with replicas set to some number, the deploymentconfig replicas will be set to that number as well. Then changes in the replica fields will not be reconciled. I would also set initial replicas to 1 if the annotation exists.
It also makes me feel uneasy that only the backend (listener, worker) and apicast (production) deployment mutators are being conditioned by the annotation. Why not all of them? I see two options to be consistent:
- the annotation affects all replicas
- we have component-specific annotations, for instance:
apps.3scale.net/apicast-replica-field: false
wdyt?
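The component-specific option above can be sketched as a small helper that a mutator consults before touching the replica field. This is an illustrative sketch, not the PR's actual implementation; the function name `replicaReconciliationDisabled` is hypothetical, and the annotation key follows the pattern discussed later in this thread.

```go
package main

import "fmt"

// replicaReconciliationDisabled reports whether the given annotation key
// (e.g. "apps.3scale.net/disable-apicast-production-replica-reconciler")
// is present on the APIManager and set to "true". A mutator would skip
// reconciling the replica field when this returns true.
// Hypothetical helper, shown only to illustrate the per-component scheme.
func replicaReconciliationDisabled(annotations map[string]string, key string) bool {
	// Indexing a nil map is safe in Go and yields the zero value "".
	return annotations[key] == "true"
}

func main() {
	ann := map[string]string{
		"apps.3scale.net/disable-apicast-production-replica-reconciler": "true",
	}
	fmt.Println(replicaReconciliationDisabled(ann, "apps.3scale.net/disable-apicast-production-replica-reconciler"))
	fmt.Println(replicaReconciliationDisabled(ann, "apps.3scale.net/disable-backend-worker-replica-reconciler"))
}
```

With a helper like this, each deployment mutator stays symmetric: every component checks its own annotation, which addresses the consistency concern raised above.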
Hey @eguzki, I think this is ready for a review again. I still have to test it locally, but I think the general flow should make more sense now.
/lgtm
Looks good and the code works as expected: pod count did not change until the annotation was set to false.
missing doc that should be here
Code Climate has analyzed commit db431de and detected 1 issue on this pull request.
looking good 👍
```go
@@ -730,6 +731,9 @@ func TestApicastServicePortMutator(t *testing.T) {
	if err := grafanav1alpha1.AddToScheme(s); err != nil {
		t.Fatal(err)
	}
	if err := configv1.AddToScheme(s); err != nil {
```
is it really needed? I do not see why
This has been added due to an error when it's not present:
```
--- FAIL: TestApicastServicePortMutator (0.00s)
    /home/mstokluska/dev/3scale-operator/pkg/3scale/amp/operator/apicast_reconciler_test.go:749: no kind is registered for the type v1.ClusterVersion in scheme "pkg/runtime/scheme.go:101"
--- FAIL: TestApicastServicePortMutator/annotationTrue
```
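The failure above comes from the test scheme not knowing the `ClusterVersion` kind, which is why `configv1.AddToScheme(s)` was added. The behavior can be illustrated with a toy registry; this is a simplified mock of the registration pattern, not the real apimachinery API, and all names here are illustrative.

```go
package main

import "fmt"

// scheme is a toy stand-in for a runtime.Scheme: a map from kind names
// to constructors. Decoding an unregistered kind fails, mirroring the
// "no kind is registered for the type v1.ClusterVersion" error above.
type scheme map[string]func() interface{}

// addKnownType mimics what AddToScheme does for each type in a group.
func (s scheme) addKnownType(kind string, ctor func() interface{}) {
	s[kind] = ctor
}

// newObject mimics instantiating an object of a registered kind.
func (s scheme) newObject(kind string) (interface{}, error) {
	ctor, ok := s[kind]
	if !ok {
		return nil, fmt.Errorf("no kind is registered for the type %s", kind)
	}
	return ctor(), nil
}

func main() {
	s := scheme{}
	// Analogous to configv1.AddToScheme(s) registering ClusterVersion.
	s.addKnownType("ClusterVersion", func() interface{} { return struct{}{} })

	if _, err := s.newObject("ClusterVersion"); err == nil {
		fmt.Println("ClusterVersion is registered")
	}
	if _, err := s.newObject("SomethingElse"); err != nil {
		fmt.Println(err)
	}
}
```

So the registration is needed whenever the code under test reads or creates `ClusterVersion` objects through the fake client's scheme.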
This resolves https://issues.redhat.com/browse/THREESCALE-8374
Verification
"apps.3scale.net/disable-apicast-production-replica-reconciler": "true"
Confirm that the apicast production replicas aren't synced anymore, then set the above annotation to false and confirm that syncing is back on.
"apps.3scale.net/disable-apicast-staging-replica-reconciler": "true"
Confirm that the apicast staging replicas aren't synced anymore, then set the above annotation to false and confirm that syncing is back on.
"apps.3scale.net/disable-backend-listener-replica-reconciler": "true"
Confirm that the backend listener replicas aren't synced anymore, then set the above annotation to false and confirm that syncing is back on.
"apps.3scale.net/disable-backend-worker-replica-reconciler": "true"
Confirm that the backend worker replicas aren't synced anymore, then set the above annotation to false and confirm that syncing is back on.
"apps.3scale.net/disable-cron-replica-reconciler": "true"
Confirm that the cron replicas aren't synced anymore, then set the above annotation to false and confirm that syncing is back on.
Revert the APIManager annotation updates and confirm that replica syncing works again.
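The verification steps above set these annotations on the APIManager resource. A minimal sketch of what that looks like, assuming a sample resource name (`apimanager-sample` is a placeholder; the annotation keys are the ones listed above):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
  annotations:
    # Disables replica reconciliation for the apicast production deployment;
    # set to "false" (or remove) to re-enable syncing.
    apps.3scale.net/disable-apicast-production-replica-reconciler: "true"
```

Each remaining step swaps in the corresponding component annotation key from the list above.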