
Option for using kubectl replace --force when fail to apply #122

Closed
ordovicia opened this issue Sep 29, 2020 · 4 comments · Fixed by #271
Comments

@ordovicia
Contributor

When we want to change immutable fields (e.g. the label selector in a Deployment), we currently need to take two steps:

  1. Enable garbage collection for the resource and delete it from the repository
  2. Add the resource back to the repository with the new definition

It would be great if kustomize-controller could perform the above in a single reconciliation by using kubectl replace --force.
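
For reference, the manual equivalent outside of the controller would look roughly like the following (a sketch only; the Deployment name and manifest path are placeholders):

# Today: two manual steps
kubectl delete deployment podinfo
kubectl apply -f lib/podinfo/deployment.yaml

# What this issue asks kustomize-controller to do automatically
# when the apply fails on an immutable field:
kubectl replace --force -f lib/podinfo/deployment.yaml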

@stefanprodan
Member

The kustomize controller doesn't apply one resource at a time but all of them in bulk, so I don't see how we could use replace.

@ordovicia
Contributor Author

Ah, you are quite right.
Thank you for your explanation.

So, what do you think is the best practice for changing immutable fields when using the GitOps Toolkit?
Do we need to delete the old resources in one reconciliation and then add them back with new definitions (manifests) in another reconciliation?

@stealthybox
Member

If you are using kustomization.spec.prune: true, you could do this in a single commit like so:

❯ git diff
diff --git lib/podinfo/deployment.yaml lib/podinfo/deployment.yaml
index 7357a05..9476847 100644
--- lib/podinfo/deployment.yaml
+++ lib/podinfo/deployment.yaml
@@ -1,7 +1,7 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: podinfo
+  name: podinfo-v2
 spec:
   minReadySeconds: 3
   revisionHistoryLimit: 5
@@ -13,6 +13,7 @@ spec:
   selector:
     matchLabels:
       app: podinfo
+      version: abcd
   template:
     metadata:
       annotations:
@@ -20,6 +21,7 @@ spec:
         prometheus.io/port: "9797"
       labels:
         app: podinfo
+        version: abcd
     spec:
       containers:
       - name: podinfod

This will delete the old deployment and create a new one with a slightly different name.
This explicitly gives you a new object while keeping your git history clean.
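
For context, pruning is what lets the old Deployment be garbage-collected once it disappears from the source; the Kustomization object would be configured along these lines (a sketch only; the names, path, interval, and sourceRef are placeholders, and the apiVersion may differ between controller versions):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 5m
  path: ./lib/podinfo
  prune: true   # garbage-collect objects that are removed from the source
  sourceRef:
    kind: GitRepository
    name: flux-system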

You're probably doing something similar anyway when updating ConfigMaps, since old copies of objects need to exist for ReplicaSets to roll back properly.
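
(To make the ConfigMap point concrete, and assuming the kustomize configMapGenerator pattern is what's meant here, a minimal sketch with placeholder names and values:)

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
configMapGenerator:
  - name: podinfo-config
    literals:
      - color=blue
# Changing "color" produces a new ConfigMap named podinfo-config-<hash>,
# so old ReplicaSets keep their original ConfigMap and can still roll back.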


Fields are immutable for a reason, so we should consider the consequences before we build a behavior that auto-replaces objects. There may be a sensible thing Flux can do here, though, that lets the user indicate they want that behavior for a resource when they need it. Maybe an annotation would work.
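
If the annotation route were explored, the per-resource opt-in might look something like this (the annotation name here is purely hypothetical and shown only to illustrate the idea, not an existing API):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  annotations:
    # hypothetical opt-in marker, not an existing annotation
    kustomize.toolkit.fluxcd.io/force: "enabled"
# ...rest of the Deployment spec unchanged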

Note: doing something special for a failed object is dependent on being able to have granular, structured status for each one as a result of the apply.
This is tracked in fluxcd/flux2#46

@ordovicia
Contributor Author

Thank you @stealthybox. I understand that it is difficult to come up with the right design for an auto-replace feature.
