
Document immutable ConfigMap/Secret behavior for our provider #1568

Open
lblackstone opened this issue May 6, 2021 · 9 comments
Labels
area/docs (Improvements or additions to documentation), kind/enhancement (Improvements or new features)

Comments

@lblackstone
Member

The Kubernetes provider intentionally treats ConfigMap and Secret resources as immutable (requiring a replacement rather than updating in place).

This behavior works around a longstanding upstream issue, but may be surprising to some Kubernetes users.

We should clearly document this behavior, and the reasons for it.

Related:
#1567
#1560

@lblackstone added the area/docs and kind/enhancement labels May 6, 2021
@lblackstone self-assigned this May 6, 2021
@ekimekim

Hi, are there any plans to make it possible to disable this behaviour? Maybe have a separate type, MutableConfigMap? This behaviour makes it impossible to properly implement dynamic config reloading without restarting pods. My only workaround so far is to not manage the ConfigMap via pulumi at all and instead shell out to kubectl apply. Obviously I'd really, REALLY prefer not to have to do that.

@lblackstone
Member Author

@ekimekim Do you have any examples of dynamic config reloading you can share? My understanding is that there is no reliable mechanism for picking up the new config values without restarting the Pod.

We don't have current plans to change the behavior, but I'm definitely interested in learning more about the use case and finding a way to make it work for you.

@ekimekim

Here is an extremely simplified example:
I have the following kubernetes manifests:

kind: ConfigMap
apiVersion: v1
metadata:
  name: example
data:
  foo.txt: |
    bar
---
kind: Pod
apiVersion: v1
metadata:
  name: example
spec:
  containers:
  - name: example
    image: "ubuntu:latest"
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: example
      mountPath: /mnt
  volumes:
  - name: example
    configMap:
      name: example

We have a configmap containing foo.txt which has content bar\n. We have a pod that does nothing (sleep infinity) but mounts this configmap to /mnt.
I apply these manifests, then exec into the resulting container:

$ kubectl apply -f example.yaml
configmap/example created
pod/example created
$ kubectl exec -it example /bin/bash
root@example:/# cat /mnt/foo.txt
bar
root@example:/# 

I then modify my manifests so that foo.txt contains baz\n, and re-apply:

$ kubectl apply -f example.yaml
configmap/example configured
pod/example configured

Note the pod is unchanged and remains running.
Going back to the window where I have the pod open, I now check the contents of the file again:

root@example:/# cat /mnt/foo.txt
bar
root@example:/# cat /mnt/foo.txt
bar
root@example:/# cat /mnt/foo.txt
baz

It takes a few seconds, but the value is eventually updated.

How your application picks up these changes is application-specific. Some applications pick up such changes automatically, or read from disk every time (for example, if you were using a configmap to serve static content with nginx). Many applications re-read their config in response to a SIGHUP signal, which you can deliver via a sidecar container that watches the directory for changes via polling or inotify (note you'll need shareProcessNamespace: true on the pod for this to work); a minimal sketch of that pattern follows.
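Here's a minimal sketch of that sidecar pattern, assuming a hypothetical app image and process name (myapp) and a simple five-second poll; inotify would also work, polling just keeps the example short:

kind: Pod
apiVersion: v1
metadata:
  name: example
spec:
  # Required so the sidecar's pkill can see the app container's processes.
  shareProcessNamespace: true
  containers:
  - name: app
    image: "myapp:latest"  # hypothetical application image
    volumeMounts:
    - name: example
      mountPath: /mnt
  - name: reloader
    image: "ubuntu:latest"
    command:
    - /bin/bash
    - -c
    # Poll the mounted file and send SIGHUP to the app process on change.
    - |
      last=""
      while sleep 5; do
        current="$(md5sum /mnt/foo.txt)"
        if [ -n "$last" ] && [ "$current" != "$last" ]; then
          pkill -HUP -x myapp
        fi
        last="$current"
      done
    volumeMounts:
    - name: example
      mountPath: /mnt
  volumes:
  - name: example
    configMap:
      name: example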

To expand on our use case: we have a postgres database application which is disruptive to restart, so we want to minimize how often that occurs. We want to be able to push config changes that do not require a restart to the configmap, and then have a sidecar container HUP the postgres process when those changes are seen. However, we can't do this if configmaps are not mutable. If the name changes then the pod must be restarted to use the new name, and if we use a fixed name and deleteBeforeReplace then the configmap contents in the container never update, presumably because kubernetes no longer considers that pod linked to the new instance of the configmap in the same way it was to the old.

Some other prior art in the community (disclaimer: I just found these with a quick search; I haven't tried them and don't endorse them):

It should be noted that everything above can also apply to Secrets.

For now we are using a workaround where we have a dynamic provider that passes the configmap manifest to kubectl for create/update/delete operations. One of the things I love about Pulumi is that it has this expressive power when you need it. This isn't ideal (for one, it requires that kubectl be installed and configured on the system) but works well enough.
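For illustration, here's a rough sketch of that workaround using Pulumi's dynamic provider API (the KubectlConfigMap name is mine, and error handling is omitted):

import * as pulumi from "@pulumi/pulumi";
import { execSync } from "child_process";

// Apply a raw manifest with kubectl so updates happen in place.
function kubectlApply(manifest: string): void {
    execSync("kubectl apply -f -", { input: manifest });
}

const kubectlProvider: pulumi.dynamic.ResourceProvider = {
    async create(inputs: any) {
        kubectlApply(inputs.manifest);
        return { id: inputs.name, outs: inputs };
    },
    async update(id: string, olds: any, news: any) {
        kubectlApply(news.manifest);
        return { outs: news };
    },
    async delete(id: string, props: any) {
        execSync(`kubectl delete configmap ${id} --ignore-not-found`);
    },
};

// A ConfigMap managed via kubectl instead of the Kubernetes provider,
// so its data can be updated without replacing the object.
class KubectlConfigMap extends pulumi.dynamic.Resource {
    constructor(name: string, args: { name: string; manifest: string },
                opts?: pulumi.CustomResourceOptions) {
        super(kubectlProvider, name, args, opts);
    }
}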

@lblackstone
Member Author

Thanks for the information! I think it should be possible to keep the current default while allowing an opt-in mutable override once pulumi/pulumi#6753 is done. We should be able to set the replaceOnChanges option by default for ConfigMap and Secret resources, but the option could be overridden for mutable cases.
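For reference, the user-facing shape of that option would look roughly like this (a sketch of the per-resource form, not the proposed provider default):

import * as k8s from "@pulumi/kubernetes";

// Sketch: force replacement of this ConfigMap whenever its data changes.
const cm = new k8s.core.v1.ConfigMap("example", {
    metadata: { name: "example" },
    data: { "foo.txt": "bar\n" },
}, { replaceOnChanges: ["data"] });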

@nesl247

nesl247 commented Aug 4, 2021

I'm asking this here, but am happy to open a separate issue: I updated a Secret, and it was updated rather than replaced. I am using the server-side apply functionality, if that helps. This actually triggered a bug for us, as we expected the Deployment to be updated but it wasn't.

@jkinkead

jkinkead commented Sep 9, 2021

So this behavior has caused me quite a lot of pain, pretty much since we started using Pulumi 18 months ago. :)

Adding a replaceOnChanges option is great for the case described above (mounting ConfigMaps instead of injecting them as environment variables), but doesn't help when you need a consistently-named object to refer to in other systems.

Right now, I have a ConfigMap that changes with each release, since it includes some per-deploy information. Because I don't want to crash my site every time I deploy, I'm using dynamic ConfigMap names to prevent the Deployments that depend on it from being deleted and recreated. The fact that they're recreated in this case feels like a fundamental error in Pulumi's implementation: if the goal is to restart the pods, why not just issue a rollout restart on the Deployment? That would eliminate surprising behavior and possible outages, and make this behavior far more intuitive for a Kubernetes user.

@lblackstone
Member Author

> The fact that they're recreated in this case feels like a fundamental error in Pulumi's implementation: if the goal is to restart the pods, why not just issue a rollout restart on the Deployment? That would eliminate surprising behavior and possible outages, and make this behavior far more intuitive for a Kubernetes user.

Thanks for the feedback. As you note, we took an opinionated stance here when it was implemented, and are reevaluating if that makes sense. Now that we have the replaceOnChanges option, I suspect that we may want to revisit the default behavior. We've learned a lot from feedback over the past several years, and I'm starting to lean more towards having the providers be a "no frills" foundational layer that can be used to build more opinionated layers (like kubernetesx).

@jsravn

jsravn commented May 19, 2022

This is a must-have, imo. Kubernetes itself now provides the concept of immutable vs. mutable configmaps as well (sketched below). In our case, we have dynamic config that is hot-reloaded, and we don't want to bounce the pods.
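For reference, upstream marks these with an immutable field on the object itself, which tells the API server to reject further updates to its data:

kind: ConfigMap
apiVersion: v1
metadata:
  name: example
# Once set, the data of this ConfigMap can no longer be updated in place.
immutable: true
data:
  foo.txt: |
    bar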

@viveklak
Contributor

viveklak commented Jun 6, 2022

Please note that we now support configmap mutations as an opt-in provider config option; see details in #1926. Since changing the default behavior would be breaking, this is an opt-in flag at the moment.
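A minimal sketch of opting in (resource names here are placeholders):

import * as k8s from "@pulumi/kubernetes";

// Provider configured to update ConfigMaps in place rather than replace them.
const provider = new k8s.Provider("mutable-configmaps", {
    enableConfigMapMutable: true,
});

const cm = new k8s.core.v1.ConfigMap("example", {
    metadata: { name: "example" },
    data: { "foo.txt": "bar\n" },
}, { provider });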
