Document immutable ConfigMap/Secret behavior for our provider #1568
Hi, are there any plans to make it possible to disable this behaviour? Maybe have a separate type [...]?
@ekimekim Do you have any examples of dynamic config reloading you can share? My understanding is that there is no reliable mechanism for picking up the new config values without restarting the Pod. We don't have current plans to change the behavior, but I'm definitely interested in learning more about the use case and finding a way to make it work for you.
Here is an extremely simplified example:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: example
data:
  foo.txt: |
    bar
---
kind: Pod
apiVersion: v1
metadata:
  name: example
spec:
  containers:
    - name: example
      image: "ubuntu:latest"
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: example
          mountPath: /mnt
  volumes:
    - name: example
      configMap:
        name: example
```

We have a configmap containing `foo.txt` with the value `bar`, mounted into the pod at `/mnt`.
I then modify my manifests so that the value of `foo.txt` changes, and apply the update.
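For concreteness, a hypothetical version of the modified manifest; the replacement value `baz` is purely illustrative and not from the original:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: example
data:
  # "baz" is an illustrative replacement value; the original example used "bar".
  foo.txt: |
    baz
```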
Note the pod is unchanged and remains running.
It takes a few seconds, but the value is eventually updated.

How to have your application pick up these changes is application-specific. Some applications will pick up such changes automatically, or read from disk on every request (for example, if you were using a configmap to serve static content with nginx). Many applications will re-read their config in response to a SIGHUP signal, which you can deliver via a sidecar container that watches the directory for changes via polling or inotify (note you'll need shareProcessNamespace: true on the pod for this to work); see the sketch below.

To expand on our use case: we have a postgres database application which is disruptive to restart, so we want to minimize how often that happens. We want to be able to push config changes that do not require a restart to the configmap, and then have a sidecar container HUP the postgres process when those changes are seen. However, we can't do this if configmaps are not mutable. If the name changes, then the pod must be restarted to pick up the new name; and if we use a fixed name with deleteBeforeReplace, then the configmap contents in the container never update, presumably because Kubernetes no longer considers that pod linked to the new instance of the configmap in the same way it was to the old.

Some other prior art in the community (disclaimer: I just found these with a quick search; I haven't tried them and don't endorse them): [...]
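To illustrate the SIGHUP-via-sidecar approach described above, here is a minimal sketch, assuming a postgres container plus a small shell-based reloader. The image names, the polling loop, and the reliance on the `..data` symlink that Kubernetes swaps on configmap updates are illustrative assumptions, not a tested recipe:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: postgres-with-reloader
spec:
  # Required so the sidecar can see (and signal) the postgres process.
  shareProcessNamespace: true
  containers:
    - name: postgres
      image: "postgres:15"            # illustrative image/tag
      volumeMounts:
        - name: config
          mountPath: /etc/postgresql/conf.d
    - name: config-reloader
      image: "alpine:3"               # illustrative; any image with a shell works
      command:
        - /bin/sh
        - -c
        # Poll the mounted configmap for changes and SIGHUP postgres when the
        # "..data" symlink (swapped atomically by kubelet on updates) changes.
        - |
          last=""
          while true; do
            current=$(readlink /config/..data)
            if [ -n "$last" ] && [ "$current" != "$last" ]; then
              pkill -HUP postgres
            fi
            last="$current"
            sleep 5
          done
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      configMap:
        name: postgres-config         # hypothetical configmap name
```

The shared process namespace is what lets the reloader signal the postgres process from a different container in the same pod.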
It should be noted that everything above can also apply to Secrets. For now we are using a workaround where we have a dynamic provisioner that passes the configmap manifest to [...]
Thanks for the information! I think it should be possible to keep the current default while allowing an opt-in mutable override once pulumi/pulumi#6753 is done. We should be able to set the [...]
I'm asking this here, but am happy to open a separate issue: I updated a [...]
So this behavior has caused me quite a lot of pain, pretty much since we started using Pulumi 18 months ago. :) Adding a [...]. Right now, I have a [...]
Thanks for the feedback. As you note, we took an opinionated stance here when this was implemented, and we are reevaluating whether that still makes sense. Now that we have the [...]
This is a must-have imo. Kubernetes itself now provides the concept of immutable vs. mutable configmaps as well. In our case, we have dynamic config that is hot-reloaded, and we don't want to bounce the pods.
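For reference, a minimal sketch of the native Kubernetes feature mentioned above (the `immutable` field, which I believe went GA around Kubernetes 1.21): once set, the object's data can no longer be updated, and it has to be deleted and recreated instead.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-immutable
data:
  foo.txt: |
    bar
# Native Kubernetes field: with this set, the data above can no longer be
# updated in place; the object must be deleted and recreated to change it.
immutable: true
```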
Please note that we now support configmap mutation as an opt-in provider config option; see details in #1926. Because changing the default behavior would be breaking, this remains opt-in for the moment.
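For anyone landing here, a minimal sketch of what opting in might look like, written as Pulumi YAML. I'm assuming the provider option is named `enableConfigMapMutable`; please check #1926 and the provider docs for the authoritative name. The project and resource names are illustrative:

```yaml
# Pulumi.yaml (sketch): opt in to mutable ConfigMaps on an explicit provider.
name: mutable-configmap-example      # illustrative project name
runtime: yaml
resources:
  k8sProvider:
    type: pulumi:providers:kubernetes
    properties:
      # Assumed option name; confirm against #1926 / the provider docs.
      enableConfigMapMutable: true
  appConfig:
    type: kubernetes:core/v1:ConfigMap
    properties:
      metadata:
        name: example
      data:
        foo.txt: |
          bar
    options:
      provider: ${k8sProvider}
```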
The Kubernetes provider intentionally treats ConfigMap and Secret resources as immutable (requiring a replacement rather than updating in place).
This behavior works around a longstanding upstream issue, but may be surprising to some Kubernetes users.
We should clearly document this behavior, and the reasons for it.
Related:
#1567
#1560