
Watchtower not applying docker-compose configs: after update #1988

Open
Nick2253 opened this issue Jun 21, 2024 · 7 comments

Comments

@Nick2253

Nick2253 commented Jun 21, 2024

Describe the bug

I have a container created through docker-compose that is configured with a configs: attribute in the compose file. After updating this container, the new container did not have the files specified in configs:, and therefore failed to run appropriately.

I had to redeploy the stack in order for the configs to be applied.

Steps to reproduce

  1. Create a container via docker-compose that is configured with the configs: attribute.
  2. Confirm that the deployed container contains the appropriate files.
  3. Update the container with Watchtower.
  4. Observe that the new container does not contain the appropriate files.
  5. Re-deploy the container with docker compose up, and observe that the redeployed container now contains the appropriate files.

Expected behavior

The newly created containers will be configured exactly as specified in the compose file.

Screenshots

No response

Environment

  • CoreOS 40.20240602.3.0
  • Docker 24.0.5

Your logs

time="2024-06-21T00:47:23-07:00" level=info msg="Watchtower 1.7.1"
time="2024-06-21T00:47:23-07:00" level=info msg="Using no notifications"
time="2024-06-21T00:47:23-07:00" level=info msg="Checking all containers (except explicitly disabled with label)"
time="2024-06-21T00:47:23-07:00" level=info msg="Scheduling first run: 2024-06-21 04:00:00 -0700 PDT"
time="2024-06-21T00:47:23-07:00" level=info msg="Note that the first check will be performed in 3 hours, 12 minutes, 36 seconds"
time="2024-06-21T04:00:09-07:00" level=info msg="Found new nginx:alpine image (96e3e4fd2098)"
time="2024-06-21T04:00:32-07:00" level=info msg="Stopping /merginmaps-proxy (8fbca3069d77) with SIGTERM"
time="2024-06-21T04:00:33-07:00" level=info msg="Creating /merginmaps-proxy"
time="2024-06-21T04:00:34-07:00" level=info msg="Removing image f597a450f464"
time="2024-06-21T04:00:34-07:00" level=info msg="Session done" Failed=0 Scanned=19 Updated=1 notify=no

Additional context

Minimal working example docker-compose.yaml file:

version: '3.9'
services:
  web:
    image: nginx:alpine
    configs:
      - source: 01-config-test.sh
        target: /docker-entrypoint.d/01-config-test.sh
        mode: 0777
configs:
  01-config-test.sh:
    content: |
      #!/bin/sh
      echo "$$ME: info: docker-compose config applied correctly"

If the config is applied correctly, you should see the following in your nginx container log:

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/01-config-test.sh
01-config-test.sh: info: docker-compose config applied correctly
...

@kramerrs

I have a similar problem. I am using a container-launching system, ShinyProxy, which launches user applications of several types. We are using Docker Swarm, and each application gets a service for each user. In the past we used Watchtower to monitor the images for updates: it killed the container, pulled the image, and restarted the container. This worked OK; since a developer was using and testing it, the active apps would give error messages, eventually the service would stop, and ShinyProxy would launch a new container.

However, ShinyProxy has since moved to pre-initialized containers to avoid the long container load times caused by Swarm network configuration. This presents a problem because restarted containers don't reattach to the service or to ShinyProxy. ShinyProxy is now smart enough to realize that it needs to launch a new container after a developer loads the first one, but I feel this will leave stale services around. Is it possible to have some sort of cleanup script run to stop the services associated with the containers? For nightly restarts, I have already set up some docker service stop commands.

@Nick2253
Author

@kramerrs, I'm not sure that your issue is related to mine in any way. If ShinyProxy manages and pre-initializes containers, then there's really no way for Watchtower to know how to replicate what ShinyProxy is doing. On the other hand, docker-compose is a first-party solution meant to streamline management of containers in place of direct command-line calls. Everything that Compose does should be doable from the Docker CLI, so Watchtower should be able to replicate it.

In your case, I'd use Watchtower's linked-containers labels to specify the relationships, so that Watchtower does what you need it to do.

I'm not familiar with ShinyProxy, but there may be a way for Watchtower to use lifecycle hooks to make calls to ShinyProxy to tell it how it should manage its containers.
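For reference, both mechanisms are configured through container labels on the managed container. A minimal sketch in compose form, assuming a hypothetical app container (the image, linked container name, and script path are placeholders, not anything tested against ShinyProxy):

```yaml
services:
  my-app:
    image: example/my-app   # placeholder image
    labels:
      # Linked containers: restart this container whenever the
      # named container(s) receive an update
      - "com.centurylinklabs.watchtower.depends-on=other-container"
      # Lifecycle hook: command run inside this container after an update
      # (Watchtower must be started with WATCHTOWER_LIFECYCLE_HOOKS=true)
      - "com.centurylinklabs.watchtower.lifecycle.post-update=/usr/local/bin/reattach.sh"
```

As noted below, the lifecycle hook runs inside the updated container itself, which may or may not fit a setup where an external manager owns the containers.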

@kramerrs

@Nick2253 Thank you for the suggestions. In this instance it would be sufficient just to stop the service and/or container; ShinyProxy is smart enough to relaunch and pre-initialize containers, provided the images have been updated in the Docker cache. I had seen the linked containers and lifecycle hooks, but it wasn't obvious how to accomplish this with them. I will go back and look at them some more.

@Nick2253
Author

Nick2253 commented Jul 15, 2024

@kramerrs If all you need is for Watchtower to pull the latest images, then use the --no-restart argument. Watchtower will still pull new images, but won't restart the containers. If you can configure ShinyProxy to auto-relaunch when a new image is present, you're golden. Even if ShinyProxy can't tell when Watchtower has done this, you can probably configure ShinyProxy to automatically re-launch the containers nightly, right after Watchtower runs.
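A minimal compose sketch of a Watchtower service running with that flag (the volume mount is the usual Docker socket; treat this as a starting point, not a complete configuration):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    # Pull new images on schedule, but leave running containers untouched
    command: --no-restart
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```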

@gnowong

gnowong commented Jul 26, 2024

So I have been looking into fixing this, and have gotten rather sucked into the Watchtower rabbit hole. It seems Watchtower naively recreates the containers, which makes sense from a purely Docker perspective. It gets more complicated once Compose is introduced into the mix, since Compose operates at a higher level of abstraction. I am not an expert on Docker (or Compose), but it seems Watchtower would need a concept of a service, so that whenever a container image within a service is updated, all of the containers within that service are restarted with the updated image. This goes along with what @kramerrs said:

Is it possible to have some sort of cleanup script run to stop the services associated with the containers?

I would imagine Watchtower could hook into Compose's APIs to restart the container within a service using the API equivalent of docker compose up -d, and then Compose would handle loading in the config and any other actions which need to be taken. I am going to try my hand at implementing this but was wondering if @piksel had any thoughts on the matter?
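As a starting point for that idea: Compose already stamps every container it creates with labels identifying the project, service, and config files. A hedged sketch of how an updater could rebuild the equivalent docker compose up -d invocation from those labels (the function name is hypothetical; the label keys are the standard ones Compose writes):

```python
def compose_recreate_command(labels):
    """Given a container's labels, build the `docker compose up -d` command
    that would recreate the service it belongs to, or return None if the
    container is not Compose-managed."""
    project = labels.get("com.docker.compose.project")
    service = labels.get("com.docker.compose.service")
    if not project or not service:
        return None  # plain container: fall back to a normal recreate
    cmd = ["docker", "compose", "--project-name", project]
    # Compose records the config file paths it was invoked with
    config_files = labels.get("com.docker.compose.project.config_files", "")
    for path in filter(None, config_files.split(",")):
        cmd += ["--file", path]
    return cmd + ["up", "-d", service]
```

Delegating the recreate to Compose this way would let Compose re-apply configs:, secrets, and anything else the CLI normally handles, rather than Watchtower trying to clone the container spec field by field.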

It also seems that a number of other issues filed against Watchtower involve Watchtower/Compose interactions, so doing this may fix some of those as well.

@kramerrs

At the service level there are a large number of things that can happen, and Watchtower isn't the only software that doesn't handle these situations gracefully; other systems (Kubernetes, etc.) clearly have better support. However, in a Docker framework it is sort of necessary to handle network routing. For me, it would be enough to have a hook where I could run a script to orchestrate the restarts. Watchtower has an elaborate hook system for pre/post update and so on, but those hooks appear to run within the launched container, which isn't really feasible here: ShinyProxy needs to launch the containers, and I don't want to add these scripts to each of dozens of containers. If Watchtower could pull the image and then run the script (in its own container, or another spun-up container), I could then use Docker to stop the service, which is all I need. Others may have other requirements, though I think being able to run a script at that point would go a long way toward being able to manage services.

As is, I have been able to work around this by just killing the containers. It leaves stale services and containers, but I now have a nightly cleanup script, so developers are able to continue, and ShinyProxy has made its service more robust at finding dead containers and spinning up new services.
