services started with docker compose won't be stopped by sablier unless hit with a request first #153
Hi @xsolinsx, without any prior request, Sablier has no way of knowing which containers to start and stop. The beta feature of …
Nice, so this is something that will be implemented in the next release, right?

Edit: I've found a sketchy workaround to execute right after the …

```sh
grep -h "names: " /path/to/traefik/file_dynamic_conf/sablier_middlewares_definitions.yml |
  cut -d':' -f 2 |
  sed "s/,/ /g" |
  tr '\n' ' ' |
  xargs docker stop
```
I have just started implementing Sablier with Caddy and the new groups function. Do you think it would be possible to include a function that shuts down containers which are up but shouldn't be, because there are no requests for them?
On startup, why can't Sablier get a list of all containers with the sablier.enable label set to true and apply the default session time to them? I've also noticed that even though the service is up, once Sablier first starts it still shows the "waiting" screen on the first hit, even though it should bypass it since the service is up. I assume it's for the same underlying reason: it hasn't built this "internal registry" of services.
Add a healthcheck to your services to fix the problem, or configure a status server to check against.
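For reference, a healthcheck can be attached directly to a container. A minimal sketch, where the image name, port and endpoint are hypothetical placeholders (and curl is assumed to exist inside the image):

```sh
# Hypothetical service: replace image, port and endpoint with your own.
docker run -d --name myservice \
  --health-cmd 'curl -f http://localhost:8080/health || exit 1' \
  --health-interval 30s \
  --health-retries 3 \
  myimage:latest
```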
There already is a healthcheck on the service.

Currently: …

Expected: …

What is a status server?
Sablier does not (yet) pick up already-started services; you need to access them at least once in order for them to be downscaled. I will probably change this behavior with an option in the future.
Like Uptime Kuma... but never mind, I was wrong about it.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

unstale

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

unstale
I am also quite interested in the resolution of this bug. Is there anything planned (roughly) for when it will be solved? Will it be part of the next release?

My current workaround for this bug is to extend my CI/CD pipeline: after deploying and starting the container, I send one request to it without waiting for any response. This works with both the dynamic and blocking strategies of Sablier, and for all my containers regardless of their response body and HTTP status code.
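A minimal sketch of such a post-deploy warm-up step, assuming the service is reachable through the reverse proxy at a placeholder URL:

```sh
# Fire one request through the proxy so Sablier registers the container.
# Response body and status code are irrelevant for this workaround,
# hence no -f flag and the trailing "|| true".
curl -sS -o /dev/null --max-time 10 https://myservice.example.com/ || true
```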
I do the same. When I restart services, I have to hit them once with a request so Sablier kicks in. Not ideal.
Reviving this issue: I will create this feature, but I'd like to gather some feedback here first. The goal is a startup behavior where, after everything has been loaded properly, workloads that were auto-discovered (using labels) are shut down if they are currently running and not in an active session. How would you name such an option? If you have some ideas, that'd be great. I know you'd be interested in this, @gorositopablo.
I think I like … And I am really looking forward to this new function.
This feature adds the capability to stop unregistered running instances upon startup. Previously, you had to stop running instances manually or issue an initial request that would shut the instances down afterwards. With this change, all discovered instances will be shut down. They need to be registered using labels, e.g. sablier.enable=true. Fixes #153
Agree, I believe that …
🎉 This issue has been resolved in version 1.8.0-beta.7 🎉 The release is available on: …

Your semantic-release bot 📦🚀
You can try using the Docker tag … This is enabled by default because, as you all stated, this is the behavior we'd actually expect from Sablier, and I agree. It's not a breaking change, in the sense that old configurations are still valid. You can opt out by setting … I'm waiting for your feedback!
Thanks for implementing this. Just a quick question: if I'm using …
Yes, Sablier needs to know which containers to stop. For that, you need to add the following label: … You can still use … This is purely auto-discovery done by Sablier at startup; it does not involve any reverse proxy configuration or adjustment.
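As a minimal sketch of such a registration, using the sablier.enable=true label mentioned earlier in the thread (the container name and image are placeholders):

```sh
# The label marks the container for Sablier's startup auto-discovery,
# so it can be stopped even if no request has gone through the proxy yet.
docker run -d --name whoami --label sablier.enable=true traefik/whoami
```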
Just tested with the beta and it works well.
If I run twice the same …
I confirm I have the same behavior.
Can you please share a quick reproducible compose file? One behavior of the auto-stop is to stop all containers that are currently running but that Sablier is not aware of. So upon restart, Sablier loads your container sessions back from the previous run and will not shut them down.
I think what they are referring to is that the auto-stopping is only run at startup, not periodically. So if you restart your container (not the Sablier one), it does not get stopped again until you make another request. I.e., the containers are stopped at "startup" of Sablier only. Perhaps it should run periodically?
The current behavior is to stop the discovered containers at startup only, not periodically. I think the desired behavior is actually to register all your containers and have Sablier stop them for you. If the current behavior does not meet your needs, maybe we can refine or change it, or add a configuration option to get to the desired behavior. What behaviors would you like to have?
For my case, I'm happy with the current behavior. However, I can imagine the scenario described in https://github.com/acouvreur/sablier/issues/153#issuecomment-1999167626. For example, say you have automatic updates of your container, or you are doing some development that requires updating it. You may be hitting the service only on an internal address and not via Sablier; every time that happens, you'd need to access it through Sablier again for it to notice the new "up" state. You can of course update the image of your containers while they are stopped, which is one way around this. Perhaps others can chime in with their use case/reasoning.
Exactly the issue and behavior I expect! Checking periodically would be the solution from my POV.
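As a crude illustration of what a periodic check could do — this is not Sablier's implementation, just a naive external sweep that ignores active sessions entirely, so it is only safe when nothing is supposed to be running:

```sh
# Run e.g. from cron: stop every running container labeled for Sablier.
# A real implementation inside Sablier would skip containers that have
# an active session instead of stopping everything indiscriminately.
docker ps -q --filter label=sablier.enable=true | xargs -r docker stop
```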
I think doing it periodically makes sense, though we should be a bit careful about the period. Imagine a scenario where you bring up a container at the end of the time period: it would immediately get stopped. It should probably work like: …
Perhaps this feature should be some kind of "reconciliation" that happens periodically. I better understand the use case now. We can maybe add this as an extra feature, and also have some opt-in/opt-out labels. We could go further and have labels with pre-defined hours for reconciliation. Anyway, we might have a new feature here, which is not the … From my point of view, the …
I'm not sure about all the comments with docker-compose, but I've tried with … So, let's say that I: … While Sablier does kill the pod at the start of its lifecycle, it means that I need to restart Sablier each time I have a new release. Thanks @acouvreur
Good idea @acouvreur. I think we can start simple, without specific hours, and add them later if requested? I personally don't have this need.
🎉 This issue has been resolved in version 1.8.0 🎉 The release is available on: …

Your semantic-release bot 📦🚀
Describe the bug
I have a docker compose setup which starts a multitude of services. Sablier is correctly configured, as it is able to start/stop services as needed, but (here's the bug) ONLY once the services have been hit by a request. If I only start the services with `docker compose up -d`, all services stay up & running forever. In order for them to be "detected" by Sablier, it seems I need to hit them with a request.
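A minimal sketch of the reproduction, assuming Sablier and the services are defined in the same compose file:

```sh
docker compose up -d   # start every service, including Sablier itself
# Wait longer than the configured session duration, sending no request...
docker ps              # the services are still running: Sablier never "saw" them
```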
Context
Expected behavior
After starting services with `docker compose up -d`, Sablier should wait for the time configured in the config and then stop them, without the need to hit them first with a request.