Wait for other pool workers to shut down before forking new workers #139
Using Heroku & resque-pool, how do I signal USR2 so in-progress jobs complete but no new resque jobs are executed? I'm happy (and prefer) not to process enqueued jobs with old code when I do a release, but I don't see any documentation on that. I would like to send USR2 to all my resque workers, wait until the number of working jobs reaches 0, then do my release and restart the workers (resque-pool) to resume work. I just can't figure out how to do it, or whether I'm missing something obvious. :/
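The "wait until working jobs reach 0" step above can be sketched as a small polling loop. In a real deploy script you would pass something like `-> { Resque.info[:working] }` as the probe (`Resque.info` returns a hash that includes a `:working` count); the probe is injected here so the loop itself is self-contained, and the timeout/interval values are arbitrary:

```ruby
# Poll a probe until it reports zero in-progress jobs, or give up
# after `timeout` seconds. Returns true if the pool drained in time.
def wait_for_idle(probe, timeout: 60, interval: 1)
  deadline = Time.now + timeout
  while Time.now < deadline
    return true if probe.call.zero?
    sleep interval
  end
  false
end

# In a deploy script (assumes a live Resque connection):
#   wait_for_idle(-> { Resque.info[:working] }, timeout: 300)
```

This only waits; the signaling itself (USR2 to stop picking up new work, then a restart after the release) still has to be sent to the workers separately.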
Any plans to do a stable release of this? I'd say it is better to release with double memory usage than not to release at all. I see it as 0.7.dev, but it would be nice to polish things up and cut a stable release.
This is the workaround I did: I added a Resque `before_enqueue` hook. Inside it, I check a flag in Redis to know whether I have paused processing. If the flag is set, I simply push the job into the future using resque-scheduler (with another flag for the amount of time to defer, but that's not really needed). When I do a release, I stop the scheduler, set the flag to defer jobs, and stop the workers once no work is being done anymore. At that point, I can happily release. This is the code I have in my hook.
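The commenter's actual hook code was not captured in this thread. A hedged reconstruction of the approach described above might look like the following (the `release:paused` flag name, the `DeferrableJob` class, and the defer interval are all hypothetical; it relies on the Resque convention that a `before_enqueue_*` class method returning `false` cancels the enqueue, and on resque-scheduler's `Resque.enqueue_in`):

```ruby
class DeferrableJob
  @queue = :default

  DEFER_SECONDS = 60 # hypothetical deferral window

  # Any class method named before_enqueue_* is called by Resque
  # before enqueueing; returning false aborts the enqueue.
  def self.before_enqueue_defer_during_release(*args)
    return true unless Resque.redis.get("release:paused") == "1"
    # Paused for a release: push the job into the future instead
    # (Resque.enqueue_in is provided by resque-scheduler).
    Resque.enqueue_in(DEFER_SECONDS, self, *args)
    false
  end

  def self.perform(*args)
    # job body
  end
end
```

With the flag set, new enqueues land on the delayed schedule instead of the live queue, so the workers drain naturally before the release.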
@jchatel I suppose the
I made some changes to my fork, to make it easier to debug.
Re: #132 and #137, the zero-downtime approach starts a new pool while the old pool is still running. Although this is usually fine, it can lead to issues in memory-constrained environments. Our default config loader should take workers from other pools (or orphaned workers with no pool) into account.
See https://github.com/backupify/resque-pool/compare/nevans:master and the discussion on #132. I prefer reading `Resque.redis.smembers("workers")` to looking at `ps`, but `ps` might be more fool-proof.