SIGTERM on multiple workers fails #671
Conversation
```diff
@@ -71,6 +64,7 @@ def restart(self):
        self.process.start()

    def shutdown(self):
        self.process.terminate()
```
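(For context, a minimal standalone sketch of what `terminate()` followed by `join()` does for a plain `multiprocessing.Process`; the names here are illustrative, not uvicorn's actual supervisor code.)

```python
import multiprocessing
import time

def serve():
    # Stand-in for a worker's serve loop.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=serve)
    p.start()
    p.terminate()      # on Unix this sends SIGTERM to the child
    p.join()           # reaps the child; returns promptly after terminate()
    print(p.exitcode)  # -15 on Unix, i.e. killed by SIGTERM
```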
Does this mean my worker can shut down in the middle of a transaction happening in, say, Starlette's background task?
Not sure on this one.
One other thing is that I misread the docs: they say to terminate (or close) before join in the case of a Pool, which is not our case... (https://docs.python.org/3/library/multiprocessing.html?highlight=join#multiprocessing.pool.Pool.join)
I'm kind of stuck and don't know how to solve this; it's annoying to hit the 20s timeout in a container.
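For reference, a minimal sketch of the `Pool` pattern those docs describe: `join()` must be preceded by `close()` or `terminate()`, otherwise it raises `ValueError`.

```python
import multiprocessing

def square(n):
    return n * n

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=2)
    print(pool.map(square, range(4)))  # [0, 1, 4, 9]
    pool.close()  # or pool.terminate(); join() raises ValueError without one of these
    pool.join()   # waits for the worker processes to exit
```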
@rafalp I think it won't shut down in the middle of a transaction.
Uvicorn handles SIGTERM and sends the shutdown event to Starlette.
If it does, it's a Starlette issue (maybe on shutdown Starlette should wait for all background tasks...).
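A minimal sketch of hooking that shutdown event in Starlette (assuming Starlette's `on_shutdown` hook; run it with something like `uvicorn app:app`):

```python
from starlette.applications import Starlette

async def on_shutdown():
    # Runs when the server sends the lifespan "shutdown" event,
    # which uvicorn emits while handling SIGTERM/SIGINT.
    print("cleaning up before exit")

app = Starlette(on_shutdown=[on_shutdown])
```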
OK, after putting loggers on basically every line, it appears to me that the multiprocess setup can hang indefinitely not only in Docker but also normally. Docker sends a SIGTERM signal, not a SIGINT, and the same happens even if you're NOT in a container.
Here is the deal: if the signal is a SIGTERM, it will hang indefinitely on join(), so I think in that case only we need to terminate() the process; maybe there is a nicer way to handle that case. In the case of one worker, a SIGTERM is nicely dealt with by Line 571 in a839da2
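A hedged sketch of the terminate-before-join idea for the multi-worker case (illustrative supervisor code, Unix-only because of `signal.pause()`; not uvicorn's actual implementation):

```python
import multiprocessing
import signal
import sys
import time

def serve():
    while True:
        time.sleep(1)

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=serve) for _ in range(2)]
    for w in workers:
        w.start()

    def handle_term(signum, frame):
        for w in workers:
            w.terminate()  # without this, the join() below can block forever
        for w in workers:
            w.join()
        sys.exit(0)

    signal.signal(signal.SIGTERM, handle_term)
    while True:
        signal.pause()  # wait for a signal such as Docker's SIGTERM
```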
Similar comment wrt. the Pull Request name on this as in #636 😄
Fix for #668
It essentially reverts #620 and applies what the Python documentation recommends, i.e. terminate or close a process before joining it.
To summarize: before #620, processes in Docker containers were not killed gracefully, so it fixed that, but it introduced the bug described in #668, where for non-Docker deployments the shutdown was too abrupt and the children's lifespan events were skipped.
I think this PR deals nicely with both issues. It's rather tedious to test things inside and outside a container, so I hope this is OK.