[druid] Fixing issue 3894 multi-processing w/ Gunicorn #3895
Conversation
Multithreaded execution was introduced because refreshing a cluster with many datasources took... a long time. Metadata queries were issued sequentially; multithreading them makes refreshing lots of datasources much faster.
Thanks @Mogball for the explanation. Are you opposed to using multi-threading rather than multi-processing? Also, could you provide context on the
Looks okay to me. I'm pretty sure it should be thread-safe.
If you are just waiting on Druid I/O, threads instead of processes should be fine. To avoid the possible creation of a ton of threads, you can initialize the pool outside the function so the number is fixed.
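The suggestion above can be sketched as follows. This is an illustrative example, not Superset's actual code: `fetch_metadata` and `refresh` are hypothetical stand-ins for the real per-datasource metadata query.

```python
# Sketch of a module-level thread pool of fixed size, so refreshes
# don't each spawn their own set of threads.
from multiprocessing.pool import ThreadPool

POOL_SIZE = 4
pool = ThreadPool(POOL_SIZE)  # initialized once, outside the function

def fetch_metadata(datasource):
    # placeholder for the per-datasource Druid metadata query
    return {"name": datasource, "columns": []}

def refresh(datasources):
    # issue all metadata queries concurrently; threads suffice here
    # because the work is I/O-bound (waiting on Druid responses)
    return pool.map(fetch_metadata, datasources)
```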
@mistercrunch @Mogball @xrmx I'm not overly familiar with this portion of the code base and its interactions with Gevent; I'm merely trying to fix an issue that is plaguing us in production. I can't fathom the scale of your Druid datasources, @Mogball, but for us this request takes mere seconds, and thus I'm unsure whether the 4x speedup (maximum) of a multi-threaded environment outweighs the additional complexity, potential thread-safety concerns, etc.
The speedup is more than just 4x (or whatever the number of available threads). Most of the time is spent waiting for Druid to respond, which means the metadata requests are issued more or less simultaneously for each datasource and then processed as the responses come back. This makes a difference of roughly 40 seconds vs. 3 seconds (when hard-refreshing all datasources). This should be thread-safe, since there is no interaction outside the datasource object and with the Superset backend during refresh.
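The point that the speedup exceeds the thread count for I/O-bound work can be demonstrated with a small self-contained timing sketch (using `time.sleep` as a stand-in for waiting on Druid):

```python
import time
from multiprocessing.pool import ThreadPool

def slow_query(i):
    time.sleep(0.1)  # simulate waiting on a Druid response
    return i

start = time.time()
# all 8 "queries" wait concurrently, so total wall time is
# roughly one query's latency (~0.1s), not 8 x 0.1s sequentially
results = ThreadPool(8).map(slow_query, range(8))
elapsed = time.time() - start
```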
Heads up that there are caveats with thread-safety around SQLAlchemy |
This PR fixes issue #3894.
It seems that using multi-processing for fetching Druid datasource metadata asynchronously fails under a Gunicorn environment. Switching to multi-threading seems to remedy the problem.
Note previously `Pool()` created only 4 processes, and I'm not overly certain why this part of the code needs to leverage multi-processing from a performance standpoint, i.e., I would be supportive of removing the multi-threading/processing logic entirely.

Also, it seems like we're using the synchronous PyClient, and thus `refresh_async` seems like a misnomer to me.

to: @mistercrunch @Mogball @xrmx
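A minimal sketch of the change this PR describes, assuming a hypothetical `refresh_datasource` task function (the real code fetches Druid metadata): `multiprocessing.Pool` can misbehave under Gunicorn's fork-based workers, while `multiprocessing.pool.ThreadPool` exposes the same `map()` interface and works for these I/O-bound tasks.

```python
from multiprocessing.pool import ThreadPool  # was: from multiprocessing import Pool

def refresh_datasource(name):
    # stand-in for fetching one datasource's metadata from Druid
    return name.upper()

def refresh_all(names):
    pool = ThreadPool(4)  # was: Pool(4) — a drop-in swap
    try:
        return pool.map(refresh_datasource, names)
    finally:
        pool.close()
        pool.join()
```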