Very slow performance of create_annotated_heatmap for small dataset #2299

This is taking over 4 s to run, which seems excessively large for such a small dataset:
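The original snippet did not survive the page capture; below is a minimal sketch of a comparable reproduction, assuming a small random matrix (the shape and variable names are placeholders, not the reporter's code):

```python
# Hedged reconstruction (not the reporter's exact code): time a single
# call to create_annotated_heatmap on a small matrix.
import time

import numpy as np
import plotly.figure_factory as ff

z = np.random.rand(6, 13).round(2).tolist()  # small dataset, assumed shape

start = time.time()
fig = ff.create_annotated_heatmap(z)
print(f"create_annotated_heatmap: {time.time() - start:.3f} s")
```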
This can be seen in the snippet below. My further problem is that I'm using this within a Dash app.
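The referenced snippet is also missing from the capture; here is a plausible sketch, assuming it timed two back-to-back calls so that the first measurement includes plotly's one-time loading cost:

```python
# Sketch (assumed, the original snippet was not preserved): time the first
# call with the import included, then a second call once modules are loaded.
import time

import numpy as np

z = np.random.rand(6, 13).tolist()

start = time.time()
import plotly.figure_factory as ff  # first run pays the import/loading cost
ff.create_annotated_heatmap(z)
print(f"first call (incl. import): {time.time() - start:.3f} s")

start = time.time()
ff.create_annotated_heatmap(z)  # second run: modules already loaded
print(f"second call: {time.time() - start:.3f} s")
```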
Output:
So in the output above, the first call is slow but the second is quite fast; this should bear out for the third, fourth, etc. Basically, once things are loaded, performance is quite good. Admittedly, though, this is really annoying for local development.
Using Python 3.7 with the PR at #2368, the code snippet in the original post is much improved on my workstation: plotly 4.6: 0.328 s
Insisting a little on this one due to other factors not mentioned above. It seems that performance when run from a Dash app is not only related to importing. I've recreated the snippet above, running in bare Python/plotly and from within a Dash app. There seems to be a 4 s overhead during the first run, when things are imported. After that, heatmap creation takes 0.25 s to 0.3 s in bare Python, while from Dash it takes 1.25 s to 1.3 s, hence around 5x slower. Bare Python/plotly:
From Dash:
And this is the snippet being benchmarked from Dash. There are 4 dataframes (4 plots being created); each dataframe is a flat 6 x 13.
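That snippet was likewise not preserved; the following is a sketch under the stated assumptions (4 dataframes of shape 6 x 13, one annotated heatmap per dataframe; the dataframe contents here are random placeholders):

```python
# Sketch of the benchmarked section, assuming 4 small dataframes and one
# annotated heatmap per dataframe, as described above.
import time

import numpy as np
import pandas as pd
import plotly.figure_factory as ff

dataframes = [pd.DataFrame(np.random.rand(6, 13)) for _ in range(4)]

start = time.time()
figures = [
    ff.create_annotated_heatmap(
        df.round(2).values.tolist(),
        x=[str(c) for c in df.columns],
        y=[str(i) for i in df.index],
    )
    for df in dataframes
]
print(f"4 heatmaps: {time.time() - start:.3f} s")
```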
@haphaeu can you confirm that these results were obtained using plotly 4.6?
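(For anyone following along, one quick way to check which plotly version an environment actually uses:)

```python
# Print the plotly version importable in the current environment.
import plotly

print(plotly.__version__)
```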
No, those results were done with version 4.5.2. Well spotted. Using the same conda environment, I've updated only plotly. Here's the re-run with 4.6. Bare Python/plotly:
From Dash:
Indeed significantly faster. Thanks, and nicely done!
Great! Is this running on Python 3.7? If not, upgrading to Python 3.7 might increase import performance further :)
This is Python 3.8 |
Ah well, no free lunch there then :)