analytics - improve performance on big logs #556
Comments
The culprit is this seq scan:
...because we're not using the index with this where clause: Better use
which enables a far speedier index scan.
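The original query and EXPLAIN output are not reproduced in this thread, but the general pattern is a common one: wrapping an indexed column in a function keeps the planner from using the index, while an equivalent range predicate on the bare column allows an index scan. A minimal sketch, assuming a `log` table with an indexed timestamp column `date` (names are assumptions, not the actual schema):

```sql
-- Hypothetical illustration; the actual query from this comment is not shown above.
-- Assuming: CREATE INDEX log_date_idx ON log (date);

-- Wrapping the indexed column in a function forces a sequential scan,
-- because the plain index on "date" cannot be used for EXTRACT(...):
EXPLAIN SELECT count(*)
  FROM log
 WHERE EXTRACT(YEAR  FROM date) = 2015
   AND EXTRACT(MONTH FROM date) = 1;

-- An equivalent range predicate on the bare column allows an index scan:
EXPLAIN SELECT count(*)
  FROM log
 WHERE date >= DATE '2015-01-01'
   AND date <  DATE '2015-02-01';
```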
Monthly requests optimized with f67a9a1. For the global stats, we might need to consolidate the results in a separate table.
Or maybe a calculated table maintained regularly by the analytics apps? (just some random thoughts)
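One possible shape for such a consolidated/calculated table, sketched here with assumed table and column names (not the actual ogc-statistics schema), refreshed periodically by a job or by the analytics app:

```sql
-- Hypothetical pre-aggregated stats table; names are assumptions.
CREATE TABLE stats_monthly (
    year        integer NOT NULL,
    month       integer NOT NULL,
    service     text    NOT NULL,
    layer       text    NOT NULL,
    nb_requests bigint  NOT NULL,
    PRIMARY KEY (year, month, service, layer)
);

-- Periodic refresh of the current month (kept compatible with PostgreSQL 9.1,
-- which predates INSERT ... ON CONFLICT):
BEGIN;
DELETE FROM stats_monthly
 WHERE year  = EXTRACT(YEAR  FROM now())::int
   AND month = EXTRACT(MONTH FROM now())::int;
INSERT INTO stats_monthly (year, month, service, layer, nb_requests)
SELECT EXTRACT(YEAR  FROM date)::int,
       EXTRACT(MONTH FROM date)::int,
       service,
       layer,
       count(*)
  FROM log
 WHERE date >= date_trunc('month', now())
 GROUP BY 1, 2, 3, 4;
COMMIT;
```

Global stats would then be served from `stats_monthly` instead of scanning the full log table.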
@Vampouille is the "Hero of the Day", see http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html
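For reference, partitioning in the linked PostgreSQL 9.1 docs is done with table inheritance plus CHECK constraints; a minimal sketch with assumed names (one monthly child of the `log` table):

```sql
-- Hypothetical sketch following the PostgreSQL 9.1 partitioning documentation;
-- table and column names are assumptions.
CREATE TABLE log_2015_01 (
    CHECK (date >= DATE '2015-01-01' AND date < DATE '2015-02-01')
) INHERITS (log);

CREATE INDEX log_2015_01_date_idx ON log_2015_01 (date);

-- A BEFORE INSERT trigger on "log" would route new rows to the right child.
-- With constraint exclusion enabled, a query restricted to one month only
-- scans the matching partition:
SET constraint_exclusion = partition;
EXPLAIN SELECT count(*)
  FROM log
 WHERE date >= DATE '2015-01-01'
   AND date <  DATE '2015-02-01';
```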
I saw several indexes on the log table; maybe we can speed up insertion by removing some of those indexes (for example: layer, service) without slowing down SELECT queries.
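Before dropping anything, the statistics views can show whether those indexes are ever used by reads; a hedged sketch (index names are assumptions):

```sql
-- Check how often each index on the log table has been used:
SELECT indexrelname, idx_scan
  FROM pg_stat_user_indexes
 WHERE relname = 'log'
 ORDER BY idx_scan;

-- Indexes that SELECT queries never use still have to be updated on every
-- INSERT; dropping them is easy to revert if a query regresses:
DROP INDEX log_layer_idx;
DROP INDEX log_service_idx;
```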
+1, thanks
I just finished some tests with 6M records on my desktop. Query duration changes from 2500ms to 80ms with the following modifications:
I think I will commit those changes at the beginning of next week.
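The list of modifications is not preserved in this thread; as a hedged note on reproducing such timings, `EXPLAIN (ANALYZE)` runs the statement and reports the chosen plan and actual execution time (the query below is only a placeholder):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
  FROM log
 WHERE date >= DATE '2015-01-01'
   AND date <  DATE '2015-02-01';
```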
When the ogc-statistics database gets huge, analytics takes too much time to display the stats.