Performance Regression in 1.63 #13331
Comments
I wonder if it's the default size of the cache?

@squahtx that looks like a good guess!
The size of that cache can be controlled by adding an entry to the cache configuration. It sounds like the default size for that cache is too small, and we should ship with a larger default. I'd be interested in seeing what a good cache factor for your deployment turns out to be.
I doubled the cache factor from our global default of 1 to 2 and will observe the metrics for a bit. Thanks for the prompt help! |
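For readers following along, per-cache sizing of the kind described above is configured in Synapse's `homeserver.yaml` under the `caches` section. The specific cache involved is not named in this thread, so `example_cache_name` below is a placeholder, not the actual cache; this is a sketch of the mechanism, not the exact change applied:

```yaml
# homeserver.yaml (sketch only)
caches:
  # Multiplier applied to every cache size; "global default of 1" above.
  global_factor: 1.0
  # Override the factor for individual caches, e.g. doubling one of them:
  per_cache_factors:
    example_cache_name: 2.0  # placeholder name, not the real cache from this issue
```

After editing the config, Synapse needs to be restarted for the new cache factors to take effect.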
1.63.1 seems to have fixed the problem, load and event send time is back to normal immediately after upgrading. |
Description
After updating to Synapse 1.63.0 today, one of my Synapse instances experiences a noticeable performance regression. According to the metrics (snapshot at https://fsr-ops.cs.tu-dortmund.de/dashboard/snapshot/PyUD0nsC3zYGM3AyCOrwWOOWTbdRsXZo?orgId=0), `handle_new_client_event` and `action_for_event_by_user` now consume significantly more CPU and database resources. #13100 and #13078 seem to have touched these functions for 1.63, so maybe they are the culprits?
Steps to reproduce
Homeserver
fachschaften.org
Synapse Version
1.63.0
Installation Method
Docker (matrixdotorg/synapse)
Platform
Docker on Ubuntu 20.04 in LXC
Relevant log output
I didn't see anything relevant in the log output (and there is far too much to paste everything); the Grafana snapshot is hopefully more helpful than a huge amount of logs.
Anything else that would be useful to know?
No response