Fix a potentially-huge sql query (matrix-org#7274)
We could end up looking up tens of thousands of events, which could cause large
amounts of data to be logged to the postgres log.
richvdh authored and phil-flex committed Jun 16, 2020
1 parent 6de707a commit 3801bbc
Showing 2 changed files with 17 additions and 7 deletions.
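
For context, the fix below caps each lookup at 1,000 event IDs by iterating over batch_iter(initial_events, 1000). The following is a minimal, illustrative sketch of what such a batching helper does, assuming it behaves like Synapse's batch_iter utility (yield tuples of at most `size` items until the input is exhausted); it is not copied from the Synapse tree.

    from itertools import islice
    from typing import Iterable, Iterator, Tuple, TypeVar

    T = TypeVar("T")

    def batch_iter(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:
        """Yield tuples of at most `size` items from `iterable`."""
        iterator = iter(iterable)
        # iter(callable, sentinel): keep calling the lambda until it returns
        # the empty tuple, i.e. until the underlying iterator is exhausted.
        return iter(lambda: tuple(islice(iterator, size)), ())

With the batch size set to 1000, each SQL statement's IN-list is bounded, so even a room involving tens of thousands of events produces a series of modestly-sized queries rather than one enormous one.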
1 change: 1 addition & 0 deletions changelog.d/7274.bugfix
@@ -0,0 +1 @@
Fix a sql query introduced in Synapse 1.12.0 which could cause large amounts of logging to the postgres slow-query log.
23 changes: 16 additions & 7 deletions synapse/storage/data_stores/main/event_federation.py
@@ -173,19 +173,28 @@ def _get_auth_chain_difference_txn(
             for event_id in initial_events
         }
 
+        # The sorted list of events whose auth chains we should walk.
+        search = []  # type: List[Tuple[int, str]]
+
         # We need to get the depth of the initial events for sorting purposes.
         sql = """
             SELECT depth, event_id FROM events
             WHERE %s
-            ORDER BY depth ASC
         """
-        clause, args = make_in_list_sql_clause(
-            txn.database_engine, "event_id", initial_events
-        )
-        txn.execute(sql % (clause,), args)
+        # the list can be huge, so let's avoid looking them all up in one massive
+        # query.
+        for batch in batch_iter(initial_events, 1000):
+            clause, args = make_in_list_sql_clause(
+                txn.database_engine, "event_id", batch
+            )
+            txn.execute(sql % (clause,), args)
 
-        # The sorted list of events whose auth chains we should walk.
-        search = txn.fetchall()  # type: List[Tuple[int, str]]
+            # I think building a temporary list with fetchall is more efficient than
+            # just `search.extend(txn)`, but this is unconfirmed
+            search.extend(txn.fetchall())
+
+        # sort by depth
+        search.sort()
 
         # Map from event to its auth events
         event_to_auth_events = {}  # type: Dict[str, Set[str]]
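To see the resulting pattern end to end, here is a hedged, self-contained sketch of the batched lookup using the standard sqlite3 module. fetch_depths is a hypothetical name invented for illustration, and the inline "IN (?, ...)" clause stands in for what make_in_list_sql_clause would generate on the SQLite engine (on PostgreSQL Synapse can use an array-based clause instead); this is not the actual storage-layer code.

    import sqlite3
    from itertools import islice
    from typing import Iterable, List, Tuple

    def fetch_depths(
        conn: sqlite3.Connection,
        event_ids: Iterable[str],
        batch_size: int = 1000,
    ) -> List[Tuple[int, str]]:
        """Look up (depth, event_id) rows in bounded batches, sorting in Python."""
        search: List[Tuple[int, str]] = []
        iterator = iter(event_ids)
        while True:
            batch = list(islice(iterator, batch_size))
            if not batch:
                break
            # Stand-in for make_in_list_sql_clause on SQLite: "event_id IN (?, ?, ...)"
            clause = "event_id IN (%s)" % ", ".join("?" for _ in batch)
            rows = conn.execute(
                "SELECT depth, event_id FROM events WHERE %s" % clause, batch
            ).fetchall()
            search.extend(rows)
        # Sorting in Python replaces the ORDER BY that a single query could do.
        search.sort()
        return search

Dropping the ORDER BY and sorting in Python is what keeps the change correct: each batch is only a fragment of the result, so a per-query ORDER BY would no longer yield a globally sorted list.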
