This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Make historical events discoverable from backfill for servers without any scrollback history (MSC2716) (federation) #10245

Merged
Changes from 44 commits
Commits
47 commits
d2e2aa7
Make historical messages available to federated servers
MadLittleMods Jun 24, 2021
2d942ec
Debug message not available on federation
MadLittleMods Jun 25, 2021
38bcf13
Add base starting insertion point when no chunk ID is provided
MadLittleMods Jun 25, 2021
e405a23
Fix messages from multiple senders in historical chunk
MadLittleMods Jun 29, 2021
36f1565
Remove debug lines
MadLittleMods Jun 29, 2021
05d6c51
Messing with selecting insertion event extremities
MadLittleMods Jul 7, 2021
defc536
Merge branch 'develop' into madlittlemods/2716-backfill-historical-ev…
MadLittleMods Jul 7, 2021
dfad8a8
Move db schema change to new version
MadLittleMods Jul 7, 2021
7d850db
Add more better comments
MadLittleMods Jul 7, 2021
164dee4
Make a fake requester with just what we need
MadLittleMods Jul 7, 2021
04b1f7e
Store insertion events in table
MadLittleMods Jul 8, 2021
b703962
Make base insertion event float off on its own
MadLittleMods Jul 8, 2021
8c205e5
Validate that the app service can actually control the given user
MadLittleMods Jul 9, 2021
7b8b2d1
Add some better comments on what we're trying to check for
MadLittleMods Jul 9, 2021
281588f
Merge branch 'develop' into madlittlemods/2716-backfill-historical-ev…
MadLittleMods Jul 9, 2021
4226165
Continue debugging
MadLittleMods Jul 12, 2021
baae5d8
Share validation logic
MadLittleMods Jul 12, 2021
c05e43b
Add inserted historical messages to /backfill response
MadLittleMods Jul 13, 2021
02b1bea
Remove debug sql queries
MadLittleMods Jul 13, 2021
66cf5be
Merge branch 'develop' into madlittlemods/2716-backfill-historical-ev…
MadLittleMods Jul 13, 2021
ab8011b
Some marker event implementation trials
MadLittleMods Jul 14, 2021
f20ba02
Clean up PR
MadLittleMods Jul 14, 2021
64aeb73
Rename insertion_event_id to just event_id
MadLittleMods Jul 14, 2021
ea7c30d
Add some better sql comments
MadLittleMods Jul 14, 2021
9a6fd3f
More accurate description
MadLittleMods Jul 14, 2021
0f6179f
Add changelog
MadLittleMods Jul 14, 2021
5970e3f
Make it clear what MSC the change is part of
MadLittleMods Jul 14, 2021
bc13396
Add more detail on which insertion event came through
MadLittleMods Jul 14, 2021
669da52
Address review and improve sql queries
MadLittleMods Jul 14, 2021
9a86e05
Only use event_id as unique constraint
MadLittleMods Jul 14, 2021
8999567
Fix test case where insertion event is already in the normal DAG
MadLittleMods Jul 15, 2021
35a4569
Remove debug changes
MadLittleMods Jul 15, 2021
b2be8ce
Switch to chunk events so we can auth via power_levels
MadLittleMods Jul 20, 2021
04a29fe
Switch to chunk events for federation
MadLittleMods Jul 20, 2021
258fa57
Add unstable room version to support new historical PL
MadLittleMods Jul 20, 2021
9352635
Fix federated events being rejected for no state_groups
MadLittleMods Jul 21, 2021
5c454b7
Merge branch 'develop' into madlittlemods/2716-backfill-historical-ev…
MadLittleMods Jul 21, 2021
e881cff
Merge branch 'develop' into madlittlemods/2716-backfill-historical-ev…
MadLittleMods Jul 21, 2021
c9330ec
Merge branch 'develop' into madlittlemods/2716-backfill-historical-ev…
MadLittleMods Jul 23, 2021
a8c5311
Only connect base insertion event to prev_event_ids
MadLittleMods Jul 23, 2021
ae606c7
Make it possible to get the room_version with txn
MadLittleMods Jul 24, 2021
bc896cc
Allow but ignore historical events in unsupported room version
MadLittleMods Jul 24, 2021
f231066
Move to unique index syntax
MadLittleMods Jul 24, 2021
465b3d8
High-level document how the insertion->chunk lookup works
MadLittleMods Jul 24, 2021
44bb3f0
Remove create_event fallback for room_versions
MadLittleMods Jul 28, 2021
4d936b5
Merge branch 'develop' into madlittlemods/2716-backfill-historical-ev…
MadLittleMods Jul 28, 2021
706770c
Use updated method name
MadLittleMods Jul 28, 2021
1 change: 1 addition & 0 deletions changelog.d/10245.feature
@@ -0,0 +1 @@
Make historical events discoverable from backfill for servers without any scrollback history (part of MSC2716).
3 changes: 0 additions & 3 deletions synapse/api/constants.py
@@ -198,9 +198,6 @@ class EventContentFields:
MSC2716_CHUNK_ID = "org.matrix.msc2716.chunk_id"
# For "marker" events
MSC2716_MARKER_INSERTION = "org.matrix.msc2716.marker.insertion"
MSC2716_MARKER_INSERTION_PREV_EVENTS = (
"org.matrix.msc2716.marker.insertion_prev_events"
)
Comment from the PR author:
I don't think we will use this 🤷

Comment from the PR author:
Switched to backfilling the insertion event in #10420



class RoomTypes:
27 changes: 27 additions & 0 deletions synapse/api/room_versions.py
@@ -73,6 +73,9 @@ class RoomVersion:
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
msc2403_knocking = attr.ib(type=bool)
# MSC2716: Adds m.room.power_levels -> content.historical field to control
# whether "insertion", "chunk", "marker" events can be sent
msc2716_historical = attr.ib(type=bool)


class RoomVersions:
@@ -88,6 +91,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V2 = RoomVersion(
"2",
@@ -101,6 +105,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V3 = RoomVersion(
"3",
@@ -114,6 +119,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V4 = RoomVersion(
"4",
@@ -127,6 +133,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V5 = RoomVersion(
"5",
@@ -140,6 +147,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V6 = RoomVersion(
"6",
@@ -153,6 +161,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@@ -166,6 +175,7 @@ class RoomVersions:
msc2176_redaction_rules=True,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
MSC3083 = RoomVersion(
"org.matrix.msc3083",
@@ -179,6 +189,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc2403_knocking=False,
msc2716_historical=False,
)
V7 = RoomVersion(
"7",
@@ -192,6 +203,21 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=True,
msc2716_historical=False,
)
MSC2716 = RoomVersion(
"org.matrix.msc2716",
RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=True,
msc2716_historical=True,
)


@@ -207,6 +233,7 @@ class RoomVersions:
RoomVersions.MSC2176,
RoomVersions.MSC3083,
RoomVersions.V7,
RoomVersions.MSC2716,
)
}

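The diff above follows Synapse's feature-flag pattern: every `RoomVersion` carries a boolean capability flag, and new behaviour is gated on it. A minimal sketch of that pattern, with illustrative names rather than Synapse's actual attrs-based classes:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RoomVersionSketch:
    """Illustrative stand-in for Synapse's attrs-based RoomVersion class."""

    identifier: str
    msc2716_historical: bool


V7 = RoomVersionSketch("7", msc2716_historical=False)
MSC2716 = RoomVersionSketch("org.matrix.msc2716", msc2716_historical=True)

KNOWN_ROOM_VERSIONS = {v.identifier: v for v in (V7, MSC2716)}


def supports_historical(room_version_id: str) -> bool:
    # Unknown room versions are treated as not supporting the feature.
    version = KNOWN_ROOM_VERSIONS.get(room_version_id)
    return version is not None and version.msc2716_historical
```

Gating on a per-version flag keeps the experimental behaviour opt-in: only rooms created with the `org.matrix.msc2716` version identifier accept historical events, and every stable version leaves the flag off.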
38 changes: 38 additions & 0 deletions synapse/event_auth.py
@@ -193,6 +193,13 @@ def check(
if event.type == EventTypes.Redaction:
check_redaction(room_version_obj, event, auth_events)

if (
event.type == EventTypes.MSC2716_INSERTION
or event.type == EventTypes.MSC2716_CHUNK
or event.type == EventTypes.MSC2716_MARKER
):
check_historical(room_version_obj, event, auth_events)

logger.debug("Allowing! %s", event)


@@ -504,6 +511,37 @@ def check_redaction(
raise AuthError(403, "You don't have permission to redact events")


def check_historical(
room_version_obj: RoomVersion,
event: EventBase,
auth_events: StateMap[EventBase],
) -> None:
"""Check whether the event sender is allowed to send historical related
events like "insertion", "chunk", and "marker".

Returns:
None

Raises:
AuthError if the event sender is not allowed to send historical related events
("insertion", "chunk", and "marker").
"""
# Ignore the auth checks in room versions that do not support historical
# events
if not room_version_obj.msc2716_historical:
return

user_level = get_user_power_level(event.user_id, auth_events)

historical_level = _get_named_level(auth_events, "historical", 100)

if user_level < historical_level:
raise AuthError(
403,
'You don\'t have permission to send historical related events ("insertion", "chunk", and "marker")',
)
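The power-level gate above can be sketched in isolation. This is a toy version, not Synapse's actual `check_historical` (the real one resolves the sender's level and the `historical` level from the room's auth events):

```python
from typing import Dict


class AuthError(Exception):
    """Stand-in for Synapse's AuthError."""


def check_historical_sketch(
    sender_level: int,
    power_levels_content: Dict[str, int],
    msc2716_historical: bool,
) -> None:
    # Room versions without MSC2716 support skip the check entirely,
    # mirroring the early return in the diff.
    if not msc2716_historical:
        return
    # "historical" defaults to 100, so only room admins may send
    # insertion/chunk/marker events unless the room lowers the level.
    required_level = power_levels_content.get("historical", 100)
    if sender_level < required_level:
        raise AuthError(
            "You don't have permission to send historical related events"
        )
```

For example, `check_historical_sketch(50, {}, True)` raises, while a sender at level 100 passes, and any sender passes in a room version without the flag.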


def _check_power_levels(
room_version_obj: RoomVersion,
event: EventBase,
3 changes: 3 additions & 0 deletions synapse/events/utils.py
@@ -124,6 +124,9 @@ def add_fields(*fields):
if room_version.msc2176_redaction_rules:
add_fields("invite")

if room_version.msc2716_historical:
add_fields("historical")

elif event_type == EventTypes.Aliases and room_version.special_case_aliases_auth:
add_fields("aliases")
elif event_type == EventTypes.RoomHistoryVisibility:
6 changes: 4 additions & 2 deletions synapse/handlers/federation.py
@@ -2714,9 +2714,11 @@ async def _update_auth_events_and_context_for_auth(
event.event_id,
e.event_id,
)
context = await self.state_handler.compute_event_context(e)
missing_auth_event_context = (
await self.state_handler.compute_event_context(e)
)
await self._auth_and_persist_event(
origin, e, context, auth_events=auth
origin, e, missing_auth_event_context, auth_events=auth
Comment from the PR author:
This will be merged separately in #10439, but the fix is needed here as well

)

if e.event_id in event_auth_events:
1 change: 1 addition & 0 deletions synapse/handlers/room.py
@@ -951,6 +951,7 @@ async def send(etype: str, content: JsonDict, **kwargs) -> int:
"kick": 50,
"redact": 50,
"invite": 50,
"historical": 100,
}

if config["original_invitees_have_ops"]:
7 changes: 6 additions & 1 deletion synapse/rest/client/v1/room.py
@@ -504,7 +504,6 @@ async def on_POST(self, request, room_id):

events_to_create = body["events"]

prev_event_ids = prev_events_from_query
inherited_depth = await self._inherit_depth_from_prev_ids(
prev_events_from_query
)
@@ -516,6 +515,10 @@
chunk_id_to_connect_to = chunk_id_from_query
base_insertion_event = None
if chunk_id_from_query:
# All but the first base insertion event should point at a fake
# event, which causes the HS to ask for the state at the start of
# the chunk later.
prev_event_ids = [fake_prev_event_id]
# TODO: Verify the chunk_id_from_query corresponds to an insertion event
pass
# Otherwise, create an insertion event to act as a starting point.
Expand All @@ -526,6 +529,8 @@ async def on_POST(self, request, room_id):
# an insertion event), in which case we just create a new insertion event
# that can then get pointed to by a "marker" event later.
else:
prev_event_ids = prev_events_from_query

base_insertion_event_dict = self._create_insertion_event_dict(
sender=requester.user.to_string(),
room_id=room_id,
88 changes: 79 additions & 9 deletions synapse/storage/databases/main/event_federation.py
@@ -936,15 +936,46 @@ def _get_backfill_events(self, txn, room_id, event_list, limit):
# We want to make sure that we do a breadth-first, "depth" ordered
# search.

query = (
"SELECT depth, prev_event_id FROM event_edges"
" INNER JOIN events"
" ON prev_event_id = events.event_id"
" WHERE event_edges.event_id = ?"
" AND event_edges.is_state = ?"
" LIMIT ?"
)
# Look for the prev_event_id connected to the given event_id
query = """
SELECT depth, prev_event_id FROM event_edges
/* Get the depth of the prev_event_id from the events table */
INNER JOIN events
ON prev_event_id = events.event_id
/* Find an event which matches the given event_id */
WHERE event_edges.event_id = ?
AND event_edges.is_state = ?
LIMIT ?
"""
Comment from the PR author:
Same as the previous query, just in this new format with comments


# Look for the "insertion" events connected to the given event_id
connected_insertion_event_query = """
SELECT e.depth, i.event_id FROM insertion_event_edges AS i
/* Get the depth of the insertion event from the events table */
INNER JOIN events AS e USING (event_id)
/* Find an insertion event which points via prev_events to the given event_id */
WHERE i.insertion_prev_event_id = ?
LIMIT ?
"""

# Find any chunk connections of a given insertion event
chunk_connection_query = """
SELECT e.depth, c.event_id FROM insertion_events AS i
/* Find the chunk that connects to the given insertion event */
INNER JOIN chunk_events AS c
ON i.next_chunk_id = c.chunk_id
/* Get the depth of the chunk start event from the events table */
INNER JOIN events AS e USING (event_id)
/* Find an insertion event which matches the given event_id */
WHERE i.event_id = ?
LIMIT ?
"""
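The two lookup queries above can be exercised against a toy in-memory database. This sketch reduces the schema to just the columns the queries touch, and spells out the second join with an explicit `ON` in place of the diff's `USING`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE events (event_id TEXT PRIMARY KEY, depth INTEGER);
    CREATE TABLE insertion_event_edges (event_id TEXT, insertion_prev_event_id TEXT);
    CREATE TABLE insertion_events (event_id TEXT PRIMARY KEY, next_chunk_id TEXT);
    CREATE TABLE chunk_events (event_id TEXT PRIMARY KEY, chunk_id TEXT);

    INSERT INTO events VALUES ('$live', 6), ('$insertion', 5), ('$chunk_start', 4);
    INSERT INTO insertion_event_edges VALUES ('$insertion', '$live');
    INSERT INTO insertion_events VALUES ('$insertion', 'chunkA');
    INSERT INTO chunk_events VALUES ('$chunk_start', 'chunkA');
    """
)

# Step 1: find insertion events that point (via prev_events) at the event
# we are currently backfilling from.
connected = conn.execute(
    """
    SELECT e.depth, i.event_id FROM insertion_event_edges AS i
    INNER JOIN events AS e USING (event_id)
    WHERE i.insertion_prev_event_id = ?
    LIMIT ?
    """,
    ("$live", 10),
).fetchall()

# Step 2: follow next_chunk_id from that insertion event to the chunk
# start event, picking up its depth along the way.
chunk_starts = conn.execute(
    """
    SELECT e.depth, c.event_id FROM insertion_events AS i
    INNER JOIN chunk_events AS c ON i.next_chunk_id = c.chunk_id
    INNER JOIN events AS e ON e.event_id = c.event_id
    WHERE i.event_id = ?
    LIMIT ?
    """,
    (connected[0][1], 10),
).fetchall()

print(connected)     # [(5, '$insertion')]
print(chunk_starts)  # [(4, '$chunk_start')]
```

The two hops together turn a live event into the start of a historical chunk: event `$live` leads to insertion event `$insertion`, whose `next_chunk_id` leads to `$chunk_start`.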

# In a PriorityQueue, the lowest valued entries are retrieved first.
# We're using depth as the priority in the queue.
# Depth is lowest at the oldest-in-time message and highest at the
# newest-in-time message. We add events to the queue with a negative depth so that
# we process the newest-in-time messages first going backwards in time.
queue = PriorityQueue()

for event_id in event_list:
@@ -970,9 +1001,48 @@ def _get_backfill_events(self, txn, room_id, event_list, limit):

event_results.add(event_id)

# Try and find any potential historical chunks of message history.
#
# First we look for an insertion event connected to the current
# event (by prev_event). If we find any, we need to go and try to
# find any chunk events connected to the insertion event (by
# chunk_id). If we find any, we'll add them to the queue and
# navigate up the DAG like normal in the next iteration of the loop.
txn.execute(
connected_insertion_event_query, (event_id, limit - len(event_results))
)
connected_insertion_event_id_results = txn.fetchall()
logger.debug(
"_get_backfill_events: connected_insertion_event_query %s",
connected_insertion_event_id_results,
)
for row in connected_insertion_event_id_results:
connected_insertion_event_depth = row[0]
connected_insertion_event = row[1]
queue.put((-connected_insertion_event_depth, connected_insertion_event))

# Find any chunk connections for the given insertion event
txn.execute(
chunk_connection_query,
(connected_insertion_event, limit - len(event_results)),
)
chunk_start_event_id_results = txn.fetchall()
logger.debug(
"_get_backfill_events: chunk_start_event_id_results %s",
chunk_start_event_id_results,
)
for row in chunk_start_event_id_results:
if row[1] not in event_results:
queue.put((-row[0], row[1]))

# Navigate up the DAG by prev_event
txn.execute(query, (event_id, False, limit - len(event_results)))
prev_event_id_results = txn.fetchall()
logger.debug(
"_get_backfill_events: prev_event_ids %s", prev_event_id_results
)

for row in txn:
for row in prev_event_id_results:
if row[1] not in event_results:
queue.put((-row[0], row[1]))
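The negative-depth `PriorityQueue` walk above amounts to a best-first traversal backwards through the DAG. A standalone sketch with an in-memory edge map instead of SQL (names are illustrative):

```python
from queue import PriorityQueue
from typing import Dict, List, Set, Tuple


def backfill_order(
    prev_edges: Dict[str, List[str]],  # event_id -> prev_event_ids
    depths: Dict[str, int],            # event_id -> depth
    start: str,
    limit: int,
) -> List[str]:
    """Walk backwards through the DAG, newest-in-time (highest depth) first."""
    queue: "PriorityQueue[Tuple[int, str]]" = PriorityQueue()
    # Negative depth: PriorityQueue pops the lowest value first, so the
    # highest-depth (newest) event is processed next.
    queue.put((-depths[start], start))
    results: List[str] = []
    seen: Set[str] = set()
    while not queue.empty() and len(results) < limit:
        _, event_id = queue.get()
        if event_id in seen:
            continue
        seen.add(event_id)
        results.append(event_id)
        for prev_id in prev_edges.get(event_id, []):
            if prev_id not in seen:
                queue.put((-depths[prev_id], prev_id))
    return results


# A tiny linear DAG: C (depth 3) -> B (depth 2) -> A (depth 1)
print(backfill_order({"C": ["B"], "B": ["A"]}, {"A": 1, "B": 2, "C": 3}, "C", 10))
# ['C', 'B', 'A']
```

The real code additionally enqueues any insertion events and chunk start events it discovers at each step, so historical chunks are pulled into the same depth-ordered walk as the live DAG.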
