
Add a schema delta to drop unstable private read receipts. #13692

Merged 2 commits on Sep 1, 2022
changelog.d/13692.removal: 1 addition, 0 deletions
@@ -0,0 +1 @@
Remove support for unstable [private read receipts](https://github.com/matrix-org/matrix-spec-proposals/pull/2285).
@@ -0,0 +1,19 @@
/* Copyright 2022 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

-- Drop previously received private read receipts so they do not accidentally
-- get leaked to other users.
DELETE FROM receipts_linearized WHERE receipt_type = 'org.matrix.msc2285.read.private';
DELETE FROM receipts_graph WHERE receipt_type = 'org.matrix.msc2285.read.private';
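
A minimal way to sanity-check the delta afterwards (a sketch, assuming direct Postgres access to the Synapse database; these queries are not part of the change) is to count any remaining rows with the unstable receipt type. Both should return 0 once the delta has run:

-- Hypothetical verification queries, not part of this change.
SELECT count(*) FROM receipts_linearized WHERE receipt_type = 'org.matrix.msc2285.read.private';
SELECT count(*) FROM receipts_graph WHERE receipt_type = 'org.matrix.msc2285.read.private';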
Member


Hmm, I don't think we have indices on receipt_type, so this might take a while?
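
(For reference, one way to confirm which indices actually cover these tables is to query the Postgres system catalogs; this is a sketch assuming a psql shell on the Synapse database, not something the change itself requires. Any index whose definition mentions receipt_type would speed up the delete:)

-- Hypothetical check against Postgres system catalogs, not part of this change.
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('receipts_linearized', 'receipts_graph');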

Member Author


The closest I can get to testing this is a SELECT over the same rows; each query takes a few seconds on matrix.org:

matrix=> EXPLAIN ANALYZE SELECT * FROM receipts_linearized WHERE receipt_type = 'org.matrix.msc2285.read.private';
                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..395762.79 rows=682557 width=166) (actual time=1.551..3131.541 rows=687402 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on receipts_linearized  (cost=0.00..326507.09 rows=284399 width=166) (actual time=1.121..3027.560 rows=229134 loops=3)
         Filter: (receipt_type = 'org.matrix.msc2285.read.private'::text)
         Rows Removed by Filter: 3531513
 Planning time: 0.149 ms
 Execution time: 3149.429 ms
(8 rows)

matrix=> EXPLAIN ANALYZE SELECT * FROM receipts_graph WHERE receipt_type = 'org.matrix.msc2285.read.private';
                                                                QUERY PLAN                                                                 
-------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..372414.02 rows=665573 width=131) (actual time=0.923..2521.685 rows=687399 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on receipts_graph  (cost=0.00..304856.72 rows=277322 width=131) (actual time=1.034..2427.001 rows=229133 loops=3)
         Filter: (receipt_type = 'org.matrix.msc2285.read.private'::text)
         Rows Removed by Filter: 3531484
 Planning time: 0.764 ms
 Execution time: 2539.453 ms
(8 rows)

I think a slightly slow start-up time for this is a reasonable compromise to avoid a background update (and the additional filtering code needed while that runs).

This was the conclusion we came to for SimonBrandner#1 (comment) (and there was a team discussion I can dig up).
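
If the single-statement deletes did turn out to be too slow on a large homeserver, a hedged alternative sketch (not what this PR does) would be to delete in bounded batches, so that no single transaction scans or holds locks for long:

-- Hypothetical batched variant, not part of this change: repeat until the
-- statement reports "DELETE 0", then do the same for receipts_graph.
DELETE FROM receipts_linearized
WHERE ctid IN (
    SELECT ctid FROM receipts_linearized
    WHERE receipt_type = 'org.matrix.msc2285.read.private'
    LIMIT 10000
);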