WOW! 150 million rows across 3307*500 = 1,653,500 rooms!
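For anyone who wants to check the same numbers on their own deployment, querying the table directly works; this is just a sketch, assuming the Synapse database is reachable via psql as "synapse" (a full COUNT(*) on a table this size can take a while):

```sh
# Count the rows in state_groups_state and show how much disk the table uses.
# "synapse" is a placeholder for your own database name.
psql synapse -c "SELECT count(*) FROM state_groups_state;"
psql synapse -c "SELECT pg_size_pretty(pg_total_relation_size('state_groups_state'));"
```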
Unfortunately the database grew from 56 GB to 59 GB in /var/lib/postgres, although the gzipped SQL export of the state_groups_state table shrank from 5.6 GB to 1.5 GB. (I guess a database VACUUM will be needed in the end?)
After running VACUUM FULL on my database it shrank down to 23 GB.
So we should also add this to the README: run VACUUM FULL after compressing to actually free up the disk space.
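For the README, something along these lines could work; this is only a sketch, assuming the database is called "synapse" and that you can tolerate the exclusive lock VACUUM FULL takes on the table while it rewrites it:

```sh
# Reclaim disk space after the compressor has rewritten state_groups_state.
# VACUUM FULL rewrites the table and returns the freed space to the OS,
# but it takes an ACCESS EXCLUSIVE lock, so expect Synapse queries on this
# table to block (or stop Synapse) while it runs.
psql synapse -c "VACUUM FULL VERBOSE state_groups_state;"

# A plain VACUUM only marks the space as reusable inside the table files,
# it does not shrink them on disk:
psql synapse -c "VACUUM ANALYZE state_groups_state;"
```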
(I thought we could also delete the three extra tables the tool creates, state_compressor_state, state_compressor_progress and state_compressor_total_progress, but they are so small that dropping them would not save much space.)
The README only suggests:
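For context, the suggested invocation in the README looks roughly like this (the connection string is just a placeholder for your own credentials):

```sh
# README-style invocation (roughly): compress in chunks of 500 state groups,
# stopping after 100 chunks.
synapse_auto_compressor -p postgresql://user:pass@localhost/synapse -c 500 -n 100
```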
As a layman, I understand this to mean that it will only work through 100 chunks and then quit.
I got the result after 100 chunks:
This seems to have worked.
Then I started it again with -n 10000 to try to compress my 56 GB database. Is this the best option, or could you provide more information in the README, please?
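Concretely, what I am running now is something like this (again only a sketch; the connection string and the 10000 are just what I picked, not a value from the README):

```sh
# Longer run: same chunk size, but allow up to 10000 chunks so the whole
# 56 GB state_groups_state table gets worked through. The compressor records
# its progress in the state_compressor_* tables, so it can be stopped and resumed.
synapse_auto_compressor -p postgresql://user:pass@localhost/synapse -c 500 -n 10000
```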