Send and retrieve dumps from an S3 store #447
Replies: 2 comments 2 replies
-
Hi @Kerollmops, I think I understand! If I go back to the steps, which ones would be eliminated with this solution?
If there is no need to stream the file, isn't there a way to do this at the machine level rather than at the Meilisearch level? Thanks!
-
Hi there, Yann from Koyeb here. From our understanding, a simple solution to handle data persistence, auto-healing, and horizontal scaling could be to rely on Meilisearch snapshots and S3-compatible object storage to distribute those snapshots. This would enable highly resilient and performant deployments with horizontal scaling for use cases that do not require real-time index updates.
This would be a highly scalable solution as long as you can afford a delay of a few minutes before index updates propagate. I might have missed something, so let me know if that doesn't make sense.
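The flow described above can be sketched with Meilisearch's existing snapshot flags and the AWS CLI. This is only an illustration of the idea, not a tested setup; the bucket name, paths, and snapshot filename are placeholders:

```shell
# Writer instance: take periodic snapshots into a local directory,
# then push the latest one to S3 (bucket name is a placeholder).
meilisearch --schedule-snapshot --snapshot-dir /var/meilisearch/snapshots &
aws s3 cp /var/meilisearch/snapshots/data.ms.snapshot \
    s3://my-meili-snapshots/data.ms.snapshot

# Each new reader instance: pull the latest snapshot and boot from it.
aws s3 cp s3://my-meili-snapshots/data.ms.snapshot /tmp/data.ms.snapshot
meilisearch --import-snapshot /tmp/data.ms.snapshot
```

Readers would lag the writer by the snapshot interval plus the upload/download time, which is where the "few minutes delay" on index update propagation comes from.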
-
Hello Guillaume,
After a meeting with the @meilisearch/cloud-team about the best/easiest way to update a Meilisearch server, we brought up the idea of the engine being able to send dumps to, and retrieve them from, an S3 store.
When it comes to updating a Meilisearch server:
This solution is cumbersome for the Cloud team and not really reliable, because disk usage only ever grows: you need more space with every update. When you size the new disk, you take the previous disk size and add a little extra so the dump fits, but the new instance now occupies more space (and you can't shrink a disk). Each subsequent update therefore forces an even bigger disk than the one before.
We imagined a basic solution where Meilisearch would be able to send dumps to and retrieve them from an S3 store. To make it simple: you would no longer need to resize the disk, since dumps would no longer be stored on it; the only things you need are read/write access to an S3 store and a way to point the Meilisearch server at it.
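To make the proposal concrete, the interface could look something like the sketch below. These flags are entirely hypothetical; none of them exist in Meilisearch today, and the names are ours:

```shell
# HYPOTHETICAL flags illustrating the proposal -- not an existing feature.
# The server would write finished dumps to, and read dumps from, the
# configured S3 location instead of the local disk.
meilisearch \
    --dump-store-url s3://meili-dumps/production \
    --dump-store-access-key "$S3_ACCESS_KEY" \
    --dump-store-secret-key "$S3_SECRET_KEY"
```

With something like this, the local disk only ever needs to hold the live index, so its size can stay constant across updates.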
We decided to try mounting a virtual disk backed by an S3 store: once a dump is finished, we upload it to the S3 store, and the new machine (with the default disk size) reads it back from the S3 store through a virtual network disk, using s3fs-fuse or an equivalent.
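The s3fs-fuse approach could look roughly like this; the bucket name, mount point, and dump filename are placeholders, and this is a sketch of the workaround rather than a vetted setup:

```shell
# Credentials file for s3fs (format ACCESS_KEY:SECRET_KEY); s3fs
# refuses world-readable credential files, so chmod 600 is required.
echo "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket as a local directory.
mkdir -p /mnt/meili-dumps
s3fs my-meili-dumps /mnt/meili-dumps -o passwd_file="$HOME/.passwd-s3fs"

# The old machine writes its finished dump into /mnt/meili-dumps;
# the new machine then imports it straight from the mount at startup.
meilisearch --import-dump /mnt/meili-dumps/latest.dump
```

Since the dump lives in the bucket rather than on either machine's disk, neither instance needs extra local space to fit it.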