Is this a bug? Unable to recover when the number of keys is very large #589
cnjeffreyloo asked this question in Q&A (unanswered)
When I generate a large amount of data using Tsavorite in Garnet (more than 300 million keys), save a checkpoint, and then load it again, an error occurs and I can no longer recover the original data. What could be the issue?
Here is the source code:
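(The original snippet was not preserved here; the following is a minimal sketch of the load/checkpoint/recover pattern being described, written against the public FasterKV C# API. The index size, file paths, and exact key count are assumptions.)

```csharp
using FASTER.core;

// Sketch: insert a very large number of keys, checkpoint, then recover.
var log = Devices.CreateLogDevice("data/hlog.log");
var store = new FasterKV<long, long>(
    1L << 26,                                        // index size in buckets (assumption)
    new LogSettings { LogDevice = log },
    new CheckpointSettings { CheckpointDir = "data/checkpoints" });

using (var session = store.NewSession(new SimpleFunctions<long, long>()))
{
    for (long key = 0; key < 300_000_000; key++)     // > 300 million keys
        session.Upsert(key, key);
}

// Take a full snapshot checkpoint and remember its token.
var (success, token) = await store.TakeFullCheckpointAsync(CheckpointType.Snapshot);

// ... later, in a fresh process: rebuild the store with the same settings, then
store.Recover(token);   // this is the step that reportedly fails at large key counts
```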
Run result:
Note:
- The same error also exists in FasterKV (see also microsoft/FASTER#924).
- With the Docker version of Garnet, the same issue occurred after writing a large amount of data and restarting the service to recover it.
Example:
In summary, can we consider this a bug in FasterKV and Tsavorite?