Support compression on serialization of big values #3324
@kostasrim I suggest using `RDB_OPCODE_DF_MASK` for that. It's a meta opcode that can announce in advance what to expect in the key/value that is going to be parsed next. Right now we only have …
@romange LIT
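A minimal sketch of that idea, purely for illustration: `RDB_OPCODE_DF_MASK` is the meta opcode named above, but its numeric value here, the compressed-blob flag, and the sink helper are assumptions, not Dragonfly's actual code.

```cpp
#include <cstdint>
#include <string>

// Assumed values for illustration; the real definitions live in Dragonfly's
// sources and are not quoted in this thread.
constexpr uint8_t RDB_OPCODE_DF_MASK = 220;                 // assumed value
constexpr uint32_t DF_MASK_FLAG_COMPRESSED_BLOB = 1u << 2;  // hypothetical flag

// Announce, ahead of the next key/value, that its payload arrives as a
// compressed blob. Flags are written as 4 little-endian bytes here; a real
// serializer would reuse the RDB length/int encoding instead.
void WriteCompressedEntryPreamble(std::string* sink) {
  sink->push_back(static_cast<char>(RDB_OPCODE_DF_MASK));
  uint32_t flags = DF_MASK_FLAG_COMPRESSED_BLOB;
  for (int i = 0; i < 4; ++i) {
    sink->push_back(static_cast<char>((flags >> (8 * i)) & 0xFF));
  }
}
```

The loader can then branch on the mask before parsing the value, instead of tripping over an unexpected type byte.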
@kostasrim does that mean that, once #3241 is merged, older versions of Dragonfly won't be able to read RDB/DFS saved with new Dragonfly versions?
For any such breaking changes, we should first disable the write path by default (toggleable via a flag) while the read path supports the new format. It should stay like that for at least one version, and only then can we enable it by default (or even remove the flag if that makes sense).
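A rollout like that could look as follows; Dragonfly uses Abseil flags, though this particular flag name is hypothetical:

```cpp
#include "absl/flags/flag.h"

// Hypothetical flag: the new write path ships disabled so that binaries of
// the previous version can still read the snapshots, while the read path
// understands the new format unconditionally. After at least one release,
// the default can flip to true.
ABSL_FLAG(bool, compress_big_values, false,
          "Enable compressed serialization of big values (write path).");
```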
I synced with Shahar and he brought up that there should be an API for continuous (streaming) compression. I worked on the `CompressorImpl` class in the past, and I think we don't need breaking changes. If we change `CompressorImpl` to create a context and free it only once we finish compressing the full entry, I believe this should be good. This amounts to breaking the compression into many frames that are sent one after the other, with a last frame once we have finished serializing the entry.
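A minimal sketch of that shape, assuming zstd's streaming API; the class and method names below are illustrative, not the actual `CompressorImpl` interface:

```cpp
#include <zstd.h>

#include <stdexcept>
#include <string>
#include <string_view>

// Keeps one zstd context alive across the whole entry, so a big value can be
// compressed as a sequence of chunks instead of one giant buffer.
class StreamingCompressor {
 public:
  StreamingCompressor() : cctx_(ZSTD_createCCtx()) {
    if (!cctx_) throw std::bad_alloc();
  }
  ~StreamingCompressor() { ZSTD_freeCCtx(cctx_); }

  // Compresses one chunk; `last == true` ends the zstd frame, which marks the
  // end of the serialized entry for the reader.
  std::string Compress(std::string_view chunk, bool last) {
    std::string out(ZSTD_compressBound(chunk.size()) + 64, '\0');
    ZSTD_inBuffer in{chunk.data(), chunk.size(), 0};
    size_t written = 0;
    while (true) {
      ZSTD_outBuffer ob{out.data() + written, out.size() - written, 0};
      size_t rc = ZSTD_compressStream2(cctx_, &ob, &in,
                                       last ? ZSTD_e_end : ZSTD_e_flush);
      if (ZSTD_isError(rc)) throw std::runtime_error(ZSTD_getErrorName(rc));
      written += ob.pos;
      if (rc == 0) break;          // chunk fully consumed and flushed
      out.resize(out.size() * 2);  // rare: flush needed more room, grow
    }
    out.resize(written);
    return out;
  }

 private:
  ZSTD_CCtx* cctx_;
};
```

The serializer would then emit each returned buffer as its own blob on the wire, and the loader would feed blobs into a matching decompression context until the frame ends.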
We decided to disable compression on big values, and we will revisit this for optimization if needed in the future. I am closing this for now.
Currently the protocol in the `rdb_loader` does not support compressed data and it breaks. Previously, we only called `FlushToSink` in `SaveEpilogue` and before `SaveBody` (to flush previous entries). However, with the current changes, we call `FlushToSink` once we have serialized X bytes. This doesn't work with the protocol implemented in `rdb_load`, because we now split, say, a list into multiple chunks, and `LoadKeyValue` in `RdbLoader` expects to read a string (for the values) but gets a compressed blob instead, failing with an unrecognized RDB type.

Extend the protocol to support loading compressed data in `LoadKeyValuePair` of the `rdb_loader`.