Controlling flushing #3
I had overlooked this point. I was only thinking about getting maximum compression, not minimising latency. I am thinking that adding an option to flush after each chunk might be the simplest way forward. I don't plan to support it in the first version, but I will mention it in the explainer.
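For concreteness, here is one hypothetical shape such an option could take. The `flush` option name and its value are invented for illustration; nothing like this has been specified:

```ts
// Hypothetical API sketch only: neither the option bag nor the
// 'per-chunk' value exists in the proposal. This merely illustrates
// the idea of opting into a flush after every written chunk.
const cs = new CompressStream('gzip', { flush: 'per-chunk' });

// Every chunk written to cs.writable would then immediately yield
// compressed bytes on cs.readable, trading some compression ratio
// for lower latency.
```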
Nice to see you again! Yeah, having such an option sounds good.
I added some mention of this issue in #4. PTAL.
It's been a while, Hirano-san, Adam-san! Sounds reasonable not to work on this for the 1st version. The change LGTM. Feel free to just close this issue for now, or leave it open to track discussion. Up to you. It was just a drive-by FYI comment. Not as a stakeholder :)
It depends on the internal state and the input, but when a new chunk is written to a CompressStream, the CompressStream has two choices: buffer some input bytes until it can emit complete compressed output, or flush the buffered data (e.g. by performing zlib's "sync flush").
Let's suppose that we want to stream a sequence of JSON objects through a CompressStream and a DecompressStream (or some non-web decompressor) at the other peer, where there could be some latency between objects. It would be good if the receiver could start processing new chunks as soon as possible. However, without an API to instruct the stream whether or not to flush, the CompressStream has to decide that by itself.
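As an illustration of the difference, here is a minimal sketch using Node.js's zlib bindings (not the proposed web API), where the `flush` constructor option makes the compressor perform a sync flush after every write:

```ts
import { createGzip, createGunzip, constants } from 'node:zlib';

// Compressor that performs a zlib sync flush after every write(),
// so each chunk's compressed bytes are emitted immediately instead
// of being buffered until the deflate block fills up.
const gzip = createGzip({ flush: constants.Z_SYNC_FLUSH });
const gunzip = createGunzip();

// Stand-in for the network hop between the two peers.
gzip.pipe(gunzip);

gunzip.on('data', (chunk: Buffer) => {
  // With the sync flush, the receiver sees each JSON object as soon
  // as the sender writes it, even though the stream is still open.
  console.log('received:', chunk.toString());
});

gzip.write(JSON.stringify({ seq: 1 }) + '\n');
gzip.write(JSON.stringify({ seq: 2 }) + '\n');
// With the default (Z_NO_FLUSH), these writes could sit in the
// compressor's buffer until more input or end() arrives.
gzip.end();
```

The cost of flushing per chunk is a few extra bytes for each flush point and a somewhat worse overall compression ratio, which is exactly the latency-versus-ratio trade-off discussed above.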
In terms of efficiency, the difference might be negligible. From a compatibility point of view, it may be worth investigating.
If this kind of usage is simply out of scope, that's OK. Never mind :) In that case, I suggest discussing it in the explainer or somewhere more appropriate.