Version: 8.2.1 and prior (any version ~4.x.x and above, due to 1543c78)
Platform: Any
Subsystem: Zlib
When using the flush() method on a zlib stream, the stream will not be fully flushed if a callback is not provided and ws.needDrain is true. This appears to have been introduced all the way back in #3534, in an attempt to prevent flush listeners from piling up. However, that change prevents the requested flush from actually occurring when you don't provide a callback function, due to line 317. When there is no data waiting to be drained, the code follows the path at line 321 and works as expected. The problem therefore appears when you've just written more than 16384 bytes (one default chunk) to the stream and then try to flush, with the result being that you get no flush, or only a partial one.
Below is an example that demonstrates the point:
const zlib = require('zlib');
/// Test with small write (< chunk size -> needDrain == false), works as expected
// Create streams
const zipper1 = zlib.createGzip();
const unzipper1 = zlib.createGunzip();
zipper1.pipe(unzipper1);
// Write to stream
zipper1.write('some small data');
// Attempt to flush stream
zipper1.flush();
// Check output
unzipper1.on('data', (d) => {
console.log('zipper1: Short data flush received ' + d.length + ' bytes');
});
/// Test with large write
// Create streams
const zipper2 = zlib.createGzip();
const unzipper2 = zlib.createGunzip();
zipper2.pipe(unzipper2);
// Write to stream
zipper2.write('A'.repeat(17000));
// Attempt to flush stream
zipper2.flush();
// Check output
unzipper2.on('data', (d) => {
console.log('zipper2: Long data flush received ' + d.length + ' bytes');
});
/// Test with large write and callback on flush, works as expected
// Create streams
const zipper3 = zlib.createGzip();
const unzipper3 = zlib.createGunzip();
zipper3.pipe(unzipper3);
// Write to stream
zipper3.write('A'.repeat(17000));
// Attempt to flush stream
zipper3.flush(() => {});
// Check output
unzipper3.on('data', (d) => {
console.log('zipper3: Long data flush with callback received ' + d.length + ' bytes');
});
Which produces the output:
zipper1: Short data flush received 15 bytes
zipper3: Long data flush with callback received 16384 bytes
zipper3: Long data flush with callback received 616 bytes
What this tells us:
The short messages are flushing as expected on zipper1, great!
The long messages are not flushing at all when no-arg flush() is used (no bytes are received on zipper2)
Adding a callback causes the long messages to flush as expected (all 17000 bytes received in two chunks on zipper3)
This issue bubbles up to higher-level libraries that use zlib: I came across it while trying to compress and flush messages in an SSE event-stream using compression (see their SSE example and try sending large messages, or see expressjs/compression#86). Once the messages hit a certain size, they would fail to flush properly due to this issue and would be delayed.
Possible solutions:
Bind the drain handler (as on line 319) regardless of whether a callback is provided (using a no-op handler, or none at all, for the nested flush). This effectively reverts zlib: only apply drain listener if given callback #3534 and would mean the listeners could pile up.
Bind the drain handler the first time a no-arg flush() is called, and whenever a callback is provided. This requires tracking the needDrain state, since when it transitions back to true we need to allow flush() to bind a new listener again. This follows the intention of zlib: only apply drain listener if given callback #3534 more closely, without the broken flush behaviour.
@MylesBorins: Your input here would be appreciated, to understand the motivation of the original change
Backport-PR-URL: #14571
Backport-Reviewed-By: James M Snell <jasnell@gmail.com>
Fixes: #14523
PR-URL: #14527
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>