Unexpected RangeError [ERR_INVALID_OPT_VALUE] while reading from a HTTP download stream #25057
Added some new findings: the above error only happens when I add console logging to display the progress. If I turn off any of the console output, the error goes away. It seems like the high-frequency console output blocks the network download stream until the download stream buffers too much.
There is disagreement on how console.log should function. Some people want console.log() to be sync so that it never loses data (@Fishrock123 is one, I believe), but a sync console causes production instability, as you've found :-(. You might want to switch to a fast logging package. That said, pipes are supposed to apply backpressure to avoid this kind of problem. Could you provide a complete example? /cc @mcollina
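For context on the backpressure remark: a Writable's write() returns false once its internal buffer crosses highWaterMark, and pipe() reacts by pausing the source until "drain" fires. A minimal sketch of that contract (not code from this issue, just an illustration):

const { Readable, Writable } = require("stream");

// Fast source: pushes 64 KiB chunks until 1000 have been produced.
let chunks = 0;
const fast = new Readable({
  read() {
    this.push(chunks++ < 1000 ? Buffer.alloc(64 * 1024) : null);
  }
});

// Slow sink with a small buffer: each write "completes" on a timer.
const slow = new Writable({
  highWaterMark: 16 * 1024,
  write(chunk, encoding, callback) {
    setTimeout(callback, 5); // simulate a slow disk
  }
});

// pipe() watches write()'s false return value and pauses the
// source until "drain", so buffered memory stays bounded.
fast.pipe(slow);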
@XiaoningLiu would you be so kind as to provide a minimal code example that includes the logging which seems to be the cause here?
I bisected this back to cb5ed4560326. I'm still grokking that commit, but @addaleax maybe you can spot it right away 🤞. The two symptoms we're seeing are the RangeError above and unbounded RSS growth.
Unfortunately I haven't come up with a short repro. Our code looks like this though:

const https = require("https");
const fs = require("fs");
const { PassThrough } = require("stream");

const pt = new PassThrough();
const req = https.get({...}); // fast, >100 MB/s
req.on("response", resp => {
  // end: false keeps pt open; pt is ended manually on "end" instead
  resp.on("end", () => pt.end());
  resp.pipe(pt, {end: false});
});
const ws = fs.createWriteStream("path"); // slow, 10-20 MB/s
pt.pipe(ws);
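As an aside, the same wiring can be expressed with stream.pipeline() (available since Node 10), which applies the same backpressure but also propagates errors and end-of-stream automatically. A rough equivalent of the snippet above, with the PassThrough omitted and a placeholder URL and output path standing in for the elided options:

const https = require("https");
const fs = require("fs");
const { pipeline } = require("stream");

// URL and output path are placeholders.
https.get("https://example.com/big-file", (resp) => {
  pipeline(resp, fs.createWriteStream("path"), (err) => {
    // pipeline() destroys all streams on failure and
    // surfaces the first error here.
    if (err) console.error("download failed:", err);
  });
});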
Got a repro:

/*
* Make certs using something like:
* openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
* Then run this script with --expose-gc.
* Log lines beginning with "recv" indicate num bytes received by the _write fn and RSS.
*/
const https = require("https");
const {fork} = require("child_process");
const fs = require("fs");
const {Writable} = require("stream");
class SlowBlackHole extends Writable {
constructor(...args) {
super(...args);
this.chunkNo = 0;
this.bytesProcessed = 0;
}
_write(chunk, encoding, callback) {
setTimeout(callback);
this.bytesProcessed += chunk.length;
if (this.chunkNo++ % 1000 === 0) {
gc();
console.log("recv",
this.bytesProcessed.toLocaleString(),
process.memoryUsage().rss.toLocaleString());
}
}
}
if (!process.argv.includes("--server")) {
const server = fork(__filename, ["--server"]);
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
server.on("message", () => { // server is listening
const req = https.get({port: 3333});
const dst = new SlowBlackHole();
req.once("response", res => {
res.pipe(dst);
});
});
} else {
const chunk = Buffer.alloc(8 * 1024);
const server = https.createServer({
key: fs.readFileSync("./key.pem"),
cert: fs.readFileSync("./cert.pem")
}, (req, res) => {
let sent = 0;
let chunkNo = 0;
const send = () => {
res.write(chunk, send);
sent += chunk.length;
if (chunkNo++ % 2000 === 0)
console.log("sent", sent.toLocaleString());
};
send();
});
server.listen(3333, () => {
process.send("listening");
});
server.timeout = 1000;
}

Commit prior to cb5ed45 plateaus at around 50 MB RSS; cb5ed45 shows unbounded RSS growth.
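One way to see where the bytes pile up (a diagnostic addition, not part of the repro above) is to sample the response stream's internal buffer while the slow writer drains; res.readableLength has been available since Node 9.4. A drop-in variant of the repro's "response" handler:

req.once("response", res => {
  const dst = new SlowBlackHole();
  // With working backpressure this should hover near the default
  // 16 KiB highWaterMark instead of growing without bound.
  setInterval(() => {
    console.log("buffered:", res.readableLength.toLocaleString());
  }, 1000).unref();
  res.pipe(dst);
});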
@zbjornson The fact that cb5ed45 seems to play some kind of role sounds to me like there might be something more going on than what is resolved by #26965, so it’s probably worth investigating in what way that commit made a difference. Thanks for pinging me in the PR, though. :)
@zbjornson Looking into it more, your repro still seems to work when the cluster module is moved out of the picture, but requires HTTPS rather than HTTP or raw TLS in order to fail on v10.15.3. That makes it seem like an issue with the interaction of the HTTP parser and the TLS module, maybe, but I can’t quite figure out how the linked commit would affect it. (I can’t really figure out why the linked commit would change anything at all in terms of behaviour, but I see that it did. 😄 )
@addaleax hmm, cluster module? I used child_process in my repro just to make it a single-file repro, but didn't use the cluster module. Thanks for looking into it!
@zbjornson Yeah, sorry, that’s what I meant 😄 You can create and connect to a server within a single process; I just did that because debugging gets a bit easier and there’s no risk of port conflicts. Example in the fold:

/*
* Make certs using something like:
* openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
* Then run this script with --expose-gc.
* Log lines beginning with "recv" indicate num bytes received by the _write fn and RSS.
*/
const https = require("https");
const {fork} = require("child_process");
const fs = require("fs");
const {Writable} = require("stream");
class SlowBlackHole extends Writable {
constructor(...args) {
super(...args);
this.chunkNo = 0;
this.bytesProcessed = 0;
}
_write(chunk, encoding, callback) {
setTimeout(callback);
this.bytesProcessed += chunk.length;
if (this.chunkNo++ % 1000 === 0) {
gc();
console.log("recv",
this.bytesProcessed.toLocaleString(),
process.memoryUsage().rss.toLocaleString());
}
}
}
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
const chunk = Buffer.alloc(8 * 1024);
const server = https.createServer({
key: fs.readFileSync("./key.pem"),
cert: fs.readFileSync("./cert.pem")
}, (req, res) => {
let sent = 0;
let chunkNo = 0;
const send = () => {
res.write(chunk, send);
sent += chunk.length;
if (chunkNo++ % 2000 === 0)
console.log("sent", sent.toLocaleString());
};
send();
});
server.listen(0, () => {
const req = https.get({port:server.address().port});
const dst = new SlowBlackHole();
req.once("response", res => {
res.pipe(dst);
});
});
server.timeout = 1000;
Is there anything actionable here? Does this need to remain open?
Since #26965, the repro above no longer shows unbounded RSS growth (just tested again on v12.18.0). The other issues I see referencing this one look unrelated. Looks like this can be closed!
Version: v10.14.2
Platform: 64-bit Windows
Subsystem: http, stream, buffer
I use axios to download a file from an HTTPS URL by piping the response readable stream into an fs writable stream. However, the Node.js process exits unexpectedly with the following errors thrown. The errors appear to come from Node.js core modules related to TLSSocket, stream, and buffer.

Another reason I think this should be a Node.js issue is that, with the same code, the problem reproduces reliably on 10.14.2, while 8.14.0 doesn't have any issues.

My code is very simple. My dev box's network is very fast, reaching 100~200 MB/s peak download speed; I'm not sure whether that's related to the root cause.

Another strange difference between 10.14.2 and 8.14.0 shows up if I add a "data" event listener to the response download stream. 8.14.0 consistently emits buffers of size 16384. On 10.14.2, however, sometimes a very large buffer (over 2 GB) is emitted, followed by an "end" event. The size of the big buffer is not stable, and my guess is that allocating a buffer that large hits an internal limit, which triggers the error. So the question is: why is such a big buffer sometimes emitted?
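For reference, a minimal way to watch the emitted chunk sizes described above (a sketch using plain https rather than axios; the URL is a placeholder):

const https = require("https");

https.get("https://example.com/large-file", (res) => {
  // Attaching a "data" listener puts the stream in flowing mode.
  // On v8.14.0 this reportedly prints a steady 16384; on v10.14.2
  // it occasionally shows one huge chunk followed by "end".
  res.on("data", (chunk) => console.log("chunk:", chunk.length));
  res.on("end", () => console.log("end"));
});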