http2 client stream processing is 20x slower than curl #50972
Ok, so even though I read all the issues about http2 in this repo, and specifically this one: #38426, I didn't manage to land on the right combination of settings. The fix was that the client also needs to call `setLocalWindowSize` (see the scripts below). But I think it's a dupe of #38426, and it's time to at least tune that default window a bit; 8 MB was enough to get close to curl speeds. However, this doesn't fix it in the other direction, i.e. when sending a large stream to the server.
So I think there is still an issue, and I would love to help debug what's going on. I initially wrote the issue for downloading to the client, since that's a bit easier to write, but if we use the stream in "reverse", the fix of correctly setting the window sizes no longer applies. Here is a combination of client & server that do the same, one after the other.

server:

```js
const fs = require("fs");
const stream = require("stream");
const http2 = require("http2");
const crypto = require("crypto");

const server = http2.createSecureServer({
  key: fs.readFileSync("./localhost-privkey.pem"),
  cert: fs.readFileSync("./localhost-cert.pem"),
  allowHTTP1: true, // allow HTTP/1.1 fallback
  settings: { initialWindowSize: 8 * 1024 * 1024 },
});
server.on("error", (err) => console.log(err));

const buffer = crypto.randomBytes(256 * 1024 * 1024);

// Enlarge the connection-level window for every new session
// ("session" is the event that hands us the ServerHttp2Session).
server.on("session", (session) => {
  session.setLocalWindowSize(8 * 1024 * 1024);
});

server.on("stream", (st, headers) => {
  st.respond({
    "content-type": "application/octet-stream",
    ":status": 200,
  });
  if (headers[":path"] === "/download") {
    stream.Readable.from(buffer).pipe(st);
  } else {
    // /upload: consume incoming data as fast as possible.
    st.resume();
    st.on("error", console.error);
    st.on("end", () => st.close());
  }
});

server.listen(8443, () => {
  console.log("HTTPS (HTTP/2) listening on port 8443");
});
```

client:

```js
const fs = require("fs");
const http2 = require("http2");
const process = require("process");
const crypto = require("crypto");
const stream = require("stream");

const buffer = crypto.randomBytes(256 * 1024 * 1024);

function upload() {
  const client = http2.connect(process.env["TARGET_HOST"] ?? "https://localhost:8443", {
    ca: fs.readFileSync("./localhost-cert.pem"),
    checkServerIdentity: () => undefined,
    settings: { initialWindowSize: 8 * 1024 * 1024 },
  });
  client.once("remoteSettings", (settings) => {
    client.setLocalWindowSize(settings.initialWindowSize ?? 8 * 1024 * 1024);
  });
  const recv = client.request({ ":path": "/upload" }, { endStream: false });
  recv.on("response", () => {
    const start = process.hrtime();
    stream.Readable.from(buffer)
      .pipe(recv)
      .on("end", () => {
        const [timeSpentS, timeSpentNS] = process.hrtime(start);
        const mbSize = buffer.length / (1024 * 1024);
        console.log(
          `Uploading to h2 speed (${Math.trunc(mbSize)} MB): ${
            mbSize / (timeSpentS + timeSpentNS * 1e-9)
          } MB/s`
        );
        client.close();
        round++;
        if (round < 3) {
          // To give node a fair chance of getting close to curl,
          // we go through the upload 3 times (sequentially),
          // so that node has had time to JIT etc.
          upload();
        }
      });
  });
}

let round = 0;

function download() {
  const client = http2.connect(process.env["TARGET_HOST"] ?? "https://localhost:8443", {
    ca: fs.readFileSync("./localhost-cert.pem"),
    checkServerIdentity: () => undefined,
    settings: { initialWindowSize: 8 * 1024 * 1024 },
  });
  client.once("remoteSettings", () => {
    client.setLocalWindowSize(8 * 1024 * 1024);
  });
  const recv = client.request({ ":path": "/download" });
  recv.on("response", () => {
    const start = process.hrtime();
    let received = 0;
    recv.on("data", (d) => {
      received += d.length;
    });
    recv.on("error", console.error);
    recv.on("end", () => {
      const [timeSpentS, timeSpentNS] = process.hrtime(start);
      const mbSize = received / (1024 * 1024);
      console.log(
        `Downloading from h2 speed (${Math.trunc(mbSize)} MB): ${
          mbSize / (timeSpentS + timeSpentNS * 1e-9)
        } MB/s`
      );
      client.close();
      round++;
      if (round < 3) {
        // To give node a fair chance of getting close to curl,
        // we do the download 3 times (sequentially),
        // so that node has had time to JIT etc.
        download();
      } else {
        // download is done, now do the upload
        round = 0;
        upload();
      }
    });
  });
}

download();
```

localhost:
curl:
10 Gbit local network:
curl:
So is there maybe some extra window-size parameter I have yet to discover, to get streams to upload at the same speed as the other direction?
Closing this issue in favor of a much simpler variant of the same problem: #51014
Version
v20.10.0
Platform
Linux matrix1 5.10.0-25-amd64 #1 SMP Debian 5.10.191-1 (2023-08-16) x86_64 Linux
Subsystem
http2
What steps will reproduce the bug?
Server:
which you launch on server A.
Client:
which you copy onto server B.
the ssl certs can be generated with:
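The scripts expect `localhost-privkey.pem` and `localhost-cert.pem`; one way to generate a matching self-signed pair (the exact command is an assumption, since the original command was not captured):

```shell
# Self-signed cert for CN=localhost; the filenames match the ones read by
# the server and client scripts above.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 365 \
  -subj '/CN=localhost' \
  -keyout localhost-privkey.pem -out localhost-cert.pem
```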
localhost
if we have the server running on machine A and run the following commands on A:
and we run the node http2 client:
we see that it gets quite close to curl.
private 10Gbit network
if we run curl on machine B:
we see it hit 400 MB/s (the machines are connected by a 10 Gbit link, which is actually quite nice).
if we run the node.js script:
This is 20x slower than the curl speed. Something is wrong. We found this issue while debugging throughput issues with grpc-js, which uses http2 as its transport layer.
Note, the same is true if we turn the direction around: if the h2 server is receiving a large stream that a client (for example curl) is sending, it is also maxed out at 20 MB/s.
How often does it reproduce? Is there a required condition?
Every time, as long as the http2 client is not going via the loopback. We've also tested the other direction (the client sending data to the server on the stream): same performance loss, but now on the server side.
We've also reproduced this on other servers.
What is the expected behavior? Why is that the expected behavior?
Reasonable performance of the h2 library, also when not on the loopback. A current workaround is to use Caddy as a gRPC frontend that always connects to the node server via loopback.
What do you see instead?
I already wrote the output in the repro case description above.
Additional information
In the non-localhost case, the CPU of the server idles around 1% in the 19 MB/s case; the 400 MB/s case gets it to a short peak of 25%.
iperf is reporting 1.2 Gbit up & down between the two servers.