Figure out why Caddy isn't using sendfile #4731
(Sorry @mohammed90 😅) - We use
There was an attempt to fix it here many years ago, but it went inactive: https://go-review.googlesource.com/c/go/+/7893/3
It should; ReadFrom will unconditionally* invoke sendfile.

*However, this only applies if the file is larger than 512 bytes, as the first 512-byte chunk is explicitly handled before triggering sendfile. I can reproduce this behaviour with bare net/http, so the problem is either in the size of the test files, or something else is missing from the equation. Looking briefly at the file server I don't see any major red flags that would invalidate this, but it's late; I'll have a proper look tomorrow night.
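For reference, a bare net/http file server along these lines (port and directory are just placeholders) is enough to see the fast path: requesting a file larger than 512 bytes over plain HTTP shows sendfile(2) calls when the process is watched with a syscall tracer.

```go
// Minimal sketch of a bare net/http file server. Everything after the
// 512-byte sniffing prefix is handed to the connection's ReadFrom, which
// on Linux uses sendfile(2) for *os.File sources over plain TCP.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./public")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```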
@flga Ohh, thank you for looking into this. This kernel stuff goes a little over my head "kerrantly" 🙃 I don't know how to debug this in Caddy; specifically, I've never used this kind of tooling myself. A test config might look like this:
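For instance, something along these lines, assuming the standard `file_server` directive (root path and port are just placeholders):

```
http://localhost:8080 {
	root * /srv/testfiles
	file_server
}
```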
That's HTTP-only, with no compression configured. Or, without using a config file:
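Something like this with the built-in file-server subcommand (again, directory and port are placeholders):

```sh
caddy file-server --root /srv/testfiles --listen :8080
```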
I would also expect that to use sendfile when serving up files. It would be absolutely awesome if you had a chance to dig into this and help us figure out how to get sendfile to be used (even on TLS*).
Took a peek during lunch; sendfile is indeed not being called in Caddy, and the reason is that the response writer received in the file server is not a vanilla http.ResponseWriter, it's an instance of our own ResponseRecorder. This type hides the ReadFrom method from io.Copy, so the sendfile behaviour is never even considered. Doing a quick and dirty patch so that it exposes ReadFrom causes sendfile to be used again as expected.

ResponseWriterWrapper suffers from the same issue: it's effectively hiding some upgrade interfaces, though it's not clear to me how impactful that is, or whether it's even relevant given we support the major ones like Flusher.

Anyway, this was just a quick, dirty gut check; I'll need to look at the code a little closer to see where best to spend our efforts.

Re TLS: I'm not sure this is worth pursuing. It seems to me we would be fighting against net/http and crypto/tls the whole way, so the gains would have to be huge in order to be worth it. A quick search for kTLS in the Go repo came up with golang/go#44506 and a related post from Filippo: https://words.filippo.io/playing-with-kernel-tls-in-linux-4-13-and-go/
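To make the mechanism concrete, here's a rough, self-contained sketch (type names are simplified stand-ins, not Caddy's actual definitions) of why a wrapper hides the fast path and how re-exposing ReadFrom restores it:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// recorder stands in for a logging wrapper such as Caddy's ResponseRecorder.
// Embedding the http.ResponseWriter interface only promotes Header, Write
// and WriteHeader, so any ReadFrom on the concrete writer is hidden.
type recorder struct {
	http.ResponseWriter
}

// fixedRecorder re-exposes ReadFrom, forwarding to the underlying writer
// when it supports the fast path.
type fixedRecorder struct {
	http.ResponseWriter
}

func (r fixedRecorder) ReadFrom(src io.Reader) (int64, error) {
	if rf, ok := r.ResponseWriter.(io.ReaderFrom); ok {
		return rf.ReadFrom(src) // net/http can reach sendfile from here
	}
	return io.Copy(r.ResponseWriter, src)
}

func main() {
	// Only the static types matter for io.Copy's decision, so nil writers
	// are fine here: io.Copy takes the ReadFrom path only when this
	// assertion on the destination succeeds.
	var w http.ResponseWriter
	_, plain := interface{}(recorder{w}).(io.ReaderFrom)
	_, fixed := interface{}(fixedRecorder{w}).(io.ReaderFrom)
	fmt.Println(plain, fixed) // false true
}
```

The forwarding version still works when the underlying writer lacks ReadFrom; it just falls back to a plain copy, so advertising the interface on the wrapper is safe.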
D'oh, I totally forgot that we wrap all the ResponseWriters with our own recorder type so that we can provide logging (caddy/modules/caddyhttp/server.go, lines 221 to 223 at commit 8cc8f9f).
That's actually pretty reasonable, I would be happy to review that patch!
I wish I knew a better way to wrap the response writer without having to implement all these interfaces, but clearly we are missing some. Again, I'd be happy to add it there, as the purpose of this wrapper type is to make it function like the underlying type.

Re: kTLS, I'm OK with it if we defer/shelve that for now, but if you feel like hacking on it just to see a PoC, I won't stop you! Depending on how egregious it is (haha) we might not merge it, but if you want to have some fun then go ahead. If it turns out reasonable, maybe we can even merge it. Thanks for looking into things!
You can read more about this problem in https://blog.merovius.de/posts/2017-07-30-the-trouble-with-optional-interfaces/
@zekjur @stapelberg Should we retest at 25 Gbit/s with the new Caddy? :D
I just updated caddy on my router, so feel free to give it a shot. I did some tests in my local network, but found no measurable difference; caddy was reaching 25 Gbit/s before and after :) (see also: https://michael.stapelberg.ch/posts/2022-05-14-http-and-https-download-25gbit/)
@stapelberg Any latency improvements? Even if it's saturating the network, I would expect lower latency (TTFB). Make sure your test payload is large enough, too.
I wouldn’t expect the TTFB to be any lower, as the few buffer allocations should be immeasurably quick. Only if you do larger transfers for longer does it add up. Did you see any TTFB improvements in your tests? Can you share the payload you used, and which tool you measured with?
@stapelberg I did see lower latency, but I was just testing on the loopback interface. My payload and test procedure were very simple, as documented in the linked PR (#5022) -- you can see charts, stats, and program output there! You might want to use an even larger payload on such a high-bandwidth interface. But I'm not sure, maybe that's enough!
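For anyone who wants to reproduce the comparison, one common way to measure TTFB (not necessarily what was used above; the URL is a placeholder) is curl's built-in timing variables:

```sh
# Print time-to-first-byte and total time for one request; repeat against
# each build and compare the distributions.
curl -o /dev/null -s -w 'ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  http://localhost:8080/large-file.bin
```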
Before patch (commit 1c9c8f6):
After patch:
As I expected, I don’t see a significant change in latency. With an 83% standard deviation, there is quite a bit of noise in this measurement anyway.
Yeah, I think once you get bandwidth that high, it's tricky for tests like this to be reliable. Still cool that it's saturating your link, though!
Still cannot use sendfile() in an HTTP handler:
OR
Either way, sendfile/splice will not be called, for the following reason:
@stapelberg @mholt @flga @francislavoie Can anyone help me with how to stream a file using sendfile()?
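Without seeing the handlers above, here is a rough sketch of the kind of handler that does let net/http reach sendfile, assuming plain HTTP (no TLS), no compression, and no middleware wrapping the ResponseWriter; the path and port are placeholders:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/download", func(w http.ResponseWriter, r *http.Request) {
		// ServeFile ends up handing the copy to the ResponseWriter's
		// ReadFrom; on a plain-TCP connection that uses sendfile(2) for
		// everything after the 512-byte sniffing prefix, provided nothing
		// has wrapped w in a type that hides ReadFrom.
		http.ServeFile(w, r, "/path/to/large-file.bin") // placeholder path
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```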
Context:
Caddy should use Unix sendfile when there's no TLS and no compression. (And even for TLS, we should look into kTLS. But maybe unsafe, sigh.)
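One low-tech way to verify the outcome on Linux, assuming strace is installed (the PID lookup, port, and file name are placeholders):

```sh
# Attach to the running caddy process and watch for sendfile(2) while
# downloading a file larger than 512 bytes.
sudo strace -f -e trace=sendfile -p "$(pidof caddy)"

# ...and in another terminal:
curl -sO http://localhost:8080/large-file.bin
```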