Track sent items using SmallVec #1657
base: main
Conversation
1. Use NSS Release
2. Add workflow_dispatch
3. Move some env values up in the file
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #1657      +/-   ##
==========================================
- Coverage   95.40%   95.39%   -0.01%
==========================================
  Files         113      113
  Lines       36721    36717       -4
==========================================
- Hits        35032    35028       -4
  Misses       1689     1689
```

☔ View full report in Codecov by Sentry.
Co-authored-by: Lars Eggert <lars@eggert.org>
Signed-off-by: Martin Thomson <mt@lowentropy.net>
@larseggert, I'm looking at the profiles that were generated here, and it looks like these are off the
I can't get to it this week. Wonder if the checkout action needs another parameter?
Signed-off-by: Lars Eggert <lars@eggert.org>
Let's see what the new flamegraphs show (also because LTO is now enabled).
Benchmark results
Performance differences relative to 76630a5.

Client/server transfer results
Transfer of 134217728 bytes over loopback.
It might be worthwhile revisiting this now that @mxinden made some improvements related to
It still looks pretty bad
Failed Interop Tests
QUIC Interop Runner, client vs. server, differences relative to 3e65261.
neqo-latest as client
neqo-latest as server

All results

Succeeded Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
neqo-latest as server

Unsupported Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
neqo-latest as server
DO NOT MERGE THIS - IT MAKES THINGS SLOWER
I thought that this was worth sharing though.
I was looking at the profiles that @KershawChang shared and noticed that `write_stream_frame` was eating a decent amount of CPU because there was an allocation in there. It took me a while to track it down, but it turns out that this is the first place we add a `RecoveryToken` to the `Vec` of those that we hold in each `SentPacket`.

The obvious thing to test in this case is `SmallVec`. `SmallVec` is designed for places where you mostly add only a few items to a collection. It holds a small array into which values are deposited; if more items than the array can hold are added, the values are moved to the heap, just like a normal `Vec`. This means that if you stay within the array, you avoid a heap allocation, which can save a bunch of work. The cost is a little complexity, the extra stack space, and the fact that your items need to move if you ever overflow the backing array. So if your prediction is wrong, it is more expensive.

I tried this out, and it makes performance worse. With a backing array of size 1, the performance change isn't observable, but larger values progressively degrade performance. Increasing the array size to 8 makes it worse than the value of 4 I started with. My theory is that the increased size of the base object is the problem: this collection is typically kept in a `SentPacket`, and our acknowledgement code needs extra time to move the larger struct around.

The size of the performance regression suggests that there are some ways we could further improve how `SentPacket` is handled.
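The trade-off described above can be sketched with the standard library alone, without pulling in the `smallvec` crate. The types below are hypothetical stand-ins, not neqo's actual `RecoveryToken` or `SentPacket` definitions: an inline backing array avoids the heap allocation on push, but it grows the containing struct, so every move of that struct copies more bytes.

```rust
use std::mem::size_of;

// Hypothetical stand-in for a recovery token; not neqo's real type.
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum Token {
    Ack,
    Stream(u64),
}

// Vec-backed: the struct itself is just three words
// (pointer, length, capacity); the tokens live on the heap.
#[allow(dead_code)]
struct SentHeap {
    tokens: Vec<Token>,
}

// SmallVec-style: eight slots stored inline, so pushing the first
// few tokens allocates nothing, but the struct is much larger and
// every move of it copies the whole array.
#[allow(dead_code)]
struct SentInline {
    tokens: [Option<Token>; 8],
    len: usize,
}

fn main() {
    // On a 64-bit target, the Vec-backed struct is 24 bytes.
    println!("heap-backed:   {} bytes", size_of::<SentHeap>());
    // The inline variant is several times larger.
    println!("inline-backed: {} bytes", size_of::<SentInline>());
    assert!(size_of::<SentInline>() > size_of::<SentHeap>());
}
```

This is consistent with the observation in the comment: the allocation disappears from the push path, but the acknowledgement path, which moves `SentPacket` values around, pays for the bigger struct on every move.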