Use new PipeWriter Json overloads #55740
Conversation
How's the perf 😄
/benchmark mvcjsoninput2k aspnet-citrine-win mvc
Benchmark started for mvcjsoninput2k on aspnet-citrine-win with mvc. Logs: link
An error occurred, please check the logs
Some highlights:
Json400k full comparison
Some lowlights:
MVCJsonOutput2M full comparison
Will need to look into what's wrong with the MVC case. A quick glance at Wireshark does show Pipes sending 65k at a time, whereas Stream sends in ~14k, ~59k, and ~29k chunks, so there might be some window-acking slowdown in the 65k case?
This: aspnetcore/src/Servers/Kestrel/Core/src/Internal/Http/Http1OutputProducer.cs Lines 753 to 754 in 9203f25
src/Servers/Kestrel/Core/src/Internal/Http/Http1OutputProducer.cs
We didn't add this API not to use it. We can always revert if it regresses some scenario we didn't realize, but it looks good to me.
Do we have updated MVCJsonOutput2M numbers that show the calls to StartAsync fixed the regression?
return WriteAsJsonAsyncSlow(response.Body, value, options, response.HttpContext.RequestAborted);
return WriteAsJsonAsyncSlow(startTask, response.BodyWriter, value, options,
    ignoreOCE: !cancellationToken.CanBeCanceled,
    cancellationToken.CanBeCanceled ? cancellationToken : response.HttpContext.RequestAborted);
The "Slow" path is the normal one, right, since the middleware doesn't pass a cancellation token?

I think it would be better to change WriteAsJsonAsync to not pass in RequestAborted by default, similar to WriteAsync. Essentially, serializing JSON data to /dev/null should be faster most of the time than handling an OCE anyway, unless you're doing something unusual like IAsyncEnumerable serialization, where you would probably want to explicitly pass in RequestAborted.

Right now, the only way to get the fast path is to pass in a token from a cancellable source and then not cancel it. Passing in RequestAborted explicitly mostly works for this, but that's annoying and could result in OCEs in your logs that you don't want.

@JamesNK @Tratcher do you have opinions on not passing in RequestAborted by default?
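To make the trade-off concrete, here's a minimal sketch (hypothetical endpoint paths and a hypothetical GetItemsAsync source, assuming minimal APIs) of what opting in and out of cancellation could look like if WriteAsJsonAsync stopped passing RequestAborted by default:

```csharp
var app = WebApplication.Create();

// Small payload: under the proposed default, no cancellation token is passed,
// so an aborted request never surfaces as an OperationCanceledException in logs.
app.MapGet("/status", async context =>
{
    await context.Response.WriteAsJsonAsync(new { status = "ok" });
});

// Streaming payload: the caller opts in explicitly so serialization stops
// promptly when the client disconnects.
app.MapGet("/stream", async context =>
{
    await context.Response.WriteAsJsonAsync(
        GetItemsAsync(), // hypothetical IAsyncEnumerable<Item> source
        cancellationToken: context.RequestAborted);
});
```

The point of the proposal is that the first case, which is the overwhelmingly common one, gets the fast path for free, while the streaming case keeps a clear opt-in.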
if (!response.HasStarted)
{
    // Flush headers before starting Json serialization. This avoids an extra layer of buffering before the first flush.
    startTask = response.StartAsync(cancellationToken);
}
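Putting the snippets above together, the overall shape of the change reads roughly like this sketch (simplified signatures and parameter lists, not the exact aspnetcore source):

```csharp
// Sketch of the fast/slow split discussed above. StartAsync flushes headers
// so the new PipeWriter-based JsonSerializer overload can write to the
// response body without an extra layer of buffering.
public static Task WriteAsJsonAsync<TValue>(
    HttpResponse response, TValue value,
    JsonSerializerOptions? options, CancellationToken cancellationToken)
{
    Task startTask = Task.CompletedTask;
    if (!response.HasStarted)
    {
        // Flush headers before starting Json serialization.
        startTask = response.StartAsync(cancellationToken);
    }

    if (startTask.IsCompletedSuccessfully)
    {
        // Fast path: headers flushed synchronously; serialize straight
        // to the response PipeWriter.
        return JsonSerializer.SerializeAsync(
            response.BodyWriter, value, options, cancellationToken);
    }

    // Slow path: await the header flush first, then serialize.
    return WriteAsJsonAsyncSlow(startTask, response.BodyWriter, value,
        options, cancellationToken);
}
```

The IsCompletedSuccessfully check keeps the common case allocation-light: when headers are already flushed (or StartAsync completes synchronously), no async state machine is needed before serialization begins.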
It'd be nice to get some HTTP/2 numbers too. I don't think this optimization helps in that case, but it probably doesn't hurt either. @sebastienros Do we have any HTTP/2 scenarios that use WriteAsJsonAsync?
I think it was the incorrect flushing in Json. I did run this a couple weeks ago and the regression was gone.
That's kind of funny 😆
This was already this way before this PR, so something to follow up on?
Merging so perf CI can let us know what improves and what doesn't.
No description provided.