AnonymousPipeStreams.Linux, Process.Unix Streams: perform async I/O by using Socket implementation #44647
Conversation
Tagging subscribers to this area: @dotnet/ncl
Nice. Thanks for trying this. Did you notice any impact in local microbenchmarks?
TE benchmarks are currently broken on net6.0, but I am trying hard to find out why ...
cc @geoffkizer
I worked on this further to the point where I've added an
/azp run runtime-libraries-coreclr outerloop
Azure Pipelines successfully started running 1 pipeline(s).
Why not create new operation types in SocketAsyncContext? If you do that, you shouldn't have to touch the existing send/recv paths at all.
I think it will result in code duplication until we meet the
@sebastienros, were you able to get these working again? Thanks!
It's working again (since yesterday).
In case I can't find the time to build the PR, to use a custom dll, add:
@tmds I've built your fork and run the JSON platform benchmarks in 4 configurations:
The results:
The trace files are available here.
Thanks for checking, @adamsitnik. That's good, that means this has basically zero impact, which is as good as we could hope for. The only number that jumps out at me is the First Request ms value. Is that difference just noise, or is that meaningful?
Thanks for running those benchmarks, Adam. Since the benchmarks show no regression, I've worked out the implementation further. These new changes won't impact the benchmarks.

I'm struggling to make the

There are a couple of ways this could be implemented. I've avoided the work and risk of refactoring
Tagging subscribers to this area: @dotnet/ncl

Issue details: This change is to see if there are functional or performance changes when using read/write instead of recvmsg/sendmsg. By using read/write we could look into leveraging the existing Socket (and SocketAsyncEngine) for performing async operations on pipe file descriptors, see #44329.

@adamsitnik @sebastienros can you benchmark this using the TechEmpower benchmarks?
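The premise of the change above is that read()/write() on a connected socket behave the same as recvmsg()/sendmsg() with no flags, so the socket engine can drive pipe file descriptors too. A minimal sketch of that equivalence (not from the PR; Linux-only, P/Invoking libc directly, with the AF_UNIX/SOCK_STREAM constant values and a 64-bit ssize_t mapping assumed):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

class ReadWriteOnSocketSketch
{
    // Hypothetical direct libc bindings for illustration only.
    [DllImport("libc", SetLastError = true)]
    static extern int socketpair(int domain, int type, int protocol, int[] sv);

    [DllImport("libc", SetLastError = true)]
    static extern nint write(int fd, byte[] buf, nuint count);

    [DllImport("libc", SetLastError = true)]
    static extern nint read(int fd, byte[] buf, nuint count);

    const int AF_UNIX = 1;     // assumed Linux value
    const int SOCK_STREAM = 1; // assumed Linux value

    static void Main()
    {
        int[] sv = new int[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
            throw new InvalidOperationException("socketpair() failed");

        // Plain write() on a socket fd, no sendmsg() involved.
        byte[] msg = Encoding.ASCII.GetBytes("hello");
        write(sv[0], msg, (nuint)msg.Length);

        // Plain read() on the peer fd, no recvmsg() involved.
        byte[] buf = new byte[16];
        nint n = read(sv[1], buf, (nuint)buf.Length);
        Console.WriteLine(Encoding.ASCII.GetString(buf, 0, (int)n));
    }
}
```

Since read()/write() carry no ancillary data or flags, they are the lowest common denominator that works for both sockets and pipes, which is what makes sharing SocketAsyncEngine plausible.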
src/libraries/System.Net.Sockets/src/System/Net/Sockets/Socket.Unix.cs (outdated)
Would it be better to move this polling into corelib and decouple it from sockets so other things can take advantage? |
src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs (outdated)
Looking good. Thanks for working on this. Are there any cancellation-related tests we can enable now or add for pipe streams on Unix?
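A hedged sketch of the kind of cancellation test the question above is asking about (not from the PR): once pipe reads on Unix go through the socket engine, a pending ReadAsync on an anonymous pipe should observe the CancellationToken instead of ignoring it, so a test can assert that an unfulfilled read gets canceled.

```csharp
using System;
using System.IO.Pipes;
using System.Threading;
using System.Threading.Tasks;

class PipeCancellationSketch
{
    static async Task Main()
    {
        // Server owns the write end; client reads from the matching handle.
        using var server = new AnonymousPipeServerStream(PipeDirection.Out);
        using var client = new AnonymousPipeClientStream(PipeDirection.In, server.ClientSafePipeHandle);

        // Fire cancellation shortly after the read starts pending.
        using var cts = new CancellationTokenSource(millisecondsDelay: 100);
        byte[] buffer = new byte[16];
        try
        {
            // No data is ever written, so this read stays pending until the token fires.
            await client.ReadAsync(buffer.AsMemory(), cts.Token);
            Console.WriteLine("read completed unexpectedly");
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("read canceled");
        }
    }
}
```

In a real test suite this would be an xunit fact with a generous timeout rather than a console program; the essential assertion is just that OperationCanceledException surfaces while the pipe is still open.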
@geoffkizer, can you take a look at the socket parts?
src/libraries/System.Net.Sockets/src/System/Net/Sockets/SocketPal.Unix.cs
src/libraries/System.Net.Sockets/src/System/Net/Sockets/SocketPal.Unix.cs
src/libraries/System.Net.Sockets/tests/FunctionalTests/CreateSocketTests.cs
src/libraries/System.Net.Sockets/tests/FunctionalTests/CreateSocketTests.cs (outdated)
I don't have any major concerns -- a couple small issues above.
My concerns have all been addressed -- @stephentoub is this ready to merge?
/azp run
You have several pipelines (over 10) configured to build pull requests in this repository. Specify which pipelines you would like to run by using /azp run [pipelines] command. You can specify multiple pipelines using a comma separated list.
/azp list
/azp run runtime
Azure Pipelines successfully started running 1 pipeline(s).
Thanks, @tmds!
{
    if (_isAsync)
    {
        return ReadAsync(buffer, offset, count, CancellationToken.None).GetAwaiter().GetResult();
I know this was just copied from before; but why is this ReadAsync(...).GetAwaiter().GetResult() while Read(Span) is base.Read(buffer)? Couldn't this also be return base.Read(new Span<byte>(buffer, offset, count)); ? Or is the other one wrong?
@benaadams the Stream.Read(Span) method has an additional overhead of renting an array from the ArrayPool and copying the memory:

runtime/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs, lines 659 to 677 at 0f64b26:
    public virtual int Read(Span<byte> buffer)
    {
        byte[] sharedBuffer = ArrayPool<byte>.Shared.Rent(buffer.Length);
        try
        {
            int numRead = Read(sharedBuffer, 0, buffer.Length);
            if ((uint)numRead > (uint)buffer.Length)
            {
                throw new IOException(SR.IO_StreamTooLong);
            }
            new ReadOnlySpan<byte>(sharedBuffer, 0, numRead).CopyTo(buffer);
            return numRead;
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(sharedBuffer);
        }
    }
This is why we use Read(Array) here and in a few other places, like FileStream.
Ah, and then it calls into the above overload using the array.
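That dispatch is easy to see with a small self-contained illustration (not code from the PR): a Stream subclass that only overrides the array-based Read still services Read(Span<byte>) calls, because the base implementation rents an array, forwards to the array overload, and copies the result back.

```csharp
using System;
using System.IO;

// Minimal stream that only implements the array-based Read overload.
class ArrayOnlyStream : Stream
{
    public int ArrayReadCalls { get; private set; }

    public override int Read(byte[] buffer, int offset, int count)
    {
        ArrayReadCalls++;
        for (int i = 0; i < count; i++) buffer[offset + i] = 0x42; // fill with a marker byte
        return count;
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();

    static void Main()
    {
        var s = new ArrayOnlyStream();
        Span<byte> span = stackalloc byte[4];

        // Not overridden here, so base.Read(Span<byte>) runs: it rents an
        // array, calls our Read(byte[], int, int), and copies into the span.
        int n = s.Read(span);
        Console.WriteLine($"{n} {span[0]:X2} {s.ArrayReadCalls}");
    }
}
```

The marker byte arriving in the span, with exactly one array-overload call recorded, confirms the rent-forward-copy bridge the quoted Stream.cs code implements.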
cc @stephentoub @wfurt