
[QUIC] handle concurrent stream reads and concurrent stream writes #52627

Closed
wfurt opened this issue May 11, 2021 · 10 comments · Fixed by #67329
Comments

@wfurt (Member) commented May 11, 2021

While we do support concurrent read and write, concurrent reads (i.e. multiple pending read operations) and concurrent writes (multiple writers) are problematic. There is a note in the code, but no logic prevents this, it will certainly not work correctly at the moment, and there are no tests for it.

We should add logic to track pending IO and most likely block the concurrency, as SslStream does.
This will possibly also help with #32142
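The enforcement described above (track whether an operation is pending, and fail fast on a second concurrent one) can be sketched in a language-agnostic way. This is a minimal illustration in Python with asyncio, not the dotnet/runtime implementation; `GuardedReader` and all its members are hypothetical names:

```python
import asyncio

class GuardedReader:
    """Hypothetical sketch: reject overlapping reads instead of letting
    them interleave, mirroring the SslStream-style enforcement."""

    def __init__(self):
        self._read_pending = False

    async def read(self):
        if self._read_pending:              # another read is already in flight
            raise RuntimeError("concurrent read detected")
        self._read_pending = True
        try:
            await asyncio.sleep(0.01)       # stand-in for the actual stream IO
            return b"data"
        finally:
            self._read_pending = False

async def demo():
    reader = GuardedReader()
    first = asyncio.create_task(reader.read())
    await asyncio.sleep(0)                  # let the first read start
    try:
        await reader.read()                 # second, overlapping read
        second_rejected = False
    except RuntimeError:
        second_rejected = True
    return await first, second_rejected

data, second_rejected = asyncio.run(demo())
```

In the real stream the flag would need an atomic compare-and-exchange rather than a plain field, since callers may race from multiple threads; the asyncio version can get away with a bool because everything runs on one event loop.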

@dotnet-issue-labeler dotnet-issue-labeler bot added the untriaged New issue has not been triaged by the area owner label May 11, 2021
@ghost commented May 11, 2021

Tagging subscribers to this area: @dotnet/ncl
See info in area-owners.md if you want to be subscribed.


@geoffkizer (Contributor)

We don't generally enforce this across all Streams (e.g. HTTP request/response streams). Generally speaking, it's up to the user to ensure they don't use the Stream concurrently.

That said, I don't have a strong opinion here. Maybe @stephentoub does.

@wfurt (Member, Author) commented May 12, 2021

SslStream throws PNSP while other stream types may not. I would think that we either block it or make sure it works properly. If we want to support it, we should add tests and fix the state handling. I'm not sure that would ever be beneficial to HttpClient or Kestrel. Since QUIC will not be public API in 6.0, we may focus on the simple cases and decide later.

@geoffkizer (Contributor)

I don't think we should ever make it work properly, it's not worth the effort/overhead.

I do think we should have a consistent philosophy here and apply it across all our various Streams.

@karelz karelz added bug and removed untriaged New issue has not been triaged by the area owner labels May 18, 2021
@karelz karelz added this to the Future milestone May 18, 2021
@stephentoub (Member)

I don't think we should ever make it work properly, it's not worth the effort/overhead.

+1

I'd be fine if we want to detect / fail for it as SslStream does, especially if doing so helps to simplify something else (e.g. because of the enforcement we don't need to worry about unsafe code doing something problematic due to concurrent access, or we can better use a cache without additional synchronization, or some such thing). But I also think it's ok to say that at most one read and at most one write operation at a time is supported, and attempting more than that is a violation of the contract. We generally don't make any guarantees about multithreaded use of non-thread-safe types, and this is an extension of that.

@wegylexy (Contributor)

Let's say that in a multiplayer game there are players A, B, and C, each with a single QUIC stream connected to the server.
When B and C take an action simultaneously, the game writes on their respective streams.
The server reads from the streams of B and C asynchronously, and writes on the stream of A.
Without concurrent IO support, the server would need to copy data into an unbounded Channel, which takes up memory, and loop in another thread to ensure a single writer, instead of zero-copy writing to msquic asynchronously. Alternatively, the server would need extra lock{}s, which block the CPU.
This game supports hundreds of players in proximity.

Every write is a small buffer containing a complete message that could otherwise be handled lock-free and zero-copy. Each message is small enough to be sent whole, and messages may be delivered out of order; this is not a case of streaming huge data whose chunks must stay in order.
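The workaround described above, funneling all producers through an unbounded queue drained by a single writer loop, can be sketched as follows. This is a language-agnostic illustration in Python with asyncio (`asyncio.Queue` standing in for `Channel.CreateUnbounded`, a plain list standing in for the msquic stream); all names are hypothetical:

```python
import asyncio

async def writer_loop(queue, sink):
    """Drains messages enqueued by many producers and writes them
    one at a time, so the underlying stream only ever sees a single writer."""
    while True:
        msg = await queue.get()
        if msg is None:                 # sentinel: shut the loop down
            break
        sink.append(msg)                # stand-in for stream.write(msg)

async def demo():
    queue = asyncio.Queue()             # unbounded, like Channel.CreateUnbounded
    sink = []
    task = asyncio.create_task(writer_loop(queue, sink))
    # Many "players" enqueue concurrently; only the loop touches the sink.
    await asyncio.gather(*(queue.put(f"msg{i}".encode()) for i in range(3)))
    await queue.put(None)
    await task
    return sink

sink = asyncio.run(demo())
```

This is exactly the extra copy and extra hop the comment is objecting to: every message is buffered in the queue before it ever reaches the stream, rather than being written zero-copy at the call site.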

@wegylexy (Contributor)

It'd be a good reference for other stream types to support concurrent IO. SignalR could benefit from it.

@geoffkizer (Contributor)

For a scenario like that, just use a lock (or better, an async lock via SemaphoreSlim) to ensure mutual exclusion.

You could even build it as a wrapping stream so it works with any arbitrary underlying Stream.
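The wrapping approach suggested above can be sketched in a language-agnostic way. This is a minimal illustration in Python, where `asyncio.Lock` plays the role of an async mutex like `SemaphoreSlim(1, 1)`; `SerializedWriter` and its members are hypothetical names, and a plain list stands in for the underlying stream:

```python
import asyncio

class SerializedWriter:
    """Hypothetical wrapper: callers may issue writes concurrently, but an
    async mutex ensures the wrapped object only ever sees one write at a time
    (the asyncio analogue of guarding WriteAsync with SemaphoreSlim(1, 1))."""

    def __init__(self, sink):
        self._sink = sink
        self._lock = asyncio.Lock()
        self._inside = 0                # writes currently in the critical section
        self.max_inside = 0             # high-water mark, to observe exclusion

    async def write(self, data):
        async with self._lock:          # mutual exclusion across writers
            self._inside += 1
            self.max_inside = max(self.max_inside, self._inside)
            await asyncio.sleep(0)      # stand-in for the real async write
            self._sink.append(data)
            self._inside -= 1

async def demo():
    writer = SerializedWriter([])
    # Five concurrent callers; the lock serializes them one by one.
    await asyncio.gather(*(writer.write(i) for i in range(5)))
    return writer

writer = asyncio.run(demo())
```

Because the lock serializes only the write path, reads on the same stream are unaffected, and since the wrapper never touches stream internals it works over any underlying stream type, as the comment suggests.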

@karelz (Member) commented Oct 26, 2021

Triage: We should be consistent with other APIs (like SslStream) and block multi-writes / multi-reads proactively rather than let them work incorrectly.

@karelz karelz modified the milestones: Future, 7.0.0 Oct 26, 2021
@wegylexy (Contributor)

For my case, I'd wrap the QUIC stream as a tunnel for SignalR transport and then use Hub invocation instead of messing with the bytes myself.

@rzikm rzikm self-assigned this Mar 28, 2022
@ghost ghost added the in-pr There is an active PR which will close this issue when it is merged label Mar 30, 2022
@ghost ghost removed the in-pr There is an active PR which will close this issue when it is merged label Apr 27, 2022
@ghost ghost locked as resolved and limited conversation to collaborators May 27, 2022