fix(swarm): Eliminating repetitive arc construction on each poll #4991
Conversation
Force-pushed 59dfdf8 to 9262579
@mxinden or @thomaseizinger, I managed to make kad and identify work with more granular waking for connection handlers. I need to ask at this point whether I am heading in a good direction. I realized that, even though in theory less polling should be performed, this is a very subtle breaking change: it seems that almost all of the protocols break (tests mostly time out), since they don't call the waker when needed. I need to know before I invest more time into this.
swarm/src/handler.rs
Outdated
```rust
impl<'a> Iterator for ProtocolsAdded<'a> {
    type Item = &'a StreamProtocol;

    fn next(&mut self) -> Option<Self::Item> {
        self.protocols.next()
    }
}
```
It was by design that this is the only public API exposed by `ProtocolsAdded` and `ProtocolsRemoved`. Can't we keep this but back it by a `SmallVec` instead?
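A rough sketch of what keeping the iterator-only API while backing it with a small owned vector could look like. This is illustrative only: a stub `StreamProtocol` and a plain slice stand in for the real libp2p type and for `SmallVec<[StreamProtocol; 1]>`, and the `new` constructor is hypothetical.

```rust
// Stand-in for libp2p's StreamProtocol, for illustration only.
#[derive(Debug, PartialEq)]
struct StreamProtocol(&'static str);

/// `ProtocolsAdded` keeps its iterator-only public API, but iterates a
/// slice of owned protocols instead of borrowing from a `HashSet`.
struct ProtocolsAdded<'a> {
    protocols: std::slice::Iter<'a, StreamProtocol>,
}

impl<'a> ProtocolsAdded<'a> {
    // Hypothetical constructor; the real type would be built internally
    // by the swarm from its `SmallVec` of protocols.
    fn new(protocols: &'a [StreamProtocol]) -> Self {
        ProtocolsAdded { protocols: protocols.iter() }
    }
}

impl<'a> Iterator for ProtocolsAdded<'a> {
    type Item = &'a StreamProtocol;

    fn next(&mut self) -> Option<Self::Item> {
        self.protocols.next()
    }
}
```

Because the only public surface is `Iterator`, swapping the backing storage this way would not be a breaking change for handler implementations.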
Thank you! But, as you've found out yourself, pretty much all protocols currently rely on this and it does simplify writing them a lot. Getting state machines right is hard enough as it is. It is kind of nice not having to think about wakers too. As such, I think a move forward with something like

In its current state, this PR performs several kinds of optimisations. Before we take on an additional burden such as

All of this is to say that I think we are not spending our time very wisely if we optimise wakers. I am happy to be proven wrong by benchmarks though. Note that I think the (excessive) cloning of protocols is a problem, mostly because it is O(N). In my opinion, the path forward here is to simplify / remove the

My suggestion would be:
Curious to hear what you think! Thank you for all this work :)
@thomaseizinger, I need to admit, I started tackling the waker issue because it's incredibly hard to do, which I find really fun and satisfying when I get it to work. But your feedback makes a lot of sense; I can drop this, since from the benchmarks it really seems it's not worth the tradeoff yet, though I'd like to contribute somehow anyway. For your first bullet point, I can definitely do that. I noticed that nobody is working on #4790. We could make a transition to a simplified connection handler with
I managed to not break the API: #5026
We all like to solve hard problems, ey? :)
That would be the end goal! If you are interested in working on that, I can fill you in on what - in my view - the biggest challenges are:
Description

`Connection` now stores `THandler::InboundProtocol::Info` for the local protocol list. It no longer uses a hash set, since we deal with a small number of protocols. The protocols are converted only if they need to be sent in an event. The `ProtocolChange` now only stores `SmallVec<[StreamProtocol; 1]>` (maybe just a `Vec` could be good too).

I implemented a minimal `DelegatedWaker` that will skip the polling if the handler was not woken up. When benchmarking the effect of this change on a simple RPC protocol, I observed around a 3-5% improvement, though my benchmark rather extensively exercised the muxer, since the protocol reused connections.

Fix #4990
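A "skip polling unless woken" waker as described above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual implementation: the name `DelegatedWaker` is taken from the description, but the fields and methods here are assumptions. The idea is to record wake-ups in an atomic flag so the connection can cheaply check whether a sub-handler needs to be polled at all.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

/// Hypothetical sketch: wraps the outer task's `Waker` and records
/// wake-ups in a flag, so the caller can skip polling a handler that
/// was never woken.
struct DelegatedWaker {
    woken: AtomicBool,
    inner: Waker,
}

impl DelegatedWaker {
    fn new(inner: Waker) -> Arc<Self> {
        // Start in the "woken" state so the handler is polled at least once.
        Arc::new(DelegatedWaker {
            woken: AtomicBool::new(true),
            inner,
        })
    }

    /// Consumes the wake-up flag; the caller polls the handler only
    /// when this returns `true`.
    fn take_woken(&self) -> bool {
        self.woken.swap(false, Ordering::AcqRel)
    }
}

impl Wake for DelegatedWaker {
    fn wake(self: Arc<Self>) {
        // Remember that this handler asked to be polled again, then
        // delegate to the outer task's waker so the executor reschedules it.
        self.woken.store(true, Ordering::Release);
        self.inner.wake_by_ref();
    }
}
```

During `poll`, the connection would pass `Waker::from(delegated.clone())` in the handler's `Context` and consult `take_woken()` on subsequent polls, which is where the avoided work comes from.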
Notes & open questions
I wonder what could be a good benchmark for this, maybe dropping and creating more connections?