refactor(gossipsub): send more specific messages to ConnectionHandler
#4811
Conversation
Force-pushed from 8d79411 to d8545d8
Hmm, I am not convinced it is simpler. I think it actually adds complexity, because now there is a pathological case where the message can contain one thing but the type says otherwise. Can't we just move the …

My theory is that we can reduce allocations in gossipsub a lot if we delay construction of …
Hi Thomas, thanks for jumping in
Yup, I agree, I also don't like that we are not leveraging the type system for that.
Yeah, I also tinkered with that, but then it means we'll be fragmenting the main message for each connection handler, which means calling the same function lots of times; in LH's case that's ~100 connection handlers.
It's interesting you say this, because I had the opposite idea, aha. I was thinking that by having a …

How do you envision reducing the allocations by delaying the construction of the …?
With something like #4751 where the main payload is a …
I've not looked in detail into why we need …

So the options are either to remove it and simply force the caller to stay within the max message size, or to perform fragmentation directly on the bytes that we want to publish. My assumption would be that this only matters for publish messages anyway and thus it doesn't really make sense to call it on any other message type. I'd love to learn more about what use case this functionality serves though, and how this is compatible with other libp2p implementations.
I just had another look at the function and I think I now understand what is happening. We are not actually fragmenting the payload; instead, we are splitting the individual lists up into multiple RPCs. This problem simply goes away if we change the interface to the handler by sending down each message individually. If we wanted to, we could batch messages together in the handler, but I wouldn't bother. Protobufs have marginal overhead, so it doesn't make a difference in terms of bandwidth whether you send 3 instances of …
Yep, the message fragmentation comes from the go implementation, which was the "spec" at the time. The idea was to group control messages into a single RPC whenever possible. The issue then became that RPC message sizes are constrained, and if we grouped too many messages beyond the size limit we would then have to fragment them again. This was mainly implemented as it was in the go implementation. We can probably do the grouping when drawing from the queues in the handler. @jxs is figuring out the best way to handle these things and where best to put the logic.
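For illustration only, a minimal sketch of what "grouping when drawing from the queues in the handler" could look like; the queue of pre-encoded messages and the `max_transmit_size` parameter are assumptions for the sketch, not the actual handler internals:

```rust
use std::collections::VecDeque;

/// Drain as many already-encoded messages from the queue as fit into one RPC,
/// leaving the rest for the next write. Always yields at least one message so
/// a single oversized message is not stuck forever.
fn next_batch(queue: &mut VecDeque<Vec<u8>>, max_transmit_size: usize) -> Vec<Vec<u8>> {
    let mut batch = Vec::new();
    let mut size = 0;
    while let Some(msg) = queue.front() {
        if !batch.is_empty() && size + msg.len() > max_transmit_size {
            break; // keep the remaining messages queued
        }
        size += msg.len();
        batch.push(queue.pop_front().expect("front checked above"));
    }
    batch
}

fn main() {
    let mut queue: VecDeque<Vec<u8>> =
        VecDeque::from(vec![vec![0; 600], vec![0; 600], vec![0; 200]]);
    let batch = next_batch(&mut queue, 1_000);
    assert_eq!(batch.len(), 1); // 600 + 600 would exceed the limit
    assert_eq!(queue.len(), 2);
}
```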
Anything that speaks against not doing any grouping of messages whatsoever? What is the difference between one RPC with 4 control messages vs 4 RPCs with 1 control message each?
Not that I know of. I think the idea was to avoid small overhead in messages; there's a varint for the size of the message and small amounts of protobuf padding. I.e. the idea was: if we could easily group, then group them.
I consider grouping an optimization and thus only worth it if we have a calculation or benchmark proving its impact. Based on intuition, I would deem the protobuf overhead negligible. See the similar calculation in https://github.com/libp2p/specs/blob/master/webrtc/README.md#faq.
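As a rough, hedged illustration of why that overhead is likely negligible (the byte counts below are assumptions, not measurements): each embedded message carries its own protobuf field tag and length varint whether or not it is grouped, so sending N messages as N separate RPCs mainly adds one extra per-RPC stream-framing length prefix of a few bytes.

```rust
/// Approximate extra bytes on the wire when sending `n_messages` as separate
/// RPCs instead of one grouped RPC, assuming each additional RPC only costs
/// one more length-prefix varint of `framing_varint_len` bytes (assumed value).
fn extra_bytes_for_splitting(n_messages: usize, framing_varint_len: usize) -> usize {
    n_messages.saturating_sub(1) * framing_varint_len
}

fn main() {
    // e.g. 4 control messages with ~2-byte prefixes: roughly 6 extra bytes total.
    println!("{}", extra_bytes_for_splitting(4, 2));
}
```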
Let's optimise for memory usage first, which seems to be an acute problem, and throw out the unproven bandwidth optimisation if it makes that harder to implement. The grouping can be reintroduced later, although I somewhat doubt that we'll be able to measure an improvement. Ironically, having to allocate multiple …
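On the memory side, a minimal sketch of the allocation argument from earlier in the thread, assuming the publish payload ends up stored as `bytes::Bytes` (as #4751 seems to suggest; an assumption, not what this PR implements): cloning `Bytes` only bumps a reference count, so fanning a publish out to ~100 connection handlers does not copy the data ~100 times.

```rust
use bytes::Bytes;

/// Hand the same payload to every connection handler without copying it:
/// each `clone()` shares the same backing buffer.
fn fan_out(payload: Bytes, handlers: usize) -> Vec<Bytes> {
    (0..handlers).map(|_| payload.clone()).collect()
}

fn main() {
    let payload = Bytes::from(vec![0u8; 1024]);
    let copies = fan_out(payload, 100);
    assert_eq!(copies.len(), 100); // 100 handles, still one 1 KiB allocation
}
```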
Force-pushed from 838b14f to 1eb0d44
Rename Rpc to RpcIn and create RpcOut to allow sending different types of messages via such an enum. This will allow us to separate the types of messages into different priorities.
Force-pushed from 1eb0d44 to f2b1f42
Thanks for the review Thomas! Addressed the changes in separate commits, it's easier if you follow them in order.
Force-pushed from f2b1f42 to f2c5edd
Couple more suggestions, most of them pretty minor. Feel free to not address immediately / at all :)
Overall, I think this is already looking great and much easier to understand! I wouldn't be opposed to merging this as an initial refactoring and starting another PR that explores how we can add a (prioritised) channel to the handler.
Another downside of large groups is head-of-line blocking. Say that one puts 10 messages into a single protobuf. Say that the resulting protobuf is sent as two IP packets. Say that the second IP packet is dropped along the path. The messages in the first IP packet will not be delivered to the remote's Gossipsub implementation until the second IP packet is resent.
I would prefer only merging this pull request once we have a second pull request in place that makes use of this "feature".
I would agree if we were changing external interfaces here, and merging it would thus be a commitment to them. But as it stands, this is just a refactoring, and even if it turns out that we need something else, we can always revert it again and/or keep changing internals. In other words, I don't see any risk in merging it. Happy to hold off though, as long as we merge them in separate PRs and keep this as a refactoring. At the moment, it is a tiny behaviour change as we now send more individual messages. Changing the order of messages being sent is more invasive IMO, so it would be great to be able to bisect between the two.
Great stuff! Thanks for following up :)
Mind writing up what we discussed regarding a priority channel in the issue next?
Thanks Thomas! Wrote #4667 (comment) as a follow-up
Force-pushed from 7ca7089 to b375b2f
and rename cloned to copied for clarity
Great work! Happy to merge that as is :)
Description
Previously, the `NetworkBehaviour` constructed a `proto::RPC` type and sent that into an unbounded queue to the `ConnectionHandler`. With such a broad type, the `ConnectionHandler` cannot do any prioritization on which messages to drop if a connection slows down.

To enable a more fine-granular handling of messages, we make the interface between `NetworkBehaviour` and `ConnectionHandler` more specific by introducing an `RpcOut` type that differentiates between `Publish`, `Forward`, `Subscribe`, etc.

Related: #4667.
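For readers skimming the diff, a minimal sketch of the shape such an enum could take; the variant payload types below are placeholders for illustration, not the actual gossipsub internals:

```rust
// Placeholder types standing in for the real gossipsub ones.
struct Message;       // a full gossipsub message
struct TopicHash;     // a topic identifier
struct ControlAction; // IHAVE / IWANT / GRAFT / PRUNE

/// One variant per kind of outbound item, so the `ConnectionHandler` can tell
/// a fresh publish apart from a forward or a control message and prioritise
/// (or drop) accordingly when a connection slows down.
enum RpcOut {
    Publish(Message),
    Forward(Message),
    Subscribe(TopicHash),
    Unsubscribe(TopicHash),
    Control(ControlAction),
}
```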
Notes & open questions
Max suggested on #4667 (comment) changing `HandlerIn`, but that would imply more changes to the `fragment_message` function and back-and-forth conversion between `proto::RPC` and its subtypes (using a subtype via `HandlerIn` and then converting to the main `proto::RPC` type when sending it from the `ConnectionHandler`). Adding an `RpcType` to distinguish seemed simpler.

We were also using the `Rpc` type for conversions between `Messages`, `ControlActions`, and `Subscriptions`, which can be done from these types directly.

ptal @mxinden @thomaseizinger @divagant-martian @AgeManning
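As an illustration of converting from these types directly instead of routing them through an intermediate `Rpc`, a hedged sketch; the struct fields and the `proto` module below are simplified stand-ins for the generated protobuf types, not the actual crate internals:

```rust
/// Simplified stand-in for the generated protobuf types.
mod proto {
    #[derive(Default)]
    pub struct SubOpts {
        pub subscribe: Option<bool>,
        pub topic_id: Option<String>,
    }
}

/// Behaviour-level subscription action (simplified).
struct Subscription {
    topic: String,
    subscribe: bool,
}

/// Convert straight into the wire type, with no intermediate `Rpc` in between.
impl From<Subscription> for proto::SubOpts {
    fn from(s: Subscription) -> Self {
        proto::SubOpts {
            subscribe: Some(s.subscribe),
            topic_id: Some(s.topic),
        }
    }
}
```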
Change checklist