Gossipsub backpressure #549
Conversation
…and clone it per ConnectionHandler; this will allow us to always have an open Receiver.
…to allow for better handling of each message send.
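A minimal sketch of that shape, assuming the async-channel crate (whose Sender and Receiver are both cloneable); the type and field names below are illustrative, not the PR's actual ones.

```rust
use async_channel::{bounded, Receiver, Sender};

// Illustrative stand-in for the messages the behaviour sends to a handler.
enum HandlerIn {
    Message(Vec<u8>),
}

// Channel pair kept per peer. Because async-channel is MPMC, the Receiver
// can be cloned into every ConnectionHandler for that peer, so there is
// always an open Receiver while at least one handler is alive.
struct PeerQueue {
    sender: Sender<HandlerIn>,
    receiver: Receiver<HandlerIn>,
}

// Each ConnectionHandler keeps its own clone of the Receiver.
struct ConnectionHandler {
    receive_queue: Receiver<HandlerIn>,
}

impl PeerQueue {
    fn new(capacity: usize) -> Self {
        let (sender, receiver) = bounded(capacity);
        PeerQueue { sender, receiver }
    }

    // Called for every new connection to the peer.
    fn new_handler(&self) -> ConnectionHandler {
        ConnectionHandler {
            receive_queue: self.receiver.clone(),
        }
    }
}
```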
Yeah nice. Two things I noticed:
- We currently don't handle the cases where the queues are full, i.e. we don't handle the errors when we try to send (see the sketch after this list). I guess this is a future PR.
- I know I'm missing something, but why are we using an unbounded channel for the priority queue and not just another bounded channel like we do with the non-priority one?
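For the first point, handling a full bounded queue could look roughly like the sketch below, using async-channel's try_send. What the behaviour does on Full/Closed here is purely illustrative, not what this PR implements.

```rust
use async_channel::{Sender, TrySendError};

// Illustrative message type.
enum HandlerIn {
    Message(Vec<u8>),
}

// Try to queue a non-priority message without blocking the behaviour.
// `try_send` fails immediately when the bounded queue is full or when all
// receivers (ConnectionHandlers) have been dropped.
fn try_queue(sender: &Sender<HandlerIn>, msg: Vec<u8>) -> Result<(), &'static str> {
    match sender.try_send(HandlerIn::Message(msg)) {
        Ok(()) => Ok(()),
        // Queue full: drop the message, penalise the peer, or report the
        // error back to the publisher, depending on the policy chosen.
        Err(TrySendError::Full(_)) => Err("send queue full"),
        // All handlers for this peer are gone.
        Err(TrySendError::Closed(_)) => Err("send queue closed"),
    }
}
```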
let sender = self
    .handler_send_queues
    .get_mut(peer)
    .expect("Peerid should exist");
I think this is true, as I recall trying to enforce conditions like this.
But we have to be absolutely sure that there is no code path where a peer can get added to or removed from the peer_topics mapping and not the handler_send_queues mapping.
If we are uncertain about this condition in any way (or are concerned a future dev may break this condition), we should log a crit instead imo.
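As a rough illustration of that suggestion, the lookup could log loudly instead of panicking. This is a sketch only: it uses the log crate's error! macro as a stand-in for a crit-level log, and the function name and types are hypothetical.

```rust
use async_channel::Sender;
use std::collections::HashMap;

type PeerId = u64; // stand-in for libp2p::PeerId
enum HandlerIn {
    Message(Vec<u8>),
}

// Defensive lookup: if the invariant between peer_topics and
// handler_send_queues is ever broken, log it and skip instead of panicking.
fn sender_for<'a>(
    handler_send_queues: &'a mut HashMap<PeerId, Sender<HandlerIn>>,
    peer: &PeerId,
) -> Option<&'a mut Sender<HandlerIn>> {
    match handler_send_queues.get_mut(peer) {
        Some(sender) => Some(sender),
        None => {
            log::error!("No send queue for peer {peer}; peer_topics and handler_send_queues are out of sync");
            None
        }
    }
}
```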
A peer is added when a new ConnectionHandler is created and removed when the connection is closed. As we are using mpmc channels, if that PeerId exists it has a respective ConnectionHandler (theoretically). If you prefer, we can use a crit/error log instead for now.
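A rough sketch of that lifecycle, with simplified stand-ins for the swarm's connection callbacks (the real code hooks into the behaviour's connection-established/closed events; the names and queue capacity here are hypothetical):

```rust
use async_channel::{bounded, Receiver, Sender};
use std::collections::HashMap;

type PeerId = u64; // stand-in for libp2p::PeerId
enum HandlerIn {
    Message(Vec<u8>),
}

struct Behaviour {
    // Sender used by the behaviour, plus the Receiver that each new
    // ConnectionHandler for this peer clones.
    handler_send_queues: HashMap<PeerId, (Sender<HandlerIn>, Receiver<HandlerIn>)>,
}

impl Behaviour {
    // Sketch of the "added when a new ConnectionHandler is created" side:
    // create the queue lazily and hand a cloned Receiver to the handler.
    fn on_connection_established(&mut self, peer: PeerId) -> Receiver<HandlerIn> {
        let (_, rx) = self
            .handler_send_queues
            .entry(peer)
            .or_insert_with(|| bounded(16)); // hypothetical capacity
        rx.clone()
    }

    // Sketch of the "removed when the connection is closed" side: drop the
    // entry once the last connection to the peer goes away, at the same
    // point where peer_topics is cleaned up, keeping both maps in sync.
    fn on_connection_closed(&mut self, peer: &PeerId, remaining_connections: usize) {
        if remaining_connections == 0 {
            self.handler_send_queues.remove(peer);
        }
    }
}
```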
lol, was messing with gh cli, didn't mean to merge this early. But it works.
1 - when the queue is full for every peer we return …
2 - no worries :D, because …
Backpressure PR to our fork