
refactor(identify): use ReadyUpgrade for {In,Out}boundUpgrade #3412

Closed
wants to merge 4 commits into from

Conversation

@jxs (Member) commented Jan 31, 2023

Description

Starts addressing #2863

Notes

I read the discussion, Thomas, namely your comment suggesting we start with the TimedFutures implementation. But as I wasn't completely familiar with everything you wrote, I started experimenting with identify and was able to use ReadyUpgrade for {In,Out}boundUpgrade, thereby removing most of the protocol code.

Open Questions

  • I am not familiar with all the protocols yet, but what do you think of following the approach presented here for identify: first replace all custom {In,Out}boundUpgrades with ReadyUpgrade, then create TimedFutures, and then introduce the breaking changes to ConnectionHandler?
    I ask because, with identify for example, this PR AFAICT maintains the same timeouts and backpressure, right? Nonetheless, if you prefer, we can add TimedFutures now and introduce it to identify already; no strong opinion.

  • What do you think of joining all the events in the same FuturesUnordered? I feel it makes sense as they are all events; it's just that some haven't been resolved yet. But if preferred, I can roll back to the events and pending_replies approach, though that will probably mean needing another enum with the identify variants.

Change checklist

  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • A changelog entry has been made in the appropriate crates

use ReadyUpgrade for Handler::InboundProtocol
use ReadyUpgrade for Handler::OutboundProtocol.
@jxs marked this pull request as draft January 31, 2023 22:06
@thomaseizinger (Contributor) commented Jan 31, 2023

  • I am not familiar with all the protocols yet, but what do you think of following the approach presented here for identify: first replace all custom {In,Out}boundUpgrades with ReadyUpgrade, then create TimedFutures, and then introduce the breaking changes to ConnectionHandler?
    I ask because, with identify for example, this PR AFAICT maintains the same timeouts and backpressure, right? Nonetheless, if you prefer, we can add TimedFutures now and introduce it to identify already; no strong opinion.

As I laid out in the comment below, I don't think we will maintain the same timeouts, unfortunately. What we can do is manually compose protocol::recv and futures_timer::Delay with a future::select. Naming things will get a bit clunky, but it is functionally equivalent to what I am planning to have with TimedFutures.

@thomaseizinger (Contributor) commented Jan 31, 2023

  • What do you think of joining all the events in the same FuturesUnordered? I feel it makes sense as they are all events; it's just that some haven't been resolved yet. But if preferred, I can roll back to the events and pending_replies approach, though that will probably mean needing another enum with the identify variants.

I think having multiple sets (i.e. different fields) of tasks is better.

  1. It allows us to prioritize tasks relative to each other, i.e. process outbound requests before inbound requests by choosing which one to poll first in ConnectionHandler::poll.
  2. It should result in a flatter type hierarchy: each set of tasks can have a separate output, so we don't need an enum that handles all task outputs. Within poll, we then just map each output to a ConnectionHandlerEvent.

@jxs (Member, Author) commented Feb 6, 2023

  • What do you think of joining all the events in the same FuturesUnordered? I feel it makes sense as they are all events; it's just that some haven't been resolved yet. But if preferred, I can roll back to the events and pending_replies approach, though that will probably mean needing another enum with the identify variants.

I think having multiple sets (i.e. different fields) of tasks is better.

  1. It allows us to prioritize tasks relative to each other, i.e. process outbound requests before inbound requests by choosing which one to poll first in ConnectionHandler::poll.
  2. It should result in a flatter type hierarchy: each set of tasks can have a separate output, so we don't need an enum that handles all task outputs. Within poll, we then just map each output to a ConnectionHandlerEvent.

Yeah, makes sense; reverted it, PTAL :)

@thomaseizinger (Contributor)

Originally, I envisioned TimedFutures to be used together with this: https://github.com/thomaseizinger/rust-async-response-stream/

But, I've since evolved this idea a bit and the latest draft is now here: mxinden/asynchronous-codec#5

That implements Stream now instead of Future. If we like this idea, we may need to adapt TimedFutures into TimedStreams, where we race a stream against a timeout.

Technically, this can also be done separately but I think it might be good to test compatibility of these ideas first and see how it pans out.

@thomaseizinger (Contributor)

If we like this idea

The idea more concretely is that we have a futures::stream::SelectAll (or equivalent like TimedStreams) in our ConnectionHandler. Each new stream is wrapped in a "message pattern", e.g. in the PR, I implemented RecvSend. RecvSend implements Stream, yielding events. In the simplest case (closing stream after), we only ever yield one event per stream: The incoming message plus a placeholder that we can send the response into. The stream internally listens for this placeholder to be filled, writes it to the stream, closes it and then terminates. SelectAll will then clean up the stream.

We would combine this with TimedStreams, which ensures that each stream finishes (i.e. returns None) within a given timeout. I believe this would yield code that gives a lot of control over what is happening without much boilerplate. After all, we will be doing this in every ConnectionHandler, hence being concise is important.

@mergify bot commented Mar 22, 2023

This pull request has merge conflicts. Could you please resolve them @jxs? 🙏

@thomaseizinger (Contributor)

In #4275, we are now adding a futures-bounded crate which should be of good use here. But perhaps this should just be restarted from current master.

@thomaseizinger thomaseizinger added this to the Simplify ConnectionHandler trait milestone Sep 18, 2023
@thomaseizinger thomaseizinger removed this from the Simplify ConnectionHandler trait milestone Sep 19, 2023
@thomaseizinger (Contributor)

#4275 is now merged and futures-bounded is ready to use if you are still keen on making that change here @jxs :)

@jxs (Member, Author) commented Sep 21, 2023

Thanks for the ping, Thomas. I won't have much bandwidth in the next week; @dgarus, do you want to go ahead and take it?

@dgarus (Contributor) commented Sep 21, 2023

Thanks for the ping, Thomas. I won't have much bandwidth in the next week; @dgarus, do you want to go ahead and take it?

Hi colleagues!
I could address this issue.

@thomaseizinger (Contributor)

Closing in favor of #4563.
