This repository has been archived by the owner on Aug 29, 2023. It is now read-only.

callbacks -> async / await #102

Closed
wants to merge 17 commits

Conversation

@dirkmc commented Apr 2, 2019

BREAKING CHANGE: All places in the API that used callbacks are now replaced with async/await
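For illustration, a minimal before/after of what the change means for callers (a sketch with a hypothetical dial-style API and handleError helper, not the repo's exact signatures):

// before: callback style
tcp.dial(addr, (err, conn) => {
  if (err) return handleError(err) // handleError is a placeholder
  // use conn
})

// after: async/await style (inside an async function)
try {
  const conn = await tcp.dial(addr)
  // use conn
} catch (err) {
  handleError(err)
}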

@ghost assigned dirkmc Apr 2, 2019
@ghost added the status/in-progress label Apr 2, 2019
@jacobheun (Contributor) commented

We'll need to refactor the tests and API at https://github.com/libp2p/interface-transport to async/await for this change.

@dirkmc (Author) commented Apr 2, 2019

Yes, definitely. Alan is working on some documentation in the websockets PR: libp2p/js-libp2p-websockets#82

@dirkmc force-pushed the feat/async-await branch 3 times, most recently from d22c9ed to 6cf8f71 on April 4, 2019

@alanshaw (Member) left a comment


This is super cool @dirkmc 🚀 - I'm going to fully review this later today but one thing I noticed skimming through is that the AbortSignal passed to dial can only be used to abort the socket during connection. This differs from what I've done in libp2p/js-libp2p-websockets#82 - where the signal is ALSO passed to both sides of the socket after the connection is opened so you can abort later. We should discuss.

cc @jacobheun
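For illustration, the distinction being discussed, as a sketch that assumes the dial(ma, { signal }) option from this PR and the abort-controller package (Node had no built-in AbortController at the time):

const AbortController = require('abort-controller')

const controller = new AbortController()

// inside an async function: here the signal only covers the connection attempt
const conn = await tcp.dial(ma, { signal: controller.signal })

// in libp2p/js-libp2p-websockets#82 the same signal is also wired into the
// opened socket's source and sink, so aborting here still tears the
// connection down
controller.abort()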

@dirkmc (Author) commented Apr 19, 2019

@alanshaw what would be the use case for aborting after the connection has been made (as opposed to calling close() on the connection)?

@alanshaw (Member) commented

Good question.

If you abort the stream it’ll throw an error that you can catch and inspect. If you simply close the stream, you’ll exit the loop and you’ll not know if the stream finished or if it was aborted.
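
Concretely, the difference looks roughly like this (a sketch assuming the abort surfaces as an error with code 'ABORT_ERR', as abortable-iterator raises it):

// inside an async sink consuming the stream's source
try {
  for await (const chunk of source) {
    // process chunk
  }
  // if close() was called we simply fall out of the loop here, with no
  // way to tell a clean finish from an early close
} catch (err) {
  if (err.code === 'ABORT_ERR') {
    // abort() was called: we get an inspectable error instead
  } else {
    throw err
  }
}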

@dirkmc (Author) commented Apr 22, 2019

I'm not sure I understand - it looks from the code like it will catch the AbortError and close the stream

@jacobheun (Contributor) commented

what would be the use case for aborting after the connection has been made (as opposed to calling close() on the connection)

The main case for this is parallel dials in the Switch. If I dial to a peer that I have 3 TCP addresses for, ideally I would dial to all 3 in parallel to connect as quickly as possible. The first connection to succeed would result in me aborting the other 2 dials. It's entirely possible that by the time I have the first connection, one or both of the other connections could also be made. Instead of detecting that difference, it would be nice to just call abort on the other dials and be done.

Note: As we currently don't have the option to cleanly abort dials, we only dial 1 address at a time.
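
A sketch of that parallel-dial pattern (a hypothetical helper, not the actual Switch code; it assumes dial(addr, { signal }) and the abort-controller package, and omits handling of dials that fail outright):

const AbortController = require('abort-controller')

// hypothetical helper, for illustration only
async function dialFastest (transport, addrs) {
  const controllers = addrs.map(() => new AbortController())
  const attempts = addrs.map((addr, i) =>
    transport.dial(addr, { signal: controllers[i].signal }).then(conn => ({ conn, i }))
  )

  // take whichever connection is established first
  const { conn, i: winner } = await Promise.race(attempts)

  // abort the dials that lost the race instead of waiting on them
  controllers.forEach((c, i) => { if (i !== winner) c.abort() })
  // swallow the AbortErrors from the losing attempts so they don't
  // surface as unhandled rejections
  attempts.forEach(p => p.catch(() => {}))

  return conn
}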

@dirkmc (Author) commented Apr 22, 2019

Makes sense 👍

I will make the change in the code here and add a test to interface-transport

@alanshaw (Member) commented

I'm not sure I understand - it looks from the code like it will catch the AbortError and close the stream

On the sink side I think there's not much else you can do, it's the sink so it's the "end of the pipeline"...but the source is abortable with the same signal and I was referring to this in the above comment (I wrote it on my phone so it was perhaps too brief).

pipe(
  dataToSendToServer,
  duplexSocket,
  async source => { // our sink (receiver for server responses)
    for await (const chunk of source) {
      // do something with the response from the server...
    }
    // we now get here because duplexSocket.close() was called somewhere
    // else (connection mgr?) and we exit the loop.
    // this is a weird state because we might not have received all we wanted
    // from the server....but no error was thrown, what do we do now...?
  }
)

So, using the abort signal allows us to catch that error and know that the connection was aborted for some reason on our side of the connection.
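
Applied to the snippet above, that could look like this (a sketch assuming the it-pipe, abortable-iterator and abort-controller packages):

const abortable = require('abortable-iterator')
const AbortController = require('abort-controller')
const pipe = require('it-pipe')

const controller = new AbortController()

pipe(
  dataToSendToServer,
  duplexSocket,
  async source => {
    try {
      // wrapping the source means controller.abort() throws inside the loop
      for await (const chunk of abortable(source, controller.signal)) {
        // do something with the response from the server...
      }
    } catch (err) {
      if (err.code === 'ABORT_ERR') {
        // aborted on purpose by our side (connection manager, shutdown, ...)
        return
      }
      throw err
    }
  }
)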

This feels to me like a useful distinction to make. This has to be proven and we're on totally new ground here, so questioning the reasoning is really good. If we ever want to do reconnect because of sleep/network change/network disruption (which we do 😜) it'll be useful to not try to reconnect when the connection manager aborts the connection purposefully or when the node is being stopped (for example).

@jacobheun has the concrete use case of course ;)

@dirkmc (Author) commented Apr 22, 2019

@alanshaw I see what you're saying, that does seem like a useful distinction, thanks!

@ghost assigned jacobheun Apr 22, 2019
@dirkmc (Author) commented Apr 23, 2019

I added the ability to abort after connect. Depends on libp2p/interface-transport#49 (new tests in interface-transport)

I also made a change such that if the sink is destroyed while we are writing to it, we just end the stream. Does that make sense to you guys or do you think it should throw an error?

@alanshaw (Member) left a comment


There's a lot of code in src/listener.js managing multiple listeners. If every transport module now needs to deal with tracking multiple listeners, this needs to be refactored into something generic that can be reused. At the moment I'm finding it difficult to distinguish between code that's creating and closing them and code that's managing/tracking them.

At the moment I'm in favour of keeping the listen API simpler, like it was before (see libp2p/interface-transport#46 (comment) for reasoning), and dealing with the multiple listeners outside of the transport modules themselves to reduce repetition and complexity.

Inline review comments: README.md, package.json (outdated), src/listener.js (outdated), src/socket.js
@dirkmc (Author) commented Apr 29, 2019

@jacobheun what do you think about @alanshaw's suggestion that we revert back to listening on a single address? Should we revert the PR to add multiple listeners?

@jacobheun (Contributor) commented

@dirkmc per my comment libp2p/interface-transport#46 (comment), I agree with reverting at the interface and here. It sounds like everyone is in agreement so we can move forward with those reversions.

@dirkmc (Author) commented Apr 29, 2019

Ok, thanks @jacobheun, I will apply the reverts.

@dirkmc (Author) commented Apr 29, 2019

Depends on libp2p/interface-transport#51

@dirkmc (Author) commented Apr 29, 2019

@jacobheun @alanshaw I made a change such that if the sink is destroyed while we are writing to it, we just log a warning and end the stream. Does that make sense to you guys or do you think it should throw an error?

@jacobheun (Contributor) commented

It should throw, otherwise we might unknowingly continue trying to write to a destroyed connection.
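
For illustration, one way the sink could surface that instead of ending quietly (a hypothetical sketch over a net.Socket, not the PR's actual src/socket.js; the error code is made up):

// hypothetical sink over a net.Socket
const sink = async source => {
  for await (const chunk of source) {
    if (socket.destroyed) {
      // make the failure visible to the writer rather than silently dropping data
      throw Object.assign(new Error('the socket was destroyed mid-write'), { code: 'ERR_SOCKET_DESTROYED' })
    }
    if (!socket.write(chunk)) {
      // respect back-pressure from the underlying socket
      await new Promise(resolve => socket.once('drain', resolve))
    }
  }
  socket.end()
}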

@alanshaw (Member) commented

It should throw, otherwise we might unknowingly continue trying to write to a destroyed connection.

I think it can't throw, and I have been meaning to get back to this PR for a long time to explain properly why. Please do not merge until I can explain! 🙏

dirkmc and others added 17 commits August 19, 2019 13:43
BREAKING CHANGE: All places in the API that used callbacks are now replaced with async/await
* feat: support listen on array

* chore: fix missing deps

* chore: update interface version

* docs: update readme for array listen

* test: use port 0

* docs: add some more jsdocs

* chore: fix travis support for ip6 on linux

* refactor: clean up some code
License: MIT
Signed-off-by: Jacob Heun <jacobheun@gmail.com>
License: MIT
Signed-off-by: Jacob Heun <jacobheun@gmail.com>
@jacobheun (Contributor) commented

FYI, I have rebased this against the latest master.

@alanshaw mentioned this pull request Sep 8, 2019
@jacobheun (Contributor) commented

Closing in favor of #112

@jacobheun closed this Sep 11, 2019