
RemoteMultiaddr should be fully resolved #123

Closed
Stebalien opened this issue Mar 14, 2020 · 9 comments
Labels
kind/bug A bug in existing code (including security flaws)

Comments

@Stebalien
Member

The addr returned by RemoteMultiaddr should be fully resolved, even if the user dials with a /dns address.

Stebalien added the kind/bug label Mar 14, 2020
@marten-seemann
Collaborator

What is a /dns address? How do we resolve it?

@Stebalien
Member Author

DNS Address: /dns/some.domain/udp/127/quic

The QUIC transport already calls ResolveUDPAddr and gets a udp address when it dials the target. It just needs to take that resolved udp address and turn it back into a QUIC multiaddr.
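For concreteness, a rough sketch of that flow. This is only an illustration: quicMultiaddrFromUDP is a hypothetical stand-in for the transport's unexported toQuicMultiaddr helper, and the import paths follow the current go-multiaddr layout (at the time of this issue, manet lived in the separate go-multiaddr-net module).

```go
package main

import (
	"fmt"
	"net"

	ma "github.com/multiformats/go-multiaddr"
	manet "github.com/multiformats/go-multiaddr/net"
)

// quicMultiaddrFromUDP converts a resolved *net.UDPAddr back into an
// /ip4-or-ip6/.../udp/.../quic multiaddr. Hypothetical stand-in for the
// transport's unexported toQuicMultiaddr helper.
func quicMultiaddrFromUDP(addr *net.UDPAddr) (ma.Multiaddr, error) {
	udpMA, err := manet.FromNetAddr(addr) // e.g. /ip4/.../udp/443
	if err != nil {
		return nil, err
	}
	quic, err := ma.NewMultiaddr("/quic")
	if err != nil {
		return nil, err
	}
	return udpMA.Encapsulate(quic), nil
}

func main() {
	raddr, err := ma.NewMultiaddr("/dns/google.com/udp/443/quic")
	if err != nil {
		panic(err)
	}

	// DialArgs flattens the dialable part of the multiaddr into
	// network/host strings, e.g. ("udp", "google.com:443").
	network, host, err := manet.DialArgs(raddr)
	if err != nil {
		panic(err)
	}

	// The transport already performs this resolution step when dialing.
	udpAddr, err := net.ResolveUDPAddr(network, host)
	if err != nil {
		panic(err)
	}

	// Turning the resolved UDP address back into a multiaddr yields the
	// fully resolved value that RemoteMultiaddr should report.
	resolved, err := quicMultiaddrFromUDP(udpAddr)
	if err != nil {
		panic(err)
	}
	fmt.Println(resolved) // e.g. /ip4/.../udp/443/quic
}
```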

@marten-seemann
Collaborator

I'm not sure if resolving addresses works at all.

Creating a new /dns-QUIC-Multiaddr works:

ma.NewMultiaddr("/dns/google.com/udp/443/quic")

However, using that to dial a connection fails with invalid network: must be either udp4 or udp6.

Creating a udp4 multiaddr fails as well:

ma.NewMultiaddr("/dns/google.com/udp4/443/quic")

errors with failed to parse multiaddr "/dns/localhost/udp4/443/quic": unknown protocol udp4.

@Stebalien
Member Author

Stebalien commented Mar 15, 2020

udp4 is not a valid multiaddr component. /dns/google.com/udp/443/quic would resolve to /ip4/..../udp/443/quic. You need to take the result of calling ResolveUDPAddr in transport.Dial and call toQuicMultiaddr on it.

@marten-seemann
Collaborator

/dns/google.com/udp/443/quic would resolve to /ip4/..../udp/443/quic.

Not really. manet.DialArgs gives me udp and google.com:443. I can then pass this into ResolveUDPAddr to resolve that to a net.UDPAddr. This still doesn't tell us whether it's a udp4 or udp6 address.
We could use the resolved IP address to guess whether it is IPv4 or IPv6, but that will be incorrect in some corner cases; see https://stackoverflow.com/questions/22751035/golang-distinguish-ipv4-ipv6.
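A minimal sketch of the ambiguity being described; the To4 check is the usual heuristic, with the corner cases noted above:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// manet.DialArgs on /dns/google.com/udp/443/quic yields ("udp", "google.com:443"):
	// the network string carries no 4/6 suffix because the address family is
	// unknown until the name has been resolved.
	udpAddr, err := net.ResolveUDPAddr("udp", "google.com:443")
	if err != nil {
		panic(err)
	}

	// Heuristic: an IP that fits in 4 bytes is treated as IPv4. This can
	// misclassify corner cases such as IPv4-mapped IPv6 addresses.
	if udpAddr.IP.To4() != nil {
		fmt.Println("looks like udp4:", udpAddr)
	} else {
		fmt.Println("looks like udp6:", udpAddr)
	}
}
```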

I'm wondering: why is it each transport's responsibility to resolve the DNS address? It seems like every transport has to implement a very similar code path to do that. Wouldn't it make more sense to resolve the address before passing it to Dial?

@Stebalien
Member Author

It's not generally each transport's job to resolve the address; the swarm usually handles that internally. That's the other half of this bug.

However, the quic transport should either reject these addresses (probably the better solution) or resolve them.

@raulk
Member

raulk commented Mar 16, 2020

IMO:

  • Transports should be responsible for translating the addressing schemes they support. Either implicitly during the outbound dial, or by exposing a ResolveAddr(multiaddr) method that the network/swarm can invoke.
  • RemoteMultiaddr() should return the actual addr we're connected to. The remote endpoint could be DNS load-balanced; returning a non-resolved address is insufficient. If we're recording an instrumentation trace to analyse after the fact, knowing the actual IP address is more useful than the hostname.

I would recommend that go-libp2p-quic-transport pipe all outbound dials through https://github.com/multiformats/go-multiaddr-dns to perform resolution. You can then dial the resulting multiaddr as you're doing now (https://godoc.org/github.com/multiformats/go-multiaddr-dns#Resolve) and record it in the state of the connection, to be returned when RemoteMultiaddr() is called.
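A minimal sketch of that suggestion, assuming go-multiaddr-dns's package-level Resolve:

```go
package main

import (
	"context"
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
	madns "github.com/multiformats/go-multiaddr-dns"
)

func main() {
	raddr, err := ma.NewMultiaddr("/dns/google.com/udp/443/quic")
	if err != nil {
		panic(err)
	}

	// Resolve expands the /dns component into one or more /ip4 or /ip6
	// multiaddrs, leaving the trailing /udp/443/quic intact.
	resolved, err := madns.Resolve(context.Background(), raddr)
	if err != nil {
		panic(err)
	}

	for _, a := range resolved {
		fmt.Println(a) // e.g. /ip4/.../udp/443/quic
	}

	// The transport would dial one of these and store it on the connection,
	// so that RemoteMultiaddr() returns the fully resolved address.
}
```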

@marten-seemann
Collaborator

@Stebalien Was this issue resolved by #131?

@Stebalien
Member Author

Yep! Thanks!
