
routing: multi-part mpp send #3967

Merged: 13 commits merged into lightningnetwork:master from mpp-send on Apr 10, 2020
Conversation

joostjager
Contributor

@joostjager joostjager commented Jan 28, 2020

This PR enables multi-part payment sending. If lnd isn't able to find a single route that can carry the full payment amount, it will recursively split the payment into smaller partial payments that are all collected by the receiver. Once the receiver has htlcs pending that together add up to the full amount, it settles them all, thereby revealing the preimage to the sender.

The minimum partial payment size is currently fixed at 10k sats (unless the remainder is less than that).

Multi-part sending is controlled via a new payment rpc parameter max_parts. This parameter sets the maximum number of partial payments that may be sent. If set to one, there will be no splitting.

The parameter is also exposed on lncli. To send a multi-part payment, use:

lncli payinvoice --max_parts 3 <payreq>
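For illustration, here is a minimal, self-contained Go sketch of the divide-and-conquer idea described above: recursively halve the amount until a route is found for every part, respecting the 10k sat minimum shard size and the max_parts budget. The names and constants (findRoute, splitPayment, the 40k route cap) are hypothetical stand-ins, not lnd's actual internals.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical minimum partial payment size (sats); stand-in value.
const minShardAmt = 10_000

// findRoute simulates pathfinding: it fails whenever no single route
// can carry amt. Here the largest usable route is fixed at 40k sats.
func findRoute(amt int64) error {
	if amt > 40_000 {
		return errors.New("insufficient capacity")
	}
	return nil
}

// splitPayment recursively halves the amount until a route is found
// for every part, or the part budget (maxParts) runs out.
func splitPayment(amt int64, maxParts int) ([]int64, error) {
	if maxParts < 1 {
		return nil, errors.New("no parts left")
	}
	if findRoute(amt) == nil {
		return []int64{amt}, nil
	}

	// Don't split below the minimum shard size, and splitting needs
	// a budget of at least two parts.
	if amt/2 < minShardAmt || maxParts < 2 {
		return nil, errors.New("unable to split further")
	}

	left, err := splitPayment(amt/2, maxParts-1)
	if err != nil {
		return nil, err
	}
	right, err := splitPayment(amt-amt/2, maxParts-len(left))
	if err != nil {
		return nil, err
	}
	return append(left, right...), nil
}

func main() {
	parts, err := splitPayment(100_000, 16)
	fmt.Println(parts, err) // [25000 25000 25000 25000] <nil>
}
```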

The output from sendpayment/payinvoice/trackpayment/listpayments can be difficult to interpret for mpp payments. A visualization of the payment process can be created using https://github.com/joostjager/lnd-tools/tree/master/timingdiagram

[image: example timing diagram of an mpp payment]

Fixes #3940.

@joostjager
Contributor Author

Currently running into an endless loop because of #3691

This is probably more important to fix with mpp because channels are spent down further.

@joostjager
Contributor Author

Todo: track fee limit at the payment level

@Roasbeef Roasbeef added this to the 0.10.0 milestone Feb 26, 2020
@joostjager joostjager force-pushed the mpp-send branch 2 times, most recently from 160da22 to af386f9 on March 9, 2020 07:29
@Roasbeef
Member

Rebase to remove the first merged PR?

@Roasbeef
Member

Also, rebase on top of the latest version of the MPP shard send PR.

@joostjager
Contributor Author

Rebased. But this PR isn't ready for review yet, except to observe the general idea of the mpp splitting.

@joostjager joostjager force-pushed the mpp-send branch 7 times, most recently from e6dddd4 to 8ecc22b on March 17, 2020 20:29
@joostjager
Contributor Author

Gets into an endless loop if the receiver doesn't have enough inbound capacity. Will repro in the integrated routing test. Thanks for bringing up the scenario @Roasbeef

joostjager and others added 13 commits April 9, 2020 08:20

- With mpp it isn't possible anymore for findPath to determine that there isn't enough local bandwidth, because the full payment amount isn't known at that point. In a follow-up, this payment outcome can be reintroduced on a higher level (payment lifecycle).
- Preparation for more test coverage of the payment session. The function findPath now has the call signature of the former findPathInternal function.
- Cover the payment session in the integrated routing test as preparation for executing the upcoming mpp split logic as part of the test.
- Modify the payment session to launch additional pathfinding attempts for lower amounts. If a single-shot payment isn't possible, the goal is to try to complete the payment using multiple htlcs. In previous commits, the payment lifecycle has been prepared to deal with partial-amount routes returned from the payment session; it will query for additional shards if needed (a sketch of this shard loop follows the list). Additionally, a new rpc payment parameter is added that controls the maximum number of shards that will be used for the payment.
- It can happen that the receiver times out some htlcs of the set if it took too long to complete. But because the sender's mission control is now updated, it is worth retrying those shards. (Co-authored-by: Johan T. Halseth <johanth@gmail.com>)
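As a rough illustration of that shard loop, here is a sketch with hypothetical names; the real payment lifecycle in lnd is considerably more involved and handles shard results asynchronously.

```go
package main

import (
	"errors"
	"fmt"
)

// session stands in for the payment session: it may return a route
// for less than the requested amount, modelling the mpp split logic.
type session struct{ maxShardAmt int64 }

func (s *session) RequestRoute(remaining int64) (int64, error) {
	if remaining <= 0 {
		return 0, errors.New("nothing left to pay")
	}
	amt := remaining
	if amt > s.maxShardAmt {
		amt = s.maxShardAmt // partial-amount route
	}
	return amt, nil
}

func main() {
	s := &session{maxShardAmt: 30_000}

	// The lifecycle queries for additional shards until the full
	// amount is in flight.
	remaining := int64(100_000)
	var shards []int64
	for remaining > 0 {
		amt, err := s.RequestRoute(remaining)
		if err != nil {
			fmt.Println("payment failed:", err)
			return
		}
		shards = append(shards, amt) // launch an htlc for this shard
		remaining -= amt
	}
	fmt.Println("launched shards:", shards) // [30000 30000 30000 10000]
}
```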
@joostjager
Contributor Author

Rebased

@halseth
Contributor

halseth commented Apr 9, 2020

Itest flake is caused by async hand-off of the htlc to the link and its interaction with the next pathfinding round.

Should we create an issue to track this? The sync handoff is also assumed by the payment loop, so we should probably fix this at one point...

// don't have enough bandwidth to carry the payment. New
// bandwidth hints are queried for every new path finding
// attempt, because concurrent payments may change balances.
bandwidthHints, err := p.getBandwidthHints()
Contributor Author

Moving the comment here, to prevent a very long list of "top-level" comments in github.

> Should we create an issue to track this? The sync handoff is also assumed by the payment loop, so we should probably fix this at one point...

I don't know if it is really an issue or merely an optimization, although the boundary between the two is vague. I did think a bit about solutions. Maybe it is possible to hand off the htlc to the link directly, cutting out the switch and mailbox. Or does the switch serve an essential purpose for locally initiated payments?

Contributor

If it is something we don't want to fix, then the whole launch-then-collect logic in the payment loop can be greatly simplified, I think.

I agree it is an optimization, since the pathfinding will just be retried, but it would be nice to move the assumptions to either extreme: assume the bandwidth hints tell you nothing about in-flight shards (and we could pass in a list of active shards, which could also help pathfinding on non-local channels), or assume they tell you everything you need to know (the current assumption).
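A tiny sketch of the first extreme (all names and maps are hypothetical stand-ins): subtract the amounts committed to in-flight shards from the bandwidth hints before the next pathfinding round, so channels aren't over-committed.

```go
package main

import "fmt"

// applyInFlight reduces each channel's bandwidth hint by the amount
// already committed to in-flight shards. Keys are short channel IDs,
// values are msat.
func applyInFlight(hints, inFlight map[uint64]int64) map[uint64]int64 {
	adjusted := make(map[uint64]int64, len(hints))
	for chanID, bandwidth := range hints {
		remaining := bandwidth - inFlight[chanID]
		if remaining < 0 {
			remaining = 0
		}
		adjusted[chanID] = remaining
	}
	return adjusted
}

func main() {
	hints := map[uint64]int64{1: 50_000_000, 2: 80_000_000}
	inFlight := map[uint64]int64{1: 45_000_000}
	fmt.Println(applyInFlight(hints, inFlight)) // map[1:5000000 2:80000000]
}
```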

Contributor

> Maybe it is possible to hand off the htlc to the link directly, cutting out the switch and mailbox. Or does the switch serve an essential purpose for locally initiated payments?

Yeah, I didn't look into how to actually solve it, but I was thinking of something like an ACK channel to wait on.
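For what it's worth, the ACK-channel idea could look roughly like this (a minimal sketch with hypothetical types; the actual switch/mailbox plumbing is more complex):

```go
package main

import "fmt"

// htlcAdd carries an ack channel so the caller can block until the
// link has actually processed the htlc, making the hand-off
// synchronous.
type htlcAdd struct {
	amtMsat int64
	ack     chan struct{}
}

// link consumes htlc adds and signals completion via the ack channel.
func link(in <-chan htlcAdd) {
	for h := range in {
		// ... commit the htlc to the channel state here ...
		close(h.ack) // bandwidth is now updated
	}
}

func main() {
	in := make(chan htlcAdd)
	go link(in)

	add := htlcAdd{amtMsat: 10_000_000, ack: make(chan struct{})}
	in <- add
	<-add.ack // safe to start the next pathfinding round
	fmt.Println("htlc handed off synchronously")
}
```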

Contributor Author

Issue created: #4181

Contributor

@halseth halseth left a comment

LGTM 🚂 Only lingering question is the htlcs vs shards naming, but I don't have a strong opinion about it.

@@ -1624,6 +1624,10 @@ type LightningPayment struct {
// understand this new onion payload format, then the payment will
// fail.
DestCustomRecords record.CustomSet

// MaxHtlcs is the maximum number of partial payments that may be use to
Contributor

nit: s/use/used

@Roasbeef
Member

> Only lingering question is the htlcs vs shards naming, but I don't have a strong opinion about it.

Yeah, even during offline discussion I find myself having to say "channel update max htlc" in order to disambiguate.

Member

@Roasbeef Roasbeef left a comment

LGTM 💍

I think this is ready to land; we can iron out the terminology (max htlc vs max splits vs max shards, etc.) in the RC phase. I'll rebase my follow-up PR (to raise the invoice limit).

@@ -46,6 +46,13 @@ var (
"<hex_value>,.. For example: --data 3438382=0a21ff. " +
"Custom record ids start from 65536.",
}

maxHtlcsFlag = cli.UintFlag{
Name: "max_htlcs",
Member

max_splits could also be another option here.

@Roasbeef Roasbeef merged commit 1354a46 into lightningnetwork:master Apr 10, 2020
Roasbeef added a commit to Roasbeef/lnd that referenced this pull request Apr 10, 2020
In this commit, we remove the restriction surrounding the largest invoices that we'll allow a user to create. After lightningnetwork#3967 has landed, users will be able to send in _aggregate_ a payment larger than the current max HTLC size limit in the network. As a result, we can just treat that value as the system's MTU, and allow users to request payments in multiples of that MTU value.

A follow-up to this PR at a later time will also allow wumbo _channels_. However, that requires us to tweak the way we scale CSV values, as post wumbo, there is no true channel size limit, only the _local_ limit of a given node. We also need to implement a way for nodes to signal to other nodes their accepted max channel size.
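Illustrative arithmetic only (the constants below are assumptions, not lnd's actual values): treating the well-known pre-wumbo htlc cap as the MTU and allowing, say, 16 shards per payment would put the invoice cap at 16 times the MTU.

```go
package main

import "fmt"

func main() {
	// Assumed figures: the 2^32 - 1 msat htlc cap (~0.043 BTC) and a
	// hypothetical shard budget of 16 parts per payment.
	const maxHtlcMsat = 1<<32 - 1
	const maxShards = 16
	fmt.Printf("max invoice: %d msat\n", int64(maxHtlcMsat)*maxShards)
}
```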
@joostjager joostjager changed the title routing: multi-shard mpp send routing: multi-part mpp send Apr 27, 2020
matheusd pushed a commit to matheusd/dcrlnd that referenced this pull request Jul 14, 2020
Labels
mpp · payments (Related to invoices/payments) · routing

Projects
None yet

Development
Successfully merging this pull request may close these issues:
routing: implement basic divide-and-conquer payment splitting for mpp/amp

4 participants