routing: multi-part mpp send #3967
Conversation
Currently running into an endless loop because of #3691. This is probably more important to fix with mpp because channels are spent down further.
Todo: track fee limit on the payment level
Force-pushed from 160da22 to af386f9.
Rebase to remove the first merged PR?
Also to rebase on top of the latest version of the MPP shard send PR.
Rebased. But this PR isn't ready for review, except to observe the general idea for mpp splitting.
Force-pushed from e6dddd4 to 8ecc22b.
Gets into an endless loop if the receiver doesn't have enough inbound capacity. Will repro in the integrated routing test. Thanks for bringing up the scenario @Roasbeef
With mpp, it is no longer possible for findPath to determine that there isn't enough local bandwidth, because the full payment amount isn't known at that point. In a follow-up, this payment outcome can be reintroduced at a higher level (the payment lifecycle).
Preparation for more test coverage of the payment session. The function findPath now has the call signature of the former findPathInternal function.
Cover the payment session in the integrated routing test, in preparation for executing the upcoming mpp split logic as part of the test.
Modifies the payment session to launch additional pathfinding attempts for lower amounts. If a single-shot payment isn't possible, the goal is to complete the payment using multiple htlcs. In previous commits, the payment lifecycle has been prepared to deal with partial-amount routes returned from the payment session; it will query for additional shards if needed. Additionally, a new RPC payment parameter is added that controls the maximum number of shards that will be used for the payment.
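In rough terms, the splitting behaves like the Go sketch below. Everything here (`requestRoute`, `findPath`, `minShardAmt`) is an illustrative stand-in rather than lnd's actual API, and the real payment session is considerably more involved:

```go
// Illustrative sketch only: the payment session halves the requested
// amount whenever pathfinding fails, until a route is found or the
// amount drops below the minimum shard size. Not lnd's actual code.
package mppsketch

import "errors"

const minShardAmt = 10_000 // hypothetical minimum shard size in sats

var errNoRoute = errors.New("no route found")

// findPath stands in for the real pathfinding call. It returns the hops
// of a route that can carry amt, or an error if none exists.
func findPath(amt int64) ([]uint64, error) {
	// Real pathfinding against the channel graph would happen here.
	return nil, errNoRoute
}

// requestRoute tries to find a route for maxAmt, halving the amount on
// each failure. The payment lifecycle would call this repeatedly for the
// remaining amount until the full payment is covered or the maximum
// number of shards is reached.
func requestRoute(maxAmt int64) (route []uint64, amt int64, err error) {
	for amt = maxAmt; amt >= minShardAmt; amt /= 2 {
		if route, err = findPath(amt); err == nil {
			return route, amt, nil
		}
	}
	return nil, 0, errNoRoute
}
```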
It can happen that the receiver times out some htlcs of the set if it took too long to complete. But because the sender's mission control is now updated, it is worth retrying those shards.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
Rebased
Should we create an issue to track this? The sync handoff is also assumed by the payment loop, so we should probably fix this at one point...
```go
// don't have enough bandwidth to carry the payment. New
// bandwidth hints are queried for every new path finding
// attempt, because concurrent payments may change balances.
bandwidthHints, err := p.getBandwidthHints()
```
Moving the comment here to prevent a very long list of "top-level" comments in GitHub.

> Should we create an issue to track this? The sync handoff is also assumed by the payment loop, so we should probably fix this at one point...
I don't know if it is really an issue or merely an optimization, although the boundary between the two is vague. I did think a bit about solutions. Maybe it is possible to hand off the htlc to the link directly and cut out the switch and mailbox. Or does the switch serve an essential purpose for locally initiated payments?
If it is something we don't want to fix, then the whole launch-then-collect logic in the payment loop can be greatly simplified, I think.
I agree it is an optimization, since the pathfinding will just be retried, but it would be nice to move the assumptions to either extreme: assume the bandwidth hints tell you nothing about in-flight shards (and we could pass in a list of active shards, which can also help pathfinding on non-local channels), or assume they tell you everything you need to know (the current assumption).
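Purely to illustrate the first extreme (hints that say nothing about in-flight shards), a deduction step could be bolted on before pathfinding, roughly like the hypothetical sketch below; `adjustHints` and both maps are made-up names, not anything in lnd:

```go
// Hypothetical illustration only: start from the raw bandwidth hints and
// subtract the amounts of in-flight shards per channel, so pathfinding
// doesn't over-commit a local channel. Not lnd's actual implementation.
package mppsketch

// adjustHints returns bandwidth hints (msat keyed by channel ID) with the
// amounts of active shards deducted, clamped at zero.
func adjustHints(hints, activeShards map[uint64]int64) map[uint64]int64 {
	adjusted := make(map[uint64]int64, len(hints))
	for chanID, bw := range hints {
		bw -= activeShards[chanID] // a missing key yields zero
		if bw < 0 {
			bw = 0
		}
		adjusted[chanID] = bw
	}
	return adjusted
}
```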
> Maybe it is possible to hand off the htlc to the link directly, cut out the switch and mailbox. Or does the switch serve an essential purpose for locally initiated payments?
Yeah, I didn't look into how to actually solve it, but I was thinking of something like an ACK channel to wait on.
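For illustration, the ACK-channel idea could look roughly like the sketch below; `htlcPacket`, its `ack` field, and `handOff` are hypothetical names, not lnd's actual switch interface:

```go
// Hypothetical sketch of the ACK-channel idea: the sender hands the htlc
// to the link and blocks until the link confirms it has taken ownership,
// instead of assuming a synchronous handoff. Not lnd's actual code.
package mppsketch

import (
	"errors"
	"time"
)

type htlcPacket struct {
	// ack receives the link's accept/reject decision exactly once.
	ack chan error
}

// handOff forwards the packet to the link and waits for an explicit
// acknowledgement before returning, so the caller knows the htlc was
// accepted before it launches the next shard.
func handOff(linkIn chan<- *htlcPacket, pkt *htlcPacket) error {
	pkt.ack = make(chan error, 1)
	linkIn <- pkt

	select {
	case err := <-pkt.ack:
		return err
	case <-time.After(30 * time.Second):
		return errors.New("timed out waiting for link ack")
	}
}
```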
Issue created: #4181
LGTM 🚂 Only lingering question is the `htlcs` vs `shards` naming, but I don't have a strong opinion about it.
```diff
@@ -1624,6 +1624,10 @@ type LightningPayment struct {
	// understand this new onion payload format, then the payment will
	// fail.
	DestCustomRecords record.CustomSet

	// MaxHtlcs is the maximum number of partial payments that may be use to
```
nit: s/use/used
Yeah, even during offline discussion I find myself having to say "channel update max htlc" in order to disambiguate.
LGTM 💍
I think this is ready to land; we can iron out the terminology (max htlc vs max splits vs max shards, etc.) in the RC phase. I'll rebase my follow-up PR (to raise the invoice limit).
```diff
@@ -46,6 +46,13 @@ var (
		"<hex_value>,.. For example: --data 3438382=0a21ff. " +
		"Custom record ids start from 65536.",
	}

	maxHtlcsFlag = cli.UintFlag{
		Name: "max_htlcs",
```
`max_splits` could also be another option here.
In this commit, we remove the restriction surrounding the largest invoices that we'll allow a user to create. After lightningnetwork#3967 has landed, users will be able to send in _aggregate_ a payment larger than the current max HTLC size limit in the network. As a result, we can just treat that value as the system's MTU, and allow users to request payments in multiples of that MTU value. A follow-up to this PR at a later time will also allow wumbo _channels_. However, that requires us to tweak the way we scale CSV values, as post-wumbo there is no true channel size limit, only the _local_ limit of a given node. We also need to implement a way for nodes to signal to other nodes their accepted max channel size.
This PR enables multi-part payment sending. If `lnd` isn't able to find a single route that can carry the full payment amount, it will recursively split it into smaller partial payments that are all collected by the receiver. Once the receiver has htlcs pending that together add up to the full amount, it will settle them all and thereby reveal the preimage to the sender. The minimum partial payment size is currently fixed at 10k sats (unless the remainder is less than that).
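The receiver-side behavior described above can be sketched roughly as follows; all names are made up for illustration, and this is not lnd's actual invoice registry code:

```go
// Hypothetical sketch of the receiver side: hold incoming partial htlcs
// until the pending set covers the invoice amount, then settle them all
// at once, revealing the preimage to the sender.
package mppsketch

type htlc struct {
	amt int64 // partial payment amount in sats
}

type invoiceState struct {
	totalAmt int64  // invoice amount to be covered by the htlc set
	pending  []htlc // partial htlcs accepted so far, held unsettled
}

// addHTLC accepts a new partial htlc and reports whether the pending set
// now adds up to the full invoice amount, at which point the caller would
// settle all htlcs in the set together.
func (s *invoiceState) addHTLC(h htlc) (settleAll bool) {
	s.pending = append(s.pending, h)

	var sum int64
	for _, p := range s.pending {
		sum += p.amt
	}
	return sum >= s.totalAmt
}
```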
Multi-part sending is controlled via a new payment RPC parameter, `max_parts`. This parameter indicates the maximum number of partial payments that may be sent. If set to one, there will be no splitting. The parameter is also exposed on `lncli`. To send a multi-part payment, use:

`lncli payinvoice --max_parts 3 <payreq>`
The output from `sendpayment`/`payinvoice`/`trackpayment`/`listpayments` can be difficult to interpret for mpp payments. A visualization of the payment process can be created using https://github.com/joostjager/lnd-tools/tree/master/timingdiagram

Fixes #3940.