
Incl tx fee when calcing inbound+outbound HTLC limits on channels #577

Conversation

@valentinewallace (Contributor) commented Apr 8, 2020

When we receive an inbound HTLC from a peer on an inbound channel,
the peer needs to be able to pay for the additional commitment tx
fees as well.

When we're sending an outbound HTLC on an outbound channel, we need
to be able to pay for the additional commitment tx fees as well.
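The check described above can be sketched roughly as follows. This is a simplified illustration with assumed constants and helper names, not the PR's actual code:

```rust
// Rough sketch of the receiver-side affordability check this PR adds.
// The weight constants and function names here are illustrative assumptions.
const COMMITMENT_TX_BASE_WEIGHT: u64 = 724; // assumed weight units
const COMMITMENT_TX_WEIGHT_PER_HTLC: u64 = 172;

// Fee (in msat) of a commitment tx carrying `num_htlcs` HTLC outputs.
fn commit_tx_fee_msat(feerate_per_kw: u64, num_htlcs: u64) -> u64 {
    (COMMITMENT_TX_BASE_WEIGHT + num_htlcs * COMMITMENT_TX_WEIGHT_PER_HTLC)
        * feerate_per_kw / 1000 * 1000
}

// On receiving an inbound HTLC on an inbound (counterparty-funded) channel:
// the funder must still afford the commitment tx fee with this HTLC added,
// while keeping their channel reserve.
fn remote_can_afford_htlc(
    pending_remote_value_msat: u64, // funder's balance incl. pending HTLCs
    htlc_amount_msat: u64,
    chan_reserve_msat: u64,
    feerate_per_kw: u64,
    num_pending_htlcs: u64,
) -> bool {
    let fee = commit_tx_fee_msat(feerate_per_kw, num_pending_htlcs + 1); // +1 for this HTLC
    pending_remote_value_msat
        .checked_sub(htlc_amount_msat)
        .and_then(|v| v.checked_sub(chan_reserve_msat))
        .map_or(false, |v| v >= fee)
}
```

The symmetric check applies on the sending side when we are the funder of an outbound channel.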

Closes #548.

TODOs:

@valentinewallace valentinewallace force-pushed the fix-onchain-fee-check-htlcs branch 4 times, most recently from 23a8619 to d7d24e4 Compare April 13, 2020 16:55
@valentinewallace valentinewallace force-pushed the fix-onchain-fee-check-htlcs branch 2 times, most recently from 7aa16f0 to 0d21617 Compare April 14, 2020 15:41
@valentinewallace valentinewallace force-pushed the fix-onchain-fee-check-htlcs branch 2 times, most recently from 618f79c to 265a82c Compare April 17, 2020 00:09
@TheBlueMatt (Collaborator) commented:
Oops, Travis found a bug in your update for chanmon_consistency.rs. This fixes it:

diff --git a/fuzz/src/chanmon_consistency.rs b/fuzz/src/chanmon_consistency.rs
index 4659d0afc..e31fea516 100644
--- a/fuzz/src/chanmon_consistency.rs
+++ b/fuzz/src/chanmon_consistency.rs
@@ -641,12 +641,12 @@ pub fn do_test<Out: test_logger::Output>(data: &[u8], out: Out) {
                                });
                                for event in events.drain(..) {
                                        match event {
-                                               events::Event::PaymentReceived { payment_hash, payment_secret, .. } => {
+                                               events::Event::PaymentReceived { payment_hash, payment_secret, amt } => {
                                                        if claim_set.insert(payment_hash.0) {
                                                                if $fail {
                                                                        assert!(nodes[$node].fail_htlc_backwards(&payment_hash, &payment_secret));
                                                                } else {
-                                                                       assert!(nodes[$node].claim_funds(PaymentPreimage(payment_hash.0), &payment_secret, 5_000_000));
+                                                                       assert!(nodes[$node].claim_funds(PaymentPreimage(payment_hash.0), &payment_secret, amt));
                                                                }
                                                        }
                                                },

@valentinewallace valentinewallace force-pushed the fix-onchain-fee-check-htlcs branch 2 times, most recently from 8571854 to f836e75 Compare April 18, 2020 23:00
@TheBlueMatt (Collaborator) left a comment:

Haven't had a chance to review the tests, but this looks good to me. Is there a reason it's still a draft?

@valentinewallace valentinewallace marked this pull request as ready for review April 21, 2020 16:51
@valentinewallace (Contributor, author) commented:

> Haven't had a chance to review the tests, but this looks good to me. Is there a reason its still a draft?

no reason, thx!

@valentinewallace (Contributor, author) commented:

@TheBlueMatt @jkczyz I updated the logic in the latest commit but haven't updated the tests to match yet. Could you check my logic and then I'll update the tests?

@codecov (bot) commented Apr 28, 2020

Codecov Report

Merging #577 into master will increase coverage by 0.01%.
The diff coverage is 95.48%.


@@            Coverage Diff             @@
##           master     #577      +/-   ##
==========================================
+ Coverage   91.26%   91.27%   +0.01%     
==========================================
  Files          35       35              
  Lines       20918    21122     +204     
==========================================
+ Hits        19090    19279     +189     
- Misses       1828     1843      +15     
Impacted Files                          Coverage            Δ
lightning/src/ln/channelmanager.rs      85.45% <90.00%>     -0.01% ⬇️
lightning/src/ln/channel.rs             86.72% <91.89%>     +0.09% ⬆️
lightning/src/ln/functional_tests.rs    97.11% <97.67%>     +<0.01% ⬆️
lightning/src/ln/peer_handler.rs        55.00% <0.00%>      +0.85% ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update f08d610...17ccab4.

@ariard left a comment:

> TODO: is the reason this added HTLC's amount can't count towards the receiver's

I think that's preventive, to avoid the initiator going under reserve if an HTLC is cancelled at the same time another HTLC is added.

@valentinewallace (Contributor, author) commented:

I was thinking more about #577 (comment); my current thinking is that we might be able to do the following:

* Generalize the closure to handle the `!chan.get().is_usable()` case, taking the error code.

* Move that check into `Channel::update_add_htlc`

* Make the closure produce an entire `PendingHTLCStatus`, not just the `OnionErrorPacket`

This may also address #577 (comment).

I'm not positive this would work as there may be some subtlety that I'm missing. But that's exactly why this could use some refactoring, IMHO. I think the advantage is that it keeps channel checks and updating pending_forward_info inside Channel rather than split between Channel and ChannelManager.

@TheBlueMatt Would welcome your insight on this.

Added in b36589c. It seems cleaner to me :)


pub fn update_add_htlc<F>(&mut self, msg: &msgs::UpdateAddHTLC, mut pending_forward_state: PendingHTLCStatus, create_pending_htlc_status: F) -> Result<(), ChannelError>
where F: for<'a, 'b> Fn(&'a Self, &'b PendingHTLCStatus, u16) -> Option<PendingHTLCStatus> {
if !self.is_usable() {
@TheBlueMatt (Collaborator) commented:

This is redundant with the checks over the next few lines (and we shouldn't be failing the HTLC in this case, we should be failing the channel because our counterparty is misbehaving).

@valentinewallace (Contributor, author) commented Jun 2, 2020:

Oops @jkczyz, I accidentally deleted your comment, lol. It said: "FWIW, I think the behavior here is the same as it had been before. Though maybe that wasn't desirable."

@TheBlueMatt It's not redundant because self.is_usable will return false if either party has sent a shutdown. If only we have sent a shutdown, we do indeed just want to fail the HTLC (which is why the htlc_fail_async_shutdown test exists.) Then the next check checks whether the remote has sent a shutdown, in which case we do want to fail the channel because that's a spec violation.

@valentinewallace (Contributor, author) commented Jun 2, 2020:

So basically I'm saying I think the logic is right here: fcf5072. Though it might be clearer if we changed the is_usable check to more specifically check whether only we have sent a shutdown.

@TheBlueMatt (Collaborator) commented:

Yea, a comment would be great here, at least, or just swap it for the explicit checks.

@jkczyz (Contributor) left a comment:

Need to take a closer look at the test changes, but wanted to publish these draft comments that I had written earlier.

@lightningdevkit lightningdevkit deleted a comment from jkczyz Jun 2, 2020
Comment on lines 1810 to 1811
// Note that if we get None here, it's because we're already failing the HTLC, i.e. its
// status is already set to failing.
@jkczyz (Contributor) commented:

This comment is out-of-date now that create_pending_htlc_status no longer returns an Option.

In my previous suggestion that removed Option, I mentioned it may not be as intuitive to update pending_forward_state in this manner, though. A more intuitive approach could be to check against pending_forward_state for PendingHTLCStatus::Forward at each call site.

The disadvantage of this alternative is some code duplication. The advantage is it would be more explicit by clarifying when pending_forward_state is changed. So that may be a better approach as I think the comment would also no longer be necessary.

That said, you may have to make other fields public. And, there may be some borrow issues that make it less clean. So, up to you if you want to try it that way.

@valentinewallace (Contributor, author) commented Jun 8, 2020:

I guess I'm in favor of not adding more reasons for Channel to know about the PendingHTLCStatus struct, and I think it'd be better to work towards that struct being private to ChannelManager to make this code more easily testable. I'm also pretty OK with the intuitiveness of the current code, plus the increased conciseness is nice.
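A minimal sketch of the callback pattern being discussed in this thread. All types here are simplified stand-ins for LDK's, purely for illustration:

```rust
// Simplified stand-ins for LDK's types; illustrative only, not the PR's code.
#[derive(Debug, PartialEq)]
enum PendingHTLCStatus {
    Forward,
    Fail(u16), // onion error code, e.g. 0x1000|7
}

struct Channel {
    is_usable: bool,
}

impl Channel {
    // The channel runs its checks and, on a violation, asks the caller-supplied
    // closure to turn the pending forward state into a failure status, keeping
    // knowledge of how to build that status on the ChannelManager side.
    fn update_add_htlc<F>(&self, mut pending: PendingHTLCStatus, create_status: F) -> PendingHTLCStatus
    where
        F: Fn(&Self, PendingHTLCStatus, u16) -> PendingHTLCStatus,
    {
        if !self.is_usable {
            pending = create_status(self, pending, 0x1000 | 7);
        }
        pending
    }
}

// ChannelManager-side closure: only a Forward becomes a Fail; an HTLC that is
// already failing keeps its original failure.
fn demo() -> PendingHTLCStatus {
    let chan = Channel { is_usable: false };
    chan.update_add_htlc(PendingHTLCStatus::Forward, |_chan, pending, code| match pending {
        PendingHTLCStatus::Forward => PendingHTLCStatus::Fail(code),
        already_failing => already_failing,
    })
}
```

The design choice debated above is whether the channel should update the pending state via the closure (concise) or each call site should check for `PendingHTLCStatus::Forward` explicitly (more explicit, more duplication).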

@valentinewallace (Contributor, author) commented:

Found a better way of testing so may want to hold off on further testing review til it's updated.

@jkczyz (Contributor) commented Jun 4, 2020:

> Found a better way of testing so may want to hold off on further testing review til it's updated.

Cool! Was just looking through the tests and thinking if there were any other ways to go about it. Let me know if you want to run anything by me.

@valentinewallace valentinewallace force-pushed the fix-onchain-fee-check-htlcs branch 3 times, most recently from 54189c9 to 5dffe87 Compare June 5, 2020 20:12
@TheBlueMatt (Collaborator) left a comment:

Looks great. A few nits and would be nice to get a bit more test coverage, but looks good and I'm still a big fan of the new fuzz coverage.

// state yet, implying our counterparty is trying to route payments
// over the channel back to themselves (cause no one else should
// know the short_id is a lightning channel yet). We should have no
// problem just calling this unknown_next_peer, as above (0x4000|10).
@TheBlueMatt (Collaborator) commented:

nit: the "as above" note here is a copy-paste bug (and why move this out of the else block where it makes the most sense?)

@ariard commented:

We return an IgnoreError in get_channel_update for this, but instead we should fail the channel, as it's likely a hint of a buggy peer. Note: AFAICT, it's currently a spec void.

@valentinewallace (Contributor, author) commented Jun 8, 2020:

@ariard Hm, since it's a spec void, I'd think we'd default to giving them the benefit of the doubt & not closing?

}

if !self.channel_outbound {
// `+1` for this HTLC, `2 *` and `+1` fee spike buffer we keep for the remote (this deviates from the spec
@TheBlueMatt (Collaborator) commented:
The reason will be displayed to describe this comment to others. Learn more.

Might be worth noting that, eventually, when we remove support for fee updates and switch to anchor-type fees, we can drop the 2x but we should keep the +1.
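For intuition, a rough sketch of the buffer computation being discussed. The weight constants are assumed example values, not necessarily LDK's actual ones:

```rust
// Illustrative sketch of the `2 * fee(1 + 1)` fee spike buffer: the fee for a
// commitment tx carrying this HTLC plus one more future HTLC, doubled so the
// funder can absorb a feerate spike. Constants are assumptions.
const COMMITMENT_TX_BASE_WEIGHT: u64 = 724;
const COMMITMENT_TX_WEIGHT_PER_HTLC: u64 = 172;

fn commit_tx_fee_msat(feerate_per_kw: u64, num_htlcs: u64) -> u64 {
    (COMMITMENT_TX_BASE_WEIGHT + num_htlcs * COMMITMENT_TX_WEIGHT_PER_HTLC)
        * feerate_per_kw / 1000 * 1000
}

fn fee_spike_buffer_msat(feerate_per_kw: u64) -> u64 {
    // `1 + 1`: this HTLC plus one more future HTLC; `2 *`: headroom for the
    // feerate roughly doubling before the funder dips below reserve.
    2 * commit_tx_fee_msat(feerate_per_kw, 1 + 1)
}
```

Per the note about anchor-type fees, the `2 *` multiplier could eventually be dropped while the `+1` HTLC allowance stays.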

@ariard commented:

Can you make explicit exactly where the spec deviation is? Because regarding the feerate scope of the check, I think it's right: we add one HTLC, one more for the fee spike buffer, and check the feerate of the whole commitment tx, per c-lightning's interpretation.

However, and that's where our spec deviation is to me, this policy requirement doesn't exist on the receiver; it's only on the sender.

Also, the reference to anchor outputs should be dropped. In fact, if you can CPFP, the feerate of the child may cover the HTLC feerate, so this check may be removed.

Note: that's not what the anchor output proposal recommends now, but its scope is still unclear...

@valentinewallace (Contributor, author) commented:

@TheBlueMatt are you fine to remove the anchor outputs reference here (per Antoine's comment above ^)? I'm not too familiar with the proposal so don't have an opinion.

@valentinewallace (Contributor, author) commented:

> Can you explicit exactly where the spec deviation is ?

Clarified in the comment



// Check that they won't violate our local required channel reserve by adding this HTLC.

// +1 for this HTLC.
let local_commit_tx_fee_msat = self.next_local_commit_tx_fee_msat(1);
@TheBlueMatt (Collaborator) commented:
The reason will be displayed to describe this comment to others. Learn more.

If you set this to 0 the tests all still pass.

@valentinewallace (Contributor, author) commented:

Hm, when I set this to 0 test_chan_reserve_violation_inbound_htlc_outbound_channel fails (as is the intention of the test). Would you mind double checking?

if pending_remote_value_msat - msg.amount_msat - chan_reserve_msat < remote_fee_cost_incl_stuck_buffer_msat {
// Note that if the pending_forward_state is not updated here, then it's because we're already failing
// the HTLC, i.e. its status is already set to failing.
pending_forward_state = create_pending_htlc_status(self, pending_forward_state, 0x1000|7);
@TheBlueMatt (Collaborator) commented:
The reason will be displayed to describe this comment to others. Learn more.

Commenting this line out and the tests still pass.

@valentinewallace (Contributor, author) commented:

Sigh, yeah. Any advice on how to test this one? I'm thinking maybe I can add a send_payment_test_only function to ChannelManager that sends payments without doing the normal checks? I can't figure out any way through the current public API atm.

@ariard left a comment:

Ongoing review; generally, documenting exactly where we enforce each new spec point and deviation would be great for future reads.


self.next_remote_commit_tx_fee_msat(1)
};
if pending_remote_value_msat - msg.amount_msat < remote_commit_tx_fee_msat {
return Err(ChannelError::Close("Remote HTLC add would not leave enough to pay for fees"));
@ariard commented:

This is the spec deviation, right? Please at least add a comment for it and modify the commit message to say we fail the channel. Anyway, that may not combine well with anchor outputs, where, once implemented, an initiator-sender considers it's always good to send an HTLC if above reserve, because you can always fee-bump?

@valentinewallace (Contributor, author) commented:

Sorry, I don't believe this is a deviation. Curious why you think it's a deviation?
From BOLT 2:

> update_add_htlc
> A sending node:
>     if it is responsible for paying the Bitcoin fee:
>         MUST NOT offer amount_msat if, after adding that HTLC to its commitment transaction, it cannot pay the fee for either the local or remote commitment transaction at the current feerate_per_kw while maintaining its channel reserve

@ariard commented:

AFAICT, this is a requirement on the sender, but here we're at HTLC reception. It makes sense, as that way you avoid closing the channel even if the sender violates the spec. I think the channel can still be unstuck if this HTLC is failed and the sender-initiator balance is increased again, without waiting for the feerate to decrease.

@ariard commented:

Maybe the misunderstanding is due to the spec saying:

> update_add_htlc
> A receiving node:
>     receiving an amount_msat that the sending node cannot afford at the current `feerate_per_kw` (while maintaining its channel reserve):
>         SHOULD fail the channel

It doesn't require enforcing a fee spike buffer.

@valentinewallace (Contributor, author) commented:

Ah but there's no fee spike buffer check here?

Here's the deviating fee spike buffer check: https://github.com/rust-bitcoin/rust-lightning/pull/577/files#diff-79ec46664bd0cf85b6be0ddb567527a7R1811-R1818

@valentinewallace (Contributor, author) commented:

> Maybe misunderstood is due to the spec saying
>
> update_add_htlc
> A receiving node:
>     receiving an amount_msat that the sending node cannot afford at the current `feerate_per_kw` (while maintaining its channel reserve):
>         SHOULD fail the channel
>
> It doesn't require to enforce fee buffer spikes.

Right, this check is intended to purely perform the check that you quote above here ^

@ariard commented:

After IRC discussion, this is right; I confused myself with what this check was enforcing. But still, you should have pointed out the receiver spec-side :p

@valentinewallace (Contributor, author) commented:

That's fair, lol

@valentinewallace (Contributor, author) commented Jun 12, 2020

@ariard commented Jun 9, 2020:

@valentinewallace see ariard@9697578; this is a WIP for documenting our HTLC acceptance policy better. I think it's currently correct, but that's quite a sensitive part: in case of a bug, a third party may freely close our channels.

@@ -1721,9 +1789,48 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
removed_outbound_total_msat += htlc.amount_msat;
@ariard commented:

@TheBlueMatt

I don't get the rationale behind the difference in the way we account for inbound HTLCs and outbound HTLCs. We evaluate any inbound HTLC as part of our balance independently of its state. We presume its resolution is favorable to us, even if that may not be the case, which is reasonable without more tracking complexity.

Instead, for outbound HTLCs, we account them in the counterparty's balance only after their removal is announced, independently of success or failure. Why don't we also presume outbound HTLC resolutions are always favorable to them, i.e. account them in their balance once we announce the outbound HTLCs?

@TheBlueMatt (Collaborator) commented:

I don't really get your question. We should not be assuming any kind of HTLC resolution here, nor do I think we do; we only ever consider HTLCs which have begun the removal process.

@ariard commented:

I know we talked about this offline, but after reading the conversation again there was still a misunderstanding. I think we're making a different assumption for outbound HTLCs, namely we don't consider that a committed outbound HTLC will resolve favorably for them.

Why don't we account OutboundHTLCState::Committed as part of their balance, removed_outbound_total_msat?

@TheBlueMatt (Collaborator) commented:

I'm still a bit confused. Note that the "None" here means "there is no HTLCFailReason and the claim has resolved in our favor"
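To make the accounting point in this thread concrete, here's a heavily simplified sketch. The state names are stand-ins loosely modeled on LDK's, not its actual definitions:

```rust
// Illustrative sketch of the accounting under discussion: only outbound HTLCs
// that have begun the removal process count toward `removed_outbound_total_msat`;
// `Committed` ones do not. States here are simplified assumptions.
enum OutboundHTLCState {
    Committed,                   // still in flight; not counted
    RemoteRemoved,               // counterparty announced removal
    AwaitingRemovedRemoteRevoke, // removal in progress
}

struct OutboundHTLC {
    state: OutboundHTLCState,
    amount_msat: u64,
}

fn removed_outbound_total_msat(htlcs: &[OutboundHTLC]) -> u64 {
    htlcs
        .iter()
        .filter(|h| !matches!(h.state, OutboundHTLCState::Committed))
        .map(|h| h.amount_msat)
        .sum()
}
```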

@valentinewallace (Contributor, author) commented:

> @valentinewallace see ariard@9697578, this is a WIP for documenting better our HTLC acceptance policy. I think it's currently correct but that's quite a sensible part, in case of bug, a third-party may freely close our channels.

That's a nice improvement, this PR's pretty big already though, so I'd prefer to leave it as a follow-up?

@ariard commented Jun 11, 2020:

@valentinewallace Sure will open documentation as a follow-up!

This also includes adding a closure that creates a new pending HTLC status
as a parameter for Channel's update_add_htlc. This will later be useful
when we add the check for fee spike buffer violations, which will also result
in changing an HTLC's pending status to failing.

Co-authored-by: Jeffrey Czyz <jkczyz@gmail.com>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
where F: for<'a> Fn(&'a Self, PendingHTLCStatus, u16) -> PendingHTLCStatus {
// We can't accept HTLCs sent after we've sent a shutdown.
let local_sent_shutdown = (self.channel_state & (ChannelState::ChannelFunded as u32 | ChannelState::LocalShutdownSent as u32)) != (ChannelState::ChannelFunded as u32);
if local_sent_shutdown {
@ariard commented:

Why assign the result of the channel state comparison to a variable and then test it, instead of testing the channel state directly?

@valentinewallace (Contributor, author) commented:

Just to make it readable and clear what's being checked: the variable makes it explicit that we're checking whether the local node has sent a shutdown message.
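To unpack the bitmask in the snippet above (the flag values below are assumed for illustration; LDK's actual ChannelState values may differ):

```rust
// Illustrative sketch of the shutdown check. Flag values are assumptions.
const CHANNEL_FUNDED: u32 = 1 << 5;
const LOCAL_SHUTDOWN_SENT: u32 = 1 << 6;

fn local_sent_shutdown(channel_state: u32) -> bool {
    // Masking with both flags and comparing against CHANNEL_FUNDED alone is
    // true whenever LOCAL_SHUTDOWN_SENT is set (or the channel isn't funded).
    (channel_state & (CHANNEL_FUNDED | LOCAL_SHUTDOWN_SENT)) != CHANNEL_FUNDED
}
```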



// as we should still be able to afford adding this HTLC plus one more future HTLC, regardless of
// being sensitive to fee spikes.
let remote_fee_cost_incl_stuck_buffer_msat = 2 * self.next_remote_commit_tx_fee_msat(1 + 1);
if pending_remote_value_msat - msg.amount_msat - chan_reserve_msat < remote_fee_cost_incl_stuck_buffer_msat {
@ariard commented:

Why don't we also check that the sender-initiator can afford this new HTLC with regard to our local commitment transaction fee?

@valentinewallace (Contributor, author) commented:

That's what this is supposed to be doing: https://github.com/rust-bitcoin/rust-lightning/pull/577/files#diff-79ec46664bd0cf85b6be0ddb567527a7R1830. Though it also takes into account the channel reserve, which we discuss further below.

@ariard commented Jun 15, 2020:

Code Review ACK d2ed6c1

Balance decreases may trigger a channel reserve or fee spike buffer violation. There are two sources of balance decrease: a) offering an HTLC, and b) paying fees for the added weight of this HTLC. With regard to sender/receiver, a) is always static to evaluate, as offers are unilateral within a single commitment update; b) depends on the channel funder. At HTLC sending, we're only concerned with checking b) as a potential counterparty violation (for security), and checking a) and b) as violations of our own. At HTLC reception, we should only check a) and b) on the counterparty. We deviate from the spec by 1) checking that we don't bypass our own reserve, and 2) checking fee spike buffer integrity if the peer is the funder.

That said, I think we should move forward and document behavior in a follow-up. Likely the spec should also be updated for clarification.

Great work Val !

valentinewallace and others added 2 commits June 15, 2020 15:51
When we receive an inbound HTLC from a peer on an inbound channel,
make sure the funder can still cover the additional on-chain cost
of the HTLC while maintaining their channel reserve.

When we're sending an outbound HTLC, make sure the funder can still
cover the additional on-chain cost of the HTLC while maintaining
their channel reserve.

+ implement fee spike buffer for channel initiators sending payments.
Also add an additional spec-deviating fee spike buffer on the
receiving side (but don't close the channel if this reserve is
violated, just fail the HTLC).
From lightning-rfc PR #740.

Co-authored-by: Matt Corallo <git@bluematt.me>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
This change should allow the fuzzer to catch more edge cases, such as
channel reserve checks that cut it close when sending payments.
@TheBlueMatt TheBlueMatt merged commit c1c9b11 into lightningdevkit:master Jun 15, 2020
Successfully merging this pull request may close these issues.

Fix fee check when accepting inbound HTLCs
4 participants