
[bug]: commitment transaction dips peer below chan reserve: internal error causing eclair to FC #7657

Closed
TrezorHannes opened this issue May 1, 2023 · 40 comments · Fixed by #8096
Labels
bug (Unintended code behaviour) · force closes · HTLC · P1 (MUST be fixed or reviewed)

Comments

@TrezorHannes

Background

For an unknown reason (possibly a race condition), my LND instance determined that a downstream HTLC commitment would have pushed the channel peer's balance negative and below our chan reserve for that channel.
That caused my LND to send my peer an internal error. @DerEwige is running Eclair, which immediately goes on-chain when it receives an internal error.

Two things to validate:

  • the negative balance might happen quite often, for instance when more than one HTLC is opened at the same time. Why does this cause an internal error instead of just a failing HTLC?
  • could we use a different, more verbose error message, to avoid other implementations interpreting this as a major issue and going on-chain?

Your environment

  • version of lnd: lnd version 0.16.2-beta commit=v0.16.2-beta
  • which operating system (uname -a on *Nix): Linux debian-nuc 5.10.0-20-amd64 #1 SMP Debian 5.10.158-2 (2022-12-13) x86_64 GNU/Linux
  • version of btcd, bitcoind, or other backend: Bitcoin Core version v24.0.1
  • any other relevant environment details: LND latest release AMD64 binary

Steps to reproduce

Log excerpt (full log grep of channel-id and pubkey attached)

2023-05-01 01:48:09.421 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 239283580 mSAT is larger than 234681168 mSAT
2023-05-01 01:48:17.098 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 766964171 mSAT is larger than 746420614 mSAT
2023-05-01 01:49:35.142 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 74920943 mSAT is larger than 11413159 mSAT
2023-05-01 01:53:07.050 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 63151008 mSAT is larger than 11413159 mSAT
2023-05-01 01:59:08.683 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 21053936 mSAT is larger than 11413159 mSAT
2023-05-01 01:59:50.614 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 207614045 mSAT is larger than 11413159 mSAT
2023-05-01 02:02:31.406 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 50011450 mSAT is larger than 11413159 mSAT
2023-05-01 02:02:47.147 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 50011450 mSAT is larger than 11413159 mSAT
2023-05-01 02:03:09.108 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 20556850 mSAT is larger than 11413159 mSAT
2023-05-01 02:04:24.172 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 11863529 mSAT is larger than 11413159 mSAT
2023-05-01 02:06:55.592 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 73006424 mSAT is larger than 11413159 mSAT
2023-05-01 02:08:36.525 [ERR] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): failing link: unable to update commitment: commitment transaction dips peer below chan reserve: our balance below chan reserve with error: internal error
2023-05-01 02:08:36.525 [ERR] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): link failed, exiting htlcManager
2023-05-01 02:08:36.525 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): exited
2023-05-01 02:08:36.525 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): stopping
2023-05-01 02:08:55.257 [INF] PEER: Peer(023631624e30ef7bcb2887e600da8e59608a093718bc40d35b7a57145a0f3db9af): Loading ChannelPoint(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0), isPending=false
2023-05-01 02:08:55.257 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): starting
2023-05-01 02:08:55.257 [INF] CNCT: Attempting to update ContractSignals for ChannelPoint(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0)
2023-05-01 02:08:55.257 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): HTLC manager started, bandwidth=0 mSAT
2023-05-01 02:08:55.257 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): attempting to re-synchronize
2023-05-01 02:11:54.862 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): stopping
2023-05-01 02:11:54.862 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): error when syncing channel states: link shutting down
2023-05-01 02:11:54.862 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): exited
2023-05-01 02:12:04.983 [INF] PEER: Peer(023631624e30ef7bcb2887e600da8e59608a093718bc40d35b7a57145a0f3db9af): Loading ChannelPoint(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0), isPending=false
2023-05-01 02:12:04.983 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): starting
2023-05-01 02:12:04.983 [INF] CNCT: Attempting to update ContractSignals for ChannelPoint(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0)
2023-05-01 02:12:04.983 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): HTLC manager started, bandwidth=0 mSAT
2023-05-01 02:12:04.984 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): attempting to re-synchronize
2023-05-01 02:15:03.445 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): stopping
2023-05-01 02:15:03.445 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): error when syncing channel states: link shutting down
[....]
2023-05-01 02:40:20.520 [INF] CNCT: Attempting to update ContractSignals for ChannelPoint(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0)
2023-05-01 02:40:20.520 [INF] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): attempting to re-synchronize
2023-05-01 02:41:14.987 [INF] NTFN: Dispatching confirmed spend notification for outpoint=b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0, script=0 b0b3c81deb80ca8d88999cee148c5b2b5f1abef226d4d434139bdde0679f903e at current height=787714: 3daf8760142f17a9a5156be37a6da706deafe088c61e1b0275d313be666cb67b[0] spending b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0 at height=787714
2023-05-01 02:41:14.996 [INF] CNCT: Unilateral close of ChannelPoint(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0) detected
2023-05-01 02:41:14.997 [INF] CNCT: ChannelArbitrator(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): remote party has closed channel out on-chain
2023-05-01 02:41:28.618 [INF] CNCT: ChannelArbitrator(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): still awaiting contract resolution
2023-05-01 02:41:28.737 [INF] CNCT: *contractcourt.commitSweepResolver(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): Sweeping with witness type: CommitmentToRemoteConfirmed
2023-05-01 02:41:28.737 [INF] CNCT: *contractcourt.commitSweepResolver(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): sweeping commit output
2023-05-01 02:43:14.539 [INF] CNCT: *contractcourt.commitSweepResolver(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): local commitment output fully resolved by sweep tx: 6cf35d29f4170599173ed44cbd0bb5e8280699049447528c51405c906f416192
2023-05-01 02:43:14.578 [INF] CNCT: ChannelArbitrator(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): a contract has been fully resolved!
2023-05-01 02:43:14.578 [INF] CNCT: ChannelArbitrator(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): still awaiting contract resolution
2023-05-01 02:59:20.520 [ERR] PEER: Peer(023631624e30ef7bcb2887e600da8e59608a093718bc40d35b7a57145a0f3db9af): Channel(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0) request enabling failed due to inactive link

Expected behaviour

Proposal above; details are better evaluated by folks who are more accustomed to cross-implementation error relays.

  • let the HTLC fail via the usual process
  • don't send an internal error

Actual behaviour

Sending an internal error, causing Eclair to force-close
speedupln-fc.log

@TrezorHannes TrezorHannes added bug Unintended code behaviour needs triage labels May 1, 2023
@DerEwige

DerEwige commented May 1, 2023

Here is the log from my side:

eclair.2023-05-01_02.log:2023-05-01 02:07:43,762 INFO  f.a.e.p.r.ChannelRelay PAY h:170265fb62c658c174b0dd7688e2d8d669c0271507991d608897b051af2148e0 p:9d5d0308 - relaying htlc #230485 from channelId=ebe6bd43f9ee41617fcc455c869eaba86a5edc98a0e4bd25ac698b2054bc508e to requestedShortChannelId=768220x2458x0 nextNode=037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3
eclair.2023-05-01_02.log:2023-05-01 02:07:49,525 INFO  f.a.e.p.r.ChannelRelay PAY h:170265fb62c658c174b0dd7688e2d8d669c0271507991d608897b051af2148e0 p:89da5cb3 - relaying htlc #230486 from channelId=ebe6bd43f9ee41617fcc455c869eaba86a5edc98a0e4bd25ac698b2054bc508e to requestedShortChannelId=768220x2458x0 nextNode=037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3
eclair.2023-05-01_02.log:2023-05-01 02:08:31,599 INFO  f.a.e.p.r.ChannelRelay PAY h:170265fb62c658c174b0dd7688e2d8d669c0271507991d608897b051af2148e0 p:d1610534 - relaying htlc #230491 from channelId=ebe6bd43f9ee41617fcc455c869eaba86a5edc98a0e4bd25ac698b2054bc508e to requestedShortChannelId=768220x2458x0 nextNode=037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3
eclair.2023-05-01_02.log:2023-05-01 02:08:36,629 ERROR f.a.e.c.fsm.Channel n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 - peer sent error: ascii='internal error' bin=696e7465726e616c206572726f72
eclair.2023-05-01_02.log:2023-05-01 02:08:36,654 INFO  f.a.e.c.p.TxPublisher n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 - publishing commit-tx txid=3daf8760142f17a9a5156be37a6da706deafe088c61e1b0275d313be666cb67b spending b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0 with id=39906b51-14ec-4cbf-a4e5-4b631ec3a1a1 (0 other attempts)
eclair.2023-05-01_02.log:2023-05-01 02:08:36,654 INFO  f.a.e.c.p.TxPublisher n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 - publishing replaceable local-anchor spending 3daf8760142f17a9a5156be37a6da706deafe088c61e1b0275d313be666cb67b:0 with id=d09acaa5-d561-47dc-9a62-38bf41aec4b1 (0 other attempts)
eclair.2023-05-01_02.log:2023-05-01 02:08:36,661 INFO  f.a.e.db.DbEventHandler n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 - paying mining fee=0 sat for txid=3daf8760142f17a9a5156be37a6da706deafe088c61e1b0275d313be666cb67b desc=commit-tx
eclair.2023-05-01_02.log:2023-05-01 02:08:36,661 INFO  f.a.e.c.p.TxPublisher n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 - publishing local-main-delayed txid=b1a8e6c334074579d7cc5ad4726dc213f45ac49297b033585fa1dab4a60d3dc1 spending 3daf8760142f17a9a5156be37a6da706deafe088c61e1b0275d313be666cb67b:5 with id=a8602e0e-210c-40a2-a91e-d854330f4434 (0 other attempts)
eclair.2023-05-01_02.log:2023-05-01 02:08:36,662 INFO  f.a.e.c.p.TxPublisher n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 - publishing replaceable htlc-timeout spending 3daf8760142f17a9a5156be37a6da706deafe088c61e1b0275d313be666cb67b:4 with id=0cd27a09-d278-4d6b-b266-8ba550bf9f00 (0 other attempts)
eclair.2023-05-01_02.log:2023-05-01 02:08:36,662 INFO  f.a.e.c.p.TxTimeLocksMonitor n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 t:a8602e0e - local-main-delayed has a relative timeout of 360 blocks, watching parentTxId=3daf8760142f17a9a5156be37a6da706deafe088c61e1b0275d313be666cb67b
eclair.2023-05-01_02.log:2023-05-01 02:08:36,688 INFO  f.a.e.c.p.TxTimeLocksMonitor n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 t:0cd27a09 - delaying publication of htlc-timeout until block=788378 (current block=787704)
eclair.2023-05-01_02.log:2023-05-01 02:08:36,704 INFO  f.a.e.c.p.ReplaceableTxFunder n:037f66e84e38fc2787d578599dfe1fcb7b71f9de4fb1e453c5ab85c05f5ce8c2e3 c:efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7 t:d09acaa5 - skipping local-anchor: commit feerate is high enough (feerate=2500 sat/kw)

I just see that my node received an "internal error" from my peer and then force closed.

@Crypt-iQ Looks like another case of a link sending out "internal error" causing a force close

@saubyk saubyk added interop interop with other implementations force closes labels May 1, 2023
@ziggie1984
Collaborator

ziggie1984 commented May 2, 2023

@DerEwige could you check in your logs which was the latest UpdateAddHTLC msg that led to this behaviour, especially the amount you tried to put on the remote's commitment transaction? What essentially happened is that your node tried to add an HTLC to the remote's commitment, but at the current feerate your peer would have been forced to go below reserve.

According to the spec your SENDING node should:

if it is not responsible for paying the Bitcoin fee:
SHOULD NOT offer amount_msat if, once the remote node adds that HTLC to its commitment transaction, it cannot pay the fee for the updated local or remote transaction at the current feerate_per_kw while maintaining its channel reserve.

The RECEIVING node followed the spec in a way:

receiving an amount_msat that the sending node cannot afford at the current feerate_per_kw (while maintaining its channel reserve and any to_local_anchor and to_remote_anchor costs):
SHOULD send a warning and close the connection, or send an error and fail the channel.

Definitely worth checking why eclair added the htlc although it would force the peer to go below reserve.

Whether lnd sends a warning and closes the connection or sends an internal error should also be reevaluated.

@DerEwige

DerEwige commented May 2, 2023

Could you check in your logs which was the latest UpdateAddHTLC msg that led to this behaviour, especially the amount you tried to put on the remote's commitment transaction? What essentially happened is that your node tried to add an HTLC to the remote's commitment, but at the current feerate your peer would have been forced to go below reserve.

Unfortunately, I have most of my Eclair loggers running at level “WARN”, due to the excessive amount of logs my rebalancing produces, so I don't have any additional logs from my side.

I can only see from the logs of my plugin that I tried to send out a payment through this channel:

eclair.2023-05-01_02.log:2023-05-01 02:08:36,329 INFO f.a.c.ChannelDB_ActiveBalancer_Multibalance - 768220x2458x0 Send through route [768220x2458x0, 734274x2495x1, 562892x872x0, 778983x2811x0] 0 0 0.0

The RECEIVING node followed the spec in a way:

receiving an amount_msat that the sending node cannot afford at the current feerate_per_kw (while maintaining its channel reserve and any to_local_anchor and to_remote_anchor costs):
SHOULD send a warning and close the connection, or send an error and fail the channel.

This case applies if the sender falls below the reserve.
But in this case, the receiving node could not afford the fees.

According to the spec your SENDING node should:

if it is not responsible for paying the Bitcoin fee:
SHOULD NOT offer amount_msat if, once the remote node adds that HTLC to its commitment transaction, it cannot pay the fee for the updated local or remote transaction at the current feerate_per_kw while maintaining its channel reserve.

This one applies. My node should not have sent the HTLC.
But there might be a fee estimate discrepancy at this point?
My node might want 10 sat/vb and think it is good.
And when I try to add the HTLC, the remote peer might disagree and want 15 sat/vb, which would dip it below the reserve?

Somehow the spec does not say anything about how the receiver should act if accepting the HTLC would dip it below the reserve? Did I miss that part?

@ziggie1984
Collaborator

ziggie1984 commented May 2, 2023

This case applies if the sender falls below the reserve.
But in this case, the receiving node could not afford the fees.

From my perspective this is the same: when you add an HTLC to the remote commitment (sending an update_add_htlc), you as the sender need to make sure that you evaluate the transaction limits beforehand, meaning that adding this new output in the current fee environment should not let your peer fall below reserve. Your peer will just receive the amount and try to add it to the transaction at the current fee limits; if that forces the peer to fall below its reserve, it should fail the attempt (this is also what happened).

Somehow the specs does not say anything about how the receiver should act if it would dip him below the reserve? Did I miss that part?

Every channel has a synchronized feerate in sat/kweight which only the initiator can change via the update_fee msg. So the feerate was definitely the same, but why eclair and lnd diverged in the final calculation of the transaction is strange; maybe it is some rounding issue. That's why I asked you to supply the amount, so that one could backtrace whether the boundaries were breached or not, but without logs that is very hard to do.

But I must say this check for an HTLC which is added to our commitment is fairly new (since 0.16.0 I think), so it may be something that happens more often now (https://github.com/lightningnetwork/lnd/blob/master/lnwallet/channel.go#L3640)

Commit: 47c5809 (new fresh stuff^^) => not really new, just the log verbosity level changed

@Roasbeef
Member

Roasbeef commented May 2, 2023

could we identify a different error message, being more verbose, to avoid other implementations interpreting this as a major issue and going on-chain?

From the spec PoV, the channel basically can't continue at this point. Similarly, if someone sends us an invalid pre-image, that's also an invalid action and the channel can't continue.

This might ultimately be an interoperability issue: does the HTLC eclair sends actually dip below the reserve? If so, can we repro that exact scenario to dig into the accounting between the implementations? Is it an mSAT rounding thing?

@DerEwige

DerEwige commented May 8, 2023

I had a second force close with the same issue:

peer logs:

2023-05-07 16:37:21.363 [ERR] HSWC: ChannelLink(8f50bc54208b69ac25bde4a098dc5e6aa8ab9e865c45cc7f6141eef943bde6eb:1): failing link: unable to update commitment: commitment transaction dips peer below chan reserve: our balance below chan reserve with error: internal error
2023-05-07 16:37:21.366 [ERR] HSWC: ChannelLink(8f50bc54208b69ac25bde4a098dc5e6aa8ab9e865c45cc7f6141eef943bde6eb:1): link failed, exiting htlcManager
2023-05-07 16:37:21.366 [INF] HSWC: ChannelLink(8f50bc54208b69ac25bde4a098dc5e6aa8ab9e865c45cc7f6141eef943bde6eb:1): exited
2023-05-07 16:37:21.366 [INF] HSWC: ChannelLink(8f50bc54208b69ac25bde4a098dc5e6aa8ab9e865c45cc7f6141eef943bde6eb:1): stopping

my logs:

2023-05-07 16:37:21,481 ERROR f.a.e.c.fsm.Channel n:020ca6f9aaebc3c8aab77f3532112649c28a0bfb1539d0c2a42f795fb6e4c363b2 c:ebe6bd43f9ee41617fcc455c869eaba86a5edc98a0e4bd25ac698b2054bc508e - peer sent error: ascii='internal error' bin=696e7465726e616c206572726f72
2023-05-07 16:37:21,511 INFO  f.a.e.c.p.TxPublisher n:020ca6f9aaebc3c8aab77f3532112649c28a0bfb1539d0c2a42f795fb6e4c363b2 c:ebe6bd43f9ee41617fcc455c869eaba86a5edc98a0e4bd25ac698b2054bc508e - publishing commit-tx txid=76a55a85d7c8ea382d2de31dc3e5de0191f0e52f94d5eee260a9e19379da5720 spending 8f50bc54208b69ac25bde4a098dc5e6aa8ab9e865c45cc7f6141eef943bde6eb:1 with id=940b27af-fd2e-41e4-afa0-7251f0db4238 (0 other attempts)

I will also open an issue with eclair and link the two issues.

@t-bast
Contributor

t-bast commented May 9, 2023

This might ultimately be an interoperability issue: does the HTLC eclair sends actually dip below the reserve? If so, can we repro that exact scenario to dig into the accounting between the implementations? Is it an mSAT rounding thing?

This is the important question.

Eclair does check reserve requirements (for sender and receiver) before sending an HTLC: https://github.com/ACINQ/eclair/blob/77b333731f618395f33d69d1fe0aba2c86cdba58/eclair-core/src/main/scala/fr/acinq/eclair/channel/Commitments.scala#L440

@DerEwige your node has all the information we need to debug this. Can you:

  1. Run the channel command for the force-closed channel and share its output: this will give us the state of the channel (including the feerate for the commitment transaction) before the force-close
  2. Search your logs for the outgoing HTLC that triggered the force-close, and share its amount
  3. If there are multiple HTLCs that were added at the same time, make sure to include all of them

We should be able to create a unit test based on that data in both eclair and lnd to see why the reserve calculations don't match.

@DerEwige

DerEwige commented May 9, 2023

@t-bast

Run the channel command for the force-closed channel and share its output: this will give us the state of the channel (including the feerate for the commitment transaction) before the force-close

channel_fc1.log
channel_fc2.log
Channel data for both force closes

Search your logs for the outgoing HTLC that triggered the force-close, and share its amount
If there are multiple HTLCs that were added at the same time, make sure to include all of them

Unfortunately I run this logback config:

	<logger name="fr.acinq.eclair.router" level="WARN"/>
	<logger name="fr.acinq.eclair.Diagnostics" level="WARN"/>
	<logger name="fr.acinq.eclair.channel.Channel" level="WARN"/>
	<logger name="fr.acinq.eclair.channel.fsm.Channel" level="WARN"/>
	<logger name="fr.acinq.eclair.io.Peer" level="WARN"/>
	<logger name="fr.acinq.eclair.payment.send.PaymentLifecycle" level="WARN"/>

So the HTLC is not logged.

@t-bast
Contributor

t-bast commented May 9, 2023

Thanks, that should be enough to replay the latest state and figure out why there is a mismatch between eclair and lnd. I'll investigate that state and share the contents of a unit test to try in both eclair and lnd to figure out where our balance calculation diverges.

@t-bast
Contributor

t-bast commented May 16, 2023

I have been able to reproduce the flow of messages that leads to the two channel states shared by @DerEwige.
Both of them have exactly the same pattern, so I'll only detail what happened to the first one (with channel_id efc328abfaace666d49abf6d4eaea0589c0552feef9f34a5c81e74252b62e5b7).

First of all, an HTLC eclair -> lnd is cross-signed without any issues:

  Eclair                      LND
    |                          |
    | update_add_htlc          | htlc_id = 1_477_859
    |------------------------->|
    | commit_sig               |
    |------------------------->|
    |           revoke_and_ack |
    |<-------------------------|
    |               commit_sig |
    |<-------------------------|
    | revoke_and_ack           |
    |------------------------->|

At that point, both commitments are synchronized and contain only that HTLC.

Then eclair and lnd try adding an HTLC at the same time:

  Eclair                      LND
    |                          |
    | update_add_htlc          | htlc_id = 1_477_861
    |------------------------->|
    |          update_add_htlc | htlc_id = 120_896
    |<-------------------------|
    | commit_sig    commit_sig |
    |-----------+  +-----------|
    |            \/            |
    |            /\            |
    |<----------+  +---------->|
    |           revoke_and_ack |
    |<-------------------------|
    | revoke_and_ack           |
    |------------------------->|
    | commit_sig               |
    |------------------------->|
    |                    error |
    |<-------------------------|

Here are the details of the commitments when eclair receives lnd's error message:

  • commitment feerate: 2500 sat/kw
  • channel reserve: 30_000 sat (same on both sides)
  • eclair's commitment:
    • commitment number: 3_194_658
    • to_local: 2_836_315_792 msat
    • to_remote: 34_367_703 msat
    • HTLC OUT: id=1_477_859 amount=117_941_049 msat
    • HTLC IN: id=120_896 amount=11_375_456 msat
  • lnd's commitment:
    • commitment number: 3_195_511
    • to_local: 45_743_159 msat
    • to_remote: 2_756_274_986 msat
    • HTLC IN: id=1_477_859 amount=117_941_049 msat
    • HTLC IN: id=1_477_861 amount=80_040_806 msat
  • lnd's next commitment (signed by eclair, previous not revoked yet):
    • commitment number: 3_195_512
    • to_local: 34_367_703 msat
    • to_remote: 2_756_274_986 msat
    • HTLC IN: id=1_477_859 amount=117_941_049 msat
    • HTLC IN: id=1_477_861 amount=80_040_806 msat
    • HTLC OUT: id=120_896 amount=11_375_456 msat

You can check that in the files shared by @DerEwige.

The important thing to note is that there is nothing that makes either peer dip into their reserve.
The difference between lnd's current commitment and the next one (which seems to trigger the force-close) is an additional HTLC going from lnd to eclair.
So it isn't triggered by something eclair sent; lnd decided to add that HTLC (and it doesn't make anyone dip into their reserve).
It seems to me that this internal error sent by lnd shouldn't be treated as an error, as there's nothing wrong with the state of the commitments or the messages exchanged.
@Roasbeef can you (or someone else from lnd) confirm this?
I'm very tempted to explicitly parse internal error messages and ignore them in eclair (like core-lightning does), as such bugs are very costly for our users who end up paying for a force-close.

t-bast added a commit to ACINQ/eclair that referenced this issue May 16, 2023
It seems like lnd sends this error whenever something wrong happens on
their side, regardless of whether the channel actually needs to be closed.
We ignore it to avoid paying the cost of a channel force-close, it's up
to them to broadcast their commitment if they wish.

See lightningnetwork/lnd#7657 for example.
t-bast added a commit to ACINQ/eclair that referenced this issue May 17, 2023
It seems like lnd sends this error whenever something wrong happens on
their side, regardless of whether the channel actually needs to be closed.
We ignore it to avoid paying the cost of a channel force-close, it's up
to them to broadcast their commitment if they wish.

See lightningnetwork/lnd#7657 for example.
@yyforyongyu
Member

@TrezorHannes Could you provide more logs so we could understand what had happened during those 90s?

2023-05-01 02:06:55.592 [WRN] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): insufficient bandwidth to route htlc: 73006424 mSAT is larger than 11413159 mSAT

...

2023-05-01 02:08:36.525 [ERR] HSWC: ChannelLink(b7e5622b25741ec8a5349feffe52059c58a0ae4e6dbf9ad466e6acfaab28c3ef:0): failing link: unable to update commitment: commitment transaction dips peer below chan reserve: our balance below chan reserve with error: internal error

@TrezorHannes
Author

@TrezorHannes Could you provide more logs so we could understand what had happened during those 90s?

@yyforyongyu did you see the attached log in the first post? I'm afraid I don't have more, my debug level is info.

@yyforyongyu
Member

@TrezorHannes yep that's where I got those two lines. Need more info to debug the issue.

@TrezorHannes
Author

@yyforyongyu I'm afraid I shared all I had. But perhaps @t-bast can help, who was able to reproduce the issue in a test-environment.

@t-bast
Contributor

t-bast commented May 17, 2023

I've already shared everything in my previous comment. This confirms that the issue seems to be on the lnd side; I don't have much else I can add...

@saubyk saubyk added this to the v0.17.0 milestone May 17, 2023
@Roasbeef
Member

@t-bast thanks for that thorough analysis! We'll def prio this and look into what's going on.

@saubyk saubyk added this to lnd v0.17 May 18, 2023
@saubyk saubyk moved this to 🏗 In progress in lnd v0.17 May 18, 2023
@saubyk saubyk modified the milestones: v0.17.0, v0.17.1 Jun 15, 2023
@DerEwige

I also had 2 more force closes due to this problem.
As eclair now ignores the initial “internal error”, the force close is delayed by about a minute.
But the result is the same.

Here is the log of one of the force closes
803645x1433x0.log

@ziggie1984
Collaborator

ziggie1984 commented Aug 21, 2023

@t-bast I was verifying your numbers and figured out that you kinda neglected the commitment fees the lnd node has to pay:

lnd's next commitment (signed by eclair, previous not revoked yet):
commitment number: 3_195_512
to_local: 34_367_703 msat
to_remote: 2_756_274_986 msat
HTLC IN: id=1_477_859 amount=117_941_049 msat
HTLC IN: id=1_477_861 amount=80_040_806 msat
HTLC OUT: id=120_896 amount=11_375_456 msat

capacity is 3_000_000_000 msat: 34_367_703 + 2_756_274_986 + 117_941_049 + 80_040_806 + 11_375_456 = 3_000_000_000 msat

additional commitment cost: (3*173 + 1124) weight * 2500 sat/kweight / 1000 = 4107.5 sat (in fees) + 660 sat (2 anchor outputs) = 4767.5 sat

Taking these costs into account, the local balance falls below the reserve:

to_local: 34_367_703 msat - 4_767_500 msat = 29_600_203 msat (which is below the 30_000_000 msat reserve)
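
(For anyone who wants to re-check the arithmetic, here is a minimal, self-contained Go sketch. It is not lnd code; the weight figures, anchor values and balances are simply the ones quoted in this comment.)

package main

import "fmt"

// Reproduces the reserve calculation above using the figures from this
// thread: 1124 WU anchor commitment base, 173 WU per HTLC output,
// 2500 sat/kw feerate, two 330 sat anchor outputs.
func main() {
    const (
        feeRatePerKw = 2500        // sat per 1000 weight units
        baseCommitWU = 1124        // anchor commitment tx base weight
        htlcOutputWU = 173         // weight added per untrimmed HTLC output
        anchorsMsat  = 2 * 330_000 // two anchor outputs
        reserveMsat  = 30_000_000
        toLocalMsat  = 34_367_703
        numHTLCs     = 3
    )

    weight := baseCommitWU + numHTLCs*htlcOutputWU
    feeMsat := weight * feeRatePerKw // WU * (sat/kWU) == msat
    costMsat := feeMsat + anchorsMsat

    remaining := toLocalMsat - costMsat
    fmt.Printf("fee+anchors=%d msat, to_local after costs=%d msat, below reserve: %v\n",
        costMsat, remaining, remaining < reserveMsat)
}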

@ziggie1984
Collaborator

ziggie1984 commented Aug 21, 2023

Actually, I think one cannot guarantee these situations will not happen. The problem is that the lnd node does not yet know, before adding its own HTLC, that the remote is adding one (an additional 173 weight * 2500 sat/kweight = 432_500 msat), which is exactly the difference that dips us below the reserve. The same happens for eclair when adding their output: they don't know yet that we are adding an incoming HTLC to them.

@ziggie1984
Collaborator

ziggie1984 commented Aug 21, 2023

As a potential fix, I would add a safety net for the peer who opened the channel: let's say we always keep an additional buffer of 10 * IncomingHTLC_Weight (173) * CFee (2500 sat/kweight) = 4325 sat (in this example, with a CFee of 2500), so that we would not fall below reserve if the peer added 10 incoming HTLCs in one revocation window.

Or we could also include unsigned updates by the remote from the remote_update_log in our estimation when we would like to add an HTLC to the remote chain? Currently, unsigned updates by the remote are in our remote log bucket as well, but we only include incoming HTLCs in our commitment which our peer signed for (and which we acked). But we could leave space for them?

@t-bast
Contributor

t-bast commented Aug 22, 2023

Nice, very good catch @ziggie1984 that explains it and the numbers match.

Actually I think one cannot guarantee these situations will not happen.

Right, in that case we cannot completely prevent this with the current commitment update protocol, since eclair and lnd concurrently add an HTLC, and each have to take into account the remote peer's HTLC (there's no way to "unadd") which forces dipping into the reserve.

If we actually allowed dipping into the reserve here, that could be abused: instead of 1 concurrent HTLC in each direction, one of the peers could send many of them to dip arbitrarily low into the reserve, which creates bad incentives.

As a potential fix I would add a safety net to the peer who opened the channel, lets say we always keep a buffer

I'm surprised you don't already do that as part of lightning/bolts#919? I thought every implementation had put this mitigation in place 2 years ago (otherwise lnd is potentially at risk of the kind of dust attack described in that PR). Eclair has that kind of mitigation, which should have avoided that situation if eclair had been the channel funder.

I guess that issue is another good argument for lightning/bolts#867, which will remove all of those edge cases 😅

@ziggie1984
Collaborator

So we have a potential safety net here, but as you mentioned above, for concurrent updates this is not good enough, so I propose the fix above: always count a certain number of incoming HTLCs as a buffer.

https://github.com/lightningnetwork/lnd/blob/master/lnwallet/channel.go#L5378-L5391

	// We must also check whether it can be added to our own commitment
	// transaction, or the remote node will refuse to sign. This is not
	// totally bullet proof, as the remote might be adding updates
	// concurrently, but if we fail this check there is for sure not
	// possible for us to add the HTLC.
	err = lc.validateCommitmentSanity(
		lc.remoteUpdateLog.logIndex, lc.localUpdateLog.logIndex,
		false, pd, nil,
	)
	if err != nil {
		return err
	}

	return nil

@ziggie1984
Collaborator

I'm surprised you don't already do that as part of lightning/bolts#919? I thought every implementation had put this mitigation in place 2 years ago (otherwise lnd is potentially at risk of the kind of dust attack described in that PR). Eclair has that kind of mitigation, which should have avoided that situation if eclair had been the channel funder.

What I am wondering though: why is eclair signing this breach of the contract? For eclair it must also be known that the remote commitment dips below reserve; do you actually log this, even as a warning?

I'm surprised you don't already do that as part of lightning/bolts#919?

Hmm, we have a dust exposure check in place, but in the code base it only counts dust HTLCs.

I guess that issue is another good argument for lightning/bolts#867,

Thanks for the hint, I need to look at this proposal. Sounds promising, hopefully without decreasing speed too much :)

@t-bast
Contributor

t-bast commented Aug 22, 2023

What I am wondering though: why is eclair signing this breach of the contract? For eclair it must also be known that the remote commitment dips below reserve; do you actually log this, even as a warning?

Eclair would indeed have complained at a later step and sent an error.

Hmm we have a dust exposure check in place but this does only count for dust htlcs in the code base.

Sorry that was the wrong "additional reserve", I meant this one: lightning/bolts#740

@ziggie1984
Collaborator

Could you maybe point me to the part of the code where eclair defines the additional buffer, just to get a feeling for its size? :)

@t-bast
Contributor

t-bast commented Aug 22, 2023

Sure, you can find that code here: https://github.com/ACINQ/eclair/blob/3547f87f664c5e956a6d13af530a3d1cb6fc1052/eclair-core/src/main/scala/fr/acinq/eclair/channel/Commitments.scala#L435

This function is called when trying to send an outgoing HTLC: if that HTLC violates some limit (such as this additional buffer), we simply don't send it (and try another channel with the same peer, or if none work, propagate the failure to the corresponding upstream channel).

@saubyk saubyk removed the interop interop with other implementations label Aug 22, 2023
@ziggie1984
Collaborator

ziggie1984 commented Aug 22, 2023

Sure, you can find that code here: https://github.com/ACINQ/eclair/blob/3547f87f664c5e956a6d13af530a3d1cb6fc1052/eclair-core/src/main/scala/fr/acinq/eclair/channel/Commitments.scala#L435

Basically, Eclair (I think also CLN) has a fee buffer of 2x the current commitment fee (the additional-HTLC consideration is consumed by the HTLC that will be sent out by the current payment request, so I do not count it here; we already do this). Every new incoming HTLC adds approximately 20% in fees, so a buffer of 200% (2x the current commitment fee) buys us about 10 HTLC slots for concurrent HTLCs. The buffer grows when HTLCs are on the channel (the commitment tx is bigger), which also acts as a nice HTLC limiter when our balance is very low. I like this approach as well and would implement the additional fee buffer; let me know what you guys think 👻

This would basically implement this: lightning/bolts#740
Especially:

The node _responsible_ for paying the Bitcoin fee should maintain a "fee
spike buffer" on top of its reserve to accommodate a future fee increase.
Without this buffer, the node _responsible_ for paying the Bitcoin fee may
reach a state where it is unable to send or receive any non-dust HTLC while
maintaining its channel reserve (because of the increased weight of the
commitment transaction), resulting in a degraded channel. See [#728](https://github.com/lightningnetwork/lightning-rfc/issues/728)
for more details.
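
(To make that concrete, here is a hypothetical Go sketch of such a funder-side check in the spirit of lightning/bolts#740. The function name, parameters and the one extra HTLC slot are illustrative assumptions, not lnd's or eclair's actual logic; the weight figures are the ones used earlier in this thread.)

package main

import "fmt"

// feeBufferOK illustrates a "fee spike buffer": the funder only offers an
// HTLC if, at TWICE the current feerate and with one additional HTLC output
// on the commitment, its balance still covers the channel reserve.
func feeBufferOK(balanceMsat, htlcAmtMsat, reserveMsat, feePerKw, numHTLCs int64) bool {
    const baseWU, htlcWU, anchorsMsat = 1124, 173, 660_000

    // Weight with the HTLC being offered plus one extra slot for a
    // concurrent add from the peer.
    weight := baseWU + (numHTLCs+2)*htlcWU
    bufferedFeeMsat := weight * 2 * feePerKw // WU * (sat/kWU) == msat

    return balanceMsat-htlcAmtMsat-bufferedFeeMsat-anchorsMsat >= reserveMsat
}

func main() {
    // Using figures reconstructed earlier in the thread (to_local
    // 45_743_159 msat, two HTLCs already pending, offering 11_375_456 msat
    // at 2500 sat/kw with a 30_000 sat reserve): the check fails, so the
    // HTLC would not have been offered on this channel (prints false).
    fmt.Println(feeBufferOK(45_743_159, 11_375_456, 30_000_000, 2500, 2))
}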

@morehouse
Collaborator

Indeed LND is missing the fee spike buffer (see #7721). We should implement that.

The fee spike buffer helps, but IIUC it doesn't completely prevent concurrent HTLC problems. But even if an HTLC race occurs, why do we need to error and force close? If the next commitment would violate the channel reserve, can't we just disconnect to drop the proposed updates? Then on reconnect we can do exponential backoff to prevent the race from reoccurring indefinitely.

@Roasbeef
Member

@morehouse hehe I came here to link that very issue. AFAICT, it's an edge case where we send an HTLC (as the initiator) and can pay for that HTLC, but then the remote party sends another HTLC concurrently (they haven't seen ours yet); once that HTLC is ACK'd and a sig is ready to go out, we fail, as adding it would dip us below the reserve.

@Roasbeef
Member

@t-bast we had an initial workaround which helped to spur that spec PR here. In practice, it looks like it wasn't enough to fully address this edge case.

@morehouse
Collaborator

morehouse commented Aug 24, 2023

@morehouse hehe I came here to link that very issue. AFAICT, it's an edge case where we send an HTLC (as the initiator) and can pay for that HTLC, but then the remote party sends another HTLC concurrently (they haven't seen ours yet); once that HTLC is ACK'd and a sig is ready to go out, we fail, as adding it would dip us below the reserve.

Right. So we could just disconnect to get out of that bad state (should also send a warning). No need to send an error and cause a force close.

Edit: Never mind, I think I see the problem now. The 2 commitments contain different HTLCs that are OK on their own, but combined they cause the channel reserve to be violated. And we can't go back to the previous state because it's already been revoked.

@morehouse
Collaborator

Edit: Never mind, I think I see the problem now. The 2 commitments contain different HTLCs that are OK on their own, but combined they cause the channel reserve to be violated. And we can't go back to the previous state because it's already been revoked.

I think this race could still be detected and avoided without Rusty's simplified commitment update.

When we receive a commitment_signed to update our commitment from X to X+1, we do the usual checks. Then we also create a "speculative commitment" X+2 with any HTLCs we've previously offered that haven't been ACKed yet. If channel reserve or other checks fail on the speculative commitment, we disconnect to reset state to X and backoff on reconnect. Otherwise, (the speculative commitment checks out), we can safely advance our commitment to X+1 and revoke X.
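
(A rough Go sketch of what this speculative check could look like; the types and the reserve math are illustrative stand-ins rather than lnd's actual API, and the follow-up comments below discuss why this alone may not be enough.)

package main

import "fmt"

type htlc struct{ amtMsat int64 }

type commitment struct {
    toLocalMsat int64
    htlcs       []htlc
}

// reserveOK is a stand-in for the usual reserve/fee validation, using the
// anchor-channel weight figures quoted earlier in the thread.
func reserveOK(c commitment, feePerKw, reserveMsat int64) bool {
    const baseWU, htlcWU, anchorsMsat = 1124, 173, 660_000
    feeMsat := (baseWU + int64(len(c.htlcs))*htlcWU) * feePerKw
    return c.toLocalMsat-feeMsat-anchorsMsat >= reserveMsat
}

// okToRevoke runs after a commitment_signed for state X+1 passes the normal
// checks: it also builds a speculative X+2 containing our HTLC offers that
// have not been ACKed yet, and refuses to revoke X (we would disconnect
// instead) if that speculative state would dip below the reserve.
func okToRevoke(next commitment, unackedOffered []htlc, feePerKw, reserveMsat int64) bool {
    spec := commitment{toLocalMsat: next.toLocalMsat, htlcs: append([]htlc{}, next.htlcs...)}
    for _, h := range unackedOffered {
        spec.toLocalMsat -= h.amtMsat // our own offers come out of to_local
        spec.htlcs = append(spec.htlcs, h)
    }
    return reserveOK(next, feePerKw, reserveMsat) && reserveOK(spec, feePerKw, reserveMsat)
}

func main() {
    // With figures from the channel state reconstructed earlier in the
    // thread, X+1 on its own passes, but the speculative state including
    // our pending 11_375_456 msat offer dips below the 30_000 sat reserve,
    // so the sketch would disconnect rather than revoke (prints false).
    next := commitment{toLocalMsat: 45_743_159, htlcs: []htlc{{117_941_049}, {80_040_806}}}
    fmt.Println(okToRevoke(next, []htlc{{11_375_456}}, 2500, 30_000_000))
}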

@yyforyongyu
Member

Then we also create a "speculative commitment" X+2 with any HTLCs we've previously offered that haven't been ACKed yet.

I think the issue is we don't know what to include in X+2, because the incoming HTLC has not yet arrived or been processed. We can easily handle a parallel update_add_htlc, but not a combination of update_add_htlc followed closely by commitment_signed.

@yyforyongyu
Member

The case we've covered in lnd,

  1. Alice sends update_add_htlc
  2. Bob receives Alice's HTLC
  3. Bob sends update_add_htlc, which would fail because of the validateAddHtlc check.
  4. Alice would fail too due to the same check.

The case we should cover in lnd,

  1. Alice sends update_add_htlc
  2. Bob sends update_add_htlc
  3. Alice receives Bob's HTLC, and should fail here
  4. Bob receives Alice's HTLC, and should fail here

We should add a method validateReceiveHtlc similar to validateAddHtlc and use it in ReceiveHTLC.

The case that cannot be fixed atm,

  1. Alice sends update_add_htlc
  2. Alice sends commitment_signed <- commits this HTLC on Alice's remote chain
  3. Bob sends update_add_htlc
  4. Bob sends commitment_signed <- commits this HTLC to Bob's remote chain
  5. Alice receives Bob's HTLC and commit sig, but fails it due to dipping below channel reserve
  6. Bob receives Alice's HTLC and commit sig, but fails it due to dipping below channel reserve

There are a few mitigations for this case,

  1. implement the "fee spike buffer" from the spec (#7721) using @ziggie's suggestion
  2. change IdealCommitFeeRate used here to return min relay fee for anchor channels and let the fee bumping be handled by CPFP.

@ziggie1984
Collaborator

ziggie1984 commented Aug 28, 2023

When we receive a commitment_signed to update our commitment from X to X+1, we do the usual checks. Then we also create a "speculative commitment" X+2 with any HTLCs we've previously offered that haven't been ACKed yet. If channel reserve or other checks fail on the speculative commitment, we disconnect to reset state to X and backoff on reconnect. Otherwise, (the speculative commitment checks out), we can safely advance our commitment to X+1 and revoke X.

Hmm, I am not sure how this is possible. When we send out an HTLC to the peer which they haven't acked, that does not mean we can forget about it (our signature is out and we need a revocation for it). They will eventually revoke_and_ack their state including the HTLCs we offered them, forcing us to include those in our commitment as well when verifying their signature (because they anticipate that we have them on our local commitment once they have acked our offered HTLCs)?

we disconnect to reset state to X and backoff on reconnect.

What do you mean by this, that we stop using the channel?

@ziggie1984
Collaborator

The case we should cover in lnd,

Alice sends update_add_htlc
Bob sends update_add_htlc
Alice receives Bob's HTLC, and should fail here
Bob receives Alice's HTLC, and should fail here

With the current protocol, where each node sends out a CommitSig almost immediately (50 ms) after sending the UpdateAddHTLC, we would also make the channel unusable, because we need a revocation from the peer. So I think the fee buffer is our best shot at this problem? We just keep slots for the peer to add some HTLCs, in case it happens concurrently?

@yyforyongyu
Member

With the current protocol, where each node sends out a CommitSig almost immediately (50 ms) after sending the UpdateAddHTLC, we would also make the channel unusable, because we need a revocation from the peer. So I think the fee buffer is our best shot at this problem?

Yes, but then this would be the third case.

@Crypt-iQ
Collaborator

Just to add a little bit more color:

  1. We send an HTLC to the peer, this calls AddHTLC. This will call validateCommitmentSanity for the remote commitment with their updates on our commitment transaction (lc.localCommitChain.tail().theirMessageIndex) and all of our updates (lc.localUpdateLog.logIndex). It'll also call validateCommitmentSanity for the local commit with all of their updates (lc.remoteUpdateLog.logIndex) and all of our updates (lc.localUpdateLog.logIndex)
  2. We sign the commit and send it over
  3. We receive an HTLC from the peer, calling ReceiveHTLC. This will call validateCommitmentSanity for the local commit with all the remote updates (lc.remoteUpdateLog.logIndex) and all of our updates that are on their lowest unrevoked commitment (we call this the tail -- lc.remoteCommitChain.tail().ourMessageIndex). This doesn't trigger the reserve check because both HTLC's aren't included as they haven't yet revoked the tail commit
  4. They send a commit sig
  5. We send a revoke
  6. They send a revoke and a sig
  7. When we receive the revoke, we'll attempt to sign a new commitment. The issue though is that we'll now include both HTLC's and fail the reserve check

There are probably a bunch of things we could do here -- we could be more strict in ReceiveHTLC and include more HTLC's in the call to validateCommitmentSanity (e.g. check all of our updates, all of their updates), but that could be too strict. I think the best thing here is to use the fee-buffer and not try to worry about involved update accounting. It won't work for every case -- maybe there's a large fee spike and a concurrent update or maybe somebody sends a lot of HTLC's in a single update round. But I think it should be good enough. I don't know that it makes sense to change the error message because with LND-LND people might have to force close anyways since the channel gets stuck in a reestablish loop.
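
(For concreteness, a rough Go sketch of the stricter receive-side check mentioned above, i.e. a validateReceiveHtlc-style counterpart that counts all of our pending offers. Names, parameters and the reserve math are hypothetical rather than lnd's actual API, and as noted this may be too strict and still would not cover the already-signed case.)

package main

import "fmt"

// receiveHTLCWouldDipReserve counts ALL of our pending (not yet locked-in)
// offers against to_local, instead of only the updates the peer has already
// signed for, so a concurrent add is caught before we sign for it.
func receiveHTLCWouldDipReserve(ourBalanceMsat, reserveMsat, feePerKw int64,
    ourPendingOffersMsat []int64, totalHTLCs int64) bool {

    const baseWU, htlcWU, anchorsMsat = 1124, 173, 660_000

    balance := ourBalanceMsat
    for _, amt := range ourPendingOffersMsat {
        balance -= amt
    }
    feeMsat := (baseWU + totalHTLCs*htlcWU) * feePerKw
    return balance-feeMsat-anchorsMsat < reserveMsat
}

func main() {
    // With the thread's figures, accepting the peer's concurrent add while
    // our own 11_375_456 msat offer is still pending would be rejected
    // (prints true, i.e. the HTLC would dip us below the reserve).
    fmt.Println(receiveHTLCWouldDipReserve(45_743_159, 30_000_000, 2500,
        []int64{11_375_456}, 3))
}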
