Channel Splicing (feature 62/63) by t-bast · Pull Request #1160 · lightning/bolts · GitHub

Channel Splicing (feature 62/63) #1160


Open · wants to merge 14 commits into master
Conversation

@t-bast
Collaborator
t-bast commented May 2, 2024

Splicing allows spending the current funding transaction to replace it with a new one that changes the capacity of the channel, allowing both peers to add or remove funds to/from their channel balance.

Splicing takes place while a channel is quiescent, to ensure that both peers have the same view of the current commitments.

We don't want channels to be unusable while waiting for transactions to confirm, so channel operation returns to normal once the splice transaction has been signed and we're waiting for it to confirm. The channel can then be used for payments, as long as those payments are valid for every pending splice transaction. Splice transactions can be RBF-ed to speed up confirmation.
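As a rough illustration of that last constraint, here is a minimal Python sketch, with entirely hypothetical names (`all_pending_fundings`, `commitment_for`, `can_add_outgoing_htlc` are not from any real implementation):

```python
# Hypothetical sketch, not normative: while splice transactions are
# unconfirmed, an outgoing HTLC must be valid against the commitment
# built on top of *every* pending funding transaction (the previous
# funding, each splice, and each RBF attempt).
def can_send_htlc(channel, amount_msat):
    for funding in channel.all_pending_fundings():
        commitment = channel.commitment_for(funding)
        if not commitment.can_add_outgoing_htlc(amount_msat):
            # Would violate reserve or capacity on this candidate.
            return False
    return True
```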

Once one of the pending splice transactions confirms and reaches an acceptable depth, peers exchange splice_locked to discard the other pending splice transactions and the previous funding transaction. The confirmed splice transaction becomes the channel funding transaction.

Nodes then advertise this spliced channel to the network, so that other nodes keep routing payments through it without any downtime.

This PR replaces #863, which contained a lot of legacy mechanisms from early versions of splicing that didn't work in some edge cases (detailed in the test vectors provided in this PR). It can be very helpful to read the protocol flows described in the test vectors: they give a better intuition of how splicing works, and how it deals with message concurrency and disconnections.

This PR requires the quiescence feature (#869) to start negotiating a splice.

Credits to @rustyrussell and @ddustin will be added in the commit messages once we're ready to merge this PR.

@ProofOfKeags
Contributor

Can I suggest we do this as an extension BOLT rather than layering it in with the existing BOLT 2 text? It makes it easier to implement when all of the requirement deltas are in a single document than when they are inlined into the original spec. Otherwise, the PR/branch diff itself is the only way to see the delta, and that can get very messy during the review process as people's commentary comes in. While there are other ways to get at this diff without the commentary, an extension BOLT would make the UX of getting at it rather straightforward.

Given that the change is gated behind a feature bit anyway it also makes it easier for a new implementation to bootstrap itself without the splice feature by just reading the main BOLTs as is.

At some point in the future when splicing support becomes standard across the network we can consolidate the extension BOLT into the main BOLTs if people still prefer.

@t-bast
Collaborator Author
t-bast commented May 3, 2024

Why not, if others also feel that it would be better as an extension BOLT. I prefer it directly in BOLT 2, for the following reasons:

  • Most of it is self contained in its own section(s) anyway.
  • It's an important part of the channel lifecycle: channels are opened, then during normal operation payments are relayed and splices happen, then the channel eventually closes. It is nicely reflected in the architecture of the Bolt 2 sections right now.
  • The few additions to existing message TLVs (commit_sig, tx_add_input, tx_signatures) should not be in a separate document when merging, because otherwise different features may use the same TLV tags without realizing it, with a risk of inadvertently shipping incompatible code. I think it's important that all TLVs for a given message are listed in that message's section, this way you know you don't have to randomly search the BOLTs for another place where TLVs may be defined.

But if I'm the only one thinking this is better, I'll move it to a separate document!

One thing to note is that we already have two implementations (eclair and cln), and maybe a 3rd one (LDK), that are very close to code-complete and have had months of experience on mainnet, which means the spec is almost final and we should be able to merge it into the BOLTs in the not-so-distant future (:crossed_fingers:).

@ddustin
Contributor
ddustin commented Jun 4, 2024

One thing I've been thinking about: with large splices across many nodes, if some node fails to send signatures (likely because two nodes in the cluster demand to sign last), then the splice will hang on tx_signatures.

I believe we need two things to address this:

  1. Timeout logic where splices are aborted
  2. Being lax about having sent our tx_signatures but getting nothing back

Currently CLN fails the channel in this case, as taking signatures and not responding is rather rude, but this is bad because it could lead to clusters of spliced channels being closed.

The unfortunate side effect of this is we have to be comfortable sending out signatures with no recourse for not getting any back.

I believe long term the solution is to maintain a signature-sending reputation for each peer and eventually blacklist peers from doing splices and / or fail your channels with that peer.

A reputation system may be beyond the needs of the spec but what to do with hanging tx_signatures (timeout etc) should be in the spec with a note about this problem.
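To make point 1 concrete, here is a rough Python sketch; the names and timeout value are hypothetical, and t-bast notes below that the proposed spec handles this at the quiescence level instead:

```python
import time

SIGNATURE_TIMEOUT_SECONDS = 120  # illustrative value, not from the spec

def await_tx_signatures(splice, peer):
    # Give up on the splice attempt instead of hanging forever; the
    # channel itself stays usable on the previous funding transaction.
    deadline = time.monotonic() + SIGNATURE_TIMEOUT_SECONDS
    while time.monotonic() < deadline:
        if splice.received_tx_signatures():
            return True
        time.sleep(1)
    peer.disconnect(warning="timed out waiting for tx_signatures")
    return False
```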

@t-bast
Collaborator Author
t-bast commented Jun 6, 2024
  1. Timeout logic where splices are aborted

This is already covered at the quiescence level: quiescence will time out if the splice doesn't complete (e.g. because we haven't received tx_signatures).

  2. Being lax about having sent our tx_signatures but getting nothing back

I don't think this is necessary, and I think we should really require people to send tx_signatures when it is owed, to ensure that we get to a clean state on both peers.

if some node fails to send signatures (likely because two nodes in the cluster demand to sign last)

It seems like we've discussed this many times already: this simply cannot happen, because ordering based on contributed amount fixes it. Can you detail a concrete scenario where tx_signatures ordering leads to a deadlock?
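For reference, a Python sketch of the ordering rule being referred to; the lexicographic node_id tie-break is an assumption here, see the interactive-tx requirements in BOLT 2 for the normative text:

```python
# Whoever contributed fewer satoshis to the splice transaction sends
# tx_signatures first, so both sides can always compute whose turn it
# is and no two nodes can both insist on signing last.
def sends_tx_signatures_first(our_sat, their_sat, our_node_id, their_node_id):
    if our_sat != their_sat:
        return our_sat < their_sat
    return our_node_id < their_node_id  # assumed tie-break
```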

Comment on lines +1711 to +1780
- Either side has added an output other than the channel funding output
and the balance for that side is less than the channel reserve that
matches the new channel capacity.
Contributor


What does it mean to have a channel reserve that "matches the new channel capacity"? AFAICT the channel_reserve is specified in satoshis, and reading the negotiation process of this proposal doesn't seem to indicate that there is any change happening to that parameter during negotiation.

Collaborator Author


AFAICT the channel_reserve is specified in satoshis

Not with dual-funding, where the channel reserve is 1% of the channel capacity. That's why this is potentially changing "automatically" when splicing on top of a dual-funded channel if we want to keep using 1%.

But you're right to highlight this: the channel reserve behavior is very loosely specified for now, and there were a lot of previous discussions with @morehouse regarding what we should do when splicing. Another edge case that we must specify better is what happens when splicing on top of a non-dual-funded channel, where the channel reserve was indeed a static value instead of a proportional one!

The channel reserve behavior is IMO the only missing piece of this specification that we should discuss, thanks for bringing it up!

Contributor


Could be a good thing to discuss in Tokyo!

Also worth stepping back and double-checking that the reserve requirement makes sense in its current form generally 👀.

Collaborator Author


What do you think of the following behavior for handling channel reserves (see the sketch after this list):

  • Whenever a splice happens, the channel is automatically enrolled into the 1% reserve policy, even if it wasn't initially a dual-funded channel (unless 0-reserve is used of course, see Add option_zero_reserve (FEATURE 64/65) #1140)
  • Splice-out is not allowed if you end up below your pre-splice reserve (your peer will reject that splice with tx_abort)
  • Otherwise, it's ok if one side ends up below the channel reserve after a splice: this is the same behavior as when a new channel is created. If we get into that state, the peer that is below the channel reserve:
    • is not allowed to send outgoing HTLCs
    • is allowed to receive incoming HTLCs
    • if it is paying the commit fees, it is allowed to dip further into its channel reserve to receive HTLCs (because of the added weight of the HTLC output), because we must be able to move liquidity to their side to get them above their reserve
  • When there are multiple unconfirmed splices, we use the highest channel reserve of all pending splices (ie requirements must be satisfied for all pending splice transactions)
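A minimal Python sketch of that policy, with illustrative names only, assuming the 1% figure from dual funding:

```python
# Hypothetical sketch of the proposed reserve policy; not normative.
def reserve_sat(capacity_sat, zero_reserve=False):
    # Once spliced, the channel uses the 1% proportional reserve
    # (unless option_zero_reserve applies).
    return 0 if zero_reserve else capacity_sat // 100

def splice_out_allowed(balance_after_sat, pre_splice_capacity_sat):
    # Splice-out must not take the sender below its *pre-splice* reserve.
    return balance_after_sat >= reserve_sat(pre_splice_capacity_sat)

def effective_reserve_sat(pending_splice_capacities_sat):
    # With multiple unconfirmed splices, requirements must hold for the
    # highest reserve among all pending splice transactions.
    return max(reserve_sat(c) for c in pending_splice_capacities_sat)
```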

As discussed during yesterday's meeting, there are subtle edge cases due to concurrent updates: this is inherent to the current commitment protocol, but will eventually become much simpler with #867

@ddustin @ProofOfKeags @rustyrussell @ziggie1984 @morehouse

Contributor
ziggie1984 commented Sep 11, 2024


related: ACINQ/eclair#2899 (comment), which tries to specify the concurrent edge cases, and also the case where we would already (without splicing) allow the peer paying the fees to dip below its reserve.

Contributor
morehouse commented Sep 11, 2024


@t-bast

That all seems reasonable to me. The one part where we could get into trouble is:

if it is paying the commit fees, it is allowed to dip further into its channel reserve to receive HTLCs (because of the added weight of the HTLC output), because we must be able to move liquidity to their side to get them above their reserve

This allows the reserve to be violated, potentially all the way down to 0. In that situation, there is ~zero incentive to broadcast the latest commitment on force close.

That said, I know the implementation details are hairy to do things completely safely. And we can also look forward to zero-fee commitments with TRUC and ephemeral anchors, which would obsolete the "dip-into-reserve to pay fees" exception entirely.

Collaborator Author


This allows the reserve to be violated, potentially all the way down to 0. In that situation, there is ~zero incentive to broadcast the latest commitment on force close.

Since we only allow this to happen when the node paying the fee receives HTLCs, the other node sending those HTLCs can limit the exposure by controlling how many HTLCs they send in a batch (or keep pending in the commitment) when we're in this state.

There are unfortunately cases where even a single HTLC would make the node paying the fee have no output (small channels with high feerates), but when that happens you really don't have any other option: the channel is otherwise unusable, so your only alternative is to force-close anyway, which isn't great...

And we can also look forward to zero-fee commitments with TRUC and ephemeral anchors, which would obsolete the "dip-into-reserve to pay fees" exception entirely.

Exactly, this is coming together (look at this beautiful 0-fee commitment transaction: https://mempool.space/testnet4/tx/85f2256c8d6d61498c074d53912d1f0ef907ee508bb06f5701f3826432ba53b8), which will finally get rid of this kind of mess. I'm fine with using an imperfect but simple work-around in the meantime!

Contributor
ziggie1984 commented Sep 13, 2024


I wonder if this requirement would be used solely for the splicing case (allowing HTLCs that dip the opener into its reserve), or whether we should make it an overall requirement. If so, there is a problem with backwards compatibility, because older nodes (speaking for LND nodes) will force-close if the opener dips below its reserve. So maybe it makes sense to only activate it for splicing use cases, so that we don't run into backwards compatibility issues?

Collaborator Author


Good idea!

We add more test vectors to describe how reconnection should be handled after
a splice, before receiving `splice_locked`, when there are pending updates.
@wpaulino
Contributor
wpaulino commented Jun 4, 2025

@t-bast @ddustin not sure if this was also an issue for your implementations: LDK does not track the full funding transaction of an inbound channel (only the funding outpoint), so it cannot provide the prevtx TLV in the tx_add_input message for the funding input being spent in a splice. We can of course start tracking the funding transaction on inbound channels, but that wouldn't help any existing channels already confirmed. Since the spec already requires that funding transactions must not be malleable, maybe we can omit the TLV for this specific case?

@TheBlueMatt
Collaborator

And would prefer to not start storing the full funding transaction for every inbound channel :)

@t-bast
Collaborator Author
t-bast commented Jun 5, 2025

it cannot provide the prevtx TLV in the tx_add_input message for the funding input being spent in a splice. We can of course start tracking the funding transaction on inbound channels, but that wouldn't help any existing channels already confirmed. Since the spec already requires that funding transactions must not be malleable, maybe we can omit the TLV for this specific case?

That is exactly why we've introduced the shared_input_txid TLV instead of including the prevtx field; is the spec not clear enough here? This is described in the paragraph on tx_add_input for splicing, where we explicitly say that the prevtx field must not be filled in for the previous funding transaction.

I don't think anyone stores the whole funding transaction, everyone just tracks the outpoint, and as you mention, we know segwit is used so we're good. On top of that, if we had to transmit the funding transaction in the prevtx field, we wouldn't be able to splice channels whose funding transaction exceeds 65kB, which would be a very annoying limitation!
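To illustrate, a Python sketch with a hypothetical field layout, following the PR text rather than any real wire encoding: the shared funding input is announced by txid only, with prevtx deliberately omitted.

```python
# Hypothetical sketch: building tx_add_input for the shared funding
# input of a splice. Only the funding txid travels on the wire (the
# shared_input_txid TLV), never the full previous funding transaction.
def splice_funding_input(channel):
    return {
        "channel_id": channel.channel_id,
        "serial_id": channel.next_serial_id(),
        "prevtx": None,  # deliberately absent for the shared input
        "prevtx_vout": channel.funding_outpoint.vout,
        "sequence": 0xFFFFFFFD,
        "tlvs": {"shared_input_txid": channel.funding_outpoint.txid},
    }
```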

@jkczyz
Contributor
jkczyz commented Jun 5, 2025

That is exactly why we've introduced the shared_input_txid TLV instead of including the prevtx field, is the spec not clear enough here? This is described in the paragraph for tx_add_input for splicing, where we explicitly say that the prevtx field must not be filled for the previous funding transaction.

Huh... not sure how we missed that. Leaving some clarifying comments.

t-bast added 3 commits June 11, 2025 17:24
It is always helpful to reference the `funding_txid` that is spent by
a `commit_sig`, even when there are no pending splices. It's also easier
for implementations to always include it.
We add a `message_type` TLV to `start_batch` that must be used when the
batch contains only messages of the same type, which is how it is used
for splicing (where we send a batch of `commitment_signed` messages).
As suggested by @jkczyz, we clarify requirements around:

- the `shared_funding_txid` field
- `start_batch` maximum size and RBF attempts
- `channel_reestablish` ordering with `splice_locked`
@ddustin
Contributor
ddustin commented Jun 13, 2025

To be honest, I don't think this should become a general mechanism: it is still very hacky compared to using a single message (because of DoS concerns). I think this is something we should only ever use for cases like splicing where we really don't have a choice, and it's very likely that this will be the only usage of that start_batch message ever.

So we want the minimal level of abstraction to potentially allow other use-cases, without adding too much complexity. For the splice use-case, it is true that what we want is to restrict the whole batch to having only commit_sig messages for the same channel_id, so having an explicit message_type would make a lot of sense. It should probably be in an optional TLV though, otherwise we would never be able to allow batches of distinct messages if we ever need it? What do you think of adding this TLV:

1. type: 1 (`message_type`)
2. data:
    * [`u16`:`message_type`]

What is the expected behavior for unrecognized message_type -- do we want to

  1. Close the connection with a warning
  2. Ignore the start_batch message
  3. Ignore the start_batch message and the batch_size messages following it
  4. Freak out and force close

@t-bast
Collaborator Author
t-bast commented Jun 16, 2025

What is the expected behavior for unrecognized message_type -- do we want to

I think that generally, we should just ignore a start_batch that we don't understand and act as if it wasn't there (i.e. process the following messages sequentially). Or do you think we should behave more strictly?
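A sketch of that lenient behavior (Python, hypothetical receiver state; the only known batch type being commitment_signed matches the splicing use-case discussed above):

```python
COMMITMENT_SIGNED = 132  # BOLT 1 message type for commitment_signed

def on_start_batch(peer, start_batch):
    msg_type = start_batch.tlvs.get("message_type")
    if msg_type != COMMITMENT_SIGNED:
        # Unknown batch type: ignore start_batch entirely and process
        # the following messages sequentially, as if it wasn't there.
        return
    peer.expect_batch(COMMITMENT_SIGNED, start_batch.batch_size)
```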

@ddustin
Contributor
ddustin commented Jun 16, 2025

What is the expected behavior for unrecognized message_type -- do we want to

I think that generally, we should just ignore start_batch that we don't understand and act as if it wasn't there (ie process the following messages sequentially). Or do you think we should behave more strictly?

Ah yeah that makes total sense 👍

As proposed by @ddustin, we explicitly narrow the requirements for
`start_batch` to match our only use-case for it (splicing). We can
change that in the future if we use this message for other features.
Some of those requirements shouldn't be gated on announcing the channel,
and we clarify that we retransmit once per connection.
Comment on lines +3382 to +3385
- if it receives `channel_ready` for that transaction after exchanging `channel_reestablish`:
- MUST retransmit `channel_ready` in response, if not already sent since reconnecting.
- if it receives `splice_locked` for that transaction after exchanging `channel_reestablish`:
- MUST retransmit `splice_locked` in response, if not already sent since reconnecting.
Contributor


As written, these requirements are dependent on receiving other messages. This seems more complicated than it needs to be. Instead, can't the retransmission requirements live entirely within channel_reestablish's last "A receiving node" section? There's already a requirement there to retransmit splice_locked, so the one here seems redundant. We'd just need to add a requirement there for retransmitting channel_ready.

Am I missing something?
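For context, a sketch of the retransmission rule quoted above (Python, with hypothetical per-connection state; channel_ready would follow the same pattern):

```python
# Retransmit splice_locked in response at most once per connection.
# All names are illustrative, not from any implementation.
def on_splice_locked(channel, peer, msg):
    txid = msg.splice_txid
    if not channel.splice_is_locked_locally(txid):
        return  # nothing to echo yet; wait for our own confirmation depth
    if channel.sent_splice_locked_this_connection(txid):
        return  # already sent since reconnecting
    peer.send(channel.make_splice_locked(txid))
    channel.mark_splice_locked_sent(txid)
```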

Collaborator Author


I'm not sure it would be clearer; I like keeping all of those requirements under the "if option_splice was negotiated" section. Overall, I think the channel_reestablish requirements deserve a refactoring, but nobody was interested in it (see #1049 and #1051), so I gave up 🤷‍♂️

Can you try refactoring like what you suggest? If it's better I'll include that.
