Batch on-chain claims more aggressively per channel #3340

Draft: wants to merge 9 commits into main
Conversation

wvanlint (Contributor):

When batch claiming was first added, it was only done for claims which were not pinnable, i.e. those which can only be claimed by us.

This was the conservative choice - pinning of outputs claimed by a batch would leave the entire batch unable to confirm on-chain. However, if pinning is considered an attack that can be executed with a high probability of success, then there is no reason not to batch claims of pinnable outputs together, separate from unpinnable outputs.

Whether specific outputs are pinnable can change over time - those that are not pinnable will eventually become pinnable at the height at which our counterparty can spend them. Thus, outputs are treated as pinnable if they're within COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE of that height.

Aside from outputs being pinnable or not, locktimes are also a factor for batching claims. HTLC-Timeout claims have locktimes fixed by the counterparty's signature and thus can only be aggregated with other HTLCs of the same CLTV, which we have to check for.

The complexity required here is worth it - aggregation can save users a significant amount of fees in the case of a force-closure, and directly impacts the number of UTXOs needed as a reserve for anchors.
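The batching rules described above can be sketched as follows. This is an illustrative stand-in, not LDK's actual `PackageTemplate` API: the `Claim` enum, `batch_claims` function, and the idea of keying HTLC-timeout claims by CLTV are assumptions made for the example; the real logic lives in `package.rs`/`onchaintx.rs`.

```rust
use std::collections::BTreeMap;

// Hypothetical claim classification for illustration only.
#[derive(Debug, Clone, PartialEq)]
enum Claim {
    Pinnable { output_idx: u32 },
    Unpinnable { output_idx: u32 },
    // Locktime fixed by the counterparty's signature, so it can only be
    // aggregated with other HTLCs sharing the same CLTV.
    HtlcTimeout { output_idx: u32, cltv: u32 },
}

// Partition claims into batches: pinnable outputs together, unpinnable
// outputs together, and one batch per distinct HTLC-timeout CLTV.
fn batch_claims(claims: Vec<Claim>) -> Vec<Vec<Claim>> {
    let mut pinnable = Vec::new();
    let mut unpinnable = Vec::new();
    let mut by_cltv: BTreeMap<u32, Vec<Claim>> = BTreeMap::new();
    for claim in claims {
        match claim {
            Claim::Pinnable { .. } => pinnable.push(claim),
            Claim::Unpinnable { .. } => unpinnable.push(claim),
            Claim::HtlcTimeout { cltv, .. } => by_cltv.entry(cltv).or_default().push(claim),
        }
    }
    let mut batches = Vec::new();
    if !pinnable.is_empty() { batches.push(pinnable); }
    if !unpinnable.is_empty() { batches.push(unpinnable); }
    batches.extend(by_cltv.into_values());
    batches
}
```

Each resulting batch can then be claimed in a single transaction, which is where the fee savings and the reduced anchor-reserve UTXO count come from.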

@wvanlint (Contributor, Author):

This change depends on #3297.

codecov bot commented Sep 25, 2024

Codecov Report

Attention: Patch coverage is 93.87223% with 47 lines in your changes missing coverage. Please review.

Project coverage is 89.59%. Comparing base (d35239c) to head (f7e8c9f).
Report is 123 commits behind head on main.

Files with missing lines           Patch %   Lines
lightning/src/ln/monitor_tests.rs  89.78%    33 Missing ⚠️
lightning/src/chain/package.rs     97.22%    7 Missing and 2 partials ⚠️
lightning/src/chain/onchaintx.rs   87.17%    2 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3340      +/-   ##
==========================================
- Coverage   89.85%   89.59%   -0.27%     
==========================================
  Files         126      126              
  Lines      104145   103322     -823     
  Branches   104145   103322     -823     
==========================================
- Hits        93577    92567    -1010     
- Misses       7894     8025     +131     
- Partials     2674     2730      +56     
Flag Coverage Δ
89.59% <93.87%> (?)

Flags with carried forward coverage won't be shown.


@TheBlueMatt (Collaborator):

Needs rebase now 🎉

@TheBlueMatt TheBlueMatt linked an issue Oct 19, 2024 that may be closed by this pull request
TheBlueMatt and others added 4 commits October 28, 2024 10:57
In the next commit we'll be changing the order some transactions
get spent in packages, causing some tests to spuriously fail. Here
we update a few tests to avoid that by checking sets of inputs
rather than specific ordering.
Currently our package merging logic is strewn about between
`package.rs` (which decides various flags based on the package
type) and `onchaintx.rs` (which does the actual merging based on
the derived flags as well as its own logic), making the logic hard
to follow.

Instead, here we consolidate the package merging logic entirely
into `package.rs` with a new `PackageTemplate::can_merge_with`
method that decides if merging can happen. We also simplify the
merge pass in `update_claims_view_from_requests` to try to
maximally merge by testing each pair of `PackageTemplate`s we're
given to see if they can be merged.

This is overly complicated (and inefficient) for today's merge
logic, but over the coming commits we'll expand when we can merge
and not having to think about the merge pass' behavior makes that
much simpler (and O(N^2) for <1000 elements done only once when a
commitment transaction confirms is fine).
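The quadratic merge pass described in this commit message can be sketched with a simplified stand-in type. `Package`, its `class` field, and these method bodies are assumptions for illustration; the real type is `PackageTemplate` in `package.rs`.

```rust
// Simplified stand-in for LDK's `PackageTemplate` (illustrative only).
#[derive(Debug, Clone, PartialEq)]
struct Package {
    // Stands in for the real mergeability criteria (pinnability class,
    // locktime, same-tx-tree checks, etc.).
    class: u32,
    outputs: Vec<u32>,
}

impl Package {
    fn can_merge_with(&self, other: &Package) -> bool {
        self.class == other.class
    }
    // Assumes `can_merge_with` was checked by the caller.
    fn merge_package(&mut self, other: Package) {
        debug_assert!(self.can_merge_with(&other));
        self.outputs.extend(other.outputs);
    }
}

// Maximally merge by testing each incoming request against every accepted
// one: O(N^2), but run only once when a commitment transaction confirms,
// over a small number of packages.
fn merge_requests(requests: Vec<Package>) -> Vec<Package> {
    let mut merged: Vec<Package> = Vec::new();
    'outer: for req in requests {
        for existing in merged.iter_mut() {
            if existing.can_merge_with(&req) {
                existing.merge_package(req);
                continue 'outer;
            }
        }
        merged.push(req);
    }
    merged
}
```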
There are multiple factors affecting the locktime of a package:
- HTLC transactions rely on a fixed timelock due to the counterparty's
  signature.
- HTLC timeout claims on the counterparty's commitment transaction
  require satisfying a CLTV timelock.
- The locktime can be set to the latest height to avoid fee sniping.
These factors were combined in a single method, making the separate
factors less clear.
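The three locktime factors listed above can be combined roughly as follows. The function name, signature, and priority ordering here are assumptions for illustration, not LDK's actual API.

```rust
// Illustrative locktime selection for a claim package (not LDK's real code).
fn package_locktime(
    signed_locktime: Option<u32>, // fixed by the counterparty's signature, if any
    cltv_expiry: Option<u32>,     // CLTV an HTLC-timeout claim must satisfy, if any
    cur_height: u32,              // current chain height
) -> u32 {
    // 1) Pre-signed HTLC transactions: the locktime is fixed and cannot change.
    if let Some(lt) = signed_locktime {
        return lt;
    }
    // 2) Otherwise, satisfy any CLTV timelock, and
    // 3) never go below the current height, to discourage fee sniping.
    cltv_expiry.unwrap_or(0).max(cur_height)
}
```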
This moves panics to a higher level, allows failures to be handled
gracefully in some cases, and supports more explicit testing without
using `#[should_panic]`.
-    requests[j].merge_package(merge);
-    break;
+    if let Err(rejected) = requests[j].merge_package(merge) {
+        requests.insert(i, rejected);
Collaborator:

Hmm, removing then inserting at every step is kind of annoying because it generally requires a vec shift... I'm not entirely convinced by this commit. If we want to reduce the risk of accidental panic introductions with code changes, maybe we rename merge_package to make it clearer that it assumes can_merge_with?

wvanlint (Contributor, Author):

Yeah, the re-inserts were only introduced as an alternative to panicking on the Err(_) here. However, those errors should never occur as we call can_merge_with beforehand. Added a debug_assert!(false, _). I can change the re-inserts to a panic! as well to maintain the previous behavior.

The main goal was to push panic!s up in the stack, while avoiding preconditions on merge_package. Since the Result of merge_package is determined by can_merge_with, the latter can be used beforehand to optimize any calls.

I can remove the commit as well though.

node_txn.swap_remove(0);
// The unpinnable, revoked to_self output, and the pinnable, revoked htlc output will
// be claimed in separate transactions.
assert_eq!(node_txn.len(), 2);
Collaborator:

Care to check that the transactions spend different inputs? (similar elsewhere)

wvanlint (Contributor, Author):

Added additional checks to verify that they're spending different outputs here and throughout.

/// Checks if this and `other` are spending types of inputs which could have descended from the
/// same commitment transaction(s) and thus could both be spent without requiring a
/// double-spend.
fn is_possibly_from_same_tx_tree(&self, other: &PackageSolvingData) -> bool {
Collaborator:

incremental-mutants thinks that replacing this entire function with `true` doesn't cause any tests to fail. If it's easy, we should consider a reorg test that hits this (I think that's the only way to hit this?)

wvanlint (Contributor, Author):

I added an additional test in package.rs around merging of packages from different transaction trees if that's sufficient.

@@ -340,6 +344,124 @@ fn sorted_vec<T: Ord>(mut v: Vec<T>) -> Vec<T> {
v
}

fn verify_claimable_balances(mut balances_1: Vec<Balance>, mut balances_2: Vec<Balance>, margin: u64) {
Collaborator:

I'm confused why we can't keep asserting the balance set matches a predefined list exactly? We should be able to calculate the exact fees paid, no?

@wvanlint (Contributor, Author) commented Oct 30, 2024:

I think I got confused with the varying size of the signatures and the fee calculation of spend_spendable_outputs. The weight of the transaction multiplied by the fee rate didn't line up with the actual transaction fee.

Calculated the fee of the transaction exactly now by looking up the input values.

wvanlint and others added 4 commits October 29, 2024 15:35
When batch claiming was first added, it was only done for claims
which were not pinnable, i.e. those which can only be claimed by us.

This was the conservative choice - pinning of outputs claimed by a batch
would leave the entire batch unable to confirm on-chain. However, if
pinning is considered an attack that can be executed with a high
probability of success, then there is no reason not to batch claims of
pinnable outputs together, separate from unpinnable outputs.

Whether specific outputs are pinnable can change over time - those that
are not pinnable will eventually become pinnable at the height at which
our counterparty can spend them. Outputs are treated as pinnable if
they're within `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` of that
height.

Aside from outputs being pinnable or not, locktimes are also a factor
for batching claims. HTLC-timeout claims have locktimes fixed by the
counterparty's signature and thus can only be aggregated with other
HTLCs of the same CLTV, which we have to check for.

The complexity required here is worth it - aggregation can save users a
significant amount of fees in the case of a force-closure, and directly
impacts the number of UTXOs needed as a reserve for anchors.

Co-authored-by: Matt Corallo <[email protected]>
Successfully merging this pull request may close these issues.

Batching of HTLC transactions for anchor output channels