Batch on-chain claims more aggressively per channel #3340
base: main
Conversation
This change depends on #3297.
Force-pushed from 37df581 to 6334a05.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #3340      +/-   ##
==========================================
- Coverage   89.85%   89.59%   -0.27%
==========================================
  Files         126      126
  Lines      104145   103322     -823
  Branches   104145   103322     -823
==========================================
- Hits        93577    92567    -1010
- Misses       7894     8025     +131
- Partials     2674     2730      +56
==========================================
Force-pushed from 6334a05 to f7e8c9f.
Needs rebase now 🎉
In the next commit we'll be changing the order some transactions get spent in packages, causing some tests to spuriously fail. Here we update a few tests to avoid that by checking sets of inputs rather than specific ordering.
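For illustration, a minimal sketch of that style of check using rust-bitcoin types (the helper name and the `expected_outpoints` value are placeholders, not part of the PR): collect the outpoints each transaction spends and compare sets, so the assertion no longer depends on which claim transaction happens to come first.

```rust
use std::collections::HashSet;
use bitcoin::{OutPoint, Transaction};

/// Collects every outpoint spent by the given transactions.
fn spent_outpoints(txn: &[Transaction]) -> HashSet<OutPoint> {
	txn.iter()
		.flat_map(|tx| tx.input.iter().map(|txin| txin.previous_output))
		.collect()
}

// In a test: assert_eq!(spent_outpoints(&node_txn), expected_outpoints);
```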
Currently our package merging logic is strewn about between `package.rs` (which decides various flags based on the package type) and `onchaintx.rs` (which does the actual merging based on the derived flags as well as its own logic), making the logic hard to follow. Instead, here we consolidate the package merging logic entirely into `package.rs` with a new `PackageTemplate::can_merge_with` method that decides if merging can happen. We also simplify the merge pass in `update_claims_view_from_requests` to try to maximally merge by testing each pair of `PackageTemplate`s we're given to see if they can be merged. This is overly complicated (and inefficient) for today's merge logic, but over the coming commits we'll expand when we can merge and not having to think about the merge pass' behavior makes that much simpler (and O(N^2) for <1000 elements done only once when a commitment transaction confirms is fine).
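As a rough sketch of that merge pass (the exact signatures of `can_merge_with` and `merge_package` are assumed here, and `PackageTemplate` is the PR's own type): every pair of requests is tested, and a rejected merge is treated as a bug since `can_merge_with` was consulted first.

```rust
fn aggregate_requests(requests: &mut Vec<PackageTemplate>, cur_height: u32) {
	// O(N^2) pass: for each request, try to fold it into an earlier mergeable one.
	for i in (1..requests.len()).rev() {
		for j in 0..i {
			if requests[i].can_merge_with(&requests[j], cur_height) {
				let merge = requests.remove(i);
				if let Err(rejected) = requests[j].merge_package(merge) {
					// Should be unreachable: `can_merge_with` already approved this pair.
					debug_assert!(false, "merge_package rejected a mergeable package");
					requests.insert(i, rejected);
				}
				break;
			}
		}
	}
}
```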
There are multiple factors affecting the locktime of a package:

- HTLC transactions rely on a fixed timelock due to the counterparty's signature.
- HTLC timeout claims on the counterparty's commitment transaction require satisfying a CLTV timelock.
- The locktime can be set to the latest height to avoid fee sniping.

These factors were combined in a single method, making the separate factors less clear.
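A hedged sketch of how those three factors could combine once separated (the function and parameter names here are illustrative, not the PR's exact API):

```rust
fn package_locktime(
	signed_locktime: Option<u32>,  // fixed by the counterparty's signature (HTLC transactions)
	minimum_locktime: Option<u32>, // CLTV that must be satisfied (HTLC timeout claims)
	best_block_height: u32,        // latest height, used to discourage fee sniping
) -> u32 {
	// A counterparty-signed locktime cannot be changed, so it takes precedence.
	if let Some(locktime) = signed_locktime {
		return locktime;
	}
	// Otherwise lock to the current tip, but never below a required CLTV.
	core::cmp::max(best_block_height, minimum_locktime.unwrap_or(0))
}
```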
This moves panics to a higher level, allows failures to be handled gracefully in some cases, and supports more explicit testing without using `#[should_panic]`.
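A toy illustration of that pattern (not LDK code): returning the rejected value lets callers decide whether to assert, recover, or panic, and lets tests check the failure directly rather than relying on `#[should_panic]`.

```rust
#[derive(Debug, PartialEq)]
struct HtlcClaim { cltv: u32 }

/// Only claims sharing the same CLTV may be batched; reject instead of panicking.
fn merge_claims(batch: &mut Vec<HtlcClaim>, other: HtlcClaim) -> Result<(), HtlcClaim> {
	if batch.iter().any(|c| c.cltv != other.cltv) {
		return Err(other);
	}
	batch.push(other);
	Ok(())
}

#[test]
fn mismatched_cltv_is_rejected() {
	let mut batch = vec![HtlcClaim { cltv: 100 }];
	assert_eq!(merge_claims(&mut batch, HtlcClaim { cltv: 144 }), Err(HtlcClaim { cltv: 144 }));
}
```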
Force-pushed from f7e8c9f to ab8c7f8.
requests[j].merge_package(merge);
break;
if let Err(rejected) = requests[j].merge_package(merge) {
	requests.insert(i, rejected);
Hmm, removing then inserting at every step is kinda annoying cause it generally requires a vec shift... I'm not entirely convinced by this commit. If we want to reduce the risk of accidental panic introductions with code changes, maybe we rename `merge_package` to make it clearer that it assumes `can_merge_with`?
Yeah, the re-inserts were only introduced as an alternative to panicking on the `Err(_)` here. However, those errors should never occur as we call `can_merge_with` beforehand. Added a `debug_assert!(false, _)`. I can change the re-inserts to a `panic!` as well to maintain the previous behavior.

The main goal was to push `panic!`s up in the stack, while avoiding preconditions on `merge_package`. Since the `Result` of `merge_package` is determined by `can_merge_with`, the latter can be used beforehand to optimize any calls.

I can remove the commit as well though.
node_txn.swap_remove(0);
// The unpinnable, revoked to_self output, and the pinnable, revoked htlc output will
// be claimed in separate transactions.
assert_eq!(node_txn.len(), 2);
Care to check that the transactions spend different inputs? (similar elsewhere)
Added additional checks to verify that they're spending different outputs here and throughout.
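One way to express such a check with rust-bitcoin types (the helper name is a placeholder): the two claim transactions must not spend any outpoint in common.

```rust
use std::collections::HashSet;
use bitcoin::Transaction;

fn assert_spend_disjoint_inputs(a: &Transaction, b: &Transaction) {
	let a_inputs: HashSet<_> = a.input.iter().map(|txin| txin.previous_output).collect();
	let b_inputs: HashSet<_> = b.input.iter().map(|txin| txin.previous_output).collect();
	// The two claims must spend disjoint sets of outputs.
	assert!(a_inputs.is_disjoint(&b_inputs), "claims unexpectedly spend the same output");
}
```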
/// Checks if this and `other` are spending types of inputs which could have descended from the
/// same commitment transaction(s) and thus could both be spent without requiring a
/// double-spend.
fn is_possibly_from_same_tx_tree(&self, other: &PackageSolvingData) -> bool {
`incremental-mutants` thinks that replacing this entire function with `true` doesn't cause any tests to fail. If it's easy, we should consider a reorg test that hits this (I think that's the only way to hit this?)
I added an additional test in `package.rs` around merging of packages from different transaction trees, if that's sufficient.
lightning/src/ln/monitor_tests.rs (Outdated)
@@ -340,6 +344,124 @@ fn sorted_vec<T: Ord>(mut v: Vec<T>) -> Vec<T> {
	v
}

fn verify_claimable_balances(mut balances_1: Vec<Balance>, mut balances_2: Vec<Balance>, margin: u64) {
I'm confused why we can't keep asserting the balance set matches a predefined list exactly? We should be able to calculate the exact fees paid, no?
I think I got confused with the varying size of the signatures and the fee calculation of `spend_spendable_outputs`. The weight of the transaction multiplied by the fee rate didn't line up with the actual transaction fee.

Calculated the fee of the transaction exactly now by looking up the input values.
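A sketch of that exact fee calculation (assuming a rust-bitcoin version where `TxOut::value` is an `Amount`; the lookup map is a stand-in for however the test finds the spent outputs): the fee is the total input value minus the total output value.

```rust
use std::collections::HashMap;
use bitcoin::{OutPoint, Transaction};

fn exact_fee_sat(tx: &Transaction, prev_output_values_sat: &HashMap<OutPoint, u64>) -> u64 {
	// Sum the values of the outputs this transaction spends...
	let input_sat: u64 = tx.input.iter()
		.map(|txin| prev_output_values_sat[&txin.previous_output])
		.sum();
	// ...and subtract the value of the outputs it creates.
	let output_sat: u64 = tx.output.iter().map(|txout| txout.value.to_sat()).sum();
	input_sat - output_sat
}
```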
Force-pushed from ab8c7f8 to b84fe8b.
When batch claiming was first added, it was only done for claims which were not pinnable, i.e. those which can only be claimed by us.

This was the conservative choice - pinning of outputs claimed by a batch would leave the entire batch unable to confirm on-chain. However, if pinning is considered an attack that can be executed with a high probability of success, then there is no reason not to batch claims of pinnable outputs together, separate from unpinnable outputs.

Whether specific outputs are pinnable can change over time - those that are not pinnable will eventually become pinnable at the height at which our counterparty can spend them. Thus, outputs are treated as pinnable if they're within `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` of that height.

Aside from outputs being pinnable or not, locktimes are also a factor for batching claims. HTLC-timeout claims have locktimes fixed by the counterparty's signature and thus can only be aggregated with other HTLCs of the same CLTV, which we have to check for.

The complexity required here is worth it - aggregation can save users a significant amount of fees in the case of a force-closure, and directly impacts the number of UTXOs needed as a reserve for anchors.
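To make the grouping concrete, here is a hedged sketch of the batching rule described above (types, names, and the constant's value are illustrative, not LDK's actual implementation): claims batch together only when they share both a pinnability class and, for HTLC-timeouts, the signature-fixed locktime.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Pinnability { Unpinnable, Pinnable }

// Constant named in the PR; the value used here is a placeholder.
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 10;

/// An output is treated as pinnable once we're close to the height at which the
/// counterparty could spend it themselves.
fn pinnability(counterparty_spendable_height: u32, cur_height: u32) -> Pinnability {
	if counterparty_spendable_height <= cur_height + COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE {
		Pinnability::Pinnable
	} else {
		Pinnability::Unpinnable
	}
}

/// Hypothetical claim record: HTLC-timeout claims carry a signature-fixed locktime.
struct ClaimRequest { counterparty_spendable_height: u32, signed_locktime: Option<u32> }

/// Batch claims that share both pinnability and (for HTLC-timeouts) the same CLTV.
fn batch_claims(claims: Vec<ClaimRequest>, cur_height: u32) -> Vec<Vec<ClaimRequest>> {
	let mut batches: HashMap<(Pinnability, Option<u32>), Vec<ClaimRequest>> = HashMap::new();
	for claim in claims {
		let key = (pinnability(claim.counterparty_spendable_height, cur_height), claim.signed_locktime);
		batches.entry(key).or_insert_with(Vec::new).push(claim);
	}
	batches.into_values().collect()
}
```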