Simplify and fix AtomicCounter #3302
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #3302      +/-   ##
==========================================
+ Coverage   89.85%   90.57%   +0.72%
==========================================
  Files         126      126
  Lines      104145   109040    +4895
  Branches   104145   109040    +4895
==========================================
+ Hits        93577    98761    +5184
+ Misses       7894     7624     -270
+ Partials     2674     2655      -19
```

☔ View full report in Codecov by Sentry.
lightning/src/util/atomic_counter.rs (Outdated)

```rust
			#[cfg(target_has_atomic = "64")]
			counter: AtomicU64::new(0),
			#[cfg(not(target_has_atomic = "64"))]
			counter: Mutex::new(0),
		}
	}

	pub(crate) fn get_increment(&self) -> u64 {
```
nit: Preexisting, this naming seems a bit confusing as we're actually not returning the incremented counter. Maybe the field should be called `next_counter` and the method just `next()`?
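For concreteness, a minimal self-contained sketch of how the counter might look with that rename. This is a sketch under the reviewer's suggested names, not the code as merged; the `std::sync::Mutex` import and the lock-fallback body are illustrative assumptions:

```rust
#[cfg(target_has_atomic = "64")]
use core::sync::atomic::{AtomicU64, Ordering};
#[cfg(not(target_has_atomic = "64"))]
use std::sync::Mutex; // the crate itself uses its own `crate::sync::Mutex` shim

pub(crate) struct AtomicCounter {
	// Holds the value that the *next* call will hand out, hence the name.
	#[cfg(target_has_atomic = "64")]
	next_counter: AtomicU64,
	#[cfg(not(target_has_atomic = "64"))]
	next_counter: Mutex<u64>,
}

impl AtomicCounter {
	pub(crate) fn new() -> Self {
		Self {
			#[cfg(target_has_atomic = "64")]
			next_counter: AtomicU64::new(0),
			#[cfg(not(target_has_atomic = "64"))]
			next_counter: Mutex::new(0),
		}
	}

	/// Returns the current value and advances the counter, so each call
	/// yields a unique id.
	pub(crate) fn next(&self) -> u64 {
		#[cfg(target_has_atomic = "64")]
		{
			// Ordering mirrors the `AcqRel` in the PR's quoted code.
			self.next_counter.fetch_add(1, Ordering::AcqRel)
		}
		#[cfg(not(target_has_atomic = "64"))]
		{
			let mut lock = self.next_counter.lock().unwrap();
			let res = *lock;
			*lock += 1;
			res
		}
	}
}
```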
LGTM, I think. Feel free to squash the fixup.
Squashed.
Force-pushed from 3b9487b to 7b1f07b.
It seems you're missing some imports here, CI is sad:
etc
Force-pushed from 7b1f07b to 38ecd35.
Gah:

```
$ git diff-tree -U1 7b1f07bd3 38ecd35c7
diff --git a/lightning/src/util/atomic_counter.rs b/lightning/src/util/atomic_counter.rs
index d00c7341b..6dbe54e64 100644
--- a/lightning/src/util/atomic_counter.rs
+++ b/lightning/src/util/atomic_counter.rs
@@ -5,3 +5,3 @@ use core::sync::atomic::{AtomicU64, Ordering};
 #[cfg(not(target_has_atomic = "64"))]
-use crate::prelude::*;
+use crate::sync::Mutex;
$
```
LGTM, happy to land it if CI passes.
`InMemorySigner` has various private keys in it which makes `Debug` either useless or dangerous (because most keys won't log anything, but if they did we'd risk logging private key material).
`blinded_path_with_custom_tlv` indirectly relied on route CLTV randomization when sending because nodes were at substantially different block heights after setup. Instead we make sure all nodes are at the same height which makes the test more robust.
Force-pushed from 38ecd35 to dccdd3c.
FFS:

```
$ git diff-tree -U1 38ecd35c7 dccdd3c15
diff --git a/lightning/src/sign/mod.rs b/lightning/src/sign/mod.rs
index 41354c695..4b8a9e025 100644
--- a/lightning/src/sign/mod.rs
+++ b/lightning/src/sign/mod.rs
@@ -1029,3 +1029,2 @@ pub trait ChangeDestinationSource {
 /// a secure external signer.
-#[derive(Debug)]
 pub struct InMemorySigner {
@@ -2483,3 +2482,2 @@ impl PhantomKeysManager {
 /// An implementation of [`EntropySource`] using ChaCha20.
-#[derive(Debug)]
 pub struct RandomBytes {
diff --git a/lightning/src/util/atomic_counter.rs b/lightning/src/util/atomic_counter.rs
index 6dbe54e64..3627ccf8a 100644
--- a/lightning/src/util/atomic_counter.rs
+++ b/lightning/src/util/atomic_counter.rs
@@ -7,3 +7,2 @@ use crate::sync::Mutex;
-#[derive(Debug)]
 pub(crate) struct AtomicCounter {
$
```
Nope:
Force-pushed from dccdd3c to 68d4c96.
Grr, guess I didn't have the arm compiler installed so running CI locally didn't test it...
`AtomicCounter` was slightly race-y on 32-bit platforms because it increments the high `AtomicUsize` independently from the low `AtomicUsize`, leading to a potential race where another thread could observe the low increment but not the high increment and see a value of 0 twice.

This isn't a big deal because (a) most platforms are 64-bit these days, (b) 32-bit platforms aren't super likely to have their counter overflow 32 bits anyway, and (c) the two writes are back-to-back so having another thread read during that window is very unlikely.

However, we can also optimize the counter somewhat by using the `target_has_atomic = "64"` cfg flag, which we do here, allowing us to use `AtomicU64` even on 32-bit platforms where 64-bit atomics are available.

This changes some test behavior slightly, which requires adaptation.

Fixes lightningdevkit#3000
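For illustration, here is a minimal sketch of the racy pattern the message describes. This is an assumed shape for exposition (`next_id` and its two standalone halves are hypothetical), not the removed code itself:

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

// Models the old 32-bit path: a 64-bit counter split into two halves that
// are incremented independently rather than as one atomic operation.
fn next_id(low: &AtomicUsize, high: &AtomicUsize) -> u64 {
	let prev_low = low.fetch_add(1, Ordering::AcqRel);
	if prev_low == usize::MAX {
		// Race window: a thread calling next_id() right here observes `low`
		// already wrapped to 0 but `high` not yet bumped, so it can return
		// a value (e.g. 0) that was already handed out a full cycle earlier.
		high.fetch_add(1, Ordering::AcqRel);
	}
	((high.load(Ordering::Acquire) as u64) << 32) | (prev_low as u64)
}
```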
It's a counter; `next` is super clear, `get_increment` is a bit less so.
Force-pushed from 68d4c96 to 1c2bd09.
LGTM, landing this as pretty trivial.
```rust
	#[cfg(target_has_atomic = "64")]
	counter: AtomicU64,
	#[cfg(not(target_has_atomic = "64"))]
	counter: Mutex<u64>,
```
If the intention is to produce unique values that don't necessarily need to be sequential, you can still have a sensible lock-free implementation. Roughly like this:
```rust
// Assumes `self.low`/`self.high` are `AtomicU32`s and `Relaxed` is
// `core::sync::atomic::Ordering::Relaxed`.
let mut low = self.low.load(Relaxed);
let mut high = self.high.load(Relaxed);
loop {
	let new_low = if low == u32::MAX {
		// don't use fetch_add to avoid incrementing high by more than 1
		if let Err(new) = self.high.compare_exchange(high, high + 1, Relaxed, Relaxed) {
			high = new;
		}
		0
	} else {
		low + 1
	};
	// FTR this cannot be weak
	match self.low.compare_exchange(low, new_low, Relaxed, Relaxed) {
		Ok(_) => break,
		Err(new) => low = new,
	}
}
u64::from(high) << 32 | u64::from(low)
```
This assumes that a thread doesn't get scheduled-out after incrementing `high` for so long that other thread(s) manage to increment the counter by 2^32, which I think is a reasonable assumption. There's still a chance that `high` gets bumped by more than one though, if a thread managed to bump it and, before it updates `low`, another thread reads both of them. This is quite infrequent and could be dealt with by sacrificing one bit of `high`, which gets set first and then reset at the end.
```rust
	};
	(high << 32) | low
	#[cfg(target_has_atomic = "64")] {
		self.counter.fetch_add(1, Ordering::AcqRel)
```
You have `AcqRel`, however to my understanding this is not a synchronization primitive, so `Relaxed` is appropriate.
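A sketch of that suggestion applied to the `AtomicU64` branch quoted above. `Relaxed` keeps the read-modify-write atomic, so every caller still gets a distinct value; it merely stops ordering unrelated memory accesses, which a pure ID counter doesn't need:

```rust
#[cfg(target_has_atomic = "64")]
{
	// Relaxed suffices for uniqueness: fetch_add is a single atomic RMW,
	// so no two threads can observe the same pre-increment value.
	self.counter.fetch_add(1, Ordering::Relaxed)
}
```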