Change thread safety annotations in thread local store code (envoyproxy#38113)

When I tried to build Envoy with Clang-18, I hit an issue: the Clang
thread safety analyzer does not like the fact that we return a
reference to a mutex-protected member (central_cache_) from the
centralCacheLockHeld method.

While I do think the code is correct, looking at the thread safety
annotations I think we could do a little better.

Currently, centralCacheLockHeld is annotated with
ABSL_ASSERT_EXCLUSIVE_LOCK. My understanding is that this annotation
should be used on functions that check at runtime that the right lock
is held and fail if it is not. centralCacheLockHeld does not actually
check that the lock is held, so the annotation is somewhat misleading,
and I don't think thread safety analysis should derive anything from
it, given that no runtime check is present.
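
For illustration, here is a minimal sketch (hypothetical code, not from
Envoy) of the kind of function ABSL_ASSERT_EXCLUSIVE_LOCK is meant for,
where a runtime check via absl::Mutex::AssertHeld() backs up the promise
the annotation makes to the static analysis:

  #include "absl/base/thread_annotations.h"
  #include "absl/synchronization/mutex.h"

  class GuardedCounter {
  public:
    // The annotation tells the analysis that after this call the lock is
    // held; AssertHeld() enforces that claim at runtime by aborting if
    // the calling thread does not hold mu_.
    void assertHeld() const ABSL_ASSERT_EXCLUSIVE_LOCK(mu_) { mu_.AssertHeld(); }

    void incrementLockHeld() ABSL_EXCLUSIVE_LOCKS_REQUIRED(mu_) { ++value_; }

  private:
    mutable absl::Mutex mu_;
    int value_ ABSL_GUARDED_BY(mu_) = 0;
  };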

We could add a runtime check to the function, but unfortunately that
alone would not be enough to address Clang's warning (see
llvm/llvm-project#123512). Besides, I think we can do slightly better.

This change replaces ABSL_ASSERT_EXCLUSIVE_LOCK with
ABSL_EXCLUSIVE_LOCKS_REQUIRED, to let Clang's thread safety analysis
know at compile time that this function must be called with the lock
held.
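
As a rough sketch of the difference (simplified, hypothetical names
rather than the actual Envoy classes), ABSL_EXCLUSIVE_LOCKS_REQUIRED
makes the analysis flag any call site where it cannot prove the lock is
held:

  class Store {
  public:
    void touchCache() ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_) { ++cache_; }

    void ok() {
      absl::MutexLock guard(&lock_);
      touchCache(); // fine: the analysis can see lock_ is held here
    }

    void notOk() {
      touchCache(); // warning: requires holding mutex 'lock_'
    }

  private:
    absl::Mutex lock_;
    int cache_ ABSL_GUARDED_BY(lock_) = 0;
  };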

That change triggered a few more warnings in various places that call
into centralCacheLockHeld. In the simple cases, adding
ABSL_EXCLUSIVE_LOCKS_REQUIRED to the functions that call
centralCacheLockHeld was enough.

There were a couple of more complicated cases that Clang could not
figure out, because it does not support aliasing (i.e., when the same
mutex is known under different names, Clang cannot always tell that the
different names refer to the same lock). To deal with those cases I
added an assertLocked method annotated with both
ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_) and
ABSL_ASSERT_EXCLUSIVE_LOCK(scope->parent_.lock_). So on the one hand,
Clang must be convinced that lock_ is held before this function can be
called; on the other hand, it lets Clang know that after this function
returns, scope->parent_.lock_ is held.

Given that scope->parent_.lock_ and lock_ are two different names for
the same lock, we can avoid a runtime check inside assertLocked: if
lock_ is held (and Clang is convinced of that), then it follows that
scope->parent_.lock_ is also held, because it is the same lock.
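
To make that concrete, a condensed sketch of the pattern (names follow
the diff below; the example caller is hypothetical):

  // Inside ThreadLocalStoreImpl; scope.parent_ refers back to *this, so
  // scope.parent_.lock_ and lock_ are the same mutex at runtime, even
  // though the analysis tracks the two spellings as distinct capabilities.
  void assertLocked(const ScopeImpl& scope) const ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_)
      ABSL_ASSERT_EXCLUSIVE_LOCK(scope.parent_.lock_) {
    // No runtime check needed: holding lock_ implies holding
    // scope.parent_.lock_, because they are the same lock.
    UNREFERENCED_PARAMETER(scope);
  }

  void example(ScopeImpl& scope) ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_) {
    assertLocked(scope);          // teaches the analysis that scope.parent_.lock_ is held
    scope.centralCacheLockHeld(); // no aliasing warning here anymore
  }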

All in all, I think relying on ABSL_EXCLUSIVE_LOCKS_REQUIRED is
slightly better, and it addresses the warning for me as well.

Additional Description: Related to the work in envoyproxy#37911
Risk Level: Low
Testing: Verified that everything builds after the change (and
experimented with the code to deliberately trigger thread safety
analysis warnings)
Docs Changes: n/a
Release Notes: n/a
Platform Specific Features: n/a
Signed-off-by: Mikhail Krinkin <[email protected]>
krinkinmu authored Jan 28, 2025
1 parent dd2f3b9 commit 18170cc
Showing 2 changed files with 35 additions and 11 deletions.
12 changes: 8 additions & 4 deletions source/common/stats/thread_local_store.cc
@@ -53,10 +53,12 @@ ThreadLocalStoreImpl::~ThreadLocalStoreImpl() {
 }
 
 void ThreadLocalStoreImpl::setHistogramSettings(HistogramSettingsConstPtr&& histogram_settings) {
-  iterateScopes([](const ScopeImplSharedPtr& scope) -> bool {
-    ASSERT(scope->centralCacheLockHeld()->histograms_.empty());
-    return true;
-  });
+  iterateScopes([this](const ScopeImplSharedPtr& scope)
+                    ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_) -> bool {
+                      assertLocked(*scope);
+                      ASSERT(scope->centralCacheLockHeld()->histograms_.empty());
+                      return true;
+                    });
   histogram_settings_ = std::move(histogram_settings);
 }

@@ -74,6 +76,7 @@ void ThreadLocalStoreImpl::setStatsMatcher(StatsMatcherPtr&& stats_matcher) {
   const uint32_t first_histogram_index = deleted_histograms_.size();
   iterateScopesLockHeld([this](const ScopeImplSharedPtr& scope) ABSL_EXCLUSIVE_LOCKS_REQUIRED(
                             lock_) -> bool {
+    assertLocked(*scope);
     const CentralCacheEntrySharedPtr& central_cache = scope->centralCacheLockHeld();
     removeRejectedStats<CounterSharedPtr>(central_cache->counters_,
                                           [this](const CounterSharedPtr& counter) mutable {
@@ -293,6 +296,7 @@ void ThreadLocalStoreImpl::releaseScopeCrossThread(ScopeImpl* scope) {
   // VirtualHosts.
   bool need_post = scopes_to_cleanup_.empty();
   scopes_to_cleanup_.push_back(scope->scope_id_);
+  assertLocked(*scope);
   central_cache_entries_to_cleanup_.push_back(scope->centralCacheLockHeld());
   lock.release();
 
34 changes: 27 additions & 7 deletions source/common/stats/thread_local_store.h
@@ -339,16 +339,20 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo
     return iterateLockHeld(fn);
   }
 
-  bool iterateLockHeld(const IterateFn<Counter>& fn) const {
+  bool iterateLockHeld(const IterateFn<Counter>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->counters_);
   }
-  bool iterateLockHeld(const IterateFn<Gauge>& fn) const {
+  bool iterateLockHeld(const IterateFn<Gauge>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->gauges_);
   }
-  bool iterateLockHeld(const IterateFn<Histogram>& fn) const {
+  bool iterateLockHeld(const IterateFn<Histogram>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->histograms_);
   }
-  bool iterateLockHeld(const IterateFn<TextReadout>& fn) const {
+  bool iterateLockHeld(const IterateFn<TextReadout>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->text_readouts_);
   }
   ThreadLocalStoreImpl& store() override { return parent_; }
@@ -421,7 +425,7 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo
   // scope->central_cache_, the analysis system cannot understand that the
   // scope's parent_.lock_ is held, so we assert that here.
   const CentralCacheEntrySharedPtr& centralCacheLockHeld() const
-      ABSL_ASSERT_EXCLUSIVE_LOCK(parent_.lock_) {
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return central_cache_;
   }
 
@@ -463,6 +467,19 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo
 
   using ScopeImplSharedPtr = std::shared_ptr<ScopeImpl>;
 
+  /**
+   * assertLocked exists to help the compiler figure out that lock_ and scope->parent_.lock_
+   * are actually the same lock, known under two different names. This function requires
+   * lock_ to be held when it's called, and at the same time it is annotated as if it checks
+   * at runtime that scope->parent_.lock_ is held. It does not actually perform any runtime
+   * checks, because those aren't needed: we know that scope->parent_ refers to
+   * ThreadLocalStoreImpl and therefore scope->parent_.lock_ is the same as lock_.
+   */
+  void assertLocked(const ScopeImpl& scope) const ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_)
+      ABSL_ASSERT_EXCLUSIVE_LOCK(scope.parent_.lock_) {
+    UNREFERENCED_PARAMETER(scope);
+  }
+
   /**
    * Calls fn_lock_held for every scope, with lock_ held. This avoids iterate/destruct
    * races for scopes.
@@ -481,8 +498,11 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo
 
   // The Store versions of iterate cover all the scopes in the store.
   template <class StatFn> bool iterHelper(StatFn fn) const {
-    return iterateScopes(
-        [fn](const ScopeImplSharedPtr& scope) -> bool { return scope->iterateLockHeld(fn); });
+    return iterateScopes([this, fn](const ScopeImplSharedPtr& scope)
+                             ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_) -> bool {
+                               assertLocked(*scope);
+                               return scope->iterateLockHeld(fn);
+                             });
   }
 
   std::string getTagsForName(const std::string& name, TagVector& tags) const;
