
Commit 4356dfa

Change thread safety annotations in TLS code
When I tried to build Envoy with Clang-18, I hit an issue: the Clang thread safety analyzer does not like the fact that we return a reference to a mutex-protected member (central_cache_) from the centralCacheLockHeld method. While I do think the code is correct, looking at the thread safety annotations I think we could do a little bit better.

Currently, centralCacheLockHeld is annotated with ABSL_ASSERT_EXCLUSIVE_LOCK. My understanding is that this annotation should be used on functions that check at runtime that the right lock is held and fail if it is not. centralCacheLockHeld does not actually perform such a check, which is somewhat misleading, and I don't think the thread safety analysis should derive anything from the annotation when no runtime check is present. We could add a runtime check to the function, but unfortunately that would not be enough to address Clang's warning (see llvm/llvm-project#123512), and besides, I think we can do slightly better.

This change replaces ABSL_ASSERT_EXCLUSIVE_LOCK with ABSL_EXCLUSIVE_LOCKS_REQUIRED, letting the Clang thread safety analysis verify at compile time that the function is only called with the lock held. That change triggered a few more warnings in various places that call into centralCacheLockHeld. In the simple cases, adding ABSL_EXCLUSIVE_LOCKS_REQUIRED to the calling functions was enough. There were a couple of more complicated cases that Clang could not figure out because it does not support aliasing (i.e., when the same mutex is known under different names, Clang cannot always tell that the different names refer to the same lock). To deal with those cases, I added an assertLocked method annotated with ABSL_ASSERT_EXCLUSIVE_LOCK and used it to tell Clang which locks are held.

All in all, I think relying on ABSL_EXCLUSIVE_LOCKS_REQUIRED is slightly better, and it addresses the warning for me as well.

Signed-off-by: Mikhail Krinkin <mkrinkin@microsoft.com>
Signed-off-by: Mikhail Krinkin <krinkin.m.u@gmail.com>
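For readers unfamiliar with these annotations, here is a minimal standalone sketch of the difference (this is not Envoy code; the Guarded class and its members are invented for illustration), as seen by Clang's -Wthread-safety analysis:

#include "absl/base/thread_annotations.h"
#include "absl/synchronization/mutex.h"

class Guarded {
public:
  // ABSL_EXCLUSIVE_LOCKS_REQUIRED is a compile-time contract: every caller
  // must provably hold mu_, otherwise the analysis warns at the call site.
  int valueLockHeld() const ABSL_EXCLUSIVE_LOCKS_REQUIRED(mu_) { return value_; }

  // ABSL_ASSERT_EXCLUSIVE_LOCK tells the analysis "after this call, assume
  // mu_ is held". It is meant for functions that verify the lock at runtime;
  // the analysis trusts the annotation without requiring proof from callers.
  void assertHeld() const ABSL_ASSERT_EXCLUSIVE_LOCK(mu_) { mu_.AssertHeld(); }

  void increment() {
    absl::MutexLock lock(&mu_);
    value_ = valueLockHeld() + 1; // OK: the analysis sees mu_ held here.
  }

private:
  mutable absl::Mutex mu_;
  int value_ ABSL_GUARDED_BY(mu_) = 0;
};

The issue described above is that centralCacheLockHeld carried the second kind of annotation while performing no runtime check, so in practice it behaved like the first.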
1 parent 6e8fa0a commit 4356dfa

2 files changed: +35 -11 lines changed

source/common/stats/thread_local_store.cc

Lines changed: 8 additions & 4 deletions

@@ -53,10 +53,12 @@ ThreadLocalStoreImpl::~ThreadLocalStoreImpl() {
 }

 void ThreadLocalStoreImpl::setHistogramSettings(HistogramSettingsConstPtr&& histogram_settings) {
-  iterateScopes([](const ScopeImplSharedPtr& scope) -> bool {
-    ASSERT(scope->centralCacheLockHeld()->histograms_.empty());
-    return true;
-  });
+  iterateScopes([this](const ScopeImplSharedPtr& scope)
+                    ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_) -> bool {
+                      assertLocked(*scope);
+                      ASSERT(scope->centralCacheLockHeld()->histograms_.empty());
+                      return true;
+                    });
   histogram_settings_ = std::move(histogram_settings);
 }

@@ -74,6 +76,7 @@ void ThreadLocalStoreImpl::setStatsMatcher(StatsMatcherPtr&& stats_matcher) {
   const uint32_t first_histogram_index = deleted_histograms_.size();
   iterateScopesLockHeld([this](const ScopeImplSharedPtr& scope) ABSL_EXCLUSIVE_LOCKS_REQUIRED(
                             lock_) -> bool {
+    assertLocked(*scope);
     const CentralCacheEntrySharedPtr& central_cache = scope->centralCacheLockHeld();
     removeRejectedStats<CounterSharedPtr>(central_cache->counters_,
                                           [this](const CounterSharedPtr& counter) mutable {

@@ -293,6 +296,7 @@ void ThreadLocalStoreImpl::releaseScopeCrossThread(ScopeImpl* scope) {
   // VirtualHosts.
   bool need_post = scopes_to_cleanup_.empty();
   scopes_to_cleanup_.push_back(scope->scope_id_);
+  assertLocked(*scope);
   central_cache_entries_to_cleanup_.push_back(scope->centralCacheLockHeld());
   lock.release();
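A note on the lambda syntax in the hunks above: Clang accepts thread-safety attributes on lambdas when they are placed between the parameter list and the trailing return type, which is what the annotated iterateScopes callbacks rely on. A minimal sketch of the same pattern outside Envoy (the Counter class is invented for illustration):

#include "absl/base/thread_annotations.h"
#include "absl/synchronization/mutex.h"

class Counter {
public:
  int snapshot() {
    // The attribute sits between the lambda's parameter list and its return
    // type; the analysis then checks the body as if it were a function that
    // requires mu_, and checks each call site for the lock being held.
    auto read = [this]() ABSL_EXCLUSIVE_LOCKS_REQUIRED(mu_) -> int {
      return value_; // OK: mu_ is required, so the guarded access passes.
    };
    absl::MutexLock lock(&mu_);
    return read(); // OK: mu_ is held at the call site.
  }

private:
  absl::Mutex mu_;
  int value_ ABSL_GUARDED_BY(mu_) = 0;
};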

source/common/stats/thread_local_store.h

Lines changed: 27 additions & 7 deletions

@@ -339,16 +339,20 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo
     return iterateLockHeld(fn);
   }

-  bool iterateLockHeld(const IterateFn<Counter>& fn) const {
+  bool iterateLockHeld(const IterateFn<Counter>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->counters_);
   }
-  bool iterateLockHeld(const IterateFn<Gauge>& fn) const {
+  bool iterateLockHeld(const IterateFn<Gauge>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->gauges_);
   }
-  bool iterateLockHeld(const IterateFn<Histogram>& fn) const {
+  bool iterateLockHeld(const IterateFn<Histogram>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->histograms_);
   }
-  bool iterateLockHeld(const IterateFn<TextReadout>& fn) const {
+  bool iterateLockHeld(const IterateFn<TextReadout>& fn) const
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return iterHelper(fn, centralCacheLockHeld()->text_readouts_);
   }
   ThreadLocalStoreImpl& store() override { return parent_; }

@@ -421,7 +425,7 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo
   // scope->central_cache_, the analysis system cannot understand that the
   // scope's parent_.lock_ is held, so we assert that here.
   const CentralCacheEntrySharedPtr& centralCacheLockHeld() const
-      ABSL_ASSERT_EXCLUSIVE_LOCK(parent_.lock_) {
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.lock_) {
     return central_cache_;
   }

@@ -463,6 +467,19 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo

   using ScopeImplSharedPtr = std::shared_ptr<ScopeImpl>;

+  /**
+   * assertLocked exists to help compiler figure out that lock_ and scope->parent_.lock_ is
+   * actually the same lock known under two different names. This function requires lock_ to
+   * be held when it's called and at the same time it is annotated as if it checks in runtime
+   * that scope->parent_.lock_ is held. It does not actually perform any runtime checks, because
+   * those aren't needed since we know that scope->parent_ refers to ThreadLockStoreImpl and
+   * therefore scope->parent_.lock is the same as lock_.
+   */
+  void assertLocked(const ScopeImpl& scope) const ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_)
+      ABSL_ASSERT_EXCLUSIVE_LOCK(scope.parent_.lock_) {
+    (void)scope;
+  }
+
   /**
    * Calls fn_lock_held for every scope with, lock_ held. This avoids iterate/destruct
    * races for scopes.

@@ -481,8 +498,11 @@ class ThreadLocalStoreImpl : Logger::Loggable<Logger::Id::stats>, public StoreRo

   // The Store versions of iterate cover all the scopes in the store.
   template <class StatFn> bool iterHelper(StatFn fn) const {
-    return iterateScopes(
-        [fn](const ScopeImplSharedPtr& scope) -> bool { return scope->iterateLockHeld(fn); });
+    return iterateScopes([this, fn](const ScopeImplSharedPtr& scope)
+                             ABSL_EXCLUSIVE_LOCKS_REQUIRED(lock_) -> bool {
+                               assertLocked(*scope);
+                               return scope->iterateLockHeld(fn);
+                             });
   }

   std::string getTagsForName(const std::string& name, TagVector& tags) const;
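The assertLocked helper added above generalizes to any case where the analysis cannot prove that two expressions name the same mutex: a no-op function that requires the lock under one name and "asserts" it under the other bridges the two views. A standalone sketch of the pattern (the Parent and Child classes are hypothetical, not Envoy code):

#include "absl/base/thread_annotations.h"
#include "absl/synchronization/mutex.h"

class Parent {
public:
  // Nested class with a back-reference, mirroring ScopeImpl and its parent_.
  class Child {
  public:
    explicit Child(Parent& parent) : parent_(parent) {}
    // Callable only while the parent's mutex is held.
    int readLockHeld() const ABSL_EXCLUSIVE_LOCKS_REQUIRED(parent_.mu_) {
      return parent_.data_;
    }
    Parent& parent_;
  };

  // No-op bridge: the caller must prove mu_ is held, and the second
  // annotation then lets the analysis assume child.parent_.mu_ (the same
  // mutex under a second name) is held as well. The invariant that every
  // Child points back to this Parent is assumed rather than checked at
  // runtime, just as in the commit above.
  void assertLocked(const Child& child) const ABSL_EXCLUSIVE_LOCKS_REQUIRED(mu_)
      ABSL_ASSERT_EXCLUSIVE_LOCK(child.parent_.mu_) {
    (void)child;
  }

  int readChild(const Child& child) {
    absl::MutexLock lock(&mu_);
    assertLocked(child);         // without this, Clang warns on the next line
    return child.readLockHeld(); // requires child.parent_.mu_
  }

private:
  absl::Mutex mu_;
  int data_ ABSL_GUARDED_BY(mu_) = 0;
};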
