
[PATCH tip/locking/core 0/6] compiler-context-analysis: Scoped init guards


Marco Elver

Jan 19, 2026, 4:40:48 AM
to el...@google.com, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
Current context analysis treats lock_init() as implicitly "holding" the
lock to allow initializing guarded members. This causes false-positive
"double lock" reports if the lock is acquired immediately after
initialization in the same scope; for example:

mutex_init(&d->mtx);
/* ... counter is guarded by mtx ... */
d->counter = 0; /* ok, but mtx is now "held" */
...
mutex_lock(&d->mtx); /* warning: acquiring mutex already held */

This series proposes a solution by introducing scoped init guards, as
suggested by Peter, using the guard(type_init)(&lock) or
scoped_guard(type_init, ..) interface. This explicitly marks the
initialization scope in which guarded members may be initialized. With
that we can revert the "implicitly held after init" annotations, which
allows using the lock after the initialization scope as follows:

scoped_guard(mutex_init, &d->mtx) {
	d->counter = 0;
}
...
mutex_lock(&d->mtx); /* ok */

Note: Scoped guarded initialization remains optional, and normal
initialization can still be used if no guarded members are being
initialized. Another alternative is to disable context analysis around
the guarded-member initialization, either with a `context_unsafe(var =
init)` expression or by adding the `__context_unsafe(init)` function
attribute (the latter is not recommended for non-trivial functions due
to the lack of any checking):

mutex_init(&d->mtx);
context_unsafe(d->counter = 0); /* ok */
...
mutex_lock(&d->mtx);
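
For completeness, a minimal sketch of the function-attribute form
(struct name and attribute placement are illustrative assumptions,
following the usual trailing-annotation convention):

static void init_my_data(struct my_data *d)
	__context_unsafe(init)
{
	mutex_init(&d->mtx);
	d->counter = 0;	/* not checked: analysis disabled for the whole function */
}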

This series is an alternative to the approach in [1]:

* Scoped init guards (this series): Sound interface, requires use of
guard(type_init)(&lock) or scoped_guard(type_init, ..) for guarded
member initialization.

* Reentrant init [1]: Less intrusive, type_init() just works, and
also allows guarded member initialization with later lock use in
the same function. But unsound, and e.g. misses double-lock bugs
immediately after init, trading false positives for false negatives.

[1] https://lore.kernel.org/all/20260115005231....@google.com/

Marco Elver (6):
cleanup: Make __DEFINE_LOCK_GUARD handle commas in initializers
compiler-context-analysis: Introduce scoped init guards
kcov: Use scoped init guard
crypto: Use scoped init guard
tomoyo: Use scoped init guard
compiler-context-analysis: Remove __assume_ctx_lock from initializers

Documentation/dev-tools/context-analysis.rst | 30 ++++++++++++++++++--
crypto/crypto_engine.c | 2 +-
crypto/drbg.c | 2 +-
include/linux/cleanup.h | 8 +++---
include/linux/compiler-context-analysis.h | 9 ++----
include/linux/local_lock.h | 8 ++++++
include/linux/local_lock_internal.h | 4 +--
include/linux/mutex.h | 4 ++-
include/linux/rwlock.h | 3 +-
include/linux/rwlock_rt.h | 1 -
include/linux/rwsem.h | 6 ++--
include/linux/seqlock.h | 6 +++-
include/linux/spinlock.h | 17 ++++++++---
include/linux/spinlock_rt.h | 1 -
include/linux/ww_mutex.h | 1 -
kernel/kcov.c | 2 +-
lib/test_context-analysis.c | 22 ++++++--------
security/tomoyo/common.c | 2 +-
18 files changed, 80 insertions(+), 48 deletions(-)

--
2.52.0.457.g6b5491de43-goog

Marco Elver

Jan 19, 2026, 4:40:52 AM
to el...@google.com, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org, kernel test robot
Initialization macros can expand to structure initializers containing
commas, which, when used as the "lock" operation, resulted in errors such as:

>> include/linux/spinlock.h:582:56: error: too many arguments provided to function-like macro invocation
582 | DEFINE_LOCK_GUARD_1(raw_spinlock_init, raw_spinlock_t, raw_spin_lock_init(_T->lock), /* */)
| ^
include/linux/spinlock.h:113:17: note: expanded from macro 'raw_spin_lock_init'
113 | do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0)
| ^
include/linux/spinlock_types_raw.h:70:19: note: expanded from macro '__RAW_SPIN_LOCK_UNLOCKED'
70 | (raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
| ^
include/linux/spinlock_types_raw.h:67:34: note: expanded from macro '__RAW_SPIN_LOCK_INITIALIZER'
67 | RAW_SPIN_DEP_MAP_INIT(lockname) }
| ^
include/linux/cleanup.h:496:9: note: macro '__DEFINE_LOCK_GUARD_1' defined here
496 | #define __DEFINE_LOCK_GUARD_1(_name, _type, _lock) \
| ^
include/linux/spinlock.h:582:1: note: parentheses are required around macro argument containing braced initializer list
582 | DEFINE_LOCK_GUARD_1(raw_spinlock_init, raw_spinlock_t, raw_spin_lock_init(_T->lock), /* */)
| ^
| (
include/linux/cleanup.h:558:60: note: expanded from macro 'DEFINE_LOCK_GUARD_1'
558 | __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__) \
| ^

Make __DEFINE_LOCK_GUARD_0 and __DEFINE_LOCK_GUARD_1 variadic so that
__VA_ARGS__ captures everything.
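
As a standalone illustration (sketch only, not kernel code) of why a
braced initializer breaks a fixed-arity macro parameter and how the
variadic form repairs it:

#define FIXED(_name, _init)	void _name(void) { _init; }
#define VARIADIC(_name, ...)	void _name(void) { __VA_ARGS__; }

struct pair { int a, b; };

/* FIXED(bad, struct pair p = { 1, 2 })	-- error: too many macro arguments */
VARIADIC(ok, struct pair p = { 1, 2 })	/* __VA_ARGS__ rejoins "{ 1" and "2 }" */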

Reported-by: kernel test robot <l...@intel.com>
Signed-off-by: Marco Elver <el...@google.com>
---
include/linux/cleanup.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index ee6df68c2177..dbc4162921e9 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -493,22 +493,22 @@ static __always_inline void class_##_name##_destructor(class_##_name##_t *_T) \
\
__DEFINE_GUARD_LOCK_PTR(_name, &_T->lock)

-#define __DEFINE_LOCK_GUARD_1(_name, _type, _lock) \
+#define __DEFINE_LOCK_GUARD_1(_name, _type, ...) \
static __always_inline class_##_name##_t class_##_name##_constructor(_type *l) \
__no_context_analysis \
{ \
class_##_name##_t _t = { .lock = l }, *_T = &_t; \
- _lock; \
+ __VA_ARGS__; \
return _t; \
}

-#define __DEFINE_LOCK_GUARD_0(_name, _lock) \
+#define __DEFINE_LOCK_GUARD_0(_name, ...) \
static __always_inline class_##_name##_t class_##_name##_constructor(void) \
__no_context_analysis \
{ \
class_##_name##_t _t = { .lock = (void*)1 }, \
*_T __maybe_unused = &_t; \
- _lock; \
+ __VA_ARGS__; \
return _t; \
}

--
2.52.0.457.g6b5491de43-goog

Marco Elver

Jan 19, 2026, 4:40:56 AM
to el...@google.com, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
Add scoped init guard definitions for common synchronization primitives
supported by context analysis.

The scoped init guards treat the context as active within the
initialization scope of the underlying context lock, given that
initialization implies exclusive access to the underlying object. This
allows initialization of guarded members without disabling context
analysis, while clearly distinguishing initialization from subsequent
usage.

The documentation is updated with the new recommendation. Where scoped
init guards are not provided or cannot be implemented (ww_mutex omitted
for lack of multi-arg guard initializers), the alternative is to just
disable context analysis where guarded members are initialized.
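
For example, a minimal sketch of the ww_mutex case (struct and field
names are made up for illustration):

struct ww_obj {
	struct ww_mutex mtx;
	int counter __guarded_by(&mtx);
};

static void ww_obj_init(struct ww_obj *o, struct ww_class *class)
{
	ww_mutex_init(&o->mtx, class);
	context_unsafe(o->counter = 0);	/* smallest possible unchecked scope */
}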

Link: https://lore.kernel.org/all/202512120959...@noisy.programming.kicks-ass.net/
Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Marco Elver <el...@google.com>
---
Documentation/dev-tools/context-analysis.rst | 30 ++++++++++++++++++--
include/linux/compiler-context-analysis.h | 9 ++----
include/linux/local_lock.h | 8 ++++++
include/linux/local_lock_internal.h | 1 +
include/linux/mutex.h | 3 ++
include/linux/rwsem.h | 4 +++
include/linux/seqlock.h | 5 ++++
include/linux/spinlock.h | 12 ++++++++
lib/test_context-analysis.c | 16 +++++------
9 files changed, 70 insertions(+), 18 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index e69896e597b6..54d9ee28de98 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -83,9 +83,33 @@ Currently the following synchronization primitives are supported:
`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`,
`ww_mutex`.

-For context locks with an initialization function (e.g., `spin_lock_init()`),
-calling this function before initializing any guarded members or globals
-prevents the compiler from issuing warnings about unguarded initialization.
+To initialize variables guarded by a context lock with an initialization
+function (``type_init(&lock)``), prefer using ``guard(type_init)(&lock)`` or
+``scoped_guard(type_init, &lock) { ... }`` to initialize such guarded members
+or globals in the enclosing scope. This initializes the context lock and treats
+the context as active within the initialization scope (initialization implies
+exclusive access to the underlying object).
+
+For example::
+
+ struct my_data {
+ spinlock_t lock;
+ int counter __guarded_by(&lock);
+ };
+
+ void init_my_data(struct my_data *d)
+ {
+ ...
+ guard(spinlock_init)(&d->lock);
+ d->counter = 0;
+ ...
+ }
+
+Alternatively, initializing guarded variables can be done with context analysis
+disabled, preferably in the smallest possible scope (due to lack of any other
+checking): either with a ``context_unsafe(var = init)`` expression, or by
+marking small initialization functions with the ``__context_unsafe(init)``
+attribute.

Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
context analysis that the associated synchronization primitive is held after
diff --git a/include/linux/compiler-context-analysis.h b/include/linux/compiler-context-analysis.h
index db7e0d48d8f2..27ea01adeb2c 100644
--- a/include/linux/compiler-context-analysis.h
+++ b/include/linux/compiler-context-analysis.h
@@ -32,13 +32,8 @@
/*
* The "assert_capability" attribute is a bit confusingly named. It does not
* generate a check. Instead, it tells the analysis to *assume* the capability
- * is held. This is used for:
- *
- * 1. Augmenting runtime assertions, that can then help with patterns beyond the
- * compiler's static reasoning abilities.
- *
- * 2. Initialization of context locks, so we can access guarded variables right
- * after initialization (nothing else should access the same object yet).
+ * is held. This is used for augmenting runtime assertions, that can then help
+ * with patterns beyond the compiler's static reasoning abilities.
*/
# define __assumes_ctx_lock(...) __attribute__((assert_capability(__VA_ARGS__)))
# define __assumes_shared_ctx_lock(...) __attribute__((assert_shared_capability(__VA_ARGS__)))
diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
index 99c06e499375..b8830148a859 100644
--- a/include/linux/local_lock.h
+++ b/include/linux/local_lock.h
@@ -104,6 +104,8 @@ DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu,
local_lock_nested_bh(_T->lock),
local_unlock_nested_bh(_T->lock))

+DEFINE_LOCK_GUARD_1(local_lock_init, local_lock_t, local_lock_init(_T->lock), /* */)
+
DECLARE_LOCK_GUARD_1_ATTRS(local_lock, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
#define class_local_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock, _T)
DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irq, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
@@ -112,5 +114,11 @@ DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irqsave, __acquires(_T), __releases(*(loca
#define class_local_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irqsave, _T)
DECLARE_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
#define class_local_lock_nested_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_init, __acquires(_T), __releases(*(local_lock_t **)_T))
+#define class_local_lock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_init, _T)
+
+DEFINE_LOCK_GUARD_1(local_trylock_init, local_trylock_t, local_trylock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(local_trylock_init, __acquires(_T), __releases(*(local_trylock_t **)_T))
+#define class_local_trylock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_trylock_init, _T)

#endif
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index e8c4803d8db4..4521c40895f8 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -6,6 +6,7 @@
#include <linux/percpu-defs.h>
#include <linux/irqflags.h>
#include <linux/lockdep.h>
+#include <linux/debug_locks.h>
#include <asm/current.h>

#ifndef CONFIG_PREEMPT_RT
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 89977c215cbd..6b12009351d2 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -254,6 +254,7 @@ extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_a
DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock))
DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock))
DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock), _RET == 0)
+DEFINE_LOCK_GUARD_1(mutex_init, struct mutex, mutex_init(_T->lock), /* */)

DECLARE_LOCK_GUARD_1_ATTRS(mutex, __acquires(_T), __releases(*(struct mutex **)_T))
#define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)
@@ -261,6 +262,8 @@ DECLARE_LOCK_GUARD_1_ATTRS(mutex_try, __acquires(_T), __releases(*(struct mutex
#define class_mutex_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_try, _T)
DECLARE_LOCK_GUARD_1_ATTRS(mutex_intr, __acquires(_T), __releases(*(struct mutex **)_T))
#define class_mutex_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_intr, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(mutex_init, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_init, _T)

extern unsigned long mutex_get_owner(struct mutex *lock);

diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 8da14a08a4e1..ea1bbdb57a47 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -280,6 +280,10 @@ DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_try, __acquires(_T), __releases(*(struct
DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_kill, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
#define class_rwsem_write_kill_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write_kill, _T)

+DEFINE_LOCK_GUARD_1(rwsem_init, struct rw_semaphore, init_rwsem(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_init, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_init, _T)
+
/*
* downgrade write lock to read lock
*/
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 113320911a09..22216df47b0f 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -14,6 +14,7 @@
*/

#include <linux/compiler.h>
+#include <linux/cleanup.h>
#include <linux/kcsan-checks.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
@@ -1359,4 +1360,8 @@ static __always_inline void __scoped_seqlock_cleanup_ctx(struct ss_tmp **s)
#define scoped_seqlock_read(_seqlock, _target) \
__scoped_seqlock_read(_seqlock, _target, __UNIQUE_ID(seqlock))

+DEFINE_LOCK_GUARD_1(seqlock_init, seqlock_t, seqlock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(seqlock_init, __acquires(_T), __releases(*(seqlock_t **)_T))
+#define class_seqlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(seqlock_init, _T)
+
#endif /* __LINUX_SEQLOCK_H */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 396b8c5d6c1b..7b11991c742a 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -582,6 +582,10 @@ DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try,
DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
#define class_raw_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, _T)

+DEFINE_LOCK_GUARD_1(raw_spinlock_init, raw_spinlock_t, raw_spin_lock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_init, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_init, _T)
+
DEFINE_LOCK_GUARD_1(spinlock, spinlock_t,
spin_lock(_T->lock),
spin_unlock(_T->lock))
@@ -626,6 +630,10 @@ DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try,
DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, __acquires(_T), __releases(*(spinlock_t **)_T))
#define class_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, _T)

+DEFINE_LOCK_GUARD_1(spinlock_init, spinlock_t, spin_lock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_init, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_init, _T)
+
DEFINE_LOCK_GUARD_1(read_lock, rwlock_t,
read_lock(_T->lock),
read_unlock(_T->lock))
@@ -664,5 +672,9 @@ DEFINE_LOCK_GUARD_1(write_lock_irqsave, rwlock_t,
DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irqsave, __acquires(_T), __releases(*(rwlock_t **)_T))
#define class_write_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_lock_irqsave, _T)

+DEFINE_LOCK_GUARD_1(rwlock_init, rwlock_t, rwlock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(rwlock_init, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_rwlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwlock_init, _T)
+
#undef __LINUX_INSIDE_SPINLOCK_H
#endif /* __LINUX_SPINLOCK_H */
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 1c5a381461fc..0f05943d957f 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -35,7 +35,7 @@ static void __used test_common_helpers(void)
}; \
static void __used test_##class##_init(struct test_##class##_data *d) \
{ \
- type_init(&d->lock); \
+ guard(type_init)(&d->lock); \
d->counter = 0; \
} \
static void __used test_##class(struct test_##class##_data *d) \
@@ -83,7 +83,7 @@ static void __used test_common_helpers(void)

TEST_SPINLOCK_COMMON(raw_spinlock,
raw_spinlock_t,
- raw_spin_lock_init,
+ raw_spinlock_init,
raw_spin_lock,
raw_spin_unlock,
raw_spin_trylock,
@@ -109,7 +109,7 @@ static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data

TEST_SPINLOCK_COMMON(spinlock,
spinlock_t,
- spin_lock_init,
+ spinlock_init,
spin_lock,
spin_unlock,
spin_trylock,
@@ -163,7 +163,7 @@ struct test_mutex_data {

static void __used test_mutex_init(struct test_mutex_data *d)
{
- mutex_init(&d->mtx);
+ guard(mutex_init)(&d->mtx);
d->counter = 0;
}

@@ -226,7 +226,7 @@ struct test_seqlock_data {

static void __used test_seqlock_init(struct test_seqlock_data *d)
{
- seqlock_init(&d->sl);
+ guard(seqlock_init)(&d->sl);
d->counter = 0;
}

@@ -275,7 +275,7 @@ struct test_rwsem_data {

static void __used test_rwsem_init(struct test_rwsem_data *d)
{
- init_rwsem(&d->sem);
+ guard(rwsem_init)(&d->sem);
d->counter = 0;
}

@@ -475,7 +475,7 @@ static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = {

static void __used test_local_lock_init(struct test_local_lock_data *d)
{
- local_lock_init(&d->lock);
+ guard(local_lock_init)(&d->lock);
d->counter = 0;
}

@@ -519,7 +519,7 @@ static DEFINE_PER_CPU(struct test_local_trylock_data, test_local_trylock_data) =

static void __used test_local_trylock_init(struct test_local_trylock_data *d)
{
- local_trylock_init(&d->lock);
+ guard(local_trylock_init)(&d->lock);
d->counter = 0;
}

--
2.52.0.457.g6b5491de43-goog

Marco Elver

Jan 19, 2026, 4:40:57 AM
to el...@google.com, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
Convert lock initialization to scoped guarded initialization where
lock-guarded members are initialized in the same scope.

This ensures the context analysis treats the context as active during
member initialization. This is required to avoid errors once implicit
context assertion is removed.

Signed-off-by: Marco Elver <el...@google.com>
---
kernel/kcov.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/kcov.c b/kernel/kcov.c
index 6cbc6e2d8aee..5397d0c14127 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -530,7 +530,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
if (!kcov)
return -ENOMEM;
- spin_lock_init(&kcov->lock);
+ guard(spinlock_init)(&kcov->lock);
kcov->mode = KCOV_MODE_DISABLED;
kcov->sequence = 1;
refcount_set(&kcov->refcount, 1);
--
2.52.0.457.g6b5491de43-goog

Marco Elver

Jan 19, 2026, 4:40:59 AM
to el...@google.com, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
Convert lock initialization to scoped guarded initialization where
lock-guarded members are initialized in the same scope.

This ensures the context analysis treats the context as active during member
initialization. This is required to avoid errors once implicit context
assertion is removed.

Signed-off-by: Marco Elver <el...@google.com>
---
crypto/crypto_engine.c | 2 +-
crypto/drbg.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 1653a4bf5b31..afb6848f7df4 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -453,7 +453,7 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
snprintf(engine->name, sizeof(engine->name),
"%s-engine", dev_name(dev));

- spin_lock_init(&engine->queue_lock);
+ guard(spinlock_init)(&engine->queue_lock);
crypto_init_queue(&engine->queue, qlen);

engine->kworker = kthread_run_worker(0, "%s", engine->name);
diff --git a/crypto/drbg.c b/crypto/drbg.c
index 0a6f6c05a78f..21b339c76cca 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1780,7 +1780,7 @@ static inline int __init drbg_healthcheck_sanity(void)
if (!drbg)
return -ENOMEM;

- mutex_init(&drbg->drbg_mutex);
+ guard(mutex_init)(&drbg->drbg_mutex);
drbg->core = &drbg_cores[coreref];
drbg->reseed_threshold = drbg_max_requests(drbg);

--
2.52.0.457.g6b5491de43-goog

Marco Elver

Jan 19, 2026, 4:41:03 AM
to el...@google.com, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
Convert lock initialization to scoped guarded initialization where
lock-guarded members are initialized in the same scope.

This ensures the context analysis treats the context as active during member
initialization. This is required to avoid errors once implicit context
assertion is removed.

Signed-off-by: Marco Elver <el...@google.com>
---
security/tomoyo/common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/security/tomoyo/common.c b/security/tomoyo/common.c
index 86ce56c32d37..7e1f825d903b 100644
--- a/security/tomoyo/common.c
+++ b/security/tomoyo/common.c
@@ -2557,7 +2557,7 @@ int tomoyo_open_control(const u8 type, struct file *file)

if (!head)
return -ENOMEM;
- mutex_init(&head->io_sem);
+ guard(mutex_init)(&head->io_sem);
head->type = type;
switch (type) {
case TOMOYO_DOMAINPOLICY:
--
2.52.0.457.g6b5491de43-goog

Marco Elver

Jan 19, 2026, 4:41:05 AM
to el...@google.com, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
Remove __assume_ctx_lock() from lock initializers.

Implicitly asserting an active context during initialization caused
false-positive double-lock errors when acquiring a lock immediately after its
initialization. Moving forward, guarded member initialization must either:

1. Use guard(type_init)(&lock) or scoped_guard(type_init, ...).
2. Use context_unsafe() for simple initialization.
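
For example, a minimal sketch of option 1 followed by an actual lock
acquisition (names hypothetical), which no longer produces a false
double-lock report:

scoped_guard(mutex_init, &d->mtx)
	d->counter = 0;
...
mutex_lock(&d->mtx);	/* ok: the init scope has already ended */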

Link: https://lore.kernel.org/all/57062131-e79e-42c2...@acm.org/
Reported-by: Bart Van Assche <bvana...@acm.org>
Signed-off-by: Marco Elver <el...@google.com>
---
include/linux/local_lock_internal.h | 3 ---
include/linux/mutex.h | 1 -
include/linux/rwlock.h | 3 +--
include/linux/rwlock_rt.h | 1 -
include/linux/rwsem.h | 2 --
include/linux/seqlock.h | 1 -
include/linux/spinlock.h | 5 +----
include/linux/spinlock_rt.h | 1 -
include/linux/ww_mutex.h | 1 -
lib/test_context-analysis.c | 6 ------
10 files changed, 2 insertions(+), 22 deletions(-)

diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 4521c40895f8..ebfcdf517224 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -87,13 +87,11 @@ do { \
0, LD_WAIT_CONFIG, LD_WAIT_INV, \
LD_LOCK_PERCPU); \
local_lock_debug_init(lock); \
- __assume_ctx_lock(lock); \
} while (0)

#define __local_trylock_init(lock) \
do { \
__local_lock_init((local_lock_t *)lock); \
- __assume_ctx_lock(lock); \
} while (0)

#define __spinlock_nested_bh_init(lock) \
@@ -105,7 +103,6 @@ do { \
0, LD_WAIT_CONFIG, LD_WAIT_INV, \
LD_LOCK_NORMAL); \
local_lock_debug_init(lock); \
- __assume_ctx_lock(lock); \
} while (0)

#define __local_lock_acquire(lock) \
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 6b12009351d2..ecaa0440f6ec 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -62,7 +62,6 @@ do { \
static struct lock_class_key __key; \
\
__mutex_init((mutex), #mutex, &__key); \
- __assume_ctx_lock(mutex); \
} while (0)

/**
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 65a5b55e1bcd..3390d21c95dd 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -22,11 +22,10 @@ do { \
static struct lock_class_key __key; \
\
__rwlock_init((lock), #lock, &__key); \
- __assume_ctx_lock(lock); \
} while (0)
#else
# define rwlock_init(lock) \
- do { *(lock) = __RW_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
+ do { *(lock) = __RW_LOCK_UNLOCKED(lock); } while (0)
#endif

#ifdef CONFIG_DEBUG_SPINLOCK
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 37b387dcab21..5353abbfdc0b 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -22,7 +22,6 @@ do { \
\
init_rwbase_rt(&(rwl)->rwbase); \
__rt_rwlock_init(rwl, #rwl, &__key); \
- __assume_ctx_lock(rwl); \
} while (0)

extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock);
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index ea1bbdb57a47..9bf1d93d3d7b 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -121,7 +121,6 @@ do { \
static struct lock_class_key __key; \
\
__init_rwsem((sem), #sem, &__key); \
- __assume_ctx_lock(sem); \
} while (0)

/*
@@ -175,7 +174,6 @@ do { \
static struct lock_class_key __key; \
\
__init_rwsem((sem), #sem, &__key); \
- __assume_ctx_lock(sem); \
} while (0)

static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 22216df47b0f..c0c6235dff59 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -817,7 +817,6 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
do { \
spin_lock_init(&(sl)->lock); \
seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock); \
- __assume_ctx_lock(sl); \
} while (0)

/**
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 7b11991c742a..e1e2f144af9b 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -106,12 +106,11 @@ do { \
static struct lock_class_key __key; \
\
__raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN); \
- __assume_ctx_lock(lock); \
} while (0)

#else
# define raw_spin_lock_init(lock) \
- do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
+ do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0)
#endif

#define raw_spin_is_locked(lock) arch_spin_is_locked(&(lock)->raw_lock)
@@ -324,7 +323,6 @@ do { \
\
__raw_spin_lock_init(spinlock_check(lock), \
#lock, &__key, LD_WAIT_CONFIG); \
- __assume_ctx_lock(lock); \
} while (0)

#else
@@ -333,7 +331,6 @@ do { \
do { \
spinlock_check(_lock); \
*(_lock) = __SPIN_LOCK_UNLOCKED(_lock); \
- __assume_ctx_lock(_lock); \
} while (0)

#endif
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index 0a585768358f..373618a4243c 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -20,7 +20,6 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name,
do { \
rt_mutex_base_init(&(slock)->lock); \
__rt_spin_lock_init(slock, name, key, percpu); \
- __assume_ctx_lock(slock); \
} while (0)

#define _spin_lock_init(slock, percpu) \
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 58e959ee10e9..c47d4b9b88b3 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -107,7 +107,6 @@ context_lock_struct(ww_acquire_ctx) {
*/
static inline void ww_mutex_init(struct ww_mutex *lock,
struct ww_class *ww_class)
- __assumes_ctx_lock(lock)
{
ww_mutex_base_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key);
lock->ctx = NULL;
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 0f05943d957f..140efa8a9763 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -542,12 +542,6 @@ struct test_ww_mutex_data {
int counter __guarded_by(&mtx);
};

-static void __used test_ww_mutex_init(struct test_ww_mutex_data *d)
-{
- ww_mutex_init(&d->mtx, &ww_class);
- d->counter = 0;
-}
-
static void __used test_ww_mutex_lock_noctx(struct test_ww_mutex_data *d)
{
if (!ww_mutex_lock(&d->mtx, NULL)) {
--
2.52.0.457.g6b5491de43-goog

Christoph Hellwig

Jan 20, 2026, 2:24:07 AM
to Marco Elver, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
On Mon, Jan 19, 2026 at 10:05:50AM +0100, Marco Elver wrote:
> Note: Scoped guarded initialization remains optional, and normal
> initialization can still be used if no guarded members are being
> initialized. Another alternative is to just disable context analysis to
> initialize guarded members with `context_unsafe(var = init)` or adding
> the `__context_unsafe(init)` function attribute (the latter not being
> recommended for non-trivial functions due to lack of any checking):

I still think this is doing the wrong thing for the regular non-scoped
case, and I think I finally understand what is so wrong about it.

The point is that mutex_init() (let's use mutexes for the example; this
applies to the other primitives as well) should not automatically imply
guarding of the members for the rest of the function. Because as soon as
the structure that contains the lock is published that is no longer
actually true, and we did have quite a lot of bugs because of that in
the past.
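
A minimal sketch of that hazard (struct and list names made up for
illustration):

init_and_publish(struct my_obj *obj)
{
	mutex_init(&obj->mutex);	/* analysis: mutex "held" to end of function */
	list_add(&obj->node, &global_list); /* published: others can lock it now */
	obj->data = FOO;		/* racy, yet no warning */
}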

So I think the first step is to avoid implying the safety of guarded
member access by initializing the lock. We then need to think about how
to express that they are safe, which would probably require explicit
annotation, unless we can come up with a scheme that magically makes
these accesses fine before the mutex_init().

Peter Zijlstra

Jan 20, 2026, 5:52:20 AM
to Christoph Hellwig, Marco Elver, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
But that is exactly what these patches do!

Note that the current state of things (tip/locking/core,next) is that
mutex_init() is 'special'. And I agree with you that that is quite
horrible.

Now, these patches, specifically patch 6, remove this implied
horribleness.

The alternative is an explicit annotation -- as you suggest.


So given something like:

struct my_obj {
	struct mutex mutex;
	int data __guarded_by(&mutex);
	...
};


tip/locking/core,next:

init_my_obj(struct my_obj *obj)
{
	mutex_init(&obj->mutex);	// implies obj->mutex is taken until end of function
	obj->data = FOO;		// OK, because &obj->mutex 'held'
	...
}

And per these patches that will no longer be true. So if you apply just
patch 6, which removes this implied behaviour, you get a compile fail.
Not good!

So patches 1-5 introduces alternatives.

So your preferred solution:

hch_my_obj(struct my_obj *obj)
{
	mutex_init(&obj->mutex);
	mutex_lock(&obj->mutex);	// actually acquires lock
	obj->data = FOO;
	...
}

is perfectly fine and will work. But not everybody wants this. For the
people that only need to init the data fields and don't care about the
lock state we get:

init_my_obj(struct my_obj *obj)
{
	guard(mutex_init)(&obj->mutex);	// initializes mutex and considers lock
					// held until end of function
	obj->data = FOO;
	...
}

For the people that want to first init the object but then actually lock
it, we get:

use_my_obj(struct my_obj *obj)
{
	scoped_guard (mutex_init, &obj->mutex) { // init mutex and 'hold' for scope
		obj->data = FOO;
		...
	}

	mutex_lock(&obj->mutex);
	...
}

And for the people that *reaaaaaly* hate guards, it is possible to write
something like:

ugly_my_obj(struct my_obj *obj)
{
	mutex_init(&obj->mutex);
	__acquire_ctx_lock(&obj->mutex);
	obj->data = FOO;
	...
	__release_ctx_lock(&obj->mutex);

	mutex_lock(&obj->mutex);
	...
}

And, then there is the option that C++ has:

init_my_obj(struct my_obj *obj)
	__no_context_analysis // STFU!
{
	mutex_init(&obj->mutex);
	obj->data = FOO;	// WARN; but ignored
	...
}

All I can make from your email is that you must be in favour of these
patches.

Bart Van Assche

Jan 20, 2026, 1:24:17 PM
to Marco Elver, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Christoph Hellwig, Steven Rostedt, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
On 1/19/26 1:05 AM, Marco Elver wrote:
> This series proposes a solution to this by introducing scoped init
> guards which Peter suggested, using the guard(type_init)(&lock) or
> scoped_guard(type_init, ..) interface.
Although I haven't had the time yet to do an in-depth review, from a
quick look all patches in this series look good to me.

Thanks,

Bart.

Christoph Hellwig

Jan 22, 2026, 1:30:48 AM
to Peter Zijlstra, Christoph Hellwig, Marco Elver, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
And this is just as bad as the original version, except it is now
even more obfuscated.

> And for the people that *reaaaaaly* hate guards, it is possible to write
> something like:
>
> ugly_my_obj(struct my_obj *obj)
> {
> mutex_init(&obj->mutex);
> __acquire_ctx_lock(&obj->mutex);
> obj->data = FOO;
> ...
> __release_ctx_lock(&obj->mutex);
>
> mutex_lock(&obj->lock);
> ...

That's better. What would be even better for everyone would be:

mutex_prepare(&obj->mutex); /* acquire, but with a nice name */
obj->data = FOO;
mutex_init_prepared(&obj->mutex); /* release, barrier, actual init */

mutex_lock(&obj->mutex); /* IFF needed only */

Peter Zijlstra

Jan 23, 2026, 3:44:13 AM
to Christoph Hellwig, Marco Elver, Ingo Molnar, Thomas Gleixner, Will Deacon, Boqun Feng, Waiman Long, Steven Rostedt, Bart Van Assche, kasa...@googlegroups.com, ll...@lists.linux.dev, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux-secu...@vger.kernel.org, linux-...@vger.kernel.org
On Thu, Jan 22, 2026 at 07:30:42AM +0100, Christoph Hellwig wrote:

> That's better. What would be even better for everyone would be:
>
> mutex_prepare(&obj->mutex); /* acquire, but with a nice name */
> obj->data = FOO;
> mutex_init_prepared(&obj->mutex); /* release, barrier, actual init */
>
> mutex_lock(&obj->mutex); /* IFF needed only */
>

This cannot work. There is no such thing as a release-barrier.
Furthermore, store-release/load-acquire pairing needs a common address
to work.

When publishing an object, which is what we're talking about, we have
two common patterns:

1) a locked data-structure

2) RCU


The way 1) works is:

Publish:

	lock(&structure_lock);
	insert(&structure, obj);
	unlock(&structure_lock);

Use:

	lock(&structure_lock);
	obj = find(&structure, key);
	...
	unlock(&structure_lock);

And here the Publish-unlock is a release which pairs with the Use-lock's
acquire and guarantees that Use sees both 'structure' in a coherent
state and obj as it was at the time of insertion. IOW we have
release-acquire through the &structure_lock pointer.

The way 2) works is:

Publish:

	lock(&structure_lock);
	insert(&structure, obj);
	rcu_assign_pointer(ptr, obj);
	unlock(&structure_lock);

Use:

	rcu_read_lock();
	obj = find_rcu(&structure, key);
	...
	rcu_read_unlock();


And here rcu_assign_pointer() is a store-release that pairs with an
rcu_dereference() inside find_rcu() on the same pointer.

There is no alternative way to order things; there must be a
release-acquire through a common address.

In both cases it is imperative that the obj is fully (or fully enough)
initialized before publication, because the consumer is only guaranteed
to see the state the object was in at publish time.
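
A minimal sketch of how this combines with the scoped init guard from
this series (allocation and publication details elided; names made up):

struct my_obj *create_my_obj(void)
{
	struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return NULL;

	scoped_guard (mutex_init, &obj->mutex)
		obj->data = FOO;	/* pre-publication init, exclusive access */

	/* caller publishes obj, e.g. under a lock or via rcu_assign_pointer() */
	return obj;
}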
