KCSAN: data-race in taskstats_exit / taskstats_exit


syzbot

Oct 5, 2019, 12:26:07 AM
to bsing...@gmail.com, el...@google.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: b4bd9343 x86, kcsan: Enable KCSAN for x86
git tree: https://github.com/google/ktsan.git kcsan
console output: https://syzkaller.appspot.com/x/log.txt?x=125329db600000
kernel config: https://syzkaller.appspot.com/x/.config?x=c0906aa620713d80
dashboard link: https://syzkaller.appspot.com/bug?extid=c5d03165a1bd1dead0c1
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in taskstats_exit / taskstats_exit

write to 0xffff8881157bbe10 of 8 bytes by task 7951 on cpu 0:
taskstats_tgid_alloc kernel/taskstats.c:567 [inline]
taskstats_exit+0x6b7/0x717 kernel/taskstats.c:596
do_exit+0x2c2/0x18e0 kernel/exit.c:864
do_group_exit+0xb4/0x1c0 kernel/exit.c:983
get_signal+0x2a2/0x1320 kernel/signal.c:2734
do_signal+0x3b/0xc00 arch/x86/kernel/signal.c:815
exit_to_usermode_loop+0x250/0x2c0 arch/x86/entry/common.c:159
prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
syscall_return_slowpath arch/x86/entry/common.c:274 [inline]
do_syscall_64+0x2d7/0x2f0 arch/x86/entry/common.c:299
entry_SYSCALL_64_after_hwframe+0x44/0xa9

read to 0xffff8881157bbe10 of 8 bytes by task 7949 on cpu 1:
taskstats_tgid_alloc kernel/taskstats.c:559 [inline]
taskstats_exit+0xb2/0x717 kernel/taskstats.c:596
do_exit+0x2c2/0x18e0 kernel/exit.c:864
do_group_exit+0xb4/0x1c0 kernel/exit.c:983
__do_sys_exit_group kernel/exit.c:994 [inline]
__se_sys_exit_group kernel/exit.c:992 [inline]
__x64_sys_exit_group+0x2e/0x30 kernel/exit.c:992
do_syscall_64+0xcf/0x2f0 arch/x86/entry/common.c:296
entry_SYSCALL_64_after_hwframe+0x44/0xa9

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 7949 Comm: syz-executor.3 Not tainted 5.3.0+ #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
==================================================================
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 7949 Comm: syz-executor.3 Not tainted 5.3.0+ #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0xf5/0x159 lib/dump_stack.c:113
panic+0x209/0x639 kernel/panic.c:219
end_report kernel/kcsan/report.c:135 [inline]
kcsan_report.cold+0x57/0xeb kernel/kcsan/report.c:283
__kcsan_setup_watchpoint+0x342/0x500 kernel/kcsan/core.c:456
__tsan_read8 kernel/kcsan/kcsan.c:31 [inline]
__tsan_read8+0x2c/0x30 kernel/kcsan/kcsan.c:31
taskstats_tgid_alloc kernel/taskstats.c:559 [inline]
taskstats_exit+0xb2/0x717 kernel/taskstats.c:596
do_exit+0x2c2/0x18e0 kernel/exit.c:864
do_group_exit+0xb4/0x1c0 kernel/exit.c:983
__do_sys_exit_group kernel/exit.c:994 [inline]
__se_sys_exit_group kernel/exit.c:992 [inline]
__x64_sys_exit_group+0x2e/0x30 kernel/exit.c:992
do_syscall_64+0xcf/0x2f0 arch/x86/entry/common.c:296
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x459a59
Code: fd b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
ff 0f 83 cb b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007ffc4ccf3408 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 000000000000001e RCX: 0000000000459a59
RDX: 0000000000413741 RSI: fffffffffffffff7 RDI: 0000000000000000
RBP: 0000000000000000 R08: 000000007f8e4506 R09: 00007ffc4ccf3460
R10: ffffffff81007108 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc4ccf3460 R14: 0000000000000000 R15: 00007ffc4ccf3470
Kernel Offset: disabled
Rebooting in 86400 seconds..


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

Dmitry Vyukov

Oct 5, 2019, 12:29:52 AM
to syzbot, Christian Brauner, bsing...@gmail.com, Marco Elver, LKML, syzkaller-bugs
On Sat, Oct 5, 2019 at 6:26 AM syzbot
<syzbot+c5d031...@syzkaller.appspotmail.com> wrote:
> syzbot found the following crash on:
> [...]

+Christian, you wanted races in process mgmt ;)

Christian Brauner

Oct 5, 2019, 7:28:14 AM
to syzbot+c5d031...@syzkaller.appspotmail.com, bsing...@gmail.com, el...@google.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com, Christian Brauner
When assigning and testing taskstats in taskstats_exit() there's a race
around writing and reading sig->stats.

cpu0:
task calls exit()
do_exit()
-> taskstats_exit()
-> taskstats_tgid_alloc()
The task takes sighand lock and assigns new stats to sig->stats.

cpu1:
task catches signal
do_exit()
-> taskstats_exit()
-> taskstats_tgid_alloc()
The task reads sig->stats __without__ holding sighand lock seeing
garbage.

Fix this by taking sighand lock when reading sig->stats.

Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
Signed-off-by: Christian Brauner <christia...@ubuntu.com>
---
kernel/taskstats.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..58b145234c4a 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -553,26 +553,32 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)

static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
+ int empty;
+ struct taskstats *stats_new, *stats = NULL;
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
-
- if (sig->stats || thread_group_empty(tsk))
- goto ret;

/* No problem if kmem_cache_zalloc() fails */
- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+
+ empty = thread_group_empty(tsk);

spin_lock_irq(&tsk->sighand->siglock);
+ if (sig->stats || empty) {
+ stats = sig->stats;
+ spin_unlock_irq(&tsk->sighand->siglock);
+ goto free_cache;
+ }
+
if (!sig->stats) {
- sig->stats = stats;
- stats = NULL;
+ sig->stats = stats_new;
+ spin_unlock_irq(&tsk->sighand->siglock);
+ return stats_new;
}
spin_unlock_irq(&tsk->sighand->siglock);

- if (stats)
- kmem_cache_free(taskstats_cache, stats);
-ret:
- return sig->stats;
+free_cache:
+ kmem_cache_free(taskstats_cache, stats_new);
+ return stats;
}

/* Send pid data out on exit */
--
2.23.0

Christian Brauner

Oct 5, 2019, 7:30:02 AM
to Dmitry Vyukov, syzbot, bsing...@gmail.com, Marco Elver, LKML, syzkaller-bugs
On Sat, Oct 05, 2019 at 06:29:39AM +0200, Dmitry Vyukov wrote:
> On Sat, Oct 5, 2019 at 6:26 AM syzbot
> <syzbot+c5d031...@syzkaller.appspotmail.com> wrote:
> > syzbot found the following crash on:
> > [...]
>
> +Christian, you wanted races in process mgmt ;)

Yes, indeed. :) Sent a fix for this one just now. I'll put it in my
fixes tree for rc3.

Christian

Marco Elver

Oct 5, 2019, 9:33:19 AM
to Christian Brauner, syzbot+c5d031...@syzkaller.appspotmail.com, bsing...@gmail.com, LKML, syzkall...@googlegroups.com
On Sat, 5 Oct 2019 at 13:28, Christian Brauner
<christia...@ubuntu.com> wrote:
>
> When assiging and testing taskstats in taskstats
> taskstats_exit() there's a race around writing and reading sig->stats.
>
> cpu0:
> task calls exit()
> do_exit()
> -> taskstats_exit()
> -> taskstats_tgid_alloc()
> The task takes sighand lock and assigns new stats to sig->stats.
>
> cpu1:
> task catches signal
> do_exit()
> -> taskstats_tgid_alloc()
> -> taskstats_exit()
> The tasks reads sig->stats __without__ holding sighand lock seeing
> garbage.

Is the task seeing garbage reading the data pointed to by stats, or is
this just the pointer that would be garbage?

My only observation here is that the previous version was trying to do
double-checked locking, to avoid taking the lock if sig->stats was
already set. The obvious problem with the previous version is plain
read/write and missing memory ordering: the write inside the critical
section should be smp_store_release and there should only be one
smp_load_acquire at the start.

Maybe I missed something somewhere, but maybe my suggestion below
would be an equivalent fix without always having to take the lock to
assign the pointer? If performance is not critical here, then it's
probably not worth it.

Thanks,
-- Marco

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..f58dd285a44b 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,25 +554,31 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
+ struct taskstats *stats_new, *stats;

- if (sig->stats || thread_group_empty(tsk))
+ /* acquire load to make pointed-to data visible */
+ stats = smp_load_acquire(&sig->stats);
+ if (stats || thread_group_empty(tsk))
goto ret;

/* No problem if kmem_cache_zalloc() fails */
- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

spin_lock_irq(&tsk->sighand->siglock);
- if (!sig->stats) {
- sig->stats = stats;
- stats = NULL;
+ stats = sig->stats;
+ if (!stats) {
+ stats = stats_new;
+ /* release store to order zalloc before */
+ smp_store_release(&sig->stats, stats_new);
+ stats_new = NULL;
}
spin_unlock_irq(&tsk->sighand->siglock);

- if (stats)
- kmem_cache_free(taskstats_cache, stats);
+ if (stats_new)
+ kmem_cache_free(taskstats_cache, stats_new);
+
ret:
- return sig->stats;

Christian Brauner

Oct 5, 2019, 10:15:31 AM
to Marco Elver, syzbot+c5d031...@syzkaller.appspotmail.com, bsing...@gmail.com, LKML, syzkall...@googlegroups.com
On Sat, Oct 05, 2019 at 03:33:07PM +0200, Marco Elver wrote:
> On Sat, 5 Oct 2019 at 13:28, Christian Brauner
> <christia...@ubuntu.com> wrote:
> >
> > When assiging and testing taskstats in taskstats
> > taskstats_exit() there's a race around writing and reading sig->stats.
> >
> > cpu0:
> > task calls exit()
> > do_exit()
> > -> taskstats_exit()
> > -> taskstats_tgid_alloc()
> > The task takes sighand lock and assigns new stats to sig->stats.
> >
> > cpu1:
> > task catches signal
> > do_exit()
> > -> taskstats_tgid_alloc()
> > -> taskstats_exit()
> > The tasks reads sig->stats __without__ holding sighand lock seeing
> > garbage.
>
> Is the task seeing garbage reading the data pointed to by stats, or is
> this just the pointer that would be garbage?

I expect the pointer to be garbage.

>
> My only observation here is that the previous version was trying to do
> double-checked locking, to avoid taking the lock if sig->stats was
> already set. The obvious problem with the previous version is plain
> read/write and missing memory ordering: the write inside the critical
> section should be smp_store_release and there should only be one
> smp_load_acquire at the start.
>
> Maybe I missed something somewhere, but maybe my suggestion below
> would be an equivalent fix without always having to take the lock to
> assign the pointer? If performance is not critical here, then it's
> probably not worth it.

The only point of contention is when the whole thread-group exits (e.g.
via exit_group(2) since threads in a thread-group share signal struct).
The reason I didn't do memory barriers was because we need to take the
spinlock for the actual list manipulation anyway.
But I don't mind incorporating the acquire/release.

Christian

Marco Elver

Oct 5, 2019, 10:34:35 AM
to Christian Brauner, syzbot+c5d031...@syzkaller.appspotmail.com, bsing...@gmail.com, LKML, syzkall...@googlegroups.com
Thanks for the clarification. Since you no longer do double-checked
locking, explicit memory barriers shouldn't be needed because
spin_lock/unlock already provides acquire/release ordering.

-- Marco

Dmitry Vyukov

Oct 6, 2019, 6:00:45 AM
to Christian Brauner, syzbot, bsing...@gmail.com, Marco Elver, LKML, syzkaller-bugs
This seems to be over-pessimistic wrt performance b/c:
1. We always allocate the object and free it on every call, even if
the stats are already allocated, whereas currently we don't.
2. We always allocate the object and free it if thread_group_empty,
whereas currently we don't.
3. We do lock/unlock on every call.

I would suggest to fix the double-checked locking properly.
Locking is not the only correct way to synchronize things. Lock-free
synchronization is also possible. It's more tricky, but it can be
correct and it's supported by KCSAN/KTSAN. It just needs to be
properly implemented and expressed. For some cases we may decide to
switch to locking instead, but it needs to be an explicit decision.

We can fix the current code by doing READ_ONCE on sig->stats (which
implies smp_read_barrier_depends since 4.15), and storing to it with
smp_store_release.

Christian Brauner

Oct 6, 2019, 6:59:38 AM
to Dmitry Vyukov, syzbot, bsing...@gmail.com, Marco Elver, LKML, syzkaller-bugs
As I said in my reply to Marco: I'm integrating this. I just haven't
had time to update the patch.

Christian

Christian Brauner

Oct 6, 2019, 7:52:30 PM
to christia...@ubuntu.com, bsing...@gmail.com, el...@google.com, linux-...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, Dmitry Vyukov
When assigning and testing taskstats in taskstats_exit() there's a race
when writing and reading sig->stats when a thread-group with more than
one thread exits:

cpu0:
thread catches fatal signal and whole thread-group gets taken down
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task reads sig->stats holding sighand lock seeing garbage.

cpu1:
task calls exit_group()
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task takes sighand lock and assigns new stats to sig->stats.

Fix this by using READ_ONCE() and smp_store_release().

Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
Cc: Dmitry Vyukov <dvy...@google.com>
Signed-off-by: Christian Brauner <christia...@ubuntu.com>
---
/* v1 */
Link: https://lore.kernel.org/r/20191005112806.13960...@ubuntu.com

/* v2 */
- Dmitry Vyukov <dvy...@google.com>, Marco Elver <el...@google.com>:
- fix the original double-checked locking using memory barriers
---
kernel/taskstats.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..8ee046e8a792 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,24 +554,25 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
+ struct taskstats *stats_new, *stats;

- if (sig->stats || thread_group_empty(tsk))
- goto ret;
+ stats = READ_ONCE(sig->stats);
+ if (stats || thread_group_empty(tsk))
+ return stats;

/* No problem if kmem_cache_zalloc() fails */
- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

spin_lock_irq(&tsk->sighand->siglock);
if (!sig->stats) {
- sig->stats = stats;
- stats = NULL;
+ smp_store_release(&sig->stats, stats_new);
+ stats_new = NULL;
}
spin_unlock_irq(&tsk->sighand->siglock);

- if (stats)
- kmem_cache_free(taskstats_cache, stats);
-ret:
+ if (stats_new)
+ kmem_cache_free(taskstats_cache, stats_new);
+
return sig->stats;
}

--
2.23.0

Dmitry Vyukov

Oct 7, 2019, 3:31:28 AM
to Christian Brauner, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs
Reviewed-by: Dmitry Vyukov <dvy...@google.com>

Thanks!

Christian Brauner

Oct 7, 2019, 5:29:33 AM
to Dmitry Vyukov, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs
Applied to:
https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=fixes

Should show up in linux-next tomorrow.
Targeting v5.4-rc3.
Cced stable.

Christian

Andrea Parri

Oct 7, 2019, 6:40:49 AM
to Christian Brauner, bsing...@gmail.com, el...@google.com, linux-...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, Dmitry Vyukov
Hi Christian,

On Mon, Oct 07, 2019 at 01:52:16AM +0200, Christian Brauner wrote:
> When assiging and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole thread-group gets taken down
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The tasks reads sig->stats holding sighand lock seeing garbage.
>
> cpu1:
> task calls exit_group()
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The task takes sighand lock and assigns new stats to sig->stats.
>
> Fix this by using READ_ONCE() and smp_store_release().
>
> Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
> Cc: Dmitry Vyukov <dvy...@google.com>
> Signed-off-by: Christian Brauner <christia...@ubuntu.com>

FYI, checkpatch.pl says:

WARNING: memory barrier without comment
#62: FILE: kernel/taskstats.c:568:
+ smp_store_release(&sig->stats, stats_new);

Maybe you can make checkpatch.pl happy ;-) and add a comment to stress
the 'pairing' between this barrier and the added READ_ONCE() (as Dmitry
was alluding to in a previous post)?

Thanks,
Andrea

Christian Brauner

Oct 7, 2019, 6:50:52 AM
to Andrea Parri, bsing...@gmail.com, el...@google.com, linux-...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, Dmitry Vyukov
Of course. I totally forgot the memory barrier documentation requirement.

Christian

Christian Brauner

Oct 7, 2019, 7:01:26 AM
to parri....@gmail.com, bsing...@gmail.com, christia...@ubuntu.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, sta...@vger.kernel.org
When assigning and testing taskstats in taskstats_exit() there's a race
when writing and reading sig->stats when a thread-group with more than
one thread exits:

cpu0:
thread catches fatal signal and whole thread-group gets taken down
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task reads sig->stats holding sighand lock seeing garbage.

cpu1:
task calls exit_group()
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task takes sighand lock and assigns new stats to sig->stats.

Fix this by using READ_ONCE() and smp_store_release().

Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
Fixes: 34ec12349c8a ("taskstats: cleanup ->signal->stats allocation")
Cc: sta...@vger.kernel.org
Signed-off-by: Christian Brauner <christia...@ubuntu.com>
Reviewed-by: Dmitry Vyukov <dvy...@google.com>
Link: https://lore.kernel.org/r/20191006235216.7483...@ubuntu.com
---
/* v1 */
Link: https://lore.kernel.org/r/20191005112806.13960...@ubuntu.com

/* v2 */
- Dmitry Vyukov <dvy...@google.com>, Marco Elver <el...@google.com>:
- fix the original double-checked locking using memory barriers

/* v3 */
- Andrea Parri <parri....@gmail.com>:
- document memory barriers to make checkpatch happy
---
kernel/taskstats.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..978d7931fb65 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,24 +554,27 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
+ struct taskstats *stats_new, *stats;

- if (sig->stats || thread_group_empty(tsk))
- goto ret;
+ /* Pairs with smp_store_release() below. */
+ stats = READ_ONCE(sig->stats);
+ if (stats || thread_group_empty(tsk))
+ return stats;

/* No problem if kmem_cache_zalloc() fails */
- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

spin_lock_irq(&tsk->sighand->siglock);
if (!sig->stats) {
- sig->stats = stats;
- stats = NULL;
+ /* Pairs with READ_ONCE() above. */

Andrea Parri

Oct 7, 2019, 9:18:11 AM
to Christian Brauner, bsing...@gmail.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, sta...@vger.kernel.org
On Mon, Oct 07, 2019 at 01:01:17PM +0200, Christian Brauner wrote:
> When assiging and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole thread-group gets taken down
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The tasks reads sig->stats holding sighand lock seeing garbage.

You meant "without holding sighand lock" here, right?
This pairing suggests that the READ_ONCE() is heading an address
dependency, but I fail to identify it: what is the target memory
access of such a (putative) dependency?


> + if (stats || thread_group_empty(tsk))
> + return stats;
>
> /* No problem if kmem_cache_zalloc() fails */
> - stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> + stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
>
> spin_lock_irq(&tsk->sighand->siglock);
> if (!sig->stats) {
> - sig->stats = stats;
> - stats = NULL;
> + /* Pairs with READ_ONCE() above. */
> + smp_store_release(&sig->stats, stats_new);

This is intended to 'order' the _zalloc() (zero initializazion)
before the update of sig->stats, right? what else am I missing?

Thanks,
Andrea

Christian Brauner

Oct 7, 2019, 9:28:58 AM
to Andrea Parri, bsing...@gmail.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, sta...@vger.kernel.org
On Mon, Oct 07, 2019 at 03:18:04PM +0200, Andrea Parri wrote:
> On Mon, Oct 07, 2019 at 01:01:17PM +0200, Christian Brauner wrote:
> > When assiging and testing taskstats in taskstats_exit() there's a race
> > when writing and reading sig->stats when a thread-group with more than
> > one thread exits:
> >
> > cpu0:
> > thread catches fatal signal and whole thread-group gets taken down
> > do_exit()
> > do_group_exit()
> > taskstats_exit()
> > taskstats_tgid_alloc()
> > The tasks reads sig->stats holding sighand lock seeing garbage.
>
> You meant "without holding sighand lock" here, right?

Correct, thanks for noticing!
Right, I should've mentioned that. I'll change the comment.
But I thought this also paired with smp_read_barrier_depends() that's
placed alongside READ_ONCE()?

Christian

Dmitry Vyukov

Oct 7, 2019, 9:50:59 AM
to Andrea Parri, Christian Brauner, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
I would assume callers of this function access *stats. So the
dependency is between loading stats and accessing *stats.

Christian Brauner

Oct 7, 2019, 9:55:36 AM
to Dmitry Vyukov, Andrea Parri, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
Right, but why READ_ONCE() and not smp_load_acquire here?

Christian

Dmitry Vyukov

Oct 7, 2019, 10:08:55 AM
to Christian Brauner, Andrea Parri, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
Because if all memory accesses we need to order have data dependency
between them, READ_ONCE is enough and is cheaper on some archs (e.g.
ARM).
In our case there is a data dependency between loading of stats and
accessing *stats (only Alpha could reorder that, other arches can't
load via a pointer before loading the pointer itself (sic!)).

Christian Brauner

Oct 7, 2019, 10:10:32 AM
to Dmitry Vyukov, Andrea Parri, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
Right, the except-Alpha-clause is well-known...

Christian

Andrea Parri

Oct 7, 2019, 10:14:39 AM
to Dmitry Vyukov, Christian Brauner, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
> > > static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
> > > {
> > > struct signal_struct *sig = tsk->signal;
> > > - struct taskstats *stats;
> > > + struct taskstats *stats_new, *stats;
> > >
> > > - if (sig->stats || thread_group_empty(tsk))
> > > - goto ret;
> > > + /* Pairs with smp_store_release() below. */
> > > + stats = READ_ONCE(sig->stats);
> >
> > This pairing suggests that the READ_ONCE() is heading an address
> > dependency, but I fail to identify it: what is the target memory
> > access of such a (putative) dependency?
>
> I would assume callers of this function access *stats. So the
> dependency is between loading stats and accessing *stats.

AFAICT, the only caller of the function in 5.4-rc2 is taskstats_exit(),
which 'casts' the return value to a boolean (so I really don't see how
any address dependency could be carried over/relied upon here).

Andrea

Dmitry Vyukov

unread,
Oct 7, 2019, 10:18:39 AM10/7/19
to Andrea Parri, Christian Brauner, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
This does not make sense.

But later taskstats_exit does:

memcpy(stats, tsk->signal->stats, sizeof(*stats));

Perhaps it's supposed to use stats returned by taskstats_tgid_alloc?

Andrea Parri

Oct 8, 2019, 10:20:42 AM
to Dmitry Vyukov, Christian Brauner, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
Seems reasonable to me. If so, replacing the READ_ONCE() in question
with an smp_load_acquire() might be the solution. Thoughts?

Andrea

Christian Brauner

Oct 8, 2019, 10:24:21 AM
to Andrea Parri, Dmitry Vyukov, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
I've done that already in my tree yesterday. I can resend for another
review if you'd prefer.

Christian

Andrea Parri

Oct 8, 2019, 11:27:08 AM
to Christian Brauner, Dmitry Vyukov, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
Oh nice! No need to resend of course. ;D FWIW, I can check it if you
let me know the particular branch/commit (guessing that's somewhere in
git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git, yes?).

Thanks,
Andrea

Christian Brauner

Oct 8, 2019, 11:35:42 AM
to Andrea Parri, Dmitry Vyukov, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable

Andrea Parri

Oct 8, 2019, 11:44:25 AM
to Christian Brauner, Dmitry Vyukov, bsing...@gmail.com, Marco Elver, LKML, syzbot, syzkaller-bugs, stable
You forgot to update the commit msg. It looks good to me modulo that.

Thanks,
Andrea

Christian Brauner

Oct 9, 2019, 7:31:58 AM
to parri....@gmail.com, bsing...@gmail.com, christia...@ubuntu.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
When assigning and testing taskstats in taskstats_exit() there's a race
when writing and reading sig->stats when a thread-group with more than
one thread exits:

cpu0:
thread catches fatal signal and whole thread-group gets taken down
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task reads sig->stats without holding sighand lock seeing garbage.

cpu1:
task calls exit_group()
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task takes sighand lock and assigns new stats to sig->stats.

Fix this by using smp_load_acquire() and smp_store_release().

Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
Fixes: 34ec12349c8a ("taskstats: cleanup ->signal->stats allocation")
Cc: sta...@vger.kernel.org
Signed-off-by: Christian Brauner <christia...@ubuntu.com>
Reviewed-by: Dmitry Vyukov <dvy...@google.com>
Link: https://lore.kernel.org/r/20191006235216.7483...@ubuntu.com
- Dmitry Vyukov <dvy...@google.com>, Marco Elver <el...@google.com>:
- fix the original double-checked locking using memory barriers

/* v3 */
Link: https://lore.kernel.org/r/20191007110117.1096...@ubuntu.com/
- Andrea Parri <parri....@gmail.com>:
- document memory barriers to make checkpatch happy

/* v4 */
- Andrea Parri <parri....@gmail.com>:
- use smp_load_acquire(), not READ_ONCE()
- update commit message
---
kernel/taskstats.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..e6b45315607a 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,24 +554,30 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
+ struct taskstats *stats_new, *stats;

- if (sig->stats || thread_group_empty(tsk))
- goto ret;
+ /* Pairs with smp_store_release() below. */
+ stats = smp_load_acquire(sig->stats);
+ if (stats || thread_group_empty(tsk))
+ return stats;

/* No problem if kmem_cache_zalloc() fails */
- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

spin_lock_irq(&tsk->sighand->siglock);
if (!sig->stats) {
- sig->stats = stats;
- stats = NULL;
+ /*
+ * Pairs with smp_store_release() above and order the
+ * kmem_cache_zalloc().
+ */
+ smp_store_release(&sig->stats, stats_new);

Christian Brauner

Oct 9, 2019, 7:40:45 AM
to parri....@gmail.com, bsing...@gmail.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
On Wed, Oct 09, 2019 at 01:31:34PM +0200, Christian Brauner wrote:
> When assigning and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole thread-group gets taken down
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The task reads sig->stats without holding sighand lock, seeing garbage.
>
> cpu1:
> task calls exit_group()
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The task takes sighand lock and assigns new stats to sig->stats.
>
> Fix this by using smp_load_acquire() and smp_store_release().
>
> Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
> Fixes: 34ec12349c8a ("taskstats: cleanup ->signal->stats allocation")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christian Brauner <christia...@ubuntu.com>
> Reviewed-by: Dmitry Vyukov <dvy...@google.com>

_sigh_, let me resend since I messed this one up.

Christian

Christian Brauner

Oct 9, 2019, 7:48:19 AM
to christia...@ubuntu.com, bsing...@gmail.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
- Andrea Parri <parri....@gmail.com>:
- document memory barriers to make checkpatch happy

/* v4 */
Link: https://lore.kernel.org/r/20191009113134.5171...@ubuntu.com
- Andrea Parri <parri....@gmail.com>:
- use smp_load_acquire(), not READ_ONCE()
- update commit message

/* v5 */
- Andrea Parri <parri....@gmail.com>:
- fix typo in smp_load_acquire()
---
kernel/taskstats.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..6e18fdc4f7c8 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,24 +554,30 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
+ struct taskstats *stats_new, *stats;

- if (sig->stats || thread_group_empty(tsk))
- goto ret;
+ /* Pairs with smp_store_release() below. */
+ stats = smp_load_acquire(&sig->stats);

Marco Elver

Oct 9, 2019, 7:48:41 AM
to Christian Brauner, Andrea Parri, bsing...@gmail.com, Dmitry Vyukov, LKML, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
Acked-by: Marco Elver <el...@google.com>

Note that this now looks almost like what I suggested, except the
return at the end of the function is accessing sig->stats again. In
this case, it seems it's fine assuming sig->stats cannot be written
elsewhere. Just wanted to point it out to make sure it's considered.

Thanks!

Christian Brauner

Oct 9, 2019, 7:53:29 AM
to Marco Elver, Andrea Parri, bsing...@gmail.com, Dmitry Vyukov, LKML, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
Right, I think we all just needed to get our heads clear about what
exactly is happening here. This codepath is not a very prominent one. :)

> return at the end of the function is accessing sig->stats again. In
> this case, it seems it's fine assuming sig->stats cannot be written
> elsewhere. Just wanted to point it out to make sure it's considered.

Yes, I considered that but thanks for mentioning it.

Note that this patch has a bug. It should be
smp_load_acquire(&sig->stats) and not smp_load_acquire(sig->stats).
I accidentally didn't recompile the patchset after the last change I made.
Andrea thankfully caught this.

Thanks!
Christian

Andrea Parri

Oct 9, 2019, 8:08:18 AM
to Christian Brauner, bsing...@gmail.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
On Wed, Oct 09, 2019 at 01:48:09PM +0200, Christian Brauner wrote:
> When assigning and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole thread-group gets taken down
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The task reads sig->stats without holding sighand lock, seeing garbage.
>
> cpu1:
> task calls exit_group()
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The task takes sighand lock and assigns new stats to sig->stats.
>
> Fix this by using smp_load_acquire() and smp_store_release().
>
> Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
> Fixes: 34ec12349c8a ("taskstats: cleanup ->signal->stats allocation")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christian Brauner <christia...@ubuntu.com>
> Reviewed-by: Dmitry Vyukov <dvy...@google.com>

Reviewed-by: Andrea Parri <parri....@gmail.com>

Thanks,
Andrea

Christian Brauner

Oct 9, 2019, 9:26:20 AM
to bsing...@gmail.com, dvy...@google.com, el...@google.com, linux-...@vger.kernel.org, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
On Wed, Oct 09, 2019 at 01:48:09PM +0200, Christian Brauner wrote:
> When assigning and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole thread-group gets taken down
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The task reads sig->stats without holding sighand lock, seeing garbage.
>
> cpu1:
> task calls exit_group()
> do_exit()
> do_group_exit()
> taskstats_exit()
> taskstats_tgid_alloc()
> The task takes sighand lock and assigns new stats to sig->stats.
>
> Fix this by using smp_load_acquire() and smp_store_release().
>
> Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
> Fixes: 34ec12349c8a ("taskstats: cleanup ->signal->stats allocation")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christian Brauner <christia...@ubuntu.com>
> Reviewed-by: Dmitry Vyukov <dvy...@google.com>

Applied to:
https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=taskstats_syzbot

Merged into:
https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=fixes

Targeting v5.4-rc3.

Thanks!
Christian

Christian Brauner

Oct 21, 2019, 7:33:35 AM
to Will Deacon, linux-...@vger.kernel.org, christia...@ubuntu.com, bsing...@gmail.com, dvy...@google.com, el...@google.com, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
When assigning and testing taskstats in taskstats_exit() there's a race
when writing and reading sig->stats when a thread-group with more than
one thread exits:

cpu0:
thread catches fatal signal and whole thread-group gets taken down
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task reads sig->stats without holding sighand lock.

cpu1:
task calls exit_group()
do_exit()
do_group_exit()
taskstats_exit()
taskstats_tgid_alloc()
The task takes sighand lock and assigns new stats to sig->stats.

The first approach used smp_load_acquire() and smp_store_release().
However, after having discussed this it seems that the data dependency
for kmem_cache_alloc() would be fixed by WRITE_ONCE().
Furthermore, the smp_load_acquire() would only manage to order the stats
check before the thread_group_empty() check. So it seems just using
READ_ONCE() and WRITE_ONCE() will do the job and I wanted to bring this
up for discussion at least.

Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
Fixes: 34ec12349c8a ("taskstats: cleanup ->signal->stats allocation")
Cc: Will Deacon <wi...@kernel.org>
Cc: Marco Elver <el...@google.com>
Cc: Andrea Parri <parri....@gmail.com>
Cc: Dmitry Vyukov <dvy...@google.com>
Cc: sta...@vger.kernel.org
Signed-off-by: Christian Brauner <christia...@ubuntu.com>
---
/* v1 */
Link: https://lore.kernel.org/r/20191005112806.13960...@ubuntu.com

/* v2 */
Link: https://lore.kernel.org/r/20191006235216.7483...@ubuntu.com
- Dmitry Vyukov <dvy...@google.com>, Marco Elver <el...@google.com>:
- fix the original double-checked locking using memory barriers

/* v3 */
Link: https://lore.kernel.org/r/20191007110117.1096...@ubuntu.com
- Andrea Parri <parri....@gmail.com>:
- document memory barriers to make checkpatch happy

/* v4 */
Link: https://lore.kernel.org/r/20191009113134.5171...@ubuntu.com
- Andrea Parri <parri....@gmail.com>:
- use smp_load_acquire(), not READ_ONCE()
- update commit message

/* v5 */
Link: https://lore.kernel.org/r/20191009114809.8643...@ubuntu.com
- Andrea Parri <parri....@gmail.com>:
- fix typo in smp_load_acquire()

/* v6 */
- Christian Brauner <christia...@ubuntu.com>:
- bring up READ_ONCE()/WRITE_ONCE() approach for discussion
---
kernel/taskstats.c | 26 +++++++++++++++-----------
1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..111bb4139aa2 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,25 +554,29 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
+ struct taskstats *stats_new, *stats;

- if (sig->stats || thread_group_empty(tsk))
- goto ret;
+ /* Pairs with WRITE_ONCE() below. */
+ stats = READ_ONCE(sig->stats);
+ if (stats || thread_group_empty(tsk))
+ return stats;

/* No problem if kmem_cache_zalloc() fails */
- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

spin_lock_irq(&tsk->sighand->siglock);
- if (!sig->stats) {
- sig->stats = stats;
- stats = NULL;
+ if (!stats) {
+ stats = stats_new;
+ /* Pairs with READ_ONCE() above. */
+ WRITE_ONCE(sig->stats, stats_new);
+ stats_new = NULL;
}
spin_unlock_irq(&tsk->sighand->siglock);

- if (stats)
- kmem_cache_free(taskstats_cache, stats);
-ret:
- return sig->stats;
+ if (stats_new)
+ kmem_cache_free(taskstats_cache, stats_new);
+
+ return stats;
}

/* Send pid data out on exit */
--
2.23.0

Rasmus Villemoes

Oct 21, 2019, 8:19:04 AM
to Christian Brauner, Will Deacon, linux-...@vger.kernel.org, bsing...@gmail.com, dvy...@google.com, el...@google.com, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
On 21/10/2019 13.33, Christian Brauner wrote:
> The first approach used smp_load_acquire() and smp_store_release().
> However, after having discussed this it seems that the data dependency
> for kmem_cache_alloc() would be fixed by WRITE_ONCE().
> Furthermore, the smp_load_acquire() would only manage to order the stats
> check before the thread_group_empty() check. So it seems just using
> READ_ONCE() and WRITE_ONCE() will do the job and I wanted to bring this
> up for discussion at least.
>
No idea about the memory ordering issues, but don't you need to
load/check sig->stats again? Otherwise it seems that two threads might
both see !sig->stats, both allocate a stats_new, and both
unconditionally in turn assign their stats_new to sig->stats. Then the
first assignment ends up becoming a memory leak (and any writes through
that pointer done by the caller end up in /dev/null...)

Rasmus
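Rasmus's leak scenario is worth pinning down. Below is a minimal userspace sketch of the pattern with the second check added, using a pthread mutex in place of siglock and GCC `__atomic` builtins in place of the kernel primitives; all names here are invented for illustration, this is not the kernel code:

```c
#include <pthread.h>
#include <stdlib.h>

/* Invented userspace stand-ins for sig->stats and sighand->siglock. */
struct taskstats_demo { long ac_etime; };

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static struct taskstats_demo *demo_stats;

struct taskstats_demo *demo_tgid_alloc(void)
{
	struct taskstats_demo *stats_new, *stats;

	/* First check, outside the lock (stand-in for smp_load_acquire()). */
	stats = __atomic_load_n(&demo_stats, __ATOMIC_ACQUIRE);
	if (stats)
		return stats;

	/* Speculative allocation, as in taskstats_tgid_alloc(). */
	stats_new = calloc(1, sizeof(*stats_new));

	pthread_mutex_lock(&demo_lock);
	/* Second check, under the lock: without it, two racing callers
	 * would both publish and the first allocation would leak. */
	stats = demo_stats;
	if (!stats) {
		stats = stats_new;
		__atomic_store_n(&demo_stats, stats_new, __ATOMIC_RELEASE);
		stats_new = NULL;
	}
	pthread_mutex_unlock(&demo_lock);

	/* Loser of the race discards its private copy. */
	free(stats_new);
	return stats;
}
```

Every caller ends up with the same published pointer, and the loser of the race frees its speculative allocation instead of overwriting the winner's.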

Christian Brauner

Oct 21, 2019, 9:06:11 AM
to Rasmus Villemoes, Will Deacon, linux-...@vger.kernel.org, bsing...@gmail.com, dvy...@google.com, el...@google.com, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
Trigger finger too fast. I guess you're thinking of something like:

diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..c4e1ed11e785 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,25 +554,27 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
- struct taskstats *stats;
+ struct taskstats *stats_new, *stats;

- if (sig->stats || thread_group_empty(tsk))
- goto ret;
+ stats = READ_ONCE(sig->stats);
+ if (stats || thread_group_empty(tsk))
+ return stats;

- /* No problem if kmem_cache_zalloc() fails */
- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

spin_lock_irq(&tsk->sighand->siglock);
- if (!sig->stats) {
- sig->stats = stats;
- stats = NULL;
+ stats = READ_ONCE(sig->stats);
+ if (!stats) {
+ stats = stats_new;
+ WRITE_ONCE(sig->stats, stats_new);
+ stats_new = NULL;
}

Andrea Parri

Oct 23, 2019, 8:16:11 AM
to Christian Brauner, Will Deacon, linux-...@vger.kernel.org, bsing...@gmail.com, dvy...@google.com, el...@google.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
Mmh, the RELEASE was intended to order the memory initialization in
kmem_cache_zalloc() with the later ->stats pointer assignment; AFAICT,
there is no data dependency between such memory accesses.

Correspondingly, the ACQUIRE was intended to order the ->stats pointer
load with later, _independent dereferences of the same pointer; the
latter are, e.g., in taskstats_exit() (but not thread_group_empty()).

Looking again, I see that fill_tgid_exit()'s dereferences of ->stats
are protected by ->siglock: maybe you meant to rely on such a critical
section pairing with the critical section in taskstats_tgid_alloc()?

That memcpy(-, tsk->signal->stats, -) at the end of taskstats_exit()
also bugs me: could these dereferences of ->stats happen concurrently
with other stores to the same memory locations?

Thanks,
Andrea

Dmitry Vyukov

Oct 23, 2019, 8:40:08 AM
to Andrea Parri, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
I agree. This needs smp_store_release. The latest version that I
looked at contained:
smp_store_release(&sig->stats, stats_new);

> Correspondingly, the ACQUIRE was intended to order the ->stats pointer
> load with later, _independent dereferences of the same pointer; the
> latter are, e.g., in taskstats_exit() (but not thread_group_empty()).

How these later loads can be completely independent of the pointer
value? They need to obtain the pointer value from somewhere. And this
can only be done by loading it. And if a thread loads a pointer and
then dereferences that pointer, that's a data/address dependency and
we assume this is now covered by READ_ONCE.
Or these later loads of the pointer can also race with the store? If
so, I think they also need to use READ_ONCE (rather than turn this earlier
pointer load into acquire).

Christian Brauner

Oct 23, 2019, 9:11:59 AM
to Dmitry Vyukov, Will Deacon, Andrea Parri, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
This is what really makes me wonder. Can the compiler really re-order
the kmem_cache_zalloc() call with the assignment? If that's really the
case then shouldn't all allocation functions have compiler barriers in
them? This then seems like a very generic problem.

>
> > Correspondingly, the ACQUIRE was intended to order the ->stats pointer
> > load with later, _independent dereferences of the same pointer; the
> > latter are, e.g., in taskstats_exit() (but not thread_group_empty()).
>
> How these later loads can be completely independent of the pointer
> value? They need to obtain the pointer value from somewhere. And this
> can only be done by loaded it. And if a thread loads a pointer and
> then dereferences that pointer, that's a data/address dependency and
> we assume this is now covered by READ_ONCE.
> Or these later loads of the pointer can also race with the store? If

To clarify, later loads as in taskstats_exit() and thread_group_empty(),
not the later load in the double-checked locking case.

> so, I think they also need to use READ_ONCE (rather than turn this earlier
> pointer load into acquire).

Using READ_ONCE() in the alloc, taskstats_exit(), and
thread_group_empty() cases.

Christian

Dmitry Vyukov

Oct 23, 2019, 9:20:57 AM
to Christian Brauner, Will Deacon, Andrea Parri, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
Yes.
Not sure about the compiler, but the hardware definitely can. And generally
one does not care whether it's the compiler or the hardware.

> If that's really the
> case then shouldn't all allocation functions have compiler barriers in
> them? This then seems like a very generic problem.

No.
One puts memory barriers into synchronization primitives.
This equally affects memset's, memcpy's and in fact all normal stores.
Adding a memory barrier to every normal store is not the solution to
this. The memory barrier is done before publication of the memory. And
we already have smp_store_release for this. So if one doesn't publish
objects with a plain store (which breaks all possible rules anyways)
and uses a proper primitive, there is no problem.
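Dmitry's publication argument can be sketched in a few lines of C11, with stdatomic.h standing in for smp_store_release()/smp_load_acquire(); the struct and names are made up for the sketch:

```c
#include <stdatomic.h>
#include <stdlib.h>

struct payload { int a; int b; };

static _Atomic(struct payload *) published;

/* Writer: fully initialize the object, then publish the pointer.
 * The release store forbids the initialization from being reordered
 * after the publication, by either compiler or hardware. */
void publish(void)
{
	struct payload *p = malloc(sizeof(*p));

	p->a = 1;
	p->b = 2;
	atomic_store_explicit(&published, p, memory_order_release);
}

/* Reader: an acquire load guarantees that if the pointer is seen,
 * the initialized fields are seen too. */
int read_sum(void)
{
	struct payload *p =
		atomic_load_explicit(&published, memory_order_acquire);

	return p ? p->a + p->b : -1;
}
```

A plain (unmarked) pointer store in publish() would be exactly the "publication with a plain store" case the paragraph above warns against.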

Andrea Parri

Oct 24, 2019, 7:32:03 AM
to Dmitry Vyukov, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
> How these later loads can be completely independent of the pointer
> value? They need to obtain the pointer value from somewhere. And this
> can only be done by loading it. And if a thread loads a pointer and
> then dereferences that pointer, that's a data/address dependency and
> we assume this is now covered by READ_ONCE.

The "dependency" I was considering here is a dependency _between the
load of sig->stats in taskstats_tgid_alloc() and the (program-order)
later loads of *(sig->stats) in taskstats_exit(). Roughly speaking,
such a dependency should correspond to a dependency chain at the asm
or registers level from the first load to the later loads; e.g., in:

Thread [register r0 contains the address of sig->stats]

A: LOAD r1,[r0] // LOAD_ACQUIRE sig->stats
...
B: LOAD r2,[r0] // LOAD *(sig->stats)
C: LOAD r3,[r2]

there would be no such dependency from A to C. Compare, e.g., with:

Thread [register r0 contains the address of sig->stats]

A: LOAD r1,[r0] // LOAD_ACQUIRE sig->stats
...
C: LOAD r3,[r1] // LOAD *(sig->stats)

AFAICT, there's no guarantee that the compilers will generate such a
dependency from the code under discussion.
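Rendered as C, the two shapes above differ in whether the later dereference uses the value the acquire load returned (dependency preserved, A -> C) or a fresh load of the shared pointer (dependency broken, A -> B -> C). A hedged C11 sketch with invented names:

```c
#include <stdatomic.h>

struct stats_demo { int field; };

static _Atomic(struct stats_demo *) stats_ptr;
static struct stats_demo demo_instance = { .field = 7 };

void demo_publish(void)
{
	atomic_store_explicit(&stats_ptr, &demo_instance,
			      memory_order_release);
}

/* A -> C: the dereference uses the register holding the acquire
 * load's result, so the acquire ordering covers *p as well. */
int dep_preserved(void)
{
	struct stats_demo *p =
		atomic_load_explicit(&stats_ptr, memory_order_acquire);  /* A */

	return p ? p->field : 0;                                         /* C */
}

/* A -> B -> C: the acquire load is only used for the NULL check;
 * the pointer is then reloaded, so the later dereference depends
 * on B, not on the acquire load A. */
int dep_broken(void)
{
	struct stats_demo *p =
		atomic_load_explicit(&stats_ptr, memory_order_acquire);  /* A */

	if (!p)
		return 0;
	p = atomic_load_explicit(&stats_ptr, memory_order_relaxed);      /* B */
	return p->field;                                                 /* C */
}
```

Single-threaded, both functions behave identically; the difference only matters to a concurrent observer, which is exactly why this class of bug is easy to miss.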


> Or these later loads of the pointer can also race with the store? If
> so, I think they also need to use READ_ONCE (rather than turn this earlier
> pointer load into acquire).

AFAICT, _if the LOAD_ACQUIRE reads from the mentioned STORE_RELEASE,
then the former must induce enough synchronization to eliminate data
races (as well as any undesired re-ordering).

TBH, I am not familiar enough with the underlying logic of this code
to say whether that "if .. reads from .." pre-condition holds by the
time those *(sig->stats) execute.

Thanks,
Andrea

Dmitry Vyukov

Oct 24, 2019, 7:51:34 AM
to Andrea Parri, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
On Thu, Oct 24, 2019 at 1:32 PM Andrea Parri <parri....@gmail.com> wrote:
>
> > How these later loads can be completely independent of the pointer
> > value? They need to obtain the pointer value from somewhere. And this
> > can only be done by loading it. And if a thread loads a pointer and
> > then dereferences that pointer, that's a data/address dependency and
> > we assume this is now covered by READ_ONCE.
>
> The "dependency" I was considering here is a dependency _between the
> load of sig->stats in taskstats_tgid_alloc() and the (program-order)
> later loads of *(sig->stats) in taskstats_exit(). Roughly speaking,
> such a dependency should correspond to a dependency chain at the asm
> or registers level from the first load to the later loads; e.g., in:
>
> Thread [register r0 contains the address of sig->stats]
>
> A: LOAD r1,[r0] // LOAD_ACQUIRE sig->stats
> ...
> B: LOAD r2,[r0] // LOAD *(sig->stats)
> C: LOAD r3,[r2]
>
> there would be no such dependency from A to C. Compare, e.g., with:
>
> Thread [register r0 contains the address of sig->stats]
>
> A: LOAD r1,[r0] // LOAD_ACQUIRE sig->stats
> ...
> C: LOAD r3,[r1] // LOAD *(sig->stats)
>
> AFAICT, there's no guarantee that the compilers will generate such a
> dependency from the code under discussion.

Fixing this by making A ACQUIRE looks like a somewhat weird code pattern
to me (though correct). B is what loads the address used to read
indirect data, so B ought to be ACQUIRE (or LOAD-DEPENDS which we get
from READ_ONCE).

What you are suggesting is:

addr = ptr.load(memory_order_acquire);
if (addr) {
addr = ptr.load(memory_order_relaxed);
data = *addr;
}

whereas the canonical/non-convoluted form of this pattern is:

addr = ptr.load(memory_order_consume);
if (addr)
data = *addr;

Moreover, the second load of ptr is not even atomic in our case, so it
is subject to another data race?

Andrea Parri

Oct 24, 2019, 9:05:10 AM
to Dmitry Vyukov, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
No, I'd rather be suggesting:

addr = ptr.load(memory_order_acquire);
if (addr)
data = *addr;

since I'd not expect any form of encouragement to rely on "consume" or
on "READ_ONCE() + true-address-dependency" from myself. ;-)

IAC, v6 looks more like:

addr = ptr.load(memory_order_consume);
if (!!addr)
*ptr = 1;
data = *ptr;

to me (hence my comments/questions ...).

Thanks,
Andrea

Dmitry Vyukov

Oct 24, 2019, 9:14:02 AM
to Andrea Parri, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
But why? I think kernel contains lots of such cases and it seems to be
officially documented by the LKMM:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/memory-model/Documentation/explanation.txt
address dependencies and ppo
Am I missing something? Why do we need acquire with address dependency?

Andrea Parri

Oct 24, 2019, 9:43:26 AM
to Dmitry Vyukov, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
> But why? I think kernel contains lots of such cases and it seems to be
> officially documented by the LKMM:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/memory-model/Documentation/explanation.txt
> address dependencies and ppo

Well, that same documentation also alerts about some of the pitfalls
developers can incur while relying on dependencies. I'm sure you're
more than aware of some of the debate surrounding these issues.

Andrea

Dmitry Vyukov

Oct 24, 2019, 9:58:54 AM
to Andrea Parri, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
I thought that LKMM was finally supposed to stop all these
centi-threads around subtle details of ordering. And now we finally
have it. And it says that using address-dependencies is legal. And you
are one of the authors. And now you are arguing here that we better
not use it :) Can we have some black/white yes/no for code correctness
reflected in LKMM please :) If we are banning address dependencies,
don't we need to fix all of rcu uses?

Andrea Parri

Oct 24, 2019, 10:40:58 AM
to Dmitry Vyukov, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
Current limitations of the LKMM are listed in tools/memory-model/README
(and I myself discussed a number of them at LPC recently); the relevant
point here seems to be:

1. Compiler optimizations are not accurately modeled. Of course,
the use of READ_ONCE() and WRITE_ONCE() limits the compiler's
ability to optimize, but under some circumstances it is possible
for the compiler to undermine the memory model. [...]

Note that this limitation in turn limits LKMM's ability to
accurately model address, control, and data dependencies.

A less elegant, but hopefully more effective, way to phrase such point
is maybe "feel free to rely on dependencies, but then do not blame the
LKMM authors please". ;-)

Thanks,
Andrea

Dmitry Vyukov

Oct 24, 2019, 10:49:47 AM
to Andrea Parri, Christian Brauner, Will Deacon, LKML, bsing...@gmail.com, Marco Elver, stable, syzbot, syzkaller-bugs
We are not going to blame LKMM authors :)

Acquire will introduce actual hardware barrier on arm/power/etc. Maybe
it does not matter here. But I feel if we start replacing all
load-depends/rcu with acquire, it will be a noticeable overhead. So what
do we do in the context of the whole kernel?

Balbir Singh

Nov 5, 2019, 7:10:01 PM
to syzbot, el...@google.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
On Fri, 2019-10-04 at 21:26 -0700, syzbot wrote:
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit: b4bd9343 x86, kcsan: Enable KCSAN for x86
> git tree: https://github.com/google/ktsan.git kcsan
> console output: https://syzkaller.appspot.com/x/log.txt?x=125329db600000
> kernel config: https://syzkaller.appspot.com/x/.config?x=c0906aa620713d80
> dashboard link: https://syzkaller.appspot.com/bug?extid=c5d03165a1bd1dead0c1
> compiler: gcc (GCC) 9.0.0 20181231 (experimental)
>
> Unfortunately, I don't have any reproducer for this crash yet.
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+c5d031...@syzkaller.appspotmail.com
>
> ==================================================================
> BUG: KCSAN: data-race in taskstats_exit / taskstats_exit
>
> write to 0xffff8881157bbe10 of 8 bytes by task 7951 on cpu 0:
> taskstats_tgid_alloc kernel/taskstats.c:567 [inline]
> taskstats_exit+0x6b7/0x717 kernel/taskstats.c:596
> do_exit+0x2c2/0x18e0 kernel/exit.c:864
> do_group_exit+0xb4/0x1c0 kernel/exit.c:983
> get_signal+0x2a2/0x1320 kernel/signal.c:2734
> do_signal+0x3b/0xc00 arch/x86/kernel/signal.c:815
> exit_to_usermode_loop+0x250/0x2c0 arch/x86/entry/common.c:159
> prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
> syscall_return_slowpath arch/x86/entry/common.c:274 [inline]
> do_syscall_64+0x2d7/0x2f0 arch/x86/entry/common.c:299
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
>
> read to 0xffff8881157bbe10 of 8 bytes by task 7949 on cpu 1:
> taskstats_tgid_alloc kernel/taskstats.c:559 [inline]
> taskstats_exit+0xb2/0x717 kernel/taskstats.c:596
> do_exit+0x2c2/0x18e0 kernel/exit.c:864
> do_group_exit+0xb4/0x1c0 kernel/exit.c:983
> __do_sys_exit_group kernel/exit.c:994 [inline]
> __se_sys_exit_group kernel/exit.c:992 [inline]
> __x64_sys_exit_group+0x2e/0x30 kernel/exit.c:992
> do_syscall_64+0xcf/0x2f0 arch/x86/entry/common.c:296
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
>

Sorry I've been away and just catching up with email

I don't think this is a bug. If I interpret the report correctly, it shows a
race:

static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
struct signal_struct *sig = tsk->signal;
struct taskstats *stats;

#1 if (sig->stats || thread_group_empty(tsk)) <- the check of sig->stats
goto ret;

/* No problem if kmem_cache_zalloc() fails */
stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

spin_lock_irq(&tsk->sighand->siglock);
if (!sig->stats) {
#2 sig->stats = stats; <- here in setting sig->stats
stats = NULL;
}
spin_unlock_irq(&tsk->sighand->siglock);

if (stats)
kmem_cache_free(taskstats_cache, stats);
ret:
return sig->stats;
}

The worst case scenario is that we might see sig->stats as being NULL when two
threads belonging to the same tgid exit concurrently. We do free up stats if we
got that wrong.

Am I misinterpreting the report?

Balbir Singh.


Balbir Singh

Nov 5, 2019, 7:27:12 PM
to Christian Brauner, syzbot+c5d031...@syzkaller.appspotmail.com, el...@google.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
On Sat, 2019-10-05 at 13:28 +0200, Christian Brauner wrote:
> When assigning and testing taskstats in taskstats_exit() there's a race
> around writing and reading sig->stats.
>
> cpu0:
> task calls exit()
> do_exit()
> -> taskstats_exit()
> -> taskstats_tgid_alloc()
> The task takes sighand lock and assigns new stats to sig->stats.
>
> cpu1:
> task catches signal
> do_exit()
> -> taskstats_tgid_alloc()
> -> taskstats_exit()
> The task reads sig->stats __without__ holding sighand lock, seeing
> garbage.
>
> Fix this by taking sighand lock when reading sig->stats.
>

Hi

Sorry I am just catching up with my inbox. Why is sig->stats garbage? I'll
need to go back and look; unsigned long (pointer) reads are atomic, and the
expectation is that they would be NULL initially and then non-NULL for thread
groups when first allocated. Maybe I am just being dense in the morning -
what am I missing?

Balbir Singh.



Marco Elver

Nov 6, 2019, 5:24:06 AM
to Balbir Singh, syzbot, LKML, syzkall...@googlegroups.com
The plain concurrent reads/writes are a data race, which may manifest
in various undefined behaviour due to compiler optimizations [1, 2].
Note that, "data race" does not necessarily imply "race condition";
some data races are race conditions (usually the more interesting
bugs) -- but not *all* data races are race conditions (sometimes
referred to as "benign races"). KCSAN reports data races according to
the LKMM.
[1] https://lwn.net/Articles/793253/
[2] https://lwn.net/Articles/799218/

If there is no race condition here that warrants heavier
synchronization (locking etc.), we still have the data race which
needs fixing by using marked atomic operations (READ_ONCE, WRITE_ONCE,
atomic_t, etc.). We also need to consider memory ordering requirements
(do we need smp_*mb(), smp_load_acquire/smp_store_release, ..)?

In the case here, the pattern is double-checked locking, which is
incorrect without atomic operations and the correct memory ordering.
There is a lengthy discussion here:
https://lore.kernel.org/lkml/20191021113327.22365...@ubuntu.com/

-- Marco
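For readers unfamiliar with the marking Marco refers to: READ_ONCE()/WRITE_ONCE() boil down to volatile accesses, which force the compiler to emit exactly one untorn load or store and forbid refetching or caching the value. A simplified userspace rendition follows; the kernel's real macros are more involved and handle additional cases:

```c
/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE():
 * the volatile cast makes the compiler perform exactly one access
 * of the full width, instead of tearing, refetching, or eliding it. */
#define READ_ONCE(x)      (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *)&(x) = (v))

static long shared;

long reader(void)
{
	return READ_ONCE(shared);   /* marked: single, untorn load */
}

void writer(long v)
{
	WRITE_ONCE(shared, v);      /* marked: single, untorn store */
}
```

Note that marking alone only removes the data race at the marked access; it provides no ordering against surrounding accesses, which is why the smp_load_acquire()/smp_store_release() discussion above is a separate question.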

Balbir Singh

Nov 7, 2019, 5:39:39 AM
to Marco Elver, syzbot, LKML, syzkall...@googlegroups.com
Thanks for the explanation

I presumed that accessing a pointer is atomic, in this case
sig->stats; I'll double-check my assumption

Balbir

Balbir Singh

Nov 7, 2019, 7:54:44 PM
to Marco Elver, syzbot, LKML, syzkall...@googlegroups.com
I am still not convinced unless someone can prove that unsigned long reads
are non-atomic. Acquire/release and barrier semantics don't matter because
the code deals with the race inside of a lock if the read was spurious. The
assumption is based on the fact that sig->stats can be NULL or the
kzalloc'ed value in all cases.

Balbir Singh.

Dmitry Vyukov

Nov 8, 2019, 3:55:18 AM
to Balbir Singh, Marco Elver, syzbot, LKML, syzkaller-bugs
None of the relevant standards guarantee this. The C standard very
explicitly states the opposite - any data race renders the behavior of the
program undefined. The LKMM does not give any defined behavior to data
races either. QED ;)

> Acquire/release and barrier semantics don't matter because the
> code deals with the race inside of a lock if the read was spurious. The
> assumption is based on the fact that sig->stats can be NULL or the kzalloc'ed
> value in all cases.
>
> Balbir Singh.
>
> --
> You received this message because you are subscribed to the Google Groups "syzkaller-bugs" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to syzkaller-bug...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/syzkaller-bugs/acd6b0d98a7ebcb4ead9b263ec5c568c5a747166.camel%40gmail.com.

Balbir Singh

Nov 8, 2019, 10:42:52 PM
to Dmitry Vyukov, Marco Elver, syzbot, LKML, syzkaller-bugs
Fair enough! I am going to have a read of the specification; in the
meantime, I agree changes are needed based on what you've stated
above.

Balbir

Will Deacon

Nov 29, 2019, 12:56:11 PM
to Christian Brauner, Rasmus Villemoes, linux-...@vger.kernel.org, bsing...@gmail.com, dvy...@google.com, el...@google.com, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
On Mon, Oct 21, 2019 at 03:04:18PM +0200, Christian Brauner wrote:
This probably wants to be an acquire, since both the memcpy() later on
in taskstats_exit() and the accesses in {b,x}acct_add_tsk() appear to
read from the taskstats structure without the sighand->siglock held and
therefore may miss zeroed allocation from the zalloc() below, I think.

> + if (stats || thread_group_empty(tsk))
> + return stats;
>
> - /* No problem if kmem_cache_zalloc() fails */
> - stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> + stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
>
> spin_lock_irq(&tsk->sighand->siglock);
> - if (!sig->stats) {
> - sig->stats = stats;
> - stats = NULL;
> + stats = READ_ONCE(sig->stats);

You hold the spinlock here, so I don't think you need the READ_ONCE().

> + if (!stats) {
> + stats = stats_new;
> + WRITE_ONCE(sig->stats, stats_new);

You probably want a release here to publish the zeroes from the zalloc()
(back to my first comment). With those changes:

Reviewed-by: Will Deacon <wi...@kernel.org>

However, this caused me to look at do_group_exit() and we appear to have
racy accesses on sig->flags there thanks to signal_group_exit(). I worry
that might run quite deep, and can probably be looked at separately.

Will

Will Deacon

Nov 29, 2019, 12:57:58 PM
to Christian Brauner, linux-...@vger.kernel.org, bsing...@gmail.com, dvy...@google.com, el...@google.com, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
On Mon, Oct 21, 2019 at 01:33:27PM +0200, Christian Brauner wrote:
> When assigning and testing taskstats in taskstats_exit() there's a race
> when writing and reading sig->stats when a thread-group with more than
> one thread exits:
>
> cpu0:
> thread catches fatal signal and whole thread-group gets taken down
> do_exit()
> do_group_exit()

Nit: I don't think this is the signal-handling path.

> taskstats_exit()
> taskstats_tgid_alloc()
> The tasks reads sig->stats without holding sighand lock.
>
> cpu1:
> task calls exit_group()
> do_exit()
> do_group_exit()

Nit: These ^^ seem to be the wrong way round.

Will

Christian Brauner

Nov 30, 2019, 10:09:00 AM
to Will Deacon, Rasmus Villemoes, linux-...@vger.kernel.org, bsing...@gmail.com, dvy...@google.com, el...@google.com, parri....@gmail.com, sta...@vger.kernel.org, syzbot+c5d031...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
Thanks, this is basically what we had in v5. I'll rework and send this
after the merge window closes.

>
> However, this caused me to look at do_group_exit() and we appear to have
> racy accesses on sig->flags there thanks to signal_group_exit(). I worry
> that might run quite deep, and can probably be looked at separately.

Yeah, we should look into this but separate from this patch.

Thanks for taking a look at this! Much appreciated!
Christian