[syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write


syzbot

6:04 AM (9 hours ago)
to cgr...@vger.kernel.org, han...@cmpxchg.org, linux-...@vger.kernel.org, mko...@suse.com, syzkall...@googlegroups.com, t...@kernel.org
Hello,

syzbot found the following issue on:

HEAD commit: 591cd656a1bf Linux 7.0-rc7
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=114a36ba580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16cb33da580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=12648bd6580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/87739bfd4864/disk-591cd656.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/56be68f80e4e/vmlinux-591cd656.xz
kernel image: https://storage.googleapis.com/syzbot-assets/08638816d82a/bzImage-591cd656.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+33e571...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in pressure_write+0xa4/0x210 kernel/cgroup/cgroup.c:4011
Read of size 8 at addr ffff88803160f208 by task syz.3.1283/9352

CPU: 0 UID: 0 PID: 9352 Comm: syz.3.1283 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0xba/0x230 mm/kasan/report.c:482
kasan_report+0x117/0x150 mm/kasan/report.c:595
pressure_write+0xa4/0x210 kernel/cgroup/cgroup.c:4011
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4311
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f73f541c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f73f4a76028 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f73f5695fa0 RCX: 00007f73f541c819
RDX: 000000000000002f RSI: 0000200000000080 RDI: 0000000000000004
RBP: 00007f73f54b2c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f73f5696038 R14: 00007f73f5695fa0 R15: 00007ffe5707b838
</TASK>

Allocated by task 9352:
kasan_save_stack mm/kasan/common.c:57 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
poison_kmalloc_redzone mm/kasan/common.c:398 [inline]
__kasan_kmalloc+0x93/0xb0 mm/kasan/common.c:415
kasan_kmalloc include/linux/kasan.h:263 [inline]
__kmalloc_cache_noprof+0x3a6/0x690 mm/slub.c:5380
kmalloc_noprof include/linux/slab.h:950 [inline]
kzalloc_noprof include/linux/slab.h:1188 [inline]
cgroup_file_open+0x90/0x3a0 kernel/cgroup/cgroup.c:4256
kernfs_fop_open+0x9eb/0xcb0 fs/kernfs/file.c:724
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 9353:
kasan_save_stack mm/kasan/common.c:57 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
poison_slab_object mm/kasan/common.c:253 [inline]
__kasan_slab_free+0x5c/0x80 mm/kasan/common.c:285
kasan_slab_free include/linux/kasan.h:235 [inline]
slab_free_hook mm/slub.c:2685 [inline]
slab_free mm/slub.c:6165 [inline]
kfree+0x1c1/0x6c0 mm/slub.c:6483
cgroup_file_release+0xd6/0x100 kernel/cgroup/cgroup.c:4283
kernfs_release_file fs/kernfs/file.c:764 [inline]
kernfs_drain_open_files+0x392/0x720 fs/kernfs/file.c:834
kernfs_drain+0x470/0x600 fs/kernfs/dir.c:525
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff88803160f200
which belongs to the cache kmalloc-192 of size 192
The buggy address is located 8 bytes inside of
freed 192-byte region [ffff88803160f200, ffff88803160f2c0)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x3160f
flags: 0x80000000000000(node=0|zone=1)
page_type: f5(slab)
raw: 0080000000000000 ffff88813fe1a3c0 dead000000000100 dead000000000122
raw: 0000000000000000 0000000800100010 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0xd2cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 1, tgid 1 (swapper/0), ts 17504889254, free_ts 0
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x231/0x280 mm/page_alloc.c:1889
prep_new_page mm/page_alloc.c:1897 [inline]
get_page_from_freelist+0x28bb/0x2950 mm/page_alloc.c:3962
__alloc_frozen_pages_noprof+0x18d/0x380 mm/page_alloc.c:5250
alloc_slab_page mm/slub.c:3292 [inline]
allocate_slab+0x77/0x660 mm/slub.c:3481
new_slab mm/slub.c:3539 [inline]
refill_objects+0x334/0x3c0 mm/slub.c:7175
refill_sheaf mm/slub.c:2812 [inline]
__pcs_replace_empty_main+0x35c/0x710 mm/slub.c:4615
alloc_from_pcs mm/slub.c:4717 [inline]
slab_alloc_node mm/slub.c:4851 [inline]
__kmalloc_cache_noprof+0x44e/0x690 mm/slub.c:5375
kmalloc_noprof include/linux/slab.h:950 [inline]
kzalloc_noprof include/linux/slab.h:1188 [inline]
call_usermodehelper_setup+0x8e/0x270 kernel/umh.c:362
kobject_uevent_env+0x65b/0x9e0 lib/kobject_uevent.c:628
device_add+0x557/0xb80 drivers/base/core.c:3672
netdev_register_kobject+0x178/0x310 net/core/net-sysfs.c:2358
register_netdevice+0x12d7/0x1d10 net/core/dev.c:11441
register_netdev+0x40/0x60 net/core/dev.c:11557
nr_proto_init+0x14a/0x720 net/netrom/af_netrom.c:1424
do_one_initcall+0x250/0x8d0 init/main.c:1382
do_initcall_level+0x104/0x190 init/main.c:1444
page_owner free stack trace missing

Memory state around the buggy address:
ffff88803160f100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff88803160f180: 00 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc
>ffff88803160f200: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88803160f280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
ffff88803160f300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
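The report above shows the race: cgroup_file_write() (here via pressure_write()) dereferences of->priv while a concurrent rmdir frees the same ctx through cgroup_file_release(). As an editor's illustrative userspace model only (hypothetical names, not kernel code), the shape of the bug and of a lock-based fix looks like this:

```python
import threading

# Illustrative model of the race in the KASAN report (NOT kernel code):
# the "write" side must not observe the ctx after the "release" side has
# freed it, so both sides take the same per-open-file lock.

class OpenFile:
    def __init__(self):
        self.mutex = threading.Lock()
        self.priv = {"released": False}   # stands in for the kmalloc'd ctx

def pressure_write(of):
    # use side: read of->priv and check it is still live
    with of.mutex:
        ctx = of.priv
        assert ctx is not None and not ctx["released"], "use-after-free"

def file_release(of):
    # free side: invalidate the object and clear the pointer under the lock
    with of.mutex:
        of.priv["released"] = True
        of.priv = None

of = OpenFile()
writers = [threading.Thread(target=pressure_write, args=(of,)) for _ in range(4)]
for t in writers:
    t.start()
for t in writers:
    t.join()
file_release(of)
```

The catch, as the rest of the thread shows, is which lock can be taken on the release path without inverting an existing ordering.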


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

Edward Adam Davis

9:06 AM (6 hours ago)
to syzbot+33e571...@syzkaller.appspotmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
#syz test

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 4ca3cb993da2..c42e4d9930fa 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4281,7 +4281,9 @@ static void cgroup_file_release(struct kernfs_open_file *of)
cft->release(of);
put_cgroup_ns(ctx->ns);
kfree(ctx);
+ mutex_lock(&of->mutex);
of->priv = NULL;
+ mutex_unlock(&of->mutex);
}

static ssize_t cgroup_file_write(struct kernfs_open_file *of, char *buf,

syzbot

9:27 AM (6 hours ago)
to ead...@qq.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.2.19/6647 is trying to acquire lock:
ffff8880573dce18 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513

but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4313
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&of->mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_file_release+0xe5/0x110 kernel/cgroup/cgroup.c:4284
kernfs_release_file fs/kernfs/file.c:764 [inline]
kernfs_fop_release+0x21d/0x450 fs/kernfs/file.c:779
__fput+0x461/0xa90 fs/file_table.c:469
fput_close_sync+0x11f/0x240 fs/file_table.c:574
__do_sys_close fs/open.c:1509 [inline]
__se_sys_close fs/open.c:1494 [inline]
__x64_sys_close+0x7e/0x110 fs/open.c:1494
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&kernfs_locks->open_file_mutex[count]){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_open_file_mutex_lock fs/kernfs/file.c:56 [inline]
kernfs_get_open_node fs/kernfs/file.c:538 [inline]
kernfs_fop_open+0x6e6/0xcb0 fs/kernfs/file.c:718
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (kn->active#59){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
kn->active#59 --> &of->mutex --> cgroup_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(cgroup_mutex);
                               lock(&of->mutex);
                               lock(cgroup_mutex);
  lock(kn->active#59);

*** DEADLOCK ***

4 locks held by syz.2.19/6647:
#0: ffff88803b86e480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff88804b42c8f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff88804b42c8f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732

stack backtrace:
CPU: 0 UID: 0 PID: 6647 Comm: syz.2.19 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f38b3fcc819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f38b362e028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f38b4245fa0 RCX: 00007f38b3fcc819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007f38b4062c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f38b4246038 R14: 00007f38b4245fa0 R15: 00007ffef200b028
</TASK>


Tested on:

commit: 7f87a5ea Merge tag 'hid-for-linus-2026040801' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16bd8cd2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=12ddbbd6580000
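Lockdep's complaint is a cycle in the lock-ordering graph. A minimal sketch (editor's illustration; edges transcribed from the dependency chain in the report above, where X -> Y means Y was acquired while X was held) confirms that the cgroup_mutex -> kn->active edge introduced by holding of->mutex in the release path is what closes the loop:

```python
# Edges from the lockdep chain above (#1..#3 plus the new #0 dependency).
edges = [
    ("kn->active#59", "&kernfs_locks->open_file_mutex[count]"),   # kernfs_fop_open
    ("&kernfs_locks->open_file_mutex[count]", "&of->mutex"),      # release path
    ("&of->mutex", "cgroup_mutex"),                               # write path
    ("cgroup_mutex", "kn->active#59"),                            # rmdir -> kernfs_drain
]

def has_cycle(edges):
    """DFS three-color cycle detection over a lock-ordering digraph."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            c = color.get(m, WHITE)
            if c == GRAY or (c == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(visit(n) for n in graph if color.get(n, WHITE) == WHITE)
```

With all four edges the graph is circular; drop any one of them and it is not, which is the property a fix has to restore.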

Edward Adam Davis

9:35 AM (6 hours ago)
to syzbot+33e571...@syzkaller.appspotmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
#syz test

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 4ca3cb993da2..5a34bd70ef7b 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4280,8 +4280,10 @@ static void cgroup_file_release(struct kernfs_open_file *of)
if (cft->release)
cft->release(of);
put_cgroup_ns(ctx->ns);
+ mutex_lock(&of->mutex);
kfree(ctx);

syzbot

9:50 AM (5 hours ago)
to ead...@qq.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.1.18/6463 is trying to acquire lock:
ffff888032ccf968 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513

but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4313
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&of->mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_file_release+0xdd/0x110 kernel/cgroup/cgroup.c:4283
4 locks held by syz.1.18/6463:
#0: ffff88803ad2a480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff88805a98f4f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff88805a98f4f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732

stack backtrace:
CPU: 0 UID: 0 PID: 6463 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
RIP: 0033:0x7ff79991c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ff798f76028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007ff799b95fa0 RCX: 00007ff79991c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007ff7999b2c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff799b96038 R14: 00007ff799b95fa0 R15: 00007ffce7cda7c8
</TASK>


Tested on:

commit: 7f87a5ea Merge tag 'hid-for-linus-2026040801' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10b05e06580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=155d9316580000

Edward Adam Davis

10:02 AM (5 hours ago)
to syzbot+33e571...@syzkaller.appspotmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
#syz test

diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index e32406d62c0d..f76e2b9452d0 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -348,8 +348,12 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
}

ops = kernfs_ops(of->kn);
- if (ops->write)
+ if (ops->write) {
+ struct mutex *mutex;
+ mutex = kernfs_open_file_mutex_lock(of->kn);
len = ops->write(of, buf, len, iocb->ki_pos);
+ mutex_unlock(mutex);
+ }
else
len = -EINVAL;


syzbot

10:28 AM (5 hours ago)
to ead...@qq.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.1.18/6652 is trying to acquire lock:
ffff88803634de18 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513

but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4311
kernfs_fop_write_iter+0x43e/0x5e0 fs/kernfs/file.c:354
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&kernfs_locks->open_file_mutex[count]){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_open_file_mutex_lock fs/kernfs/file.c:56 [inline]
kernfs_get_open_node fs/kernfs/file.c:542 [inline]
kernfs_fop_open+0x6e6/0xcb0 fs/kernfs/file.c:722
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (kn->active#59){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
kn->active#59 --> &kernfs_locks->open_file_mutex[count] --> cgroup_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(cgroup_mutex);
                               lock(&kernfs_locks->open_file_mutex[count]);
                               lock(cgroup_mutex);
  lock(kn->active#59);

*** DEADLOCK ***

4 locks held by syz.1.18/6652:
#0: ffff888024604480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff888054c08c78 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff888054c08c78 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732

stack backtrace:
CPU: 0 UID: 0 PID: 6652 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4ba117c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f4ba07d6028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f4ba13f5fa0 RCX: 00007f4ba117c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007f4ba1212c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f4ba13f6038 R14: 00007f4ba13f5fa0 R15: 00007fff23e2bb58
</TASK>


Tested on:

commit: 7f87a5ea Merge tag 'hid-for-linus-2026040801' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14138cd2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=17beaeba580000
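The third attempt moves the lock up to the kernfs open-file mutex, which shortens the chain but does not break it: every node in the new report still has a successor, so following the ordering from any lock returns to it. A small editor's sketch (edges transcribed from the lockdep output above):

```python
# Edges from the third lockdep report: X -> Y means "Y acquired while X held".
edges = [
    ("kn->active#59", "&kernfs_locks->open_file_mutex[count]"),  # kernfs_fop_open
    ("&kernfs_locks->open_file_mutex[count]", "cgroup_mutex"),   # patched write path
    ("cgroup_mutex", "kn->active#59"),                           # rmdir -> kernfs_drain
]
succ = dict(edges)               # each lock here has exactly one successor
nodes = set(succ)

# Walk successor links for more steps than there are nodes; if the ordering
# were acyclic the walk would fall off the graph instead of revisiting a node.
n = "cgroup_mutex"
seen = []
for _ in range(len(nodes) + 1):
    seen.append(n)
    n = succ[n]
```

The walk revisits cgroup_mutex, i.e. the three remaining locks still form a ring, so any fix has to remove one of these three dependencies rather than shuffle the lock that carries them.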
