[fs?] KASAN: slab-use-after-free Read in trylock_super


syzbot

Apr 6, 2023, 12:46:46 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 00c7b5f4ddc5 Merge tag 'input-for-v6.3-rc4' of git://git.k..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14f29a59c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=9c35b3803e5ad668
dashboard link: https://syzkaller.appspot.com/bug?extid=e32313ff59595ad80aa3
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
CC: [bra...@kernel.org linux-...@vger.kernel.org linux-...@vger.kernel.org vi...@zeniv.linux.org.uk]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/5607d6cbfde7/disk-00c7b5f4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5c1a8a6fcfd2/vmlinux-00c7b5f4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/81e98ee78f76/bzImage-00c7b5f4.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e32313...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in __down_read_trylock kernel/locking/rwsem.c:1281 [inline]
BUG: KASAN: slab-use-after-free in down_read_trylock+0xac/0x3b0 kernel/locking/rwsem.c:1559
Read of size 8 at addr ffff88802b5e40d8 by task kworker/u4:5/1065

CPU: 1 PID: 1065 Comm: kworker/u4:5 Not tainted 6.3.0-rc4-syzkaller-00224-g00c7b5f4ddc5 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Workqueue: writeback wb_workfn (flush-7:0)
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:319 [inline]
print_report+0x163/0x540 mm/kasan/report.c:430
kasan_report+0x176/0x1b0 mm/kasan/report.c:536
__down_read_trylock kernel/locking/rwsem.c:1281 [inline]
down_read_trylock+0xac/0x3b0 kernel/locking/rwsem.c:1559
trylock_super+0x1f/0xf0 fs/super.c:414
__writeback_inodes_wb+0x101/0x260 fs/fs-writeback.c:1953
wb_writeback+0x46c/0xc70 fs/fs-writeback.c:2067
wb_check_old_data_flush fs/fs-writeback.c:2167 [inline]
wb_do_writeback fs/fs-writeback.c:2220 [inline]
wb_workfn+0xbb5/0xff0 fs/fs-writeback.c:2248
process_one_work+0x8a0/0x10e0 kernel/workqueue.c:2390
worker_thread+0xa63/0x1210 kernel/workqueue.c:2537
kthread+0x270/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>

Allocated by task 5135:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
____kasan_kmalloc mm/kasan/common.c:374 [inline]
__kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:383
kasan_kmalloc include/linux/kasan.h:196 [inline]
__do_kmalloc_node mm/slab_common.c:967 [inline]
__kmalloc+0xb9/0x230 mm/slab_common.c:980
kmalloc include/linux/slab.h:584 [inline]
tomoyo_realpath_from_path+0xcf/0x5e0 security/tomoyo/realpath.c:251
tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
tomoyo_path_perm+0x28d/0x700 security/tomoyo/file.c:822
tomoyo_path_unlink+0xd0/0x110 security/tomoyo/tomoyo.c:161
security_path_unlink+0xdb/0x130 security/security.c:1203
do_unlinkat+0x3db/0x940 fs/namei.c:4313
__do_sys_unlink fs/namei.c:4364 [inline]
__se_sys_unlink fs/namei.c:4362 [inline]
__x64_sys_unlink+0x49/0x50 fs/namei.c:4362
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Freed by task 5135:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x2b/0x40 mm/kasan/generic.c:521
____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:162 [inline]
slab_free_hook mm/slub.c:1781 [inline]
slab_free_freelist_hook mm/slub.c:1807 [inline]
slab_free mm/slub.c:3787 [inline]
__kmem_cache_free+0x264/0x3c0 mm/slub.c:3800
tomoyo_realpath_from_path+0x5a3/0x5e0 security/tomoyo/realpath.c:286
tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
tomoyo_path_perm+0x28d/0x700 security/tomoyo/file.c:822
tomoyo_path_unlink+0xd0/0x110 security/tomoyo/tomoyo.c:161
security_path_unlink+0xdb/0x130 security/security.c:1203
do_unlinkat+0x3db/0x940 fs/namei.c:4313
__do_sys_unlink fs/namei.c:4364 [inline]
__se_sys_unlink fs/namei.c:4362 [inline]
__x64_sys_unlink+0x49/0x50 fs/namei.c:4362
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xb0/0xc0 mm/kasan/generic.c:491
insert_work+0x54/0x3d0 kernel/workqueue.c:1361
__queue_work+0xb37/0xf10 kernel/workqueue.c:1524
queue_work_on+0x14f/0x250 kernel/workqueue.c:1552
rcu_do_batch kernel/rcu/tree.c:2112 [inline]
rcu_core+0xa4d/0x16f0 kernel/rcu/tree.c:2372
__do_softirq+0x2ab/0x908 kernel/softirq.c:571

Second to last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xb0/0xc0 mm/kasan/generic.c:491
__call_rcu_common kernel/rcu/tree.c:2622 [inline]
call_rcu+0x167/0xa70 kernel/rcu/tree.c:2736
put_super fs/super.c:310 [inline]
deactivate_locked_super+0xd8/0x110 fs/super.c:342
cleanup_mnt+0x426/0x4c0 fs/namespace.c:1177
task_work_run+0x24a/0x300 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xd9/0x100 kernel/entry/common.c:171
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
__syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
syscall_exit_to_user_mode+0x64/0x280 kernel/entry/common.c:297
do_syscall_64+0x4d/0xc0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x63/0xcd

The buggy address belongs to the object at ffff88802b5e4000
which belongs to the cache kmalloc-4k of size 4096
The buggy address is located 216 bytes inside of
freed 4096-byte region [ffff88802b5e4000, ffff88802b5e5000)

The buggy address belongs to the physical page:
page:ffffea0000ad7800 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2b5e0
head:ffffea0000ad7800 order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 ffff888012442140 ffffea0000a2d800 dead000000000002
raw: 0000000000000000 0000000000040004 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x1d2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 5120, tgid 5120 (udevd), ts 409758051189, free_ts 409709006154
prep_new_page mm/page_alloc.c:2553 [inline]
get_page_from_freelist+0x3246/0x33c0 mm/page_alloc.c:4326
__alloc_pages+0x255/0x670 mm/page_alloc.c:5592
alloc_slab_page+0x6a/0x160 mm/slub.c:1851
allocate_slab mm/slub.c:1998 [inline]
new_slab+0x84/0x2f0 mm/slub.c:2051
___slab_alloc+0xa85/0x10a0 mm/slub.c:3193
__slab_alloc mm/slub.c:3292 [inline]
__slab_alloc_node mm/slub.c:3345 [inline]
slab_alloc_node mm/slub.c:3442 [inline]
__kmem_cache_alloc_node+0x1b8/0x290 mm/slub.c:3491
__do_kmalloc_node mm/slab_common.c:966 [inline]
__kmalloc+0xa8/0x230 mm/slab_common.c:980
kmalloc include/linux/slab.h:584 [inline]
tomoyo_realpath_from_path+0xcf/0x5e0 security/tomoyo/realpath.c:251
tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
tomoyo_path_perm+0x28d/0x700 security/tomoyo/file.c:822
tomoyo_path_unlink+0xd0/0x110 security/tomoyo/tomoyo.c:161
security_path_unlink+0xdb/0x130 security/security.c:1203
do_unlinkat+0x3db/0x940 fs/namei.c:4313
__do_sys_unlink fs/namei.c:4364 [inline]
__se_sys_unlink fs/namei.c:4362 [inline]
__x64_sys_unlink+0x49/0x50 fs/namei.c:4362
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1454 [inline]
free_pcp_prepare mm/page_alloc.c:1504 [inline]
free_unref_page_prepare+0xe2f/0xe70 mm/page_alloc.c:3388
free_unref_page+0x37/0x3f0 mm/page_alloc.c:3483
discard_slab mm/slub.c:2098 [inline]
__unfreeze_partials+0x1b1/0x1f0 mm/slub.c:2637
put_cpu_partial+0x116/0x180 mm/slub.c:2713
qlist_free_all+0x22/0x60 mm/kasan/quarantine.c:187
kasan_quarantine_reduce+0x14b/0x160 mm/kasan/quarantine.c:294
__kasan_slab_alloc+0x23/0x70 mm/kasan/common.c:305
kasan_slab_alloc include/linux/kasan.h:186 [inline]
slab_post_alloc_hook+0x68/0x3a0 mm/slab.h:769
slab_alloc_node mm/slub.c:3452 [inline]
slab_alloc mm/slub.c:3460 [inline]
__kmem_cache_alloc_lru mm/slub.c:3467 [inline]
kmem_cache_alloc+0x11f/0x2e0 mm/slub.c:3476
getname_flags+0xbc/0x4e0 fs/namei.c:140
vfs_fstatat fs/stat.c:275 [inline]
__do_sys_newfstatat fs/stat.c:446 [inline]
__se_sys_newfstatat fs/stat.c:440 [inline]
__x64_sys_newfstatat+0x12e/0x1d0 fs/stat.c:440
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Memory state around the buggy address:
 ffff88802b5e3f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff88802b5e4000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88802b5e4080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                    ^
 ffff88802b5e4100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88802b5e4180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 1, 2023, 12:40:51 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.