[moderation] [kernel?] KASAN: slab-use-after-free Read in process_one_work

syzbot

Sep 6, 2023, 8:43:08 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0468be89b3fa Merge tag 'iommu-updates-v6.6' of git://git.k..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=15d36d57a80000
kernel config: https://syzkaller.appspot.com/x/.config?x=3d78b3780d210e21
dashboard link: https://syzkaller.appspot.com/bug?extid=2e8adc6571c14586dd43
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [b...@alien8.de bra...@kernel.org dave....@linux.intel.com h...@zytor.com linux-...@vger.kernel.org mi...@redhat.com tg...@linutronix.de x...@kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b1f680b10f71/disk-0468be89.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/891d25d1d399/vmlinux-0468be89.xz
kernel image: https://storage.googleapis.com/syzbot-assets/c6c655f887fb/bzImage-0468be89.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+2e8adc...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in debug_spin_lock_before kernel/locking/spinlock_debug.c:85 [inline]
BUG: KASAN: slab-use-after-free in do_raw_spin_lock+0x2bf/0x3a0 kernel/locking/spinlock_debug.c:114
Read of size 4 at addr ffff88814a478504 by task kworker/0:1H/1099

CPU: 0 PID: 1099 Comm: kworker/0:1H Not tainted 6.5.0-syzkaller-10885-g0468be89b3fa #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
Workqueue: glock_workqueue glock_work_func
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:364 [inline]
print_report+0x163/0x540 mm/kasan/report.c:475
kasan_report+0x175/0x1b0 mm/kasan/report.c:588
debug_spin_lock_before kernel/locking/spinlock_debug.c:85 [inline]
do_raw_spin_lock+0x2bf/0x3a0 kernel/locking/spinlock_debug.c:114
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
__wake_up_common_lock kernel/sched/wait.c:137 [inline]
__wake_up+0x101/0x1d0 kernel/sched/wait.c:160
process_one_work+0x781/0x1130 kernel/workqueue.c:2630
process_scheduled_works kernel/workqueue.c:2703 [inline]
worker_thread+0xabf/0x1060 kernel/workqueue.c:2784
kthread+0x2b8/0x350 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
</TASK>

Allocated by task 15131:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
____kasan_kmalloc mm/kasan/common.c:374 [inline]
__kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:383
kmalloc include/linux/slab.h:599 [inline]
kzalloc include/linux/slab.h:720 [inline]
init_sbd fs/gfs2/ops_fstype.c:77 [inline]
gfs2_fill_super+0x136/0x2790 fs/gfs2/ops_fstype.c:1144
get_tree_bdev+0x416/0x5b0 fs/super.c:1577
gfs2_get_tree+0x54/0x210 fs/gfs2/ops_fstype.c:1333
vfs_get_tree+0x8c/0x280 fs/super.c:1750
do_new_mount+0x28f/0xae0 fs/namespace.c:3335
do_mount fs/namespace.c:3675 [inline]
__do_sys_mount fs/namespace.c:3884 [inline]
__se_sys_mount+0x2d9/0x3c0 fs/namespace.c:3861
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Freed by task 5063:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x28/0x40 mm/kasan/generic.c:522
____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:162 [inline]
slab_free_hook mm/slub.c:1800 [inline]
slab_free_freelist_hook mm/slub.c:1826 [inline]
slab_free mm/slub.c:3809 [inline]
__kmem_cache_free+0x25f/0x3b0 mm/slub.c:3822
generic_shutdown_super+0x13a/0x2c0 fs/super.c:693
kill_block_super+0x41/0x70 fs/super.c:1646
deactivate_locked_super+0xa4/0x110 fs/super.c:481
cleanup_mnt+0x426/0x4c0 fs/namespace.c:1254
task_work_run+0x24a/0x300 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xd9/0x100 kernel/entry/common.c:171
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
__syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
syscall_exit_to_user_mode+0x64/0x280 kernel/entry/common.c:296
do_syscall_64+0x4d/0xc0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x63/0xcd

The buggy address belongs to the object at ffff88814a478000
which belongs to the cache kmalloc-8k of size 8192
The buggy address is located 1284 bytes inside of
freed 8192-byte region [ffff88814a478000, ffff88814a47a000)

The buggy address belongs to the physical page:
page:ffffea0005291e00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x14a478
head:ffffea0005291e00 order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
ksm flags: 0x57ff00000000840(slab|head|node=1|zone=2|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 057ff00000000840 ffff888012842280 ffffea0001dd0e00 dead000000000003
raw: 0000000000000000 0000000000020002 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 1, tgid 1 (swapper/0), ts 18458014877, free_ts 0
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1e6/0x210 mm/page_alloc.c:1536
prep_new_page mm/page_alloc.c:1543 [inline]
get_page_from_freelist+0x31ec/0x3370 mm/page_alloc.c:3183
__alloc_pages+0x255/0x670 mm/page_alloc.c:4439
alloc_page_interleave+0x22/0x1d0 mm/mempolicy.c:2131
alloc_slab_page+0x6a/0x160 mm/slub.c:1870
allocate_slab mm/slub.c:2017 [inline]
new_slab+0x84/0x2f0 mm/slub.c:2070
___slab_alloc+0xade/0x1100 mm/slub.c:3223
__slab_alloc mm/slub.c:3322 [inline]
__slab_alloc_node mm/slub.c:3375 [inline]
slab_alloc_node mm/slub.c:3468 [inline]
__kmem_cache_alloc_node+0x1af/0x270 mm/slub.c:3517
kmalloc_trace+0x2a/0xe0 mm/slab_common.c:1114
kmalloc include/linux/slab.h:599 [inline]
kzalloc include/linux/slab.h:720 [inline]
cryptomgr_schedule_probe crypto/algboss.c:85 [inline]
cryptomgr_notify+0x84/0xb20 crypto/algboss.c:226
notifier_call_chain+0x18c/0x3a0 kernel/notifier.c:93
blocking_notifier_call_chain+0x69/0x90 kernel/notifier.c:388
crypto_probing_notify crypto/api.c:305 [inline]
crypto_alg_mod_lookup+0x4ea/0x720 crypto/api.c:335
crypto_find_alg crypto/api.c:580 [inline]
crypto_alloc_tfm_node+0x130/0x350 crypto/api.c:617
seg6_hmac_init_algo net/ipv6/seg6_hmac.c:370 [inline]
seg6_hmac_init+0x113/0x3c0 net/ipv6/seg6_hmac.c:400
seg6_init+0x8d/0xe0 net/ipv6/seg6.c:534
page_owner free stack trace missing

Memory state around the buggy address:
ffff88814a478400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88814a478480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88814a478500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88814a478580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88814a478600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Dec 8, 2023, 12:35:19 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.