[moderation] [xfs?] KASAN: slab-use-after-free Read in xfs_qm_dquot_logitem_unpin (2)

syzbot
May 31, 2024, 2:56:20 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 2bfcfd584ff5 Merge tag 'pmdomain-v6.10-rc1' of git://git.k..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=17f1622c980000
kernel config: https://syzkaller.appspot.com/x/.config?x=733cc7a95171d8e7
dashboard link: https://syzkaller.appspot.com/bug?extid=4e2924b0deaa4c908eef
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386
CC: [chanda...@oracle.com djw...@kernel.org linux-...@vger.kernel.org linu...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-2bfcfd58.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/c7ed3bb80bed/vmlinux-2bfcfd58.xz
kernel image: https://storage.googleapis.com/syzbot-assets/93acc5bfbaef/bzImage-2bfcfd58.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4e2924...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in debug_spin_lock_before kernel/locking/spinlock_debug.c:86 [inline]
BUG: KASAN: slab-use-after-free in do_raw_spin_lock+0x271/0x2c0 kernel/locking/spinlock_debug.c:115
Read of size 4 at addr ffff88801194dfb4 by task kworker/2:1H/1112

CPU: 2 PID: 1112 Comm: kworker/2:1H Not tainted 6.10.0-rc1-syzkaller-00013-g2bfcfd584ff5 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: xfs-log/loop0 xlog_ioend_work
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
print_address_description mm/kasan/report.c:377 [inline]
print_report+0xc3/0x620 mm/kasan/report.c:488
kasan_report+0xd9/0x110 mm/kasan/report.c:601
debug_spin_lock_before kernel/locking/spinlock_debug.c:86 [inline]
do_raw_spin_lock+0x271/0x2c0 kernel/locking/spinlock_debug.c:115
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0x42/0x60 kernel/locking/spinlock.c:162
__wake_up_common_lock kernel/sched/wait.c:105 [inline]
__wake_up+0x1c/0x60 kernel/sched/wait.c:127
xfs_qm_dquot_logitem_unpin+0x81/0x90 fs/xfs/xfs_dquot_item.c:96
xfs_log_item_batch_insert fs/xfs/xfs_trans.c:746 [inline]
xfs_trans_committed_bulk+0x744/0x890 fs/xfs/xfs_trans.c:850
xlog_cil_committed+0x161/0x840 fs/xfs/xfs_log_cil.c:736
xlog_cil_process_committed+0x123/0x1f0 fs/xfs/xfs_log_cil.c:768
xlog_state_do_iclog_callbacks fs/xfs/xfs_log.c:2791 [inline]
xlog_state_do_callback+0x562/0xd90 fs/xfs/xfs_log.c:2816
xlog_ioend_work+0x92/0x110 fs/xfs/xfs_log.c:1398
process_one_work+0x958/0x1ad0 kernel/workqueue.c:3231
process_scheduled_works kernel/workqueue.c:3312 [inline]
worker_thread+0x6c8/0xf70 kernel/workqueue.c:3393
kthread+0x2c1/0x3a0 kernel/kthread.c:389
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>

Allocated by task 7507:
kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
kasan_save_track+0x14/0x30 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:312 [inline]
__kasan_slab_alloc+0x89/0x90 mm/kasan/common.c:338
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook mm/slub.c:3940 [inline]
slab_alloc_node mm/slub.c:4000 [inline]
kmem_cache_alloc_noprof+0x121/0x2f0 mm/slub.c:4007
xfs_dquot_alloc+0x2a/0x670 fs/xfs/xfs_dquot.c:497
xfs_qm_dqread+0x8e/0x5f0 fs/xfs/xfs_dquot.c:683
xfs_qm_dqget_inode+0x224/0x6d0 fs/xfs/xfs_dquot.c:1004
xfs_qm_dqattach_one+0x26f/0x590 fs/xfs/xfs_qm.c:278
xfs_qm_dqattach_locked+0x1c6/0x2d0 fs/xfs/xfs_qm.c:337
xfs_qm_dqattach fs/xfs/xfs_qm.c:371 [inline]
xfs_qm_dqattach+0x47/0x70 fs/xfs/xfs_qm.c:362
xfs_remove+0x282/0xc60 fs/xfs/xfs_inode.c:2743
xfs_vn_unlink+0xfd/0x230 fs/xfs/xfs_iops.c:403
vfs_unlink+0x2fb/0x9b0 fs/namei.c:4343
do_unlinkat+0x5c0/0x750 fs/namei.c:4407
__do_sys_unlink fs/namei.c:4455 [inline]
__se_sys_unlink fs/namei.c:4453 [inline]
__ia32_sys_unlink+0xc6/0x110 fs/namei.c:4453
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x73/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e

Freed by task 112:
kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
kasan_save_track+0x14/0x30 mm/kasan/common.c:68
kasan_save_free_info+0x3b/0x60 mm/kasan/generic.c:579
poison_slab_object+0xf7/0x160 mm/kasan/common.c:240
__kasan_slab_free+0x32/0x50 mm/kasan/common.c:256
kasan_slab_free include/linux/kasan.h:184 [inline]
slab_free_hook mm/slub.c:2195 [inline]
slab_free mm/slub.c:4436 [inline]
kmem_cache_free+0x12f/0x3a0 mm/slub.c:4511
xfs_qm_shrink_scan+0x25c/0x3f0 fs/xfs/xfs_qm.c:531
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab+0x18a/0x1310 mm/shrinker.c:662
shrink_one+0x493/0x7c0 mm/vmscan.c:4790
shrink_many mm/vmscan.c:4851 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
shrink_node mm/vmscan.c:5910 [inline]
kswapd_shrink_node mm/vmscan.c:6720 [inline]
balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
kthread+0x2c1/0x3a0 kernel/kthread.c:389
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff88801194dd40
which belongs to the cache xfs_dquot of size 704
The buggy address is located 628 bytes inside of
freed 704-byte region [ffff88801194dd40, ffff88801194e000)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff88801194d6c0 pfn:0x1194c
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffefff(slab)
raw: 00fff00000000040 ffff888019d60c80 dead000000000122 0000000000000000
raw: ffff88801194d6c0 000000008013000d 00000001ffffefff 0000000000000000
head: 00fff00000000040 ffff888019d60c80 dead000000000122 0000000000000000
head: ffff88801194d6c0 000000008013000d 00000001ffffefff 0000000000000000
head: 00fff00000000002 ffffea0000465301 ffffffffffffffff 0000000000000000
head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 7643, tgid 7642 (syz-executor.3), ts 168461537699, free_ts 167856682291
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x2d1/0x350 mm/page_alloc.c:1468
prep_new_page mm/page_alloc.c:1476 [inline]
get_page_from_freelist+0x136a/0x2df0 mm/page_alloc.c:3402
__alloc_pages_noprof+0x22b/0x2460 mm/page_alloc.c:4660
__alloc_pages_node_noprof include/linux/gfp.h:269 [inline]
alloc_pages_node_noprof include/linux/gfp.h:296 [inline]
alloc_slab_page+0x56/0x110 mm/slub.c:2264
allocate_slab mm/slub.c:2427 [inline]
new_slab+0x84/0x260 mm/slub.c:2480
___slab_alloc+0xdac/0x1870 mm/slub.c:3666
__slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3756
__slab_alloc_node mm/slub.c:3809 [inline]
slab_alloc_node mm/slub.c:3988 [inline]
kmem_cache_alloc_noprof+0x2ae/0x2f0 mm/slub.c:4007
xfs_dquot_alloc+0x2a/0x670 fs/xfs/xfs_dquot.c:497
xfs_qm_dqread+0x8e/0x5f0 fs/xfs/xfs_dquot.c:683
xfs_qm_dqget_uncached+0xbc/0x160 fs/xfs/xfs_dquot.c:940
xfs_qm_init_timelimits+0x153/0x390 fs/xfs/xfs_qm.c:600
xfs_qm_init_quotainfo+0x54c/0xac0 fs/xfs/xfs_qm.c:673
xfs_qm_mount_quotas+0x59/0x650 fs/xfs/xfs_qm.c:1477
xfs_mountfs+0x1b72/0x1e80 fs/xfs/xfs_mount.c:980
xfs_fs_fill_super+0x1434/0x1df0 fs/xfs/xfs_super.c:1753
page last free pid 7507 tgid 7507 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1088 [inline]
free_unref_page+0x64a/0xe40 mm/page_alloc.c:2565
qlink_free mm/kasan/quarantine.c:163 [inline]
qlist_free_all+0x4e/0x140 mm/kasan/quarantine.c:179
kasan_quarantine_reduce+0x192/0x1e0 mm/kasan/quarantine.c:286
__kasan_slab_alloc+0x69/0x90 mm/kasan/common.c:322
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook mm/slub.c:3940 [inline]
slab_alloc_node mm/slub.c:4000 [inline]
kmalloc_trace_noprof+0x11e/0x310 mm/slub.c:4147
kmalloc_noprof include/linux/slab.h:660 [inline]
kzalloc_noprof include/linux/slab.h:778 [inline]
ref_tracker_alloc+0x17c/0x5b0 lib/ref_tracker.c:203
__netdev_tracker_alloc include/linux/netdevice.h:4038 [inline]
netdev_hold include/linux/netdevice.h:4067 [inline]
netdev_hold include/linux/netdevice.h:4062 [inline]
netdev_queue_add_kobject net/core/net-sysfs.c:1783 [inline]
netdev_queue_update_kobjects+0x281/0x640 net/core/net-sysfs.c:1838
register_queue_kobjects net/core/net-sysfs.c:1900 [inline]
netdev_register_kobject+0x290/0x3f0 net/core/net-sysfs.c:2140
register_netdevice+0x12ce/0x1c40 net/core/dev.c:10374
veth_newlink+0x363/0xa10 drivers/net/veth.c:1829
rtnl_newlink_create net/core/rtnetlink.c:3510 [inline]
__rtnl_newlink+0x119c/0x1960 net/core/rtnetlink.c:3730
rtnl_newlink+0x67/0xa0 net/core/rtnetlink.c:3743
rtnetlink_rcv_msg+0x3c7/0xe60 net/core/rtnetlink.c:6595
netlink_rcv_skb+0x165/0x410 net/netlink/af_netlink.c:2564
netlink_unicast_kernel net/netlink/af_netlink.c:1335 [inline]
netlink_unicast+0x542/0x820 net/netlink/af_netlink.c:1361
netlink_sendmsg+0x8b8/0xd70 net/netlink/af_netlink.c:1905

Memory state around the buggy address:
ffff88801194de80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88801194df00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88801194df80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88801194e000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff88801194e080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup