[moderation] [xfs?] KASAN: slab-use-after-free Read in xfs_qm_dquot_logitem_unpin


syzbot

Aug 26, 2023, 1:47:57 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: f7757129e3de Merge tag 'v6.5-p3' of git://git.kernel.org/p..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10362cbda80000
kernel config: https://syzkaller.appspot.com/x/.config?x=1e4a882f77ed77bd
dashboard link: https://syzkaller.appspot.com/bug?extid=f9db3fad6571768b3db0
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386
CC: [djw...@kernel.org linux-...@vger.kernel.org linux-...@vger.kernel.org linu...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-f7757129.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/f9616e78d7e0/vmlinux-f7757129.xz
kernel image: https://storage.googleapis.com/syzbot-assets/452e6e8f3c37/bzImage-f7757129.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+f9db3f...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in debug_spin_lock_before kernel/locking/spinlock_debug.c:85 [inline]
BUG: KASAN: slab-use-after-free in do_raw_spin_lock+0x26f/0x2b0 kernel/locking/spinlock_debug.c:114
Read of size 4 at addr ffff88802998c274 by task kworker/2:1H/69

CPU: 2 PID: 69 Comm: kworker/2:1H Not tainted 6.5.0-rc7-syzkaller-00004-gf7757129e3de #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: xfs-log/loop0 xlog_ioend_work
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:364 [inline]
print_report+0xc4/0x620 mm/kasan/report.c:475
kasan_report+0xda/0x110 mm/kasan/report.c:588
debug_spin_lock_before kernel/locking/spinlock_debug.c:85 [inline]
do_raw_spin_lock+0x26f/0x2b0 kernel/locking/spinlock_debug.c:114
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0x42/0x50 kernel/locking/spinlock.c:162
__wake_up_common_lock+0xbb/0x140 kernel/sched/wait.c:137
xfs_qm_dquot_logitem_unpin+0x81/0x90 fs/xfs/xfs_dquot_item.c:96
xfs_log_item_batch_insert fs/xfs/xfs_trans.c:738 [inline]
xfs_trans_committed_bulk+0x72b/0x870 fs/xfs/xfs_trans.c:842
xlog_cil_committed+0x1bf/0xf60 fs/xfs/xfs_log_cil.c:795
xlog_cil_process_committed+0x123/0x1f0 fs/xfs/xfs_log_cil.c:823
xlog_state_do_iclog_callbacks fs/xfs/xfs_log.c:2811 [inline]
xlog_state_do_callback+0x549/0xcc0 fs/xfs/xfs_log.c:2836
xlog_ioend_work+0x8a/0x110 fs/xfs/xfs_log.c:1415
process_one_work+0xaa2/0x16f0 kernel/workqueue.c:2600
worker_thread+0x687/0x1110 kernel/workqueue.c:2751
kthread+0x33a/0x430 kernel/kthread.c:389
ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
</TASK>

Allocated by task 28615:
kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
kasan_set_track+0x25/0x30 mm/kasan/common.c:52
__kasan_slab_alloc+0x81/0x90 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:186 [inline]
slab_post_alloc_hook mm/slab.h:762 [inline]
slab_alloc_node mm/slub.c:3470 [inline]
slab_alloc mm/slub.c:3478 [inline]
__kmem_cache_alloc_lru mm/slub.c:3485 [inline]
kmem_cache_alloc+0x172/0x3b0 mm/slub.c:3494
kmem_cache_zalloc include/linux/slab.h:693 [inline]
xfs_dquot_alloc+0x2a/0x670 fs/xfs/xfs_dquot.c:475
xfs_qm_dqread+0x8b/0x540 fs/xfs/xfs_dquot.c:659
xfs_qm_dqget+0x151/0x4a0 fs/xfs/xfs_dquot.c:869
xfs_qm_scall_setqlim+0x16e/0x1960 fs/xfs/xfs_qm_syscalls.c:300
xfs_fs_set_dqblk+0x16f/0x1e0 fs/xfs/xfs_quotaops.c:267
quota_setquota+0x4bc/0x5e0 fs/quota/quota.c:310
do_quotactl+0xb01/0x13d0 fs/quota/quota.c:802
__do_sys_quotactl fs/quota/quota.c:961 [inline]
__se_sys_quotactl fs/quota/quota.c:917 [inline]
__ia32_sys_quotactl+0x1bb/0x440 fs/quota/quota.c:917
do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
__do_fast_syscall_32+0x61/0xe0 arch/x86/entry/common.c:178
do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203
entry_SYSENTER_compat_after_hwframe+0x70/0x82

Freed by task 112:
kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
kasan_set_track+0x25/0x30 mm/kasan/common.c:52
kasan_save_free_info+0x2b/0x40 mm/kasan/generic.c:522
____kasan_slab_free mm/kasan/common.c:236 [inline]
____kasan_slab_free+0x15e/0x1b0 mm/kasan/common.c:200
kasan_slab_free include/linux/kasan.h:162 [inline]
slab_free_hook mm/slub.c:1792 [inline]
slab_free_freelist_hook+0x10b/0x1e0 mm/slub.c:1818
slab_free mm/slub.c:3801 [inline]
kmem_cache_free+0xf0/0x490 mm/slub.c:3823
xfs_qm_shrink_scan+0x238/0x3e0 fs/xfs/xfs_qm.c:531
do_shrink_slab+0x422/0xaa0 mm/vmscan.c:900
shrink_slab+0x17f/0x6e0 mm/vmscan.c:1060
shrink_one+0x4f7/0x700 mm/vmscan.c:5403
shrink_many mm/vmscan.c:5453 [inline]
lru_gen_shrink_node mm/vmscan.c:5570 [inline]
shrink_node+0x20c2/0x3730 mm/vmscan.c:6510
kswapd_shrink_node mm/vmscan.c:7315 [inline]
balance_pgdat+0xa37/0x1b90 mm/vmscan.c:7505
kswapd+0x5be/0xbf0 mm/vmscan.c:7765
kthread+0x33a/0x430 kernel/kthread.c:389
ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304

The buggy address belongs to the object at ffff88802998c000
which belongs to the cache xfs_dquot of size 704
The buggy address is located 628 bytes inside of
freed 704-byte region [ffff88802998c000, ffff88802998c2c0)

The buggy address belongs to the physical page:
page:ffffea0000a66300 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2998c
head:ffffea0000a66300 order:2 entire_mapcount:0 nr_pages_mapped:0 pincount:0
ksm flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000010200 ffff888044162000 ffffea0001307600 dead000000000003
raw: 0000000000000000 0000000080130013 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 16116, tgid 16115 (syz-executor.3), ts 357922400903, free_ts 344533158681
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x2d2/0x350 mm/page_alloc.c:1570
prep_new_page mm/page_alloc.c:1577 [inline]
get_page_from_freelist+0x10a9/0x31e0 mm/page_alloc.c:3221
__alloc_pages+0x1d0/0x4a0 mm/page_alloc.c:4477
alloc_pages+0x1a9/0x270 mm/mempolicy.c:2292
alloc_slab_page mm/slub.c:1862 [inline]
allocate_slab+0x24e/0x380 mm/slub.c:2009
new_slab mm/slub.c:2062 [inline]
___slab_alloc+0x8bc/0x1570 mm/slub.c:3215
__slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3314
__slab_alloc_node mm/slub.c:3367 [inline]
slab_alloc_node mm/slub.c:3460 [inline]
slab_alloc mm/slub.c:3478 [inline]
__kmem_cache_alloc_lru mm/slub.c:3485 [inline]
kmem_cache_alloc+0x392/0x3b0 mm/slub.c:3494
kmem_cache_zalloc include/linux/slab.h:693 [inline]
xfs_dquot_alloc+0x2a/0x670 fs/xfs/xfs_dquot.c:475
xfs_qm_dqread+0x8b/0x540 fs/xfs/xfs_dquot.c:659
xfs_qm_dqget_uncached+0xc5/0x180 fs/xfs/xfs_dquot.c:908
xfs_qm_set_defquota+0x80/0x3a0 fs/xfs/xfs_qm.c:558
xfs_qm_init_quotainfo+0x814/0xa10 fs/xfs/xfs_qm.c:681
xfs_qm_mount_quotas+0x59/0x6a0 fs/xfs/xfs_qm.c:1444
xfs_mountfs+0x1c8b/0x1de0 fs/xfs/xfs_mount.c:959
xfs_fs_fill_super+0x145b/0x1e50 fs/xfs/xfs_super.c:1710
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1161 [inline]
free_unref_page_prepare+0x508/0xb90 mm/page_alloc.c:2348
free_unref_page+0x33/0x3b0 mm/page_alloc.c:2443
__unfreeze_partials+0x21d/0x240 mm/slub.c:2647
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x6a/0x170 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x18b/0x1d0 mm/kasan/quarantine.c:292
__kasan_slab_alloc+0x65/0x90 mm/kasan/common.c:305
kasan_slab_alloc include/linux/kasan.h:186 [inline]
slab_post_alloc_hook mm/slab.h:762 [inline]
slab_alloc_node mm/slub.c:3470 [inline]
slab_alloc mm/slub.c:3478 [inline]
__kmem_cache_alloc_lru mm/slub.c:3485 [inline]
kmem_cache_alloc_lru+0x21a/0x630 mm/slub.c:3501
alloc_inode_sb include/linux/fs.h:2735 [inline]
sock_alloc_inode+0x25/0x1c0 net/socket.c:305
alloc_inode+0x5d/0x220 fs/inode.c:259
new_inode_pseudo+0x16/0x80 fs/inode.c:1017
sock_alloc+0x40/0x270 net/socket.c:631
__sock_create+0xbc/0x810 net/socket.c:1500
sock_create net/socket.c:1587 [inline]
__sys_socket_create net/socket.c:1624 [inline]
__sys_socket+0x13d/0x250 net/socket.c:1652
__do_compat_sys_socketcall+0x57b/0x700 net/compat.c:448
do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
__do_fast_syscall_32+0x61/0xe0 arch/x86/entry/common.c:178
do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203

Memory state around the buggy address:
ffff88802998c100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88802998c180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88802998c200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88802998c280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
ffff88802998c300: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If this bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Nov 20, 2023, 12:42:19 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, and there is no reproducer and no activity.