[syzbot] [mm?] KASAN: slab-use-after-free Read in folio_evictable (3)


syzbot

Nov 25, 2024, 9:41:27 PM
to ak...@linux-foundation.org, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 4a39ac5b7d62 Merge tag 'random-6.12-rc1-for-linus' of git:..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16caa607980000
kernel config: https://syzkaller.appspot.com/x/.config?x=dd14c10ec1b6af25
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3ee480a33b34/disk-4a39ac5b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/73587a04fea1/vmlinux-4a39ac5b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/67e463731a48/bzImage-4a39ac5b.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4c7590...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:262 [inline]
BUG: KASAN: slab-use-after-free in folio_evictable+0xe3/0x310 mm/internal.h:370
Read of size 8 at addr ffff888024afbf90 by task kswapd0/89

CPU: 1 UID: 0 PID: 89 Comm: kswapd0 Not tainted 6.11.0-syzkaller-05319-g4a39ac5b7d62 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:93 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119
print_address_description mm/kasan/report.c:377 [inline]
print_report+0x169/0x550 mm/kasan/report.c:488
kasan_report+0x143/0x180 mm/kasan/report.c:601
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
mapping_unevictable include/linux/pagemap.h:262 [inline]
folio_evictable+0xe3/0x310 mm/internal.h:370
sort_folio mm/vmscan.c:4275 [inline]
scan_folios mm/vmscan.c:4392 [inline]
isolate_folios mm/vmscan.c:4517 [inline]
evict_folios+0x1023/0x7780 mm/vmscan.c:4548
try_to_shrink_lruvec+0x9ab/0xbb0 mm/vmscan.c:4755
shrink_one+0x3b9/0x850 mm/vmscan.c:4793
shrink_many mm/vmscan.c:4856 [inline]
lru_gen_shrink_node mm/vmscan.c:4934 [inline]
shrink_node+0x3799/0x3de0 mm/vmscan.c:5914
kswapd_shrink_node mm/vmscan.c:6742 [inline]
balance_pgdat mm/vmscan.c:6934 [inline]
kswapd+0x1cbc/0x3720 mm/vmscan.c:7203
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>

Allocated by task 22175:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:247 [inline]
slab_post_alloc_hook mm/slub.c:4086 [inline]
slab_alloc_node mm/slub.c:4135 [inline]
kmem_cache_alloc_noprof+0x135/0x2a0 mm/slub.c:4142
getname_flags+0xb7/0x540 fs/namei.c:139
vfs_fstatat+0x12c/0x190 fs/stat.c:340
__do_sys_newfstatat fs/stat.c:505 [inline]
__se_sys_newfstatat fs/stat.c:499 [inline]
__x64_sys_newfstatat+0x11d/0x1a0 fs/stat.c:499
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 22175:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:579
poison_slab_object mm/kasan/common.c:247 [inline]
__kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
kasan_slab_free include/linux/kasan.h:230 [inline]
slab_free_hook mm/slub.c:2343 [inline]
slab_free mm/slub.c:4580 [inline]
kmem_cache_free+0x1a3/0x420 mm/slub.c:4682
vfs_fstatat+0x14f/0x190 fs/stat.c:342
__do_sys_newfstatat fs/stat.c:505 [inline]
__se_sys_newfstatat fs/stat.c:499 [inline]
__x64_sys_newfstatat+0x11d/0x1a0 fs/stat.c:499
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff888024afb300
which belongs to the cache names_cache of size 4096
The buggy address is located 3216 bytes inside of
freed 4096-byte region [ffff888024afb300, ffff888024afc300)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x24af8
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xfdffffff(slab)
raw: 00fff00000000040 ffff88801bafc780 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000070007 00000001fdffffff 0000000000000000
head: 00fff00000000040 ffff88801bafc780 dead000000000122 0000000000000000
head: 0000000000000000 0000000000070007 00000001fdffffff 0000000000000000
head: 00fff00000000003 ffffea000092be01 ffffffffffffffff 0000000000000000
head: 0000000000000008 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 22175, tgid 22175 (sed), ts 1028936253921, free_ts 1028792054212
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1500
prep_new_page mm/page_alloc.c:1508 [inline]
get_page_from_freelist+0x2e4c/0x2f10 mm/page_alloc.c:3446
__alloc_pages_noprof+0x256/0x6c0 mm/page_alloc.c:4702
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2263
alloc_slab_page+0x6a/0x130 mm/slub.c:2413
allocate_slab+0x5a/0x2f0 mm/slub.c:2579
new_slab mm/slub.c:2632 [inline]
___slab_alloc+0xcd1/0x14b0 mm/slub.c:3819
__slab_alloc+0x58/0xa0 mm/slub.c:3909
__slab_alloc_node mm/slub.c:3962 [inline]
slab_alloc_node mm/slub.c:4123 [inline]
kmem_cache_alloc_noprof+0x1c1/0x2a0 mm/slub.c:4142
getname_flags+0xb7/0x540 fs/namei.c:139
do_sys_openat2+0xd2/0x1d0 fs/open.c:1409
do_sys_open fs/open.c:1430 [inline]
__do_sys_openat fs/open.c:1446 [inline]
__se_sys_openat fs/open.c:1441 [inline]
__x64_sys_openat+0x247/0x2a0 fs/open.c:1441
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 22169 tgid 22169 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1101 [inline]
free_unref_page+0xd22/0xea0 mm/page_alloc.c:2619
discard_slab mm/slub.c:2678 [inline]
__put_partials+0xeb/0x130 mm/slub.c:3146
put_cpu_partial+0x17c/0x250 mm/slub.c:3221
__slab_free+0x2ea/0x3d0 mm/slub.c:4450
qlink_free mm/kasan/quarantine.c:163 [inline]
qlist_free_all+0x9e/0x140 mm/kasan/quarantine.c:179
kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
__kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
kasan_slab_alloc include/linux/kasan.h:247 [inline]
slab_post_alloc_hook mm/slub.c:4086 [inline]
slab_alloc_node mm/slub.c:4135 [inline]
kmem_cache_alloc_noprof+0x135/0x2a0 mm/slub.c:4142
getname_flags+0xb7/0x540 fs/namei.c:139
vfs_fstatat+0x12c/0x190 fs/stat.c:340
__do_sys_newfstatat fs/stat.c:505 [inline]
__se_sys_newfstatat fs/stat.c:499 [inline]
__x64_sys_newfstatat+0x11d/0x1a0 fs/stat.c:499
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
ffff888024afbe80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888024afbf00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888024afbf80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                         ^
ffff888024afc000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888024afc080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Dec 13, 2024, 11:18:27 AM
to ak...@linux-foundation.org, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: f932fb9b4074 Merge tag 'v6.13-rc2-ksmbd-server-fixes' of g..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=101e24f8580000
kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14654730580000

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7feb34a89c2a/non_bootable_disk-f932fb9b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1982926f01cf/vmlinux-f932fb9b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/56ef2ef1e465/bzImage-f932fb9b.xz
mounted in repro #1: https://storage.googleapis.com/syzbot-assets/7e9b5cd91eeb/mount_0.gz
mounted in repro #2: https://storage.googleapis.com/syzbot-assets/87ff98e190e4/mount_1.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4c7590...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:269 [inline]
BUG: KASAN: slab-use-after-free in folio_evictable+0xe3/0x250 mm/internal.h:435
Read of size 8 at addr ffff88804e4813a0 by task kswapd1/81

CPU: 0 UID: 0 PID: 81 Comm: kswapd1 Not tainted 6.13.0-rc2-syzkaller-00159-gf932fb9b4074 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x169/0x550 mm/kasan/report.c:489
kasan_report+0x143/0x180 mm/kasan/report.c:602
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
mapping_unevictable include/linux/pagemap.h:269 [inline]
folio_evictable+0xe3/0x250 mm/internal.h:435
sort_folio mm/vmscan.c:4299 [inline]
scan_folios mm/vmscan.c:4424 [inline]
isolate_folios mm/vmscan.c:4550 [inline]
evict_folios+0xff2/0x5800 mm/vmscan.c:4581
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
kswapd_shrink_node mm/vmscan.c:6785 [inline]
balance_pgdat mm/vmscan.c:6977 [inline]
kswapd+0x1ca9/0x36f0 mm/vmscan.c:7246
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>

Allocated by task 5580:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4104 [inline]
slab_alloc_node mm/slub.c:4153 [inline]
kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4160
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_dir_search+0x229/0x2f0 fs/gfs2/dir.c:1667
gfs2_lookupi+0x461/0x5e0 fs/gfs2/inode.c:340
gfs2_jindex_hold fs/gfs2/ops_fstype.c:587 [inline]
init_journal+0x5fa/0x2410 fs/gfs2/ops_fstype.c:729
init_inodes+0xdc/0x320 fs/gfs2/ops_fstype.c:864
gfs2_fill_super+0x1bd1/0x24d0 fs/gfs2/ops_fstype.c:1249
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3507
do_mount fs/namespace.c:3847 [inline]
__do_sys_mount fs/namespace.c:4057 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 16:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:582
poison_slab_object mm/kasan/common.c:247 [inline]
__kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
kasan_slab_free include/linux/kasan.h:233 [inline]
slab_free_hook mm/slub.c:2338 [inline]
slab_free mm/slub.c:4598 [inline]
kmem_cache_free+0x195/0x410 mm/slub.c:4700
rcu_do_batch kernel/rcu/tree.c:2567 [inline]
rcu_core+0xaaa/0x17a0 kernel/rcu/tree.c:2823
handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
run_ksoftirqd+0xca/0x130 kernel/softirq.c:950
smpboot_thread_fn+0x544/0xa30 kernel/smpboot.c:164
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
__call_rcu_common kernel/rcu/tree.c:3086 [inline]
call_rcu+0x167/0xa70 kernel/rcu/tree.c:3190
__gfs2_glock_free+0xda0/0xef0 fs/gfs2/glock.c:172
gfs2_glock_free+0x3c/0xb0 fs/gfs2/glock.c:178
gfs2_glock_put_eventually fs/gfs2/super.c:1257 [inline]
gfs2_evict_inode+0x6e2/0x13c0 fs/gfs2/super.c:1546
evict+0x4e8/0x9a0 fs/inode.c:796
gfs2_jindex_free+0x3f6/0x4b0 fs/gfs2/super.c:79
init_journal+0x9fb/0x2410 fs/gfs2/ops_fstype.c:846
init_inodes+0xdc/0x320 fs/gfs2/ops_fstype.c:864
gfs2_fill_super+0x1bd1/0x24d0 fs/gfs2/ops_fstype.c:1249
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3507
do_mount fs/namespace.c:3847 [inline]
__do_sys_mount fs/namespace.c:4057 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Second to last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
insert_work+0x3e/0x330 kernel/workqueue.c:2183
__queue_work+0xc8b/0xf50 kernel/workqueue.c:2339
queue_delayed_work_on+0x1ca/0x390 kernel/workqueue.c:2552
queue_delayed_work include/linux/workqueue.h:677 [inline]
gfs2_glock_queue_work fs/gfs2/glock.c:250 [inline]
do_xmote+0xaf8/0x1250 fs/gfs2/glock.c:832
glock_work_func+0x343/0x5c0 fs/gfs2/glock.c:1090
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff88804e480fd8
which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
freed 1224-byte region [ffff88804e480fd8, ffff88804e4814a0)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x4e480
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801fa3d000 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000040 ffff88801fa3d000 dead000000000122 0000000000000000
head: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000002 ffffea0001392001 ffffffffffffffff 0000000000000000
head: 0000000700000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5580, tgid 5580 (syz.0.16), ts 92534711834, free_ts 92373479633
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1556
prep_new_page mm/page_alloc.c:1564 [inline]
get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3474
__alloc_pages_noprof+0x292/0x710 mm/page_alloc.c:4751
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
alloc_slab_page+0x6a/0x110 mm/slub.c:2408
allocate_slab+0x5a/0x2b0 mm/slub.c:2574
new_slab mm/slub.c:2627 [inline]
___slab_alloc+0xc27/0x14a0 mm/slub.c:3815
__slab_alloc+0x58/0xa0 mm/slub.c:3905
__slab_alloc_node mm/slub.c:3980 [inline]
slab_alloc_node mm/slub.c:4141 [inline]
kmem_cache_alloc_noprof+0x268/0x380 mm/slub.c:4160
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_lookup_root fs/gfs2/ops_fstype.c:440 [inline]
init_sb+0xa2a/0x1270 fs/gfs2/ops_fstype.c:507
gfs2_fill_super+0x19b3/0x24d0 fs/gfs2/ops_fstype.c:1216
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
page last free pid 52 tgid 52 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1127 [inline]
free_unref_page+0xd3f/0x1010 mm/page_alloc.c:2657
discard_slab mm/slub.c:2673 [inline]
__put_partials+0x160/0x1c0 mm/slub.c:3142
put_cpu_partial+0x17c/0x250 mm/slub.c:3217
__slab_free+0x290/0x380 mm/slub.c:4468
qlink_free mm/kasan/quarantine.c:163 [inline]
qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
__kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4104 [inline]
slab_alloc_node mm/slub.c:4153 [inline]
kmem_cache_alloc_node_noprof+0x1d9/0x380 mm/slub.c:4205
__alloc_skb+0x1c3/0x440 net/core/skbuff.c:668
alloc_skb include/linux/skbuff.h:1323 [inline]
alloc_skb_with_frags+0xc3/0x820 net/core/skbuff.c:6612
sock_alloc_send_pskb+0x91a/0xa60 net/core/sock.c:2881
sock_alloc_send_skb include/net/sock.h:1797 [inline]
mld_newpack+0x1c3/0xaf0 net/ipv6/mcast.c:1747
add_grhead net/ipv6/mcast.c:1850 [inline]
add_grec+0x1492/0x19a0 net/ipv6/mcast.c:1988
mld_send_cr net/ipv6/mcast.c:2114 [inline]
mld_ifc_work+0x691/0xd90 net/ipv6/mcast.c:2651
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391

Memory state around the buggy address:
ffff88804e481280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88804e481300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88804e481380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                               ^
ffff88804e481400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88804e481480: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

Yu Zhao

Dec 13, 2024, 2:21:31 PM
to syzbot, ak...@linux-foundation.org, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com, Matthew Wilcox
On Fri, Dec 13, 2024 at 9:18 AM syzbot
<syzbot+4c7590...@syzkaller.appspotmail.com> wrote:
>
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: f932fb9b4074 Merge tag 'v6.13-rc2-ksmbd-server-fixes' of g..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=101e24f8580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
> dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14654730580000
>
> Downloadable assets:
> disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7feb34a89c2a/non_bootable_disk-f932fb9b.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/1982926f01cf/vmlinux-f932fb9b.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/56ef2ef1e465/bzImage-f932fb9b.xz
> mounted in repro #1: https://storage.googleapis.com/syzbot-assets/7e9b5cd91eeb/mount_0.gz
> mounted in repro #2: https://storage.googleapis.com/syzbot-assets/87ff98e190e4/mount_1.gz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+4c7590...@syzkaller.appspotmail.com
>
> ==================================================================
> BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
> BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
> BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:269 [inline]
> BUG: KASAN: slab-use-after-free in folio_evictable+0xe3/0x250 mm/internal.h:435
> Read of size 8 at addr ffff88804e4813a0 by task kswapd1/81

This doesn't seem like an MM bug -- folio_evictable() should be safe
to use on a folio as long as it's on LRU, i.e., after it's exposed on
LRU and before page_cache_release() finishes.

There might have been a dangling folio_mapping()?

Hillf Danton

Dec 13, 2024, 7:15:32 PM
to syzbot, linux-...@vger.kernel.org, syzkall...@googlegroups.com
On Fri, Dec 13, 2024 at 9:18
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: f932fb9b4074 Merge tag 'v6.13-rc2-ksmbd-server-fixes' of g..
> git tree: upstream
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14654730580000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -871,6 +871,7 @@ noinline int __filemap_add_folio(struct
folio_ref_add(folio, nr);
folio->mapping = mapping;
folio->index = xas.xa_index;
+ BUG_ON(mapping_exiting(mapping));

for (;;) {
int order = -1, split_order = 0;
--

syzbot

Dec 13, 2024, 7:31:04 PM
to hda...@sina.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __submit_bio

Adding 124996k swap on ./swap-file. Priority:0 extents:1 across:124996k
======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc2-syzkaller-00232-g4800575d8c0b-dirty #0 Not tainted
------------------------------------------------------
syz-executor/5695 is trying to acquire lock:
ffff888034c21438 (&q->q_usage_counter(io)#37){++++}-{0:0}, at: __submit_bio+0x2c6/0x560 block/blk-core.c:629

but task is already holding lock:
ffffffff8ea35ca0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3924 [inline]
ffffffff8ea35ca0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3949

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
__fs_reclaim_acquire mm/page_alloc.c:3851 [inline]
fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3865
might_alloc include/linux/sched/mm.h:318 [inline]
slab_pre_alloc_hook mm/slub.c:4070 [inline]
slab_alloc_node mm/slub.c:4148 [inline]
__do_kmalloc_node mm/slub.c:4297 [inline]
__kmalloc_node_noprof+0xb2/0x4d0 mm/slub.c:4304
__kvmalloc_node_noprof+0x72/0x190 mm/util.c:650
sbitmap_init_node+0x2d4/0x670 lib/sbitmap.c:132
scsi_realloc_sdev_budget_map+0x2a7/0x460 drivers/scsi/scsi_scan.c:246
scsi_add_lun drivers/scsi/scsi_scan.c:1106 [inline]
scsi_probe_and_add_lun+0x3173/0x4bd0 drivers/scsi/scsi_scan.c:1287
__scsi_add_device+0x228/0x2f0 drivers/scsi/scsi_scan.c:1622
ata_scsi_scan_host+0x236/0x740 drivers/ata/libata-scsi.c:4575
async_run_entry_fn+0xa8/0x420 kernel/async.c:129
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->q_usage_counter(io)#37){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3161 [inline]
check_prevs_add kernel/locking/lockdep.c:3280 [inline]
validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
__lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
bio_queue_enter block/blk.h:75 [inline]
blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3092
__submit_bio+0x2c6/0x560 block/blk-core.c:629
__submit_bio_noacct_mq block/blk-core.c:710 [inline]
submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
swap_writepage_bdev_async mm/page_io.c:451 [inline]
__swap_writepage+0x747/0x14d0 mm/page_io.c:474
swap_writepage+0x6ee/0xce0 mm/page_io.c:289
pageout mm/vmscan.c:689 [inline]
shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1367
evict_folios+0x3c86/0x5800 mm/vmscan.c:4593
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
shrink_zones mm/vmscan.c:6215 [inline]
do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6277
try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6527
__perform_reclaim mm/page_alloc.c:3927 [inline]
__alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3949
__alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4380
__alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4764
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
folio_alloc_mpol_noprof mm/mempolicy.c:2287 [inline]
vma_alloc_folio_noprof+0x12e/0x230 mm/mempolicy.c:2317
folio_prealloc+0x2e/0x170
alloc_anon_folio mm/memory.c:4752 [inline]
do_anonymous_page mm/memory.c:4809 [inline]
do_pte_missing mm/memory.c:3977 [inline]
handle_pte_fault+0x2c98/0x5ed0 mm/memory.c:5801
__handle_mm_fault mm/memory.c:5944 [inline]
handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
handle_page_fault arch/x86/mm/fault.c:1481 [inline]
exc_page_fault+0x459/0x8b0 arch/x86/mm/fault.c:1539
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#37);
                               lock(fs_reclaim);
  rlock(&q->q_usage_counter(io)#37);

*** DEADLOCK ***

2 locks held by syz-executor/5695:
#0: ffff888011d6c8e0 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:716 [inline]
#0: ffff888011d6c8e0 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x34b/0x790 mm/memory.c:6278
#1: ffffffff8ea35ca0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3924 [inline]
#1: ffffffff8ea35ca0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3949

stack backtrace:
CPU: 0 UID: 0 PID: 5695 Comm: syz-executor Not tainted 6.13.0-rc2-syzkaller-00232-g4800575d8c0b-dirty #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
check_prev_add kernel/locking/lockdep.c:3161 [inline]
check_prevs_add kernel/locking/lockdep.c:3280 [inline]
validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
__lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
bio_queue_enter block/blk.h:75 [inline]
blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3092
__submit_bio+0x2c6/0x560 block/blk-core.c:629
__submit_bio_noacct_mq block/blk-core.c:710 [inline]
submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
swap_writepage_bdev_async mm/page_io.c:451 [inline]
__swap_writepage+0x747/0x14d0 mm/page_io.c:474
swap_writepage+0x6ee/0xce0 mm/page_io.c:289
pageout mm/vmscan.c:689 [inline]
shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1367
evict_folios+0x3c86/0x5800 mm/vmscan.c:4593
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
shrink_zones mm/vmscan.c:6215 [inline]
do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6277
try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6527
__perform_reclaim mm/page_alloc.c:3927 [inline]
__alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3949
__alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4380
__alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4764
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
folio_alloc_mpol_noprof mm/mempolicy.c:2287 [inline]
vma_alloc_folio_noprof+0x12e/0x230 mm/mempolicy.c:2317
folio_prealloc+0x2e/0x170
alloc_anon_folio mm/memory.c:4752 [inline]
do_anonymous_page mm/memory.c:4809 [inline]
do_pte_missing mm/memory.c:3977 [inline]
handle_pte_fault+0x2c98/0x5ed0 mm/memory.c:5801
__handle_mm_fault mm/memory.c:5944 [inline]
handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
handle_page_fault arch/x86/mm/fault.c:1481 [inline]
exc_page_fault+0x459/0x8b0 arch/x86/mm/fault.c:1539
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7fd75394f603
Code: 07 62 e1 7d 28 e7 4f 01 62 e1 7d 28 e7 57 02 62 e1 7d 28 e7 5f 03 62 e1 7d 28 e7 a7 00 10 00 00 62 e1 7d 28 e7 af 20 10 00 00 <62> e1 7d 28 e7 b7 40 10 00 00 62 e1 7d 28 e7 bf 60 10 00 00 48 83
RSP: 002b:00007fff07875f68 EFLAGS: 00010203
RAX: 00007fd74dca2aa8 RBX: 00007fff07876420 RCX: 0000000000000016
RDX: 00000000000014b0 RSI: 00007fd7520325c8 RDI: 00007fd74e170fc0
RBP: 0000000000000020 R08: ffffffffffffffe8 R09: 0000000000000000
R10: 0000000000000be6 R11: 000000000c000000 R12: 00007fd751b64010
R13: 0000000000000020 R14: 0000000000e4da98 R15: 0000000001c9b4c8
</TASK>


Tested on:

commit: 4800575d Merge tag 'xfs-fixes-6.13-rc3' of git://git.k..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13983be8580000
kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=1506a4f8580000

Hillf Danton

Dec 13, 2024, 7:45:04 PM
to syzbot, linux-...@vger.kernel.org, syzkall...@googlegroups.com
On Fri, Dec 13, 2024 at 9:18
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: f932fb9b4074 Merge tag 'v6.13-rc2-ksmbd-server-fixes' of g..
> git tree: upstream
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14654730580000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -871,6 +871,7 @@ noinline int __filemap_add_folio(struct
folio_ref_add(folio, nr);
folio->mapping = mapping;
folio->index = xas.xa_index;
+ BUG_ON(mapping_exiting(mapping));

for (;;) {
int order = -1, split_order = 0;
--- x/block/blk.h
+++ y/block/blk.h
@@ -72,8 +72,6 @@ static inline int bio_queue_enter(struct
struct request_queue *q = bdev_get_queue(bio->bi_bdev);

if (blk_try_enter_queue(q, false)) {
- rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
- rwsem_release(&q->io_lockdep_map, _RET_IP_);
return 0;
}
return __bio_queue_enter(q, bio);
--

syzbot

Dec 13, 2024, 8:00:05 PM
to hda...@sina.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: slab-use-after-free Read in folio_evictable

==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:269 [inline]
BUG: KASAN: slab-use-after-free in folio_evictable+0xe3/0x250 mm/internal.h:435
Read of size 8 at addr ffff8880526713a0 by task syz.1.17/5931

CPU: 0 UID: 0 PID: 5931 Comm: syz.1.17 Not tainted 6.13.0-rc2-syzkaller-00232-g4800575d8c0b-dirty #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x169/0x550 mm/kasan/report.c:489
kasan_report+0x143/0x180 mm/kasan/report.c:602
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
mapping_unevictable include/linux/pagemap.h:269 [inline]
folio_evictable+0xe3/0x250 mm/internal.h:435
sort_folio mm/vmscan.c:4299 [inline]
scan_folios mm/vmscan.c:4424 [inline]
isolate_folios mm/vmscan.c:4550 [inline]
evict_folios+0xff2/0x5800 mm/vmscan.c:4581
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
shrink_zones mm/vmscan.c:6215 [inline]
do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6277
try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6527
__perform_reclaim mm/page_alloc.c:3927 [inline]
__alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3949
__alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4380
__alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4764
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
folio_alloc_mpol_noprof+0x36/0x50 mm/mempolicy.c:2287
shmem_alloc_folio mm/shmem.c:1794 [inline]
shmem_alloc_and_add_folio+0x4a0/0x1080 mm/shmem.c:1833
shmem_get_folio_gfp+0x621/0x1840 mm/shmem.c:2355
shmem_get_folio mm/shmem.c:2461 [inline]
shmem_write_begin+0x165/0x350 mm/shmem.c:3117
generic_perform_write+0x346/0x990 mm/filemap.c:4056
shmem_file_write_iter+0xf9/0x120 mm/shmem.c:3293
new_sync_write fs/read_write.c:586 [inline]
vfs_write+0xaeb/0xd30 fs/read_write.c:679
ksys_write+0x18f/0x2b0 fs/read_write.c:731
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6c7b3847cf
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 f9 92 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 4c 93 02 00 48
RSP: 002b:00007f6c7c288df0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000001000000 RCX: 00007f6c7b3847cf
RDX: 0000000001000000 RSI: 00007f6c72000000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000001274a
R10: 0000000020000142 R11: 0000000000000293 R12: 0000000000000003
R13: 00007f6c7c288ef0 R14: 00007f6c7c288eb0 R15: 00007f6c72000000
</TASK>

Allocated by task 5886:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4119 [inline]
slab_alloc_node mm/slub.c:4168 [inline]
kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4175
slab_free_hook mm/slub.c:2353 [inline]
slab_free mm/slub.c:4613 [inline]
kmem_cache_free+0x195/0x410 mm/slub.c:4715
rcu_do_batch kernel/rcu/tree.c:2567 [inline]
rcu_core+0xaaa/0x17a0 kernel/rcu/tree.c:2823
handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
run_ksoftirqd+0xca/0x130 kernel/softirq.c:950
smpboot_thread_fn+0x544/0xa30 kernel/smpboot.c:164
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff888052670fd8
which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
freed 1224-byte region [ffff888052670fd8, ffff8880526714a0)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x52670
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801f753dc0 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000040 ffff88801f753dc0 dead000000000122 0000000000000000
head: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000002 ffffea0001499c01 ffffffffffffffff 0000000000000000
head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5886, tgid 5885 (syz.0.16), ts 141846335874, free_ts 141768346607
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1556
prep_new_page mm/page_alloc.c:1564 [inline]
get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3474
__alloc_pages_noprof+0x292/0x710 mm/page_alloc.c:4751
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
alloc_slab_page+0x6a/0x110 mm/slub.c:2423
allocate_slab+0x5a/0x2b0 mm/slub.c:2589
new_slab mm/slub.c:2642 [inline]
___slab_alloc+0xc27/0x14a0 mm/slub.c:3830
__slab_alloc+0x58/0xa0 mm/slub.c:3920
__slab_alloc_node mm/slub.c:3995 [inline]
slab_alloc_node mm/slub.c:4156 [inline]
kmem_cache_alloc_noprof+0x268/0x380 mm/slub.c:4175
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_lookup_root fs/gfs2/ops_fstype.c:440 [inline]
init_sb+0xa2a/0x1270 fs/gfs2/ops_fstype.c:507
gfs2_fill_super+0x19b3/0x24d0 fs/gfs2/ops_fstype.c:1216
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
page last free pid 5886 tgid 5885 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1127 [inline]
free_unref_page+0xd3f/0x1010 mm/page_alloc.c:2657
stack_depot_save_flags+0x7c6/0x940 lib/stackdepot.c:674
kasan_save_stack mm/kasan/common.c:48 [inline]
kasan_save_track+0x51/0x80 mm/kasan/common.c:68
poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
__kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:394
kasan_kmalloc include/linux/kasan.h:260 [inline]
__do_kmalloc_node mm/slub.c:4298 [inline]
__kmalloc_node_track_caller_noprof+0x28b/0x4c0 mm/slub.c:4317
__kmemdup_nul mm/util.c:61 [inline]
kstrdup+0x39/0xb0 mm/util.c:81
__kernfs_new_node+0x9d/0x870 fs/kernfs/dir.c:620
kernfs_new_node+0x137/0x240 fs/kernfs/dir.c:700
kernfs_create_dir_ns+0x43/0x120 fs/kernfs/dir.c:1061
sysfs_create_dir_ns+0x189/0x3a0 fs/sysfs/dir.c:59
create_dir lib/kobject.c:73 [inline]
kobject_add_internal+0x435/0x8d0 lib/kobject.c:240
kobject_add_varg lib/kobject.c:374 [inline]
kobject_init_and_add+0x124/0x190 lib/kobject.c:457
gfs2_sys_fs_add+0x23b/0x4a0 fs/gfs2/sys.c:737
gfs2_fill_super+0x11ee/0x24d0 fs/gfs2/ops_fstype.c:1202
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330

Memory state around the buggy address:
ffff888052671280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888052671300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888052671380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff888052671400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888052671480: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================


Tested on:

commit: 4800575d Merge tag 'xfs-fixes-6.13-rc3' of git://git.k..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=15aec730580000
kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=10043be8580000

Edward Adam Davis

Dec 14, 2024, 11:08:00 PM
to syzbot+4c7590...@syzkaller.appspotmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
#syz test

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 8c4c1f871a88..8f851ecd1625 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -267,6 +267,7 @@ static void __gfs2_glock_put(struct gfs2_glock *gl)
lockref_mark_dead(&gl->gl_lockref);
spin_unlock(&gl->gl_lockref.lock);
gfs2_glock_remove_from_lru(gl);
+ cancel_delayed_work(&gl->gl_work);
GLOCK_BUG_ON(gl, !list_empty(&gl->gl_holders));
if (mapping) {
truncate_inode_pages_final(mapping);

syzbot

Dec 14, 2024, 11:22:06 PM
to ead...@qq.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __submit_bio

======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc2-syzkaller-00362-g2d8308bf5b67-dirty #0 Not tainted
------------------------------------------------------
kswapd0/77 is trying to acquire lock:
ffff88801a8a9438 (&q->q_usage_counter(io)#37){++++}-{0:0}, at: __submit_bio+0x2c6/0x560 block/blk-core.c:629

but task is already holding lock:
ffffffff8ea36de0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6864 [inline]
ffffffff8ea36de0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x36f0 mm/vmscan.c:7246

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
__fs_reclaim_acquire mm/page_alloc.c:3851 [inline]
fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3865
might_alloc include/linux/sched/mm.h:318 [inline]
slab_pre_alloc_hook mm/slub.c:4070 [inline]
slab_alloc_node mm/slub.c:4148 [inline]
__do_kmalloc_node mm/slub.c:4297 [inline]
__kmalloc_node_noprof+0xb2/0x4d0 mm/slub.c:4304
__kvmalloc_node_noprof+0x72/0x190 mm/util.c:650
sbitmap_init_node+0x2d4/0x670 lib/sbitmap.c:132
scsi_realloc_sdev_budget_map+0x2a7/0x460 drivers/scsi/scsi_scan.c:246
scsi_add_lun drivers/scsi/scsi_scan.c:1106 [inline]
scsi_probe_and_add_lun+0x3173/0x4bd0 drivers/scsi/scsi_scan.c:1287
__scsi_add_device+0x228/0x2f0 drivers/scsi/scsi_scan.c:1622
ata_scsi_scan_host+0x236/0x740 drivers/ata/libata-scsi.c:4575
async_run_entry_fn+0xa8/0x420 kernel/async.c:129
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->q_usage_counter(io)#37){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3161 [inline]
check_prevs_add kernel/locking/lockdep.c:3280 [inline]
validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
__lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
bio_queue_enter block/blk.h:75 [inline]
blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
__submit_bio+0x2c6/0x560 block/blk-core.c:629
__submit_bio_noacct_mq block/blk-core.c:710 [inline]
submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
swap_writepage_bdev_async mm/page_io.c:451 [inline]
__swap_writepage+0x747/0x14d0 mm/page_io.c:474
swap_writepage+0x6ee/0xce0 mm/page_io.c:289
pageout mm/vmscan.c:689 [inline]
shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1367
evict_folios+0x3c86/0x5800 mm/vmscan.c:4593
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
kswapd_shrink_node mm/vmscan.c:6785 [inline]
balance_pgdat mm/vmscan.c:6977 [inline]
kswapd+0x1ca9/0x36f0 mm/vmscan.c:7246
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Possible unsafe locking scenario:

CPU0 CPU1
---- ----
lock(fs_reclaim);
lock(&q->q_usage_counter(io)#37);
lock(fs_reclaim);
rlock(&q->q_usage_counter(io)#37);

*** DEADLOCK ***

1 lock held by kswapd0/77:
#0: ffffffff8ea36de0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6864 [inline]
#0: ffffffff8ea36de0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x36f0 mm/vmscan.c:7246

stack backtrace:
CPU: 0 UID: 0 PID: 77 Comm: kswapd0 Not tainted 6.13.0-rc2-syzkaller-00362-g2d8308bf5b67-dirty #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
check_prev_add kernel/locking/lockdep.c:3161 [inline]
check_prevs_add kernel/locking/lockdep.c:3280 [inline]
validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
__lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
bio_queue_enter block/blk.h:75 [inline]
blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
__submit_bio+0x2c6/0x560 block/blk-core.c:629
__submit_bio_noacct_mq block/blk-core.c:710 [inline]
submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
swap_writepage_bdev_async mm/page_io.c:451 [inline]
__swap_writepage+0x747/0x14d0 mm/page_io.c:474
swap_writepage+0x6ee/0xce0 mm/page_io.c:289
pageout mm/vmscan.c:689 [inline]
shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1367
evict_folios+0x3c86/0x5800 mm/vmscan.c:4593
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
kswapd_shrink_node mm/vmscan.c:6785 [inline]
balance_pgdat mm/vmscan.c:6977 [inline]
kswapd+0x1ca9/0x36f0 mm/vmscan.c:7246
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>


Tested on:

commit: 2d8308bf Merge tag 'scsi-fixes' of git://git.kernel.or..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13173cdf980000
kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=13c73cdf980000

Edward Adam Davis

Dec 14, 2024, 11:48:49 PM
to syzbot+4c7590...@syzkaller.appspotmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 042329b74c6e..3dcef4bb0427 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -222,6 +222,7 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
bool need_free = false;
int ret;
struct sbitmap sb_backup;
+ unsigned int flags;

depth = min_t(unsigned int, depth, scsi_device_max_queue_depth(sdev));

@@ -243,10 +244,12 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
blk_mq_freeze_queue(sdev->request_queue);
sb_backup = sdev->budget_map;
}
+ flags = memalloc_nofs_save();
ret = sbitmap_init_node(&sdev->budget_map,
scsi_device_max_queue_depth(sdev),
new_shift, GFP_KERNEL,
sdev->request_queue->node, false, true);
+ memalloc_nofs_restore(flags);
if (!ret)
sbitmap_resize(&sdev->budget_map, depth);


syzbot

Dec 15, 2024, 12:03:06 AM
to ead...@qq.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: slab-use-after-free Read in folio_evictable

==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:269 [inline]
BUG: KASAN: slab-use-after-free in folio_evictable+0xe3/0x250 mm/internal.h:435
Read of size 8 at addr ffff88804ffb13a0 by task kswapd1/81

CPU: 0 UID: 0 PID: 81 Comm: kswapd1 Not tainted 6.13.0-rc2-syzkaller-00362-g2d8308bf5b67-dirty #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x169/0x550 mm/kasan/report.c:489
kasan_report+0x143/0x180 mm/kasan/report.c:602
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
mapping_unevictable include/linux/pagemap.h:269 [inline]
folio_evictable+0xe3/0x250 mm/internal.h:435
sort_folio mm/vmscan.c:4299 [inline]
scan_folios mm/vmscan.c:4424 [inline]
isolate_folios mm/vmscan.c:4550 [inline]
evict_folios+0xff2/0x5800 mm/vmscan.c:4581
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
kswapd_shrink_node mm/vmscan.c:6785 [inline]
balance_pgdat mm/vmscan.c:6977 [inline]
kswapd+0x1ca9/0x36f0 mm/vmscan.c:7246
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>

Allocated by task 6050:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4119 [inline]
slab_alloc_node mm/slub.c:4168 [inline]
kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4175
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1179
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_dir_search+0x229/0x2f0 fs/gfs2/dir.c:1667
gfs2_lookupi+0x461/0x5e0 fs/gfs2/inode.c:340
gfs2_lookup_meta+0x100/0x200 fs/gfs2/inode.c:280
init_journal+0x1bf/0x2410 fs/gfs2/ops_fstype.c:721
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
__call_rcu_common kernel/rcu/tree.c:3086 [inline]
call_rcu+0x167/0xa70 kernel/rcu/tree.c:3190
__gfs2_glock_free+0xda0/0xef0 fs/gfs2/glock.c:172
gfs2_glock_free+0x3c/0xb0 fs/gfs2/glock.c:178
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Second to last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
insert_work+0x3e/0x330 kernel/workqueue.c:2183
__queue_work+0xc8b/0xf50 kernel/workqueue.c:2339
queue_delayed_work_on+0x1ca/0x390 kernel/workqueue.c:2552
queue_delayed_work include/linux/workqueue.h:677 [inline]
gfs2_glock_queue_work fs/gfs2/glock.c:250 [inline]
do_xmote+0xaf8/0x1250 fs/gfs2/glock.c:833
glock_work_func+0x343/0x5c0 fs/gfs2/glock.c:1091
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff88804ffb0fd8
which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
freed 1224-byte region [ffff88804ffb0fd8, ffff88804ffb14a0)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x4ffb0
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801f8a7c80 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000040 ffff88801f8a7c80 dead000000000122 0000000000000000
head: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000002 ffffea00013fec01 ffffffffffffffff 0000000000000000
head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5989, tgid 5987 (syz.0.16), ts 130709784579, free_ts 130654550879
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1556
prep_new_page mm/page_alloc.c:1564 [inline]
get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3474
__alloc_pages_noprof+0x292/0x710 mm/page_alloc.c:4751
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
alloc_slab_page+0x6a/0x110 mm/slub.c:2423
allocate_slab+0x5a/0x2b0 mm/slub.c:2589
new_slab mm/slub.c:2642 [inline]
___slab_alloc+0xc27/0x14a0 mm/slub.c:3830
__slab_alloc+0x58/0xa0 mm/slub.c:3920
__slab_alloc_node mm/slub.c:3995 [inline]
slab_alloc_node mm/slub.c:4156 [inline]
kmem_cache_alloc_noprof+0x268/0x380 mm/slub.c:4175
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1179
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_lookup_root fs/gfs2/ops_fstype.c:440 [inline]
init_sb+0xa2a/0x1270 fs/gfs2/ops_fstype.c:507
gfs2_fill_super+0x19b3/0x24d0 fs/gfs2/ops_fstype.c:1216
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
page last free pid 52 tgid 52 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1127 [inline]
free_unref_page+0xd3f/0x1010 mm/page_alloc.c:2657
discard_slab mm/slub.c:2688 [inline]
__put_partials+0x160/0x1c0 mm/slub.c:3157
put_cpu_partial+0x17c/0x250 mm/slub.c:3232
__slab_free+0x290/0x380 mm/slub.c:4483
qlink_free mm/kasan/quarantine.c:163 [inline]
qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
__kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4119 [inline]
slab_alloc_node mm/slub.c:4168 [inline]
__kmalloc_cache_noprof+0x1d9/0x390 mm/slub.c:4324
kmalloc_noprof include/linux/slab.h:901 [inline]
kzalloc_noprof include/linux/slab.h:1037 [inline]
keypair_create drivers/net/wireguard/noise.c:100 [inline]
wg_noise_handshake_begin_session+0xc4/0xb80 drivers/net/wireguard/noise.c:827
wg_receive_handshake_packet drivers/net/wireguard/receive.c:176 [inline]
wg_packet_handshake_receive_worker+0x632/0xf50 drivers/net/wireguard/receive.c:213
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Memory state around the buggy address:
ffff88804ffb1280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88804ffb1300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88804ffb1380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88804ffb1400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88804ffb1480: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================


Tested on:

commit: 2d8308bf Merge tag 'scsi-fixes' of git://git.kernel.or..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1509fbe8580000
kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=13394344580000

Edward Adam Davis

Dec 15, 2024, 12:19:29 AM
to syzbot+4c7590...@syzkaller.appspotmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
#syz test

diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 042329b74c6e..3dcef4bb0427 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -222,6 +222,7 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
bool need_free = false;
int ret;
struct sbitmap sb_backup;
+ unsigned int flags;

depth = min_t(unsigned int, depth, scsi_device_max_queue_depth(sdev));

@@ -243,10 +244,12 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
blk_mq_freeze_queue(sdev->request_queue);
sb_backup = sdev->budget_map;
}
+ flags = memalloc_nofs_save();
ret = sbitmap_init_node(&sdev->budget_map,
scsi_device_max_queue_depth(sdev),
new_shift, GFP_KERNEL,
sdev->request_queue->node, false, true);
+ memalloc_nofs_restore(flags);
if (!ret)
sbitmap_resize(&sdev->budget_map, depth);

diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index e83d293c3614..573f62ccd01e 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -839,6 +839,8 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
gfs2_holder_initialized(&sdp->sd_jinode_gh))
gfs2_glock_dq_uninit(&sdp->sd_jinode_gh);
fail_journal_gh:
+ if (ip)
+ cancel_delayed_work(&ip->i_gl->gl_work);
if (!sdp->sd_args.ar_spectator &&
gfs2_holder_initialized(&sdp->sd_journal_gh))
gfs2_glock_dq_uninit(&sdp->sd_journal_gh);

syzbot

Dec 15, 2024, 12:35:04 AM
to ead...@qq.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: slab-use-after-free Read in folio_evictable

==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:269 [inline]
BUG: KASAN: slab-use-after-free in folio_evictable+0xe3/0x250 mm/internal.h:435
Read of size 8 at addr ffff88804fd613a0 by task syz.0.23/6094

CPU: 0 UID: 0 PID: 6094 Comm: syz.0.23 Not tainted 6.13.0-rc2-syzkaller-00362-g2d8308bf5b67-dirty #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x169/0x550 mm/kasan/report.c:489
kasan_report+0x143/0x180 mm/kasan/report.c:602
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
mapping_unevictable include/linux/pagemap.h:269 [inline]
folio_evictable+0xe3/0x250 mm/internal.h:435
sort_folio mm/vmscan.c:4299 [inline]
scan_folios mm/vmscan.c:4424 [inline]
isolate_folios mm/vmscan.c:4550 [inline]
evict_folios+0xff2/0x5800 mm/vmscan.c:4581
try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4789
shrink_one+0x3b9/0x850 mm/vmscan.c:4834
shrink_many mm/vmscan.c:4897 [inline]
lru_gen_shrink_node mm/vmscan.c:4975 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
shrink_zones mm/vmscan.c:6215 [inline]
do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6277
try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6527
__perform_reclaim mm/page_alloc.c:3927 [inline]
__alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3949
__alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4380
__alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4764
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
folio_alloc_mpol_noprof+0x36/0x50 mm/mempolicy.c:2287
shmem_alloc_folio mm/shmem.c:1794 [inline]
shmem_alloc_and_add_folio+0x4a0/0x1080 mm/shmem.c:1833
shmem_get_folio_gfp+0x621/0x1840 mm/shmem.c:2355
shmem_get_folio mm/shmem.c:2461 [inline]
shmem_write_begin+0x165/0x350 mm/shmem.c:3117
generic_perform_write+0x346/0x990 mm/filemap.c:4055
shmem_file_write_iter+0xf9/0x120 mm/shmem.c:3293
new_sync_write fs/read_write.c:586 [inline]
vfs_write+0xaeb/0xd30 fs/read_write.c:679
ksys_write+0x18f/0x2b0 fs/read_write.c:731
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fee41f847cf
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 f9 92 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 4c 93 02 00 48
RSP: 002b:00007fee42e47df0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000001000000 RCX: 00007fee41f847cf
RDX: 0000000001000000 RSI: 00007fee38c00000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000001274a
R10: 0000000020000142 R11: 0000000000000293 R12: 0000000000000003
R13: 00007fee42e47ef0 R14: 00007fee42e47eb0 R15: 00007fee38c00000
</TASK>

Allocated by task 6026:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4119 [inline]
slab_alloc_node mm/slub.c:4168 [inline]
kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4175
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_dir_search+0x229/0x2f0 fs/gfs2/dir.c:1667
gfs2_lookupi+0x461/0x5e0 fs/gfs2/inode.c:340
gfs2_jindex_hold fs/gfs2/ops_fstype.c:587 [inline]
init_journal+0x602/0x2470 fs/gfs2/ops_fstype.c:729
init_inodes+0xdc/0x320 fs/gfs2/ops_fstype.c:866
gfs2_fill_super+0x1bd1/0x24d0 fs/gfs2/ops_fstype.c:1251
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1332
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3507
do_mount fs/namespace.c:3847 [inline]
__do_sys_mount fs/namespace.c:4057 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 6014:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:582
poison_slab_object mm/kasan/common.c:247 [inline]
__kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
kasan_slab_free include/linux/kasan.h:233 [inline]
slab_free_hook mm/slub.c:2353 [inline]
slab_free mm/slub.c:4613 [inline]
kmem_cache_free+0x195/0x410 mm/slub.c:4715
rcu_do_batch kernel/rcu/tree.c:2567 [inline]
rcu_core+0xaaa/0x17a0 kernel/rcu/tree.c:2823
handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
do_softirq+0x11b/0x1e0 kernel/softirq.c:462
__local_bh_enable_ip+0x1bb/0x200 kernel/softirq.c:389
ipv6_get_lladdr+0x295/0x3d0 net/ipv6/addrconf.c:1936
mld_newpack+0x337/0xaf0 net/ipv6/mcast.c:1755
add_grhead net/ipv6/mcast.c:1850 [inline]
add_grec+0x1492/0x19a0 net/ipv6/mcast.c:1988
mld_send_cr net/ipv6/mcast.c:2114 [inline]
mld_ifc_work+0x691/0xd90 net/ipv6/mcast.c:2651
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
__call_rcu_common kernel/rcu/tree.c:3086 [inline]
call_rcu+0x167/0xa70 kernel/rcu/tree.c:3190
__gfs2_glock_free+0xda0/0xef0 fs/gfs2/glock.c:172
gfs2_glock_free+0x3c/0xb0 fs/gfs2/glock.c:178
gfs2_glock_put_eventually fs/gfs2/super.c:1257 [inline]
gfs2_evict_inode+0x6e2/0x13c0 fs/gfs2/super.c:1546
evict+0x4e8/0x9a0 fs/inode.c:796
gfs2_jindex_free+0x3f6/0x4b0 fs/gfs2/super.c:79
init_journal+0xa46/0x2470 fs/gfs2/ops_fstype.c:848
init_inodes+0xdc/0x320 fs/gfs2/ops_fstype.c:866
gfs2_fill_super+0x1bd1/0x24d0 fs/gfs2/ops_fstype.c:1251
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1332
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3507
do_mount fs/namespace.c:3847 [inline]
__do_sys_mount fs/namespace.c:4057 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Second to last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
insert_work+0x3e/0x330 kernel/workqueue.c:2183
__queue_work+0xc8b/0xf50 kernel/workqueue.c:2339
queue_delayed_work_on+0x1ca/0x390 kernel/workqueue.c:2552
queue_delayed_work include/linux/workqueue.h:677 [inline]
gfs2_glock_queue_work fs/gfs2/glock.c:250 [inline]
do_xmote+0xaf8/0x1250 fs/gfs2/glock.c:832
glock_work_func+0x343/0x5c0 fs/gfs2/glock.c:1090
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff88804fd60fd8
which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
freed 1224-byte region [ffff88804fd60fd8, ffff88804fd614a0)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x4fd60
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801fa6cdc0 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000040 ffff88801fa6cdc0 dead000000000122 0000000000000000
head: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000002 ffffea00013f5801 ffffffffffffffff 0000000000000000
head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 6026, tgid 6025 (syz.0.16), ts 112070887808, free_ts 112070839340
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1556
prep_new_page mm/page_alloc.c:1564 [inline]
get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3474
__alloc_pages_noprof+0x292/0x710 mm/page_alloc.c:4751
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
alloc_slab_page+0x6a/0x110 mm/slub.c:2423
allocate_slab+0x5a/0x2b0 mm/slub.c:2589
new_slab mm/slub.c:2642 [inline]
___slab_alloc+0xc27/0x14a0 mm/slub.c:3830
__slab_alloc+0x58/0xa0 mm/slub.c:3920
__slab_alloc_node mm/slub.c:3995 [inline]
slab_alloc_node mm/slub.c:4156 [inline]
kmem_cache_alloc_noprof+0x268/0x380 mm/slub.c:4175
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_lookup_root fs/gfs2/ops_fstype.c:440 [inline]
init_sb+0xa2a/0x1270 fs/gfs2/ops_fstype.c:507
gfs2_fill_super+0x19b3/0x24d0 fs/gfs2/ops_fstype.c:1218
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1332
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
page last free pid 6026 tgid 6025 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1127 [inline]
free_unref_page+0xd3f/0x1010 mm/page_alloc.c:2657
stack_depot_save_flags+0x7c6/0x940 lib/stackdepot.c:674
kasan_save_stack mm/kasan/common.c:48 [inline]
kasan_save_track+0x51/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4119 [inline]
slab_alloc_node mm/slub.c:4168 [inline]
kmem_cache_alloc_lru_noprof+0x1dd/0x390 mm/slub.c:4187
gfs2_alloc_inode+0x58/0x170 fs/gfs2/super.c:1555
alloc_inode+0x65/0x1a0 fs/inode.c:336
iget5_locked+0x4a/0xa0 fs/inode.c:1404
gfs2_inode_lookup+0xf3/0xc90 fs/gfs2/inode.c:124
gfs2_lookup_root fs/gfs2/ops_fstype.c:440 [inline]
init_sb+0xa2a/0x1270 fs/gfs2/ops_fstype.c:507
gfs2_fill_super+0x19b3/0x24d0 fs/gfs2/ops_fstype.c:1218
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1332
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3507
do_mount fs/namespace.c:3847 [inline]
__do_sys_mount fs/namespace.c:4057 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034

Memory state around the buggy address:
ffff88804fd61280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88804fd61300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88804fd61380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88804fd61400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88804fd61480: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================


Tested on:

commit: 2d8308bf Merge tag 'scsi-fixes' of git://git.kernel.or..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=12116730580000
kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=15fe6730580000

Edward Adam Davis

Dec 15, 2024, 12:59:52 AM
to syzbot+4c7590...@syzkaller.appspotmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
#syz test

diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index e83d293c3614..a87c1d5547e5 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -839,6 +839,8 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
 	    gfs2_holder_initialized(&sdp->sd_jinode_gh))
 		gfs2_glock_dq_uninit(&sdp->sd_jinode_gh);
 fail_journal_gh:
+	cancel_delayed_work(&sdp->sd_rename_gl->gl_work);
+	cancel_delayed_work(&sdp->sd_freeze_gl->gl_work);
 	if (!sdp->sd_args.ar_spectator &&
 	    gfs2_holder_initialized(&sdp->sd_journal_gh))
 		gfs2_glock_dq_uninit(&sdp->sd_journal_gh);

syzbot

Dec 15, 2024, 1:14:03 AM
to ead...@qq.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: slab-use-after-free Read in move_to_new_folio

==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_inaccessible include/linux/pagemap.h:335 [inline]
BUG: KASAN: slab-use-after-free in move_to_new_folio+0x201/0xc20 mm/migrate.c:1050
Read of size 8 at addr ffff888055fcc910 by task kcompactd1/29

CPU: 0 UID: 0 PID: 29 Comm: kcompactd1 Not tainted 6.13.0-rc2-syzkaller-00362-g2d8308bf5b67-dirty #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x169/0x550 mm/kasan/report.c:489
kasan_report+0x143/0x180 mm/kasan/report.c:602
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
mapping_inaccessible include/linux/pagemap.h:335 [inline]
move_to_new_folio+0x201/0xc20 mm/migrate.c:1050
migrate_folio_move mm/migrate.c:1368 [inline]
migrate_pages_batch+0x1d1b/0x2a90 mm/migrate.c:1899
migrate_pages_sync mm/migrate.c:1965 [inline]
migrate_pages+0x1d57/0x3380 mm/migrate.c:2074
compact_zone+0x3404/0x4ac0 mm/compaction.c:2641
compact_node+0x2de/0x460 mm/compaction.c:2910
kcompactd+0x788/0x1510 mm/compaction.c:3208
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>

Allocated by task 6029:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4119 [inline]
slab_alloc_node mm/slub.c:4168 [inline]
kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4175
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_dir_search+0x229/0x2f0 fs/gfs2/dir.c:1667
gfs2_lookupi+0x461/0x5e0 fs/gfs2/inode.c:340
gfs2_jindex_hold fs/gfs2/ops_fstype.c:587 [inline]
init_journal+0x5fa/0x2470 fs/gfs2/ops_fstype.c:729
init_inodes+0xdc/0x320 fs/gfs2/ops_fstype.c:866
gfs2_fill_super+0x1bd1/0x24d0 fs/gfs2/ops_fstype.c:1251
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1332
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3507
do_mount fs/namespace.c:3847 [inline]
__do_sys_mount fs/namespace.c:4057 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 6063:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:582
poison_slab_object mm/kasan/common.c:247 [inline]
__kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
kasan_slab_free include/linux/kasan.h:233 [inline]
slab_free_hook mm/slub.c:2353 [inline]
slab_free mm/slub.c:4613 [inline]
kmem_cache_free+0x195/0x410 mm/slub.c:4715
rcu_do_batch kernel/rcu/tree.c:2567 [inline]
rcu_core+0xaaa/0x17a0 kernel/rcu/tree.c:2823
handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
__do_softirq kernel/softirq.c:595 [inline]
invoke_softirq kernel/softirq.c:435 [inline]
__irq_exit_rcu+0xf7/0x220 kernel/softirq.c:662
irq_exit_rcu+0x9/0x30 kernel/softirq.c:678
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1049
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702

Last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
__call_rcu_common kernel/rcu/tree.c:3086 [inline]
call_rcu+0x167/0xa70 kernel/rcu/tree.c:3190
__gfs2_glock_free+0xda0/0xef0 fs/gfs2/glock.c:172
gfs2_glock_free+0x3c/0xb0 fs/gfs2/glock.c:178
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Second to last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
__kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:544
insert_work+0x3e/0x330 kernel/workqueue.c:2183
__queue_work+0xc8b/0xf50 kernel/workqueue.c:2339
queue_delayed_work_on+0x1ca/0x390 kernel/workqueue.c:2552
queue_delayed_work include/linux/workqueue.h:677 [inline]
gfs2_glock_queue_work fs/gfs2/glock.c:250 [inline]
do_xmote+0xaf8/0x1250 fs/gfs2/glock.c:832
glock_work_func+0x343/0x5c0 fs/gfs2/glock.c:1090
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff888055fcc548
which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
freed 1224-byte region [ffff888055fcc548, ffff888055fcca10)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x55fcc
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff888033766dc0 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000040 ffff888033766dc0 dead000000000122 0000000000000000
head: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
head: 04fff00000000002 ffffea000157f301 ffffffffffffffff 0000000000000000
head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5959, tgid 5958 (syz.0.16), ts 138900958357, free_ts 127446675079
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1556
prep_new_page mm/page_alloc.c:1564 [inline]
get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3474
__alloc_pages_noprof+0x292/0x710 mm/page_alloc.c:4751
alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
alloc_slab_page+0x6a/0x110 mm/slub.c:2423
allocate_slab+0x5a/0x2b0 mm/slub.c:2589
new_slab mm/slub.c:2642 [inline]
___slab_alloc+0xc27/0x14a0 mm/slub.c:3830
__slab_alloc+0x58/0xa0 mm/slub.c:3920
__slab_alloc_node mm/slub.c:3995 [inline]
slab_alloc_node mm/slub.c:4156 [inline]
kmem_cache_alloc_noprof+0x268/0x380 mm/slub.c:4175
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_lookup_root fs/gfs2/ops_fstype.c:440 [inline]
init_sb+0xa2a/0x1270 fs/gfs2/ops_fstype.c:507
gfs2_fill_super+0x19b3/0x24d0 fs/gfs2/ops_fstype.c:1218
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1332
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
page last free pid 5419 tgid 5419 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1127 [inline]
free_unref_page+0xd3f/0x1010 mm/page_alloc.c:2657
kasan_depopulate_vmalloc_pte+0x74/0x90 mm/kasan/shadow.c:408
apply_to_pte_range mm/memory.c:2831 [inline]
apply_to_pmd_range mm/memory.c:2875 [inline]
apply_to_pud_range mm/memory.c:2911 [inline]
apply_to_p4d_range mm/memory.c:2947 [inline]
__apply_to_page_range+0x806/0xde0 mm/memory.c:2981
kasan_release_vmalloc+0xa5/0xd0 mm/kasan/shadow.c:529
kasan_release_vmalloc_node mm/vmalloc.c:2196 [inline]
purge_vmap_node+0x22f/0x8d0 mm/vmalloc.c:2213
__purge_vmap_area_lazy+0x708/0xae0 mm/vmalloc.c:2304
drain_vmap_area_work+0x27/0x40 mm/vmalloc.c:2338
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Memory state around the buggy address:
ffff888055fcc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888055fcc880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888055fcc900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff888055fcc980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888055fcca00: fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================


Tested on:

commit: 2d8308bf Merge tag 'scsi-fixes' of git://git.kernel.or..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=11a094f8580000
kernel config: https://syzkaller.appspot.com/x/.config?x=fee25f93665c89ac
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=15dd4344580000

syzbot

Feb 3, 2025, 11:58:22 PM
to ak...@linux-foundation.org, ead...@qq.com, hda...@sina.com, linux-...@vger.kernel.org, linu...@kvack.org, syzkall...@googlegroups.com, wi...@infradead.org, yuz...@google.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 0de63bb7d919 Merge tag 'pull-fix' of git://git.kernel.org/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14c39d18580000
kernel config: https://syzkaller.appspot.com/x/.config?x=1909f2f0d8e641ce
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7590f1cee06597e43a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1640eeb0580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=13aa23df980000

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7feb34a89c2a/non_bootable_disk-0de63bb7.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1142009a30a7/vmlinux-0de63bb7.xz
kernel image: https://storage.googleapis.com/syzbot-assets/5d9e46a8998d/bzImage-0de63bb7.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/07d8a470b3fc/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4c7590...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:269 [inline]
BUG: KASAN: slab-use-after-free in folio_evictable+0xe3/0x250 mm/internal.h:437
Read of size 8 at addr ffff888054e45e30 by task kswapd0/79

CPU: 0 UID: 0 PID: 79 Comm: kswapd0 Not tainted 6.14.0-rc1-syzkaller-00020-g0de63bb7d919 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0x169/0x550 mm/kasan/report.c:489
kasan_report+0x143/0x180 mm/kasan/report.c:602
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
mapping_unevictable include/linux/pagemap.h:269 [inline]
folio_evictable+0xe3/0x250 mm/internal.h:437
sort_folio mm/vmscan.c:4398 [inline]
scan_folios mm/vmscan.c:4524 [inline]
isolate_folios mm/vmscan.c:4619 [inline]
evict_folios+0x1a99/0x56a0 mm/vmscan.c:4648
try_to_shrink_lruvec+0x713/0x9b0 mm/vmscan.c:4821
shrink_one+0x3b9/0x850 mm/vmscan.c:4866
shrink_many mm/vmscan.c:4929 [inline]
lru_gen_shrink_node mm/vmscan.c:5007 [inline]
shrink_node+0x37c5/0x3e50 mm/vmscan.c:5978
kswapd_shrink_node mm/vmscan.c:6807 [inline]
balance_pgdat mm/vmscan.c:6999 [inline]
kswapd+0x20f3/0x3b10 mm/vmscan.c:7264
kthread+0x7a9/0x920 kernel/kthread.c:464
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>

Allocated by task 5481:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
unpoison_slab_object mm/kasan/common.c:319 [inline]
__kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
kasan_slab_alloc include/linux/kasan.h:250 [inline]
slab_post_alloc_hook mm/slub.c:4115 [inline]
slab_alloc_node mm/slub.c:4164 [inline]
kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4171
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_dir_search+0x229/0x2f0 fs/gfs2/dir.c:1667
gfs2_lookupi+0x461/0x5e0 fs/gfs2/inode.c:340
gfs2_jindex_hold fs/gfs2/ops_fstype.c:587 [inline]
init_journal+0x5fa/0x2410 fs/gfs2/ops_fstype.c:729
init_inodes+0xdc/0x320 fs/gfs2/ops_fstype.c:864
gfs2_fill_super+0x1bd1/0x24d0 fs/gfs2/ops_fstype.c:1249
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3560
do_mount fs/namespace.c:3900 [inline]
__do_sys_mount fs/namespace.c:4111 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4088
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 16:
kasan_save_stack mm/kasan/common.c:47 [inline]
kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:576
poison_slab_object mm/kasan/common.c:247 [inline]
__kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
kasan_slab_free include/linux/kasan.h:233 [inline]
slab_free_hook mm/slub.c:2353 [inline]
slab_free mm/slub.c:4609 [inline]
kmem_cache_free+0x195/0x410 mm/slub.c:4711
rcu_do_batch kernel/rcu/tree.c:2546 [inline]
rcu_core+0xaaa/0x17a0 kernel/rcu/tree.c:2802
handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
run_ksoftirqd+0xca/0x130 kernel/softirq.c:950
smpboot_thread_fn+0x544/0xa30 kernel/smpboot.c:164
kthread+0x7a9/0x920 kernel/kthread.c:464
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
kasan_record_aux_stack+0xaa/0xc0 mm/kasan/generic.c:548
__call_rcu_common kernel/rcu/tree.c:3065 [inline]
call_rcu+0x168/0xac0 kernel/rcu/tree.c:3172
__gfs2_glock_free+0xda0/0xef0 fs/gfs2/glock.c:172
gfs2_glock_free+0x3c/0xb0 fs/gfs2/glock.c:178
process_one_work kernel/workqueue.c:3236 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
worker_thread+0x870/0xd30 kernel/workqueue.c:3398
kthread+0x7a9/0x920 kernel/kthread.c:464
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Second to last potentially related work creation:
kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
kasan_record_aux_stack+0xaa/0xc0 mm/kasan/generic.c:548
insert_work+0x3e/0x330 kernel/workqueue.c:2183
__queue_work+0xc8b/0xf50 kernel/workqueue.c:2339
queue_delayed_work_on+0x1ca/0x390 kernel/workqueue.c:2559
queue_delayed_work include/linux/workqueue.h:677 [inline]
gfs2_glock_queue_work fs/gfs2/glock.c:250 [inline]
do_xmote+0xaf8/0x1250 fs/gfs2/glock.c:832
glock_work_func+0x343/0x5c0 fs/gfs2/glock.c:1090
process_one_work kernel/workqueue.c:3236 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
worker_thread+0x870/0xd30 kernel/workqueue.c:3398
kthread+0x7a9/0x920 kernel/kthread.c:464
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff888054e45a68
which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
freed 1224-byte region [ffff888054e45a68, ffff888054e45f30)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x54e44
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801f75e3c0 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000000f5000000 0000000000000000
head: 04fff00000000040 ffff88801f75e3c0 dead000000000122 0000000000000000
head: 0000000000000000 00000000800c000c 00000000f5000000 0000000000000000
head: 04fff00000000002 ffffea0001539101 ffffffffffffffff 0000000000000000
head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5429, tgid 5429 (syz-executor294), ts 123398815759, free_ts 0
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1551
prep_new_page mm/page_alloc.c:1559 [inline]
get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3477
__alloc_frozen_pages_noprof+0x292/0x710 mm/page_alloc.c:4739
alloc_pages_mpol+0x311/0x660 mm/mempolicy.c:2270
alloc_slab_page mm/slub.c:2423 [inline]
allocate_slab+0x8f/0x3a0 mm/slub.c:2587
new_slab mm/slub.c:2640 [inline]
___slab_alloc+0xc27/0x14a0 mm/slub.c:3826
__slab_alloc+0x58/0xa0 mm/slub.c:3916
__slab_alloc_node mm/slub.c:3991 [inline]
slab_alloc_node mm/slub.c:4152 [inline]
kmem_cache_alloc_noprof+0x268/0x380 mm/slub.c:4171
gfs2_glock_get+0x309/0x1010 fs/gfs2/glock.c:1178
gfs2_inode_lookup+0x2a3/0xc90 fs/gfs2/inode.c:135
gfs2_lookup_root fs/gfs2/ops_fstype.c:440 [inline]
init_sb+0xa2a/0x1270 fs/gfs2/ops_fstype.c:507
gfs2_fill_super+0x19b3/0x24d0 fs/gfs2/ops_fstype.c:1216
get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
vfs_get_tree+0x90/0x2b0 fs/super.c:1814
do_new_mount+0x2be/0xb40 fs/namespace.c:3560
page_owner free stack trace missing

Memory state around the buggy address:
ffff888054e45d00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888054e45d80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888054e45e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff888054e45e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888054e45f00: fb fb fb fb fb fb fc fc fc fc fc fc fc fc fc fc
==================================================================
