[v6.1] KASAN: use-after-free Read in f2fs_release_folio


syzbot

Oct 16, 2023, 2:16:51 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: adc4d740ad9e Linux 6.1.58
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14cddf29680000
kernel config: https://syzkaller.appspot.com/x/.config?x=51bbb1424030ff42
dashboard link: https://syzkaller.appspot.com/bug?extid=1b91d7ec836b1073ec32
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/246710b6d0f3/disk-adc4d740.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/a30ab7d80ee1/vmlinux-adc4d740.xz
kernel image: https://storage.googleapis.com/syzbot-assets/af48ffe0cff6/bzImage-adc4d740.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+1b91d7...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: use-after-free in F2FS_SB fs/f2fs/f2fs.h:1969 [inline]
BUG: KASAN: use-after-free in F2FS_I_SB fs/f2fs/f2fs.h:1974 [inline]
BUG: KASAN: use-after-free in F2FS_M_SB fs/f2fs/f2fs.h:1979 [inline]
BUG: KASAN: use-after-free in f2fs_release_folio+0x10e/0x450 fs/f2fs/data.c:3709
Read of size 8 at addr ffff88801ba92678 by task kswapd0/110

CPU: 1 PID: 110 Comm: kswapd0 Not tainted 6.1.58-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/06/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_address_description mm/kasan/report.c:284 [inline]
print_report+0x15f/0x4f0 mm/kasan/report.c:395
kasan_report+0x136/0x160 mm/kasan/report.c:495
F2FS_SB fs/f2fs/f2fs.h:1969 [inline]
F2FS_I_SB fs/f2fs/f2fs.h:1974 [inline]
F2FS_M_SB fs/f2fs/f2fs.h:1979 [inline]
f2fs_release_folio+0x10e/0x450 fs/f2fs/data.c:3709
shrink_folio_list+0x2872/0x8ee0 mm/vmscan.c:1996
evict_folios+0xb42/0x2810 mm/vmscan.c:5039
lru_gen_shrink_lruvec mm/vmscan.c:5223 [inline]
shrink_lruvec+0xdbf/0x4650 mm/vmscan.c:5918
</TASK>

Allocated by task 4576:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4b/0x70 mm/kasan/common.c:52
____kasan_kmalloc mm/kasan/common.c:374 [inline]
__kasan_kmalloc+0x97/0xb0 mm/kasan/common.c:383
kasan_kmalloc include/linux/kasan.h:211 [inline]
__do_kmalloc_node mm/slab_common.c:955 [inline]
__kmalloc_node_track_caller+0xb1/0x220 mm/slab_common.c:975
kmalloc_reserve net/core/skbuff.c:446 [inline]
__alloc_skb+0x135/0x670 net/core/skbuff.c:515
alloc_skb include/linux/skbuff.h:1276 [inline]
nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
nsim_dev_trap_report_work+0x24c/0xa90 drivers/net/netdevsim/dev.c:850
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

Freed by task 4576:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4b/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x27/0x40 mm/kasan/generic.c:516
____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:177 [inline]
slab_free_hook mm/slub.c:1724 [inline]
slab_free_freelist_hook mm/slub.c:1750 [inline]
slab_free mm/slub.c:3661 [inline]
__kmem_cache_free+0x25c/0x3c0 mm/slub.c:3674
skb_free_head net/core/skbuff.c:762 [inline]
skb_release_data+0x5de/0x7a0 net/core/skbuff.c:791
skb_release_all net/core/skbuff.c:856 [inline]
__kfree_skb net/core/skbuff.c:870 [inline]
consume_skb+0xa3/0x140 net/core/skbuff.c:1035
nsim_dev_trap_report drivers/net/netdevsim/dev.c:821 [inline]
nsim_dev_trap_report_work+0x75d/0xa90 drivers/net/netdevsim/dev.c:850
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

Last potentially related work creation:
kasan_save_stack+0x3b/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xb0/0xc0 mm/kasan/generic.c:486
insert_work+0x54/0x3d0 kernel/workqueue.c:1361
__queue_work+0xb4b/0xf90 kernel/workqueue.c:1520
queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
rcu_do_batch kernel/rcu/tree.c:2251 [inline]
rcu_core+0xad4/0x17e0 kernel/rcu/tree.c:2511
__do_softirq+0x2e9/0xa4c kernel/softirq.c:571

Second to last potentially related work creation:
kasan_save_stack+0x3b/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xb0/0xc0 mm/kasan/generic.c:486
call_rcu+0x163/0xa10 kernel/rcu/tree.c:2799
put_super fs/super.c:311 [inline]
deactivate_locked_super+0xd4/0x110 fs/super.c:343
cleanup_mnt+0x490/0x520 fs/namespace.c:1186
task_work_run+0x246/0x300 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xde/0x100 kernel/entry/common.c:171
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
__syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
syscall_exit_to_user_mode+0x60/0x270 kernel/entry/common.c:297
do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x63/0xcd

The buggy address belongs to the object at ffff88801ba92000
which belongs to the cache kmalloc-4k of size 4096
The buggy address is located 1656 bytes inside of
4096-byte region [ffff88801ba92000, ffff88801ba93000)

The buggy address belongs to the physical page:
page:ffffea00006ea400 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1ba90
head:ffffea00006ea400 order:3 compound_mapcount:0 compound_pincount:0
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 ffffea0002412c00 dead000000000002 ffff888012442140
raw: 0000000000000000 0000000000040004 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd2a20(GFP_ATOMIC|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 3653, tgid 3653 (kworker/0:6), ts 234313729822, free_ts 234310384865
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x18d/0x1b0 mm/page_alloc.c:2513
prep_new_page mm/page_alloc.c:2520 [inline]
get_page_from_freelist+0x31a1/0x3320 mm/page_alloc.c:4279
__alloc_pages+0x28d/0x770 mm/page_alloc.c:5545
alloc_slab_page+0x6a/0x150 mm/slub.c:1794
allocate_slab mm/slub.c:1939 [inline]
new_slab+0x84/0x2d0 mm/slub.c:1992
___slab_alloc+0xc20/0x1270 mm/slub.c:3180
__slab_alloc mm/slub.c:3279 [inline]
slab_alloc_node mm/slub.c:3364 [inline]
__kmem_cache_alloc_node+0x19f/0x260 mm/slub.c:3437
__do_kmalloc_node mm/slab_common.c:954 [inline]
__kmalloc_node_track_caller+0xa0/0x220 mm/slab_common.c:975
kmalloc_reserve net/core/skbuff.c:446 [inline]
__alloc_skb+0x135/0x670 net/core/skbuff.c:515
alloc_skb include/linux/skbuff.h:1276 [inline]
nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
nsim_dev_trap_report_work+0x24c/0xa90 drivers/net/netdevsim/dev.c:850
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1440 [inline]
free_pcp_prepare mm/page_alloc.c:1490 [inline]
free_unref_page_prepare+0xf63/0x1120 mm/page_alloc.c:3358
free_unref_page+0x33/0x3e0 mm/page_alloc.c:3453
free_slab mm/slub.c:2031 [inline]
discard_slab mm/slub.c:2037 [inline]
__unfreeze_partials+0x1b7/0x210 mm/slub.c:2586
put_cpu_partial+0x17b/0x250 mm/slub.c:2662
qlink_free mm/kasan/quarantine.c:168 [inline]
qlist_free_all+0x76/0xe0 mm/kasan/quarantine.c:187
kasan_quarantine_reduce+0x156/0x170 mm/kasan/quarantine.c:294
__kasan_slab_alloc+0x1f/0x70 mm/kasan/common.c:305
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook+0x52/0x3a0 mm/slab.h:737
slab_alloc_node mm/slub.c:3398 [inline]
kmem_cache_alloc_node+0x136/0x310 mm/slub.c:3443
__alloc_skb+0xde/0x670 net/core/skbuff.c:505
alloc_skb include/linux/skbuff.h:1276 [inline]
nlmsg_new include/net/netlink.h:991 [inline]
netlink_ack+0x392/0x1290 net/netlink/af_netlink.c:2445
netlink_rcv_skb+0x24a/0x410 net/netlink/af_netlink.c:2514
netlink_unicast_kernel net/netlink/af_netlink.c:1326 [inline]
netlink_unicast+0x7d8/0x970 net/netlink/af_netlink.c:1352
netlink_sendmsg+0xa26/0xd60 net/netlink/af_netlink.c:1874
sock_sendmsg_nosec net/socket.c:716 [inline]
__sock_sendmsg net/socket.c:728 [inline]
__sys_sendto+0x471/0x5f0 net/socket.c:2134
__do_sys_sendto net/socket.c:2146 [inline]
__se_sys_sendto net/socket.c:2142 [inline]
__x64_sys_sendto+0xda/0xf0 net/socket.c:2142

Memory state around the buggy address:
ffff88801ba92500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88801ba92580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88801ba92600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88801ba92680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88801ba92700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Mar 13, 2024, 4:28:11 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.