[Android 5.4] KASAN: use-after-free Read in f2fs_remove_dirty_inode

syzbot

Apr 8, 2023, 8:35:36 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 21086923c1e6 UPSTREAM: ext4: fix kernel BUG in 'ext4_write..
git tree: android12-5.4
console output: https://syzkaller.appspot.com/x/log.txt?x=17de3755c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=89d7298c04c40d1f
dashboard link: https://syzkaller.appspot.com/bug?extid=6f60a4623562d66b5867
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/851416b90a9f/disk-21086923.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/36a4c1d487fd/vmlinux-21086923.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ed22c36886a8/bzImage-21086923.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6f60a4...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: use-after-free in __list_del_entry_valid+0x80/0x120 lib/list_debug.c:59
Read of size 8 at addr ffff8881b451a398 by task kworker/u4:1/92

CPU: 0 PID: 92 Comm: kworker/u4:1 Not tainted 5.4.233-syzkaller-00032-g21086923c1e6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Workqueue: writeback wb_workfn (flush-7:2)
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1d8/0x241 lib/dump_stack.c:118
print_address_description+0x8c/0x600 mm/kasan/report.c:384
__kasan_report+0xf3/0x120 mm/kasan/report.c:516
kasan_report+0x30/0x60 mm/kasan/common.c:653
__list_del_entry_valid+0x80/0x120 lib/list_debug.c:59
__list_del_entry include/linux/list.h:131 [inline]
list_del_init include/linux/list.h:190 [inline]
__remove_dirty_inode fs/f2fs/checkpoint.c:1015 [inline]
f2fs_remove_dirty_inode+0x214/0x3e0 fs/f2fs/checkpoint.c:1051
__f2fs_write_data_pages fs/f2fs/data.c:3247 [inline]
f2fs_write_data_pages+0x24b5/0x2c20 fs/f2fs/data.c:3261
do_writepages+0x12b/0x270 mm/page-writeback.c:2344
__writeback_single_inode+0xd9/0xcc0 fs/fs-writeback.c:1467
writeback_sb_inodes+0xa2c/0x1990 fs/fs-writeback.c:1730
wb_writeback+0x403/0xd70 fs/fs-writeback.c:1905
wb_do_writeback fs/fs-writeback.c:2050 [inline]
wb_workfn+0x3a9/0x10c0 fs/fs-writeback.c:2091
process_one_work+0x765/0xd20 kernel/workqueue.c:2287
worker_thread+0xaef/0x1470 kernel/workqueue.c:2433
kthread+0x2da/0x360 kernel/kthread.c:288
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:354

Allocated by task 20064:
save_stack mm/kasan/common.c:70 [inline]
set_track mm/kasan/common.c:78 [inline]
__kasan_kmalloc+0x130/0x1d0 mm/kasan/common.c:529
kmalloc include/linux/slab.h:556 [inline]
kzalloc include/linux/slab.h:690 [inline]
f2fs_fill_super+0xc8/0x8310 fs/f2fs/super.c:3818
mount_bdev+0x22e/0x340 fs/super.c:1417
legacy_get_tree+0xdf/0x170 fs/fs_context.c:647
vfs_get_tree+0x85/0x260 fs/super.c:1547
do_new_mount+0x292/0x570 fs/namespace.c:2843
do_mount+0x688/0xdd0 fs/namespace.c:3163
ksys_mount+0xc2/0xf0 fs/namespace.c:3372
__do_sys_mount fs/namespace.c:3386 [inline]
__se_sys_mount fs/namespace.c:3383 [inline]
__x64_sys_mount+0xb1/0xc0 fs/namespace.c:3383
do_syscall_64+0xca/0x1c0 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x5c/0xc1

Freed by task 320:
save_stack mm/kasan/common.c:70 [inline]
set_track mm/kasan/common.c:78 [inline]
kasan_set_free_info mm/kasan/common.c:345 [inline]
__kasan_slab_free+0x178/0x230 mm/kasan/common.c:487
slab_free_hook mm/slub.c:1455 [inline]
slab_free_freelist_hook mm/slub.c:1494 [inline]
slab_free mm/slub.c:3080 [inline]
kfree+0xeb/0x320 mm/slub.c:4071
f2fs_put_super+0xb3b/0xcd0 fs/f2fs/super.c:1522
generic_shutdown_super+0x121/0x2a0 fs/super.c:464
kill_block_super+0x7a/0xe0 fs/super.c:1444
kill_f2fs_super+0x2f9/0x3c0 fs/f2fs/super.c:4355
deactivate_locked_super+0xa8/0x110 fs/super.c:335
deactivate_super+0x1e2/0x2a0 fs/super.c:366
cleanup_mnt+0x419/0x4d0 fs/namespace.c:1102
task_work_run+0x140/0x170 kernel/task_work.c:113
tracehook_notify_resume include/linux/tracehook.h:188 [inline]
exit_to_usermode_loop+0x18b/0x1a0 arch/x86/entry/common.c:163
prepare_exit_to_usermode+0x199/0x200 arch/x86/entry/common.c:194
entry_SYSCALL_64_after_hwframe+0x5c/0xc1

The buggy address belongs to the object at ffff8881b451a000
which belongs to the cache kmalloc-4k of size 4096
The buggy address is located 920 bytes inside of
4096-byte region [ffff8881b451a000, ffff8881b451b000)
The buggy address belongs to the page:
page:ffffea0006d14600 refcount:1 mapcount:0 mapping:ffff8881f5c0c280 index:0xffff8881b451a000 compound_mapcount: 0
flags: 0x8000000000010200(slab|head)
raw: 8000000000010200 ffffea0007b73208 ffffea00076e1408 ffff8881f5c0c280
raw: ffff8881b451a000 0000000000040001 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL)
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook mm/page_alloc.c:2165 [inline]
prep_new_page+0x18f/0x370 mm/page_alloc.c:2171
get_page_from_freelist+0x2ce8/0x2d70 mm/page_alloc.c:3794
__alloc_pages_nodemask+0x393/0x840 mm/page_alloc.c:4891
alloc_slab_page+0x39/0x3c0 mm/slub.c:343
allocate_slab mm/slub.c:1683 [inline]
new_slab+0x97/0x440 mm/slub.c:1749
new_slab_objects mm/slub.c:2505 [inline]
___slab_alloc+0x2fe/0x490 mm/slub.c:2667
__slab_alloc+0x5a/0x90 mm/slub.c:2707
slab_alloc_node mm/slub.c:2792 [inline]
slab_alloc mm/slub.c:2837 [inline]
kmem_cache_alloc_trace+0x128/0x240 mm/slub.c:2854
kmalloc include/linux/slab.h:556 [inline]
kzalloc include/linux/slab.h:690 [inline]
uevent_show+0x158/0x2e0 drivers/base/core.c:1938
dev_attr_show+0x50/0xb0 drivers/base/core.c:1647
sysfs_kf_seq_show+0x265/0x3e0 fs/sysfs/file.c:61
seq_read+0x4df/0xe60 fs/seq_file.c:232
__vfs_read+0x103/0x730 fs/read_write.c:425
vfs_read+0x148/0x360 fs/read_write.c:461
ksys_read+0x199/0x2c0 fs/read_write.c:587
do_syscall_64+0xca/0x1c0 arch/x86/entry/common.c:290
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1176 [inline]
free_pcp_prepare mm/page_alloc.c:1233 [inline]
free_unref_page_prepare+0x297/0x380 mm/page_alloc.c:3085
free_unref_page_list+0x10a/0x590 mm/page_alloc.c:3154
release_pages+0xa06/0xa40 mm/swap.c:842
__pagevec_release+0xc3/0x150 mm/swap.c:862
pagevec_release include/linux/pagevec.h:88 [inline]
truncate_inode_pages_range+0x7b7/0x1620 mm/truncate.c:367
evict+0x2af/0x6a0 fs/inode.c:577
__dentry_kill+0x429/0x630 fs/dcache.c:579
shrink_dentry_list+0x34a/0x490 fs/dcache.c:1122
shrink_dcache_parent+0xc9/0x330 fs/dcache.c:1548
do_one_tree+0x23/0xe0 fs/dcache.c:1603
shrink_dcache_for_umount+0x79/0x130 fs/dcache.c:1620
generic_shutdown_super+0x66/0x2a0 fs/super.c:447
kill_anon_super fs/super.c:1108 [inline]
kill_litter_super+0x72/0xa0 fs/super.c:1117
deactivate_locked_super+0xa8/0x110 fs/super.c:335
deactivate_super+0x1e2/0x2a0 fs/super.c:366
cleanup_mnt+0x419/0x4d0 fs/namespace.c:1102

Memory state around the buggy address:
ffff8881b451a280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8881b451a300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8881b451a380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff8881b451a400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8881b451a480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Aug 19, 2023, 10:54:46 PM
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, and there is no reproducer and no activity.