KASAN: slab-out-of-bounds Read in __access_remote_vm

syzbot

Aug 26, 2022, 6:10:34 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 8755ae45a9e8 Add linux-next specific files for 20220819
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=12da730d080000
kernel config: https://syzkaller.appspot.com/x/.config?x=ead6107a3bbe3c62
dashboard link: https://syzkaller.appspot.com/bug?extid=14c74f84dac76fd6bf3b
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
CC: [dhow...@redhat.com linux-...@redhat.com linux-...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+14c74f...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-out-of-bounds in memcmp+0x16f/0x1c0 lib/string.c:757
Read of size 8 at addr ffff8880803df490 by task syz-executor.1/12871

CPU: 0 PID: 12871 Comm: syz-executor.1 Not tainted 6.0.0-rc1-next-20220819-syzkaller #0
BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:1521
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 12871, name: syz-executor.1
preempt_count: 2, expected: 0
RCU nest depth: 0, expected: 0
no locks held by syz-executor.1/12871.
irq event stamp: 750
hardirqs last enabled at (749): [<ffffffff89835570>] __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
hardirqs last enabled at (749): [<ffffffff89835570>] _raw_spin_unlock_irqrestore+0x50/0x70 kernel/locking/spinlock.c:194
hardirqs last disabled at (750): [<ffffffff8983532e>] __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:108 [inline]
hardirqs last disabled at (750): [<ffffffff8983532e>] _raw_spin_lock_irqsave+0x4e/0x50 kernel/locking/spinlock.c:162
softirqs last enabled at (362): [<ffffffff812d4fcb>] wrpkru arch/x86/include/asm/special_insns.h:103 [inline]
softirqs last enabled at (362): [<ffffffff812d4fcb>] pkru_write_default arch/x86/include/asm/pkru.h:59 [inline]
softirqs last enabled at (362): [<ffffffff812d4fcb>] restore_fpregs_from_init_fpstate arch/x86/kernel/fpu/core.c:674 [inline]
softirqs last enabled at (362): [<ffffffff812d4fcb>] fpu__clear_user_states+0xdb/0x1e0 arch/x86/kernel/fpu/core.c:729
softirqs last disabled at (360): [<ffffffff812d4f14>] fpu__clear_user_states+0x24/0x1e0 arch/x86/kernel/fpu/core.c:711
Preemption disabled at:
[<ffffffff82082f91>] bit_spin_lock include/linux/bit_spinlock.h:25 [inline]
[<ffffffff82082f91>] hlist_bl_lock include/linux/list_bl.h:148 [inline]
[<ffffffff82082f91>] fscache_hash_volume fs/fscache/volume.c:169 [inline]
[<ffffffff82082f91>] __fscache_acquire_volume+0x541/0x1080 fs/fscache/volume.c:328
CPU: 0 PID: 12871 Comm: syz-executor.1 Not tainted 6.0.0-rc1-next-20220819-syzkaller #0
syz-executor.1[12871] cmdline: /root/syz-executor.1 exec
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:122 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:140
__might_resched.cold+0x222/0x26b kernel/sched/core.c:9896
down_read_killable+0x75/0x490 kernel/locking/rwsem.c:1521
mmap_read_lock_killable include/linux/mmap_lock.h:126 [inline]
__access_remote_vm+0xac/0x6f0 mm/memory.c:5461
get_mm_cmdline.part.0+0x217/0x620 fs/proc/base.c:299
get_mm_cmdline fs/proc/base.c:367 [inline]
get_task_cmdline_kernel+0x1d9/0x220 fs/proc/base.c:367
dump_stack_print_cmdline.part.0+0x82/0x150 lib/dump_stack.c:61
dump_stack_print_cmdline lib/dump_stack.c:89 [inline]
dump_stack_print_info+0x185/0x190 lib/dump_stack.c:97
__dump_stack lib/dump_stack.c:121 [inline]
dump_stack_lvl+0xc1/0x134 lib/dump_stack.c:140
print_address_description mm/kasan/report.c:317 [inline]
print_report.cold+0x2ba/0x719 mm/kasan/report.c:433
kasan_report+0xb1/0x1e0 mm/kasan/report.c:495
memcmp+0x16f/0x1c0 lib/string.c:757
memcmp include/linux/fortify-string.h:420 [inline]
fscache_volume_same fs/fscache/volume.c:133 [inline]
fscache_hash_volume fs/fscache/volume.c:171 [inline]
__fscache_acquire_volume+0x76c/0x1080 fs/fscache/volume.c:328
fscache_acquire_volume include/linux/fscache.h:204 [inline]
v9fs_cache_session_get_cookie+0x143/0x240 fs/9p/cache.c:34
v9fs_session_init+0x1166/0x1810 fs/9p/v9fs.c:473
v9fs_mount+0xba/0xc90 fs/9p/vfs_super.c:126
legacy_get_tree+0x105/0x220 fs/fs_context.c:610
vfs_get_tree+0x89/0x2f0 fs/super.c:1530
do_new_mount fs/namespace.c:3040 [inline]
path_mount+0x1326/0x1e20 fs/namespace.c:3370
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount fs/namespace.c:3568 [inline]
__x64_sys_mount+0x27f/0x300 fs/namespace.c:3568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f2a1d689279
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2a1e7ff168 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f2a1d79bf80 RCX: 00007f2a1d689279
RDX: 0000000020000040 RSI: 0000000020000000 RDI: 0000000000000000
RBP: 00007f2a1d6e3189 R08: 0000000020000200 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc391d79ef R14: 00007f2a1e7ff300 R15: 0000000000022000
</TASK>
syz-executor.1[12871] cmdline: /root/syz-executor.1 exec
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:122 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:140
print_address_description mm/kasan/report.c:317 [inline]
print_report.cold+0x2ba/0x719 mm/kasan/report.c:433
kasan_report+0xb1/0x1e0 mm/kasan/report.c:495
memcmp+0x16f/0x1c0 lib/string.c:757
memcmp include/linux/fortify-string.h:420 [inline]
fscache_volume_same fs/fscache/volume.c:133 [inline]
fscache_hash_volume fs/fscache/volume.c:171 [inline]
__fscache_acquire_volume+0x76c/0x1080 fs/fscache/volume.c:328
fscache_acquire_volume include/linux/fscache.h:204 [inline]
v9fs_cache_session_get_cookie+0x143/0x240 fs/9p/cache.c:34
v9fs_session_init+0x1166/0x1810 fs/9p/v9fs.c:473
v9fs_mount+0xba/0xc90 fs/9p/vfs_super.c:126
legacy_get_tree+0x105/0x220 fs/fs_context.c:610
vfs_get_tree+0x89/0x2f0 fs/super.c:1530
do_new_mount fs/namespace.c:3040 [inline]
path_mount+0x1326/0x1e20 fs/namespace.c:3370
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount fs/namespace.c:3568 [inline]
__x64_sys_mount+0x27f/0x300 fs/namespace.c:3568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f2a1d689279
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2a1e7ff168 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f2a1d79bf80 RCX: 00007f2a1d689279
RDX: 0000000020000040 RSI: 0000000020000000 RDI: 0000000000000000
RBP: 00007f2a1d6e3189 R08: 0000000020000200 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc391d79ef R14: 00007f2a1e7ff300 R15: 0000000000022000
</TASK>

Allocated by task 12871:
kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
kasan_set_track mm/kasan/common.c:45 [inline]
set_alloc_info mm/kasan/common.c:437 [inline]
____kasan_kmalloc mm/kasan/common.c:516 [inline]
____kasan_kmalloc mm/kasan/common.c:475 [inline]
__kasan_kmalloc+0xa9/0xd0 mm/kasan/common.c:525
kmalloc include/linux/slab.h:611 [inline]
kzalloc include/linux/slab.h:739 [inline]
fscache_alloc_volume fs/fscache/volume.c:234 [inline]
__fscache_acquire_volume+0x2c2/0x1080 fs/fscache/volume.c:323
fscache_acquire_volume include/linux/fscache.h:204 [inline]
v9fs_cache_session_get_cookie+0x143/0x240 fs/9p/cache.c:34
v9fs_session_init+0x1166/0x1810 fs/9p/v9fs.c:473
v9fs_mount+0xba/0xc90 fs/9p/vfs_super.c:126
legacy_get_tree+0x105/0x220 fs/fs_context.c:610
vfs_get_tree+0x89/0x2f0 fs/super.c:1530
do_new_mount fs/namespace.c:3040 [inline]
path_mount+0x1326/0x1e20 fs/namespace.c:3370
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount fs/namespace.c:3568 [inline]
__x64_sys_mount+0x27f/0x300 fs/namespace.c:3568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Last potentially related work creation:
kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
__kasan_record_aux_stack+0xbe/0xd0 mm/kasan/generic.c:348
kvfree_call_rcu+0x74/0x940 kernel/rcu/tree.c:3322
fib_rules_unregister+0x35f/0x450 net/core/fib_rules.c:207
ip_fib_net_exit+0x212/0x310 net/ipv4/fib_frontend.c:1587
fib_net_exit_batch+0x4f/0xa0 net/ipv4/fib_frontend.c:1634
ops_exit_list+0x125/0x170 net/core/net_namespace.c:168
cleanup_net+0x4ea/0xb00 net/core/net_namespace.c:595
process_one_work+0x991/0x1610 kernel/workqueue.c:2289
worker_thread+0x665/0x1080 kernel/workqueue.c:2436
kthread+0x2e4/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

Second to last potentially related work creation:
kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
__kasan_record_aux_stack+0xbe/0xd0 mm/kasan/generic.c:348
call_rcu+0x99/0x790 kernel/rcu/tree.c:2793
neigh_parms_release net/core/neighbour.c:1739 [inline]
neigh_parms_release+0x205/0x290 net/core/neighbour.c:1730
inetdev_destroy net/ipv4/devinet.c:328 [inline]
inetdev_event+0xd2b/0x1610 net/ipv4/devinet.c:1602
notifier_call_chain+0xb5/0x200 kernel/notifier.c:87
call_netdevice_notifiers_info+0xb5/0x130 net/core/dev.c:1945
call_netdevice_notifiers_extack net/core/dev.c:1983 [inline]
call_netdevice_notifiers net/core/dev.c:1997 [inline]
unregister_netdevice_many+0xa62/0x1980 net/core/dev.c:10862
sit_exit_batch_net+0x530/0x750 net/ipv6/sit.c:1942
ops_exit_list+0x125/0x170 net/core/net_namespace.c:168
cleanup_net+0x4ea/0xb00 net/core/net_namespace.c:595
process_one_work+0x991/0x1610 kernel/workqueue.c:2289
worker_thread+0x665/0x1080 kernel/workqueue.c:2436
kthread+0x2e4/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

The buggy address belongs to the object at ffff8880803df400
which belongs to the cache kmalloc-192 of size 192
The buggy address is located 144 bytes inside of
192-byte region [ffff8880803df400, ffff8880803df4c0)

The buggy address belongs to the physical page:
page:ffffea000200f7c0 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x803df
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 ffffea00009638c0 dead000000000002 ffff888011841a00
raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112c40(GFP_NOFS|__GFP_NOWARN|__GFP_NORETRY|__GFP_HARDWALL), pid 11285, tgid 11282 (syz-executor.3), ts 550038807296, free_ts 550021991810
prep_new_page mm/page_alloc.c:2532 [inline]
get_page_from_freelist+0x109b/0x2ce0 mm/page_alloc.c:4283
__alloc_pages+0x1c7/0x510 mm/page_alloc.c:5507
__alloc_pages_node include/linux/gfp.h:246 [inline]
alloc_slab_page mm/slub.c:1826 [inline]
allocate_slab+0x80/0x3d0 mm/slub.c:1969
new_slab mm/slub.c:2029 [inline]
___slab_alloc+0x7f1/0xe10 mm/slub.c:3031
__slab_alloc.constprop.0+0x4d/0xa0 mm/slub.c:3118
slab_alloc_node mm/slub.c:3209 [inline]
__kmalloc_node+0x2e2/0x380 mm/slub.c:4468
kmalloc_array_node include/linux/slab.h:701 [inline]
kcalloc_node include/linux/slab.h:706 [inline]
memcg_alloc_slab_cgroups+0x8b/0x140 mm/memcontrol.c:2831
memcg_slab_post_alloc_hook+0xaa/0x480 mm/slab.h:523
slab_post_alloc_hook mm/slab.h:734 [inline]
slab_alloc_node mm/slub.c:3243 [inline]
slab_alloc mm/slub.c:3251 [inline]
__kmem_cache_alloc_lru mm/slub.c:3258 [inline]
kmem_cache_alloc+0x164/0x3b0 mm/slub.c:3268
kmem_cache_zalloc include/linux/slab.h:729 [inline]
alloc_buffer_head+0x20/0x140 fs/buffer.c:2974
alloc_page_buffers+0x280/0x790 fs/buffer.c:829
grow_dev_page fs/buffer.c:965 [inline]
grow_buffers fs/buffer.c:1011 [inline]
__getblk_slow+0x4fe/0x1030 fs/buffer.c:1038
__getblk_gfp fs/buffer.c:1333 [inline]
__bread_gfp+0x243/0x390 fs/buffer.c:1378
sb_bread include/linux/buffer_head.h:328 [inline]
fat__get_entry+0x51c/0x920 fs/fat/dir.c:100
fat_get_entry fs/fat/dir.c:128 [inline]
fat_get_short_entry+0x13f/0x2f0 fs/fat/dir.c:873
fat_subdirs+0xa5/0x180 fs/fat/dir.c:939
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1449 [inline]
free_pcp_prepare+0x5e4/0xd20 mm/page_alloc.c:1499
free_unref_page_prepare mm/page_alloc.c:3380 [inline]
free_unref_page+0x19/0x4d0 mm/page_alloc.c:3476
__vunmap+0x85d/0xd30 mm/vmalloc.c:2696
__vfree+0x3c/0xd0 mm/vmalloc.c:2744
vfree+0x5a/0x90 mm/vmalloc.c:2775
__do_replace+0x165/0x950 net/ipv6/netfilter/ip6_tables.c:1117
do_replace net/ipv6/netfilter/ip6_tables.c:1157 [inline]
do_ip6t_set_ctl+0x90d/0xb90 net/ipv6/netfilter/ip6_tables.c:1639
nf_setsockopt+0x83/0xe0 net/netfilter/nf_sockopt.c:101
ipv6_setsockopt+0x122/0x180 net/ipv6/ipv6_sockglue.c:1026
tcp_setsockopt+0x136/0x2520 net/ipv4/tcp.c:3789
__sys_setsockopt+0x2d6/0x690 net/socket.c:2252
__do_sys_setsockopt net/socket.c:2263 [inline]
__se_sys_setsockopt net/socket.c:2260 [inline]
__x64_sys_setsockopt+0xba/0x150 net/socket.c:2260
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Memory state around the buggy address:
ffff8880803df380: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
ffff8880803df400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff8880803df480: 00 00 04 fc fc fc fc fc fc fc fc fc fc fc fc fc
^
ffff8880803df500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8880803df580: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Oct 21, 2022, 5:58:31 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.