kernel panic: corrupted stack end in loop_control_ioctl


syzbot

Aug 1, 2022, 6:30:23 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0966d385830d riscv: Fix auipc+jalr relocation range checks
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git fixes
console output: https://syzkaller.appspot.com/x/log.txt?x=1739097e080000
kernel config: https://syzkaller.appspot.com/x/.config?x=6295d67591064921
dashboard link: https://syzkaller.appspot.com/bug?extid=5297d3bbce7a12f73f7d
compiler: riscv64-linux-gnu-gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: riscv64
CC: [ak...@linux-foundation.org cgr...@vger.kernel.org han...@cmpxchg.org linux-...@vger.kernel.org linu...@kvack.org mho...@kernel.org roman.g...@linux.dev shak...@google.com songm...@bytedance.com]

Unfortunately, I don't have any reproducer for this issue yet.
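
[Editor's note, for orientation only; this is not a reproducer.] The trace below enters the kernel through the loop control device, so the implicated syscall surface can be driven from userspace roughly as follows. The device node and ioctl are standard kernel ABI; the panic itself is a kernel stack overflow on this path, not a fault in the call per se:

/* Sketch: exercises loop_control_ioctl() -> loop_add(), the path in
 * the trace below. Needs CAP_SYS_ADMIN. The index 7 is arbitrary. */
#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	int fd, nr;

	fd = open("/dev/loop-control", O_RDWR);
	if (fd < 0) {
		perror("open /dev/loop-control");
		return 1;
	}
	/* LOOP_CTL_ADD creates /dev/loop7 via loop_add(), which in turn
	 * registers the disk and its blk-mq debugfs entries. */
	nr = ioctl(fd, LOOP_CTL_ADD, 7);
	if (nr < 0)
		perror("ioctl LOOP_CTL_ADD");
	else
		printf("created /dev/loop%d\n", nr);
	close(fd);
	return 0;
}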

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5297d3...@syzkaller.appspotmail.com

Kernel panic - not syncing: corrupted stack end detected inside scheduler
CPU: 0 PID: 4898 Comm: syz-executor.1 Not tainted 5.17.0-rc1-syzkaller-00002-g0966d385830d #0
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff8000a228>] dump_backtrace+0x2e/0x3c arch/riscv/kernel/stacktrace.c:113
[<ffffffff831668cc>] show_stack+0x34/0x40 arch/riscv/kernel/stacktrace.c:119
[<ffffffff831756ba>] __dump_stack lib/dump_stack.c:88 [inline]
[<ffffffff831756ba>] dump_stack_lvl+0xe4/0x150 lib/dump_stack.c:106
[<ffffffff83175742>] dump_stack+0x1c/0x24 lib/dump_stack.c:113
[<ffffffff83166fa8>] panic+0x24a/0x634 kernel/panic.c:233
[<ffffffff831a688a>] schedule_debug kernel/sched/core.c:5541 [inline]
[<ffffffff831a688a>] schedule+0x0/0x14c kernel/sched/core.c:6187
[<ffffffff831a6c62>] preempt_schedule_notrace+0x9c/0x19a kernel/sched/core.c:6541
[<ffffffff8010f014>] rcu_read_unlock_sched_notrace include/linux/rcupdate.h:816 [inline]
[<ffffffff8010f014>] trace_lock_acquire+0xd6/0x1fc include/trace/events/lock.h:13
[<ffffffff801167fe>] lock_acquire+0x28/0x6a kernel/locking/lockdep.c:5610
[<ffffffff80491e90>] rcu_lock_acquire include/linux/rcupdate.h:268 [inline]
[<ffffffff80491e90>] rcu_read_lock include/linux/rcupdate.h:694 [inline]
[<ffffffff80491e90>] percpu_ref_tryget_many include/linux/percpu-refcount.h:241 [inline]
[<ffffffff80491e90>] percpu_ref_tryget include/linux/percpu-refcount.h:266 [inline]
[<ffffffff80491e90>] obj_cgroup_tryget include/linux/memcontrol.h:774 [inline]
[<ffffffff80491e90>] get_obj_cgroup_from_current+0x22c/0x53c mm/memcontrol.c:2930
[<ffffffff80470748>] memcg_slab_pre_alloc_hook mm/slab.h:486 [inline]
[<ffffffff80470748>] slab_pre_alloc_hook mm/slab.h:710 [inline]
[<ffffffff80470748>] slab_alloc_node mm/slub.c:3144 [inline]
[<ffffffff80470748>] slab_alloc mm/slub.c:3238 [inline]
[<ffffffff80470748>] kmem_cache_alloc+0x84/0x3de mm/slub.c:3243
[<ffffffff804fe03c>] __d_alloc+0x3a/0x3e0 fs/dcache.c:1769
[<ffffffff804fe4ec>] d_alloc+0x32/0x102 fs/dcache.c:1848
[<ffffffff80505a82>] d_alloc_parallel+0xe0/0x143c fs/dcache.c:2600
[<ffffffff804e116c>] __lookup_slow+0x14a/0x306 fs/namei.c:1692
[<ffffffff804e51ee>] lookup_one_len+0x158/0x170 fs/namei.c:2736
[<ffffffff80874106>] start_creating.part.0+0x104/0x2b4 fs/debugfs/inode.c:352
[<ffffffff80875664>] start_creating fs/debugfs/inode.c:325 [inline]
[<ffffffff80875664>] __debugfs_create_file+0xae/0x34e fs/debugfs/inode.c:397
[<ffffffff80875954>] debugfs_create_file+0x50/0x64 fs/debugfs/inode.c:459
[<ffffffff80ab7b9e>] debugfs_create_files block/blk-mq-debugfs.c:703 [inline]
[<ffffffff80ab7b9e>] debugfs_create_files block/blk-mq-debugfs.c:694 [inline]
[<ffffffff80ab7b9e>] blk_mq_debugfs_register_ctx block/blk-mq-debugfs.c:754 [inline]
[<ffffffff80ab7b9e>] blk_mq_debugfs_register_hctx+0x24c/0x2ee block/blk-mq-debugfs.c:770
[<ffffffff80ab82c4>] blk_mq_debugfs_register+0x162/0x254 block/blk-mq-debugfs.c:725
[<ffffffff80a218f2>] blk_register_queue+0x166/0x30a block/blk-sysfs.c:868
[<ffffffff80a5573a>] device_add_disk+0x4a4/0x772 block/genhd.c:497
[<ffffffff8143c728>] add_disk include/linux/genhd.h:169 [inline]
[<ffffffff8143c728>] loop_add+0x47a/0x524 drivers/block/loop.c:2047
[<ffffffff8143c9aa>] loop_control_ioctl+0x158/0x440 drivers/block/loop.c:2168
[<ffffffff804f6ff8>] vfs_ioctl fs/ioctl.c:51 [inline]
[<ffffffff804f6ff8>] __do_sys_ioctl fs/ioctl.c:874 [inline]
[<ffffffff804f6ff8>] sys_ioctl+0x75c/0x139e fs/ioctl.c:860
[<ffffffff80005716>] ret_from_syscall+0x0/0x2
SMP: stopping secondary CPUs
Rebooting in 86400 seconds..
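
[Editor's note.] For readers unfamiliar with this panic: the kernel poisons the far end of each task's kernel stack with STACK_END_MAGIC and checks the canary on every trip through the scheduler; the message above means the canary was overwritten, i.e. the kernel stack overflowed. Paraphrased from the kernel sources of this era (include/uapi/linux/magic.h, include/linux/sched/task_stack.h, and schedule_debug() in kernel/sched/core.c):

/* include/uapi/linux/magic.h */
#define STACK_END_MAGIC		0x57AC6E9D

/* include/linux/sched/task_stack.h */
#define task_stack_end_corrupted(task) \
	(*(end_of_stack(task)) != STACK_END_MAGIC)

/* kernel/sched/core.c, schedule_debug() */
if (task_stack_end_corrupted(prev))
	panic("corrupted stack end detected inside scheduler\n");

Here the deep debugfs/dcache/slab allocation chain in the trace, with every frame inflated by KASAN instrumentation, walked off the end of the riscv kernel stack.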


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

Dmitry Vyukov

Aug 2, 2022, 3:09:58 AM
to syzbot, syzkaller-upst...@googlegroups.com
#syz fix: riscv: Increase stack size under KASAN
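
[Editor's note.] The referenced fix doubles the riscv kernel stack when KASAN is enabled, since KASAN's shadow checks and redzones roughly double frame sizes. The change to arch/riscv/include/asm/thread_info.h is along these lines (a sketch from memory; see the upstream commit "riscv: Increase stack size under KASAN" for the exact diff):

/* arch/riscv/include/asm/thread_info.h (sketch) */
#ifdef CONFIG_KASAN
#define KASAN_STACK_ORDER 1
#else
#define KASAN_STACK_ORDER 0
#endif

#define THREAD_SIZE_ORDER (2 + KASAN_STACK_ORDER)
#define THREAD_SIZE	(PAGE_SIZE << THREAD_SIZE_ORDER)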