kernel panic: corrupted stack end in vfs_fallocate

syzbot

Oct 25, 2022, 3:06:42 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0966d385830d riscv: Fix auipc+jalr relocation range checks
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git fixes
console output: https://syzkaller.appspot.com/x/log.txt?x=16cf07a4880000
kernel config: https://syzkaller.appspot.com/x/.config?x=6295d67591064921
dashboard link: https://syzkaller.appspot.com/bug?extid=af4a753312709c45911b
compiler: riscv64-linux-gnu-gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: riscv64
CC: [adilger...@dilger.ca linux...@vger.kernel.org linux-...@vger.kernel.org ty...@mit.edu]

Unfortunately, I don't have any reproducer for this issue yet.
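
For context only (this is not the reproducer, since none exists): the trace below enters the kernel through the ioctl preallocation path (ioctl_preallocate in fs/ioctl.c), which userspace reaches with the FS_IOC_RESVSP/FS_IOC_RESVSP64 ioctls on a regular file. A minimal, hypothetical sketch of that call sequence on an ext4-backed file could look like the following; the file name and sizes are placeholders, and struct space_resv / FS_IOC_RESVSP64 are assumed to come from <linux/fs.h> as on recent kernel headers.

/* Hypothetical sketch of the syscall path named in the trace below
 * (sys_ioctl -> ioctl_preallocate -> vfs_fallocate -> ext4_fallocate).
 * This is NOT the syzbot reproducer (none exists); the file name and
 * sizes are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* struct space_resv, FS_IOC_RESVSP64 on recent headers */

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0600);  /* placeholder name */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Ask the kernel to preallocate 1 MiB at offset 0; this request is
	 * serviced by vfs_fallocate(), as in the call trace. */
	struct space_resv resv = {
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 1 << 20,
	};
	if (ioctl(fd, FS_IOC_RESVSP64, &resv))
		perror("ioctl(FS_IOC_RESVSP64)");

	close(fd);
	return 0;
}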

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+af4a75...@syzkaller.appspotmail.com

Kernel panic - not syncing: corrupted stack end detected inside scheduler
CPU: 0 PID: 3829 Comm: syz-executor.1 Not tainted 5.17.0-rc1-syzkaller-00002-g0966d385830d #0
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff8000a228>] dump_backtrace+0x2e/0x3c arch/riscv/kernel/stacktrace.c:113
[<ffffffff831668cc>] show_stack+0x34/0x40 arch/riscv/kernel/stacktrace.c:119
[<ffffffff831756ba>] __dump_stack lib/dump_stack.c:88 [inline]
[<ffffffff831756ba>] dump_stack_lvl+0xe4/0x150 lib/dump_stack.c:106
[<ffffffff83175742>] dump_stack+0x1c/0x24 lib/dump_stack.c:113
[<ffffffff83166fa8>] panic+0x24a/0x634 kernel/panic.c:233
[<ffffffff831a688a>] schedule_debug kernel/sched/core.c:5541 [inline]
[<ffffffff831a688a>] schedule+0x0/0x14c kernel/sched/core.c:6187
[<ffffffff831a6b00>] preempt_schedule_common+0x4e/0xde kernel/sched/core.c:6462
[<ffffffff831a6bc4>] preempt_schedule+0x34/0x36 kernel/sched/core.c:6487
[<ffffffff831afc2c>] __raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
[<ffffffff831afc2c>] _raw_spin_unlock+0x60/0x6a kernel/locking/spinlock.c:186
[<ffffffff807272de>] spin_unlock include/linux/spinlock.h:389 [inline]
[<ffffffff807272de>] ext4_do_update_inode fs/ext4/inode.c:5084 [inline]
[<ffffffff807272de>] ext4_mark_iloc_dirty+0x2da/0x1144 fs/ext4/inode.c:5677
[<ffffffff8072882c>] __ext4_mark_inode_dirty+0x1d8/0x6bc fs/ext4/inode.c:5873
[<ffffffff806de6e4>] __ext4_ext_dirty+0x134/0x156 fs/ext4/extents.c:183
[<ffffffff806e3696>] ext4_ext_insert_extent+0xf44/0x27b2 fs/ext4/extents.c:2170
[<ffffffff806ea018>] ext4_ext_map_blocks+0x1004/0x3e86 fs/ext4/extents.c:4303
[<ffffffff8071fc44>] ext4_map_blocks+0x4fe/0xe64 fs/ext4/inode.c:638
[<ffffffff806de9d2>] ext4_alloc_file_blocks+0x2cc/0x756 fs/ext4/extents.c:4467
[<ffffffff806ed3f2>] ext4_fallocate+0x356/0x2cf4 fs/ext4/extents.c:4744
[<ffffffff804bfc8c>] vfs_fallocate+0x344/0x7a4 fs/open.c:308
[<ffffffff804f6114>] ioctl_preallocate+0x16a/0x1a8 fs/ioctl.c:294
[<ffffffff804f7662>] file_ioctl fs/ioctl.c:334 [inline]
[<ffffffff804f7662>] do_vfs_ioctl fs/ioctl.c:853 [inline]
[<ffffffff804f7662>] __do_sys_ioctl fs/ioctl.c:872 [inline]
[<ffffffff804f7662>] sys_ioctl+0xdc6/0x139e fs/ioctl.c:860
[<ffffffff80005716>] ret_from_syscall+0x0/0x2
SMP: stopping secondary CPUs
Rebooting in 86400 seconds..
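
The panic string comes from the scheduler's stack-overflow sanity check: with CONFIG_SCHED_STACK_END_CHECK enabled, schedule_debug() (the kernel/sched/core.c frame above) verifies that the magic canary written at the far end of the task's kernel stack is still intact, and panics if it has been overwritten. A paraphrased sketch of that check (illustrative, not a verbatim copy of the kernel source):

/* Paraphrased from schedule_debug() in kernel/sched/core.c and
 * task_stack_end_corrupted() in include/linux/sched/task_stack.h.
 */
#define STACK_END_MAGIC	0x57AC6E9D	/* canary placed at the stack's far end */

static inline bool task_stack_end_corrupted(struct task_struct *tsk)
{
	/* end_of_stack() points at the last usable word of the task stack */
	return *end_of_stack(tsk) != STACK_END_MAGIC;
}

static void schedule_debug(struct task_struct *prev, bool preempt)
{
	if (task_stack_end_corrupted(prev))
		panic("corrupted stack end detected inside scheduler\n");
	/* ... */
}

In other words, the deep ext4 fallocate call chain overflowed the task's kernel stack, and the corruption was detected the next time the task was preempted.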


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

Dmitry Vyukov

Nov 10, 2022, 4:04:51 PM
to syzbot, syzkaller-upst...@googlegroups.com
#syz fix: riscv: Increase stack size under KASAN
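
For reference, the referenced fix increases the RISC-V kernel stack size when KASAN is enabled, since KASAN inflates stack frames enough for deep call chains like the ext4 fallocate path above to run off the end of the default stack. A rough sketch of the shape of that change in arch/riscv/include/asm/thread_info.h follows; the constants are illustrative, and the upstream commit should be consulted for the exact values.

/* Illustrative sketch of the fix's approach: allocate one extra page
 * order (i.e. double the kernel stack) when KASAN is enabled.
 * Constants are placeholders; see the upstream commit for the real ones.
 */
#ifdef CONFIG_KASAN
#define KASAN_STACK_ORDER	1	/* one extra order: twice the stack */
#else
#define KASAN_STACK_ORDER	0
#endif

#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)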