[v6.1] BUG: using __this_cpu_add() in preemptible code in validate_chain

syzbot

Apr 7, 2024, 4:00:34 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 347385861c50 Linux 6.1.84
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10f6f8bd180000
kernel config: https://syzkaller.appspot.com/x/.config?x=40dfd13b04bfc094
dashboard link: https://syzkaller.appspot.com/bug?extid=0a9a5f829da991395ca5
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/73d2a8622b6e/disk-34738586.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e7bc2e0101a7/vmlinux-34738586.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7b96d1168608/bzImage-34738586.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0a9a5f...@syzkaller.appspotmail.com

BUG: using __this_cpu_add() in preemptible [00000000] code: syz-executor.2/4224
caller is __pv_queued_spin_lock_slowpath+0x941/0xc50 kernel/locking/qspinlock.c:565
CPU: 1 PID: 4224 Comm: syz-executor.2 Not tainted 6.1.84-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_preemption_disabled+0x107/0x110 lib/smp_processor_id.c:49
__pv_queued_spin_lock_slowpath+0x941/0xc50 kernel/locking/qspinlock.c:565
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
lockdep_lock+0x1a7/0x2a0 kernel/locking/lockdep.c:144
graph_lock kernel/locking/lockdep.c:170 [inline]
lookup_chain_cache_add kernel/locking/lockdep.c:3760 [inline]
validate_chain+0x1d0/0x5950 kernel/locking/lockdep.c:3793
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__might_fault+0xbd/0x110 mm/memory.c:5831
_copy_from_iter+0xfa/0xff0 lib/iov_iter.c:630
copy_from_iter include/linux/uio.h:187 [inline]
kernfs_fop_write_iter+0x1a6/0x4f0 fs/kernfs/file.c:311
call_write_iter include/linux/fs.h:2265 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x7ae/0xba0 fs/read_write.c:584
ksys_write+0x19c/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f21e267de69
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f21e335e0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f21e27abf80 RCX: 00007f21e267de69
RDX: 0000000000000012 RSI: 0000000020000380 RDI: 0000000000000007
RBP: 00007f21e26ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f21e27abf80 R15: 00007ffe9c965678
</TASK>
------------[ cut here ]------------
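
For context, the splat above comes from the CONFIG_DEBUG_PREEMPT checker in lib/smp_processor_id.c: the __this_cpu_*() per-cpu ops assume the caller already prevents migration (preemption or interrupts disabled), while the this_cpu_*() variants are safe from preemptible context. The fragment below is only a minimal, hypothetical illustration of that rule, not the qspinlock/lockdep code in the trace; the module and per-cpu variable names (percpu_demo, demo_counter) are made up for the example.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, demo_counter);

static int __init percpu_demo_init(void)
{
	/*
	 * Bad in preemptible context: with CONFIG_DEBUG_PREEMPT this is
	 * what trips check_preemption_disabled() and prints a report
	 * like the one above.
	 */
	__this_cpu_add(demo_counter, 1);

	/* OK: this_cpu_add() is preemption-safe. */
	this_cpu_add(demo_counter, 1);

	/* Also OK: make the section explicitly non-preemptible. */
	preempt_disable();
	__this_cpu_add(demo_counter, 1);
	preempt_enable();

	return 0;
}

static void __exit percpu_demo_exit(void)
{
}

module_init(percpu_demo_init);
module_exit(percpu_demo_exit);
MODULE_LICENSE("GPL");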


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup