WARNING in rcu_check_gp_start_stall


syzbot

Feb 22, 2019, 12:10:05 PM
to b...@alien8.de, douly...@cn.fujitsu.com, h...@zytor.com, konra...@oracle.com, len....@intel.com, linux-...@vger.kernel.org, mi...@redhat.com, pu...@hygon.cn, syzkall...@googlegroups.com, tg...@linutronix.de, wang...@zte.com.cn, x...@kernel.org
Hello,

syzbot found the following crash on:

HEAD commit: 8a61716ff2ab Merge tag 'ceph-for-5.0-rc8' of git://github...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1531dd3f400000
kernel config: https://syzkaller.appspot.com/x/.config?x=7132344728e7ec3f
dashboard link: https://syzkaller.appspot.com/bug?extid=111bc509cd9740d7e4aa
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16d4966cc00000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10c492d0c00000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+111bc5...@syzkaller.appspotmail.com

hrtimer: interrupt took 25959 ns
rcu: rcu_check_gp_start_stall: g4348->4352 gar:15680 ga:15694 f0x1 gs:1
rcu_preempt->state:0x0
WARNING: CPU: 0 PID: 7398 at kernel/rcu/tree.c:2666
rcu_check_gp_start_stall kernel/rcu/tree.c:2660 [inline]
WARNING: CPU: 0 PID: 7398 at kernel/rcu/tree.c:2666
rcu_check_gp_start_stall.cold+0x7f/0xb1 kernel/rcu/tree.c:2619
Kernel panic - not syncing: panic_on_warn set ...
CPU: 0 PID: 7398 Comm: syz-executor615 Not tainted 5.0.0-rc7+ #83
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x172/0x1f0 lib/dump_stack.c:113
panic+0x2cb/0x65c kernel/panic.c:214
__warn.cold+0x20/0x45 kernel/panic.c:571
report_bug+0x263/0x2b0 lib/bug.c:186
fixup_bug arch/x86/kernel/traps.c:178 [inline]
fixup_bug arch/x86/kernel/traps.c:173 [inline]
do_error_trap+0x11b/0x200 arch/x86/kernel/traps.c:271
do_invalid_op+0x37/0x50 arch/x86/kernel/traps.c:290
invalid_op+0x14/0x20 arch/x86/entry/entry_64.S:973
RIP: 0010:rcu_check_gp_start_stall kernel/rcu/tree.c:2666 [inline]
RIP: 0010:rcu_check_gp_start_stall.cold+0x7f/0xb1 kernel/rcu/tree.c:2619
Code: 48 8b 0d 93 ae 3b 07 4c 2b 0d 1c c4 3b 07 50 0f bf 05 a4 c1 3b 07 48
8b 15 45 c1 3b 07 4c 2b 05 0e c4 3b 07 50 e8 a4 c5 fb ff <0f> 0b 48 83 c4
20 49 81 fc 00 69 9a 88 74 0c 48 c7 c7 00 69 9a 88
RSP: 0018:ffff8880ae807dc0 EFLAGS: 00010086
RAX: 000000000000005e RBX: ffff8880aa25e280 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff815a92c6 RDI: ffffed1015d00faa
RBP: ffff8880ae807e00 R08: 000000000000005e R09: ffffed1015d05021
R10: ffffed1015d05020 R11: ffff8880ae828107 R12: ffffffff889a6900
R13: 0000000100014001 R14: 0000000000000286 R15: dffffc0000000000
rcu_process_callbacks+0x3ba/0x1390 kernel/rcu/tree.c:2750
__do_softirq+0x266/0x95a kernel/softirq.c:292
invoke_softirq kernel/softirq.c:373 [inline]
irq_exit+0x180/0x1d0 kernel/softirq.c:413
exiting_irq arch/x86/include/asm/apic.h:536 [inline]
smp_apic_timer_interrupt+0x14a/0x570 arch/x86/kernel/apic/apic.c:1062
apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:807
</IRQ>
RIP: 0010:__sanitizer_cov_trace_pc+0x26/0x50 kernel/kcov.c:101
Code: 90 90 90 90 55 48 89 e5 48 8b 75 08 65 48 8b 04 25 40 ee 01 00 65 8b
15 38 0c 92 7e 81 e2 00 01 1f 00 75 2b 8b 90 d8 12 00 00 <83> fa 02 75 20
48 8b 88 e0 12 00 00 8b 80 dc 12 00 00 48 8b 11 48
RSP: 0018:ffff8880a9bcf590 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
RAX: ffff888086a48500 RBX: ffff8880a9bcf688 RCX: ffffffff8700203f
RDX: 0000000000000000 RSI: ffffffff87002051 RDI: 0000000000000001
RBP: ffff8880a9bcf590 R08: ffff888086a48500 R09: ffffed1015d05bd0
R10: ffffed1015d05bcf R11: ffff8880ae82de7b R12: ffff88809e40d742
R13: ffff8880a9bcf6a0 R14: ffff88808474e778 R15: ffff88809e40d742
xa_head include/linux/xarray.h:988 [inline]
xas_start+0x1a1/0x560 lib/xarray.c:182
xas_load+0x21/0x150 lib/xarray.c:227
find_get_entry+0x13d/0x8d0 mm/filemap.c:1476
pagecache_get_page+0x4a/0x740 mm/filemap.c:1579
find_get_page include/linux/pagemap.h:272 [inline]
generic_file_buffered_read mm/filemap.c:2076 [inline]
generic_file_read_iter+0x716/0x2870 mm/filemap.c:2350
ext4_file_read_iter+0x180/0x3c0 fs/ext4/file.c:77
call_read_iter include/linux/fs.h:1857 [inline]
generic_file_splice_read+0x4b2/0x800 fs/splice.c:308
do_splice_to+0x12a/0x190 fs/splice.c:880
splice_direct_to_actor+0x2d2/0x970 fs/splice.c:957
do_splice_direct+0x1da/0x2a0 fs/splice.c:1066
do_sendfile+0x597/0xd00 fs/read_write.c:1436
__do_sys_sendfile64 fs/read_write.c:1491 [inline]
__se_sys_sendfile64 fs/read_write.c:1483 [inline]
__x64_sys_sendfile64+0x15a/0x220 fs/read_write.c:1483
do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x446a59
Code: e8 dc e6 ff ff 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
ff 0f 83 4b 07 fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f032c353db8 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00000000006dcc28 RCX: 0000000000446a59
RDX: 0000000020000000 RSI: 0000000000000003 RDI: 0000000000000003
RBP: 00000000006dcc20 R08: 0000000000000000 R09: 0000000000000000
R10: 00008080fffffffe R11: 0000000000000246 R12: 00000000006dcc2c
R13: 00007fff7d4161cf R14: 00007f032c3549c0 R15: 20c49ba5e353f7cf
Kernel Offset: disabled
Rebooting in 86400 seconds..


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#bug-status-tracking for how to communicate with
syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches

Borislav Petkov

Feb 22, 2019, 5:20:50 PM
to syzbot, douly...@cn.fujitsu.com, h...@zytor.com, konra...@oracle.com, len....@intel.com, linux-...@vger.kernel.org, mi...@redhat.com, pu...@hygon.cn, syzkall...@googlegroups.com, tg...@linutronix.de, wang...@zte.com.cn, x...@kernel.org
On Fri, Feb 22, 2019 at 09:10:04AM -0800, syzbot wrote:
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit: 8a61716ff2ab Merge tag 'ceph-for-5.0-rc8' of git://github...
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=1531dd3f400000
> kernel config: https://syzkaller.appspot.com/x/.config?x=7132344728e7ec3f
> dashboard link: https://syzkaller.appspot.com/bug?extid=111bc509cd9740d7e4aa
> compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16d4966cc00000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10c492d0c00000

So I ran this for more than an hour in a guest here with the above
.config but nothing happened. The compiler I used is 8.2, dunno if that
makes the difference or if I'm missing something else...

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

Dmitry Vyukov

Feb 23, 2019, 5:33:45 AM
to Borislav Petkov, syzbot, Dou Liyang, H. Peter Anvin, konra...@oracle.com, Len Brown, LKML, Ingo Molnar, pu...@hygon.cn, syzkaller-bugs, Thomas Gleixner, wang...@zte.com.cn, the arch/x86 maintainers
On Fri, Feb 22, 2019 at 11:20 PM Borislav Petkov <b...@alien8.de> wrote:
>
> On Fri, Feb 22, 2019 at 09:10:04AM -0800, syzbot wrote:
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit: 8a61716ff2ab Merge tag 'ceph-for-5.0-rc8' of git://github...
> > git tree: upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=1531dd3f400000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=7132344728e7ec3f
> > dashboard link: https://syzkaller.appspot.com/bug?extid=111bc509cd9740d7e4aa
> > compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16d4966cc00000
> > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10c492d0c00000
>
> So I ran this for more than an hour in a guest here with the above
> .config but nothing happened. The compiler I used is 8.2, dunno of that
> makes the difference or I'm missing something else...

I was able to reproduce this on the first run:

# ./syz-execprog -procs=8 -repeat=0 hang
2019/02/23 10:24:31 parsed 1 programs
2019/02/23 10:24:31 executed programs: 0
2019/02/23 10:24:36 executed programs: 23
2019/02/23 10:24:41 executed programs: 71
2019/02/23 10:24:46 executed programs: 118
2019/02/23 10:24:52 executed programs: 162
2019/02/23 10:24:57 executed programs: 208
2019/02/23 10:25:02 executed programs: 258
2019/02/23 10:25:07 executed programs: 288

And on the console:

[ 77.032078] sched: RT throttling activated
[ 183.901866] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 183.902595] rcu: (detected by 2, t=10502 jiffies, g=5945, q=335)
[ 183.903249] rcu: All QSes seen, last rcu_preempt kthread activity
10500 (4294955649-4294945149), jiffies_till_next_fqs=1, root ->qsmask
0x0
[ 183.904548] syz-executor R running task 56728 7574 6076 0x00000000
[ 183.905300] Call Trace:
[ 183.905570] <IRQ>
[ 183.905807] sched_show_task.cold+0x273/0x2d5
[ 183.906283] ? can_nice.part.0+0x20/0x20
[ 183.906708] ? kmsg_dump_rewind_nolock+0xe4/0xe4
[ 183.907205] ? print_usage_bug+0xd0/0xd0
[ 183.907629] ? __sanitizer_cov_trace_cmp8+0x18/0x20
[ 183.908149] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.908729] print_other_cpu_stall.cold+0x7f2/0x8bb
[ 183.909260] ? print_cpu_stall+0x170/0x170
[ 183.909703] ? add_lock_to_list.isra.0+0x450/0x450
[ 183.910219] ? find_held_lock+0x35/0x120
[ 183.910643] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.911217] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.911791] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.912367] ? check_preemption_disabled+0x48/0x290
[ 183.912889] ? __this_cpu_preempt_check+0x1d/0x30
[ 183.913389] ? rcu_preempt_need_deferred_qs+0x71/0x1a0
[ 183.913939] ? do_trace_rcu_torture_read+0x10/0x10
[ 183.914496] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.915119] ? check_preemption_disabled+0x48/0x290
[ 183.915681] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.916307] ? check_preemption_disabled+0x48/0x290
[ 183.916876] rcu_check_callbacks+0xf36/0x1380
[ 183.917381] ? account_system_index_time+0x31a/0x5f0
[ 183.917964] ? rcutree_dead_cpu+0x10/0x10
[ 183.918437] ? trace_hardirqs_off+0xb8/0x310
[ 183.918934] ? __lock_is_held+0xb6/0x140
[ 183.919392] ? trace_hardirqs_on_caller+0x310/0x310
[ 183.919966] ? check_preemption_disabled+0x48/0x290
[ 183.920536] ? raise_softirq+0x189/0x430
[ 183.920997] ? account_system_index_time+0x33f/0x5f0
[ 183.921575] ? raise_softirq_irqoff+0x2d0/0x2d0
[ 183.922110] ? check_preemption_disabled+0x48/0x290
[ 183.922679] ? __sanitizer_cov_trace_const_cmp1+0x1a/0x20
[ 183.923303] ? hrtimer_run_queues+0x99/0x410
[ 183.923806] ? run_local_timers+0x194/0x230
[ 183.924301] ? timer_clear_idle+0x90/0x90
[ 183.924777] ? account_process_tick+0x27f/0x350
[ 183.925314] ? ktime_mono_to_any+0x3a0/0x3a0
[ 183.925819] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.926461] update_process_times+0x32/0x80
[ 183.926960] tick_sched_handle+0xa2/0x190
[ 183.927438] tick_sched_timer+0x47/0x130
[ 183.927905] __hrtimer_run_queues+0x3a7/0x1050
[ 183.928433] ? tick_sched_do_timer+0x1b0/0x1b0
[ 183.928959] ? hrtimer_fixup_init+0x90/0x90
[ 183.929450] ? kvm_clock_read+0x18/0x30
[ 183.929904] ? __sanitizer_cov_trace_cmp4+0x16/0x20
[ 183.930473] ? ktime_get_update_offsets_now+0x3d5/0x5e0
[ 183.931081] ? do_timer+0x50/0x50
[ 183.931474] ? add_lock_to_list.isra.0+0x450/0x450
[ 183.932032] ? rcu_softirq_qs+0x20/0x20
[ 183.932482] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.933112] hrtimer_interrupt+0x314/0x770
[ 183.933600] smp_apic_timer_interrupt+0x18d/0x760
[ 183.934160] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 183.934716] ? smp_call_function_single_interrupt+0x640/0x640
[ 183.935386] ? trace_hardirqs_off+0x310/0x310
[ 183.935897] ? task_prio+0x50/0x50
[ 183.936304] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.936932] ? check_preemption_disabled+0x48/0x290
[ 183.937504] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 183.938067] apic_timer_interrupt+0xf/0x20
[ 183.938549] </IRQ>
[ 183.938808] RIP: 0010:lock_is_held_type+0x17e/0x210
[ 183.939379] Code: 00 00 00 fc ff df 41 c7 85 7c 08 00 00 00 00 00
00 48 c1 e8 03 80 3c 10 00 75 63 48 83 3d f9 65 2e 08 00 74 30 48 89
df 57 9d <0f> 1f 44 00 00 48 83 c4 08 44 89 e0 5b 41 5c 41 5d 5d c3 48
83 c4
[ 183.941512] RSP: 0018:ffff88804c04f2c8 EFLAGS: 00000286 ORIG_RAX:
ffffffffffffff13
[ 183.942388] RAX: 1ffffffff132607e RBX: 0000000000000286 RCX: dffffc0000000000
[ 183.943208] RDX: dffffc0000000000 RSI: 0000000000000000 RDI: 0000000000000286
[ 183.944032] RBP: ffff88804c04f2e8 R08: ffff88805c074000 R09: ffffed100d8a7f98
[ 183.944855] R10: ffffed100d8a7f97 R11: ffff88806c53fcbb R12: 0000000000000000
[ 183.945674] R13: ffff88805c074000 R14: dffffc0000000000 R15: 0000000000000008
[ 183.946508] ___might_sleep+0xd5/0x160
[ 183.946956] ? ext4_write_end+0x1090/0x1090
[ 183.947452] generic_perform_write+0x3fd/0x6a0
[ 183.947978] ? add_page_wait_queue+0x480/0x480
[ 183.948504] ? current_time+0x1b0/0x1b0
[ 183.948958] ? generic_write_check_limits+0x380/0x380
[ 183.949547] ? ext4_file_write_iter+0x28b/0x1440
[ 183.950091] __generic_file_write_iter+0x25e/0x630
[ 183.950659] ext4_file_write_iter+0x37a/0x1440
[ 183.951184] ? ext4_file_mmap+0x410/0x410
[ 183.951654] ? save_stack+0xa9/0xd0
[ 183.952065] ? save_stack+0x45/0xd0
[ 183.952477] ? __kasan_kmalloc.constprop.0+0xcf/0xe0
[ 183.953053] ? kasan_kmalloc+0x9/0x10
[ 183.953485] ? __kmalloc+0x15c/0x740
[ 183.953918] ? iter_file_splice_write+0x267/0xfc0
[ 183.954466] ? splice_direct_to_actor+0x3be/0x9d0
[ 183.955015] ? do_splice_direct+0x2c7/0x420
[ 183.955504] ? do_sendfile+0x61d/0xe60
[ 183.955948] ? __x64_sys_sendfile64+0x15a/0x240
[ 183.956475] ? do_syscall_64+0x1a3/0x800
[ 183.956939] ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 183.957550] ? debug_lockdep_rcu_enabled+0x71/0xa0
[ 183.958115] ? common_file_perm+0x231/0x800
[ 183.958604] ? print_usage_bug+0xd0/0xd0
[ 183.959066] do_iter_readv_writev+0x902/0xbc0
[ 183.959582] ? vfs_dedupe_file_range+0x780/0x780
[ 183.960122] ? apparmor_file_permission+0x25/0x30
[ 183.960674] ? rw_verify_area+0x118/0x360
[ 183.961146] do_iter_write+0x184/0x610
[ 183.961587] ? pipe_to_sendpage+0x390/0x390
[ 183.962087] ? rcu_read_lock_sched_held+0x110/0x130
[ 183.962658] ? __kmalloc+0x5d5/0x740
[ 183.963082] vfs_iter_write+0x77/0xb0
[ 183.963515] iter_file_splice_write+0x885/0xfc0
[ 183.964044] ? fsnotify+0x4f5/0xed0
[ 183.964462] ? page_cache_pipe_buf_steal+0x800/0x800
[ 183.965046] ? rw_verify_area+0x118/0x360
[ 183.965516] ? page_cache_pipe_buf_steal+0x800/0x800
[ 183.966098] direct_splice_actor+0x126/0x1a0
[ 183.966598] splice_direct_to_actor+0x3be/0x9d0
[ 183.967125] ? generic_pipe_buf_nosteal+0x10/0x10
[ 183.967679] ? do_splice_to+0x190/0x190
[ 183.968134] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.968762] ? rw_verify_area+0x118/0x360
[ 183.969233] do_splice_direct+0x2c7/0x420
[ 183.969709] ? splice_direct_to_actor+0x9d0/0x9d0
[ 183.970264] ? rcu_read_lock_sched_held+0x110/0x130
[ 183.970833] ? rcu_sync_lockdep_assert+0x73/0xb0
[ 183.971376] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 183.972003] ? __sb_start_write+0x1aa/0x360
[ 183.972499] do_sendfile+0x61d/0xe60
[ 183.972926] ? do_compat_pwritev64+0x1c0/0x1c0
[ 183.973453] ? __sanitizer_cov_trace_const_cmp8+0x18/0x20
[ 183.974084] ? _copy_from_user+0xdd/0x150
[ 183.974561] __x64_sys_sendfile64+0x15a/0x240
[ 183.975080] ? __ia32_sys_sendfile+0x2a0/0x2a0
[ 183.975605] ? trace_hardirqs_on_thunk+0x1a/0x1c
[ 183.976151] do_syscall_64+0x1a3/0x800
[ 183.976598] ? syscall_return_slowpath+0x5f0/0x5f0
[ 183.977161] ? prepare_exit_to_usermode+0x3b0/0x3b0
[ 183.977733] ? __switch_to_asm+0x34/0x70
[ 183.978204] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 183.978758] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 183.979348] RIP: 0033:0x457629
[ 183.979714] Code: 8d b5 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66
90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24
08 0f 05 <48> 3d 01 f0 ff ff 0f 83 5b b5 fb ff c3 66 2e 0f 1f 84 00 00
00 00
[ 183.981845] RSP: 002b:00007f435f197c88 EFLAGS: 00000246 ORIG_RAX:
0000000000000028
[ 183.982718] RAX: ffffffffffffffda RBX: 000000000071bfa0 RCX: 0000000000457629
[ 183.983533] RDX: 0000000020000000 RSI: 0000000000000003 RDI: 0000000000000003
[ 183.984358] RBP: 0000000000000004 R08: 0000000000000000 R09: 0000000000000000
[ 183.985179] R10: 00008080fffffffe R11: 0000000000000246 R12: 00007f435f1986d4
[ 183.986005] R13: 00000000004abf30 R14: 00000000006eb8b8 R15: 00000000ffffffff



This is with qemu with 4 CPUs:

qemu-system-x86_64 -hda wheezy.img -net
user,host=10.0.2.10,hostfwd=tcp::10022-:22 -net nic -nographic -kernel
arch/x86/boot/bzImage -append "kvm-intel.nested=1
kvm-intel.unrestricted_guest=1 kvm-intel.ept=1
kvm-intel.flexpriority=1 kvm-intel.vpid=1
kvm-intel.emulate_invalid_guest_state=1 kvm-intel.eptad=1
kvm-intel.enable_shadow_vmcs=1 kvm-intel.pml=1
kvm-intel.enable_apicv=1 console=ttyS0 root=/dev/sda
earlyprintk=serial slub_debug=UZ vsyscall=native rodata=n oops=panic
panic_on_warn=1 panic=86400 ima_policy=tcb" -enable-kvm -pidfile
vm_pid -m 2G -smp 4 -cpu host


There are a bunch of other bug reports about hangs whose reproducers
mention perf_event_open and sched_setattr.

Borislav Petkov

Feb 23, 2019, 5:38:18 AM
to Dmitry Vyukov, syzbot, Dou Liyang, H. Peter Anvin, konra...@oracle.com, Len Brown, LKML, Ingo Molnar, pu...@hygon.cn, syzkaller-bugs, Thomas Gleixner, wang...@zte.com.cn, the arch/x86 maintainers, Peter Zijlstra
On Sat, Feb 23, 2019 at 11:33:33AM +0100, Dmitry Vyukov wrote:
> On Fri, Feb 22, 2019 at 11:20 PM Borislav Petkov <b...@alien8.de> wrote:
> >
> > On Fri, Feb 22, 2019 at 09:10:04AM -0800, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following crash on:
> > >
> > > HEAD commit: 8a61716ff2ab Merge tag 'ceph-for-5.0-rc8' of git://github...
> > > git tree: upstream
> > > console output: https://syzkaller.appspot.com/x/log.txt?x=1531dd3f400000
> > > kernel config: https://syzkaller.appspot.com/x/.config?x=7132344728e7ec3f
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=111bc509cd9740d7e4aa
> > > compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> > > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16d4966cc00000
> > > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10c492d0c00000
> >
> > So I ran this for more than an hour in a guest here with the above
> > .config but nothing happened. The compiler I used is 8.2, dunno of that
> > makes the difference or I'm missing something else...
>
> I was able to reproduce this on the first run:
>
> # ./syz-execprog -procs=8 -repeat=0 hang

Ok, this is what I'm missing: I ran the reproducer directly but you run
a multithreaded thing with this syz-execprog. Where do I get that
syz-thing?

> There is a bunch of other bug reports about hangs where reproducers
> mention perf_event_open and sched_setattr.

This is a known issue, says peterz.

Dmitry Vyukov

Feb 23, 2019, 5:51:02 AM
to Borislav Petkov, syzbot, Dou Liyang, H. Peter Anvin, konra...@oracle.com, Len Brown, LKML, Ingo Molnar, pu...@hygon.cn, syzkaller-bugs, Thomas Gleixner, wang...@zte.com.cn, the arch/x86 maintainers, Peter Zijlstra
Peter, what is the canonical location to reference for this issue
(open bug or something)? When we get back to this report later, how
does one know if this is fixed or not and what's the status?

The C repro hung the machine even faster, so it must be some other difference.
I would not expect the compiler to make a difference. qemu, the number
of CPUs and maybe the host kernel config (at least for interrupt
granularity or something?) may be relevant.

But if you want to run the syzkaller reproducer with syz-execprog,
there is a docs link in the "syz repro" link.
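Roughly, the workflow looks like this (paths, port and the repro file name are illustrative; see the syzkaller docs behind the "syz repro" link for the authoritative steps):

```shell
# Build the syzkaller tools on the host (needs a Go toolchain).
git clone https://github.com/google/syzkaller
cd syzkaller
make execprog executor

# Copy the executor pair plus the downloaded "syz repro" program into
# the test VM, then replay it repeatedly from inside the VM:
scp -P 10022 bin/linux_amd64/syz-execprog bin/linux_amd64/syz-executor \
    repro.syz root@localhost:
ssh -p 10022 root@localhost './syz-execprog -procs=8 -repeat=0 repro.syz'
```

-procs=8 runs eight parallel executor processes and -repeat=0 loops forever, which is what shakes out races like this one.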

# [ 38.881989] hrtimer: interrupt took 28594 ns
[ 91.730829] BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0
nice=0 stuck for 51s!
[ 91.734505] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0
nice=0 stuck for 51s!
[ 91.735403] BUG: workqueue lockup - pool cpus=2 node=0 flags=0x0
nice=0 stuck for 51s!
[ 91.736273] BUG: workqueue lockup - pool cpus=2 node=0 flags=0x0
nice=-20 stuck for 49s!
[ 91.737173] BUG: workqueue lockup - pool cpus=3 node=0 flags=0x0
nice=0 stuck for 51s!
[ 91.738044] BUG: workqueue lockup - pool cpus=0-3 flags=0x4 nice=0
stuck for 49s!
[ 91.738887] Showing busy workqueues and worker pools:
[ 91.739496] workqueue events: flags=0x0
[ 91.740012] pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=3/256
[ 91.740784] pending: defense_work_handler, e1000_watchdog, cache_reap
[ 91.741584] pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 91.742237] pending: cache_reap, check_corruption
[ 91.742802] pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
[ 91.743451] pending: cache_reap
[ 91.743844] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=3/256
[ 91.744497] pending: vmstat_shepherd, cache_reap, psi_update_work
[ 91.745286] workqueue events_unbound: flags=0x2
[ 91.745872] pwq 8: cpus=0-3 flags=0x4 nice=0 active=1/512
[ 91.747586] pending: flush_to_ldisc
[ 91.748049]
[ 91.748052] ======================================================
[ 91.748055] WARNING: possible circular locking dependency detected
[ 91.748056] 5.0.0-rc7+ #7 Not tainted
[ 91.748058] ------------------------------------------------------
[ 91.748060] a.out/6057 is trying to acquire lock:
[ 91.748062] 00000000a11439e2 (console_owner){-.-.}, at:
console_unlock+0x4d3/0x11e0
[ 91.748068]
[ 91.748069] but task is already holding lock:
[ 91.748071] 0000000082e8b4f5 (&pool->lock/1){-.-.}, at:
show_workqueue_state.cold+0xac5/0x15a8
[ 91.748078]
[ 91.748080] which lock already depends on the new lock.
[ 91.748081]
[ 91.748082]
[ 91.748084] the existing dependency chain (in reverse order) is:
[ 91.748085]
[ 91.748086] -> #3 (&pool->lock/1){-.-.}:
[ 91.748092] _raw_spin_lock+0x2f/0x40
[ 91.748094] __queue_work+0x2d9/0x1450
[ 91.748095] queue_work_on+0x192/0x200
[ 91.748097] tty_schedule_flip+0x149/0x1e0
[ 91.748099] tty_flip_buffer_push+0x16/0x20
[ 91.748101] pty_write+0x1a6/0x200
[ 91.748102] n_tty_write+0xb9e/0x1220
[ 91.748104] tty_write+0x45b/0x7a0
[ 91.748105] __vfs_write+0x116/0xb40
[ 91.748107] vfs_write+0x20c/0x580
[ 91.748109] ksys_write+0x105/0x260
[ 91.748110] __x64_sys_write+0x73/0xb0
[ 91.748112] do_syscall_64+0x1a3/0x800
[ 91.748114] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 91.748115]
[ 91.748116] -> #2 (&(&port->lock)->rlock){-.-.}:
[ 91.748122] _raw_spin_lock_irqsave+0x95/0xd0
[ 91.748123] tty_port_tty_get+0x22/0x80
[ 91.748125] tty_port_default_wakeup+0x16/0x40
[ 91.748127] tty_port_tty_wakeup+0x5d/0x70
[ 91.748129] uart_write_wakeup+0x46/0x70
[ 91.748131] serial8250_tx_chars+0x4a4/0xcc0
[ 91.748133] serial8250_handle_irq.part.0+0x1be/0x2e0
[ 91.748135] serial8250_default_handle_irq+0xc5/0x150
[ 91.748136] serial8250_interrupt+0xfb/0x1a0
[ 91.748138] __handle_irq_event_percpu+0x1c6/0xb10
[ 91.748140] handle_irq_event_percpu+0xa0/0x1d0
[ 91.748142] handle_irq_event+0xa7/0x134
[ 91.748144] handle_edge_irq+0x232/0x8a0
[ 91.748145] handle_irq+0x252/0x3d8
[ 91.748147] do_IRQ+0x99/0x1d0
[ 91.748148] ret_from_intr+0x0/0x1e
[ 91.748150] native_safe_halt+0x2/0x10
[ 91.748152] arch_cpu_idle+0x10/0x20
[ 91.748153] default_idle_call+0x36/0x90
[ 91.748155] do_idle+0x386/0x5d0
[ 91.748157] cpu_startup_entry+0x1b/0x20
[ 91.748158] rest_init+0x245/0x37b
[ 91.748160] arch_call_rest_init+0xe/0x1b
[ 91.748161] start_kernel+0x877/0x8b2
[ 91.748163] x86_64_start_reservations+0x29/0x2b
[ 91.748165] x86_64_start_kernel+0x77/0x7b
[ 91.748167] secondary_startup_64+0xa4/0xb0
[ 91.748168]
[ 91.748169] -> #1 (&port_lock_key){-.-.}:
[ 91.748175] _raw_spin_lock_irqsave+0x95/0xd0
[ 91.748177] serial8250_console_write+0x253/0xab0
[ 91.748178] univ8250_console_write+0x5f/0x70
[ 91.748180] console_unlock+0xcff/0x11e0
[ 91.748182] vprintk_emit+0x370/0x960
[ 91.748183] vprintk_default+0x28/0x30
[ 91.748185] vprintk_func+0x7e/0x189
[ 91.748187] printk+0xba/0xed
[ 91.748189] register_console+0x74d/0xb50
[ 91.748190] univ8250_console_init+0x3e/0x4b
[ 91.748192] console_init+0x6af/0x9f3
[ 91.748194] start_kernel+0x5dc/0x8b2
[ 91.748196] x86_64_start_reservations+0x29/0x2b
[ 91.748198] x86_64_start_kernel+0x77/0x7b
[ 91.748200] secondary_startup_64+0xa4/0xb0
[ 91.748200]
[ 91.748201] -> #0 (console_owner){-.-.}:
[ 91.748207] lock_acquire+0x1db/0x570
[ 91.748209] console_unlock+0x53d/0x11e0
[ 91.748211] vprintk_emit+0x370/0x960
[ 91.748212] vprintk_default+0x28/0x30
[ 91.748214] vprintk_func+0x7e/0x189
[ 91.748215] printk+0xba/0xed
[ 91.748217] show_workqueue_state.cold+0xc5f/0x15a8
[ 91.748219] wq_watchdog_timer_fn+0x6bd/0x7e0
[ 91.748221] call_timer_fn+0x254/0x900
[ 91.748222] __run_timers+0x6fc/0xd50
[ 91.748224] run_timer_softirq+0x88/0xb0
[ 91.748226] __do_softirq+0x30b/0xb11
[ 91.748227] irq_exit+0x180/0x1d0
[ 91.748229] smp_apic_timer_interrupt+0x1b7/0x760
[ 91.748231] apic_timer_interrupt+0xf/0x20
[ 91.748233] __sanitizer_cov_trace_const_cmp8+0x13/0x20
[ 91.748234] sanity+0x109/0x330
[ 91.748236] copy_page_to_iter+0x634/0x1000
[ 91.748238] generic_file_read_iter+0xbb1/0x2d40
[ 91.748240] ext4_file_read_iter+0x180/0x3c0
[ 91.748242] generic_file_splice_read+0x5c4/0xa90
[ 91.748243] do_splice_to+0x12a/0x190
[ 91.748245] splice_direct_to_actor+0x31b/0x9d0
[ 91.748247] do_splice_direct+0x2c7/0x420
[ 91.748249] do_sendfile+0x61d/0xe60
[ 91.748250] __x64_sys_sendfile64+0x15a/0x240
[ 91.748252] do_syscall_64+0x1a3/0x800
[ 91.748254] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 91.748255]
[ 91.748257] other info that might help us debug this:
[ 91.748258]
[ 91.748259] Chain exists of:
[ 91.748260] console_owner --> &(&port->lock)->rlock --> &pool->lock/1
[ 91.748268]
[ 91.748270] Possible unsafe locking scenario:
[ 91.748271]
[ 91.748273] CPU0 CPU1
[ 91.748274] ---- ----
[ 91.748275] lock(&pool->lock/1);
[ 91.748280] lock(&(&port->lock)->rlock);
[ 91.748284] lock(&pool->lock/1);
[ 91.748288] lock(console_owner);
[ 91.748291]
[ 91.748293] *** DEADLOCK ***
[ 91.748293]
[ 91.748295] 5 locks held by a.out/6057:
[ 91.748296] #0: 00000000878cae3a (sb_writers#3){.+.+}, at:
do_sendfile+0xae0/0xe60
[ 91.748304] #1: 000000007b4589c2 ((&wq_watchdog_timer)){+.-.}, at:
call_timer_fn+0x1b4/0x900
[ 91.748311] #2: 0000000085eca237 (rcu_read_lock_sched){....}, at:
show_workqueue_state+0x0/0x180
[ 91.748318] #3: 0000000082e8b4f5 (&pool->lock/1){-.-.}, at:
show_workqueue_state.cold+0xac5/0x15a8
[ 91.748326] #4: 00000000fffd7726 (console_lock){+.+.}, at:
vprintk_emit+0x351/0x960
[ 91.748332]
[ 91.748334] stack backtrace:
[ 91.748336] CPU: 0 PID: 6057 Comm: a.out Not tainted 5.0.0-rc7+ #7
[ 91.748339] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
BIOS 1.10.2-1 04/01/2014
[ 91.748340] Call Trace:
[ 91.748341] <IRQ>
[ 91.748343] dump_stack+0x1db/0x2d0
[ 91.748345] ? dump_stack_print_info.cold+0x20/0x20
[ 91.748347] ? print_stack_trace+0x77/0xb0
[ 91.748348] ? vprintk_func+0x86/0x189
[ 91.748350] print_circular_bug.isra.0.cold+0x1cc/0x28f
[ 91.748352] __lock_acquire+0x3014/0x4a30
[ 91.748354] ? mark_held_locks+0x100/0x100
[ 91.748355] ? memcpy+0x46/0x50
[ 91.748357] ? add_lock_to_list.isra.0+0x450/0x450
[ 91.748358] ? sprintf+0xc0/0x100
[ 91.748360] ? scnprintf+0x140/0x140
[ 91.748362] ? console_unlock+0x518/0x11e0
[ 91.748364] ? find_held_lock+0x35/0x120
[ 91.748365] lock_acquire+0x1db/0x570
[ 91.748367] ? console_unlock+0x4d3/0x11e0
[ 91.748369] ? lock_release+0xc40/0xc40
[ 91.748371] ? do_raw_spin_trylock+0x270/0x270
[ 91.748373] ? lock_acquire+0x1db/0x570
[ 91.748374] console_unlock+0x53d/0x11e0
[ 91.748376] ? console_unlock+0x4d3/0x11e0
[ 91.748378] ? kmsg_dump_rewind+0x2b0/0x2b0
[ 91.748380] ? _raw_spin_unlock_irqrestore+0xa4/0xe0
[ 91.748382] ? vprintk_emit+0x351/0x960
[ 91.748384] ? __down_trylock_console_sem+0x148/0x210
[ 91.748385] vprintk_emit+0x370/0x960
[ 91.748387] ? wake_up_klogd+0x180/0x180
[ 91.748389] ? lockdep_hardirqs_on+0x19b/0x5d0
[ 91.748390] ? retint_kernel+0x2d/0x2d
[ 91.748392] ? trace_hardirqs_on_caller+0xc0/0x310
[ 91.748394] vprintk_default+0x28/0x30
[ 91.748396] vprintk_func+0x7e/0x189
[ 91.748397] ? printk+0xba/0xed
[ 91.748398] printk+0xba/0xed
[ 91.748400] ? kmsg_dump_rewind_nolock+0xe4/0xe4
[ 91.748402] ? wq_watchdog_touch+0xb0/0x102
[ 91.748404] show_workqueue_state.cold+0xc5f/0x15a8
[ 91.748406] ? print_worker_info+0x540/0x540
[ 91.748408] ? add_lock_to_list.isra.0+0x450/0x450
[ 91.748409] ? retint_kernel+0x2d/0x2d
[ 91.748411] ? trace_hardirqs_on_thunk+0x1a/0x1c
[ 91.748413] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 91.748415] ? lock_downgrade+0x910/0x910
[ 91.748416] ? kasan_check_read+0x11/0x20
[ 91.748418] ? rcu_dynticks_curr_cpu_in_eqs+0xa2/0x170
[ 91.748420] ? rcu_read_unlock_special+0x380/0x380
[ 91.748422] ? wq_watchdog_timer_fn+0x4f4/0x7e0
[ 91.748424] wq_watchdog_timer_fn+0x6bd/0x7e0
[ 91.748426] ? show_workqueue_state+0x180/0x180
[ 91.748428] ? add_lock_to_list.isra.0+0x450/0x450
[ 91.748430] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 91.748432] ? check_preemption_disabled+0x48/0x290
[ 91.748433] ? __lock_is_held+0xb6/0x140
[ 91.748435] call_timer_fn+0x254/0x900
[ 91.748437] ? show_workqueue_state+0x180/0x180
[ 91.748438] ? process_timeout+0x40/0x40
[ 91.748440] ? retint_kernel+0x2d/0x2d
[ 91.748442] ? show_workqueue_state+0x180/0x180
[ 91.748443] ? _raw_spin_unlock_irq+0x54/0x90
[ 91.748445] ? show_workqueue_state+0x180/0x180
[ 91.748447] __run_timers+0x6fc/0xd50
[ 91.748449] ? __bpf_trace_timer_expire_entry+0x30/0x30
[ 91.748451] ? print_usage_bug+0xd0/0xd0
[ 91.748452] ? find_held_lock+0x35/0x120
[ 91.748454] ? clockevents_program_event+0x15f/0x380
[ 91.748456] ? add_lock_to_list.isra.0+0x450/0x450
[ 91.748458] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 91.748460] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 91.748462] ? check_preemption_disabled+0x48/0x290
[ 91.748464] ? __lock_is_held+0xb6/0x140
[ 91.748466] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 91.748468] ? check_preemption_disabled+0x48/0x290
[ 91.748469] run_timer_softirq+0x88/0xb0
[ 91.748471] ? rcu_read_lock_sched_held+0x110/0x130
[ 91.748473] __do_softirq+0x30b/0xb11
[ 91.748475] ? __irqentry_text_end+0x1f96c2/0x1f96c2
[ 91.748477] ? kvm_clock_read+0x18/0x30
[ 91.748478] ? kvm_sched_clock_read+0x9/0x20
[ 91.748480] ? sched_clock+0x2e/0x50
[ 91.748482] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 91.748484] ? __sanitizer_cov_trace_const_cmp8+0x18/0x20
[ 91.748486] ? check_preemption_disabled+0x48/0x290
[ 91.748487] irq_exit+0x180/0x1d0
[ 91.748489] smp_apic_timer_interrupt+0x1b7/0x760
[ 91.748491] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 91.748493] ? smp_call_function_single_interrupt+0x640/0x640
[ 91.748495] ? trace_hardirqs_off+0x310/0x310
[ 91.748497] ? task_prio+0x50/0x50
[ 91.748499] ? __sanitizer_cov_trace_const_cmp8+0x18/0x20
[ 91.748501] ? check_preemption_disabled+0x48/0x290
[ 91.748502] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 91.748504] apic_timer_interrupt+0xf/0x20
[ 91.748505] </IRQ>
[ 91.748508] RIP: 0010:__sanitizer_cov_trace_const_cmp8+0x13/0x20
[ 91.748509] Code: 00
[ 91.748512] Lost 71 message(s)!
[ 91.866234] workqueue events_power_efficient: flags=0x80
[ 91.866863] pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=2/256
[ 91.867568] pending: gc_worker, neigh_periodic_work
[ 91.868202] pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=3/256
[ 91.868911] pending: crda_timeout_work, neigh_periodic_work,
do_cache_clean
[ 91.869781] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 91.870488] pending: fb_flashcursor
[ 91.870963] workqueue mm_percpu_wq: flags=0x8
[ 91.871482] pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256
[ 91.872188] pending: vmstat_update
[ 91.872644] pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=1/256
[ 91.873353] pending: vmstat_update
[ 91.873806] pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
[ 91.874512] pending: vmstat_update
[ 91.874964] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 91.875670] pending: vmstat_update
[ 91.876131] workqueue writeback: flags=0x4e
[ 91.876631] pwq 8: cpus=0-3 flags=0x4 nice=0 active=1/256
[ 91.877292] pending: wb_workfn
[ 91.877716] workqueue kblockd: flags=0x18
[ 91.878201] pwq 5: cpus=2 node=0 flags=0x0 nice=-20 active=1/256
[ 91.878930] pending: blk_mq_timeout_work
[ 91.879513] workqueue dm_bufio_cache: flags=0x8
[ 91.880053] pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256
[ 91.880761] pending: work_fn

Borislav Petkov

Feb 23, 2019, 5:56:57 AM
to Dmitry Vyukov, syzbot, H. Peter Anvin, konra...@oracle.com, Len Brown, LKML, Ingo Molnar, pu...@hygon.cn, syzkaller-bugs, Thomas Gleixner, wang...@zte.com.cn, the arch/x86 maintainers, Peter Zijlstra
On Sat, Feb 23, 2019 at 11:50:49AM +0100, Dmitry Vyukov wrote:
> Peter, what is the canonical location to reference for this issue
> (open bug or something)? When we get back to this report later, how
> does one know if this is fixed or not and what's the status?

bugzilla.kernel.org maybe?

> The C repro hanged the machine even faster, so it must be some other difference.
> I would expect compiler to not make difference. qemu, number of CPUs
> and maybe host kernel config (at least for interrupt granularity or
> something?) may be relevant.
>
> But if you want to run syzkaller reproducer with syz-execprog, there
> is a docs link in the "syz repro" link.

Thanks.

syzbot

Mar 17, 2019, 6:43:02 AM
to b...@alien8.de, de...@driverdev.osuosl.org, douly...@cn.fujitsu.com, dvy...@google.com, for...@alittletooquiet.net, gre...@linuxfoundation.org, h...@zytor.com, konra...@oracle.com, len....@intel.com, linux-...@vger.kernel.org, linu...@kvack.org, mi...@redhat.com, pet...@infradead.org, pu...@hygon.cn, syzkall...@googlegroups.com, tg...@linutronix.de, tvbo...@gmail.com, wang...@zte.com.cn, x...@kernel.org
syzbot has bisected this bug to:

commit f1e3e92135202ff3d95195393ee62808c109208c
Author: Malcolm Priestley <tvbo...@gmail.com>
Date: Wed Jul 22 18:16:42 2015 +0000

staging: vt6655: fix tagSRxDesc -> next_desc type

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=111856cf200000
start commit: f1e3e921 staging: vt6655: fix tagSRxDesc -> next_desc type
git tree: upstream
final crash: https://syzkaller.appspot.com/x/report.txt?x=131856cf200000
console output: https://syzkaller.appspot.com/x/log.txt?x=151856cf200000
Reported-by: syzbot+111bc5...@syzkaller.appspotmail.com
Fixes: f1e3e921 ("staging: vt6655: fix tagSRxDesc -> next_desc type")

Greg KH

Mar 17, 2019, 7:04:50 AM
to syzbot, b...@alien8.de, de...@driverdev.osuosl.org, douly...@cn.fujitsu.com, dvy...@google.com, for...@alittletooquiet.net, h...@zytor.com, konra...@oracle.com, len....@intel.com, linux-...@vger.kernel.org, linu...@kvack.org, mi...@redhat.com, pet...@infradead.org, pu...@hygon.cn, syzkall...@googlegroups.com, tg...@linutronix.de, tvbo...@gmail.com, wang...@zte.com.cn, x...@kernel.org
I think syzbot is a bit confused here: how can this simple patch, for a
driver whose hardware you do not even have, cause this problem?

thanks,

greg k-h

Dmitry Vyukov

Mar 18, 2019, 8:18:16 AM
to Greg KH, syzbot, Borislav Petkov, open list:ANDROID DRIVERS, Dou Liyang, for...@alittletooquiet.net, H. Peter Anvin, konra...@oracle.com, Len Brown, LKML, Linux-MM, Ingo Molnar, Peter Zijlstra, pu...@hygon.cn, syzkaller-bugs, Thomas Gleixner, tvbo...@gmail.com, wang...@zte.com.cn, the arch/x86 maintainers
Yes, I guess so.
This perf_event_open+sched_setattr combo bug causes hangs at random
places, with developers looking at the same hangs again and again, and
incorrect bisections. It would be useful if somebody knowledgeable in
perf/sched looked at it.