[moderation] [kernel?] BUG: soft lockup in __hrtimer_run_queues (2)


syzbot

Jul 3, 2024, 6:45:27 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: de0a9f448633 Merge tag 'riscv-for-linus-6.10-rc6' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=137fafa6980000
kernel config: https://syzkaller.appspot.com/x/.config?x=68f694eee402f940
dashboard link: https://syzkaller.appspot.com/bug?extid=865df6ad789e27d205ce
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
CC: [anna-...@linutronix.de fred...@kernel.org linux-...@vger.kernel.org tg...@linutronix.de]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c45cd3b7e225/disk-de0a9f44.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/665937f730d5/vmlinux-de0a9f44.xz
kernel image: https://storage.googleapis.com/syzbot-assets/decdffb12226/bzImage-de0a9f44.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+865df6...@syzkaller.appspotmail.com

vkms_vblank_simulate: vblank timer overrun
vkms_vblank_simulate: vblank timer overrun
watchdog: BUG: soft lockup - CPU#1 stuck for 140s! [kworker/u8:4:61]
Modules linked in:
irq event stamp: 541899
hardirqs last enabled at (541898): [<ffffffff8aebf452>] __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
hardirqs last enabled at (541898): [<ffffffff8aebf452>] _raw_spin_unlock_irqrestore+0x52/0x80 kernel/locking/spinlock.c:194
hardirqs last disabled at (541899): [<ffffffff8ae83a5e>] sysvec_apic_timer_interrupt+0xe/0xb0 arch/x86/kernel/apic/apic.c:1043
softirqs last enabled at (527620): [<ffffffff815339ae>] softirq_handle_end kernel/softirq.c:400 [inline]
softirqs last enabled at (527620): [<ffffffff815339ae>] handle_softirqs+0x5be/0x8f0 kernel/softirq.c:582
softirqs last disabled at (528057): [<ffffffff815346db>] __do_softirq kernel/softirq.c:588 [inline]
softirqs last disabled at (528057): [<ffffffff815346db>] invoke_softirq kernel/softirq.c:428 [inline]
softirqs last disabled at (528057): [<ffffffff815346db>] __irq_exit_rcu kernel/softirq.c:637 [inline]
softirqs last disabled at (528057): [<ffffffff815346db>] irq_exit_rcu+0xbb/0x120 kernel/softirq.c:649
CPU: 1 PID: 61 Comm: kworker/u8:4 Not tainted 6.10.0-rc5-syzkaller-00253-gde0a9f448633 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Workqueue: writeback wb_workfn (flush-8:0)
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0x31/0x80 kernel/locking/spinlock.c:194
Code: f5 53 48 8b 74 24 10 48 89 fb 48 83 c7 18 e8 76 22 81 f6 48 89 df e8 5e 9f 81 f6 f7 c5 00 02 00 00 75 23 9c 58 f6 c4 02 75 37 <bf> 01 00 00 00 e8 75 e1 72 f6 65 8b 05 c6 e4 17 75 85 c0 74 16 5b
RSP: 0018:ffffc90000a18de8 EFLAGS: 00000246
RAX: 0000000000000006 RBX: ffff8880b932c9c0 RCX: 1ffffffff285002e
RDX: 0000000000000000 RSI: ffffffff8b2cc140 RDI: ffffffff8b9024c0
RBP: 0000000000000286 R08: 0000000000000001 R09: fffffbfff284fa58
R10: ffffffff9427d2c7 R11: 000000000000000a R12: ffff8880b932cc40
R13: ffff88807b17ef20 R14: ffff8880b932c9c0 R15: ffffffff86935f70
FS: 0000000000000000(0000) GS:ffff8880b9300000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fbe8c255260 CR3: 000000000d97a000 CR4: 0000000000350ef0
Call Trace:
<IRQ>
__run_hrtimer kernel/time/hrtimer.c:1683 [inline]
__hrtimer_run_queues+0x5a7/0xcc0 kernel/time/hrtimer.c:1751
NMI backtrace for cpu 1
CPU: 1 PID: 61 Comm: kworker/u8:4 Not tainted 6.10.0-rc5-syzkaller-00253-gde0a9f448633 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Workqueue: writeback wb_workfn (flush-8:0)
RIP: 0010:format_decode+0x27d/0xba0 lib/vsprintf.c:2575
Code: e2 07 42 0f b6 04 30 38 d0 7f 08 84 c0 0f 85 b3 08 00 00 0f b6 7d 00 48 c7 c6 60 f3 81 8c 49 89 ff e8 67 7b b5 f6 41 80 ff 2b <0f> 84 bb 04 00 00 76 8b 41 80 ff 2d 0f 84 c2 04 00 00 41 80 ff 30
RSP: 0018:ffffc90000a18190 EFLAGS: 00000016
RAX: 0000000000000000 RBX: ffffffff8b2d9681 RCX: ffffffff8ad9af09
RDX: ffff888018738000 RSI: 0000000000000030 RDI: 0000000000000001
RBP: ffffffff8b2d9682 R08: 0000000000000001 R09: 0000000000000030
R10: 0000000000000035 R11: 000000000000000a R12: ffffc90000a18290
R13: ffffffff8b2d9681 R14: dffffc0000000000 R15: 0000000000000035
FS: 0000000000000000(0000) GS:ffff8880b9300000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fbe8c255260 CR3: 000000000d97a000 CR4: 0000000000350ef0
Call Trace:
<NMI>
</NMI>
<IRQ>
vsnprintf+0x13d/0x1880 lib/vsprintf.c:2776
sprintf+0xcd/0x110 lib/vsprintf.c:3028
print_time kernel/printk/printk.c:1327 [inline]
info_print_prefix+0x25c/0x350 kernel/printk/printk.c:1353
record_print_text+0x141/0x400 kernel/printk/printk.c:1402
printk_get_next_message+0x2a6/0x670 kernel/printk/printk.c:2855
console_emit_next_record kernel/printk/printk.c:2895 [inline]
console_flush_all+0x3b2/0xd70 kernel/printk/printk.c:2994
console_unlock+0xae/0x290 kernel/printk/printk.c:3063
vprintk_emit kernel/printk/printk.c:2345 [inline]
vprintk_emit+0x11a/0x5a0 kernel/printk/printk.c:2300
vprintk+0x7f/0xa0 kernel/printk/printk_safe.c:45
_printk+0xc8/0x100 kernel/printk/printk.c:2370
printk_stack_address arch/x86/kernel/dumpstack.c:72 [inline]
show_trace_log_lvl+0x211/0x500 arch/x86/kernel/dumpstack.c:285
show_regs arch/x86/kernel/dumpstack.c:478 [inline]
show_regs+0x8c/0xa0 arch/x86/kernel/dumpstack.c:465
watchdog_timer_fn+0x570/0x7d0 kernel/watchdog.c:759
__run_hrtimer kernel/time/hrtimer.c:1687 [inline]
__hrtimer_run_queues+0x65a/0xcc0 kernel/time/hrtimer.c:1751
hrtimer_interrupt+0x31b/0x800 kernel/time/hrtimer.c:1813
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1032 [inline]
__sysvec_apic_timer_interrupt+0x112/0x450 arch/x86/kernel/apic/apic.c:1049
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
sysvec_apic_timer_interrupt+0x43/0xb0 arch/x86/kernel/apic/apic.c:1043
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0x31/0x80 kernel/locking/spinlock.c:194
Code: f5 53 48 8b 74 24 10 48 89 fb 48 83 c7 18 e8 76 22 81 f6 48 89 df e8 5e 9f 81 f6 f7 c5 00 02 00 00 75 23 9c 58 f6 c4 02 75 37 <bf> 01 00 00 00 e8 75 e1 72 f6 65 8b 05 c6 e4 17 75 85 c0 74 16 5b
RSP: 0018:ffffc90000a18de8 EFLAGS: 00000246
RAX: 0000000000000006 RBX: ffff8880b932c9c0 RCX: 1ffffffff285002e
RDX: 0000000000000000 RSI: ffffffff8b2cc140 RDI: ffffffff8b9024c0
RBP: 0000000000000286 R08: 0000000000000001 R09: fffffbfff284fa58
R10: ffffffff9427d2c7 R11: 000000000000000a R12: ffff8880b932cc40
R13: ffff88807b17ef20 R14: ffff8880b932c9c0 R15: ffffffff86935f70
__run_hrtimer kernel/time/hrtimer.c:1683 [inline]
__hrtimer_run_queues+0x5a7/0xcc0 kernel/time/hrtimer.c:1751
hrtimer_run_softirq+0x17d/0x350 kernel/time/hrtimer.c:1768
handle_softirqs+0x219/0x8f0 kernel/softirq.c:554
__do_softirq kernel/softirq.c:588 [inline]
invoke_softirq kernel/softirq.c:428 [inline]
__irq_exit_rcu kernel/softirq.c:637 [inline]
irq_exit_rcu+0xbb/0x120 kernel/softirq.c:649
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
sysvec_apic_timer_interrupt+0x95/0xb0 arch/x86/kernel/apic/apic.c:1043
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:srso_alias_safe_ret+0x0/0x7 arch/x86/lib/retpoline.S:171
Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc <48> 8d 64 24 08 c3 cc e8 f4 ff ff ff 0f 0b cc cc cc cc cc cc cc cc
RSP: 0018:ffffc900015c6a28 EFLAGS: 00000286
RAX: 0000000000000001 RBX: ffffea0001c17180 RCX: ffffffff81f6281a
RDX: ffff888018738000 RSI: ffffffff8b902440 RDI: ffffffff8b902480
RBP: 0000000000000001 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000008 R12: 0000000080000001
R13: 0000000000000001 R14: 0000000000000001 R15: ffff88807afcd580
srso_alias_return_thunk+0x5/0xfbef5 arch/x86/lib/retpoline.S:181
rcu_dynticks_curr_cpu_in_eqs include/linux/context_tracking.h:122 [inline]
rcu_is_watching+0x12/0xc0 kernel/rcu/tree.c:724
rcu_read_lock include/linux/rcupdate.h:782 [inline]
page_ext_get+0x1d7/0x310 mm/page_ext.c:521
page_table_check_clear.part.0+0x32/0x540 mm/page_table_check.c:74
page_table_check_clear mm/page_table_check.c:70 [inline]
__page_table_check_pte_clear+0x31c/0x570 mm/page_table_check.c:169
page_table_check_pte_clear include/linux/page_table_check.h:49 [inline]
ptep_get_and_clear arch/x86/include/asm/pgtable.h:1263 [inline]
ptep_clear_flush+0x14b/0x180 mm/pgtable-generic.c:99
page_vma_mkclean_one.constprop.0+0x397/0x7b0 mm/rmap.c:1029
page_mkclean_one+0x178/0x230 mm/rmap.c:1070
rmap_walk_file+0x322/0x690 mm/rmap.c:2667
rmap_walk mm/rmap.c:2685 [inline]
folio_mkclean+0x246/0x3e0 mm/rmap.c:1102
folio_clear_dirty_for_io+0x153/0x7f0 mm/page-writeback.c:2975
mpage_submit_folio+0x80/0x350 fs/ext4/inode.c:1850
mpage_map_and_submit_buffers+0x590/0xae0 fs/ext4/inode.c:2115
mpage_map_and_submit_extent fs/ext4/inode.c:2254 [inline]
ext4_do_writepages+0x186c/0x3250 fs/ext4/inode.c:2679
ext4_writepages+0x303/0x730 fs/ext4/inode.c:2768
do_writepages+0x1a6/0x7f0 mm/page-writeback.c:2634
__writeback_single_inode+0x163/0xf90 fs/fs-writeback.c:1651
writeback_sb_inodes+0x611/0x1150 fs/fs-writeback.c:1947
__writeback_inodes_wb+0xff/0x2e0 fs/fs-writeback.c:2018
wb_writeback+0x721/0xb50 fs/fs-writeback.c:2129
wb_check_old_data_flush fs/fs-writeback.c:2233 [inline]
wb_do_writeback fs/fs-writeback.c:2286 [inline]
wb_workfn+0xa54/0xf40 fs/fs-writeback.c:2314
process_one_work+0x9c8/0x1b40 kernel/workqueue.c:3248
process_scheduled_works kernel/workqueue.c:3329 [inline]
worker_thread+0x6c8/0xf30 kernel/workqueue.c:3409
kthread+0x2c4/0x3a0 kernel/kthread.c:389
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
hrtimer_run_softirq+0x17d/0x350 kernel/time/hrtimer.c:1768
handle_softirqs+0x219/0x8f0 kernel/softirq.c:554
__do_softirq kernel/softirq.c:588 [inline]
invoke_softirq kernel/softirq.c:428 [inline]
__irq_exit_rcu kernel/softirq.c:637 [inline]
irq_exit_rcu+0xbb/0x120 kernel/softirq.c:649
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
sysvec_apic_timer_interrupt+0x95/0xb0 arch/x86/kernel/apic/apic.c:1043
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:srso_alias_safe_ret+0x0/0x7 arch/x86/lib/retpoline.S:171
Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc <48> 8d 64 24 08 c3 cc e8 f4 ff ff ff 0f 0b cc cc cc cc cc cc cc cc
RSP: 0018:ffffc900015c6a28 EFLAGS: 00000286
RAX: 0000000000000001 RBX: ffffea0001c17180 RCX: ffffffff81f6281a
RDX: ffff888018738000 RSI: ffffffff8b902440 RDI: ffffffff8b902480
RBP: 0000000000000001 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000008 R12: 0000000080000001
R13: 0000000000000001 R14: 0000000000000001 R15: ffff88807afcd580
srso_alias_return_thunk+0x5/0xfbef5 arch/x86/lib/retpoline.S:181
rcu_dynticks_curr_cpu_in_eqs include/linux/context_tracking.h:122 [inline]
rcu_is_watching+0x12/0xc0 kernel/rcu/tree.c:724
rcu_read_lock include/linux/rcupdate.h:782 [inline]
page_ext_get+0x1d7/0x310 mm/page_ext.c:521
page_table_check_clear.part.0+0x32/0x540 mm/page_table_check.c:74
page_table_check_clear mm/page_table_check.c:70 [inline]
__page_table_check_pte_clear+0x31c/0x570 mm/page_table_check.c:169
page_table_check_pte_clear include/linux/page_table_check.h:49 [inline]
ptep_get_and_clear arch/x86/include/asm/pgtable.h:1263 [inline]
ptep_clear_flush+0x14b/0x180 mm/pgtable-generic.c:99
page_vma_mkclean_one.constprop.0+0x397/0x7b0 mm/rmap.c:1029
page_mkclean_one+0x178/0x230 mm/rmap.c:1070
rmap_walk_file+0x322/0x690 mm/rmap.c:2667
rmap_walk mm/rmap.c:2685 [inline]
folio_mkclean+0x246/0x3e0 mm/rmap.c:1102
folio_clear_dirty_for_io+0x153/0x7f0 mm/page-writeback.c:2975
mpage_submit_folio+0x80/0x350 fs/ext4/inode.c:1850
mpage_map_and_submit_buffers+0x590/0xae0 fs/ext4/inode.c:2115
mpage_map_and_submit_extent fs/ext4/inode.c:2254 [inline]
ext4_do_writepages+0x186c/0x3250 fs/ext4/inode.c:2679
ext4_writepages+0x303/0x730 fs/ext4/inode.c:2768
do_writepages+0x1a6/0x7f0 mm/page-writeback.c:2634
__writeback_single_inode+0x163/0xf90 fs/fs-writeback.c:1651
writeback_sb_inodes+0x611/0x1150 fs/fs-writeback.c:1947
__writeback_inodes_wb+0xff/0x2e0 fs/fs-writeback.c:2018
wb_writeback+0x721/0xb50 fs/fs-writeback.c:2129
wb_check_old_data_flush fs/fs-writeback.c:2233 [inline]
wb_do_writeback fs/fs-writeback.c:2286 [inline]
wb_workfn+0xa54/0xf40 fs/fs-writeback.c:2314
process_one_work+0x9c8/0x1b40 kernel/workqueue.c:3248
process_scheduled_works kernel/workqueue.c:3329 [inline]
worker_thread+0x6c8/0xf30 kernel/workqueue.c:3409
kthread+0x2c4/0x3a0 kernel/kthread.c:389
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 2908 Comm: kworker/u8:8 Not tainted 6.10.0-rc5-syzkaller-00253-gde0a9f448633 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:csd_lock_wait kernel/smp.c:311 [inline]
RIP: 0010:smp_call_function_many_cond+0x4ec/0x1420 kernel/smp.c:855
Code: 4d 48 b8 00 00 00 00 00 fc ff df 4d 89 f4 4c 89 f5 49 c1 ec 03 83 e5 07 49 01 c4 83 c5 03 e8 7b 38 0c 00 f3 90 41 0f b6 04 24 <40> 38 c5 7c 08 84 c0 0f 85 f7 0c 00 00 8b 43 08 31 ff 83 e0 01 41
RSP: 0018:ffffc90009bcf908 EFLAGS: 00000293
RAX: 0000000000000000 RBX: ffff8880b9344900 RCX: ffffffff8182f6bb
RDX: ffff88802bd50000 RSI: ffffffff8182f695 RDI: 0000000000000005
RBP: 0000000000000003 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000006 R12: ffffed1017268921
R13: 0000000000000001 R14: ffff8880b9344908 R15: ffff8880b923fd80
FS: 0000000000000000(0000) GS:ffff8880b9200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555569ca05c8 CR3: 000000000d97a000 CR4: 0000000000350ef0
Call Trace:
<NMI>
</NMI>
<TASK>
on_each_cpu_cond_mask+0x40/0x90 kernel/smp.c:1023
on_each_cpu include/linux/smp.h:71 [inline]
text_poke_sync arch/x86/kernel/alternative.c:2069 [inline]
text_poke_bp_batch+0x561/0x760 arch/x86/kernel/alternative.c:2362
text_poke_flush arch/x86/kernel/alternative.c:2470 [inline]
text_poke_flush arch/x86/kernel/alternative.c:2467 [inline]
text_poke_finish+0x30/0x40 arch/x86/kernel/alternative.c:2477
arch_jump_label_transform_apply+0x1c/0x30 arch/x86/kernel/jump_label.c:146
jump_label_update+0x1d7/0x400 kernel/jump_label.c:882
static_key_enable_cpuslocked+0x1b7/0x270 kernel/jump_label.c:205
static_key_enable+0x1a/0x20 kernel/jump_label.c:218
toggle_allocation_gate mm/kfence/core.c:826 [inline]
toggle_allocation_gate+0xf8/0x250 mm/kfence/core.c:818
process_one_work+0x9c8/0x1b40 kernel/workqueue.c:3248
process_scheduled_works kernel/workqueue.c:3329 [inline]
worker_thread+0x6c8/0xf30 kernel/workqueue.c:3409
kthread+0x2c4/0x3a0 kernel/kthread.c:389
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup