INFO: task hung in _rcu_barrier


syzbot

Sep 3, 2018, 5:21:05 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 360bd62dc494 Merge tag 'linux-watchdog-4.19-rc2' of git://..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=133d9d1e400000
kernel config: https://syzkaller.appspot.com/x/.config?x=531a917630d2a492
dashboard link: https://syzkaller.appspot.com/bug?extid=75cc60c8e9ef0753b83f
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
CC: [ava...@virtuozzo.com da...@davemloft.net
ebie...@xmission.com edum...@google.com ktk...@virtuozzo.com
linux-...@vger.kernel.org net...@vger.kernel.org tyh...@canonical.com
wi...@infradead.org]

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+75cc60...@syzkaller.appspotmail.com

INFO: task kworker/u4:4:7138 blocked for more than 140 seconds.
Not tainted 4.19.0-rc1+ #218
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u4:4 D15352 7138 2 0x80000000
Workqueue: netns cleanup_net
Call Trace:
context_switch kernel/sched/core.c:2825 [inline]
__schedule+0x87c/0x1df0 kernel/sched/core.c:3473
schedule+0xfb/0x450 kernel/sched/core.c:3517
schedule_timeout+0x1cc/0x260 kernel/time/timer.c:1780
do_wait_for_common kernel/sched/completion.c:83 [inline]
__wait_for_common kernel/sched/completion.c:104 [inline]
wait_for_common kernel/sched/completion.c:115 [inline]
wait_for_completion+0x430/0x8d0 kernel/sched/completion.c:136
_rcu_barrier+0x48c/0x790 kernel/rcu/tree.c:3478
rcu_barrier_sched kernel/rcu/tree.c:3502 [inline]
rcu_barrier+0x10/0x20 kernel/rcu/tree_plugin.h:995
cleanup_net+0x637/0xb60 net/core/net_namespace.c:562
process_one_work+0xc73/0x1aa0 kernel/workqueue.c:2153
worker_thread+0x189/0x13c0 kernel/workqueue.c:2296
kthread+0x35a/0x420 kernel/kthread.c:246
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:413

Showing all locks held in the system:
1 lock held by khungtaskd/792:
#0: 000000006e83ad8c (rcu_read_lock){....}, at:
debug_show_all_locks+0xd0/0x428 kernel/locking/lockdep.c:4436
1 lock held by rsyslogd/4542:
2 locks held by getty/4632:
#0: 00000000564e7fed (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:353
#1: 000000008f0981e8 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x335/0x1ce0 drivers/tty/n_tty.c:2140
2 locks held by getty/4633:
#0: 000000005c02b658 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:353
#1: 00000000b6a7d1f8 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x335/0x1ce0 drivers/tty/n_tty.c:2140
2 locks held by getty/4634:
#0: 00000000f977d9b5 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:353
#1: 00000000f3399091 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x335/0x1ce0 drivers/tty/n_tty.c:2140
2 locks held by getty/4635:
#0: 000000003be9ec4d (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:353
#1: 00000000d002d58a (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x335/0x1ce0 drivers/tty/n_tty.c:2140
2 locks held by getty/4636:
#0: 00000000688e4521 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:353
#1: 00000000556a6d08 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x335/0x1ce0 drivers/tty/n_tty.c:2140
2 locks held by getty/4637:
#0: 000000002362149b (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:353
#1: 000000008b12ad64 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x335/0x1ce0 drivers/tty/n_tty.c:2140
2 locks held by getty/4638:
#0: 00000000719a8a9c (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:353
#1: 000000003a513f59 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x335/0x1ce0 drivers/tty/n_tty.c:2140
3 locks held by kworker/u4:4/7138:
#0: 00000000876499dc ((wq_completion)"%s""netns"){+.+.}, at:
__write_once_size include/linux/compiler.h:215 [inline]
#0: 00000000876499dc ((wq_completion)"%s""netns"){+.+.}, at:
arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: 00000000876499dc ((wq_completion)"%s""netns"){+.+.}, at: atomic64_set
include/asm-generic/atomic-instrumented.h:40 [inline]
#0: 00000000876499dc ((wq_completion)"%s""netns"){+.+.}, at:
atomic_long_set include/asm-generic/atomic-long.h:59 [inline]
#0: 00000000876499dc ((wq_completion)"%s""netns"){+.+.}, at: set_work_data
kernel/workqueue.c:617 [inline]
#0: 00000000876499dc ((wq_completion)"%s""netns"){+.+.}, at:
set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
#0: 00000000876499dc ((wq_completion)"%s""netns"){+.+.}, at:
process_one_work+0xb44/0x1aa0 kernel/workqueue.c:2124
#1: 00000000fdecbd34 (net_cleanup_work){+.+.}, at:
process_one_work+0xb9b/0x1aa0 kernel/workqueue.c:2128
#2: 00000000d6ef345e (rcu_sched_state.barrier_mutex){+.+.}, at:
_rcu_barrier+0x14a/0x790 kernel/rcu/tree.c:3413
1 lock held by syz-executor4/11650:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 792 Comm: khungtaskd Not tainted 4.19.0-rc1+ #218
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1c9/0x2b4 lib/dump_stack.c:113
nmi_cpu_backtrace.cold.3+0x48/0x88 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x151/0x192 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:144 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:204 [inline]
watchdog+0xb39/0x1040 kernel/hung_task.c:265
kthread+0x35a/0x420 kernel/kthread.c:246
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:413
Sending NMI from CPU 1 to CPUs 0:
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 1.370
msecs
NMI backtrace for cpu 0
CPU: 0 PID: 11650 Comm: syz-executor4 Not tainted 4.19.0-rc1+ #218
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
RIP: 0010:__wrmsr arch/x86/include/asm/msr.h:105 [inline]
RIP: 0010:native_write_msr+0xa/0x30 arch/x86/include/asm/msr.h:162
Code: 5d c3 0f 21 c8 5d c3 0f 21 d0 5d c3 0f 21 d8 5d c3 0f 21 f0 5d c3 0f
0b 0f 1f 84 00 00 00 00 00 55 89 f9 89 f0 48 89 e5 0f 30 <0f> 1f 44 00 00
5d c3 48 c1 e2 20 89 f6 48 09 d6 31 d2 e8 df 66 50
RSP: 0018:ffff8801db007d38 EFLAGS: 00000046
RAX: 000000000000003e RBX: 0000000000000838 RCX: 0000000000000838
RDX: 0000000000000000 RSI: 000000000000003e RDI: 0000000000000838
RBP: ffff8801db007d38 R08: ffff8801ba3044c0 R09: ffffffff88f7801b
R10: fffffbfff11ef001 R11: 0000000000000000 R12: 000000000000003e
R13: 0000000000000000 R14: 0000000000000000 R15: ffff8801db025d04
FS: 00007fc7d31e6700(0000) GS:ffff8801db000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 00000001d73ec000 CR4: 00000000001426f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
paravirt_write_msr arch/x86/include/asm/paravirt.h:117 [inline]
native_apic_msr_write+0x5b/0x80 arch/x86/include/asm/apic.h:209
apic_write arch/x86/include/asm/apic.h:397 [inline]
lapic_next_event+0x5a/0x90 arch/x86/kernel/apic/apic.c:461
clockevents_program_event+0x251/0x370 kernel/time/clockevents.c:344
tick_program_event+0xab/0x130 kernel/time/tick-oneshot.c:48
hrtimer_interrupt+0x348/0x750 kernel/time/hrtimer.c:1531
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1029 [inline]
smp_apic_timer_interrupt+0x16d/0x6a0 arch/x86/kernel/apic/apic.c:1054
apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:864
</IRQ>
RIP: 0010:arch_local_irq_enable arch/x86/include/asm/paravirt.h:798 [inline]
RIP: 0010:vcpu_enter_guest+0x1263/0x61a0 arch/x86/kvm/x86.c:7610
Code: 89 fa 48 c1 ea 03 80 3c 02 00 0f 85 c9 46 00 00 48 83 3d d7 50 03 07
00 0f 84 05 3d 00 00 e8 94 53 6e 00 fb 66 0f 1f 44 00 00 <48> b8 00 00 00
00 00 fc ff df 48 89 da 48 c1 ea 03 65 ff 0d c5 80
RSP: 0018:ffff88018c3af4f0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
RAX: 0000000000040000 RBX: ffff88018e2a00c0 RCX: ffffc90003c7b000
RDX: 0000000000040000 RSI: ffffffff810e6cec RDI: ffffffff8811bdb8
RBP: ffff88018c3af840 R08: ffff8801ba304d00 R09: 0000000000000006
R10: ffff8801ba3044c0 R11: 0000000000000000 R12: ffff8801ba3044c0
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
vcpu_run arch/x86/kvm/x86.c:7693 [inline]
kvm_arch_vcpu_ioctl_run+0x373/0x16d0 arch/x86/kvm/x86.c:7870
kvm_vcpu_ioctl+0x7b8/0x1280 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2590
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:501 [inline]
do_vfs_ioctl+0x1de/0x1720 fs/ioctl.c:685
ksys_ioctl+0xa9/0xd0 fs/ioctl.c:702
__do_sys_ioctl fs/ioctl.c:709 [inline]
__se_sys_ioctl fs/ioctl.c:707 [inline]
__x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:707
do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x457099
Code: fd b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
ff 0f 83 cb b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fc7d31e5c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fc7d31e66d4 RCX: 0000000000457099
RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000006
RBP: 00000000009300a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000004cf460 R14: 00000000004c5786 R15: 0000000000000000


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#bug-status-tracking for how to communicate with
syzbot.

syzbot

Apr 13, 2019, 3:47:04 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not occurred for a while; there is no reproducer and no activity.