INFO: task hung in kvm_mmu_pre_destroy_vm


syzbot

Nov 26, 2019, 12:57:09 AM11/26/19
to syzkaller...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 43598c57 Linux 4.14.156
git tree: linux-4.14.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1449c8a2e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=1a4783d222b56f4
dashboard link: https://syzkaller.appspot.com/bug?extid=d015637785dc5569ccf7
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+d01563...@syzkaller.appspotmail.com

audit: type=1400 audit(1574744014.029:50): avc: denied { map } for
pid=7305 comm="syz-executor.3" path=2F6D656D66643A202864656C6574656429
dev="tmpfs" ino=28272
scontext=unconfined_u:system_r:insmod_t:s0-s0:c0.c1023
tcontext=unconfined_u:object_r:tmpfs_t:s0 tclass=file permissive=1
IPVS: ftp: loaded support on port[0] = 21
INFO: task syz-executor.2:7271 blocked for more than 140 seconds.
Not tainted 4.14.156-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.2 D28336 7271 6851 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2808 [inline]
__schedule+0x7b8/0x1cd0 kernel/sched/core.c:3384
schedule+0x92/0x1c0 kernel/sched/core.c:3428
schedule_timeout+0x93b/0xe10 kernel/time/timer.c:1723
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
kthread_stop+0xda/0x650 kernel/kthread.c:530
kvm_mmu_pre_destroy_vm+0x46/0x57 arch/x86/kvm/mmu.c:5850
kvm_arch_pre_destroy_vm+0x16/0x20 arch/x86/kvm/x86.c:8513
kvm_destroy_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:749 [inline]
kvm_put_kvm+0x320/0xaa0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:786
kvm_vm_release+0x44/0x60 arch/x86/kvm/../../../virt/kvm/kvm_main.c:797
__fput+0x275/0x7a0 fs/file_table.c:210
____fput+0x16/0x20 fs/file_table.c:244
task_work_run+0x114/0x190 kernel/task_work.c:113
tracehook_notify_resume include/linux/tracehook.h:191 [inline]
exit_to_usermode_loop+0x1da/0x220 arch/x86/entry/common.c:164
prepare_exit_to_usermode arch/x86/entry/common.c:199 [inline]
syscall_return_slowpath arch/x86/entry/common.c:270 [inline]
do_syscall_64+0x4bc/0x640 arch/x86/entry/common.c:297
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x45a639
RSP: 002b:00007f4800bdac78 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: fffffffffffffff4 RBX: 0000000000000003 RCX: 000000000045a639
RDX: 0000000000000000 RSI: 000000000000ae01 RDI: 0000000000000003
RBP: 000000000075bf20 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f4800bdb6d4
R13: 00000000004c38d2 R14: 00000000004d7dc8 R15: 00000000ffffffff

Showing all locks held in the system:
1 lock held by khungtaskd/1018:
#0: (tasklist_lock){.+.+}, at: [<ffffffff8148bca8>]
debug_show_all_locks+0x7f/0x21f kernel/locking/lockdep.c:4544
2 locks held by getty/6800:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861ced23>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff831144c6>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/6801:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861ced23>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff831144c6>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/6802:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861ced23>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff831144c6>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/6803:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861ced23>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff831144c6>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/6804:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861ced23>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff831144c6>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/6805:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861ced23>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff831144c6>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/6806:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861ced23>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff831144c6>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 1018 Comm: khungtaskd Not tainted 4.14.156-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x142/0x197 lib/dump_stack.c:58
nmi_cpu_backtrace.cold+0x57/0x94 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x141/0x189 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:140 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:195 [inline]
watchdog+0x5e7/0xb90 kernel/hung_task.c:274
kthread+0x319/0x430 kernel/kthread.c:232
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at pc 0xffffffff861cf80e


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Apr 29, 2020, 11:32:10 PM4/29/20
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.