INFO: task hung in flush_work


syzbot

Apr 21, 2019, 3:04:06 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 68d7a45e Linux 4.14.113
git tree: linux-4.14.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15e4897b200000
kernel config: https://syzkaller.appspot.com/x/.config?x=dbf1fde4d7489e1c
dashboard link: https://syzkaller.appspot.com/bug?extid=d920ef3da686d4330a35
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=17e0007f200000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1261d61d200000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+d920ef...@syzkaller.appspotmail.com

INFO: task syz-executor454:7556 blocked for more than 140 seconds.
Not tainted 4.14.113 #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor454 D28848 7556 7105 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2807 [inline]
__schedule+0x7be/0x1cf0 kernel/sched/core.c:3383
schedule+0x92/0x1c0 kernel/sched/core.c:3427
schedule_timeout+0x93d/0xe10 kernel/time/timer.c:1721
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
flush_work+0x3eb/0x730 kernel/workqueue.c:2885
__cancel_work_timer+0x2f0/0x480 kernel/workqueue.c:2956
cancel_work_sync+0x18/0x20 kernel/workqueue.c:2992
p9_conn_destroy net/9p/trans_fd.c:872 [inline]
p9_fd_close+0x2a1/0x450 net/9p/trans_fd.c:899
p9_client_create+0x793/0x1130 net/9p/client.c:1095
v9fs_session_init+0x1dc/0x1630 fs/9p/v9fs.c:422
v9fs_mount+0x7d/0x870 fs/9p/vfs_super.c:135
mount_fs+0x9d/0x2a7 fs/super.c:1237
vfs_kern_mount.part.0+0x5e/0x3d0 fs/namespace.c:1046
vfs_kern_mount fs/namespace.c:1036 [inline]
do_new_mount fs/namespace.c:2549 [inline]
do_mount+0x417/0x27d0 fs/namespace.c:2879
SYSC_mount fs/namespace.c:3095 [inline]
SyS_mount+0xab/0x120 fs/namespace.c:3072
do_syscall_64+0x1eb/0x630 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x4469a9
RSP: 002b:00007f42d1314db8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00000000006dbc48 RCX: 00000000004469a9
RDX: 0000000020000100 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00000000006dbc40 R08: 0000000020000140 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dbc4c
R13: 00007ffd4ac815bf R14: 00007f42d13159c0 R15: 0000000000000000
INFO: task syz-executor454:7614 blocked for more than 140 seconds.
Not tainted 4.14.113 #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor454 D28848 7614 7106 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2807 [inline]
__schedule+0x7be/0x1cf0 kernel/sched/core.c:3383
schedule+0x92/0x1c0 kernel/sched/core.c:3427
schedule_timeout+0x93d/0xe10 kernel/time/timer.c:1721
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
flush_work+0x3eb/0x730 kernel/workqueue.c:2885
__cancel_work_timer+0x2f0/0x480 kernel/workqueue.c:2956
cancel_work_sync+0x18/0x20 kernel/workqueue.c:2992
p9_conn_destroy net/9p/trans_fd.c:872 [inline]
p9_fd_close+0x2a1/0x450 net/9p/trans_fd.c:899
p9_client_create+0x793/0x1130 net/9p/client.c:1095
v9fs_session_init+0x1dc/0x1630 fs/9p/v9fs.c:422
v9fs_mount+0x7d/0x870 fs/9p/vfs_super.c:135
mount_fs+0x9d/0x2a7 fs/super.c:1237
vfs_kern_mount.part.0+0x5e/0x3d0 fs/namespace.c:1046
vfs_kern_mount fs/namespace.c:1036 [inline]
do_new_mount fs/namespace.c:2549 [inline]
do_mount+0x417/0x27d0 fs/namespace.c:2879
SYSC_mount fs/namespace.c:3095 [inline]
SyS_mount+0xab/0x120 fs/namespace.c:3072
do_syscall_64+0x1eb/0x630 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x4469a9
RSP: 002b:00007f42d12d2db8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00000000006dbc68 RCX: 00000000004469a9
RDX: 0000000020000100 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00000000006dbc60 R08: 0000000020000140 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dbc6c
R13: 00007ffd4ac815bf R14: 00007f42d12d39c0 R15: 0000000000000000
INFO: task syz-executor454:7606 blocked for more than 140 seconds.
Not tainted 4.14.113 #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor454 D28848 7606 7103 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2807 [inline]
__schedule+0x7be/0x1cf0 kernel/sched/core.c:3383
schedule+0x92/0x1c0 kernel/sched/core.c:3427
schedule_timeout+0x93d/0xe10 kernel/time/timer.c:1721
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
flush_work+0x3eb/0x730 kernel/workqueue.c:2885
__cancel_work_timer+0x2f0/0x480 kernel/workqueue.c:2956
cancel_work_sync+0x18/0x20 kernel/workqueue.c:2992
p9_conn_destroy net/9p/trans_fd.c:872 [inline]
p9_fd_close+0x2a1/0x450 net/9p/trans_fd.c:899
p9_client_create+0x793/0x1130 net/9p/client.c:1095
v9fs_session_init+0x1dc/0x1630 fs/9p/v9fs.c:422
v9fs_mount+0x7d/0x870 fs/9p/vfs_super.c:135
mount_fs+0x9d/0x2a7 fs/super.c:1237
vfs_kern_mount.part.0+0x5e/0x3d0 fs/namespace.c:1046
vfs_kern_mount fs/namespace.c:1036 [inline]
do_new_mount fs/namespace.c:2549 [inline]
do_mount+0x417/0x27d0 fs/namespace.c:2879
SYSC_mount fs/namespace.c:3095 [inline]
SyS_mount+0xab/0x120 fs/namespace.c:3072
do_syscall_64+0x1eb/0x630 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x4469a9
RSP: 002b:00007f42d1314db8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00000000006dbc48 RCX: 00000000004469a9
RDX: 0000000020000100 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00000000006dbc40 R08: 0000000020000140 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dbc4c
R13: 00007ffd4ac815bf R14: 00007f42d13159c0 R15: 0000000000000000
INFO: task syz-executor454:7612 blocked for more than 140 seconds.
Not tainted 4.14.113 #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor454 D28848 7612 7104 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2807 [inline]
__schedule+0x7be/0x1cf0 kernel/sched/core.c:3383
schedule+0x92/0x1c0 kernel/sched/core.c:3427
schedule_timeout+0x93d/0xe10 kernel/time/timer.c:1721
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
flush_work+0x3eb/0x730 kernel/workqueue.c:2885
__cancel_work_timer+0x2f0/0x480 kernel/workqueue.c:2956
cancel_work_sync+0x18/0x20 kernel/workqueue.c:2992
p9_conn_destroy net/9p/trans_fd.c:872 [inline]
p9_fd_close+0x2a1/0x450 net/9p/trans_fd.c:899
p9_client_create+0x793/0x1130 net/9p/client.c:1095
v9fs_session_init+0x1dc/0x1630 fs/9p/v9fs.c:422
v9fs_mount+0x7d/0x870 fs/9p/vfs_super.c:135
mount_fs+0x9d/0x2a7 fs/super.c:1237
vfs_kern_mount.part.0+0x5e/0x3d0 fs/namespace.c:1046
vfs_kern_mount fs/namespace.c:1036 [inline]
do_new_mount fs/namespace.c:2549 [inline]
do_mount+0x417/0x27d0 fs/namespace.c:2879
SYSC_mount fs/namespace.c:3095 [inline]
SyS_mount+0xab/0x120 fs/namespace.c:3072
do_syscall_64+0x1eb/0x630 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x4469a9
RSP: 002b:00007f42d1314db8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00000000006dbc48 RCX: 00000000004469a9
RDX: 0000000020000100 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00000000006dbc40 R08: 0000000020000140 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dbc4c
R13: 00007ffd4ac815bf R14: 00007f42d13159c0 R15: 0000000000000000
INFO: task syz-executor454:7616 blocked for more than 140 seconds.
Not tainted 4.14.113 #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor454 D28976 7616 7102 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2807 [inline]
__schedule+0x7be/0x1cf0 kernel/sched/core.c:3383
schedule+0x92/0x1c0 kernel/sched/core.c:3427
schedule_timeout+0x93d/0xe10 kernel/time/timer.c:1721
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
flush_work+0x3eb/0x730 kernel/workqueue.c:2885
__cancel_work_timer+0x2f0/0x480 kernel/workqueue.c:2956
cancel_work_sync+0x18/0x20 kernel/workqueue.c:2992
p9_conn_destroy net/9p/trans_fd.c:872 [inline]
p9_fd_close+0x2a1/0x450 net/9p/trans_fd.c:899
p9_client_create+0x793/0x1130 net/9p/client.c:1095
v9fs_session_init+0x1dc/0x1630 fs/9p/v9fs.c:422
v9fs_mount+0x7d/0x870 fs/9p/vfs_super.c:135
mount_fs+0x9d/0x2a7 fs/super.c:1237
vfs_kern_mount.part.0+0x5e/0x3d0 fs/namespace.c:1046
vfs_kern_mount fs/namespace.c:1036 [inline]
do_new_mount fs/namespace.c:2549 [inline]
do_mount+0x417/0x27d0 fs/namespace.c:2879
SYSC_mount fs/namespace.c:3095 [inline]
SyS_mount+0xab/0x120 fs/namespace.c:3072
do_syscall_64+0x1eb/0x630 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x4469a9
RSP: 002b:00007f42d1314db8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00000000006dbc48 RCX: 00000000004469a9
RDX: 0000000020000100 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00000000006dbc40 R08: 0000000020000140 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dbc4c
R13: 00007ffd4ac815bf R14: 00007f42d13159c0 R15: 0000000000000000
INFO: task syz-executor454:7621 blocked for more than 140 seconds.
Not tainted 4.14.113 #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor454 D28848 7621 7101 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2807 [inline]
__schedule+0x7be/0x1cf0 kernel/sched/core.c:3383
schedule+0x92/0x1c0 kernel/sched/core.c:3427
schedule_timeout+0x93d/0xe10 kernel/time/timer.c:1721
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
flush_work+0x3eb/0x730 kernel/workqueue.c:2885
__cancel_work_timer+0x2f0/0x480 kernel/workqueue.c:2956
cancel_work_sync+0x18/0x20 kernel/workqueue.c:2992
p9_conn_destroy net/9p/trans_fd.c:872 [inline]
p9_fd_close+0x2a1/0x450 net/9p/trans_fd.c:899
p9_client_create+0x793/0x1130 net/9p/client.c:1095
v9fs_session_init+0x1dc/0x1630 fs/9p/v9fs.c:422
v9fs_mount+0x7d/0x870 fs/9p/vfs_super.c:135
mount_fs+0x9d/0x2a7 fs/super.c:1237
vfs_kern_mount.part.0+0x5e/0x3d0 fs/namespace.c:1046
vfs_kern_mount fs/namespace.c:1036 [inline]
do_new_mount fs/namespace.c:2549 [inline]
do_mount+0x417/0x27d0 fs/namespace.c:2879
SYSC_mount fs/namespace.c:3095 [inline]
SyS_mount+0xab/0x120 fs/namespace.c:3072
do_syscall_64+0x1eb/0x630 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x4469a9
RSP: 002b:00007f42d1314db8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00000000006dbc48 RCX: 00000000004469a9
RDX: 0000000020000100 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00000000006dbc40 R08: 0000000020000140 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dbc4c
R13: 00007ffd4ac815bf R14: 00007f42d13159c0 R15: 0000000000000000

Showing all locks held in the system:
2 locks held by kworker/0:1/944:
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] work_static
include/linux/workqueue.h:199 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] set_work_data
kernel/workqueue.c:619 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
process_one_work+0x76e/0x1610 kernel/workqueue.c:2085
#1: ((&m->rq)){+.+.}, at: [<ffffffff813d134b>]
process_one_work+0x7ab/0x1610 kernel/workqueue.c:2089
1 lock held by khungtaskd/1008:
#0: (tasklist_lock){.+.+}, at: [<ffffffff81486f98>]
debug_show_all_locks+0x7f/0x21f kernel/locking/lockdep.c:4544
2 locks held by kworker/1:2/2677:
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] work_static
include/linux/workqueue.h:199 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] set_work_data
kernel/workqueue.c:619 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
process_one_work+0x76e/0x1610 kernel/workqueue.c:2085
#1: ((&m->rq)){+.+.}, at: [<ffffffff813d134b>]
process_one_work+0x7ab/0x1610 kernel/workqueue.c:2089
2 locks held by kworker/0:2/3144:
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] work_static
include/linux/workqueue.h:199 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] set_work_data
kernel/workqueue.c:619 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
process_one_work+0x76e/0x1610 kernel/workqueue.c:2085
#1: ((&m->rq)){+.+.}, at: [<ffffffff813d134b>]
process_one_work+0x7ab/0x1610 kernel/workqueue.c:2089
1 lock held by rsyslogd/6949:
#0: (&f->f_pos_lock){+.+.}, at: [<ffffffff81942cbb>]
__fdget_pos+0xab/0xd0 fs/file.c:769
2 locks held by getty/7072:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861b0323>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:377
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff8310c666>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/7073:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861b0323>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:377
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff8310c666>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/7074:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861b0323>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:377
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff8310c666>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/7075:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861b0323>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:377
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff8310c666>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/7076:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861b0323>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:377
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff8310c666>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/7077:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861b0323>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:377
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff8310c666>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by getty/7078:
#0: (&tty->ldisc_sem){++++}, at: [<ffffffff861b0323>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:377
#1: (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff8310c666>]
n_tty_read+0x1e6/0x17b0 drivers/tty/n_tty.c:2156
2 locks held by kworker/0:0/7201:
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] work_static
include/linux/workqueue.h:199 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] set_work_data
kernel/workqueue.c:619 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
process_one_work+0x76e/0x1610 kernel/workqueue.c:2085
#1: ((&m->rq)){+.+.}, at: [<ffffffff813d134b>]
process_one_work+0x7ab/0x1610 kernel/workqueue.c:2089
2 locks held by kworker/0:3/7202:
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] work_static
include/linux/workqueue.h:199 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] set_work_data
kernel/workqueue.c:619 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
process_one_work+0x76e/0x1610 kernel/workqueue.c:2085
#1: ((&m->rq)){+.+.}, at: [<ffffffff813d134b>]
process_one_work+0x7ab/0x1610 kernel/workqueue.c:2089
2 locks held by kworker/0:4/7381:
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] work_static
include/linux/workqueue.h:199 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>] set_work_data
kernel/workqueue.c:619 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
#0: ("events"){+.+.}, at: [<ffffffff813d130e>]
process_one_work+0x76e/0x1610 kernel/workqueue.c:2085
#1: ((&m->rq)){+.+.}, at: [<ffffffff813d134b>]
process_one_work+0x7ab/0x1610 kernel/workqueue.c:2089

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1008 Comm: khungtaskd Not tainted 4.14.113 #3
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x138/0x19c lib/dump_stack.c:53
nmi_cpu_backtrace.cold+0x57/0x94 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x141/0x189 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:140 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:195 [inline]
watchdog+0x5e7/0xb90 kernel/hung_task.c:274
kthread+0x31c/0x430 kernel/kthread.c:232
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:402
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at pc 0xffffffff861b0e02

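For context on the call path above: every blocked task is in mount(2) of a 9p filesystem over the fd transport, where p9_client_create() fails and the teardown path p9_fd_close() -> p9_conn_destroy() -> cancel_work_sync() waits in flush_work() on the connection's read work, which never completes (the kworkers listed are all stuck holding (&m->rq)). As a purely illustrative sketch, not the linked C reproducer, here is roughly how such a 9p fd-transport mount is set up from userspace; the /mnt target, the pipe descriptors and the option string are assumptions:

#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
	int fds[2];
	char opts[64];

	/* A pipe pair stays open but never delivers a 9P version reply,
	 * so p9_client_create() gives up and tears the connection down. */
	if (pipe(fds))
		return 1;
	snprintf(opts, sizeof(opts),
		 "trans=fd,rfdno=%d,wfdno=%d", fds[0], fds[1]);
	/* Needs CAP_SYS_ADMIN (or a user + mount namespace). */
	if (mount(NULL, "/mnt", "9p", 0, opts))
		perror("mount");
	return 0;
}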

---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches