INFO: task hung in xfrm_user_net_exit


syzbot

Apr 8, 2018, 3:02:02 AM4/8/18
to syzkaller-upst...@googlegroups.com
Hello,

syzbot hit the following crash on upstream commit
3fd14cdcc05a682b03743683ce3a726898b20555 (Fri Apr 6 19:15:41 2018 +0000)
Merge tag 'mtd/for-4.17' of git://git.infradead.org/linux-mtd
syzbot dashboard link:
https://syzkaller.appspot.com/bug?extid=8902adfc6f2bdbce20cc

Unfortunately, I don't have any reproducer for this crash yet.
Raw console output:
https://syzkaller.appspot.com/x/log.txt?id=5859489857142784
Kernel config:
https://syzkaller.appspot.com/x/.config?id=-5813481738265533882
compiler: gcc (GCC) 8.0.1 20180301 (experimental)
CC: [da...@davemloft.net her...@gondor.apana.org.au
linux-...@vger.kernel.org net...@vger.kernel.org
steffen....@secunet.com]

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+8902ad...@syzkaller.appspotmail.com
It will help syzbot understand when the bug is fixed. See footer for
details.
If you forward the report, please keep this part and the footer.

INFO: task kworker/u4:1:22 blocked for more than 120 seconds.
Not tainted 4.16.0+ #4
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u4:1 D15736 22 2 0x80000000
Workqueue: netns cleanup_net
Call Trace:
context_switch kernel/sched/core.c:2848 [inline]
__schedule+0x807/0x1e40 kernel/sched/core.c:3490
schedule+0xef/0x430 kernel/sched/core.c:3549
schedule_timeout+0x1b5/0x240 kernel/time/timer.c:1777
do_wait_for_common kernel/sched/completion.c:83 [inline]
__wait_for_common kernel/sched/completion.c:104 [inline]
wait_for_common kernel/sched/completion.c:115 [inline]
wait_for_completion+0x3e7/0x870 kernel/sched/completion.c:136
__wait_rcu_gp+0x257/0x350 kernel/rcu/update.c:414
synchronize_sched.part.64+0xe3/0xf0 kernel/rcu/tree.c:3209
synchronize_sched+0x76/0xf0 kernel/rcu/tree.c:3210
synchronize_rcu include/linux/rcupdate.h:94 [inline]
synchronize_net+0x52/0x60 net/core/dev.c:8496
xfrm_user_net_exit+0x106/0x200 net/xfrm/xfrm_user.c:3248
ops_exit_list.isra.7+0x105/0x160 net/core/net_namespace.c:155
cleanup_net+0x51d/0xb20 net/core/net_namespace.c:523
process_one_work+0xc1e/0x1b50 kernel/workqueue.c:2145
worker_thread+0x1cc/0x1440 kernel/workqueue.c:2279
kthread+0x345/0x410 kernel/kthread.c:238
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:411

Showing all locks held in the system:
3 locks held by kworker/u4:1/22:
#0: 00000000a4e4ac85 ((wq_completion)"%s""netns"){+.+.}, at:
__write_once_size include/linux/compiler.h:215 [inline]
#0: 00000000a4e4ac85 ((wq_completion)"%s""netns"){+.+.}, at:
arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: 00000000a4e4ac85 ((wq_completion)"%s""netns"){+.+.}, at: atomic64_set
include/asm-generic/atomic-instrumented.h:40 [inline]
#0: 00000000a4e4ac85 ((wq_completion)"%s""netns"){+.+.}, at:
atomic_long_set include/asm-generic/atomic-long.h:57 [inline]
#0: 00000000a4e4ac85 ((wq_completion)"%s""netns"){+.+.}, at: set_work_data
kernel/workqueue.c:617 [inline]
#0: 00000000a4e4ac85 ((wq_completion)"%s""netns"){+.+.}, at:
set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
#0: 00000000a4e4ac85 ((wq_completion)"%s""netns"){+.+.}, at:
process_one_work+0xaef/0x1b50 kernel/workqueue.c:2116
#1: 000000009d5b1ead (net_cleanup_work){+.+.}, at:
process_one_work+0xb46/0x1b50 kernel/workqueue.c:2120
#2: 00000000312eea3c (pernet_ops_rwsem){++++}, at: cleanup_net+0x11a/0xb20
net/core/net_namespace.c:490
2 locks held by khungtaskd/882:
#0: 0000000091d56f31 (rcu_read_lock){....}, at:
check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
#0: 0000000091d56f31 (rcu_read_lock){....}, at: watchdog+0x1ff/0xf60
kernel/hung_task.c:249
#1: 00000000c00eb276 (tasklist_lock){.+.+}, at:
debug_show_all_locks+0xde/0x34a kernel/locking/lockdep.c:4470
2 locks held by getty/4452:
#0: 0000000056700cec (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: 00000000e846b08e (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4453:
#0: 0000000022d7a644 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: 000000009f1e3718 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4454:
#0: 0000000095f4d3db (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: 000000004456b5e3 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4455:
#0: 000000004dd80ca9 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: 000000007e766c33 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4456:
#0: 0000000006b15edd (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: 00000000e8f58fe9 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4457:
#0: 00000000144b0e49 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: 00000000a81df11e (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131
2 locks held by getty/4458:
#0: 00000000d391102c (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: 00000000d759ecd9 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x321/0x1cc0 drivers/tty/n_tty.c:2131

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 882 Comm: khungtaskd Not tainted 4.16.0+ #4
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1b9/0x294 lib/dump_stack.c:113
nmi_cpu_backtrace.cold.4+0x19/0xce lib/nmi_backtrace.c:103
nmi_trigger_cpumask_backtrace+0x151/0x192 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
check_hung_task kernel/hung_task.c:132 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
watchdog+0xc10/0xf60 kernel/hung_task.c:249
kthread+0x345/0x410 kernel/kthread.c:238
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:411
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 10516 Comm: syz-executor5 Not tainted 4.16.0+ #4
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
RIP: 0010:__lock_acquire+0x89e/0x5130 kernel/locking/lockdep.c:3421
RSP: 0018:ffff8801d0f06660 EFLAGS: 00000046
RAX: dffffc0000000000 RBX: 000000000000046c RCX: 1ffff100352b0d7c
RDX: d44998b9d334489e RSI: ffff8801a9586be0 RDI: ffffffff8a0312e0
RBP: ffff8801d0f069e8 R08: 0000000000000008 R09: 0000000000000001
R10: ffff8801a9586be0 R11: ffff8801a9586340 R12: 000000000000046c
R13: 0000000000000004 R14: 000000000000000e R15: 0000000000000000
FS: 00007f992927c700(0000) GS:ffff8801db000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe2a01ea000 CR3: 0000000008a6a000 CR4: 00000000001406f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
lock_acquire+0x1dc/0x520 kernel/locking/lockdep.c:3920
__raw_read_lock include/linux/rwlock_api_smp.h:149 [inline]
_raw_read_lock+0x2d/0x40 kernel/locking/spinlock.c:216
snd_pcm_stream_lock+0xaf/0xe0 sound/core/pcm_native.c:116
snd_pcm_stream_lock_irq+0x7d/0xf0 sound/core/pcm_native.c:152
__snd_pcm_lib_xfer+0x345/0x1d10 sound/core/pcm_lib.c:2162
snd_pcm_oss_write3+0xe9/0x220 sound/core/oss/pcm_oss.c:1236
snd_pcm_oss_write2+0x34c/0x460 sound/core/oss/pcm_oss.c:1373
snd_pcm_oss_sync1+0x332/0x5a0 sound/core/oss/pcm_oss.c:1606
snd_pcm_oss_sync.isra.29+0x790/0x980 sound/core/oss/pcm_oss.c:1682
snd_pcm_oss_release+0x214/0x290 sound/core/oss/pcm_oss.c:2559
__fput+0x34d/0x890 fs/file_table.c:209
____fput+0x15/0x20 fs/file_table.c:243
task_work_run+0x1e4/0x290 kernel/task_work.c:113
exit_task_work include/linux/task_work.h:22 [inline]
do_exit+0x1aee/0x2730 kernel/exit.c:865
do_group_exit+0x16f/0x430 kernel/exit.c:968
get_signal+0x886/0x1960 kernel/signal.c:2469
do_signal+0x98/0x2040 arch/x86/kernel/signal.c:810
exit_to_usermode_loop+0x28a/0x310 arch/x86/entry/common.c:162
prepare_exit_to_usermode arch/x86/entry/common.c:196 [inline]
syscall_return_slowpath arch/x86/entry/common.c:265 [inline]
do_syscall_64+0x792/0x9d0 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x455259
RSP: 002b:00007f992927bce8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 000000000072bf80 RCX: 0000000000455259
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000072bf80
RBP: 000000000072bf80 R08: 0000000000000000 R09: 000000000072bf58
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000a3e81f R14: 00007f992927c9c0 R15: 0000000000000001
Code: 85 5d 33 00 00 45 31 ff 49 8b 93 68 08 00 00 45 85 c9 41 0f 94 c7 48
b8 00 00 00 00 00 fc ff df 4c 89 d1 48 c1 e9 03 80 3c 01 00 <0f> 85 36 32
00 00 48 8b 8c 24 88 00 00 00 49 89 12 48 b8 00 00
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 1.387
msecs


---
This bug is generated by a dumb bot. It may contain errors.
See https://goo.gl/tpsmEJ for details.
Direct all questions to syzk...@googlegroups.com.

syzbot will keep track of this bug report.
If you forgot to add the Reported-by tag, once the fix for this bug is
merged into any tree, please reply to this email with:
#syz fix: exact-commit-title
To mark this as a duplicate of another syzbot report, please reply with:
#syz dup: exact-subject-of-another-report
If it's a one-off invalid bug report, please reply with:
#syz invalid
Note: if the crash happens again, it will cause creation of a new bug
report.
Note: all commands must start from beginning of the line in the email body.
To upstream this report, please reply with:
#syz upstream

syzbot

Feb 22, 2019, 5:26:09 AM2/22/19
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.