INFO: task hung in __flush_work

syzbot

Apr 21, 2019, 6:07:06 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: c98875d9 Linux 4.19.36
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=158acb08a00000
kernel config: https://syzkaller.appspot.com/x/.config?x=5e40ac5fbcc6366d
dashboard link: https://syzkaller.appspot.com/bug?extid=1000508396c783108d91
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11370a57200000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+100050...@syzkaller.appspotmail.com

IPv6: ADDRCONF(NETDEV_CHANGE): hsr0: link becomes ready
IPv6: ADDRCONF(NETDEV_UP): vxcan1: link is not ready
8021q: adding VLAN 0 to HW filter on device batadv0
audit: type=1400 audit(1555836799.478:38): avc: denied { associate } for
pid=7996 comm="syz-executor.0" name="syz0"
scontext=unconfined_u:object_r:unlabeled_t:s0
tcontext=system_u:object_r:unlabeled_t:s0 tclass=filesystem permissive=1
INFO: task syz-executor.0:8302 blocked for more than 140 seconds.
Not tainted 4.19.36 #4
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.0 D28504 8302 7996 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2826 [inline]
__schedule+0x813/0x1d00 kernel/sched/core.c:3474
schedule+0x92/0x1c0 kernel/sched/core.c:3518
schedule_timeout+0x8ca/0xfd0 kernel/time/timer.c:1780
do_wait_for_common kernel/sched/completion.c:83 [inline]
__wait_for_common kernel/sched/completion.c:104 [inline]
wait_for_common kernel/sched/completion.c:115 [inline]
wait_for_completion+0x29c/0x440 kernel/sched/completion.c:136
__flush_work+0x474/0x840 kernel/workqueue.c:2917
__cancel_work_timer+0x3bf/0x520 kernel/workqueue.c:3004
cancel_work_sync+0x18/0x20 kernel/workqueue.c:3040
p9_conn_destroy net/9p/trans_fd.c:864 [inline]
p9_fd_close+0x2b7/0x470 net/9p/trans_fd.c:891
p9_client_create+0x9c5/0x12e0 net/9p/client.c:1086
v9fs_session_init+0x1e7/0x18d0 fs/9p/v9fs.c:421
v9fs_mount+0x7d/0x920 fs/9p/vfs_super.c:135
mount_fs+0xae/0x331 fs/super.c:1261
vfs_kern_mount.part.0+0x6f/0x410 fs/namespace.c:961
vfs_kern_mount fs/namespace.c:951 [inline]
do_new_mount fs/namespace.c:2469 [inline]
do_mount+0x53e/0x2bc0 fs/namespace.c:2799
ksys_mount+0xdb/0x150 fs/namespace.c:3015
__do_sys_mount fs/namespace.c:3029 [inline]
__se_sys_mount fs/namespace.c:3026 [inline]
__x64_sys_mount+0xbe/0x150 fs/namespace.c:3026
do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x458c29
Code: 08 48 c7 44 24 10 04 00 00 00 e8 62 d8 fa ff 48 8b 44 24 18 48 8b 4c
24 30 48 83 c1 08 48 89 0c 24 48 89 44 24 08 48 c7 44 24 <10> 10 00 00 00
e8 3d d8 fa ff 48 8b 44 24 18 48 89 44 24 40 48 8b
RSP: 002b:00007f7b928bac78 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 0000000000458c29
RDX: 0000000020000100 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 000000000073bf00 R08: 0000000020000140 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f7b928bb6d4
R13: 00000000004c4c14 R14: 00000000004d88a0 R15: 00000000ffffffff
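
Reading the trace bottom-up: a mount(2) of a 9p filesystem over the fd transport fails during client setup, and the teardown path (p9_fd_close -> p9_conn_destroy) blocks forever in cancel_work_sync() waiting for the connection's read work to finish. Based on the functions and line numbers cited above, the 4.19 teardown code has roughly the shape below (a paraphrase for orientation, not a verbatim quote of net/9p/trans_fd.c):

/* Sketch of p9_conn_destroy() as cited in the trace
 * (net/9p/trans_fd.c:864); paraphrased, not a verbatim quote. */
static void p9_conn_destroy(struct p9_conn *m)
{
    p9_mux_poll_stop(m);
    cancel_work_sync(&m->rq);  /* <- hangs here, waiting in __flush_work */
    cancel_work_sync(&m->wq);
    p9_conn_cancel(m, -ECONNRESET);
    m->trans = NULL;
}

cancel_work_sync() must wait for an already-running instance of the work to return, so if the read work never terminates, the mounting task sleeps in wait_for_completion() indefinitely, which is what the hung-task watchdog reports.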

Showing all locks held in the system:
1 lock held by khungtaskd/1027:
#0: 0000000077c0cb21 (rcu_read_lock){....}, at:
debug_show_all_locks+0x5f/0x27e kernel/locking/lockdep.c:4438
2 locks held by kworker/1:2/2710:
#0: 000000009405ed2f ((wq_completion)"events"){+.+.}, at:
__write_once_size include/linux/compiler.h:220 [inline]
#0: 000000009405ed2f ((wq_completion)"events"){+.+.}, at:
arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: 000000009405ed2f ((wq_completion)"events"){+.+.}, at: atomic64_set
include/asm-generic/atomic-instrumented.h:40 [inline]
#0: 000000009405ed2f ((wq_completion)"events"){+.+.}, at: atomic_long_set
include/asm-generic/atomic-long.h:59 [inline]
#0: 000000009405ed2f ((wq_completion)"events"){+.+.}, at: set_work_data
kernel/workqueue.c:617 [inline]
#0: 000000009405ed2f ((wq_completion)"events"){+.+.}, at:
set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
#0: 000000009405ed2f ((wq_completion)"events"){+.+.}, at:
process_one_work+0x87e/0x1760 kernel/workqueue.c:2124
#1: 00000000170ff86c ((work_completion)(&m->rq)){+.+.}, at:
process_one_work+0x8b4/0x1760 kernel/workqueue.c:2128
1 lock held by rsyslogd/7832:
#0: 000000003d030570 (&f->f_pos_lock){+.+.}, at: __fdget_pos+0xee/0x110
fs/file.c:767
2 locks held by getty/7953:
#0: 00000000bbc6053a (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:363
#1: 000000003402ee32 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b30 drivers/tty/n_tty.c:2154
2 locks held by getty/7954:
#0: 0000000098e436f9 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:363
#1: 0000000049a1c156 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b30 drivers/tty/n_tty.c:2154
2 locks held by getty/7955:
#0: 00000000635a8a7f (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:363
#1: 0000000083602d88 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b30 drivers/tty/n_tty.c:2154
2 locks held by getty/7956:
#0: 0000000043d31f9e (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:363
#1: 00000000cdb56a82 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b30 drivers/tty/n_tty.c:2154
2 locks held by getty/7957:
#0: 000000008491a118 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:363
#1: 00000000c0e44665 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b30 drivers/tty/n_tty.c:2154
2 locks held by getty/7958:
#0: 00000000435de401 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:363
#1: 00000000770604d2 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b30 drivers/tty/n_tty.c:2154
2 locks held by getty/7959:
#0: 000000002d19a28b (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:363
#1: 000000002426230d (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b30 drivers/tty/n_tty.c:2154
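
Note the kworker/1:2 entry above: it is still executing the connection's read work ((work_completion)(&m->rq)), which is exactly the work item the hung task is waiting on in __flush_work, so cancel_work_sync() can never return. A user-space trigger consistent with this trace would be a 9p mount over the fd transport whose descriptors cannot carry 9p traffic. The sketch below is an illustration only, not the syzkaller reproducer (that is the syz repro linked above); it passes stdin for both directions:

/* Hedged illustration only.  On affected 4.19 kernels, a 9p
 * fd-transport mount with descriptors unusable for 9p I/O
 * exercises the p9_fd_close error path from the trace and can
 * hang instead of failing cleanly. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* fd 0 (stdin) is passed as both the read and write end */
    if (mount(NULL, "/mnt", "9p", 0, "trans=fd,rfdno=0,wfdno=0"))
        perror("mount");  /* expected to fail; may hang pre-fix */
    return 0;
}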

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 1027 Comm: khungtaskd Not tainted 4.19.36 #4
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x172/0x1f0 lib/dump_stack.c:113
nmi_cpu_backtrace.cold+0x63/0xa4 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x1b0/0x1f8 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:203 [inline]
watchdog+0x9df/0xee0 kernel/hung_task.c:287
kthread+0x357/0x430 kernel/kthread.c:246
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:413
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt+0x2/0x10
arch/x86/include/asm/irqflags.h:57


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches

syzbot

Aug 12, 2020, 6:55:09 PM
to syzkaller...@googlegroups.com
syzbot suspects this issue was fixed by commit:

commit af224c2eeda2bd6679355f588766c5a8da8920a2
Author: Christoph Hellwig <h...@lst.de>
Date: Fri Jul 10 08:57:22 2020 +0000

net/9p: validate fds in p9_fd_open
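
From the commit subject, the fix validates the rfdno=/wfdno= descriptors up front in p9_fd_open() instead of letting the transport start work items on files it cannot actually read from or write to. A sketch of that shape follows (inferred from the subject line only; see the commit itself for the authoritative diff):

/* Sketch inferred from "net/9p: validate fds in p9_fd_open", not
 * copied from the commit: reject descriptors that are not open
 * files with the required access modes before connecting. */
static int p9_fd_open(struct p9_client *client, int rfd, int wfd)
{
    struct p9_trans_fd *ts = kzalloc(sizeof(*ts), GFP_KERNEL);

    if (!ts)
        return -ENOMEM;

    ts->rd = fget(rfd);
    if (!ts->rd || !(ts->rd->f_mode & FMODE_READ))
        goto out_err;
    ts->wr = fget(wfd);
    if (!ts->wr || !(ts->wr->f_mode & FMODE_WRITE))
        goto out_err;

    client->trans = ts;
    client->status = Connected;
    return 0;

out_err:
    if (ts->rd)
        fput(ts->rd);
    if (ts->wr)
        fput(ts->wr);
    kfree(ts);
    return -EIO;
}

With the descriptors checked at open time, the bad-fd case fails the mount immediately, so the read/write work items are only ever queued on files that can make progress and the cancel_work_sync() in p9_conn_destroy() no longer has a stuck work item to wait on.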

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=1083f69a900000
start commit: c98875d9 Linux 4.19.36
git tree: linux-4.19.y
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11370a57200000

If the result looks correct, please mark the issue as fixed by replying with:

#syz fix: net/9p: validate fds in p9_fd_open

For information about bisection process see: https://goo.gl/tpsmEJ#bisection