INFO: task hung in fib6_rules_net_exit (2)


syzbot

Oct 3, 2019, 2:34:08 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 7fe05eed Merge 4.9.194 into android-4.9
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=17996c2d600000
kernel config: https://syzkaller.appspot.com/x/.config?x=c6d462552c77f021
dashboard link: https://syzkaller.appspot.com/bug?extid=1b97cb40abc8e0cc8618
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+1b97cb...@syzkaller.appspotmail.com

ip6_tunnel: � xmit: Local address not yet configured!
ip6_tunnel: � xmit: Local address not yet configured!
ip6_tunnel: a xmit: Local address not yet configured!
INFO: task kworker/u4:7:2373 blocked for more than 140 seconds.
Not tainted 4.9.194+ #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u4:7 D26016 2373 2 0x80000000
Workqueue: netns cleanup_net
0000000000000087 ffff8801a4240000 ffff8801ca0e4780 ffff8801db721000
ffff8801d1145f00 ffff8801db721018 ffff8801a424f938 ffffffff8281af8e
ffff8801a424f958 ffffffff81bd0269 00ffffff8167e35e ffff8801db7218f0
Call Trace:
[<000000003167e0f6>] schedule+0x92/0x1c0 kernel/sched/core.c:3546
[<00000000ecd3961a>] schedule_preempt_disabled+0x13/0x20
kernel/sched/core.c:3579
[<00000000d1720e9b>] __mutex_lock_common kernel/locking/mutex.c:582
[inline]
[<00000000d1720e9b>] mutex_lock_nested+0x38d/0x920
kernel/locking/mutex.c:621
[<000000006188c393>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
[<000000008aa38efb>] fib6_rules_net_exit+0x12/0x50
net/ipv6/fib6_rules.c:318
[<00000000c10c8361>] ops_exit_list.isra.0+0xb0/0x160
net/core/net_namespace.c:136
[<00000000664a9357>] cleanup_net+0x3d6/0x8a0 net/core/net_namespace.c:474
[<00000000eab322a5>] process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
[<0000000040bca397>] worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
[<00000000b78e6a96>] kthread+0x278/0x310 kernel/kthread.c:211
[<000000003223c68a>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

Showing all locks held in the system:
2 locks held by khungtaskd/24:
#0: (rcu_read_lock){......}, at: [<00000000b91a64c5>]
check_hung_uninterruptible_tasks kernel/hung_task.c:169 [inline]
#0: (rcu_read_lock){......}, at: [<00000000b91a64c5>]
watchdog+0x14b/0xaf0 kernel/hung_task.c:263
#1: (tasklist_lock){.+.+..}, at: [<00000000d74ac53a>]
debug_show_all_locks+0x7f/0x21f kernel/locking/lockdep.c:4336
3 locks held by rs:main Q:Reg/1894:
#0: (&f->f_pos_lock){+.+.+.}, at: [<000000004faf6cbf>]
__fdget_pos+0xa8/0xd0 fs/file.c:782
#1: (sb_writers#4){.+.+.+}, at: [<00000000190f6d1b>] file_start_write
include/linux/fs.h:2646 [inline]
#1: (sb_writers#4){.+.+.+}, at: [<00000000190f6d1b>]
vfs_write+0x3e9/0x520 fs/read_write.c:558
#2: (&sb->s_type->i_mutex_key#9){++++++}, at: [<000000001fee7b25>]
inode_lock include/linux/fs.h:771 [inline]
#2: (&sb->s_type->i_mutex_key#9){++++++}, at: [<000000001fee7b25>]
__generic_file_fsync+0xcd/0x1c0 fs/libfs.c:978
1 lock held by rsyslogd/1897:
#0: (&f->f_pos_lock){+.+.+.}, at: [<000000004faf6cbf>]
__fdget_pos+0xa8/0xd0 fs/file.c:782
2 locks held by getty/2024:
#0: (&tty->ldisc_sem){++++++}, at: [<000000001e4bd477>]
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
#1: (&ldata->atomic_read_lock){+.+...}, at: [<00000000954356a4>]
n_tty_read+0x1fe/0x1820 drivers/tty/n_tty.c:2156
4 locks held by kworker/u4:7/2373:
#0: ("%s""netns"){.+.+.+}, at: [<00000000871c0cd6>]
process_one_work+0x790/0x1600 kernel/workqueue.c:2107
#1: (net_cleanup_work){+.+.+.}, at: [<0000000035b425d4>]
process_one_work+0x7ce/0x1600 kernel/workqueue.c:2111
#2: (net_mutex){+.+.+.}, at: [<00000000e6e47d15>] cleanup_net+0x131/0x8a0
net/core/net_namespace.c:440
#3: (rtnl_mutex){+.+.+.}, at: [<000000006188c393>] rtnl_lock+0x17/0x20
net/core/rtnetlink.c:70
2 locks held by kworker/0:4/5555:
#0: ("events"){.+.+.+}, at: [<00000000871c0cd6>]
process_one_work+0x790/0x1600 kernel/workqueue.c:2107
#1: ((&rew.rew_work)){+.+...}, at: [<0000000035b425d4>]
process_one_work+0x7ce/0x1600 kernel/workqueue.c:2111
3 locks held by kworker/1:0/26780:
#0: ("%s"("ipv6_addrconf")){.+.+..}, at: [<00000000871c0cd6>]
process_one_work+0x790/0x1600 kernel/workqueue.c:2107
#1: ((addr_chk_work).work){+.+...}, at: [<0000000035b425d4>]
process_one_work+0x7ce/0x1600 kernel/workqueue.c:2111
#2: (rtnl_mutex){+.+.+.}, at: [<000000006188c393>] rtnl_lock+0x17/0x20
net/core/rtnetlink.c:70
2 locks held by syz-executor.0/1966:
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>] rtnl_lock
net/core/rtnetlink.c:70 [inline]
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>]
rtnetlink_rcv+0x1c/0x40 net/core/rtnetlink.c:4086
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<000000001281f5f9>]
exp_funnel_lock kernel/rcu/tree_exp.h:256 [inline]
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<000000001281f5f9>]
_synchronize_rcu_expedited+0x339/0x850 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor.2/1992:
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>] rtnl_lock
net/core/rtnetlink.c:70 [inline]
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>]
rtnetlink_rcv+0x1c/0x40 net/core/rtnetlink.c:4086
1 lock held by syz-executor.2/1998:
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>] rtnl_lock
net/core/rtnetlink.c:70 [inline]
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>]
rtnetlink_rcv+0x1c/0x40 net/core/rtnetlink.c:4086
1 lock held by syz-executor.5/1990:
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>] rtnl_lock
net/core/rtnetlink.c:70 [inline]
#0: (rtnl_mutex){+.+.+.}, at: [<000000004417a77a>]
rtnetlink_rcv+0x1c/0x40 net/core/rtnetlink.c:4086

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 24 Comm: khungtaskd Not tainted 4.9.194+ #0
ffff8801d98d7cc8 ffffffff81b67001 0000000000000000 0000000000000000
0000000000000000 ffffffff81099d01 dffffc0000000000 ffff8801d98d7d00
ffffffff81b7229c 0000000000000000 0000000000000000 0000000000000000
Call Trace:
[<00000000aacd9939>] __dump_stack lib/dump_stack.c:15 [inline]
[<00000000aacd9939>] dump_stack+0xc1/0x120 lib/dump_stack.c:51
[<000000009e58c4ea>] nmi_cpu_backtrace.cold+0x47/0x87
lib/nmi_backtrace.c:99
[<0000000006179039>] nmi_trigger_cpumask_backtrace+0x124/0x155
lib/nmi_backtrace.c:60
[<000000006ffce282>] arch_trigger_cpumask_backtrace+0x14/0x20
arch/x86/kernel/apic/hw_nmi.c:37
[<000000000f9ee59d>] trigger_all_cpu_backtrace include/linux/nmi.h:58
[inline]
[<000000000f9ee59d>] check_hung_task kernel/hung_task.c:126 [inline]
[<000000000f9ee59d>] check_hung_uninterruptible_tasks
kernel/hung_task.c:183 [inline]
[<000000000f9ee59d>] watchdog+0x670/0xaf0 kernel/hung_task.c:263
[<00000000b78e6a96>] kthread+0x278/0x310 kernel/kthread.c:211
[<000000003223c68a>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at pc 0xffffffff8282a0e1


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jan 31, 2020, 12:34:06 AM
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.