INFO: task hung in fib6_rules_net_exit (2)

syzbot

Jul 31, 2022, 9:33:22 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 3f8a27f9e27b Linux 4.19.211
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12f19c82080000
kernel config: https://syzkaller.appspot.com/x/.config?x=9b9277b418617afe
dashboard link: https://syzkaller.appspot.com/bug?extid=7406e83c34094ceacd2e
compiler: gcc version 10.2.1 20210110 (Debian 10.2.1-6)

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+7406e8...@syzkaller.appspotmail.com

audit: type=1804 audit(1659317479.961:822): pid=26139 uid=0 auid=4294967295 ses=4294967295 subj==unconfined op=invalid_pcr cause=open_writers comm="syz-executor.5" name="/root/syzkaller-testdir905282275/syzkaller.iLswX1/205/file0" dev="sda1" ino=13905 res=1
ieee802154 phy0 wpan0: encryption failed: -22
ieee802154 phy1 wpan1: encryption failed: -22
INFO: task kworker/u4:4:992 blocked for more than 140 seconds.
Not tainted 4.19.211-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u4:4 D25048 992 2 0x80000000
Workqueue: netns cleanup_net
Call Trace:
context_switch kernel/sched/core.c:2828 [inline]
__schedule+0x887/0x2040 kernel/sched/core.c:3517
schedule+0x8d/0x1b0 kernel/sched/core.c:3561
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:3619
__mutex_lock_common kernel/locking/mutex.c:1016 [inline]
__mutex_lock+0x5f0/0x1190 kernel/locking/mutex.c:1078
fib6_rules_net_exit+0xe/0x50 net/ipv6/fib6_rules.c:487
ops_exit_list+0xa5/0x150 net/core/net_namespace.c:153
cleanup_net+0x3b4/0x8b0 net/core/net_namespace.c:554
process_one_work+0x864/0x1570 kernel/workqueue.c:2153
worker_thread+0x64c/0x1130 kernel/workqueue.c:2296
kthread+0x33f/0x460 kernel/kthread.c:259
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415
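
For reference, the blocked frame above is the per-netns exit handler taking the RTNL lock before it unregisters the namespace's fib6 rules. A minimal sketch of that path, assuming the 4.19.y shape of net/ipv6/fib6_rules.c rather than quoting it verbatim:

#include <linux/rtnetlink.h>
#include <net/fib_rules.h>
#include <net/net_namespace.h>

/* Sketch: the exit handler serializes on rtnl_mutex, so any long-lived
 * RTNL holder stalls netns teardown right here (the __mutex_lock frame
 * in the trace above).
 */
static void __net_exit fib6_rules_net_exit(struct net *net)
{
	rtnl_lock();
	fib_rules_unregister(net->ipv6.fib6_rules_ops);
	rtnl_unlock();
}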

Showing all locks held in the system:
1 lock held by systemd/1:
4 locks held by kworker/u4:4/992:
#0: 000000002fafc4d3 ((wq_completion)"%s""netns"){+.+.}, at: process_one_work+0x767/0x1570 kernel/workqueue.c:2124
#1: 000000002195495b (net_cleanup_work){+.+.}, at: process_one_work+0x79c/0x1570 kernel/workqueue.c:2128
#2: 00000000f8b72850 (pernet_ops_rwsem){++++}, at: cleanup_net+0xa8/0x8b0 net/core/net_namespace.c:521
#3: 0000000034c72a15 (rtnl_mutex){+.+.}, at: fib6_rules_net_exit+0xe/0x50 net/ipv6/fib6_rules.c:487
1 lock held by khungtaskd/1571:
#0: 00000000c043da5e (rcu_read_lock){....}, at: debug_show_all_locks+0x53/0x265 kernel/locking/lockdep.c:4441
3 locks held by kworker/0:2/3598:
#0: 0000000060d8a166 ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: process_one_work+0x767/0x1570 kernel/workqueue.c:2124
#1: 000000002b614a74 ((addr_chk_work).work){+.+.}, at: process_one_work+0x79c/0x1570 kernel/workqueue.c:2128
#2: 0000000034c72a15 (rtnl_mutex){+.+.}, at: addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4476
1 lock held by systemd-journal/4693:
1 lock held by syz-fuzzer/8111:
1 lock held by syz-executor.4/8148:
#0: 00000000b117df4a (rcu_preempt_state.exp_mutex){+.+.}, at: exp_funnel_lock kernel/rcu/tree_exp.h:297 [inline]
#0: 00000000b117df4a (rcu_preempt_state.exp_mutex){+.+.}, at: _synchronize_rcu_expedited+0x4dc/0x6f0 kernel/rcu/tree_exp.h:667
2 locks held by kworker/u4:8/17886:
3 locks held by kworker/1:35/21043:
#0: 0000000071b5e89c ((wq_completion)"events"){+.+.}, at: process_one_work+0x767/0x1570 kernel/workqueue.c:2124
#1: 00000000b662c0ec (deferred_process_work){+.+.}, at: process_one_work+0x79c/0x1570 kernel/workqueue.c:2128
#2: 0000000034c72a15 (rtnl_mutex){+.+.}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:150
2 locks held by kworker/1:36/21044:
2 locks held by syz-executor.0/25105:
1 lock held by syz-executor.3/26034:
1 lock held by syz-executor.3/26143:
#0: 0000000034c72a15 (rtnl_mutex){+.+.}, at: rtnl_lock net/core/rtnetlink.c:77 [inline]
#0: 0000000034c72a15 (rtnl_mutex){+.+.}, at: rtnetlink_rcv_msg+0x3fe/0xb80 net/core/rtnetlink.c:4779
1 lock held by syz-executor.2/26153:
#0: 0000000034c72a15 (rtnl_mutex){+.+.}, at: tun_detach drivers/net/tun.c:759 [inline]
#0: 0000000034c72a15 (rtnl_mutex){+.+.}, at: tun_chr_close+0x3a/0x180 drivers/net/tun.c:3323
1 lock held by syz-executor.2/26181:
#0: 0000000034c72a15 (rtnl_mutex){+.+.}, at: rtnl_lock net/core/rtnetlink.c:77 [inline]
#0: 0000000034c72a15 (rtnl_mutex){+.+.}, at: rtnetlink_rcv_msg+0x3fe/0xb80 net/core/rtnetlink.c:4779

=============================================
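
The lock dump reads as follows: the netns cleanup worker already holds pernet_ops_rwsem (#2) and is trying to take rtnl_mutex (#3) inside fib6_rules_net_exit, while several other tasks (addrconf_verify_work, switchdev_deferred_process_work, rtnetlink_rcv_msg, tun_chr_close) sit on the same rtnl_mutex. Because the rwsem is held across the whole exit walk, one stuck exit handler backs up every pending namespace teardown. A hedged sketch of that walk, assuming the 4.19.y structure of net/core/net_namespace.c (simplified, not verbatim):

#include <linux/list.h>
#include <linux/rwsem.h>
#include <linux/workqueue.h>
#include <net/net_namespace.h>

static void cleanup_net(struct work_struct *work)
{
	struct pernet_operations *ops;
	LIST_HEAD(net_exit_list);

	/* lock #2 in the dump; held for the entire exit walk */
	down_read(&pernet_ops_rwsem);

	/* ... move dying namespaces onto net_exit_list ... */

	/* runs each subsystem's exit handler, eventually reaching
	 * fib6_rules_net_exit(), which is where this worker blocks
	 * trying to take rtnl_mutex (lock #3) */
	list_for_each_entry_reverse(ops, &pernet_list, list)
		ops_exit_list(ops, &net_exit_list);

	up_read(&pernet_ops_rwsem);

	/* ... free the namespaces ... */
}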

NMI backtrace for cpu 0
CPU: 0 PID: 1571 Comm: khungtaskd Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1fc/0x2ef lib/dump_stack.c:118
nmi_cpu_backtrace.cold+0x63/0xa2 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x1a6/0x1f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:203 [inline]
watchdog+0x991/0xe60 kernel/hung_task.c:287
kthread+0x33f/0x460 kernel/kthread.c:259
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 21044 Comm: kworker/1:36 Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Workqueue: events iterate_cleanup_work
RIP: 0010:preempt_count_add+0xa0/0x190 kernel/sched/core.c:3242
Code: c6 c8 e2 0b 85 d2 75 11 65 8b 05 ab 70 c0 7e 0f b6 c0 3d f4 00 00 00 7f 64 65 8b 05 9a 70 c0 7e 25 ff ff ff 7f 39 c5 74 03 5b <5d> c3 48 8b 5c 24 10 48 89 df e8 41 15 0a 00 85 c0 75 35 65 48 8b
RSP: 0018:ffff8880ba1078e8 EFLAGS: 00000297
RAX: 0000000000000102 RBX: 1ffff11017420f28 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff8880ba1077d0 RDI: 0000000000000001
RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000001
R10: ffff8880ba107a87 R11: 0000000000074071 R12: ffff8880ba107a70
R13: 0000000000000000 R14: ffff8880ba107a28 R15: 00000000000001c0
FS: 0000000000000000(0000) GS:ffff8880ba100000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c020389000 CR3: 000000005413e000 CR4: 00000000003406e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
unwind_next_frame+0x135/0x1400 arch/x86/kernel/unwind_orc.c:407
__save_stack_trace+0xd6/0x190 arch/x86/kernel/stacktrace.c:44
save_stack mm/kasan/kasan.c:448 [inline]
set_track mm/kasan/kasan.c:460 [inline]
kasan_kmalloc+0xeb/0x160 mm/kasan/kasan.c:553
__do_kmalloc_node mm/slab.c:3689 [inline]
__kmalloc_node_track_caller+0x4c/0x70 mm/slab.c:3703
__kmalloc_reserve net/core/skbuff.c:137 [inline]
__alloc_skb+0xae/0x560 net/core/skbuff.c:205
alloc_skb include/linux/skbuff.h:995 [inline]
bcm_can_tx+0x259/0x800 net/can/bcm.c:287
bcm_tx_timeout_tsklet+0x1f0/0x3a0 net/can/bcm.c:414
tasklet_action_common.constprop.0+0x265/0x360 kernel/softirq.c:522
__do_softirq+0x265/0x980 kernel/softirq.c:292
do_softirq_own_stack+0x2a/0x40 arch/x86/entry/entry_64.S:1092
</IRQ>
do_softirq.part.0+0x160/0x1c0 kernel/softirq.c:336
do_softirq kernel/softirq.c:328 [inline]
__local_bh_enable_ip+0x20e/0x270 kernel/softirq.c:189
local_bh_enable include/linux/bottom_half.h:32 [inline]
get_next_corpse net/netfilter/nf_conntrack_core.c:1907 [inline]
nf_ct_iterate_cleanup+0x239/0x520 net/netfilter/nf_conntrack_core.c:1930
nf_ct_iterate_cleanup_net net/netfilter/nf_conntrack_core.c:2015 [inline]
nf_ct_iterate_cleanup_net+0x113/0x170 net/netfilter/nf_conntrack_core.c:2000
iterate_cleanup_work+0x43/0xd0 net/ipv6/netfilter/nf_nat_masquerade_ipv6.c:113
process_one_work+0x864/0x1570 kernel/workqueue.c:2153
worker_thread+0x64c/0x1130 kernel/workqueue.c:2296
kthread+0x33f/0x460 kernel/kthread.c:259
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415
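
The CPU 1 sample is worth a note: this worker is not blocked on a lock at all, it appears to be walking the conntrack table, and every time that walk re-enables bottom halves the pending softirq work (here the can-bcm TX tasklet) runs on top of it, which is where the NMI caught it. A hedged sketch of that bucket walk, loosely following the get_next_corpse() loop referenced in the backtrace (simplified, identifiers as in 4.19.y's nf_conntrack_core.c but not verbatim):

#include <linux/bottom_half.h>
#include <linux/spinlock.h>
#include <net/netfilter/nf_conntrack.h>
#include <net/netfilter/nf_conntrack_core.h>

static void conntrack_bucket_walk_sketch(void)
{
	unsigned int bucket;

	for (bucket = 0; bucket < nf_conntrack_htable_size; bucket++) {
		local_bh_disable();
		spin_lock(&nf_conntrack_locks[bucket % CONNTRACK_LOCKS]);
		/* ... scan this hash bucket for entries to clean up ... */
		spin_unlock(&nf_conntrack_locks[bucket % CONNTRACK_LOCKS]);
		local_bh_enable();	/* pending softirqs run here: the
					 * do_softirq frames in the backtrace above */
	}
}

Under heavy softirq load a walk like this can take a long time without ever sleeping, which is a plausible reason the worker shows up busy rather than blocked in the NMI sample.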


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Nov 28, 2022, 8:33:30 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, and there is no reproducer and no recent activity.