INFO: task hung in nf_ct_iterate_cleanup

syzbot

Apr 14, 2019, 5:33:12 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 4b1d0d3e ANDROID: mnt: Propagate remount correctly
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=11afdab7400000
kernel config: https://syzkaller.appspot.com/x/.config?x=a4941358bc28b522
dashboard link: https://syzkaller.appspot.com/bug?extid=4bac7f75556053190a0d
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+4bac7f...@syzkaller.appspotmail.com

ip6_tunnel: ip6tnl1 xmit: Local address not yet configured!
ip6_tunnel: ip6tnl1 xmit: Local address not yet configured!
ip6_tunnel: ip6tnl1 xmit: Local address not yet configured!
INFO: task syz-executor4:4299 blocked for more than 140 seconds.
Not tainted 4.9.151+ #12
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor4 D27112 4299 2120 0x00000004
ffff8801a2684740 0000000000000000 ffff8801db621000 ffff8801a3b14740
ffff8801db621018 ffff8801a2087720 ffffffff82805336 ffffffff811eecd8
0000000000000000 ffffffff830d1b50 00ffffff830d1b50 ffff8801db6218f0
Call Trace:
[<ffffffff828068c2>] schedule+0x92/0x1c0 kernel/sched/core.c:3553
[<ffffffff8124a393>] _synchronize_rcu_expedited+0x593/0x850 kernel/rcu/tree_exp.h:588
[<ffffffff8124f692>] synchronize_rcu_expedited kernel/rcu/tree_exp.h:687 [inline]
[<ffffffff8124f692>] synchronize_rcu_expedited+0x22/0x30 kernel/rcu/tree_exp.h:681
[<ffffffff823037bf>] synchronize_net+0x2f/0x50 net/core/dev.c:7862
[<ffffffff823fb428>] nf_ct_iterate_cleanup+0x218/0x480 net/netfilter/nf_conntrack_core.c:1619
[<ffffffff82608768>] masq_device_event net/ipv4/netfilter/nf_nat_masquerade_ipv4.c:100 [inline]
[<ffffffff82608768>] masq_inet_event+0x108/0x150 net/ipv4/netfilter/nf_nat_masquerade_ipv4.c:123
[<ffffffff81146c24>] notifier_call_chain+0xb4/0x1d0 kernel/notifier.c:93
[<ffffffff81147f90>] __blocking_notifier_call_chain kernel/notifier.c:317 [inline]
[<ffffffff81147f90>] __blocking_notifier_call_chain kernel/notifier.c:304 [inline]
[<ffffffff81147f90>] blocking_notifier_call_chain kernel/notifier.c:328 [inline]
[<ffffffff81147f90>] blocking_notifier_call_chain+0x80/0xa0 kernel/notifier.c:325
[<ffffffff82586d18>] __inet_del_ifa+0x4d8/0xb30 net/ipv4/devinet.c:403
[<ffffffff8258dbf6>] inet_del_ifa net/ipv4/devinet.c:433 [inline]
[<ffffffff8258dbf6>] devinet_ioctl+0x7b6/0x1600 net/ipv4/devinet.c:1113
[<ffffffff8259633b>] inet_ioctl+0x10b/0x1a0 net/ipv4/af_inet.c:908
[<ffffffff8229e56a>] sock_do_ioctl+0x6a/0xb0 net/socket.c:905
[<ffffffff8229ef9c>] sock_ioctl+0x24c/0x3d0 net/socket.c:991
[<ffffffff81549827>] vfs_ioctl fs/ioctl.c:43 [inline]
[<ffffffff81549827>] file_ioctl fs/ioctl.c:493 [inline]
[<ffffffff81549827>] do_vfs_ioctl+0xb87/0x11d0 fs/ioctl.c:677
[<ffffffff81549eff>] SYSC_ioctl fs/ioctl.c:694 [inline]
[<ffffffff81549eff>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:685
[<ffffffff810056bd>] do_syscall_64+0x1ad/0x570 arch/x86/entry/common.c:285
[<ffffffff828155d3>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb

Showing all locks held in the system:
2 locks held by khungtaskd/24:
#0: (rcu_read_lock){......}, at: [<ffffffff8131b99b>] check_hung_uninterruptible_tasks kernel/hung_task.c:168 [inline]
#0: (rcu_read_lock){......}, at: [<ffffffff8131b99b>] watchdog+0x11b/0xa40 kernel/hung_task.c:239
#1: (tasklist_lock){.+.+..}, at: [<ffffffff813fe87b>] debug_show_all_locks+0x7f/0x21f kernel/locking/lockdep.c:4336
2 locks held by getty/2025:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff828136b3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff81d38dde>] n_tty_read+0x1fe/0x1820 drivers/tty/n_tty.c:2156
3 locks held by kworker/1:3/3860:
#0: ("%s"("ipv6_addrconf")){.+.+..}, at: [<ffffffff8112ff20>]
process_one_work+0x790/0x15c0 kernel/workqueue.c:2085
#1: ((addr_chk_work).work){+.+...}, at: [<ffffffff8112ff5e>]
process_one_work+0x7ce/0x15c0 kernel/workqueue.c:2089
#2: (rtnl_mutex){+.+.+.}, at: [<ffffffff823422c7>] rtnl_lock+0x17/0x20
net/core/rtnetlink.c:70
3 locks held by syz-executor4/4299:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff823422c7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
#1: ((inetaddr_chain).rwsem){.+.+.+}, at: [<ffffffff81147f7a>] __blocking_notifier_call_chain kernel/notifier.c:316 [inline]
#1: ((inetaddr_chain).rwsem){.+.+.+}, at: [<ffffffff81147f7a>] __blocking_notifier_call_chain kernel/notifier.c:304 [inline]
#1: ((inetaddr_chain).rwsem){.+.+.+}, at: [<ffffffff81147f7a>] blocking_notifier_call_chain kernel/notifier.c:328 [inline]
#1: ((inetaddr_chain).rwsem){.+.+.+}, at: [<ffffffff81147f7a>] blocking_notifier_call_chain+0x6a/0xa0 kernel/notifier.c:325
#2: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a139>] exp_funnel_lock kernel/rcu/tree_exp.h:256 [inline]
#2: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a139>] _synchronize_rcu_expedited+0x339/0x850 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor4/4315:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff823422c7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor4/4346:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff823422c7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor4/4347:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff823422c7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
2 locks held by syz-executor2/4323:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229e2d9>] inode_lock include/linux/fs.h:768 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229e2d9>] __sock_release+0x89/0x260 net/socket.c:604
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a1a7>] exp_funnel_lock kernel/rcu/tree_exp.h:289 [inline]
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a1a7>] _synchronize_rcu_expedited+0x3a7/0x850 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor0/4327:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff823422c7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor0/4337:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff823422c7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor3/4331:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff8234b92c>] rtnl_lock net/core/rtnetlink.c:70 [inline]
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff8234b92c>] rtnetlink_rcv+0x1c/0x40 net/core/rtnetlink.c:4086
1 lock held by syz-executor3/4339:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff8234b92c>] rtnl_lock net/core/rtnetlink.c:70 [inline]
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff8234b92c>] rtnetlink_rcv+0x1c/0x40 net/core/rtnetlink.c:4086

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.9.151+ #12
ffff8801d9907cd0 ffffffff81b46e21 0000000000000001 0000000000000000
0000000000000001 ffffffff81097301 00000000003fff89 ffff8801d9907d08
ffffffff81b520ac 0000000000000001 0000000000000000 0000000000000001
Call Trace:
[<ffffffff81b46e21>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81b46e21>] dump_stack+0xc1/0x120 lib/dump_stack.c:51
[<ffffffff81b520ac>] nmi_cpu_backtrace.cold+0x47/0x87 lib/nmi_backtrace.c:99
[<ffffffff81b52034>] nmi_trigger_cpumask_backtrace+0x124/0x155 lib/nmi_backtrace.c:60
[<ffffffff81097494>] arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:37
[<ffffffff8131be77>] trigger_all_cpu_backtrace include/linux/nmi.h:58 [inline]
[<ffffffff8131be77>] check_hung_task kernel/hung_task.c:125 [inline]
[<ffffffff8131be77>] check_hung_uninterruptible_tasks kernel/hung_task.c:182 [inline]
[<ffffffff8131be77>] watchdog+0x5f7/0xa40 kernel/hung_task.c:239
[<ffffffff81141e18>] kthread+0x278/0x310 kernel/kthread.c:211
[<ffffffff8281579c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 2497 Comm: kworker/0:2 Not tainted 4.9.151+ #12
Workqueue: events rtc_timer_do_work
task: ffff8801a3b14740 task.stack: ffff8801a3b80000
RIP: 0010:[<ffffffff8120771f>] [<ffffffff8120771f>] __lock_acquire+0x2af/0x4350 kernel/locking/lockdep.c:3286
RSP: 0018:ffff8801a3b87790 EFLAGS: 00000806
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000454 RDI: ffff8801a3b15060
RBP: ffff8801a3b87920 R08: 0000000000000001 R09: 0000000000000001
R10: ffff8801a3b15068 R11: 1ffff10034762a0c R12: ffff8801d615dae0
R13: 0000000000000454 R14: 0000000000000003 R15: ffff8801a3b14740
FS: 0000000000000000(0000) GS:ffff8801db600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6a3828991a CR3: 00000001d4aa5000 CR4: 00000000001606b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Stack:
0000000000000000 ffff8801a3b14740 ffff8801a3b877c0 ffffffff81207245
0000000000000001 ffffffff83cef550 ffff8801a3b877d0 ffff8801a3b15080
ffff8801a3b14fe0 ffff8801a3b15088 ffff8801a3b14fe8 ffffffff83ce3f10
Call Trace:
[<ffffffff8120c283>] lock_acquire+0x133/0x3d0 kernel/locking/lockdep.c:3756
[<ffffffff82814b10>] __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:112 [inline]
[<ffffffff82814b10>] _raw_spin_lock_irqsave+0x50/0x70 kernel/locking/spinlock.c:159
[<ffffffff8208b769>] rtc_handle_legacy_irq+0x89/0x190 drivers/rtc/interface.c:518
[<ffffffff8208b8c0>] rtc_uie_update_irq+0x20/0x30 drivers/rtc/interface.c:550
[<ffffffff8208bbbe>] rtc_timer_do_work+0x1fe/0x600 drivers/rtc/interface.c:881
[<ffffffff8113001b>] process_one_work+0x88b/0x15c0 kernel/workqueue.c:2092
[<ffffffff8113132f>] worker_thread+0x5df/0x11d0 kernel/workqueue.c:2226
[<ffffffff81141e18>] kthread+0x278/0x310 kernel/kthread.c:211
[<ffffffff8281579c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
Code: 24 88 00 00 00 44 89 ee 66 81 e6 ff 1f 49 8d 42 20 48 89 c2 48 89 44 24 78 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 0f b6 14 02 <84> d2 74 09 80 fa 03 0f 8e 63 0f 00 00 41 0f b7 42 20 49 8d 7a


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 21, 2019, 3:48:05 AM
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not occurred for a while, and there is no reproducer and no recent activity.