INFO: task hung in _rcu_barrier

syzbot

Apr 14, 2019, 4:51:32 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: f68c8f49 Merge 4.20-rc1-4.9 into android-4.9
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=168153bd400000
kernel config: https://syzkaller.appspot.com/x/.config?x=13558268b29d9d4a
dashboard link: https://syzkaller.appspot.com/bug?extid=a597e53cb87edb8e246c
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=135c485d400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=146e7583400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+a597e5...@syzkaller.appspotmail.com

Killed process 7721 (syz-executor900) total-vm:17872kB, anon-rss:16848kB,
file-rss:0kB, shmem-rss:0kB
Out of memory: Kill process 7729 (syz-executor900) score 1002 or sacrifice
child
Killed process 7729 (syz-executor900) total-vm:17872kB, anon-rss:16848kB,
file-rss:0kB, shmem-rss:0kB
Out of memory: Kill process 7739 (syz-executor900) score 1002 or sacrifice
child
Killed process 7739 (syz-executor900) total-vm:17872kB, anon-rss:16848kB,
file-rss:0kB, shmem-rss:0kB
INFO: task kworker/u4:1:64 blocked for more than 140 seconds.
Not tainted 4.9.135+ #62
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u4:1 D23304 64 2 0x80000000
Workqueue: netns cleanup_net
ffff8801d7845f00 ffff8801cc3bee00 ffff8801cc3bee00 ffff8801cc0c97c0
ffff8801db721018 ffff8801d79af5d0 ffffffff828067a2 ffffffff83ccf600
ffffffff00000000 fffffbfff0848da8 005164f000000001 ffff8801db7218f0
Call Trace:
[<ffffffff82807ccf>] schedule+0x7f/0x1b0 kernel/sched/core.c:3553
[<ffffffff82813475>] schedule_timeout+0x735/0xe20 kernel/time/timer.c:1771
[<ffffffff828097ef>] do_wait_for_common kernel/sched/completion.c:75
[inline]
[<ffffffff828097ef>] __wait_for_common kernel/sched/completion.c:93
[inline]
[<ffffffff828097ef>] wait_for_common+0x3ef/0x5d0
kernel/sched/completion.c:101
[<ffffffff828099e8>] wait_for_completion+0x18/0x20
kernel/sched/completion.c:122
[<ffffffff8124ae21>] _rcu_barrier+0x231/0x340 kernel/rcu/tree.c:3701
[<ffffffff8124af80>] rcu_barrier+0x10/0x20 kernel/rcu/tree_plugin.h:698
[<ffffffff8231acf0>] netdev_run_todo+0x110/0x770 net/core/dev.c:7542
[<ffffffff82340d1e>] rtnl_unlock+0xe/0x10 net/core/rtnetlink.c:104
[<ffffffff827add22>] ip6_tnl_exit_net+0x3e2/0x5b0
net/ipv6/ip6_tunnel.c:2240
[<ffffffff822e38a0>] ops_exit_list.isra.0+0xb0/0x160
net/core/net_namespace.c:136
[<ffffffff822e6602>] cleanup_net+0x3f2/0x8b0 net/core/net_namespace.c:473
[<ffffffff81130d61>] process_one_work+0x831/0x1530 kernel/workqueue.c:2092
[<ffffffff81131b36>] worker_thread+0xd6/0x1140 kernel/workqueue.c:2226
[<ffffffff811428dd>] kthread+0x26d/0x300 kernel/kthread.c:211
[<ffffffff82816bdc>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
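
The trace above shows the netns cleanup worker parked in
wait_for_completion() inside _rcu_barrier(). For orientation, here is an
abridged, from-memory paraphrase of the v4.9-era _rcu_barrier()
(kernel/rcu/tree.c) -- not a verbatim copy; error handling, tracing, and
the per-CPU online checks are elided:

	/* Abridged sketch of _rcu_barrier(), v4.9-era kernel/rcu/tree.c. */
	static void _rcu_barrier(struct rcu_state *rsp)
	{
		int cpu;

		/* Serializes rcu_barrier() callers; this is the
		 * rcu_preempt_state.barrier_mutex held by kworker/u4:1
		 * in the lock dump below. */
		mutex_lock(&rsp->barrier_mutex);

		init_completion(&rsp->barrier_completion);
		atomic_set(&rsp->barrier_cpu_count, 1);

		/* Queue rcu_barrier_callback() behind each CPU's pending
		 * callbacks; every callback decrements barrier_cpu_count
		 * and the last one completes the barrier. */
		for_each_possible_cpu(cpu)
			smp_call_function_single(cpu, rcu_barrier_func, rsp, 1);

		if (atomic_dec_and_test(&rsp->barrier_cpu_count))
			complete(&rsp->barrier_completion);

		/* The hung task is blocked here: the completion cannot
		 * fire while any CPU is unable to run its RCU callbacks. */
		wait_for_completion(&rsp->barrier_completion);

		mutex_unlock(&rsp->barrier_mutex);
	}

The CPU 1 backtrace further down shows why the callbacks never drain in
time.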

Showing all locks held in the system:
2 locks held by khungtaskd/24:
#0: (rcu_read_lock){......}, at: [<ffffffff8131bb4c>]
check_hung_uninterruptible_tasks kernel/hung_task.c:168 [inline]
#0: (rcu_read_lock){......}, at: [<ffffffff8131bb4c>]
watchdog+0x11c/0xa20 kernel/hung_task.c:239
#1: (tasklist_lock){.+.+..}, at: [<ffffffff813fe314>]
debug_show_all_locks+0x79/0x218 kernel/locking/lockdep.c:4336
4 locks held by kworker/u4:1/64:
#0: ("%s""netns"){.+.+.+}, at: [<ffffffff81130c6c>]
process_one_work+0x73c/0x1530 kernel/workqueue.c:2085
#1: (net_cleanup_work){+.+.+.}, at: [<ffffffff81130ca4>]
process_one_work+0x774/0x1530 kernel/workqueue.c:2089
#2: (net_mutex){+.+.+.}, at: [<ffffffff822e634f>] cleanup_net+0x13f/0x8b0
net/core/net_namespace.c:439
#3: (rcu_preempt_state.barrier_mutex){+.+...}, at: [<ffffffff8124ac4d>]
_rcu_barrier+0x5d/0x340 kernel/rcu/tree.c:3637
4 locks held by kworker/1:2/622:
#0: ("events"){.+.+.+}, at: [<ffffffff81130c6c>]
process_one_work+0x73c/0x1530 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130ca4>]
process_one_work+0x774/0x1530 kernel/workqueue.c:2089
#2: (&type->s_umount_key#19){++++.+}, at: [<ffffffff81514509>]
deactivate_super+0x89/0xd0 fs/super.c:340
#3: (shrinker_rwsem){++++..}, at: [<ffffffff81449678>]
unregister_shrinker+0x58/0x230 mm/vmscan.c:300
1 lock held by rsyslogd/1892:
#0: (&f->f_pos_lock){+.+.+.}, at: [<ffffffff8156cc6c>]
__fdget_pos+0xac/0xd0 fs/file.c:781
2 locks held by getty/2019:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82814af2>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff81d36e52>]
n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
1 lock held by init/8890:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/8891:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/8892:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/8893:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/8894:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/8895:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2b686>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by syz-executor900/9646:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff81469e08>]
vm_mmap_pgoff+0x128/0x1b0 mm/util.c:327
1 lock held by syz-executor900/9647:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff81491fa7>]
__mm_populate+0x257/0x350 mm/gup.c:1136
1 lock held by syz-executor900/9652:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff81469e08>]
vm_mmap_pgoff+0x128/0x1b0 mm/util.c:327
1 lock held by syz-executor900/9653:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff81491fa7>]
__mm_populate+0x257/0x350 mm/gup.c:1136

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 24 Comm: khungtaskd Not tainted 4.9.135+ #62
ffff8801d9907d08 ffffffff81b42a19 0000000000000000 0000000000000000
0000000000000000 0000000000000001 ffffffff81098330 ffff8801d9907d40
ffffffff81b4db29 0000000000000000 0000000000000000 0000000000000003
Call Trace:
[<ffffffff81b42a19>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81b42a19>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
[<ffffffff81b4db29>] nmi_cpu_backtrace.cold.0+0x48/0x87
lib/nmi_backtrace.c:99
[<ffffffff81b4dabc>] nmi_trigger_cpumask_backtrace+0x12c/0x151
lib/nmi_backtrace.c:60
[<ffffffff81098434>] arch_trigger_cpumask_backtrace+0x14/0x20
arch/x86/kernel/apic/hw_nmi.c:37
[<ffffffff8131c0dd>] trigger_all_cpu_backtrace include/linux/nmi.h:58
[inline]
[<ffffffff8131c0dd>] check_hung_task kernel/hung_task.c:125 [inline]
[<ffffffff8131c0dd>] check_hung_uninterruptible_tasks
kernel/hung_task.c:182 [inline]
[<ffffffff8131c0dd>] watchdog+0x6ad/0xa20 kernel/hung_task.c:239
[<ffffffff811428dd>] kthread+0x26d/0x300 kernel/kthread.c:211
[<ffffffff82816bdc>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 2159 Comm: syz-executor900 Not tainted 4.9.135+ #62
task: ffff8801cc0c97c0 task.stack: ffff8801c5610000
RIP: 0010:[<ffffffff81b702ba>] [<ffffffff81b702ba>]
__const_udelay+0x2a/0x30 arch/x86/lib/delay.c:174
RSP: 0018:ffff8801c56170d8 EFLAGS: 00000082
RAX: 0000000080000001 RBX: ffffffff84b5db20 RCX: 0000000000000000
RDX: 0000000000000002 RSI: ffffffff81ba789b RDI: ffffffff841ed840
RBP: ffff8801c56170d8 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: 000000000000270e
R13: 0000000000000020 R14: fffffbfff096bbab R15: fffffbfff096bb6d
FS: 0000000002229880(0000) GS:ffff8801db700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000022328b8 CR3: 00000001cb08d000 CR4: 00000000001606b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Stack:
ffff8801c5617128 ffffffff81d643ff ffffffff81b6c908 ffffffff84b5db68
ffffffff84b5dd5a ffffffff84b5db20 000000000000005d ffffffff81d64570
dffffc0000000000 000000000000005d ffff8801c5617148 ffffffff81d6458f
Call Trace:
[<ffffffff81d643ff>] wait_for_xmitr+0x6f/0x1e0
drivers/tty/serial/8250/8250_port.c:2005
[<ffffffff81d6458f>] serial8250_console_putchar+0x1f/0x60
drivers/tty/serial/8250/8250_port.c:3103
[<ffffffff81d4c9a9>] uart_console_write+0x59/0xf0
drivers/tty/serial/serial_core.c:1866
[<ffffffff81d6f7a8>] serial8250_console_write+0x528/0x820
drivers/tty/serial/8250/8250_port.c:3169
[<ffffffff81d5d1bf>] univ8250_console_write+0x5f/0x70
drivers/tty/serial/8250/8250_core.c:594
[<ffffffff8122337d>] call_console_drivers.isra.0.constprop.15+0x1ad/0x360
kernel/printk/printk.c:1589
[<ffffffff812260af>] console_unlock+0x47f/0xb50 kernel/printk/printk.c:2449
[<ffffffff81226bc8>] vprintk_emit+0x448/0x790 kernel/printk/printk.c:1903
[<ffffffff81226f38>] vprintk+0x28/0x30 kernel/printk/printk.c:1913
[<ffffffff81226f5d>] vprintk_default+0x1d/0x30 kernel/printk/printk.c:1914
[<ffffffff81402ea2>] vprintk_func kernel/printk/internal.h:36 [inline]
[<ffffffff81402ea2>] printk+0xaf/0xd7 kernel/printk/printk.c:1975
[<ffffffff8222d57f>] lowmem_scan.cold.1+0x1f9/0x35b
drivers/staging/android/lowmemorykiller.c:177
[<ffffffff81449c16>] do_shrink_slab mm/vmscan.c:398 [inline]
[<ffffffff81449c16>] shrink_slab.part.8+0x3c6/0xa00 mm/vmscan.c:501
[<ffffffff8145574d>] shrink_slab mm/vmscan.c:465 [inline]
[<ffffffff8145574d>] shrink_node+0x1ed/0x740 mm/vmscan.c:2602
[<ffffffff81456017>] shrink_zones mm/vmscan.c:2749 [inline]
[<ffffffff81456017>] do_try_to_free_pages mm/vmscan.c:2791 [inline]
[<ffffffff81456017>] try_to_free_pages+0x377/0xb80 mm/vmscan.c:3002
[<ffffffff81428951>] __perform_reclaim mm/page_alloc.c:3324 [inline]
[<ffffffff81428951>] __alloc_pages_direct_reclaim mm/page_alloc.c:3345
[inline]
[<ffffffff81428951>] __alloc_pages_slowpath mm/page_alloc.c:3697 [inline]
[<ffffffff81428951>] __alloc_pages_nodemask+0x981/0x1bd0
mm/page_alloc.c:3862
[<ffffffff814eb737>] __alloc_pages include/linux/gfp.h:433 [inline]
[<ffffffff814eb737>] __alloc_pages_node include/linux/gfp.h:446 [inline]
[<ffffffff814eb737>] alloc_slab_page mm/slub.c:1408 [inline]
[<ffffffff814eb737>] allocate_slab mm/slub.c:1557 [inline]
[<ffffffff814eb737>] new_slab+0x367/0x3d0 mm/slub.c:1635
[<ffffffff814ed8cd>] new_slab_objects mm/slub.c:2419 [inline]
[<ffffffff814ed8cd>] ___slab_alloc.constprop.33+0x2ed/0x470 mm/slub.c:2576
[<ffffffff814edaa0>] __slab_alloc.isra.25.constprop.32+0x50/0xa0
mm/slub.c:2618
[<ffffffff814edd02>] slab_alloc_node mm/slub.c:2681 [inline]
[<ffffffff814edd02>] slab_alloc mm/slub.c:2723 [inline]
[<ffffffff814edd02>] kmem_cache_alloc+0x212/0x2b0 mm/slub.c:2728
[<ffffffff8153ec98>] getname_flags+0xc8/0x550 fs/namei.c:137
[<ffffffff8153f139>] getname+0x19/0x20 fs/namei.c:208
[<ffffffff8150606b>] do_sys_open+0x20b/0x5c0 fs/open.c:1066
[<ffffffff8150644d>] SYSC_open fs/open.c:1090 [inline]
[<ffffffff8150644d>] SyS_open+0x2d/0x40 fs/open.c:1085
[<ffffffff810056ef>] do_syscall_64+0x19f/0x550 arch/x86/entry/common.c:285
[<ffffffff82816a13>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb
Code: 00 55 48 8d 0c bd 00 00 00 00 65 48 8b 15 67 5e 4a 7e 48 8d 14 92
48 89 e5 48 89 c8 48 8d 14 92 f7 e2 48 8d 7a 01 e8 b6 ff ff ff <5d> c3
0f 1f 40 00 48 69 cf 1c 43 00 00 55 65 48 8b 15 38 5e 4a
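
The backtrace above shows where CPU 1 is spending its time: the
lowmemorykiller OOM reporting printk()s from the direct-reclaim path, and
the serial console pushes each character out synchronously, busy-waiting
on the UART transmitter. A from-memory, abridged sketch of the polling
loop in wait_for_xmitr() (drivers/tty/serial/8250/8250_port.c) -- not a
verbatim copy; the real function also waits on modem status when flow
control is enabled:

	/* Abridged sketch of wait_for_xmitr(), 8250_port.c; details
	 * are approximate. */
	static void wait_for_xmitr(struct uart_8250_port *up, int bits)
	{
		unsigned int status, tmout = 10000;

		/* Poll the line status register until the transmitter
		 * is ready, spinning ~1us per iteration in udelay() --
		 * the __const_udelay frame the NMI sampled in the RIP
		 * above. */
		for (;;) {
			status = serial_in(up, UART_LSR);
			if ((status & bits) == bits)
				break;
			if (--tmout == 0)
				break;
			udelay(1);
		}
	}

The likely picture is that the flood of OOM messages to the slow serial
console starves the rest of the system, so the rcu_barrier() completion
in the first trace does not fire within the 140-second hung-task window.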


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches