INFO: task hung in get_info (2)

syzbot

Jul 28, 2019, 4:20:07 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 8fe42840 Merge 4.9.141 into android-4.9
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=158243a2600000
kernel config: https://syzkaller.appspot.com/x/.config?x=22a5ba9f73b6da1d
dashboard link: https://syzkaller.appspot.com/bug?extid=89cf3aa8c7837bf2c968
compiler: gcc (GCC) 8.0.1 20180413 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+89cf3a...@syzkaller.appspotmail.com

Free memory is -13444kB above reserved
lowmemorykiller: Killing 'syz-executor.3' (31883) (tgid 31881), adj 1000,
to free 51404kB on behalf of 'syz-fuzzer' (2044) because
cache 424kB is below limit 6144kB for oom_score_adj 0
Free memory is -13444kB above reserved
INFO: task syz-executor.5:2088 blocked for more than 140 seconds.
Not tainted 4.9.141+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.5 D24648 2088 2082 0x00000000
ffff8801d114df00 0000000000000000 ffff8801cf741b80 ffff8801d9942f80
ffff8801db621018 ffff8801b30cf7d8 ffffffff828075c2 0000000000000000
ffff8801d114e7b0 ffffed003a229cf5 00ff8801d114df00 ffff8801db6218f0
Call Trace:
[<ffffffff82808aef>] schedule+0x7f/0x1b0 kernel/sched/core.c:3553
[<ffffffff828094a3>] schedule_preempt_disabled+0x13/0x20
kernel/sched/core.c:3586
[<ffffffff8280b51d>] __mutex_lock_common kernel/locking/mutex.c:582
[inline]
[<ffffffff8280b51d>] mutex_lock_nested+0x38d/0x900
kernel/locking/mutex.c:621
[<ffffffff8244fa1c>] xt_find_table_lock+0x3c/0x3d0
net/netfilter/x_tables.c:1027
[<ffffffff8261a90d>] get_info+0x13d/0x510
net/ipv6/netfilter/ip6_tables.c:1012
[<ffffffff8261c36b>] do_arpt_get_ctl+0x38b/0x860
net/ipv4/netfilter/arp_tables.c:1492
[<ffffffff823e2840>] nf_sockopt net/netfilter/nf_sockopt.c:103 [inline]
[<ffffffff823e2840>] nf_getsockopt+0x70/0xd0 net/netfilter/nf_sockopt.c:121
[<ffffffff824bd877>] ip_getsockopt+0x127/0x170 net/ipv4/ip_sockglue.c:1557
[<ffffffff824e0228>] tcp_getsockopt+0x88/0xe0 net/ipv4/tcp.c:3106
[<ffffffff822a706a>] sock_common_getsockopt+0x9a/0xe0 net/core/sock.c:2665
[<ffffffff822a4fc0>] SYSC_getsockopt net/socket.c:1816 [inline]
[<ffffffff822a4fc0>] SyS_getsockopt+0x150/0x240 net/socket.c:1798
[<ffffffff810056ef>] do_syscall_64+0x19f/0x550 arch/x86/entry/common.c:285
[<ffffffff82817893>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb
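
For context, the blocked task above is sleeping in xt_find_table_lock()
on the per-family xt[i].mutex, reached from getsockopt(). Below is a
minimal userspace sketch of a call that takes this path. It is
illustrative only: syzbot has no reproducer for this crash,
query_ip6t_info() is a made-up helper name, and since the symbolized
trace mixes arp_tables and ip6_tables frames, the ip6_tables variant
shown here is an assumption based on the get_info() frame at
net/ipv6/netfilter/ip6_tables.c:1012.

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/netfilter_ipv6/ip6_tables.h>

/* Ask the kernel for ip6tables table info; needs CAP_NET_ADMIN
 * (e.g. inside an unprivileged user+net namespace). */
int query_ip6t_info(void)
{
	struct ip6t_getinfo info;
	socklen_t len = sizeof(info);
	int fd = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);

	if (fd < 0)
		return -1;
	memset(&info, 0, sizeof(info));
	strcpy(info.name, "filter");	/* table name to look up */
	/* Kernel side: nf_getsockopt() -> get_info() ->
	 * xt_find_table_lock(), which sleeps on xt[NFPROTO_IPV6].mutex;
	 * any task holding that mutex long enough produces exactly the
	 * hung-task report above. */
	return getsockopt(fd, IPPROTO_IPV6, IP6T_SO_GET_INFO, &info, &len);
}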

Showing all locks held in the system:
2 locks held by khungtaskd/24:
#0: (rcu_read_lock){......}, at: [<ffffffff8131c0cc>]
check_hung_uninterruptible_tasks kernel/hung_task.c:168 [inline]
#0: (rcu_read_lock){......}, at: [<ffffffff8131c0cc>]
watchdog+0x11c/0xa20 kernel/hung_task.c:239
#1: (tasklist_lock){.+.+..}, at: [<ffffffff813fe63f>]
debug_show_all_locks+0x79/0x218 kernel/locking/lockdep.c:4336
1 lock held by rsyslogd/1891:
#0: (&f->f_pos_lock){+.+.+.}, at: [<ffffffff8156cc7c>]
__fdget_pos+0xac/0xd0 fs/file.c:781
2 locks held by getty/2018:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82815952>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+.+.}, at: [<ffffffff81d37362>]
n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
1 lock held by syz-executor.5/2088:
#0: (&xt[i].mutex){+.+.+.}, at: [<ffffffff8244fa1c>]
xt_find_table_lock+0x3c/0x3d0 net/netfilter/x_tables.c:1027
3 locks held by kworker/1:5/6287:
#0: ("%s"("ipv6_addrconf")){.+.+..}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((addr_chk_work).work){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
#2: (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20
net/core/rtnetlink.c:70
3 locks held by kworker/u4:31/15694:
#0: ("%s""netns"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: (net_cleanup_work){+.+.+.}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
#2: (net_mutex){+.+.+.}, at: [<ffffffff822e681f>] cleanup_net+0x13f/0x8b0
net/core/net_namespace.c:439
1 lock held by syz-executor.0/21354:
#0: (&xt[i].mutex){+.+.+.}, at: [<ffffffff8244fa1c>]
xt_find_table_lock+0x3c/0x3d0 net/netfilter/x_tables.c:1027
2 locks held by syz-executor.2/23093:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82815952>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&tty->atomic_write_lock){+.+.+.}, at: [<ffffffff81d1f7e1>]
tty_write_lock+0x21/0x60 drivers/tty/tty_io.c:1107
1 lock held by syz-executor.3/31883:
#0: (net_mutex){+.+.+.}, at: [<ffffffff822e70e5>] copy_net_ns+0x155/0x330
net/core/net_namespace.c:406

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.9.141+ #1
ffff8801d9907d08 ffffffff81b42e79 0000000000000000 0000000000000001
0000000000000001 0000000000000001 ffffffff810983b0 ffff8801d9907d40
ffffffff81b4df89 0000000000000001 0000000000000000 0000000000000002
Call Trace:
[<ffffffff81b42e79>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81b42e79>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
[<ffffffff81b4df89>] nmi_cpu_backtrace.cold.0+0x48/0x87
lib/nmi_backtrace.c:99
[<ffffffff81b4df1c>] nmi_trigger_cpumask_backtrace+0x12c/0x151
lib/nmi_backtrace.c:60
[<ffffffff810984b4>] arch_trigger_cpumask_backtrace+0x14/0x20
arch/x86/kernel/apic/hw_nmi.c:37
[<ffffffff8131c65d>] trigger_all_cpu_backtrace include/linux/nmi.h:58
[inline]
[<ffffffff8131c65d>] check_hung_task kernel/hung_task.c:125 [inline]
[<ffffffff8131c65d>] check_hung_uninterruptible_tasks
kernel/hung_task.c:182 [inline]
[<ffffffff8131c65d>] watchdog+0x6ad/0xa20 kernel/hung_task.c:239
[<ffffffff81142c3d>] kthread+0x26d/0x300 kernel/kthread.c:211
[<ffffffff82817a5c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 31916 Comm: syz-executor.1 Not tainted 4.9.141+ #1
task: ffff8801d9fe8000 task.stack: ffff88010b150000
RIP: 0010:[<ffffffff8120c91d>]  [<ffffffff8120c91d>]
lock_acquire+0x17d/0x3e0 kernel/locking/lockdep.c:3760
RSP: 0018:ffff88010b1572f0 EFLAGS: 00000286
RAX: 0000000000000007 RBX: ffff8801976a07c0 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff8801d9fe8950 RDI: 0000000000000246
RBP: ffff88010b157310 R08: ffff8801d9fe8970 R09: 069fa0192f184961
R10: ffff8801d9fe8000 R11: 0000000000000001 R12: ffff8801d9fe8000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS: 00007fc669a95700(0000) GS:ffff8801db600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f76dc47afda CR3: 000000017b006000 CR4: 00000000001606b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Stack:
ffff8801976a0000 dffffc0000000000 ffff8801976a07c0 ffff8801976a0738
ffff88010b157330 ffffffff82816c26 ffffffff8141a061 ffff8801976a0000
ffff88010b157378 ffffffff8141a061 ffffffff81419f70 ffffed0032ed40e7
Call Trace:
[<ffffffff82816c26>] __raw_spin_lock include/linux/spinlock_api_smp.h:144
[inline]
[<ffffffff82816c26>] _raw_spin_lock+0x36/0x50 kernel/locking/spinlock.c:151
[<ffffffff8141a061>] spin_lock include/linux/spinlock.h:302 [inline]
[<ffffffff8141a061>] task_lock include/linux/sched.h:3257 [inline]
[<ffffffff8141a061>] find_lock_task_mm+0xf1/0x270 mm/oom_kill.c:115
[<ffffffff821effdf>] lowmem_scan+0x34f/0xaf0
drivers/staging/android/lowmemorykiller.c:134
[<ffffffff81449cc6>] do_shrink_slab mm/vmscan.c:398 [inline]
[<ffffffff81449cc6>] shrink_slab.part.8+0x3c6/0xa00 mm/vmscan.c:501
[<ffffffff814557fd>] shrink_slab mm/vmscan.c:465 [inline]
[<ffffffff814557fd>] shrink_node+0x1ed/0x740 mm/vmscan.c:2602
[<ffffffff814560c7>] shrink_zones mm/vmscan.c:2749 [inline]
[<ffffffff814560c7>] do_try_to_free_pages mm/vmscan.c:2791 [inline]
[<ffffffff814560c7>] try_to_free_pages+0x377/0xb80 mm/vmscan.c:3002
[<ffffffff81428a01>] __perform_reclaim mm/page_alloc.c:3324 [inline]
[<ffffffff81428a01>] __alloc_pages_direct_reclaim mm/page_alloc.c:3345
[inline]
[<ffffffff81428a01>] __alloc_pages_slowpath mm/page_alloc.c:3697 [inline]
[<ffffffff81428a01>] __alloc_pages_nodemask+0x981/0x1bd0
mm/page_alloc.c:3862
[<ffffffff814c9e8b>] __alloc_pages include/linux/gfp.h:433 [inline]
[<ffffffff814c9e8b>] __alloc_pages_node include/linux/gfp.h:446 [inline]
[<ffffffff814c9e8b>] alloc_pages_node include/linux/gfp.h:460 [inline]
[<ffffffff814c9e8b>] __vmalloc_area_node mm/vmalloc.c:1644 [inline]
[<ffffffff814c9e8b>] __vmalloc_node_range+0x25b/0x600 mm/vmalloc.c:1702
[<ffffffff814ca63b>] __vmalloc_node mm/vmalloc.c:1745 [inline]
[<ffffffff814ca63b>] __vmalloc_node_flags mm/vmalloc.c:1759 [inline]
[<ffffffff814ca63b>] vzalloc+0x5b/0x70 mm/vmalloc.c:1791
[<ffffffff827cfbce>] alloc_one_pg_vec_page net/packet/af_packet.c:4208
[inline]
[<ffffffff827cfbce>] alloc_pg_vec net/packet/af_packet.c:4233 [inline]
[<ffffffff827cfbce>] packet_set_ring+0x51e/0x1810
net/packet/af_packet.c:4323
[<ffffffff827d29d3>] packet_setsockopt+0xfa3/0x2630
net/packet/af_packet.c:3685
[<ffffffff822a4d76>] SYSC_setsockopt net/socket.c:1785 [inline]
[<ffffffff822a4d76>] SyS_setsockopt+0x166/0x260 net/socket.c:1764
[<ffffffff810056ef>] do_syscall_64+0x19f/0x550 arch/x86/entry/common.c:285
[<ffffffff82817893>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb
Code: 07 83 c0 03 38 d0 7c 08 84 d2 0f 85 c8 01 00 00 41 c7 84 24 ac 08
00 00 00 00 00 00 48 89 df 57 9d 0f 1f 44 00 00 48 83 c4 40 5b <41> 5c
41 5d 41 5e 41 5f 5d c3 4c 89 5d c8 65 ff 05 5e b4 e0 7e
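
CPU 0, meanwhile, is in the allocation path: packet_set_ring() is
building its ring buffer via vzalloc() and has entered direct reclaim,
where the lowmemorykiller shrinker is picking victims (matching the
lowmemorykiller messages at the top of this report). A sketch of the
userspace call shape that reaches packet_set_ring(); illustrative only,
since setup_rx_ring() and the ring geometry are made up rather than
taken from the report:

#include <sys/socket.h>
#include <linux/if_packet.h>

/* Request an RX ring on an AF_PACKET socket; needs CAP_NET_RAW.
 * tp_frame_nr must equal (tp_block_size / tp_frame_size) * tp_block_nr. */
int setup_rx_ring(int fd)
{
	struct tpacket_req req = {
		.tp_block_size = 1 << 16,	/* 64 KiB per block */
		.tp_block_nr   = 64,		/* 4 MiB of ring memory */
		.tp_frame_size = 1 << 11,	/* 2 KiB frames */
		.tp_frame_nr   = (1 << 16) / (1 << 11) * 64,
	};
	/* Kernel side: packet_setsockopt() -> packet_set_ring() ->
	 * alloc_pg_vec() -> vzalloc(); under memory pressure that
	 * allocation drops into direct reclaim, as in the trace above. */
	return setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));
}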


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Nov 25, 2019, 2:20:06 AM
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.