INFO: task hung in drain_all_pages (2)


syzbot

Apr 28, 2020, 9:04:15 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 050272a0 Linux 4.14.177
git tree: linux-4.14.y
console output: https://syzkaller.appspot.com/x/log.txt?x=142ee5d8100000
kernel config: https://syzkaller.appspot.com/x/.config?x=b24dc669afb42f8b
dashboard link: https://syzkaller.appspot.com/bug?extid=c88f4eaac5c23b4304b8
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+c88f4e...@syzkaller.appspotmail.com

NOHZ: local_softirq_pending 08
NOHZ: local_softirq_pending 08
NOHZ: local_softirq_pending 08
NOHZ: local_softirq_pending 08
NOHZ: local_softirq_pending 08
INFO: task syz-executor.2:21656 blocked for more than 140 seconds.
Not tainted 4.14.177-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.2 D27552 21656 6374 0x00000004
Call Trace:
schedule+0x8d/0x1b0 kernel/sched/core.c:3428
schedule_timeout+0x946/0xe40 kernel/time/timer.c:1723
do_wait_for_common kernel/sched/completion.c:91 [inline]
__wait_for_common kernel/sched/completion.c:112 [inline]
wait_for_common kernel/sched/completion.c:123 [inline]
wait_for_completion+0x241/0x390 kernel/sched/completion.c:144
flush_work+0x3f5/0x780 kernel/workqueue.c:2893
drain_all_pages+0x39a/0x570 mm/page_alloc.c:2536
__alloc_pages_direct_reclaim mm/page_alloc.c:3616 [inline]
__alloc_pages_slowpath+0xa16/0x26c0 mm/page_alloc.c:3989
__alloc_pages_nodemask+0x5d3/0x700 mm/page_alloc.c:4198
alloc_pages_vma+0xc2/0x4a0 mm/mempolicy.c:2077
alloc_zeroed_user_highpage_movable include/linux/highmem.h:184 [inline]
do_anonymous_page mm/memory.c:3133 [inline]
handle_pte_fault mm/memory.c:3987 [inline]
__handle_mm_fault+0x17b6/0x3280 mm/memory.c:4113
handle_mm_fault+0x288/0x7a0 mm/memory.c:4150
__do_page_fault+0x4bc/0xb40 arch/x86/mm/fault.c:1442
page_fault+0x45/0x50 arch/x86/entry/entry_64.S:1122
RIP: 165de2ff:0x7f14a72b79c0
RSP: a72b7700:00007fff165de470 EFLAGS: 00000000

Showing all locks held in the system:
1 lock held by khungtaskd/1058:
#0: (tasklist_lock){.+.+}, at: [<ffffffff81465d43>] debug_show_all_locks+0x7c/0x21a kernel/locking/lockdep.c:4548
1 lock held by in:imklog/6050:
#0: (&f->f_pos_lock){+.+.}, at: [<ffffffff8191b8f6>] __fdget_pos+0xa6/0xc0 fs/file.c:769
2 locks held by syz-executor.2/21656:
#0: (&mm->mmap_sem){++++}, at: [<ffffffff8128b84d>] __do_page_fault+0x2cd/0xb40 arch/x86/mm/fault.c:1371
#1: (pcpu_drain_mutex){+.+.}, at: [<ffffffff816f3b9a>] drain_all_pages+0x4a/0x570 mm/page_alloc.c:2493

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 1058 Comm: khungtaskd Not tainted 4.14.177-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x13e/0x194 lib/dump_stack.c:58
nmi_cpu_backtrace.cold+0x57/0x93 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x139/0x17e lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:140 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:195 [inline]
watchdog+0x5e2/0xb80 kernel/hung_task.c:274
kthread+0x30d/0x420 kernel/kthread.c:232
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 11906 Comm: syz-executor.3 Not tainted 4.14.177-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
task: ffff88809f04e440 task.stack: ffff88820b800000
RIP: 0010:unwind_next_frame+0xeb9/0x17a0 arch/x86/kernel/unwind_orc.c:474
RSP: 0000:ffff88820b807798 EFLAGS: 00000807
RAX: ffffffff88e8a1b5 RBX: ffff88820b8078e0 RCX: 0000000000000000
RDX: ffff88820b807920 RSI: 0000000000000000 RDI: ffffffff88e8a1b4
RBP: 1ffff11041700efa R08: 1ffffffff11d1436 R09: ffff88820b807958
R10: ffff88820b807915 R11: 0000000000058071 R12: ffffffff88e8a1b2
R13: ffff88820b807918 R14: ffff88820b807968 R15: 0000000000000001
FS: 00007f683f5d0700(0000) GS:ffff8880aeb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000004e7682e8 CR3: 000000005aa04000 CR4: 00000000001406e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
__unwind_start arch/x86/kernel/unwind_orc.c:578 [inline]
__unwind_start+0x450/0x800 arch/x86/kernel/unwind_orc.c:508
unwind_start arch/x86/include/asm/unwind.h:60 [inline]
__save_stack_trace+0x4a/0xd0 arch/x86/kernel/stacktrace.c:43
save_stack+0x32/0xa0 mm/kasan/kasan.c:447
set_track mm/kasan/kasan.c:459 [inline]
kasan_kmalloc mm/kasan/kasan.c:551 [inline]
kasan_kmalloc+0xbf/0xe0 mm/kasan/kasan.c:529
kmem_cache_alloc+0x127/0x770 mm/slab.c:3552
__sigqueue_alloc+0x1b8/0x3e0 kernel/signal.c:400
__send_signal+0x194/0x1280 kernel/signal.c:1097
specific_send_sig_info kernel/signal.c:1208 [inline]
force_sig_info+0x240/0x340 kernel/signal.c:1260
force_sig_info_fault.constprop.0+0x185/0x260 arch/x86/mm/fault.c:225
__bad_area_nosemaphore+0x1d9/0x2a0 arch/x86/mm/fault.c:940
__do_page_fault+0x859/0xb40 arch/x86/mm/fault.c:1412
page_fault+0x45/0x50 arch/x86/entry/entry_64.S:1122
RIP: 0bed:0x4ce103
RSP: 508600:000000000078c0e0 EFLAGS: ffffffff
Code: 02 48 b9 00 00 00 00 00 fc ff df 48 8d 53 40 48 89 f8 48 c1 e8 03 0f b6 34 08 49 8d 44 24 03 49 89 c0 49 c1 e8 03 41 0f b6 0c 08 <49> 89 f8 41 83 e0 07 44 38 c6 41 0f 9e c0 40 84 f6 40 0f 95 c6


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Sep 1, 2020, 12:02:14 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.