INFO: task can't die in shrink_lruvec

syzbot

Sep 6, 2021, 11:02:25 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 2265c5286967 Add linux-next specific files for 20210726
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=164b81f4300000
kernel config: https://syzkaller.appspot.com/x/.config?x=531dbd796dcea4b4
dashboard link: https://syzkaller.appspot.com/bug?extid=5463aeaea0e4f896e3d6
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.1
CC: [ak...@linux-foundation.org linux-...@vger.kernel.org linu...@kvack.org]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5463ae...@syzkaller.appspotmail.com
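
For reference, a minimal sketch of where this tag usually goes among the trailers of the fixing commit; the subject line, description, and sign-off below are hypothetical placeholders, only the Reported-by line comes from this report:

    mm/vmscan: <hypothetical subject describing the fix>

    <hypothetical description of the change>

    Reported-by: syzbot+5463ae...@syzkaller.appspotmail.com
    Signed-off-by: <hypothetical author and address>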

INFO: task syz-executor.4:16007 can't die for more than 143 seconds.
task:syz-executor.4 state:R running task stack:22832 pid:16007 ppid: 8532 flags:0x00004006
Call Trace:
context_switch kernel/sched/core.c:4700 [inline]
__schedule+0x949/0x2710 kernel/sched/core.c:5957
preempt_schedule_common+0x45/0xc0 kernel/sched/core.c:6117
preempt_schedule_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:35
__raw_spin_unlock_irq include/linux/spinlock_api_smp.h:169 [inline]
_raw_spin_unlock_irq+0x3c/0x40 kernel/locking/spinlock.c:199
shrink_inactive_list mm/vmscan.c:2288 [inline]
shrink_list mm/vmscan.c:2512 [inline]
shrink_lruvec+0xbca/0x2600 mm/vmscan.c:2823
shrink_node_memcgs mm/vmscan.c:3012 [inline]
shrink_node+0x860/0x1ef0 mm/vmscan.c:3135
shrink_zones mm/vmscan.c:3338 [inline]
do_try_to_free_pages+0x387/0x1490 mm/vmscan.c:3393
try_to_free_pages+0x29f/0x750 mm/vmscan.c:3628
__perform_reclaim mm/page_alloc.c:4591 [inline]
__alloc_pages_direct_reclaim mm/page_alloc.c:4612 [inline]
__alloc_pages_slowpath.constprop.0+0x828/0x21b0 mm/page_alloc.c:5016
__alloc_pages+0x412/0x500 mm/page_alloc.c:5387
alloc_pages+0x1a3/0x2d0 mm/mempolicy.c:2278
alloc_slab_page mm/slub.c:1690 [inline]
allocate_slab+0x34c/0x4b0 mm/slub.c:1838
new_slab mm/slub.c:1893 [inline]
new_slab_objects mm/slub.c:2639 [inline]
___slab_alloc+0x4ba/0x820 mm/slub.c:2802
__slab_alloc.constprop.0+0xa7/0xf0 mm/slub.c:2842
slab_alloc_node mm/slub.c:2924 [inline]
slab_alloc mm/slub.c:2966 [inline]
kmem_cache_alloc+0x3e1/0x4a0 mm/slub.c:2971
mempool_alloc+0x146/0x350 mm/mempool.c:393
bio_alloc_bioset+0x2ff/0x4a0 block/bio.c:433
bio_clone_fast+0x21/0x1c0 block/bio.c:666
bio_split+0xc7/0x2b0 block/bio.c:1443
blk_bio_segment_split block/blk-merge.c:290 [inline]
__blk_queue_split+0x102d/0x1550 block/blk-merge.c:339
blk_mq_submit_bio+0x1ca/0x1860 block/blk-mq.c:2190
__submit_bio_noacct_mq block/blk-core.c:1011 [inline]
submit_bio_noacct block/blk-core.c:1044 [inline]
submit_bio_noacct+0xad2/0xf20 block/blk-core.c:1027
submit_bio+0x1ea/0x470 block/blk-core.c:1106
mpage_bio_submit fs/mpage.c:66 [inline]
do_mpage_readpage+0xfee/0x1f80 fs/mpage.c:314
mpage_readahead+0x304/0x750 fs/mpage.c:389
read_pages+0x1e4/0xfa0 mm/readahead.c:130
page_cache_ra_unbounded+0x64b/0x940 mm/readahead.c:239
do_page_cache_ra+0xf9/0x140 mm/readahead.c:269
do_sync_mmap_readahead mm/filemap.c:2970 [inline]
filemap_fault+0x1553/0x2780 mm/filemap.c:3063
__do_fault+0x10d/0x4f0 mm/memory.c:3858
do_cow_fault mm/memory.c:4203 [inline]
do_fault mm/memory.c:4304 [inline]
handle_pte_fault mm/memory.c:4560 [inline]
__handle_mm_fault+0x37e6/0x5150 mm/memory.c:4695
handle_mm_fault+0x1c8/0x790 mm/memory.c:4793
do_user_addr_fault+0x48b/0x11c0 arch/x86/mm/fault.c:1390
handle_page_fault arch/x86/mm/fault.c:1475 [inline]
exc_page_fault+0x9e/0x180 arch/x86/mm/fault.c:1531
asm_exc_page_fault+0x1e/0x30 arch/x86/include/asm/idtentry.h:568
RIP: 0033:0x407a4b
RSP: 002b:00007ffd27252a20 EFLAGS: 00010246
RAX: 0000000020000080 RBX: 0000000000970000 RCX: 0000000000000000
RDX: 0000000000000011 RSI: 0000000000000000 RDI: 00000000029d22f0
RBP: 00007ffd27252b18 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000002 R11: 000000005fd1cc83 R12: 0000000000059739
R13: 00000000000003e8 R14: 000000000056bf80 R15: 000000000005970d

Showing all locks held in the system:
1 lock held by systemd/1:
2 locks held by kworker/u4:3/54:
1 lock held by khungtaskd/1662:
#0: ffffffff8b97eac0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6446
1 lock held by khugepaged/1669:
1 lock held by kswapd0/2160:
1 lock held by kswapd1/2161:
8 locks held by kworker/1:1H/2169:
2 locks held by kworker/1:3/4862:
1 lock held by systemd-journal/4866:
1 lock held by systemd-udevd/4878:
1 lock held by systemd-timesyn/4964:
1 lock held by in:imklog/8187:
1 lock held by cron/8178:
2 locks held by syz-fuzzer/8493:
2 locks held by syz-fuzzer/8494:
#0: ffff88802e9fd550 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:834 [inline]
#0: ffff88802e9fd550 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x159e/0x2780 mm/filemap.c:3070
#1: ffffffff8ba9f128 (pcpu_drain_mutex){+.+.}-{3:3}, at: __drain_all_pages+0x4f/0x6c0 mm/page_alloc.c:3187
3 locks held by kworker/0:5/10323:
1 lock held by syz-executor.4/16007:

=============================================



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Sep 25, 2021, 3:54:18 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.