[v5.15] INFO: task hung in nilfs_segctor_thread


syzbot

Apr 9, 2023, 11:55:50 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d86dfc4d95cd Linux 5.15.106
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1175b5e9c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=dca379fe384dda80
dashboard link: https://syzkaller.appspot.com/bug?extid=814f1cd7c4a30fc9e739
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2c159eb4fcae/disk-d86dfc4d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5f50187f87c7/vmlinux-d86dfc4d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/f787f3f09c09/bzImage-d86dfc4d.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+814f1cd7c4a30fc9e739@syzkaller.appspotmail.com

INFO: task segctord:4862 blocked for more than 144 seconds.
Not tainted 5.15.106-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:segctord state:D stack:28192 pid: 4862 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
rwsem_down_write_slowpath+0xebb/0x15c0 kernel/locking/rwsem.c:1157
__down_write_common kernel/locking/rwsem.c:1284 [inline]
__down_write kernel/locking/rwsem.c:1293 [inline]
down_write+0x164/0x170 kernel/locking/rwsem.c:1542
nilfs_transaction_lock+0x25c/0x4f0 fs/nilfs2/segment.c:357
nilfs_segctor_thread_construct fs/nilfs2/segment.c:2488 [inline]
nilfs_segctor_thread+0x542/0x1140 fs/nilfs2/segment.c:2572
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
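
Reading the trace: segctord is stuck in down_write(&nilfs->ns_segctor_sem) inside nilfs_transaction_lock() (fs/nilfs2/segment.c:357), so some other task must be sitting on the semaphore. As a rough orientation only, the rwsem pattern involved looks like the sketch below; it is a simplified illustration based on fs/nilfs2/segment.c in v5.15, not the verbatim kernel code, and the *_sketch names are stand-ins.

/*
 * Simplified sketch of the nilfs2 rwsem pattern implicated above;
 * not the verbatim fs/nilfs2 code, the *_sketch names are stand-ins.
 */
#include <linux/rwsem.h>

struct nilfs_sketch {
	/* ns_segctor_sem is the real field name in struct the_nilfs */
	struct rw_semaphore ns_segctor_sem;
};

/*
 * Regular file operations enter a transaction as readers, so many
 * syscalls may run in parallel (cf. nilfs_transaction_begin()).
 */
static void transaction_begin_sketch(struct nilfs_sketch *nilfs)
{
	down_read(&nilfs->ns_segctor_sem);
}

/*
 * segctord serializes segment construction as the sole writer
 * (cf. nilfs_transaction_lock(), where this report blocks). One
 * reader that never drops the sem keeps down_write() waiting
 * indefinitely, which is what trips the hung-task watchdog here.
 */
static void transaction_lock_sketch(struct nilfs_sketch *nilfs)
{
	down_write(&nilfs->ns_segctor_sem);
}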

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c91b920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
3 locks held by kworker/u4:2/154:
2 locks held by getty/3266:
#0: ffff88814b465098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc90002bb32e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1da0 drivers/tty/n_tty.c:2147
3 locks held by kworker/1:6/3682:
#0: ffff888011c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc900038dfd20 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
#2: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:74
4 locks held by kworker/u4:9/4042:
#0: ffff888011db5138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000487fd20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
#2: ffffffff8d9c7d90 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:558
#3: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: ip6mr_sk_done+0xa0/0x2a0 net/ipv6/ip6mr.c:1584
2 locks held by kworker/1:16/4272:
#0: ffff888011c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc90008e5fd20 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
6 locks held by syz-executor.2/4853:
1 lock held by segctord/4862:
#0: ffff888037d2b2a0 (&nilfs->ns_segctor_sem){++++}-{3:3}, at: nilfs_transaction_lock+0x25c/0x4f0 fs/nilfs2/segment.c:357
2 locks held by kworker/u4:23/5934:
#0: ffff888011c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9001061fd20 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
2 locks held by syz-executor.4/6273:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
#1: ffffffff8c91fe68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#1: ffffffff8c91fe68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:840
1 lock held by syz-executor.4/6276:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
1 lock held by syz-executor.0/6275:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
1 lock held by syz-executor.0/6278:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
1 lock held by syz-executor.1/6274:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
1 lock held by syz-executor.2/6294:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
1 lock held by syz-executor.2/6296:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
1 lock held by syz-executor.3/6301:
#0: ffff88801b513468 (&lo->lo_mutex){+.+.}-{3:3}, at: loop_set_status+0x7c/0x8e0 drivers/block/loop.c:1521
1 lock held by syz-executor.5/6306:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584
1 lock held by syz-executor.5/6307:
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9d3ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5584

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 5.15.106-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 4853 Comm: syz-executor.2 Not tainted 5.15.106-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
RIP: 0010:nilfs_direct_lookup+0x3/0xb0 fs/nilfs2/direct.c:37
Code: 00 00 00 00 00 fc ff df 80 3c 08 00 74 08 48 89 df e8 51 9d a5 fe 48 c7 03 00 f3 c4 8a 31 c0 5b c3 66 0f 1f 44 00 00 55 41 57 <41> 56 53 49 89 ce 89 d5 48 89 f3 49 89 ff e8 ca 54 5c fe bf 06 00
RSP: 0018:ffffc900053b7028 EFLAGS: 00000246
RAX: 1ffffffff1589e60 RBX: ffff88803aeaa760 RCX: ffffc900053b7160
RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff88803aeaa688
RBP: ffffc900053b70f0 R08: dffffc0000000000 R09: ffffed10075d54da
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000001
R13: ffff88803aeaa688 R14: ffffffff8ac4f300 R15: 0000000000000000
FS: 00007fd4efaba700(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f190160e300 CR3: 000000003501b000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
nilfs_bmap_lookup_at_level+0x107/0x360 fs/nilfs2/bmap.c:69
nilfs_bmap_lookup fs/nilfs2/bmap.h:170 [inline]
nilfs_mdt_submit_block+0x2a6/0x9d0 fs/nilfs2/mdt.c:142
nilfs_mdt_read_block+0xf0/0x490 fs/nilfs2/mdt.c:175
nilfs_mdt_get_block+0x123/0xc80 fs/nilfs2/mdt.c:250
nilfs_palloc_get_block+0x142/0x240 fs/nilfs2/alloc.c:216
nilfs_palloc_get_desc_block fs/nilfs2/alloc.c:265 [inline]
nilfs_palloc_prepare_alloc_entry+0x3e0/0x1000 fs/nilfs2/alloc.c:524
nilfs_ifile_create_inode+0x99/0x2c0 fs/nilfs2/ifile.c:64
nilfs_new_inode+0x253/0xa30 fs/nilfs2/inode.c:351
nilfs_create+0xf9/0x2c0 fs/nilfs2/namei.c:85
lookup_open fs/namei.c:3392 [inline]
open_last_lookups fs/namei.c:3462 [inline]
path_openat+0x12f6/0x2f20 fs/namei.c:3669
do_filp_open+0x21c/0x460 fs/namei.c:3699
do_sys_openat2+0x13b/0x500 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__x64_sys_openat+0x243/0x290 fs/open.c:1238
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fd4f1548169
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fd4efaba168 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fd4f1667f80 RCX: 00007fd4f1548169
RDX: 000000000000275a RSI: 0000000020000000 RDI: ffffffffffffff9c
RBP: 00007fd4f15a3ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff2a62d26f R14: 00007fd4efaba300 R15: 0000000000022000
</TASK>
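
If the sketch after the first trace is the right reading, this CPU 1 backtrace is the other half of the picture: syz-executor.2 (pid 4853, the task listed above with 6 locks held) is busy inside a nilfs2 transaction, walking allocator metadata during inode creation (nilfs_ifile_create_inode -> nilfs_palloc_prepare_alloc_entry -> nilfs_mdt_get_block -> nilfs_direct_lookup), and presumably holds ns_segctor_sem as a reader, since nilfs_create() enters a transaction via nilfs_transaction_begin() before nilfs_new_inode(). While that reader makes no forward progress, segctord's down_write() cannot complete. Whether the reader is livelocked or merely slow cannot be told from this report alone, as there is no reproducer.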


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Aug 20, 2023, 1:29:32 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.