[moderation] [fs?] [mm?] INFO: task hung in writeback_iter


syzbot

Sep 30, 2024, 5:06:26 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 3efc57369a0c Merge tag 'for-linus' of git://git.kernel.org..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10830ea9980000
kernel config: https://syzkaller.appspot.com/x/.config?x=26c9ab6638097a6
dashboard link: https://syzkaller.appspot.com/bug?extid=4f91c7b1d53f03e5a3f3
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
CC: [ak...@linux-foundation.org linux-...@vger.kernel.org linux-...@vger.kernel.org linu...@kvack.org wi...@infradead.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/27eb9e7acef6/disk-3efc5736.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/6ca1f5529a2c/vmlinux-3efc5736.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7a0c4c354c23/bzImage-3efc5736.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4f91c7...@syzkaller.appspotmail.com

INFO: task kworker/u8:4:62 blocked for more than 144 seconds.
Not tainted 6.11.0-syzkaller-11993-g3efc57369a0c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:4 state:D stack:23056 pid:62 tgid:62 ppid:2 flags:0x00004000
Workqueue: writeback wb_workfn (flush-bcachefs-5)
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5315 [inline]
__schedule+0x1843/0x4ae0 kernel/sched/core.c:6675
__schedule_loop kernel/sched/core.c:6752 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6767
io_schedule+0x8d/0x110 kernel/sched/core.c:7552
folio_wait_bit_common+0x882/0x12b0 mm/filemap.c:1309
folio_wait_writeback+0xe7/0x1e0 mm/page-writeback.c:3189
folio_prepare_writeback mm/page-writeback.c:2455 [inline]
writeback_get_folio mm/page-writeback.c:2497 [inline]
writeback_iter+0xb18/0x18d0 mm/page-writeback.c:2590
write_cache_pages+0xb1/0x230 mm/page-writeback.c:2639
bch2_writepages+0x14f/0x380 fs/bcachefs/fs-io-buffered.c:639
do_writepages+0x35d/0x870 mm/page-writeback.c:2683
__writeback_single_inode+0x14f/0x10d0 fs/fs-writeback.c:1658
writeback_sb_inodes+0x80c/0x1370 fs/fs-writeback.c:1954
wb_writeback+0x41b/0xbd0 fs/fs-writeback.c:2134
wb_do_writeback fs/fs-writeback.c:2281 [inline]
wb_workfn+0x410/0x1090 fs/fs-writeback.c:2321
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
INFO: task syz.2.35:5733 blocked for more than 144 seconds.
Not tainted 6.11.0-syzkaller-11993-g3efc57369a0c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.35 state:D stack:27424 pid:5733 tgid:5700 ppid:5400 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5315 [inline]
__schedule+0x1843/0x4ae0 kernel/sched/core.c:6675
__schedule_loop kernel/sched/core.c:6752 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6767
wb_wait_for_completion+0x166/0x290 fs/fs-writeback.c:216
sync_inodes_sb+0x28d/0xb50 fs/fs-writeback.c:2799
sync_filesystem+0x176/0x230 fs/sync.c:64
__do_sys_syncfs fs/sync.c:160 [inline]
__se_sys_syncfs+0x93/0x110 fs/sync.c:149
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f856f17dff9
RSP: 002b:00007f856ff90038 EFLAGS: 00000246 ORIG_RAX: 0000000000000132
RAX: ffffffffffffffda RBX: 00007f856f336130 RCX: 00007f856f17dff9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000006
RBP: 00007f856f1f0296 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f856f336130 R15: 00007ffff5a8d978
</TASK>

Showing all locks held in the system:
4 locks held by kworker/u8:1/12:
#0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
#0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
#1: ffffc90000117d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
#1: ffffc90000117d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
#2: ffffffff8fcb24d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:580
#3: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: wg_destruct+0x25/0x2e0 drivers/net/wireguard/device.c:246
1 lock held by khungtaskd/30:
#0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
#0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
#0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6701
2 locks held by kworker/u8:4/62:
#0: ffff88801daa1148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
#0: ffff88801daa1148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
#1: ffffc900015d7d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
#1: ffffc900015d7d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
3 locks held by kworker/u8:5/995:
#0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
#0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
#1: ffffc90003c37d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
#1: ffffc90003c37d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
#2: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
3 locks held by kworker/u8:6/2489:
#0: ffff88814bf7f948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
#0: ffff88814bf7f948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
#1: ffffc900090a7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
#1: ffffc900090a7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
#2: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4196
2 locks held by getty/4971:
#0: ffff88814c5870a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
2 locks held by syz.4.15/5409:
1 lock held by syz.2.35/5702:
#0: ffff88806c376420 (sb_writers#23){.+.+}-{0:0}, at: direct_splice_actor+0x49/0x220 fs/splice.c:1163
2 locks held by syz.2.35/5733:
#0: ffff88806c3760e0 (&type->s_umount_key#81){++++}-{3:3}, at: __do_sys_syncfs fs/sync.c:159 [inline]
#0: ffff88806c3760e0 (&type->s_umount_key#81){++++}-{3:3}, at: __se_sys_syncfs+0x8b/0x110 fs/sync.c:149
#1: ffff8880673d87d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:388 [inline]
#1: ffff8880673d87d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x26e/0xb50 fs/fs-writeback.c:2797
2 locks held by bch-copygc/loop/5730:
3 locks held by syz-executor/6247:
1 lock held by syz-executor/6255:
1 lock held by syz-executor/6259:
#0: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
#0: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6643
1 lock held by syz-executor/6415:
#0: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
#0: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6643
1 lock held by syz-executor/6418:
#0: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
#0: ffffffff8fcbefc8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6643

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.11.0-syzkaller-11993-g3efc57369a0c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
watchdog+0xff4/0x1040 kernel/hung_task.c:379
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5730 Comm: bch-copygc/loop Not tainted 6.11.0-syzkaller-11993-g3efc57369a0c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:26 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:87 [inline]
RIP: 0010:arch_irqs_disabled arch/x86/include/asm/irqflags.h:147 [inline]
RIP: 0010:lock_acquire+0x232/0x550 kernel/locking/lockdep.c:5825
Code: 92 7e 83 f8 01 0f 85 a3 01 00 00 49 89 de 48 c1 eb 03 42 80 3c 2b 00 74 08 4c 89 f7 e8 77 30 8b 00 48 c7 44 24 60 00 00 00 00 <9c> 8f 44 24 60 42 80 3c 2b 00 74 08 4c 89 f7 e8 6a 2f 8b 00 f6 44
RSP: 0018:ffffc90002feec00 EFLAGS: 00000046
RAX: 0000000000000001 RBX: 1ffff920005fdd8c RCX: 3749938460f03b00
RDX: dffffc0000000000 RSI: ffffffff8c0adc40 RDI: ffffffff8c6021e0
RBP: ffffc90002feed48 R08: ffffffff9423f89f R09: 1ffffffff2847f13
R10: dffffc0000000000 R11: fffffbfff2847f14 R12: 1ffff920005fdd88
R13: dffffc0000000000 R14: ffffc90002feec60 R15: 0000000000000046
FS: 0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f543f48c440 CR3: 000000007c2d2000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
__bch2_time_stats_update+0x1b4/0x370 fs/bcachefs/time_stats.c:127
bch2_trans_begin+0x785/0x1c00 fs/bcachefs/btree_iter.c:3094
bch2_evacuate_bucket+0x69b/0x34e0 fs/bcachefs/move.c:680
bch2_copygc+0x3a03/0x4650 fs/bcachefs/movinggc.c:234
bch2_copygc_thread+0x737/0xc20 fs/bcachefs/movinggc.c:375
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup