Hello,
syzbot found the following issue on:
HEAD commit: 7c87defbd336 Linux 6.1.169
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11c22348580000
kernel config: https://syzkaller.appspot.com/x/.config?x=f0605c5af04d7603
dashboard link: https://syzkaller.appspot.com/bug?extid=02b30100b5bdbd4be819
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8d71e14f22ca/disk-7c87defb.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/49c26e1231bb/vmlinux-7c87defb.xz
kernel image: https://storage.googleapis.com/syzbot-assets/bac8352e3eab/bzImage-7c87defb.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+02b301...@syzkaller.appspotmail.com
INFO: task syz.0.276:5514 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.276 state:D stack:25048 pid:5514 ppid:4267 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x11d1/0x40e0 kernel/sched/core.c:6562
schedule+0xb9/0x180 kernel/sched/core.c:6638
bit_wait+0xd/0xc0 kernel/sched/wait_bit.c:199
__wait_on_bit+0xa8/0x2d0 kernel/sched/wait_bit.c:49
out_of_line_wait_on_bit+0x138/0x190 kernel/sched/wait_bit.c:64
wait_on_bit include/linux/wait_bit.h:76 [inline]
gfs2_recover_journal+0xd6/0x130 fs/gfs2/recovery.c:577
init_journal+0x17fc/0x23e0 fs/gfs2/ops_fstype.c:835
init_inodes+0xdb/0x320 fs/gfs2/ops_fstype.c:889
gfs2_fill_super+0x1749/0x1fb0 fs/gfs2/ops_fstype.c:1246
get_tree_bdev+0x3f1/0x610 fs/super.c:1366
gfs2_get_tree+0x4d/0x1e0 fs/gfs2/ops_fstype.c:1327
vfs_get_tree+0x88/0x270 fs/super.c:1573
do_new_mount+0x24a/0xa40 fs/namespace.c:3078
do_mount fs/namespace.c:3421 [inline]
__do_sys_mount fs/namespace.c:3629 [inline]
__se_sys_mount+0x2e3/0x3d0 fs/namespace.c:3606
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f9b5139e04a
RSP: 002b:00007f9b5229ee58 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f9b5229eee0 RCX: 00007f9b5139e04a
RDX: 0000200000037f40 RSI: 00002000000008c0 RDI: 00007f9b5229eea0
RBP: 0000200000037f40 R08: 00007f9b5229eee0 R09: 0000000001010084
R10: 0000000001010084 R11: 0000000000000246 R12: 00002000000008c0
R13: 00007f9b5229eea0 R14: 0000000000037f45 R15: 0000200000000480
</TASK>
Showing all locks held in the system:
2 locks held by kworker/0:0/7:
#0: ffff888146e86538 ((wq_completion)gfs_recovery){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc900000c7d00 ((work_completion)(&jd->jd_work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cb2df70 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cb2e790 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/27:
#0: ffffffff8cb2d5e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8cb2d5e0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8cb2d5e0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by getty/4027:
#0: ffff88802fc5e098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x429/0x1390 drivers/tty/n_tty.c:2198
2 locks held by kworker/1:10/5296:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003667d00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/0:12/5302:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc900036a7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
1 lock held by syz.0.276/5514:
#0: ffff888079ef80e0 (&type->s_umount_key#69/1){+.+.}-{3:3}, at: alloc_super+0x1fa/0x930 fs/super.c:228
2 locks held by syz.7.626/7000:
#0: ffff88801eb260e0 (&type->s_umount_key#74/1){+.+.}-{3:3}, at: alloc_super+0x1fa/0x930 fs/super.c:228
#1: ffffffff8cb332b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#1: ffffffff8cb332b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3c0/0x890 kernel/rcu/tree_exp.h:962
1 lock held by dhcpcd/7008:
#0: ffff8880727aac10 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#0: ffff8880727aac10 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:653 [inline]
#0: ffff8880727aac10 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1399
1 lock held by dhcpcd/7010:
#0: ffff8880727ad610 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#0: ffff8880727ad610 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:653 [inline]
#0: ffff8880727ad610 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1399
2 locks held by dhcpcd/7011:
#0: ffff8880727ae210 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#0: ffff8880727ae210 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:653 [inline]
#0: ffff8880727ae210 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1399
#1: ffffffff8cb332b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#1: ffffffff8cb332b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3c0/0x890 kernel/rcu/tree_exp.h:962
1 lock held by dhcpcd/7013:
#0: ffff888053f96130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1805 [inline]
#0: ffff888053f96130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xce0 net/packet/af_packet.c:3249
1 lock held by dhcpcd/7014:
#0: ffff888076dbc130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1805 [inline]
#0: ffff888076dbc130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xce0 net/packet/af_packet.c:3249
1 lock held by dhcpcd/7015:
#0: ffff88802fda6130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1805 [inline]
#0: ffff88802fda6130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xce0 net/packet/af_packet.c:3249
2 locks held by dhcpcd-run-hook/7018:
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0x188/0x24e lib/dump_stack.c:106
nmi_cpu_backtrace+0x3e6/0x460 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xeee/0xf30 kernel/hung_task.c:377
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 4400 Comm: kworker/u4:12 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Workqueue: bat_events batadv_nc_worker
RIP: 0010:check_preemption_disabled+0x49/0x110 lib/smp_processor_id.c:55
Code: 65 8b 0d a2 64 d4 75 f7 c1 ff ff ff 7f 74 1f 65 48 8b 0c 25 28 00 00 00 48 3b 4c 24 08 0f 85 c4 00 00 00 48 83 c4 10 5b 41 5e <41> 5f 5d c3 48 c7 04 24 00 00 00 00 9c 8f 04 24 f7 04 24 00 02 00
RSP: 0018:ffffc90004917a88 EFLAGS: 00000286
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 8833a9cb42d7bc00
RDX: 0000000000000000 RSI: ffffffff8adf1380 RDI: ffffffff8adf1340
RBP: ffffc90004917bc8 R08: ffffffff8e1ff42f R09: 1ffffffff1c3fe85
R10: dffffc0000000000 R11: fffffbfff1c3fe86 R12: 0000000000000000
R13: 1ffff92000922f64 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f934ba22fb3 CR3: 000000000c88e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
rcu_dynticks_curr_cpu_in_eqs include/linux/context_tracking.h:122 [inline]
rcu_is_watching+0x11/0xa0 kernel/rcu/tree.c:721
trace_lock_acquire include/trace/events/lock.h:24 [inline]
lock_acquire+0xe3/0x4a0 kernel/locking/lockdep.c:5633
rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
rcu_read_lock include/linux/rcupdate.h:791 [inline]
batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:408 [inline]
batadv_nc_worker+0xeb/0x600 net/batman-adv/network-coding.c:719
process_one_work+0x8a2/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1270 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup