[v6.6] INFO: task hung in nbd_queue_rq


syzbot

Sep 7, 2025, 1:15:40 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 355bd0b51d2f Linux 6.6.104
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10914562580000
kernel config: https://syzkaller.appspot.com/x/.config?x=dac93b93d3de2741
dashboard link: https://syzkaller.appspot.com/bug?extid=3245262071afbd05dd2c
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/ef00d28b2c5b/disk-355bd0b5.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/7627cd51eb0a/vmlinux-355bd0b5.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3fdd0a51dd65/bzImage-355bd0b5.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+324526...@syzkaller.appspotmail.com

INFO: task kworker/1:1H:96 blocked for more than 142 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1H state:D stack:26424 pid:96 ppid:2 flags:0x00004000
Workqueue: kblockd blk_mq_run_work_fn
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x160/0x280 kernel/time/timer.c:2167
wait_for_reconnect drivers/block/nbd.c:998 [inline]
nbd_handle_cmd drivers/block/nbd.c:1040 [inline]
nbd_queue_rq+0x78d/0x2a10 drivers/block/nbd.c:1115
blk_mq_dispatch_rq_list+0xda8/0x1e40 block/blk-mq.c:2082
__blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
__blk_mq_sched_dispatch_requests+0xe98/0x1670 block/blk-mq-sched.c:309
blk_mq_sched_dispatch_requests+0xfb/0x1b0 block/blk-mq-sched.c:333
blk_mq_run_work_fn+0x169/0x2f0 block/blk-mq.c:2494
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
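For context, the trace above shows the kblockd worker stuck in an uninterruptible sleep (state:D) inside wait_for_reconnect(), reached from nbd_queue_rq() while cmd->lock is held (see the lock dump below). The following is a simplified sketch of the kind of wait involved, paraphrased from the nbd driver around the v6.6 layout of drivers/block/nbd.c; it is illustrative only and not a verbatim copy of the 6.6.104 source:

static int wait_for_reconnect(struct nbd_device *nbd)
{
	struct nbd_config *config = nbd->config;

	/* No dead-connection grace period configured: give up immediately. */
	if (!config->dead_conn_timeout)
		return 0;

	/*
	 * Sleep uninterruptibly (hence the D state in the report) until the
	 * device is marked disconnected, a live connection reappears, or the
	 * dead_conn_timeout expires. nbd_handle_cmd() can retry this wait, so
	 * the worker may stay blocked well past the hung-task threshold.
	 */
	if (!wait_event_timeout(config->conn_wait,
				test_bit(NBD_RT_DISCONNECTED,
					 &config->runtime_flags) ||
				atomic_read(&config->live_connections) > 0,
				config->dead_conn_timeout))
		return 0;

	return !test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags);
}
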

Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8cd2fc20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2fc20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2fc20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
4 locks held by kworker/1:1H/96:
#0: ffff8880186ac138 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff8880186ac138 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000251fd00 ((work_completion)(&(&hctx->run_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000251fd00 ((work_completion)(&(&hctx->run_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88801efd4810 (set->srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:116 [inline]
#2: ffff88801efd4810 (set->srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:215 [inline]
#2: ffff88801efd4810 (set->srcu){.+.+}-{0:0}, at: blk_mq_run_work_fn+0x141/0x2f0 block/blk-mq.c:2494
#3: ffff888021c88180 (&cmd->lock){+.+.}-{3:3}, at: nbd_queue_rq+0xf6/0x2a10 drivers/block/nbd.c:1107
2 locks held by getty/5543:
#0: ffff88802cfe90a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
1 lock held by udevd/5782:
#0: ffff888021c244c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
4 locks held by kworker/0:5/5832:
#0: ffff888147a6d138 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888147a6d138 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000483fd00 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000483fd00 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88814373a190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:992 [inline]
#2: ffff88814373a190 (&dev->mutex){....}-{3:3}, at: hub_event+0x185/0x49c0 drivers/usb/core/hub.c:5861
#3: ffff888059b30190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:992 [inline]
#3: ffff888059b30190 (&dev->mutex){....}-{3:3}, at: __device_attach+0x89/0x400 drivers/base/dd.c:1005
4 locks held by udevd/5939:
#0: ffff8880686301c8 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb1/0xd50 fs/seq_file.c:182
#1: ffff888064575488 (&of->mutex){+.+.}-{3:3}, at: kernfs_seq_start+0x55/0x3b0 fs/kernfs/file.c:154
#2: ffff88805214a2f0 (kn->active#24){++++}-{0:0}, at: kernfs_seq_start+0x75/0x3b0 fs/kernfs/file.c:155
#3: ffff888059b30190 (&dev->mutex){....}-{3:3}, at: device_lock_interruptible include/linux/device.h:997 [inline]
#3: ffff888059b30190 (&dev->mutex){....}-{3:3}, at: manufacturer_show+0x26/0xa0 drivers/usb/core/sysfs.c:142
2 locks held by kworker/1:4/6087:
2 locks held by syz.5.704/9013:
1 lock held by syz.5.704/9027:
#0: ffffffff8dfbc488 (rtnl_mutex){+.+.}-{3:3}, at: tcx_prog_detach+0xef/0x5f0 kernel/bpf/tcx.c:67
2 locks held by syz.3.705/9016:
#0: ffff88802f459c30 (sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1731 [inline]
#0: ffff88802f459c30 (sk_lock-AF_INET6){+.+.}-{0:0}, at: sctp_sendmsg+0xb92/0x27e0 net/sctp/socket.c:1970
#1: ffffffff8cd2fc20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#1: ffffffff8cd2fc20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#1: ffffffff8cd2fc20 (rcu_read_lock){....}-{1:2}, at: ip_route_output_key_hash+0x12f/0x340 net/ipv4/route.c:2671
1 lock held by syz.3.705/9031:
#0: ffff88802f459c30 (sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1731 [inline]
#0: ffff88802f459c30 (sk_lock-AF_INET6){+.+.}-{0:0}, at: sctp_getsockopt+0x131/0xb60 net/sctp/socket.c:8086

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 4739 Comm: kworker/u4:8 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:is_kfence_address include/linux/kfence.h:58 [inline]
RIP: 0010:kasan_unpoison+0x6/0x90 mm/kasan/shadow.c:183
Code: 14 48 01 f7 48 c1 ef 03 48 b9 00 00 00 00 00 fc ff df 88 04 0f c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 66 0f 1f 00 41 57 <41> 56 53 48 8b 05 48 c2 6d 0c 48 89 f9 48 29 c1 48 81 f9 00 00 20
RSP: 0000:ffffc9000fb97988 EFLAGS: 00000246
RAX: ffff88823bc00001 RBX: ffff88805cd3a500 RCX: fffffffe2113a500
RDX: 0000000000000001 RSI: 00000000000000f0 RDI: ffff88805cd3a500
RBP: 0000000000000820 R08: ffffc9000fb97a40 R09: 0000000000000001
R10: dffffc0000000000 R11: fffff52001f72f48 R12: 0000000000000001
R13: 00000000000000f0 R14: ffff88814224c000 R15: 0000000000000001
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00002000000001c0 CR3: 000000000cb30000 CR4: 00000000003526f0
Call Trace:
<TASK>
__kasan_slab_alloc+0x58/0x80 mm/kasan/common.c:324
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3485 [inline]
kmem_cache_alloc_node+0x150/0x330 mm/slub.c:3530
__alloc_skb+0x108/0x2c0 net/core/skbuff.c:640
alloc_skb include/linux/skbuff.h:1284 [inline]
nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
nsim_dev_trap_report_work+0x293/0xb00 drivers/net/netdevsim/dev.c:851
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup