[v6.6] INFO: task hung in migrate_pages_batch (2)


syzbot

Feb 2, 2026, 1:38:29 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 2cf6f68313dc Linux 6.6.122
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=107f5644580000
kernel config: https://syzkaller.appspot.com/x/.config?x=2a950bf7c0bff9f9
dashboard link: https://syzkaller.appspot.com/bug?extid=fb7f46c00136f93b092c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/1694df1122d6/disk-2cf6f683.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d59542417f4c/vmlinux-2cf6f683.xz
kernel image: https://storage.googleapis.com/syzbot-assets/220dc2f6e7ef/bzImage-2cf6f683.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+fb7f46...@syzkaller.appspotmail.com

INFO: task syz.1.8313:30725 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.8313 state:D stack:23856 pid:30725 ppid:27162 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
io_schedule+0x80/0xd0 kernel/sched/core.c:9023
folio_wait_bit_common+0x714/0xfa0 mm/filemap.c:1329
migrate_folio_unmap mm/migrate.c:1162 [inline]
migrate_pages_batch+0x1393/0x3440 mm/migrate.c:1672
migrate_pages_sync mm/migrate.c:1865 [inline]
migrate_pages+0x1f5a/0x27a0 mm/migrate.c:1947
compact_zone+0x2200/0x43a0 mm/compaction.c:2515
compact_node+0x195/0x300 mm/compaction.c:2807
compact_nodes mm/compaction.c:2820 [inline]
sysctl_compaction_handler+0xf9/0x1a0 mm/compaction.c:2866
proc_sys_call_handler+0x463/0x6d0 fs/proc/proc_sysctl.c:599
do_iter_readv_writev fs/read_write.c:-1 [inline]
do_iter_write+0x738/0xc30 fs/read_write.c:860
iter_file_splice_write+0x6a3/0xcb0 fs/splice.c:736
do_splice_from fs/splice.c:933 [inline]
direct_splice_actor+0xe8/0x130 fs/splice.c:1142
splice_direct_to_actor+0x304/0x8c0 fs/splice.c:1088
do_splice_direct+0x1d5/0x2f0 fs/splice.c:1194
do_sendfile+0x5f2/0xef0 fs/read_write.c:1254
__do_sys_sendfile64 fs/read_write.c:1316 [inline]
__se_sys_sendfile64+0xe0/0x1a0 fs/read_write.c:1308
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fe45fb9aeb9
RSP: 002b:00007fe45d9f4028 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007fe45fe16270 RCX: 00007fe45fb9aeb9
RDX: 00002000000000c0 RSI: 0000000000000007 RDI: 0000000000000008
RBP: 00007fe45fc08c1f R08: 0000000000000000 R09: 0000000000000000
R10: 000000000000000a R11: 0000000000000246 R12: 0000000000000000
R13: 00007fe45fe16308 R14: 00007fe45fe16270 R15: 00007fff448b5e68
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by getty/5533:
#0: ffff888030fa20a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x433/0x1390 drivers/tty/n_tty.c:2217
1 lock held by udevd/5762:
#0: ffff888021d0b4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/5764:
#0: ffff888021d254c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/5765:
#0: ffff888021e6b4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/5766:
#0: ffff888021e4f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
5 locks held by kworker/0:5/5868:
#0: ffff88801969c938 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88801969c938 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#1: ffffc90002f47d00 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90002f47d00 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#2: ffff888143b22190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:995 [inline]
#2: ffff888143b22190 (&dev->mutex){....}-{3:3}, at: hub_event+0x180/0x49f0 drivers/usb/core/hub.c:5861
#3: ffff888036376190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:995 [inline]
#3: ffff888036376190 (&dev->mutex){....}-{3:3}, at: __device_attach+0x89/0x420 drivers/base/dd.c:1005
#4: ffff88804156a160 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:995 [inline]
#4: ffff88804156a160 (&dev->mutex){....}-{3:3}, at: __device_attach+0x89/0x420 drivers/base/dd.c:1005
3 locks held by kworker/u4:13/6594:
#0: ffff88802c262938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88802c262938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#1: ffffc9000c217d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000c217d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#2: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4718
1 lock held by udevd/9674:
#0: ffff888021e7b4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
6 locks held by kworker/1:0/10529:
#0: ffff88801969c938 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88801969c938 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#1: ffffc900106efd00 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900106efd00 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#2: ffff88802530a190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:995 [inline]
#2: ffff88802530a190 (&dev->mutex){....}-{3:3}, at: hub_event+0x180/0x49f0 drivers/usb/core/hub.c:5861
#3: ffff88807d921190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:995 [inline]
#3: ffff88807d921190 (&dev->mutex){....}-{3:3}, at: __device_attach+0x89/0x420 drivers/base/dd.c:1005
#4: ffff8880250b3160 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:995 [inline]
#4: ffff8880250b3160 (&dev->mutex){....}-{3:3}, at: __device_attach+0x89/0x420 drivers/base/dd.c:1005
#5: ffffffff8cfdc630 (umhelper_sem){++++}-{3:3}, at: usermodehelper_read_trylock+0xfd/0x2b0 kernel/umh.c:215
3 locks held by kworker/u5:0/13043:
#0: ffff88805e154538 ((wq_completion)hci6){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88805e154538 ((wq_completion)hci6){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#1: ffffc9000cbe7d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000cbe7d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#2: ffff888066264e70 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1d4/0x380 net/bluetooth/hci_sync.c:326
1 lock held by udevd/13247:
#0: ffff888021d204c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/23210:
#0: ffff888021f3f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/23566:
#0: ffff888021e4a4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/23611:
#0: ffff888021f254c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/23669:
#0: ffff888021f204c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/24739:
#0: ffff8880220844c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by udevd/24988:
#0: ffff888021f3a4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by syz.1.8313/30725:
#0: ffff88805c4f2418 (sb_writers#3){.+.+}-{0:0}, at: do_sendfile+0x5cf/0xef0 fs/read_write.c:1253
2 locks held by kworker/1:9/30774:
#0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#1: ffffc90003507d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003507d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
1 lock held by syz.2.8370/30906:
#0: ffff88807e2d2418 (sb_writers#3){.+.+}-{0:0}, at: do_sendfile+0x5cf/0xef0 fs/read_write.c:1253
1 lock held by syz.3.8383/30978:
#0: ffff888079dde418 (sb_writers#3){.+.+}-{0:0}, at: do_sendfile+0x5cf/0xef0 fs/read_write.c:1253
1 lock held by syz.0.8390/31005:
#0: ffff88805c444418 (sb_writers#3){.+.+}-{0:0}, at: do_sendfile+0x5cf/0xef0 fs/read_write.c:1253
2 locks held by syz.7.8699/32263:
#0: ffffffff8e3b3150 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x351/0x5e0 net/core/net_namespace.c:516
#1: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: register_nexthop_notifier+0x88/0x240 net/ipv4/nexthop.c:3636
2 locks held by syz.5.8701/32267:
#0: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
#0: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x41/0x1c0 drivers/net/tun.c:3511
#1: ffffffff8d1379b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#1: ffffffff8d1379b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x306/0x880 kernel/rcu/tree_exp.h:1004
1 lock held by syz.4.8705/32281:
#0: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6469
1 lock held by syz.4.8705/32282:
#0: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c0308 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6469

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
nmi_cpu_backtrace+0x3a6/0x3e0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf3d/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 22095 Comm: kworker/1:1 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: events kfree_rcu_work
RIP: 0010:unwind_next_frame+0x1648/0x2970 arch/x86/kernel/unwind_orc.c:-1
Code: 80 3c 28 00 48 8b 5c 24 60 74 08 48 89 df e8 4f f0 a3 00 48 8b 44 24 08 48 89 03 ba 10 00 00 00 48 89 ef 31 f6 e8 98 f1 a3 00 <48> 8b 5c 24 30 eb 4b e8 3c c8 4b 00 e9 a2 0b 00 00 e8 32 c8 4b 00
RSP: 0018:ffffc9000434f4f8 EFLAGS: 00000246
RAX: ffffc9000434f618 RBX: ffffc9000434f600 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffc9000434f628
RBP: ffffc9000434f618 R08: ffffc9000434f627 R09: 0000000000000000
R10: ffffc9000434f618 R11: fffff52000869ec5 R12: ffffc9000434f5c8
R13: dffffc0000000000 R14: ffffffff81df357e R15: ffffffff8f1990bc
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000020000015b030 CR3: 000000002cc8d000 CR4: 00000000003506e0
Call Trace:
<TASK>
arch_stack_walk+0x144/0x190 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0xaa/0x100 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:46 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:53
kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:237
kasan_slab_free include/linux/kasan.h:164 [inline]
slab_free_hook mm/slub.c:1811 [inline]
slab_free_freelist_hook+0x130/0x1a0 mm/slub.c:1837
slab_free mm/slub.c:3830 [inline]
kmem_cache_free_bulk+0x33b/0x450 mm/slub.c:3948
kfree_bulk include/linux/slab.h:517 [inline]
kvfree_rcu_bulk+0x1eb/0x470 kernel/rcu/tree.c:3032
kfree_rcu_work+0x344/0x3b0 kernel/rcu/tree.c:3111
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa5d/0x15d0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
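
For readers tracing the first stack above: sysctl_compaction_handler() is the write handler for /proc/sys/vm/compact_memory, so the blocked task is effectively a sendfile() whose destination is that sysctl file, taking the do_splice_direct() path before stalling on a folio lock inside migrate_pages_batch(). Below is a minimal, hedged sketch of that syscall pattern only, not a reproducer; the source path, count, and fd handling are hypothetical and chosen for illustration.

/* Sketch of the syscall shape visible in the hung task's trace:
 * sendfile() into /proc/sys/vm/compact_memory routes through
 * splice -> proc_sys_call_handler -> sysctl_compaction_handler.
 * Needs root; this is NOT a reproducer for the reported hang. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/sendfile.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical source: any readable file serves as sendfile input. */
	int in = open("/etc/hostname", O_RDONLY);
	/* Destination: the sysctl whose handler appears in the trace. */
	int out = open("/proc/sys/vm/compact_memory", O_WRONLY);

	if (in < 0 || out < 0) {
		perror("open");
		return 1;
	}

	/* sendfile(out, in, ...) follows the do_splice_direct() path seen
	 * above, ending in sysctl_compaction_handler() -> compact_nodes(). */
	if (sendfile(out, in, NULL, 1) < 0)
		perror("sendfile");

	close(in);
	close(out);
	return 0;
}
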


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup