[v5.15] INFO: task hung in addrconf_verify_work (2)

syzbot
Feb 7, 2026, 6:02:26 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 7b232985052f Linux 5.15.199
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=111ffa5a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=353ae28c40b35af5
dashboard link: https://syzkaller.appspot.com/bug?extid=519aa486c1e2b442d20d
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/97a23052e207/disk-7b232985.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e9871fb1bda3/vmlinux-7b232985.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b08e48f5be44/bzImage-7b232985.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+519aa4...@syzkaller.appspotmail.com

INFO: task kworker/1:11:8436 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:11 state:D stack:25272 pid: 8436 ppid: 2 flags:0x00004000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11ef/0x43c0 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6537
__mutex_lock_common+0xcfc/0x2400 kernel/locking/mutex.c:669
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4654
process_one_work+0x85f/0x1010 kernel/workqueue.c:2310
worker_thread+0xaa6/0x1290 kernel/workqueue.c:2457
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>

Showing all locks held in the system:
3 locks held by kworker/0:0/7:
#0: ffff888016c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90000cc7d00 ((work_completion)(&(&vi->refill)->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffff8880b903a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
2 locks held by kworker/u4:0/9:
#0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90000ce7d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
1 lock held by khungtaskd/27:
#0: ffffffff8c31eaa0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
8 locks held by kworker/u4:3/155:
#0: ffff888016dcd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90001f27d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8d430850 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x148/0xba0 net/core/net_namespace.c:589
#3: ffffffff8d461da8 (devlink_mutex){+.+.}-{3:3}, at: devlink_pernet_pre_exit+0xa4/0x310 net/core/devlink.c:11534
#4: ffff8880613e3658 (&nsim_bus_dev->nsim_bus_reload_lock){+.+.}-{3:3}, at: nsim_dev_reload_up+0xc5/0x820 drivers/net/netdevsim/dev.c:897
#5: ffffffff8d43c748 (rtnl_mutex){+.+.}-{3:3}, at: nsim_init_netdevsim drivers/net/netdevsim/netdev.c:310 [inline]
#5: ffffffff8d43c748 (rtnl_mutex){+.+.}-{3:3}, at: nsim_create+0x2ef/0x3e0 drivers/net/netdevsim/netdev.c:365
#6: ffff8880760d1080 (&sb->s_type->i_mutex_key#3){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#6: ffff8880760d1080 (&sb->s_type->i_mutex_key#3){++++}-{3:3}, at: start_creating+0x129/0x310 fs/debugfs/inode.c:350
#7: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#7: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#7: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by kswapd0/255:
1 lock held by jbd2/sda1-8/3522:
#0: ffffffff8c3afc48 (oom_lock){+.+.}-{3:3}, at: __alloc_pages_may_oom mm/page_alloc.c:4308 [inline]
#0: ffffffff8c3afc48 (oom_lock){+.+.}-{3:3}, at: __alloc_pages_slowpath+0x1cf2/0x2890 mm/page_alloc.c:5163
2 locks held by klogd/3549:
#0: ffff88802cabdc28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802cabdc28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by udevd/3560:
#0: ffff88807df07828 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807df07828 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by crond/3927:
#0: ffff88807c516328 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807c516328 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by getty/3948:
#0: ffff88814cbce098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc90002cf62e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x5df/0x1a70 drivers/tty/n_tty.c:2158
2 locks held by sshd-session/4170:
#0: ffff88807af5dc28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807af5dc28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/4171:
#0: ffff88807af5ea28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807af5ea28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/4186:
#0: ffff888079452b28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff888079452b28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/4195:
#0: ffff888079453928 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff888079453928 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by kworker/u4:7/4307:
#0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc9000341fd00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
2 locks held by kworker/u4:8/4308:
#0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc9000343fd00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
2 locks held by kworker/u4:14/8269:
#0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc900045afd00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
3 locks held by kworker/1:11/8436:
#0: ffff88802b5dd138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc900034ffd00 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8d43c748 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4654
1 lock held by syz.0.1765/8910:
#0: ffff888024d26120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1744 [inline]
#0: ffff888024d26120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_setsockopt+0x7f3/0x1af0 net/packet/af_packet.c:3829
2 locks held by syz-executor/9300:
#0: ffff88807f192b28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88807f192b28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/10539:
#0: ffff888025f7a698 (nlk_cb_mutex-GENERIC){+.+.}-{3:3}, at: netlink_dump+0xec/0xcf0 net/netlink/af_netlink.c:2226
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by udevd/10544:
#0: ffff888077541628 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff888077541628 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz.6.2253/10643:
#0: ffff888074457338 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:842 [inline]
#0: ffff888074457338 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x83b/0x1370 mm/filemap.c:3096
#1: ffff8880b903a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
2 locks held by syz.6.2253/10650:
#0: ffff88801ef7f828 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88801ef7f828 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz.4.2255/10647:
#0: ffff88802cabf828 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
#0: ffff88802cabf828 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x16c/0x2d0 mm/util.c:549
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10656:
#0: ffff88802cb34028 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802cb34028 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10657:
#0: ffff88802cab9d28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802cab9d28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10658:
#0: ffff88801d64df48 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:842 [inline]
#0: ffff88801d64df48 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x83b/0x1370 mm/filemap.c:3096
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by dhcpcd-run-hook/10659:
#0: ffff88802cab9628 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff88802cab9628 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
#1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10663:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Call Trace:
<TASK>
dump_stack_lvl+0x188/0x250 lib/dump_stack.c:106
nmi_cpu_backtrace+0x3a2/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
watchdog+0xe0f/0xe50 kernel/hung_task.c:369
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 10544 Comm: udevd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
RIP: 0010:__lock_acquire+0x5a8d/0x7d10 kernel/locking/lockdep.c:5039
Code: 20 01 00 00 0e 36 e0 45 48 8b 84 24 d0 00 00 00 4a c7 04 00 00 00 00 00 4a c7 44 00 08 00 00 00 00 4a c7 44 00 10 00 00 00 00 <42> c7 44 00 18 00 00 00 00 65 48 8b 04 25 28 00 00 00 48 3b 84 24
RSP: 0000:ffffc90003be6640 EFLAGS: 00000087
RAX: 1ffff9200077ccec RBX: ffff888026d44668 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff901d20c0
RBP: ffffc90003be6890 R08: dffffc0000000000 R09: 1ffffffff203a418
R10: dffffc0000000000 R11: fffffbfff203a419 R12: 56fb501eaf5cf180
R13: ffff888026d43b80 R14: ffff888026d44660 R15: ffff888026d44708
FS: 00007f0f1c512880(0000) GS:ffff8880b9000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055fa30a9c186 CR3: 000000005c8c8000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
lock_acquire+0x19e/0x400 kernel/locking/lockdep.c:5623
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:313
rcu_read_lock include/linux/rcupdate.h:740 [inline]
list_lru_count_one+0x49/0x310 mm/list_lru.c:181
list_lru_shrink_count include/linux/list_lru.h:123 [inline]
super_cache_count+0x187/0x290 fs/super.c:148
do_shrink_slab+0x8d/0xd00 mm/vmscan.c:712
shrink_slab_memcg mm/vmscan.c:834 [inline]
shrink_slab+0x450/0x7a0 mm/vmscan.c:913
shrink_node_memcgs mm/vmscan.c:2958 [inline]
shrink_node+0x110c/0x2610 mm/vmscan.c:3079
shrink_zones mm/vmscan.c:3285 [inline]
do_try_to_free_pages+0x606/0x1600 mm/vmscan.c:3340
try_to_free_pages+0x9a1/0xea0 mm/vmscan.c:3575
__perform_reclaim mm/page_alloc.c:4657 [inline]
__alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
__alloc_pages_slowpath+0x1150/0x2890 mm/page_alloc.c:5114
__alloc_pages+0x340/0x480 mm/page_alloc.c:5500
alloc_pages_vma+0x393/0x7c0 mm/mempolicy.c:2146
__read_swap_cache_async+0x1b5/0xa70 mm/swap_state.c:459
read_swap_cache_async mm/swap_state.c:525 [inline]
swap_cluster_readahead+0x6a3/0x7c0 mm/swap_state.c:661
swapin_readahead+0xf1/0xac0 mm/swap_state.c:854
do_swap_page+0x4b6/0x1f40 mm/memory.c:3622
handle_pte_fault mm/memory.c:4654 [inline]
__handle_mm_fault mm/memory.c:4785 [inline]
handle_mm_fault+0x1b16/0x4410 mm/memory.c:4883
do_user_addr_fault+0x489/0xc80 arch/x86/mm/fault.c:1355
handle_page_fault arch/x86/mm/fault.c:1443 [inline]
exc_page_fault+0x60/0x100 arch/x86/mm/fault.c:1496
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:606
RIP: 0010:__get_user_8+0x18/0x30 arch/x86/lib/getuser.S:100
Code: 31 c0 0f 01 ca c3 90 90 90 90 90 90 90 90 90 90 90 90 48 ba f9 ef ff ff ff 7f 00 00 48 39 d0 73 64 48 19 d2 48 21 d0 0f 01 cb <48> 8b 10 31 c0 0f 01 ca c3 90 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 0000:ffffc90003be7db8 EFLAGS: 00050202
RAX: 00007f0f1c5126a8 RBX: 00007f0f1c5126a8 RCX: 7d0785bb894bc000
RDX: ffffffffffffffff RSI: ffffffff8a2b3a20 RDI: ffffffff8a79f740
RBP: ffffc90003be7ec8 R08: ffffffff8d89db2f R09: 1ffffffff1b13b65
R10: dffffc0000000000 R11: fffffbfff1b13b66 R12: ffffc90003be7fd8
R13: 1ffff9200077cfc4 R14: ffff888026d450f8 R15: dffffc0000000000
rseq_get_rseq_cs_ptr_val kernel/rseq.c:131 [inline]
rseq_get_rseq_cs kernel/rseq.c:153 [inline]
rseq_ip_fixup kernel/rseq.c:266 [inline]
__rseq_handle_notify_resume+0x150/0xf80 kernel/rseq.c:314
rseq_handle_notify_resume include/linux/sched.h:2203 [inline]
tracehook_notify_resume include/linux/tracehook.h:201 [inline]
exit_to_user_mode_loop+0xe5/0x130 kernel/entry/common.c:181
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:214
irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:320
exc_page_fault+0x88/0x100 arch/x86/mm/fault.c:1499
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:606
RIP: 0033:0x562998af0890
Code: Unable to access opcode bytes at RIP 0x562998af0866.
RSP: 002b:00007ffe6982b958 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00005629b8052f10 RCX: 0000562cda9ea7ca
RDX: 0000000000000000 RSI: 000000000000002f RDI: 00005629b80529c4
RBP: 0000000000000000 R08: 00000000000001e0 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000297 R12: 000000000aba9500
R13: 0000000003938700 R14: 0000562998b38100 R15: 0000562998b38140
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup