[v6.6] INFO: task hung in htable_put (2)


syzbot

Mar 10, 2026, 10:51:21 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 4fc00fe35d46 Linux 6.6.129
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=118d975a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=c5b35c4db8465904
dashboard link: https://syzkaller.appspot.com/bug?extid=dd8f89a8dfa8635f69a9
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b69a43016252/disk-4fc00fe3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/240ac8d2cf70/vmlinux-4fc00fe3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/67c7958e5edb/bzImage-4fc00fe3.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+dd8f89...@syzkaller.appspotmail.com

INFO: task syz.2.3407:13668 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.3407 state:D stack:25960 pid:13668 ppid:5768 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
schedule_timeout+0xc1/0x2d0 kernel/time/timer.c:2144
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x2cb/0x5b0 kernel/sched/completion.c:148
__flush_work+0x913/0xaa0 kernel/workqueue.c:3471
__cancel_work_timer+0x3f8/0x560 kernel/workqueue.c:3558
htable_put+0x1dc/0x240 net/netfilter/xt_hashlimit.c:429
cleanup_match net/ipv6/netfilter/ip6_tables.c:477 [inline]
find_check_entry net/ipv6/netfilter/ip6_tables.c:581 [inline]
translate_table+0x1a0b/0x2090 net/ipv6/netfilter/ip6_tables.c:733
do_replace net/ipv6/netfilter/ip6_tables.c:1154 [inline]
do_ip6t_set_ctl+0x9fc/0xe10 net/ipv6/netfilter/ip6_tables.c:1644
nf_setsockopt+0x263/0x280 net/netfilter/nf_sockopt.c:101
rawv6_setsockopt+0x276/0x5e0 net/ipv6/raw.c:1048
do_sock_setsockopt+0x175/0x1a0 net/socket.c:2321
__sys_setsockopt net/socket.c:2344 [inline]
__do_sys_setsockopt net/socket.c:2353 [inline]
__se_sys_setsockopt net/socket.c:2350 [inline]
__x64_sys_setsockopt+0x182/0x200 net/socket.c:2350
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fd2c079c799
RSP: 002b:00007fd2c1654028 EFLAGS: 00000246 ORIG_RAX: 0000000000000036
RAX: ffffffffffffffda RBX: 00007fd2c0a15fa0 RCX: 00007fd2c079c799
RDX: 0000000000000040 RSI: 0000000000000029 RDI: 0000000000000003
RBP: 00007fd2c0832c99 R08: 0000000000000538 R09: 0000000000000000
R10: 0000200000000a80 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fd2c0a16038 R14: 00007fd2c0a15fa0 R15: 00007ffd9ea92278
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8d132060 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8d132060 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8d132060 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
5 locks held by kworker/u4:2/33:
#0: ffff88801a254938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff88801a254938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc90000a9fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc90000a9fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#2: ffffffff8e3b3d10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x14c/0xbb0 net/core/net_namespace.c:606
#3: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_delete_nets+0xd4/0x370 net/ipv4/ip_tunnel.c:1152
#4: ffffffff8d137a38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#4: ffffffff8d137a38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3da/0x880 kernel/rcu/tree_exp.h:1004
2 locks held by kworker/0:2/787:
3 locks held by kworker/1:2/2065:
#0: ffff888017c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff888017c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc900054f7d00 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc900054f7d00 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#2: ffffffff8d137a38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#2: ffffffff8d137a38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x306/0x880 kernel/rcu/tree_exp.h:1004
3 locks held by kworker/u4:8/2918:
2 locks held by dhcpcd/5431:
#0: ffffffff8e3a5d28 (vlan_ioctl_mutex){+.+.}-{3:3}, at: sock_ioctl+0x635/0x7e0 net/socket.c:1302
#1: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: vlan_ioctl_handler+0xf1/0x630 net/8021q/vlan.c:580
2 locks held by getty/5527:
#0: ffff88814e4900a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x433/0x1390 drivers/tty/n_tty.c:2217
2 locks held by kworker/0:7/5834:
3 locks held by kworker/0:8/5836:
3 locks held by kworker/u4:4/13036:
#0: ffff88802c25f138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff88802c25f138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc90003567d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc90003567d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#2: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd4/0x1530 net/ipv6/addrconf.c:4176
2 locks held by kworker/0:4/13955:
2 locks held by kworker/1:8/14184:
#0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc90003647d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc90003647d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
1 lock held by syz-executor/14338:
#0: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6469
1 lock held by syz-executor/14340:
#0: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6469
7 locks held by syz-executor/14404:
#0: ffff888020df8418 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x21b/0x990 fs/read_write.c:580
#1: ffff88802ecc2088 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1e7/0x520 fs/kernfs/file.c:343
#2: ffff8881433251f8 (kn->active#54){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
#2: ffff8881433251f8 (kn->active#54){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x23a/0x520 fs/kernfs/file.c:344
#3: ffffffff8dca1308 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: new_device_store+0x13d/0x690 drivers/net/netdevsim/bus.c:160
#4: ffff8880778170e8 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:995 [inline]
#4: ffff8880778170e8 (&dev->mutex){....}-{3:3}, at: __device_attach+0x89/0x420 drivers/base/dd.c:1005
#5: ffff888056809250 (&devlink->lock_key#14){+.+.}-{3:3}, at: nsim_drv_probe+0xc8/0xbb0 drivers/net/netdevsim/dev.c:1537
#6: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: nsim_init_netdevsim drivers/net/netdevsim/netdev.c:335 [inline]
#6: ffffffff8e3c0d48 (rtnl_mutex){+.+.}-{3:3}, at: nsim_create+0x384/0x4a0 drivers/net/netdevsim/netdev.c:401

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
<TASK>
dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
nmi_cpu_backtrace+0x3a6/0x3e0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf3d/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 787 Comm: kworker/0:2 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: events_power_efficient htable_gc
RIP: 0010:__netif_receive_skb_core+0x34ce/0x3af0 net/core/dev.c:5622
Code: 48 85 db 0f 84 22 01 00 00 e8 be 2d 0b f9 65 48 ff 03 48 8b bc 24 50 01 00 00 be 33 00 00 00 e8 48 b1 f6 ff 41 bc 01 00 00 00 <48> 8b 9c 24 50 01 00 00 48 8b 84 24 c8 00 00 00 42 80 3c 28 00 4c
RSP: 0018:ffffc900000079a0 EFLAGS: 00000246
RAX: ffffffff887bed5c RBX: 0000000000000000 RCX: ffff888021203c00
RDX: 0000000000000100 RSI: ffffffff8e3b90e0 RDI: 0000000000000000
RBP: ffffc90000007b70 R08: ffff888021203c00 R09: 0000000000000004
R10: 0000000000000003 R11: 0000000000000100 R12: 0000000000000000
R13: dffffc0000000000 R14: 0000000000000000 R15: ffff888055e55c90
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2cf2421c CR3: 000000000cf32000 CR4: 00000000003506f0
Call Trace:
<IRQ>
__netif_receive_skb_one_core net/core/dev.c:5632 [inline]
__netif_receive_skb+0x74/0x290 net/core/dev.c:5748
process_backlog+0x391/0x6f0 net/core/dev.c:6076
__napi_poll+0xc0/0x460 net/core/dev.c:6638
napi_poll net/core/dev.c:6705 [inline]
net_rx_action+0x616/0xc40 net/core/dev.c:6841
handle_softirqs+0x280/0x820 kernel/softirq.c:578
do_softirq+0xfa/0x1a0 kernel/softirq.c:479
</IRQ>
<TASK>
__local_bh_enable_ip+0x184/0x1c0 kernel/softirq.c:406
spin_unlock_bh include/linux/spinlock.h:396 [inline]
htable_selective_cleanup+0x286/0x320 net/netfilter/xt_hashlimit.c:374
htable_gc+0x29/0xa0 net/netfilter/xt_hashlimit.c:385
process_one_work kernel/workqueue.c:2653 [inline]
process_scheduled_works+0xa5d/0x15d0 kernel/workqueue.c:2730
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2811
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup