[v6.6] possible deadlock in __dev_queue_xmit

syzbot

Feb 6, 2026, 10:49:31 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c56aaf1a85ae Linux 6.6.123
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=172aa7fa580000
kernel config: https://syzkaller.appspot.com/x/.config?x=2a950bf7c0bff9f9
dashboard link: https://syzkaller.appspot.com/bug?extid=e2c6fd2402d4498dfb39
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/0c1480594070/disk-c56aaf1a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/118e64c14227/vmlinux-c56aaf1a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/320a7e9bfff1/bzImage-c56aaf1a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e2c6fd...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
syzkaller #0 Not tainted
--------------------------------------------
kworker/u4:12/8172 is trying to acquire lock:
ffff88802eb01218 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: spin_lock include/linux/spinlock.h:351 [inline]
ffff88802eb01218 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3899 [inline]
ffff88802eb01218 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: __dev_queue_xmit+0x1f6f/0x36b0 net/core/dev.c:4404

but task is already holding lock:
ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:361 [inline]
ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:195 [inline]
ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3856 [inline]
ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: __dev_queue_xmit+0x126a/0x36b0 net/core/dev.c:4404

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2);
lock(dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2);

*** DEADLOCK ***

May be due to missing lock nesting notation

9 locks held by kworker/u4:12/8172:
#0: ffff88805d8bbd38 ((wq_completion)bond5#5){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88805d8bbd38 ((wq_completion)bond5#5){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#1: ffffc90004bbfd00 ((work_completion)(&(&bond->alb_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90004bbfd00 ((work_completion)(&(&bond->alb_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
#2: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#2: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#2: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: bond_alb_monitor+0xf9/0x17e0 drivers/net/bonding/bond_alb.c:1547
#3: ffffffff8d132040 (rcu_read_lock_bh){....}-{1:2}, at: local_bh_disable include/linux/bottom_half.h:20 [inline]
#3: ffffffff8d132040 (rcu_read_lock_bh){....}-{1:2}, at: rcu_read_lock_bh include/linux/rcupdate.h:838 [inline]
#3: ffffffff8d132040 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x26b/0x36b0 net/core/dev.c:4363
#4: ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:361 [inline]
#4: ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:195 [inline]
#4: ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3856 [inline]
#4: ffff888056a3e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock#2){+...}-{2:2}, at: __dev_queue_xmit+0x126a/0x36b0 net/core/dev.c:4404
#5: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#5: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#5: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: ip_output+0x60/0x3b0 net/ipv4/ip_output.c:431
#6: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#6: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#6: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: ip_finish_output2+0x457/0x11e0 net/ipv4/ip_output.c:228
#7: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#7: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#7: ffffffff8d131fe0 (rcu_read_lock){....}-{1:2}, at: arp_xmit+0x23/0x270 net/ipv4/arp.c:662
#8: ffffffff8d132040 (rcu_read_lock_bh){....}-{1:2}, at: local_bh_disable include/linux/bottom_half.h:20 [inline]
#8: ffffffff8d132040 (rcu_read_lock_bh){....}-{1:2}, at: rcu_read_lock_bh include/linux/rcupdate.h:838 [inline]
#8: ffffffff8d132040 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x26b/0x36b0 net/core/dev.c:4363

stack backtrace:
CPU: 1 PID: 8172 Comm: kworker/u4:12 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Workqueue: bond5 bond_alb_monitor
Call Trace:
<TASK>
dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
check_deadlock kernel/locking/lockdep.c:3062 [inline]
validate_chain kernel/locking/lockdep.c:3856 [inline]
__lock_acquire+0x5dbc/0x7d40 kernel/locking/lockdep.c:5137
lock_acquire+0x19e/0x420 kernel/locking/lockdep.c:5754
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:351 [inline]
__dev_xmit_skb net/core/dev.c:3899 [inline]
__dev_queue_xmit+0x1f6f/0x36b0 net/core/dev.c:4404
NF_HOOK+0x331/0x3b0 include/linux/netfilter.h:-1
arp_xmit+0x16c/0x270 net/ipv4/arp.c:664
arp_solicit+0xc02/0xe40 net/ipv4/arp.c:392
neigh_probe net/core/neighbour.c:1080 [inline]
__neigh_event_send+0xed1/0x1440 net/core/neighbour.c:1247
neigh_event_send_probe include/net/neighbour.h:467 [inline]
neigh_event_send include/net/neighbour.h:473 [inline]
neigh_resolve_output+0x19b/0x730 net/core/neighbour.c:1552
neigh_output include/net/neighbour.h:543 [inline]
ip_finish_output2+0xd3a/0x11e0 net/ipv4/ip_output.c:235
NF_HOOK_COND include/linux/netfilter.h:293 [inline]
ip_output+0x2a1/0x3b0 net/ipv4/ip_output.c:436
iptunnel_xmit+0x4f0/0x920 net/ipv4/ip_tunnel_core.c:82
ip_tunnel_xmit+0x1cbc/0x2410 net/ipv4/ip_tunnel.c:844
__gre_xmit net/ipv4/ip_gre.c:478 [inline]
gre_tap_xmit+0x4fe/0x6f0 net/ipv4/ip_gre.c:757
__netdev_start_xmit include/linux/netdevice.h:4943 [inline]
netdev_start_xmit include/linux/netdevice.h:4957 [inline]
xmit_one net/core/dev.c:3632 [inline]
dev_hard_start_xmit+0x246/0x740 net/core/dev.c:3648
sch_direct_xmit+0x25e/0x4c0 net/sched/sch_generic.c:345
__dev_xmit_skb net/core/dev.c:3869 [inline]
__dev_queue_xmit+0x179c/0x36b0 net/core/dev.c:4404
dev_queue_xmit include/linux/netdevice.h:3113 [inline]
alb_send_lp_vid+0x2fc/0x4e0 drivers/net/bonding/bond_alb.c:949
alb_send_learning_packets+0x12d/0x300 drivers/net/bonding/bond_alb.c:1012
bond_alb_monitor+0x3d6/0x17e0 drivers/net/bonding/bond_alb.c:1564
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa5d/0x15d0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup