[v5.15] possible deadlock in __dev_queue_xmit


syzbot

Apr 2, 2023, 12:12:52 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c957cbb87315 Linux 5.15.105
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=167a96a5c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=6f83fab0469f5de7
dashboard link: https://syzkaller.appspot.com/bug?extid=72dcf9e94b570611775a
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/35817fda76e5/disk-c957cbb8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/80f96399b8d4/vmlinux-c957cbb8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4336a6ad59ec/bzImage-c957cbb8.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+72dcf9...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
5.15.105-syzkaller #0 Not tainted
--------------------------------------------
syz-executor.5/11023 is trying to acquire lock:
ffff88809e56d218 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: spin_lock include/linux/spinlock.h:363 [inline]
ffff88809e56d218 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3844 [inline]
ffff88809e56d218 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x2184/0x3230 net/core/dev.c:4188

but task is already holding lock:
ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:373 [inline]
ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:173 [inline]
ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3806 [inline]
ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x11f2/0x3230 net/core/dev.c:4188

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
lock(dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);

*** DEADLOCK ***

May be due to missing lock nesting notation

10 locks held by syz-executor.5/11023:
#0: ffff8880731778a0 (slock-AF_INET6/1){+.-.}-{2:2}, at: l2tp_xmit_core net/l2tp/l2tp_core.c:1044 [inline]
#0: ffff8880731778a0 (slock-AF_INET6/1){+.-.}-{2:2}, at: l2tp_xmit_skb+0x86c/0x1750 net/l2tp/l2tp_core.c:1109
#1: ffffffff8c91b920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
#2: ffffffff8c91b980 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:269
#3: ffffffff8c91b980 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:269
#4: ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:373 [inline]
#4: ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:173 [inline]
#4: ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3806 [inline]
#4: ffff88802120c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x11f2/0x3230 net/core/dev.c:4188
#5: ffff88807aef6898 (_xmit_ETHER#2){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:363 [inline]
#5: ffff88807aef6898 (_xmit_ETHER#2){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4429 [inline]
#5: ffff88807aef6898 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x1c0/0x5e0 net/sched/sch_generic.c:340
#6: ffff88809afddee0 (k-slock-AF_INET6){+.-.}-{2:2}, at: spin_trylock include/linux/spinlock.h:373 [inline]
#6: ffff88809afddee0 (k-slock-AF_INET6){+.-.}-{2:2}, at: icmpv6_xmit_lock net/ipv6/icmp.c:118 [inline]
#6: ffff88809afddee0 (k-slock-AF_INET6){+.-.}-{2:2}, at: icmp6_send+0xca6/0x21d0 net/ipv6/icmp.c:548
#7: ffffffff8c91b920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
#8: ffffffff8c91b980 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:269
#9: ffffffff8c91b980 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:269

stack backtrace:
CPU: 1 PID: 11023 Comm: syz-executor.5 Not tainted 5.15.105-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2946 [inline]
check_deadlock kernel/locking/lockdep.c:2989 [inline]
validate_chain+0x46cf/0x58b0 kernel/locking/lockdep.c:3774
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:363 [inline]
__dev_xmit_skb net/core/dev.c:3844 [inline]
__dev_queue_xmit+0x2184/0x3230 net/core/dev.c:4188
neigh_hh_output include/net/neighbour.h:500 [inline]
neigh_output include/net/neighbour.h:514 [inline]
ip6_finish_output2+0xea9/0x14f0 net/ipv6/ip6_output.c:126
ip6_send_skb+0x12b/0x240 net/ipv6/ip6_output.c:1932
icmp6_send+0x1723/0x21d0 net/ipv6/icmp.c:627
__icmpv6_send include/linux/icmpv6.h:28 [inline]
icmpv6_send include/linux/icmpv6.h:49 [inline]
ip6_link_failure+0x37/0x4a0 net/ipv6/route.c:2790
ip_tunnel_xmit+0x16d5/0x24a0 net/ipv4/ip_tunnel.c:816
__gre_xmit net/ipv4/ip_gre.c:469 [inline]
erspan_xmit+0xa9c/0x1530 net/ipv4/ip_gre.c:715
__netdev_start_xmit include/linux/netdevice.h:5019 [inline]
netdev_start_xmit include/linux/netdevice.h:5033 [inline]
xmit_one net/core/dev.c:3592 [inline]
dev_hard_start_xmit+0x298/0x7a0 net/core/dev.c:3608
sch_direct_xmit+0x2b2/0x5e0 net/sched/sch_generic.c:342
__dev_xmit_skb net/core/dev.c:3819 [inline]
__dev_queue_xmit+0x18ee/0x3230 net/core/dev.c:4188
neigh_hh_output include/net/neighbour.h:500 [inline]
neigh_output include/net/neighbour.h:514 [inline]
ip6_finish_output2+0xea9/0x14f0 net/ipv6/ip6_output.c:126
ip6_fragment+0x17bb/0x2330 net/ipv6/ip6_output.c:984
dst_output include/net/dst.h:449 [inline]
NF_HOOK include/linux/netfilter.h:307 [inline]
ip6_xmit+0xf5a/0x1560 net/ipv6/ip6_output.c:324
inet6_csk_xmit+0x441/0x6b0 net/ipv6/inet6_connection_sock.c:135
l2tp_xmit_queue net/l2tp/l2tp_core.c:1004 [inline]
l2tp_xmit_core net/l2tp/l2tp_core.c:1093 [inline]
l2tp_xmit_skb+0xf60/0x1750 net/l2tp/l2tp_core.c:1109
pppol2tp_sendmsg+0x388/0x5f0 net/l2tp/l2tp_ppp.c:319
sock_sendmsg_nosec net/socket.c:704 [inline]
sock_sendmsg net/socket.c:724 [inline]
kernel_sendmsg+0xf5/0x130 net/socket.c:744
sock_no_sendpage+0x156/0x1c0 net/core/sock.c:3002
kernel_sendpage+0x25f/0x390 net/socket.c:3509
sock_sendpage+0x7f/0xb0 net/socket.c:1006
pipe_to_sendpage+0x260/0x350 fs/splice.c:364
splice_from_pipe_feed fs/splice.c:418 [inline]
__splice_from_pipe+0x33b/0x890 fs/splice.c:562
splice_from_pipe fs/splice.c:597 [inline]
generic_splice_sendpage+0x195/0x220 fs/splice.c:746
do_splice_from fs/splice.c:767 [inline]
direct_splice_actor+0xe3/0x1c0 fs/splice.c:936
splice_direct_to_actor+0x500/0xc10 fs/splice.c:891
do_splice_direct+0x285/0x3d0 fs/splice.c:979
do_sendfile+0x625/0xff0 fs/read_write.c:1249
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64+0x178/0x1e0 fs/read_write.c:1303
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fe204c590f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe203189168 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007fe204d79120 RCX: 00007fe204c590f9
RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000004
RBP: 00007fe204cb4b39 R08: 0000000000000000 R09: 0000000000000000
R10: 000080001d00c0d0 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff80132dff R14: 00007fe203189300 R15: 0000000000022000
</TASK>
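
The call trace above shows the recursion: l2tp_xmit_skb() sends the L2TP frame over IPv6, the packet is queued to an erspan/GRE device (taking the first qdisc_tx_busylock), ip_tunnel_xmit() then hits ip6_link_failure() and the resulting icmp6_send() re-enters __dev_queue_xmit() on a second device while the first device's busylock is still held. Lockdep reports this because both busylocks fall back to the shared static &qdisc_tx_busylock class, which is what its "missing lock nesting notation" hint refers to. As a hedged illustration only, not the fix for this report, the sketch below shows the usual way a stacked or virtual net_device is given private lockdep class keys via the existing netdev_lockdep_set_classes() helper; example_tunnel_setup() is a hypothetical driver setup callback.

/* Minimal sketch, assuming a hypothetical virtual-device driver.
 * netdev_lockdep_set_classes() is the existing helper in
 * include/linux/netdevice.h; it gives this device its own
 * lock_class_keys so a nested __dev_queue_xmit() through a lower
 * device is not flagged as recursion on the shared busylock class.
 */
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

static void example_tunnel_setup(struct net_device *dev)
{
	ether_setup(dev);

	/* Assign per-device keys for dev->qdisc_tx_busylock and the
	 * per-queue _xmit_lock, replacing the static classes that the
	 * lockdep report above complains about.
	 */
	netdev_lockdep_set_classes(dev);
}

Whether annotating the devices on this path or breaking the ICMPv6 re-entry is the right resolution is left to whoever triages the report.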


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Oct 25, 2023, 12:56:46 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, there is no reproducer, and there has been no activity.