[v5.15] possible deadlock in skb_queue_tail


syzbot

May 4, 2023, 2:07:47 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 8a7f2a5c5aa1 Linux 5.15.110
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12020938280000
kernel config: https://syzkaller.appspot.com/x/.config?x=ba8d5c9d6c5289f
dashboard link: https://syzkaller.appspot.com/bug?extid=52739e7da8bd46b59ebc
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/fc04f54c047f/disk-8a7f2a5c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/6b4ba4cb1191/vmlinux-8a7f2a5c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d927dc3f9670/bzImage-8a7f2a5c.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+52739e...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.110-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/11996 is trying to acquire lock:
ffff88802d07e1e0 (rlock-AF_UNIX){+.+.}-{2:2}, at: skb_queue_tail+0x32/0x120 net/core/skbuff.c:3355

but task is already holding lock:
ffff88802d07e680 (&u->lock/1){+.+.}-{2:2}, at: unix_dgram_sendmsg+0x1001/0x2090 net/unix/af_unix.c:1924

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&u->lock/1){+.+.}-{2:2}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
_raw_spin_lock_nested+0x2d/0x40 kernel/locking/spinlock.c:368
sk_diag_dump_icons net/unix/diag.c:86 [inline]
sk_diag_fill+0x6e6/0xfe0 net/unix/diag.c:156
sk_diag_dump net/unix/diag.c:195 [inline]
unix_diag_dump+0x3a5/0x5a0 net/unix/diag.c:223
netlink_dump+0x606/0xc40 net/netlink/af_netlink.c:2307
__netlink_dump_start+0x52f/0x6f0 net/netlink/af_netlink.c:2412
netlink_dump_start include/linux/netlink.h:258 [inline]
unix_diag_handler_dump+0x1be/0x810 net/unix/diag.c:323
sock_diag_rcv_msg+0xd5/0x400
netlink_rcv_skb+0x1cf/0x410 net/netlink/af_netlink.c:2533
sock_diag_rcv+0x26/0x40 net/core/sock_diag.c:276
netlink_unicast_kernel net/netlink/af_netlink.c:1330 [inline]
netlink_unicast+0x7b6/0x980 net/netlink/af_netlink.c:1356
netlink_sendmsg+0xa30/0xd60 net/netlink/af_netlink.c:1952
sock_sendmsg_nosec net/socket.c:704 [inline]
sock_sendmsg net/socket.c:724 [inline]
sock_write_iter+0x39b/0x530 net/socket.c:1060
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0xacf/0xe50 fs/read_write.c:594
ksys_write+0x1a2/0x2c0 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #0 (rlock-AF_UNIX){+.+.}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
skb_queue_tail+0x32/0x120 net/core/skbuff.c:3355
unix_dgram_sendmsg+0x15c1/0x2090 net/unix/af_unix.c:1947
sock_sendmsg_nosec net/socket.c:704 [inline]
sock_sendmsg net/socket.c:724 [inline]
sock_write_iter+0x39b/0x530 net/socket.c:1060
call_write_iter include/linux/fs.h:2103 [inline]
io_write+0x84f/0xea0 io_uring/io_uring.c:3771
io_issue_sqe+0x176a/0xa770 io_uring/io_uring.c:6883
__io_queue_sqe+0x34/0x360 io_uring/io_uring.c:7191
io_queue_sqe io_uring/io_uring.c:7242 [inline]
io_submit_sqe io_uring/io_uring.c:7419 [inline]
io_submit_sqes+0x30fb/0xa410 io_uring/io_uring.c:7525
__do_sys_io_uring_enter io_uring/io_uring.c:10254 [inline]
__se_sys_io_uring_enter+0x28b/0x21c0 io_uring/io_uring.c:10196
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&u->lock/1);
                               lock(rlock-AF_UNIX);
                               lock(&u->lock/1);
  lock(rlock-AF_UNIX);

*** DEADLOCK ***

2 locks held by syz-executor.5/11996:
#0: ffff8880776b00a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_enter io_uring/io_uring.c:10253 [inline]
#0: ffff8880776b00a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __se_sys_io_uring_enter+0x280/0x21c0 io_uring/io_uring.c:10196
#1: ffff88802d07e680 (&u->lock/1){+.+.}-{2:2}, at: unix_dgram_sendmsg+0x1001/0x2090 net/unix/af_unix.c:1924

stack backtrace:
CPU: 0 PID: 11996 Comm: syz-executor.5 Not tainted 5.15.110-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2f8/0x3b0 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
skb_queue_tail+0x32/0x120 net/core/skbuff.c:3355
unix_dgram_sendmsg+0x15c1/0x2090 net/unix/af_unix.c:1947
sock_sendmsg_nosec net/socket.c:704 [inline]
sock_sendmsg net/socket.c:724 [inline]
sock_write_iter+0x39b/0x530 net/socket.c:1060
call_write_iter include/linux/fs.h:2103 [inline]
io_write+0x84f/0xea0 io_uring/io_uring.c:3771
io_issue_sqe+0x176a/0xa770 io_uring/io_uring.c:6883
__io_queue_sqe+0x34/0x360 io_uring/io_uring.c:7191
io_queue_sqe io_uring/io_uring.c:7242 [inline]
io_submit_sqe io_uring/io_uring.c:7419 [inline]
io_submit_sqes+0x30fb/0xa410 io_uring/io_uring.c:7525
__do_sys_io_uring_enter io_uring/io_uring.c:10254 [inline]
__se_sys_io_uring_enter+0x28b/0x21c0 io_uring/io_uring.c:10196
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fdee1d3a169
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fdee028b168 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
RAX: ffffffffffffffda RBX: 00007fdee1e5a050 RCX: 00007fdee1d3a169
RDX: 0000000000000000 RSI: 00000000000001c9 RDI: 0000000000000003
RBP: 00007fdee1d95ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffca10533df R14: 00007fdee028b300 R15: 0000000000022000
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Sep 23, 2023, 9:49:23 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while; there is no reproducer and no activity.