[syzbot] kernel BUG in rxrpc_put_peer

syzbot

Dec 6, 2022, 11:34:37 AM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c9f8d73645b6 net: mtk_eth_soc: enable flow offload support..
git tree: net-next
console output: https://syzkaller.appspot.com/x/log.txt?x=11fedb97880000
kernel config: https://syzkaller.appspot.com/x/.config?x=c608c21151db14f2
dashboard link: https://syzkaller.appspot.com/bug?extid=c22650d2844392afdcfd
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=103f84db880000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/bf270f71d81b/disk-c9f8d736.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/9df5873e74c3/vmlinux-c9f8d736.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4db90f01e6d3/bzImage-c9f8d736.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c22650...@syzkaller.appspotmail.com

rxrpc: Assertion failed
------------[ cut here ]------------
kernel BUG at net/rxrpc/peer_object.c:413!
invalid opcode: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 15173 Comm: krxrpcio/0 Not tainted 6.1.0-rc7-syzkaller-01810-gc9f8d73645b6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
RIP: 0010:__rxrpc_put_peer net/rxrpc/peer_object.c:413 [inline]
RIP: 0010:rxrpc_put_peer.cold+0x11/0x13 net/rxrpc/peer_object.c:437
Code: ff e9 2d f2 f9 fe e8 60 39 92 f7 48 c7 c7 20 ce 74 8b e8 f8 72 bd ff 0f 0b e8 4d 39 92 f7 48 c7 c7 20 d3 74 8b e8 e5 72 bd ff <0f> 0b e8 3a 39 92 f7 4c 8b 4c 24 30 48 89 ea 48 89 ee 48 c7 c1 20
RSP: 0018:ffffc9000b017be8 EFLAGS: 00010282
RAX: 0000000000000017 RBX: ffff88807c44a800 RCX: 0000000000000000
RDX: ffff88807b57ba80 RSI: ffffffff816576cc RDI: fffff52001602f6f
RBP: ffff88807b531c00 R08: 0000000000000017 R09: 0000000000000000
R10: 0000000080000000 R11: 0000000000000000 R12: ffff888023d7c000
R13: ffff88807b531d28 R14: ffff88807b531c10 R15: ffff88807b531c30
FS: 0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9bbe7821b8 CR3: 0000000076947000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
rxrpc_clean_up_connection+0x37d/0x4b0 net/rxrpc/conn_object.c:317
rxrpc_put_connection.part.0+0x1e8/0x210 net/rxrpc/conn_object.c:356
rxrpc_put_connection+0x25/0x30 net/rxrpc/conn_object.c:339
rxrpc_clean_up_local_conns+0x3ad/0x530 net/rxrpc/conn_client.c:1131
rxrpc_destroy_local+0x170/0x2f0 net/rxrpc/local_object.c:392
rxrpc_io_thread+0xcde/0xfa0 net/rxrpc/io_thread.c:492
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:__rxrpc_put_peer net/rxrpc/peer_object.c:413 [inline]
RIP: 0010:rxrpc_put_peer.cold+0x11/0x13 net/rxrpc/peer_object.c:437
Code: ff e9 2d f2 f9 fe e8 60 39 92 f7 48 c7 c7 20 ce 74 8b e8 f8 72 bd ff 0f 0b e8 4d 39 92 f7 48 c7 c7 20 d3 74 8b e8 e5 72 bd ff <0f> 0b e8 3a 39 92 f7 4c 8b 4c 24 30 48 89 ea 48 89 ee 48 c7 c1 20
RSP: 0018:ffffc9000b017be8 EFLAGS: 00010282
RAX: 0000000000000017 RBX: ffff88807c44a800 RCX: 0000000000000000
RDX: ffff88807b57ba80 RSI: ffffffff816576cc RDI: fffff52001602f6f
RBP: ffff88807b531c00 R08: 0000000000000017 R09: 0000000000000000
R10: 0000000080000000 R11: 0000000000000000 R12: ffff888023d7c000
R13: ffff88807b531d28 R14: ffff88807b531c10 R15: ffff88807b531c30
FS: 0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9bbe7821b8 CR3: 000000002698e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches
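
[Context for the trace above: both the "rxrpc: Assertion failed" line and the BUG() come from rxrpc's local ASSERT() macro, and the test patch later in this thread shows the assertion at the top of __rxrpc_put_peer() in net/rxrpc/peer_object.c. A rough sketch of what is tripping, paraphrased from the 6.1-era source; exact line numbers vary between the trees tested in this thread:]

/* ASSERT() as defined in net/rxrpc/ar-internal.h: it logs "Assertion
 * failed" (pr_fmt adds the "rxrpc: " prefix) and then hits BUG(), which
 * is the invalid-opcode trap seen in the oops above.
 */
#define ASSERT(X)					\
do {							\
	if (unlikely(!(X))) {				\
		pr_err("Assertion failed\n");		\
		BUG();					\
	}						\
} while (0)

static void __rxrpc_put_peer(struct rxrpc_peer *peer)
{
	struct rxrpc_net *rxnet = peer->local->rxnet;

	/* The peer must no longer be a target for call error distribution;
	 * a non-empty list here means some call still holds the peer on its
	 * error_targets list while the last reference is being dropped.
	 */
	ASSERT(hlist_empty(&peer->error_targets));

	spin_lock(&rxnet->peer_hash_lock);
	hash_del_rcu(&peer->hash_link);
	list_del_init(&peer->keepalive_link);
	spin_unlock(&rxnet->peer_hash_lock);

	rxrpc_free_peer(peer);
}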

David Howells

Dec 8, 2022, 9:09:28 AM
to syzbot, dhow...@redhat.com, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com

syzbot

Dec 8, 2022, 6:34:23 PM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch and the reproducer did not trigger any issue:

Reported-and-tested-by: syzbot+c22650...@syzkaller.appspotmail.com

Tested on:

commit: efb7555b rxrpc: Fix I/O thread stop
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/ afs-testing
console output: https://syzkaller.appspot.com/x/log.txt?x=13660d6b880000
kernel config: https://syzkaller.appspot.com/x/.config?x=331c73ac8d6e1cab
dashboard link: https://syzkaller.appspot.com/bug?extid=c22650d2844392afdcfd
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

Note: no patches were applied.
Note: testing is done by a robot and is best-effort only.

David Howells

Dec 19, 2022, 11:44:53 AM
to syzbot, dhow...@redhat.com, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/ 6529d70012e00166ab2ca4a92c4aa01e30a3037b

syzbot

Dec 19, 2022, 12:17:18 PM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: rcu detected stall in corrupted

rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P5792 } 2657 jiffies s: 2825 root: 0x0/T
rcu: blocking rcu_node structures (internal RCU debug):


Tested on:

commit: 6529d700 rxrpc: Move client call connection to the I/O..
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/
console output: https://syzkaller.appspot.com/x/log.txt?x=13f3b420480000
kernel config: https://syzkaller.appspot.com/x/.config?x=b0e91ad4b5f69c47

David Howells

Dec 20, 2022, 11:02:52 AM
to syzbot, dhow...@redhat.com, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/ 2bc808999a4484720747bbfcb03f9eb4a36223f0

syzbot

Dec 20, 2022, 2:15:22 PM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: rcu detected stall in corrupted

rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P5782 } 2677 jiffies s: 2821 root: 0x0/T
rcu: blocking rcu_node structures (internal RCU debug):


Tested on:

commit: 2bc80899 rxrpc: Fix a couple of potential use-after-fr..
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/
console output: https://syzkaller.appspot.com/x/log.txt?x=12729378480000

David Howells

Dec 21, 2022, 11:44:17 AM
to syzbot, dhow...@redhat.com, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/ 97f46a2a6f87e97634b3179190dbb5d947f03bd6

syzbot

Dec 21, 2022, 12:25:23 PM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: rcu detected stall in corrupted

rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P5786 } 2640 jiffies s: 2805 root: 0x0/T
rcu: blocking rcu_node structures (internal RCU debug):


Tested on:

commit: 97f46a2a afs: Fix lost servers_outstanding count
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/
console output: https://syzkaller.appspot.com/x/log.txt?x=16357637880000

syzbot

Dec 23, 2022, 3:29:35 AM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 8395ae05cb5a Merge tag 'scsi-misc' of git://git.kernel.org..
git tree: upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=1458e1cf880000
kernel config: https://syzkaller.appspot.com/x/.config?x=b0e81c4eb13a67cd
dashboard link: https://syzkaller.appspot.com/bug?extid=c22650d2844392afdcfd
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14386450480000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17223f80480000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/185a22278a16/disk-8395ae05.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/852ad4c6710f/vmlinux-8395ae05.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d0b9daae6d3a/bzImage-8395ae05.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c22650...@syzkaller.appspotmail.com

rxrpc: Assertion failed
------------[ cut here ]------------
kernel BUG at net/rxrpc/peer_object.c:413!
invalid opcode: 0000 [#1] PREEMPT SMP KASAN
CPU: 1 PID: 27502 Comm: krxrpcio/0 Not tainted 6.1.0-syzkaller-14446-g8395ae05cb5a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
RIP: 0010:__rxrpc_put_peer net/rxrpc/peer_object.c:413 [inline]
RIP: 0010:rxrpc_put_peer.cold+0x11/0x13 net/rxrpc/peer_object.c:437
Code: ff e9 21 62 f9 fe e8 74 30 7e f7 48 c7 c7 a0 16 76 8b e8 04 ef bc ff 0f 0b e8 61 30 7e f7 48 c7 c7 a0 1b 76 8b e8 f1 ee bc ff <0f> 0b e8 4e 30 7e f7 4c 8b 4c 24 30 48 89 ea 48 89 ee 48 c7 c1 a0
RSP: 0018:ffffc9000607fbe8 EFLAGS: 00010282
RAX: 0000000000000017 RBX: ffff88801eeb7800 RCX: 0000000000000000
RDX: ffff88802b638280 RSI: ffffffff8165927c RDI: fffff52000c0ff6f
RBP: ffff888028d23c00 R08: 0000000000000017 R09: 0000000000000000
R10: 0000000080000000 R11: 0000000000000000 R12: ffff888028550000
R13: ffff888028d23d28 R14: ffff888028d23c10 R15: ffff888028d23c30
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000140 CR3: 0000000077fb2000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
rxrpc_clean_up_connection+0x37d/0x4b0 net/rxrpc/conn_object.c:317
rxrpc_put_connection.part.0+0x1e8/0x210 net/rxrpc/conn_object.c:356
rxrpc_put_connection+0x25/0x30 net/rxrpc/conn_object.c:339
rxrpc_clean_up_local_conns+0x3ad/0x530 net/rxrpc/conn_client.c:1129
rxrpc_destroy_local+0x170/0x2f0 net/rxrpc/local_object.c:395
rxrpc_io_thread+0xce8/0xfb0 net/rxrpc/io_thread.c:496
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:__rxrpc_put_peer net/rxrpc/peer_object.c:413 [inline]
RIP: 0010:rxrpc_put_peer.cold+0x11/0x13 net/rxrpc/peer_object.c:437
Code: ff e9 21 62 f9 fe e8 74 30 7e f7 48 c7 c7 a0 16 76 8b e8 04 ef bc ff 0f 0b e8 61 30 7e f7 48 c7 c7 a0 1b 76 8b e8 f1 ee bc ff <0f> 0b e8 4e 30 7e f7 4c 8b 4c 24 30 48 89 ea 48 89 ee 48 c7 c1 a0
RSP: 0018:ffffc9000607fbe8 EFLAGS: 00010282
RAX: 0000000000000017 RBX: ffff88801eeb7800 RCX: 0000000000000000
RDX: ffff88802b638280 RSI: ffffffff8165927c RDI: fffff52000c0ff6f
RBP: ffff888028d23c00 R08: 0000000000000017 R09: 0000000000000000
R10: 0000000080000000 R11: 0000000000000000 R12: ffff888028550000
R13: ffff888028d23d28 R14: ffff888028d23c10 R15: ffff888028d23c30
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000140 CR3: 000000000c48e000 CR4: 00000000003506e0

David Howells

Jan 5, 2023, 11:03:32 AM
to syzbot, dhow...@redhat.com, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/ a5852d9046053fc64eb250c1c07e49162de616ab

syzbot

Jan 5, 2023, 11:32:22 AM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: rcu detected stall in corrupted

rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P5529 } 2650 jiffies s: 2809 root: 0x0/T
rcu: blocking rcu_node structures (internal RCU debug):


Tested on:

commit: a5852d90 rxrpc: Move client call connection to the I/O..
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/
console output: https://syzkaller.appspot.com/x/log.txt?x=13297ee6480000
kernel config: https://syzkaller.appspot.com/x/.config?x=90282e312d5fd612
dashboard link: https://syzkaller.appspot.com/bug?extid=c22650d2844392afdcfd
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

David Howells

Jan 6, 2023, 6:47:45 AM
to syzbot, dhow...@redhat.com, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/ 9e80802b1c2374cdc7ed4a3fd40a3489ec8e9910

syzbot

Jan 6, 2023, 12:18:29 PM
to da...@davemloft.net, dhow...@redhat.com, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: rcu detected stall in corrupted

rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P5540 } 2684 jiffies s: 2885 root: 0x0/T
rcu: blocking rcu_node structures (internal RCU debug):


Tested on:

commit: 9e80802b rxrpc: TEST: Remove almost all use of RCU
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/
console output: https://syzkaller.appspot.com/x/log.txt?x=1587a762480000
kernel config: https://syzkaller.appspot.com/x/.config?x=affadc28955d87c3

David Howells

Jan 6, 2023, 4:55:01 PM
to syzbot, dhow...@redhat.com, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linu...@lists.infradead.org, linux-...@vger.kernel.org, marc....@auristor.com, net...@vger.kernel.org, pab...@redhat.com, syzkall...@googlegroups.com
syzbot <syzbot+c22650...@syzkaller.appspotmail.com> wrote:

> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> INFO: rcu detected stall in corrupted
>
> rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P5540 } 2684 jiffies s: 2885 root: 0x0/T
> rcu: blocking rcu_node structures (internal RCU debug):

Okay, I think this is very likely not due to rxrpc, but rather to one of the
tunnel drivers used by the test (it seems to use a lot of different drivers).
I added the attached patch which removes almost every last bit of RCU from
rxrpc (there's still a bit because the UDP socket notification hooks require
it), and the problem still occurs.

However, the original problem seems to be fixed.

David
---
commit 9e80802b1c2374cdc7ed4a3fd40a3489ec8e9910
Author: David Howells <dhow...@redhat.com>
Date: Fri Jan 6 09:49:06 2023 +0000

rxrpc: TEST: Remove almost all use of RCU

... to try and fix syzbot rcu issue
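
[The conversion below is mechanical and repeats file by file: lookups that ran under rcu_read_lock() now hold the lock that already serialises writers for the duration of the walk, and the call_rcu()/kfree_rcu() deferrals become direct frees, since lookup and removal end up serialised by the same lock. For a test patch the extra serialisation cost does not matter; the point is to take RCU out of the picture entirely. A condensed before/after sketch of the peer-lookup case, where __lookup_peer stands in for __rxrpc_lookup_peer_rcu(), renamed __rxrpc_find_peer() in the diff:]

/* Before: RCU-protected hash walk; the freeing side must defer with
 * kfree_rcu() so that concurrent readers never touch freed memory.
 */
rcu_read_lock();
peer = __lookup_peer(local, srx, hash_key);
if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client))
	peer = NULL;	/* raced with the final put */
rcu_read_unlock();

/* After: hold peer_hash_lock across the walk; removal takes the same
 * lock, so the peer cannot vanish under us and plain kfree() suffices
 * on the freeing side.
 */
spin_lock(&rxnet->peer_hash_lock);
peer = __lookup_peer(local, srx, hash_key);
if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client))
	peer = NULL;
spin_unlock(&rxnet->peer_hash_lock);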

diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index cf200e4e0eae..b746f8d556db 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -45,29 +45,11 @@ struct workqueue_struct *rxrpc_workqueue;

static void rxrpc_sock_destructor(struct sock *);

-/*
- * see if an RxRPC socket is currently writable
- */
-static inline int rxrpc_writable(struct sock *sk)
-{
- return refcount_read(&sk->sk_wmem_alloc) < (size_t) sk->sk_sndbuf;
-}
-
/*
* wait for write bufferage to become available
*/
static void rxrpc_write_space(struct sock *sk)
{
- _enter("%p", sk);
- rcu_read_lock();
- if (rxrpc_writable(sk)) {
- struct socket_wq *wq = rcu_dereference(sk->sk_wq);
-
- if (skwq_has_sleeper(wq))
- wake_up_interruptible(&wq->wait);
- sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
- }
- rcu_read_unlock();
}

/*
@@ -155,10 +137,10 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)

if (service_id) {
write_lock(&local->services_lock);
- if (rcu_access_pointer(local->service))
+ if (local->service)
goto service_in_use;
rx->local = local;
- rcu_assign_pointer(local->service, rx);
+ local->service = rx;
write_unlock(&local->services_lock);

rx->sk.sk_state = RXRPC_SERVER_BOUND;
@@ -738,8 +720,7 @@ static __poll_t rxrpc_poll(struct file *file, struct socket *sock,
/* the socket is writable if there is space to add new data to the
* socket; there is no guarantee that any particular call in progress
* on the socket may have space in the Tx ACK window */
- if (rxrpc_writable(sk))
- mask |= EPOLLOUT | EPOLLWRNORM;
+ mask |= EPOLLOUT | EPOLLWRNORM;

return mask;
}
@@ -875,9 +856,9 @@ static int rxrpc_release_sock(struct sock *sk)

sk->sk_state = RXRPC_CLOSE;

- if (rx->local && rcu_access_pointer(rx->local->service) == rx) {
+ if (rx->local && rx->local->service == rx) {
write_lock(&rx->local->services_lock);
- rcu_assign_pointer(rx->local->service, NULL);
+ rx->local->service = NULL;
write_unlock(&rx->local->services_lock);
}

@@ -1053,12 +1034,6 @@ static void __exit af_rxrpc_exit(void)
proto_unregister(&rxrpc_proto);
unregister_pernet_device(&rxrpc_net_ops);
ASSERTCMP(atomic_read(&rxrpc_n_rx_skbs), ==, 0);
-
- /* Make sure the local and peer records pinned by any dying connections
- * are released.
- */
- rcu_barrier();
-
destroy_workqueue(rxrpc_workqueue);
rxrpc_exit_security();
kmem_cache_destroy(rxrpc_call_jar);
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 007258538bee..d21eea915967 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -283,7 +283,7 @@ struct rxrpc_local {
struct socket *socket; /* my UDP socket */
struct task_struct *io_thread;
struct completion io_thread_ready; /* Indication that the I/O thread started */
- struct rxrpc_sock __rcu *service; /* Service(s) listening on this endpoint */
+ struct rxrpc_sock *service; /* Service(s) listening on this endpoint */
struct rw_semaphore defrag_sem; /* control re-enablement of IP DF bit */
struct sk_buff_head rx_queue; /* Received packets */
struct list_head conn_attend_q; /* Conns requiring immediate attention */
@@ -313,7 +313,6 @@ struct rxrpc_local {
* - matched by local endpoint, remote port, address and protocol type
*/
struct rxrpc_peer {
- struct rcu_head rcu; /* This must be first */
refcount_t ref;
unsigned long hash_key;
struct hlist_node hash_link;
@@ -460,7 +459,6 @@ struct rxrpc_connection {

refcount_t ref;
atomic_t active; /* Active count for service conns */
- struct rcu_head rcu;
struct list_head cache_link;

unsigned char act_chans; /* Mask of active channels */
@@ -593,12 +591,11 @@ enum rxrpc_congest_mode {
* - matched by { connection, call_id }
*/
struct rxrpc_call {
- struct rcu_head rcu;
struct rxrpc_connection *conn; /* connection carrying call */
struct rxrpc_bundle *bundle; /* Connection bundle to use */
struct rxrpc_peer *peer; /* Peer record for remote address */
struct rxrpc_local *local; /* Representation of local endpoint */
- struct rxrpc_sock __rcu *socket; /* socket responsible */
+ struct rxrpc_sock *socket; /* socket responsible */
struct rxrpc_net *rxnet; /* Network namespace to which call belongs */
struct key *key; /* Security details */
const struct rxrpc_security *security; /* applied security module */
@@ -770,7 +767,6 @@ struct rxrpc_send_params {
* Buffer of data to be output as a packet.
*/
struct rxrpc_txbuf {
- struct rcu_head rcu;
struct list_head call_link; /* Link in call->tx_sendmsg/tx_buffer */
struct list_head tx_link; /* Link in live Enc queue or Tx queue */
ktime_t last_sent; /* Time at which last transmitted */
@@ -979,9 +975,9 @@ extern unsigned int rxrpc_closed_conn_expiry;

void rxrpc_poke_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why);
struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *, gfp_t);
-struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *,
- struct sockaddr_rxrpc *,
- struct sk_buff *);
+struct rxrpc_connection *rxrpc_find_client_connection(struct rxrpc_local *,
+ struct sockaddr_rxrpc *,
+ struct sk_buff *);
void __rxrpc_disconnect_call(struct rxrpc_connection *, struct rxrpc_call *);
void rxrpc_disconnect_call(struct rxrpc_call *);
void rxrpc_kill_client_conn(struct rxrpc_connection *);
@@ -1014,8 +1010,8 @@ static inline void rxrpc_reduce_conn_timer(struct rxrpc_connection *conn,
/*
* conn_service.c
*/
-struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *,
- struct sk_buff *);
+struct rxrpc_connection *rxrpc_find_service_conn(struct rxrpc_peer *,
+ struct sk_buff *);
struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *, gfp_t);
void rxrpc_new_incoming_connection(struct rxrpc_sock *, struct rxrpc_connection *,
const struct rxrpc_security *, struct sk_buff *);
@@ -1141,8 +1137,8 @@ void rxrpc_peer_keepalive_worker(struct work_struct *);
/*
* peer_object.c
*/
-struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *,
- const struct sockaddr_rxrpc *);
+struct rxrpc_peer *rxrpc_find_peer(struct rxrpc_local *,
+ const struct sockaddr_rxrpc *);
struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
struct sockaddr_rxrpc *srx, gfp_t gfp);
struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t,
@@ -1156,10 +1152,6 @@ void rxrpc_put_peer(struct rxrpc_peer *, enum rxrpc_peer_trace);
/*
* proc.c
*/
-extern const struct seq_operations rxrpc_call_seq_ops;
-extern const struct seq_operations rxrpc_connection_seq_ops;
-extern const struct seq_operations rxrpc_peer_seq_ops;
-extern const struct seq_operations rxrpc_local_seq_ops;

/*
* recvmsg.c
diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index 3fbf2fcaaf9e..7dd7a9a37632 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -139,7 +139,7 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,

rxnet = call->rxnet;
spin_lock(&rxnet->call_lock);
- list_add_tail_rcu(&call->link, &rxnet->calls);
+ list_add_tail(&call->link, &rxnet->calls);
spin_unlock(&rxnet->call_lock);

b->call_backlog[call_head] = call;
@@ -218,7 +218,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
tail = b->call_backlog_tail;
while (CIRC_CNT(head, tail, size) > 0) {
struct rxrpc_call *call = b->call_backlog[tail];
- rcu_assign_pointer(call->socket, rx);
+ call->socket = rx;
if (rx->discard_new_call) {
_debug("discard %lx", call->user_call_ID);
rx->discard_new_call(call, call->user_call_ID);
@@ -343,13 +343,13 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
if (sp->hdr.type != RXRPC_PACKET_TYPE_DATA)
return rxrpc_protocol_error(skb, rxrpc_eproto_no_service_call);

- rcu_read_lock();
+ read_lock(&local->services_lock);

/* Weed out packets to services we're not offering. Packets that would
* begin a call are explicitly rejected and the rest are just
* discarded.
*/
- rx = rcu_dereference(local->service);
+ rx = local->service;
if (!rx || (sp->hdr.serviceId != rx->srx.srx_service &&
sp->hdr.serviceId != rx->second_service)
) {
@@ -399,7 +399,7 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
spin_unlock(&conn->state_lock);

spin_unlock(&rx->incoming_lock);
- rcu_read_unlock();
+ read_unlock(&local->services_lock);

if (hlist_unhashed(&call->error_link)) {
spin_lock(&call->peer->lock);
@@ -413,20 +413,20 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
return true;

unsupported_service:
- rcu_read_unlock();
+ read_unlock(&local->services_lock);
return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
RX_INVALID_OPERATION, -EOPNOTSUPP);
unsupported_security:
- rcu_read_unlock();
+ read_unlock(&local->services_lock);
return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
RX_INVALID_OPERATION, -EKEYREJECTED);
no_call:
spin_unlock(&rx->incoming_lock);
- rcu_read_unlock();
+ read_unlock(&local->services_lock);
_leave(" = f [%u]", skb->mark);
return false;
discard:
- rcu_read_unlock();
+ read_unlock(&local->services_lock);
return true;
}

diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 1abdef15debc..436b8db6667a 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -76,8 +76,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,

rxrpc_inc_stat(call->rxnet, stat_tx_acks[ack_reason]);

- txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_ACK,
- rcu_read_lock_held() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS);
+ txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_ACK, GFP_NOFS);
if (!txb) {
kleave(" = -ENOMEM");
return;
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 3ded5a24627c..54c1dc7dde5c 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -377,7 +377,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
goto error_dup_user_ID;
}

- rcu_assign_pointer(call->socket, rx);
+ call->socket = rx;
call->user_call_ID = p->user_call_ID;
__set_bit(RXRPC_CALL_HAS_USERID, &call->flags);
rxrpc_get_call(call, rxrpc_call_get_userid);
@@ -389,7 +389,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,

rxnet = call->rxnet;
spin_lock(&rxnet->call_lock);
- list_add_tail_rcu(&call->link, &rxnet->calls);
+ list_add_tail(&call->link, &rxnet->calls);
spin_unlock(&rxnet->call_lock);

/* From this point on, the call is protected by its own lock. */
@@ -448,7 +448,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,

_enter(",%d", call->conn->debug_id);

- rcu_assign_pointer(call->socket, rx);
+ call->socket = rx;
call->call_id = sp->hdr.callNumber;
call->dest_srx.srx_service = sp->hdr.serviceId;
call->cid = sp->hdr.cid;
@@ -655,11 +655,10 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
}

/*
- * Free up the call under RCU.
+ * Free up the call.
*/
-static void rxrpc_rcu_free_call(struct rcu_head *rcu)
+static void rxrpc_free_call(struct rxrpc_call *call)
{
- struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);
struct rxrpc_net *rxnet = READ_ONCE(call->rxnet);

kmem_cache_free(rxrpc_call_jar, call);
@@ -695,7 +694,7 @@ static void rxrpc_destroy_call(struct work_struct *work)
rxrpc_put_bundle(call->bundle, rxrpc_bundle_put_call);
rxrpc_put_peer(call->peer, rxrpc_peer_put_call);
rxrpc_put_local(call->local, rxrpc_local_put_call);
- call_rcu(&call->rcu, rxrpc_rcu_free_call);
+ rxrpc_free_call(call);
}

/*
@@ -709,14 +708,7 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));

del_timer(&call->timer);
-
- if (rcu_read_lock_held())
- /* Can't use the rxrpc workqueue as we need to cancel/flush
- * something that may be running/waiting there.
- */
- schedule_work(&call->destroyer);
- else
- rxrpc_destroy_call(&call->destroyer);
+ rxrpc_destroy_call(&call->destroyer);
}

/*
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index ac85d4644a3c..beef64d14d98 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -88,12 +88,10 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet,
*
* When searching for a service call, if we find a peer but no connection, we
* return that through *_peer in case we need to create a new service call.
- *
- * The caller must be holding the RCU read lock.
*/
-struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *local,
- struct sockaddr_rxrpc *srx,
- struct sk_buff *skb)
+struct rxrpc_connection *rxrpc_find_client_connection(struct rxrpc_local *local,
+ struct sockaddr_rxrpc *srx,
+ struct sk_buff *skb)
{
struct rxrpc_connection *conn;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
@@ -286,10 +284,8 @@ static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet,
/*
* destroy a virtual connection
*/
-static void rxrpc_rcu_free_connection(struct rcu_head *rcu)
+static void rxrpc_free_connection(struct rxrpc_connection *conn)
{
- struct rxrpc_connection *conn =
- container_of(rcu, struct rxrpc_connection, rcu);
struct rxrpc_net *rxnet = conn->rxnet;

_enter("{%d,u=%d}", conn->debug_id, refcount_read(&conn->ref));
@@ -341,7 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work)
*/
rxrpc_purge_queue(&conn->rx_queue);

- call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
+ rxrpc_free_connection(conn);
}

/*
diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
index f30323de82bd..0d4e34c8c7ad 100644
--- a/net/rxrpc/conn_service.c
+++ b/net/rxrpc/conn_service.c
@@ -14,7 +14,7 @@ static struct rxrpc_bundle rxrpc_service_dummy_bundle = {
};

/*
- * Find a service connection under RCU conditions.
+ * Find a service connection.
*
* We could use a hash table, but that is subject to bucket stuffing by an
* attacker as the client gets to pick the epoch and cid values and would know
@@ -23,40 +23,33 @@ static struct rxrpc_bundle rxrpc_service_dummy_bundle = {
* it might be slower than a large hash table, but it is at least limited in
* depth.
*/
-struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
- struct sk_buff *skb)
+struct rxrpc_connection *rxrpc_find_service_conn(struct rxrpc_peer *peer,
+ struct sk_buff *skb)
{
struct rxrpc_connection *conn = NULL;
struct rxrpc_conn_proto k;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rb_node *p;
- unsigned int seq = 0;

k.epoch = sp->hdr.epoch;
k.cid = sp->hdr.cid & RXRPC_CIDMASK;

- do {
- /* Unfortunately, rbtree walking doesn't give reliable results
- * under just the RCU read lock, so we have to check for
- * changes.
- */
- read_seqbegin_or_lock(&peer->service_conn_lock, &seq);
-
- p = rcu_dereference_raw(peer->service_conns.rb_node);
- while (p) {
- conn = rb_entry(p, struct rxrpc_connection, service_node);
-
- if (conn->proto.index_key < k.index_key)
- p = rcu_dereference_raw(p->rb_left);
- else if (conn->proto.index_key > k.index_key)
- p = rcu_dereference_raw(p->rb_right);
- else
- break;
- conn = NULL;
- }
- } while (need_seqretry(&peer->service_conn_lock, seq));
-
- done_seqretry(&peer->service_conn_lock, seq);
+ read_seqlock_excl(&peer->service_conn_lock);
+
+ p = peer->service_conns.rb_node;
+ while (p) {
+ conn = rb_entry(p, struct rxrpc_connection, service_node);
+
+ if (conn->proto.index_key < k.index_key)
+ p = p->rb_left;
+ else if (conn->proto.index_key > k.index_key)
+ p = p->rb_right;
+ else
+ break;
+ conn = NULL;
+ }
+
+ read_sequnlock_excl(&peer->service_conn_lock);
_leave(" = %d", conn ? conn->debug_id : -1);
return conn;
}
@@ -89,7 +82,7 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
goto found_extant_conn;
}

- rb_link_node_rcu(&conn->service_node, parent, pp);
+ rb_link_node(&conn->service_node, parent, pp);
rb_insert_color(&conn->service_node, &peer->service_conns);
conn_published:
set_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags);
@@ -110,9 +103,9 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
replace_old_connection:
/* The old connection is from an outdated epoch. */
_debug("replace conn");
- rb_replace_node_rcu(&cursor->service_node,
- &conn->service_node,
- &peer->service_conns);
+ rb_replace_node(&cursor->service_node,
+ &conn->service_node,
+ &peer->service_conns);
clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &cursor->flags);
goto conn_published;
}
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index 367927a99881..ad622d185dea 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -210,7 +210,7 @@ static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
struct rxrpc_txbuf *txb;
bool rot_last = false;

- list_for_each_entry_rcu(txb, &call->tx_buffer, call_link, false) {
+ list_for_each_entry(txb, &call->tx_buffer, call_link) {
if (before_eq(txb->seq, call->acks_hard_ack))
continue;
summary->nr_rot_new_acks++;
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index 9e9dfb2fc559..b9e6f1e3c6fc 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -259,10 +259,8 @@ static bool rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb)
}

if (rxrpc_to_client(sp)) {
- rcu_read_lock();
- conn = rxrpc_find_client_connection_rcu(local, &peer_srx, skb);
+ conn = rxrpc_find_client_connection(local, &peer_srx, skb);
conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_call_input);
- rcu_read_unlock();
if (!conn)
return rxrpc_protocol_error(skb, rxrpc_eproto_no_client_conn);

@@ -275,25 +273,25 @@ static bool rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb)
* parameter set. We look up the peer first as an intermediate step
* and then the connection from the peer's tree.
*/
- rcu_read_lock();
+ spin_lock(&local->rxnet->peer_hash_lock);

- peer = rxrpc_lookup_peer_rcu(local, &peer_srx);
+ peer = rxrpc_find_peer(local, &peer_srx);
if (!peer) {
- rcu_read_unlock();
+ spin_unlock(&local->rxnet->peer_hash_lock);
return rxrpc_new_incoming_call(local, NULL, NULL, &peer_srx, skb);
}

- conn = rxrpc_find_service_conn_rcu(peer, skb);
+ conn = rxrpc_find_service_conn(peer, skb);
conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_call_input);
if (conn) {
- rcu_read_unlock();
+ spin_unlock(&local->rxnet->peer_hash_lock);
ret = rxrpc_input_packet_on_conn(conn, &peer_srx, skb);
rxrpc_put_connection(conn, rxrpc_conn_put_call_input);
return ret;
}

peer = rxrpc_get_peer_maybe(peer, rxrpc_peer_get_input);
- rcu_read_unlock();
+ spin_unlock(&local->rxnet->peer_hash_lock);

ret = rxrpc_new_incoming_call(local, peer, NULL, &peer_srx, skb);
rxrpc_put_peer(peer, rxrpc_peer_put_input);
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index b8eaca5d9f22..3d2707d7f478 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -285,10 +285,10 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
goto sock_error;

if (cursor) {
- hlist_replace_rcu(cursor, &local->link);
- cursor->pprev = NULL;
+ hlist_add_before(&local->link, cursor);
+ hlist_del_init(cursor);
} else {
- hlist_add_head_rcu(&local->link, &rxnet->local_endpoints);
+ hlist_add_head(&local->link, &rxnet->local_endpoints);
}

found:
@@ -417,7 +417,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local)
local->dead = true;

mutex_lock(&rxnet->local_mutex);
- hlist_del_init_rcu(&local->link);
+ hlist_del_init(&local->link);
mutex_unlock(&rxnet->local_mutex);

rxrpc_clean_up_local_conns(local);
diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
index a0319c040c25..1f36d27cf257 100644
--- a/net/rxrpc/net_ns.c
+++ b/net/rxrpc/net_ns.c
@@ -73,17 +73,6 @@ static __net_init int rxrpc_init_net(struct net *net)
if (!rxnet->proc_net)
goto err_proc;

- proc_create_net("calls", 0444, rxnet->proc_net, &rxrpc_call_seq_ops,
- sizeof(struct seq_net_private));
- proc_create_net("conns", 0444, rxnet->proc_net,
- &rxrpc_connection_seq_ops,
- sizeof(struct seq_net_private));
- proc_create_net("peers", 0444, rxnet->proc_net,
- &rxrpc_peer_seq_ops,
- sizeof(struct seq_net_private));
- proc_create_net("locals", 0444, rxnet->proc_net,
- &rxrpc_local_seq_ops,
- sizeof(struct seq_net_private));
proc_create_net_single_write("stats", S_IFREG | 0644, rxnet->proc_net,
rxrpc_stats_show, rxrpc_stats_clear, NULL);
return 0;
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index 552ba84a255c..a44289cf54f6 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -25,11 +25,12 @@ static void rxrpc_distribute_error(struct rxrpc_peer *, struct sk_buff *,
/*
* Find the peer associated with a local error.
*/
-static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local,
- const struct sk_buff *skb,
- struct sockaddr_rxrpc *srx)
+static struct rxrpc_peer *rxrpc_lookup_peer_local(struct rxrpc_local *local,
+ const struct sk_buff *skb,
+ struct sockaddr_rxrpc *srx)
{
struct sock_exterr_skb *serr = SKB_EXT_ERR(skb);
+ struct rxrpc_peer *peer;

_enter("");

@@ -94,7 +95,11 @@ static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local,
BUG();
}

- return rxrpc_lookup_peer_rcu(local, srx);
+ spin_lock(&local->rxnet->peer_hash_lock);
+ peer = rxrpc_find_peer(local, srx);
+ peer = rxrpc_get_peer_maybe(peer, rxrpc_peer_get_input_error);
+ spin_unlock(&local->rxnet->peer_hash_lock);
+ return peer;
}

/*
@@ -144,11 +149,7 @@ void rxrpc_input_error(struct rxrpc_local *local, struct sk_buff *skb)
return;
}

- rcu_read_lock();
- peer = rxrpc_lookup_peer_local_rcu(local, skb, &srx);
- if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_input_error))
- peer = NULL;
- rcu_read_unlock();
+ peer = rxrpc_lookup_peer_local(local, skb, &srx);
if (!peer)
return;

diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
index 8d7a715a0bb1..8d2fd34411ee 100644
--- a/net/rxrpc/peer_object.c
+++ b/net/rxrpc/peer_object.c
@@ -111,15 +111,14 @@ static long rxrpc_peer_cmp_key(const struct rxrpc_peer *peer,
/*
* Look up a remote transport endpoint for the specified address using RCU.
*/
-static struct rxrpc_peer *__rxrpc_lookup_peer_rcu(
- struct rxrpc_local *local,
- const struct sockaddr_rxrpc *srx,
- unsigned long hash_key)
+static struct rxrpc_peer *__rxrpc_find_peer(struct rxrpc_local *local,
+ const struct sockaddr_rxrpc *srx,
+ unsigned long hash_key)
{
struct rxrpc_peer *peer;
struct rxrpc_net *rxnet = local->rxnet;

- hash_for_each_possible_rcu(rxnet->peer_hash, peer, hash_link, hash_key) {
+ hash_for_each_possible(rxnet->peer_hash, peer, hash_link, hash_key) {
if (rxrpc_peer_cmp_key(peer, local, srx, hash_key) == 0 &&
refcount_read(&peer->ref) > 0)
return peer;
@@ -129,15 +128,15 @@ static struct rxrpc_peer *__rxrpc_lookup_peer_rcu(
}

/*
- * Look up a remote transport endpoint for the specified address using RCU.
+ * Look up a remote transport endpoint for the specified address.
*/
-struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *local,
- const struct sockaddr_rxrpc *srx)
+struct rxrpc_peer *rxrpc_find_peer(struct rxrpc_local *local,
+ const struct sockaddr_rxrpc *srx)
{
struct rxrpc_peer *peer;
unsigned long hash_key = rxrpc_peer_hash_key(local, srx);

- peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
+ peer = __rxrpc_find_peer(local, srx, hash_key);
if (peer)
_leave(" = %p {u=%d}", peer, refcount_read(&peer->ref));
return peer;
@@ -295,7 +294,7 @@ static void rxrpc_free_peer(struct rxrpc_peer *peer)
{
trace_rxrpc_peer(peer->debug_id, 0, rxrpc_peer_free);
rxrpc_put_local(peer->local, rxrpc_local_put_peer);
- kfree_rcu(peer, rcu);
+ kfree(peer);
}

/*
@@ -312,7 +311,7 @@ void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer)
rxrpc_init_peer(local, peer, hash_key);

spin_lock(&rxnet->peer_hash_lock);
- hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key);
+ hash_add(rxnet->peer_hash, &peer->hash_link, hash_key);
list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
spin_unlock(&rxnet->peer_hash_lock);
}
@@ -330,11 +329,11 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
_enter("{%pISp}", &srx->transport);

/* search the peer list first */
- rcu_read_lock();
- peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
+ spin_lock(&rxnet->peer_hash_lock);
+ peer = __rxrpc_find_peer(local, srx, hash_key);
if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client))
peer = NULL;
- rcu_read_unlock();
+ spin_unlock(&rxnet->peer_hash_lock);

if (!peer) {
/* The peer is not yet present in hash - create a candidate
@@ -349,12 +348,12 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
spin_lock(&rxnet->peer_hash_lock);

/* Need to check that we aren't racing with someone else */
- peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
+ peer = __rxrpc_find_peer(local, srx, hash_key);
if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client))
peer = NULL;
if (!peer) {
- hash_add_rcu(rxnet->peer_hash,
- &candidate->hash_link, hash_key);
+ hash_add(rxnet->peer_hash,
+ &candidate->hash_link, hash_key);
list_add_tail(&candidate->keepalive_link,
&rxnet->peer_keepalive_new);
}
@@ -410,7 +409,7 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
ASSERT(hlist_empty(&peer->error_targets));

spin_lock(&rxnet->peer_hash_lock);
- hash_del_rcu(&peer->hash_link);
+ hash_del(&peer->hash_link);
list_del_init(&peer->keepalive_link);
spin_unlock(&rxnet->peer_hash_lock);

diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
index 750158a085cd..84fa70fe2d74 100644
--- a/net/rxrpc/proc.c
+++ b/net/rxrpc/proc.c
@@ -10,392 +10,6 @@
#include <net/af_rxrpc.h>
#include "ar-internal.h"

-static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = {
- [RXRPC_CONN_UNUSED] = "Unused ",
- [RXRPC_CONN_CLIENT_UNSECURED] = "ClUnsec ",
- [RXRPC_CONN_CLIENT] = "Client ",
- [RXRPC_CONN_SERVICE_PREALLOC] = "SvPrealc",
- [RXRPC_CONN_SERVICE_UNSECURED] = "SvUnsec ",
- [RXRPC_CONN_SERVICE_CHALLENGING] = "SvChall ",
- [RXRPC_CONN_SERVICE] = "SvSecure",
- [RXRPC_CONN_ABORTED] = "Aborted ",
-};
-
-/*
- * generate a list of extant and dead calls in /proc/net/rxrpc_calls
- */
-static void *rxrpc_call_seq_start(struct seq_file *seq, loff_t *_pos)
- __acquires(rcu)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
-
- rcu_read_lock();
- return seq_list_start_head_rcu(&rxnet->calls, *_pos);
-}
-
-static void *rxrpc_call_seq_next(struct seq_file *seq, void *v, loff_t *pos)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
-
- return seq_list_next_rcu(v, &rxnet->calls, pos);
-}
-
-static void rxrpc_call_seq_stop(struct seq_file *seq, void *v)
- __releases(rcu)
-{
- rcu_read_unlock();
-}
-
-static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
-{
- struct rxrpc_local *local;
- struct rxrpc_call *call;
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
- enum rxrpc_call_state state;
- unsigned long timeout = 0;
- rxrpc_seq_t acks_hard_ack;
- char lbuff[50], rbuff[50];
- u64 wtmp;
-
- if (v == &rxnet->calls) {
- seq_puts(seq,
- "Proto Local "
- " Remote "
- " SvID ConnID CallID End Use State Abort "
- " DebugId TxSeq TW RxSeq RW RxSerial CW RxTimo\n");
- return 0;
- }
-
- call = list_entry(v, struct rxrpc_call, link);
-
- local = call->local;
- if (local)
- sprintf(lbuff, "%pISpc", &local->srx.transport);
- else
- strcpy(lbuff, "no_local");
-
- sprintf(rbuff, "%pISpc", &call->dest_srx.transport);
-
- state = rxrpc_call_state(call);
- if (state != RXRPC_CALL_SERVER_PREALLOC) {
- timeout = READ_ONCE(call->expect_rx_by);
- timeout -= jiffies;
- }
-
- acks_hard_ack = READ_ONCE(call->acks_hard_ack);
- wtmp = atomic64_read_acquire(&call->ackr_window);
- seq_printf(seq,
- "UDP %-47.47s %-47.47s %4x %08x %08x %s %3u"
- " %-8.8s %08x %08x %08x %02x %08x %02x %08x %02x %06lx\n",
- lbuff,
- rbuff,
- call->dest_srx.srx_service,
- call->cid,
- call->call_id,
- rxrpc_is_service_call(call) ? "Svc" : "Clt",
- refcount_read(&call->ref),
- rxrpc_call_states[state],
- call->abort_code,
- call->debug_id,
- acks_hard_ack, READ_ONCE(call->tx_top) - acks_hard_ack,
- lower_32_bits(wtmp), upper_32_bits(wtmp) - lower_32_bits(wtmp),
- call->rx_serial,
- call->cong_cwnd,
- timeout);
-
- return 0;
-}
-
-const struct seq_operations rxrpc_call_seq_ops = {
- .start = rxrpc_call_seq_start,
- .next = rxrpc_call_seq_next,
- .stop = rxrpc_call_seq_stop,
- .show = rxrpc_call_seq_show,
-};
-
-/*
- * generate a list of extant virtual connections in /proc/net/rxrpc_conns
- */
-static void *rxrpc_connection_seq_start(struct seq_file *seq, loff_t *_pos)
- __acquires(rxnet->conn_lock)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
-
- read_lock(&rxnet->conn_lock);
- return seq_list_start_head(&rxnet->conn_proc_list, *_pos);
-}
-
-static void *rxrpc_connection_seq_next(struct seq_file *seq, void *v,
- loff_t *pos)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
-
- return seq_list_next(v, &rxnet->conn_proc_list, pos);
-}
-
-static void rxrpc_connection_seq_stop(struct seq_file *seq, void *v)
- __releases(rxnet->conn_lock)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
-
- read_unlock(&rxnet->conn_lock);
-}
-
-static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
-{
- struct rxrpc_connection *conn;
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
- const char *state;
- char lbuff[50], rbuff[50];
-
- if (v == &rxnet->conn_proc_list) {
- seq_puts(seq,
- "Proto Local "
- " Remote "
- " SvID ConnID End Ref Act State Key "
- " Serial ISerial CallId0 CallId1 CallId2 CallId3\n"
- );
- return 0;
- }
-
- conn = list_entry(v, struct rxrpc_connection, proc_link);
- if (conn->state == RXRPC_CONN_SERVICE_PREALLOC) {
- strcpy(lbuff, "no_local");
- strcpy(rbuff, "no_connection");
- goto print;
- }
-
- sprintf(lbuff, "%pISpc", &conn->local->srx.transport);
- sprintf(rbuff, "%pISpc", &conn->peer->srx.transport);
-print:
- state = rxrpc_is_conn_aborted(conn) ?
- rxrpc_call_completions[conn->completion] :
- rxrpc_conn_states[conn->state];
- seq_printf(seq,
- "UDP %-47.47s %-47.47s %4x %08x %s %3u %3d"
- " %s %08x %08x %08x %08x %08x %08x %08x\n",
- lbuff,
- rbuff,
- conn->service_id,
- conn->proto.cid,
- rxrpc_conn_is_service(conn) ? "Svc" : "Clt",
- refcount_read(&conn->ref),
- atomic_read(&conn->active),
- state,
- key_serial(conn->key),
- atomic_read(&conn->serial),
- conn->hi_serial,
- conn->channels[0].call_id,
- conn->channels[1].call_id,
- conn->channels[2].call_id,
- conn->channels[3].call_id);
-
- return 0;
-}
-
-const struct seq_operations rxrpc_connection_seq_ops = {
- .start = rxrpc_connection_seq_start,
- .next = rxrpc_connection_seq_next,
- .stop = rxrpc_connection_seq_stop,
- .show = rxrpc_connection_seq_show,
-};
-
-/*
- * generate a list of extant virtual peers in /proc/net/rxrpc/peers
- */
-static int rxrpc_peer_seq_show(struct seq_file *seq, void *v)
-{
- struct rxrpc_peer *peer;
- time64_t now;
- char lbuff[50], rbuff[50];
-
- if (v == SEQ_START_TOKEN) {
- seq_puts(seq,
- "Proto Local "
- " Remote "
- " Use SST MTU LastUse RTT RTO\n"
- );
- return 0;
- }
-
- peer = list_entry(v, struct rxrpc_peer, hash_link);
-
- sprintf(lbuff, "%pISpc", &peer->local->srx.transport);
-
- sprintf(rbuff, "%pISpc", &peer->srx.transport);
-
- now = ktime_get_seconds();
- seq_printf(seq,
- "UDP %-47.47s %-47.47s %3u"
- " %3u %5u %6llus %8u %8u\n",
- lbuff,
- rbuff,
- refcount_read(&peer->ref),
- peer->cong_ssthresh,
- peer->mtu,
- now - peer->last_tx_at,
- peer->srtt_us >> 3,
- jiffies_to_usecs(peer->rto_j));
-
- return 0;
-}
-
-static void *rxrpc_peer_seq_start(struct seq_file *seq, loff_t *_pos)
- __acquires(rcu)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
- unsigned int bucket, n;
- unsigned int shift = 32 - HASH_BITS(rxnet->peer_hash);
- void *p;
-
- rcu_read_lock();
-
- if (*_pos >= UINT_MAX)
- return NULL;
-
- n = *_pos & ((1U << shift) - 1);
- bucket = *_pos >> shift;
- for (;;) {
- if (bucket >= HASH_SIZE(rxnet->peer_hash)) {
- *_pos = UINT_MAX;
- return NULL;
- }
- if (n == 0) {
- if (bucket == 0)
- return SEQ_START_TOKEN;
- *_pos += 1;
- n++;
- }
-
- p = seq_hlist_start_rcu(&rxnet->peer_hash[bucket], n - 1);
- if (p)
- return p;
- bucket++;
- n = 1;
- *_pos = (bucket << shift) | n;
- }
-}
-
-static void *rxrpc_peer_seq_next(struct seq_file *seq, void *v, loff_t *_pos)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
- unsigned int bucket, n;
- unsigned int shift = 32 - HASH_BITS(rxnet->peer_hash);
- void *p;
-
- if (*_pos >= UINT_MAX)
- return NULL;
-
- bucket = *_pos >> shift;
-
- p = seq_hlist_next_rcu(v, &rxnet->peer_hash[bucket], _pos);
- if (p)
- return p;
-
- for (;;) {
- bucket++;
- n = 1;
- *_pos = (bucket << shift) | n;
-
- if (bucket >= HASH_SIZE(rxnet->peer_hash)) {
- *_pos = UINT_MAX;
- return NULL;
- }
- if (n == 0) {
- *_pos += 1;
- n++;
- }
-
- p = seq_hlist_start_rcu(&rxnet->peer_hash[bucket], n - 1);
- if (p)
- return p;
- }
-}
-
-static void rxrpc_peer_seq_stop(struct seq_file *seq, void *v)
- __releases(rcu)
-{
- rcu_read_unlock();
-}
-
-
-const struct seq_operations rxrpc_peer_seq_ops = {
- .start = rxrpc_peer_seq_start,
- .next = rxrpc_peer_seq_next,
- .stop = rxrpc_peer_seq_stop,
- .show = rxrpc_peer_seq_show,
-};
-
-/*
- * Generate a list of extant virtual local endpoints in /proc/net/rxrpc/locals
- */
-static int rxrpc_local_seq_show(struct seq_file *seq, void *v)
-{
- struct rxrpc_local *local;
- char lbuff[50];
-
- if (v == SEQ_START_TOKEN) {
- seq_puts(seq,
- "Proto Local "
- " Use Act RxQ\n");
- return 0;
- }
-
- local = hlist_entry(v, struct rxrpc_local, link);
-
- sprintf(lbuff, "%pISpc", &local->srx.transport);
-
- seq_printf(seq,
- "UDP %-47.47s %3u %3u %3u\n",
- lbuff,
- refcount_read(&local->ref),
- atomic_read(&local->active_users),
- local->rx_queue.qlen);
-
- return 0;
-}
-
-static void *rxrpc_local_seq_start(struct seq_file *seq, loff_t *_pos)
- __acquires(rcu)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
- unsigned int n;
-
- rcu_read_lock();
-
- if (*_pos >= UINT_MAX)
- return NULL;
-
- n = *_pos;
- if (n == 0)
- return SEQ_START_TOKEN;
-
- return seq_hlist_start_rcu(&rxnet->local_endpoints, n - 1);
-}
-
-static void *rxrpc_local_seq_next(struct seq_file *seq, void *v, loff_t *_pos)
-{
- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
-
- if (*_pos >= UINT_MAX)
- return NULL;
-
- return seq_hlist_next_rcu(v, &rxnet->local_endpoints, _pos);
-}
-
-static void rxrpc_local_seq_stop(struct seq_file *seq, void *v)
- __releases(rcu)
-{
- rcu_read_unlock();
-}
-
-const struct seq_operations rxrpc_local_seq_ops = {
- .start = rxrpc_local_seq_start,
- .next = rxrpc_local_seq_next,
- .stop = rxrpc_local_seq_stop,
- .show = rxrpc_local_seq_show,
-};
-
/*
* Display stats in /proc/net/rxrpc/stats
*/
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index dd54ceee7bcc..c589d691552e 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -30,15 +30,13 @@ void rxrpc_notify_socket(struct rxrpc_call *call)
if (!list_empty(&call->recvmsg_link))
return;

- rcu_read_lock();
+ spin_lock(&call->notify_lock);

- rx = rcu_dereference(call->socket);
+ rx = call->socket;
sk = &rx->sk;
if (rx && sk->sk_state < RXRPC_CLOSE) {
if (call->notify_rx) {
- spin_lock(&call->notify_lock);
call->notify_rx(sk, call, call->user_call_ID);
- spin_unlock(&call->notify_lock);
} else {
write_lock(&rx->recvmsg_lock);
if (list_empty(&call->recvmsg_link)) {
@@ -54,7 +52,7 @@ void rxrpc_notify_socket(struct rxrpc_call *call)
}
}

- rcu_read_unlock();
+ spin_unlock(&call->notify_lock);
_leave("");
}

diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c
index 16dcabb71ebe..962bdc608a0b 100644
--- a/net/rxrpc/rxperf.c
+++ b/net/rxrpc/rxperf.c
@@ -606,7 +606,6 @@ static int __init rxperf_init(void)
key_put(rxperf_sec_keyring);
error_keyring:
destroy_workqueue(rxperf_workqueue);
- rcu_barrier();
error_workqueue:
pr_err("Failed to register: %d\n", ret);
return ret;
@@ -620,7 +619,6 @@ static void __exit rxperf_exit(void)
rxperf_close_socket();
key_put(rxperf_sec_keyring);
destroy_workqueue(rxperf_workqueue);
- rcu_barrier();
}
module_exit(rxperf_exit);

diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c
index cd66634dffe6..cb8dd1d3b1d4 100644
--- a/net/rxrpc/security.c
+++ b/net/rxrpc/security.c
@@ -178,9 +178,9 @@ struct key *rxrpc_look_up_server_security(struct rxrpc_connection *conn,
sprintf(kdesc, "%u:%u",
sp->hdr.serviceId, sp->hdr.securityIndex);

- rcu_read_lock();
+ read_lock(&conn->local->services_lock);

- rx = rcu_dereference(conn->local->service);
+ rx = conn->local->service;
if (!rx)
goto out;

@@ -202,6 +202,6 @@ struct key *rxrpc_look_up_server_security(struct rxrpc_connection *conn,
}

out:
- rcu_read_unlock();
+ read_unlock(&conn->local->services_lock);
return key;
}
diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c
index d2cf2aac3adb..7df082f3e9be 100644
--- a/net/rxrpc/txbuf.c
+++ b/net/rxrpc/txbuf.c
@@ -71,10 +71,8 @@ void rxrpc_see_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what)
trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, r, what);
}

-static void rxrpc_free_txbuf(struct rcu_head *rcu)
+static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb)
{
- struct rxrpc_txbuf *txb = container_of(rcu, struct rxrpc_txbuf, rcu);
-
trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 0,
rxrpc_txbuf_free);
kfree(txb);
@@ -95,7 +93,7 @@ void rxrpc_put_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what)
dead = __refcount_dec_and_test(&txb->ref, &r);
trace_rxrpc_txbuf(debug_id, call_debug_id, seq, r - 1, what);
if (dead)
- call_rcu(&txb->rcu, rxrpc_free_txbuf);
+ rxrpc_free_txbuf(txb);
}
}

@@ -105,38 +103,31 @@ void rxrpc_put_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what)
void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *call)
{
struct rxrpc_txbuf *txb;
- rxrpc_seq_t hard_ack = smp_load_acquire(&call->acks_hard_ack);
bool wake = false;

_enter("%x/%x/%x", call->tx_bottom, call->acks_hard_ack, call->tx_top);

for (;;) {
- spin_lock(&call->tx_lock);
txb = list_first_entry_or_null(&call->tx_buffer,
struct rxrpc_txbuf, call_link);
if (!txb)
break;
- hard_ack = smp_load_acquire(&call->acks_hard_ack);
- if (before(hard_ack, txb->seq))
+ if (before(call->acks_hard_ack, txb->seq))
break;

if (txb->seq != call->tx_bottom + 1)
rxrpc_see_txbuf(txb, rxrpc_txbuf_see_out_of_step);
ASSERTCMP(txb->seq, ==, call->tx_bottom + 1);
smp_store_release(&call->tx_bottom, call->tx_bottom + 1);
- list_del_rcu(&txb->call_link);
+ list_del(&txb->call_link);

trace_rxrpc_txqueue(call, rxrpc_txqueue_dequeue);

- spin_unlock(&call->tx_lock);
-
rxrpc_put_txbuf(txb, rxrpc_txbuf_put_rotated);
if (after(call->acks_hard_ack, call->tx_bottom + 128))
wake = true;
}

- spin_unlock(&call->tx_lock);
-
if (wake)
wake_up(&call->waitq);
}
