Limit max number of connections


Chaitanya Gangwar

Aug 2, 2016, 12:41:13 AM
to grpc.io
Hi,

Is there any way to limit the maximum number of connections in a gRPC server? I want to cap the number of connections at 8; beyond that, the server should reject any new client connections.

Thanks
Chaitanya

Nathaniel Manista

Aug 2, 2016, 11:12:20 AM
to Chaitanya Gangwar, grpc.io
On Mon, Aug 1, 2016 at 9:41 PM, Chaitanya Gangwar <chaitany...@gmail.com> wrote:
Is there any way to limit the maximum number of connections in a gRPC server? I want to cap the number of connections at 8; beyond that, the server should reject any new client connections.

In what programming language are you working with gRPC?
-Nathaniel

Chaitanya Gangwar

Aug 3, 2016, 1:30:18 AM
to grpc.io
I am using the C++ gRPC library.

-Chaitanya

Vijay Pai

Aug 3, 2016, 10:33:29 AM
to grpc.io
There currently isn't an API for this in either gRPC core or the C++ layer. Depending on the platform, I presume one could set a process-wide file-descriptor limit before starting the server to bound the number of open connections, but that would be outside gRPC's current scope.
Regards,
Vijay

Chaitanya Gangwar

Aug 4, 2016, 2:58:08 AM
to grpc.io
Hi Vijay,

With a normal TCP socket, the listen() API takes a backlog argument that can be used to limit the maximum number of connections. I am looking for something similar in the gRPC server API. I thought gRPC also makes the same calls internally, so there should be a way to restrict the maximum number of connections. Please correct me if I am wrong.

int listen(int sockfd, int backlog);

The backlog argument defines the maximum length to which the queue of pending connections for sockfd may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds.

Thanks
Chaitanya

David Klempner

Aug 4, 2016, 3:12:35 AM
to Chaitanya Gangwar, grpc.io
The backlog argument only affects connections that haven't been accepted yet. It does nothing to connections that have already been accepted.


--
You received this message because you are subscribed to the Google Groups "grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email to grpc-io+unsubscribe@googlegroups.com.
To post to this group, send email to grp...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/grpc-io/a704aa13-f55b-44c7-9a99-229cc66ec2b0%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

Chaitanya Gangwar

Aug 4, 2016, 3:21:19 AM
to grpc.io, chaitany...@gmail.com
Hi David,

I want something similar to the backlog argument in the gRPC server API. That is exactly what my requirement is. I am sorry if my previous message created some confusion.
What I want is: if there are already 8 accepted client connections, no new client should be able to connect; once one of the connected clients goes away, then a new client can connect to the server. In other words, at any given time, the server should serve at most 8 clients.

Thanks
Chaitanya

Josh Humphries

Aug 4, 2016, 10:29:36 AM
to Chaitanya Gangwar, grpc.io
On Thu, Aug 4, 2016 at 3:21 AM, Chaitanya Gangwar <chaitany...@gmail.com> wrote:
Hi David,

I want something similar to the backlog argument in the gRPC server API. That is exactly what my requirement is. I am sorry if my previous message created some confusion.
What I want is: if there are already 8 accepted client connections, no new client should be able to connect; once one of the connected clients goes away, then a new client can connect to the server. In other words, at any given time, the server should serve at most 8 clients.

But that is not what the backlog argument gets you. After you've accepted 8 connections, you can still accept more. The backlog argument is for the situation where you are accepting connections more slowly than clients are trying to connect: the OS queue of pending connection requests is then limited to 8. But once you've accepted those 8, the process will still happily accept more.
 

pradee...@cinarra.com

Aug 8, 2016, 3:22:40 AM
to grpc.io, chaitany...@gmail.com
You can put a TCP proxy like HAproxy in front of your gRPC server and limit the number of clients to whatever number you want.
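A sketch of what that might look like in an HAProxy configuration (the addresses, ports, and section names here are hypothetical; `maxconn` on the frontend caps concurrent client connections):

```
frontend grpc_in
    mode tcp
    bind *:50051
    maxconn 8                      # at most 8 concurrent clients
    default_backend grpc_servers

backend grpc_servers
    mode tcp
    server s1 127.0.0.1:50052      # the actual gRPC server
```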

HTH

David Klempner

Aug 8, 2016, 3:55:08 AM
to Josh Humphries, Chaitanya Gangwar, grpc.io

It would be sensible in a request-per-connection model like early HTTP, where limiting the backlog would have the side effect of limiting the real concurrency. That model, of course, is extremely far from how gRPC, or even HTTP/1.1, works.

As I see it, this is almost certainly an XY problem. Why would you want to limit the number of incoming connections to 8? All else equal, more connections are better. If you don't want more connections, you are probably constrained by some resource. The question should be "how do I allocate that resource properly", not "how do I limit incoming connections".

Furthermore, grpc is an RPC library. The fundamental abstraction is an RPC or stream, not a connection or an HTTP/2 request. From my point of view, any time you have to think about connections represents a failure of the RPC library in its job of providing an RPC abstraction.

And yes, every networking abstraction is leaky, but that doesn't mean that punching holes in the bucket should be our first response to completely unexplained feature requests, or that punching such a hole should be your first resort even given the tools to do so.

