[gRPC-go] Is there any best practice for multiple connections per endpoint?


Zeymo Wang

Jun 16, 2017, 5:30:57 AM
to grpc.io
In high-concurrency or high-performance situations, a single addrConn per endpoint will immediately hit the HTTP/2 max concurrent streams limit (e.g. 1000). Is there any best practice for multiple connections per endpoint? Most balancers, including the official RoundRobin LB, use a single connection per endpoint. Thanks.

Eric Anderson

Jun 20, 2017, 1:48:38 PM
to Zeymo Wang, grpc.io
You can create multiple channels to increase concurrency. Although if you are hitting MAX_CONCURRENT_STREAMS limits it would be important to make sure the additional channels connect to a different server, otherwise what is the point of MAX_CONCURRENT_STREAMS? Also, you should make sure your server is okay with you making additional connections/load.

For built-in support for using multiple connections, you can follow along on the issue for improving L4 proxy support. For supporting multiple connections from client-side, the current idea is to add a value to the service config that tells clients how many additional connections they can make. At least initially, this will probably be a simple multiplier where the client makes, say, 4 connections instead of 1 and round-robins across them (ignoring MAX_CONCURRENT_STREAMS and load).

For the moment though, you'd need to create the additional connections yourself.


--
You received this message because you are subscribed to the Google Groups "grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email to grpc-io+unsubscribe@googlegroups.com.
To post to this group, send email to grp...@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit https://groups.google.com/d/msgid/grpc-io/22b91705-7bd1-491b-a073-d5ace38a9678%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Zeymo Wang

Jun 26, 2017, 6:30:55 AM
to grpc.io, zeymo...@gmail.com

Thanks all the same. I have resolved this issue by configuring connAddr#metadata :)
