[gRPC-go] Slow when letting grpc handle requests / connections

ew...@grandslammedia.com

Sep 14, 2016, 1:25:06 PM
to grpc.io
I am trying to send many requests over gRPC, but I am running into very poor performance under simulated load via wrk, with a 100ms RPC request deadline.

- Ubuntu 14.04 x86
- go1.6.2
- 1 connection being handled via gRPC
- The operation being performed on the server end always takes less than 1ms
- gRPC call time is anywhere from 30-500ms
- This test is performed with both client and server on the same machine

I also attempted a loosely managed open/close-per-request connection approach (roughly sketched below), which appeared to give a more stable overall time (3-150ms).
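
For clarity, a rough sketch of the two approaches (illustrative only, not the actual test code; the generated stub names, the package name, and the import paths are guesses):

// Sketch only: "open / close" here means dialing a fresh connection per RPC,
// versus reusing a single shared ClientConn.
package loadtest // placeholder package name

import (
    "time"

    "golang.org/x/net/context" // grpc-go on go1.6 uses x/net/context
    "google.golang.org/grpc"

    trc "example.com/project/trc" // placeholder path for the generated package
)

// Open/close per call: pays TCP + HTTP/2 connection setup on every request.
func callWithFreshConn(addr string, payload *trc.Tr) error {
    conn, err := grpc.Dial(addr, grpc.WithInsecure())
    if err != nil {
        return err
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()
    _, err = trc.NewConsumerClient(conn).ConsumeNow(ctx, payload)
    return err
}

// Shared connection: every RPC multiplexes onto the same HTTP/2 connection.
func callWithSharedConn(client trc.ConsumerClient, payload *trc.Tr) error {
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()
    _, err := client.ConsumeNow(ctx, payload)
    return err
}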

I am aiming for sub 100ms at all times.

What is the best practice for handling this?

Nathaniel Manista

Sep 15, 2016, 12:06:22 AM
to ew...@grandslammedia.com, grpc.io
Are you able to share with us the code you've written that is performing as you have described?
-Nathaniel

ew...@grandslammedia.com

Sep 15, 2016, 10:48:54 AM
to grpc.io, ew...@grandslammedia.com
I've attached a few pastebins of code samples:


They are all accessed via their respective functions; the ones seen in the server (in the return) all finish in under 1ms (measured).
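
To be concrete about what "measured" means on the server side, a rough sketch of that kind of per-handler timing (server, doWork and trc.Reply are stand-ins, not the real code):

// Sketch only: timing just the work inside the unary handler.
func (s *server) ConsumeNow(ctx context.Context, in *trc.Tr) (*trc.Reply, error) {
    start := time.Now()
    out := doWork(in) // the actual operation; consistently under 1ms
    log.Printf("ConsumeNow handler took %v", time.Since(start))
    return out, nil
}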

Qi Zhao

Sep 20, 2016, 7:38:52 PM
to grpc.io, ew...@grandslammedia.com
Your client code seems incomplete (there is no code sending RPCs). Can you share the real code you used to get those latency numbers?

Ewan Walker

Sep 21, 2016, 9:07:30 AM
to Qi Zhao, grpc.io
Sorry, I did not include those calls because they are all made through a standard method:


func submit(rpc *client.Rpc, payload *trc.Tr) error {
    // 100ms deadline on every RPC.
    ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(100*time.Millisecond))
    defer cancel()
    _, err := rpc.Client.ConsumeNow(ctx, payload) // reply is not used here
    if err != nil {
        return err
    }
    return nil
}


There is nothing fancy going on, just a bunch of calls being made similar to the above (one-to-one with a web request). I've noticed that under no load the delay is 1ms or less; it is only as load increases that it gets exponentially worse.

The full request chain would look similar to the following:

web request -> request handler -> some work to generate rpc data -> rpc call (above code) -> return data back to handler
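
Very roughly, in code (the handler and helper names here are placeholders, not the real ones):

// Rough sketch of the chain above; handleWebRequest, buildTr and rpcClient
// are placeholder names. Each in-flight web request maps to one in-flight RPC.
func handleWebRequest(w http.ResponseWriter, r *http.Request) {
    payload := buildTr(r) // "some work to generate rpc data"

    if err := submit(rpcClient, payload); err != nil { // the <rpc call> block
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusOK)
}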

The only time I am measuring is the time spent within the <rpc call> block:

func submit(rpc *client.Rpc, payload *trc.Tr) error {
    ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(100*time.Millisecond))
    defer cancel()
    start := time.Now()
    _, err := rpc.Client.ConsumeNow(ctx, payload)
    elapsed := time.Since(start) // this is the latency being reported
    log.Printf("ConsumeNow took %v", elapsed) // logged here for illustration
    if err != nil {
        return err
    }
    return nil
}

I hope this helps

ew...@grandslammedia.com

Sep 28, 2016, 2:56:48 PM
to grpc.io, ew...@grandslammedia.com
What I have noticed is that the latency tracks directly with the number of concurrent requests in flight over a given connection at any time. For example, if I use 2000 connections' worth of simulated load against a single gRPC connection, I see crazy high latency; however, if I (for testing purposes) spread those requests across a small pool of gRPC connections (e.g. 32, roughly as sketched below) and force the requests to block, the latency is more like 1-2ms (though a pool that small then becomes the bottleneck itself and ends up taking on the previously observed latency).
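
A rough sketch of the kind of pool described here (illustrative only, not the actual test code; the package name and dial options are assumptions):

// Sketch only: a fixed set of ClientConns handed out round-robin, so no single
// HTTP/2 connection carries every concurrent stream.
package loadtest // placeholder package name

import (
    "sync/atomic"

    "google.golang.org/grpc"
)

type connPool struct {
    conns []*grpc.ClientConn
    next  uint64
}

func newConnPool(addr string, size int) (*connPool, error) {
    p := &connPool{conns: make([]*grpc.ClientConn, 0, size)}
    for i := 0; i < size; i++ {
        conn, err := grpc.Dial(addr, grpc.WithInsecure())
        if err != nil {
            return nil, err
        }
        p.conns = append(p.conns, conn)
    }
    return p, nil
}

// get returns the next connection; the atomic counter keeps it goroutine-safe.
func (p *connPool) get() *grpc.ClientConn {
    n := atomic.AddUint64(&p.next, 1)
    return p.conns[n%uint64(len(p.conns))]
}

The "force the requests to block" part could then be a buffered channel used as a semaphore around each call, capping the number of in-flight RPCs per connection.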

Qi Zhao

Sep 28, 2016, 4:04:53 PM
to ew...@grandslammedia.com, grpc.io
My speculation is that the longer latency comes from contention. We are working on performance optimization now and will keep you posted on the improvements.

--
Thanks,
-Qi

Ewan Walker

Sep 28, 2016, 4:23:48 PM
to Qi Zhao, grpc.io
Is there a timeline or general ETA on that? 

Thanks for the reply!