Correct GRPC Server configuration for low latency and high throughput

Avinash Dongre

Sep 12, 2016, 9:59:58 AM
to grpc.io
Hello All,

I am trying to see if I can use this framework in my project. I need to know what my gRPC server configuration should be to achieve low latency and high throughput.

This is how I am starting a gRPC server, which is embedded in another Java process.

I am not doing anything on the server side, and when I call this method 1,000,000 times I get:

Took 167 Seconds 167085714844 NanoSeconds

This is certainly high, but I am sure I am doing something wrong in configuring the gRPC server. Please help.

private void startGRPCService(GemFireCacheImpl cache) {
  int port = system.getConfig().getRpcPort();

  if (this.isServerNode() && port != 0) {
    try {
      gRPCServer = NettyServerBuilder.forPort(port)
          .addService(new MTableServiceImpl())
          .channelType(NioServerSocketChannel.class)
          .build()
          .start();
    } catch (IOException e) {
      e.printStackTrace();
    }
    logger.info("GRPC Server started, listening on " + port);

    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        // Use stderr here since the logger may have been reset by its JVM shutdown hook.
        System.err.println("*** shutting down gRPC server since JVM is shutting down");
        // Note: inside the anonymous Thread, `this` refers to the Thread itself,
        // so the enclosing class's stop method must be called unqualified.
        stopGRPCServer();
        System.err.println("*** server shut down");
      }
    });
  }
}

The service is implemented as follows. It does nothing but return a status.

public class MyServiceImpl extends MyServiceGrpc.MyServiceImplBase {
  private static final Logger logger = LogService.getLogger();

  @Override
  public void tablePut(PutMessage request, StreamObserver<PutMessageReply> responseObserver) {
    PutMessageReply reply = PutMessageReply.newBuilder().setStatus(PutMessageReply.OpStatus.SUCCESS).build();
    responseObserver.onNext(reply);
    responseObserver.onCompleted();
  }
}


The client is implemented as follows:

public class TableServiceClient {
  private static final Logger logger = LogService.getLogger();
  public static TableServiceClient INSTANCE = new TableServiceClient();
  private Map<ServerLocation, TableServiceGrpc.TableServiceBlockingStub> grpcConnectionMap = new HashMap<>();
  private Random r = new Random();
  private List<TableServiceGrpc.TableServiceBlockingStub> valuesList;
  private int numberOfServers = 0;
  private MPutMessage.Builder putMessageBuilder = MPutMessage.newBuilder();

  public TableServiceClient() {
    this.valuesList = new ArrayList<>(grpcConnectionMap.values());
    this.numberOfServers = valuesList.size();
  }

  private TableServiceGrpc.TableServiceBlockingStub getApplicableChannel() {
    return this.valuesList.get(r.nextInt(this.numberOfServers));
  }

  public void put(final String tableName, MPut put, List<Integer> columnPositions, byte[] value) {
    TableServiceGrpc.TableServiceBlockingStub stub = getApplicableChannel();

    this.putMessageBuilder.setKey(ByteString.copyFrom(put.getRowKey()))
        .setValue(ByteString.copyFrom(value))
        .setTableName(tableName);

    for (Integer position : columnPositions) {
      this.putMessageBuilder.addColumnPositions(position);
    }

    MPutMessage putMessage = this.putMessageBuilder.build();
    MPutMessageReply response;

    try {
      response = stub.tablePut(putMessage);
    } catch (StatusRuntimeException e) {
      logger.warn("RPC failed: {}", e.getStatus());
      return;
    }
    this.putMessageBuilder.clear();
    MPutMessageReply.OpStatus status = response.getStatus();
  }
}


Thanks
Avinash

Carl Mastrangelo

Sep 12, 2016, 2:17:06 PM
to grpc.io
You need to provide your own Executor in NettyServerBuilder. The default is a cached thread pool, which spawns threads too quickly under load.

Also, you are using the blocking client API, which won't be as fast as the async API. You should create only one channel and reuse it. The channel needs its own thread pool too.
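For reference, a minimal sketch of the server-side change (assuming grpc-netty is on the classpath; `MTableServiceImpl` is from the code above, and the port and pool size are illustrative):

```java
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ServerConfigSketch {
  public static void main(String[] args) throws Exception {
    // A bounded pool instead of the default cached pool, which can spawn
    // threads aggressively under load.
    ExecutorService appExecutor =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    Server server = NettyServerBuilder.forPort(50051)
        .executor(appExecutor)              // run application callbacks on our pool
        .addService(new MTableServiceImpl()) // service from the original post
        .build()
        .start();
    server.awaitTermination();
  }
}
```

On the client side, `ManagedChannelBuilder` has a matching `.executor(...)` method for giving the single shared channel its own pool.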

Avinash Dongre

Sep 12, 2016, 7:28:06 PM
to grpc.io
Thanks Carl,

Is there an example of this?

Thanks
Avinash

Carl Mastrangelo

Sep 13, 2016, 6:51:40 PM
to grpc.io
We have scattered examples, but right now the examples have somewhat diverged from current best practices. The blocking API is mostly for convenience; you will get finer-grained performance from the async APIs.
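For what it's worth, the async version of the call from the original post might look roughly like this (a sketch only; it assumes the generated `TableServiceGrpc` stubs and the `MPutMessage`/`MPutMessageReply` message types from the earlier code):

```java
import io.grpc.ManagedChannel;
import io.grpc.Status;
import io.grpc.stub.StreamObserver;

// Sketch: issue the put without blocking the calling thread.
void asyncPut(ManagedChannel channel, MPutMessage putMessage) {
  // newStub(...) returns the async stub, vs. newBlockingStub(...).
  TableServiceGrpc.TableServiceStub asyncStub = TableServiceGrpc.newStub(channel);

  asyncStub.tablePut(putMessage, new StreamObserver<MPutMessageReply>() {
    @Override public void onNext(MPutMessageReply reply) {
      // The status arrives here on a channel thread; nothing was blocked.
    }
    @Override public void onError(Throwable t) {
      System.err.println("RPC failed: " + Status.fromThrowable(t));
    }
    @Override public void onCompleted() { }
  });
}
```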

Avinash Dongre

Sep 13, 2016, 9:34:43 PM
to grpc.io
Thanks Carl,
I tried the async API, but it seems some of my requests from the client side are going missing.
I will see what is wrong with my code. Thanks for your help.

Thanks
Avinash

Eric Anderson

Sep 17, 2016, 12:00:23 AM
to Carl Mastrangelo, grpc.io
And to be clear: the blocking API is just as fast as the async API, but your application can't scale as much with the blocking API because it must have a thread per RPC.

Your test is a ping-pong latency test, since it waits for one RPC to complete before starting the next. In no way does it determine throughput. As a latency test, it showed that each RPC took 167µs, which isn't bad and about what I would hope for. What latency goal did you have?

If you want greater throughput, do more RPCs in parallel. That is a case where the async API is useful, since you don't need to keep a thread per RPC.
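A sketch of that using the future stub (again assuming the generated `TableServiceGrpc` classes; Guava, which provides `ListenableFuture` and `MoreExecutors`, ships as a grpc-java dependency):

```java
import java.util.concurrent.CountDownLatch;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;

// Sketch: run the 1,000,000 puts concurrently instead of one at a time.
TableServiceGrpc.TableServiceFutureStub futureStub = TableServiceGrpc.newFutureStub(channel);
final int total = 1_000_000;
final CountDownLatch done = new CountDownLatch(total);

long start = System.nanoTime();
for (int i = 0; i < total; i++) {
  ListenableFuture<MPutMessageReply> f = futureStub.tablePut(putMessage);
  // Count each completion; directExecutor runs the callback inline.
  f.addListener(done::countDown, MoreExecutors.directExecutor());
}
done.await();  // total wall time now reflects throughput, not per-RPC latency
System.out.println("Took " + (System.nanoTime() - start) + " ns");
```

In practice you would bound the number of outstanding RPCs (e.g. with a Semaphore) rather than issuing all of them at once, to avoid unbounded buffering.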
