same RPC service on multiple servers (python)


ranjith

Aug 13, 2018, 8:55:37 PM
to grpc.io
Hi,

I have a gRPC service running fine on a server. I have a limited number of servers, so what I want to do is run this service on the same server but on different ports (basically faking the number of gRPC servers). The service has a single RPC that streams data every second.

I am running this service on 101 different ports, starting at 50000 and going up to 50100, with a separate client making requests to each server. What I noticed is that these servers do not send data every second.


Example:

Servers are running on localhost:50000, localhost:50001, localhost:50002 .... localhost:50100 


 

from concurrent import futures

import grpc
import plugin_pb2_grpc


class OpenConfigServicer(plugin_pb2_grpc.OpenConfigServicer):

    def dataSubscribe(self, request, context):
        # This RPC yields a data point every second (body elided in this snippet).
        pass


def serve():
    # One grpc.Server and one servicer instance per port.
    servers = []
    for port in range(50000, 50101):
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
        plugin_pb2_grpc.add_OpenConfigServicer_to_server(
            OpenConfigServicer(), server)
        server.add_insecure_port('[::]:' + str(port))
        server.start()
        servers.append(server)


Can someone tell me how we can optimize this?


Thanks,
Ranjith

Nathaniel Manista

Aug 15, 2018, 5:07:37 AM
to ranji...@gmail.com, grpc.io
Probably not that important: why have 101 OpenConfigServicer instances rather than one that is shared among all your grpc.Server instances?

Probably more important: why have 101 grpc.Server instances each serving on one port rather than one serving on 101 ports? Why don't you construct one server outside your loop and only call add_insecure_port on that one grpc.Server instance inside the loop?
-Nathaniel
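
A minimal sketch of the layout described above, assuming the OpenConfigServicer class and the generated plugin_pb2_grpc module from the original post, might look like this:

from concurrent import futures
import time

import grpc
import plugin_pb2_grpc


def serve():
    # One grpc.Server and one shared OpenConfigServicer; the loop only adds ports.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=101))
    plugin_pb2_grpc.add_OpenConfigServicer_to_server(OpenConfigServicer(), server)
    for port in range(50000, 50101):
        server.add_insecure_port('[::]:' + str(port))
    server.start()
    try:
        while True:
            time.sleep(60 * 60 * 24)  # keep the process alive
    except KeyboardInterrupt:
        server.stop(0)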

ranjith

Aug 15, 2018, 5:02:15 PM
to grpc.io
Thanks, Nathaniel, for suggesting the optimizations. I made the changes in my code and noticed that 101 threads are spawned (each thread mimics a server running on its own port). However, the data is still not streaming every second from the server. Here is my entire code:



from concurrent import futures
from multiprocessing import Process, Queue
import time
import math
import grpc
import plugin_pb2
import plugin_pb2_grpc
import data_streamer
import threading
import datetime
import sys
import get_sensor_data


_ONE_DAY_IN_SECONDS = 60 * 60 * 24


class OpenConfigServicer(plugin_pb2_grpc.OpenConfigServicer):

    def dataSubscribe(self, request, context):
        try:
            path = '/data-sensor'
            metadata = None
            data = get_sensor_data(path, metadata)
            data_point = data[0]
            while True:
                print('test:{}, {}'.format(threading.current_thread(), datetime.datetime.now()))
                yield data_point
                # Each stream should send a data point every second
                time.sleep(1)
        except Exception as e:
            import traceback
            print('Exception in streaming data:{}, {}'.format(
                e, traceback.format_exc()))


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=101))
    plugin_pb2_grpc.add_OpenConfigServicer_to_server(OpenConfigServicer(), server)
    for i in range(50051, 50152):
        server.add_insecure_port('[::]:' + str(i))
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    serve()

ranjith

Aug 15, 2018, 7:36:04 PM
to grpc.io
Here are the print statements. If we look at the time difference between subsequent yields for Thread-15, it is more than 2 seconds. Eventually the difference grows beyond 2 minutes.


test:<Thread(Thread-15, started daemon 140166928791296)>, 2018-08-15 23:32:27.009203
test:<Thread(Thread-16, started daemon 140166882019072)>, 2018-08-15 23:32:27.311508

test:<Thread(Thread-15, started daemon 140166928791296)>, 2018-08-15 23:32:29.069680
test:<Thread(Thread-16, started daemon 140166882019072)>, 2018-08-15 23:32:29.449455

Nathaniel Manista

Aug 19, 2018, 5:24:50 AM
to ranjith, grpc.io
On Wed, Aug 15, 2018 at 11:02 PM ranjith <ranji...@gmail.com> wrote:
Thanks, Nathaniel, for suggesting the optimizations. I made the changes in my code and noticed that 101 threads are spawned (each thread mimics a server running on its own port).

I think it's more accurate to say that because your server is servicing 101 concurrent RPCs, 101 threads are used: the current implementation of the server submits the servicing of each RPC as its own function to be executed in the application-provided thread pool.
This can't be your entire code, because something must be connecting to your server and invoking RPCs. :-P What's the load on your host? How certain are you that the threads that are servicing your RPCs have sufficient resources to run in the time in which you wish them to run? If you drop the number of concurrent RPCs from 101 to 50 or 20 or 10, does the problem continue to occur?
-Nathaniel
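
For reference, a hypothetical test driver (not code from this thread; the stub class name OpenConfigStub and the request message SubscribeRequest are assumed from the generated module, so adjust them to whatever the plugin .proto actually defines) that opens a configurable number of concurrent streams might look like this:

from concurrent import futures
import datetime
import threading

import grpc
import plugin_pb2
import plugin_pb2_grpc

NUM_STREAMS = 10  # drop from 101 to 50/20/10 to see if the drift disappears


def consume(port):
    # Open one streaming RPC against the port and log each received message.
    channel = grpc.insecure_channel('localhost:{}'.format(port))
    stub = plugin_pb2_grpc.OpenConfigStub(channel)
    # SubscribeRequest is a placeholder; use whatever request message
    # dataSubscribe actually takes.
    for data_point in stub.dataSubscribe(plugin_pb2.SubscribeRequest()):
        print('{} received on port {} at {}'.format(
            threading.current_thread().name, port, datetime.datetime.now()))


if __name__ == '__main__':
    # Runs until interrupted; each stream occupies one worker thread.
    with futures.ThreadPoolExecutor(max_workers=NUM_STREAMS) as pool:
        pool.map(consume, range(50051, 50051 + NUM_STREAMS))

Dropping NUM_STREAMS from 101 to 10 or 20, as suggested above, makes it easy to check whether the drift scales with the number of concurrent streams.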