I assume your heavy processing is CPU bound (the opposite, an IO-bound
workload, would spend its time waiting, and would thus be available to
process more tasks side by side).
- Every request will have to wait for the others (which makes sense
after all). If you want truly concurrent execution, you will have to
throw more CPU cores at it: that is, run more instances of your
server.
- On the server side in Python, watch out for blocking the gevent
loop for too long. If you do heavy processing that never cooperates
with the gevent loop, zerorpc won't get a chance to run, which means
lost heartbeats.
Depending on your project, you could cooperatively yield via
gevent.sleep(0). But most often this is not practical and breaks
your code's encapsulation. You could disable zerorpc heartbeats, but
that would be a shame. I will propose a more scalable solution below.
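For illustration, a minimal sketch of that cooperative yield (the
chunk size and the work itself are made up):

```python
import gevent

def heavy_computation(items):
    """CPU-bound loop that periodically yields to the gevent hub,
    so zerorpc can keep servicing heartbeats between chunks."""
    results = []
    for i, item in enumerate(items):
        results.append(item * item)  # stand-in for the real work
        if i % 1000 == 999:  # arbitrary chunk size
            gevent.sleep(0)  # cooperatively yield to the event loop
    return results
```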
problem: since you want to be able to serve tons of requests
simultaneously, besides using faster algorithms and caching as best
you can, you need to run many instances of your server.
possible solution:
On each of your server machines you have:
- A manager service:
- with a zerorpc server exposing your service on the network
- and a zerorpc client that binds to a local unix socket with
heartbeat disabled.
This proxy simply forwards any request to the zerorpc client, thus
effectively spending all its time waiting for I/O. So heartbeats work
wonders. You can cap the number of requests/s as you wish, do any
assertions and I/O-bound operations there directly, etc.
You can spawn and babysit all your CPU-bound processes directly
(think gevent.subprocess). Spawn as many workers as you want, and
restart them if they crash (hey, don't forget to log what happened too
;)). Congrats, now you have a multi-process Python service!
- The heavy task process:
- with a zerorpc server connected to the local unix socket of the
manager, heartbeat disabled as well.
It receives requests from the local manager, executes them, and
returns the result. It's your current server code, except that
instead of binding to a tcp port, it connects to a unix socket.
- On the client (or even an intermediate broker):
- you manage one zerorpc client per manager service. A simple
round-robin should do the trick. Using an intermediate broker or some
naming service can help manage the list of managers to connect to.
- You have to implement the round-robin/balancing between all the
managers yourself because of a limitation of zmq<=4.1. (Maybe one day
zerorpc will be updated to the new features of zmq>=4.1.)
There you have it: some opinionated ideas to scale your app.
Depending on whether you need a high-level RPC or not, using one of
the zmq patterns directly might be easier/more scalable:
http://zguide.zeromq.org/page:all#toc86
Best,
fx