20 msg/sec is slow. Message publishing performance depends on a few factors and tradeoffs, and the Nameko defaults choose the safest (and slowest) combinations.
Probably the biggest factor is latency between your service and the RabbitMQ broker. On my laptop with a local RabbitMQ, this simple service gives me ~200 msg/sec:
from nameko.events import EventDispatcher
from nameko.rpc import rpc


class Service:
    name = "dispatcher"

    dispatch = EventDispatcher()

    @rpc
    def go(self):
        for _ in range(30000):
            self.dispatch("eventtype", "payload")
It drops to less than 10 msg/sec if I use a (free, slow) cloud-hosted RabbitMQ a few hundred milliseconds away.
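Back-of-the-envelope arithmetic explains the drop: with confirms enabled, each publish waits for at least one round-trip to the broker, so throughput is bounded by 1/RTT. A quick sketch (the latency figures here are illustrative assumptions, not measurements):

```python
# Rough throughput ceiling when each publish blocks on one broker round-trip.
# The RTT values below are illustrative assumptions.

def max_msgs_per_sec(rtt_seconds):
    """With one blocking confirm per publish, throughput <= 1 / RTT."""
    return 1.0 / rtt_seconds

local_rtt = 0.005   # ~5 ms to a broker on the same machine (assumed)
cloud_rtt = 0.2     # ~200 ms to a distant cloud-hosted broker (assumed)

print(max_msgs_per_sec(local_rtt))  # ~200 msg/sec
print(max_msgs_per_sec(cloud_rtt))  # ~5 msg/sec
```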
Latency has such a large impact partly because publish confirms are enabled by default: each publish blocks until the broker acknowledges it. You can disable them:
...
dispatch = EventDispatcher(use_confirms=False)
...
This bumps my local delivery to ~300 msg/sec, with my RabbitMQ docker image and VM pegging a CPU each. The Nameko service isn't working particularly hard.
You can also disable persistence of the messages to disk when they reach the broker:
...
dispatch = EventDispatcher(persistence="transient")
...
Transient messages are never written to disk, which may be especially helpful if your payloads are large.
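If your durability requirements allow it, the two options can be combined. This is just a sketch using the same keyword arguments shown above; note that with this combination messages are lost if the broker restarts, and failed publishes go unnoticed:

```python
...
# Fastest, least-safe combination: no publish confirms, no disk persistence.
dispatch = EventDispatcher(use_confirms=False, persistence="transient")
...
```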
If latency is the problem, you can improve performance by dispatching messages from multiple parallel threads. If I initiate 10 concurrent RPC workers, each going around the loop, I get 10x the msg/sec at the broker.
Nameko Python 3.5.3 (default, Jun 30 2017, 18:28:54)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] shell on darwin
>>> for _ in range(10): n.rpc.dispatcher.go.call_async()
...
<nameko.rpc.RpcReply object at 0x10bf1da58>
<nameko.rpc.RpcReply object at 0x10bf1dc88>
<nameko.rpc.RpcReply object at 0x10bf1d470>
<nameko.rpc.RpcReply object at 0x10bfa0400>
<nameko.rpc.RpcReply object at 0x10bf1d3c8>
<nameko.rpc.RpcReply object at 0x10bfa0518>
<nameko.rpc.RpcReply object at 0x10bfa0f98>
...
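The same effect can be demonstrated outside Nameko with plain threads: if each publish blocks on a round-trip, N parallel publishers give roughly N times the throughput. A self-contained sketch, where the publish function and its 10 ms latency are simulated assumptions:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def publish(msg, rtt=0.01):
    """Stand-in for a blocking publish: sleeps for one simulated round-trip."""
    time.sleep(rtt)

def sequential(n):
    """Publish n messages one at a time; returns msg/sec achieved."""
    start = time.perf_counter()
    for i in range(n):
        publish(i)
    return n / (time.perf_counter() - start)

def parallel(n, workers=10):
    """Publish n messages across a thread pool; returns msg/sec achieved."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(publish, range(n)))
    return n / (time.perf_counter() - start)

print(f"sequential: {sequential(50):.0f} msg/sec")  # ~100 msg/sec
print(f"parallel:   {parallel(50):.0f} msg/sec")    # roughly 10x faster
```

The threads spend almost all their time waiting on the (simulated) network, so they parallelise nearly perfectly, which is the same reason the 10 concurrent RPC workers above scale linearly.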
Hope that helps.
Matt.