I did run some tests with ZMQ in the comm layer. However, I was using the PUB/SUB pattern, which is non-blocking: you can send messages as fast as you want, but if the connection's queue fills, ZMQ starts throwing away messages until a slot opens up in the queue. The results were similar to UDP. I ran several tests at increasing data rates: at some threshold, both UDP and ZMQ started dropping events, and at each event rate the percentage dropped was nearly identical between UDP and ZMQ PUB/SUB.
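To make the drop behavior concrete, here is a small stand-alone simulation of it. This is not pyzmq code; `DroppingSender` is a hypothetical stand-in for a PUB socket whose bounded send queue (ZMQ's "high-water mark") is never drained, so every send past the limit is silently discarded:

```python
import queue

class DroppingSender:
    """Toy model of a non-blocking sender with a bounded queue (assumed
    name; not a real ZMQ API). Sends never block -- once the queue is
    full, messages are dropped, like ZMQ PUB/SUB under load."""

    def __init__(self, hwm):
        self.q = queue.Queue(maxsize=hwm)  # "high-water mark"
        self.sent = 0
        self.dropped = 0

    def send(self, msg):
        try:
            self.q.put_nowait(msg)   # non-blocking enqueue
            self.sent += 1
        except queue.Full:
            self.dropped += 1        # silently discarded, no error to caller

sender = DroppingSender(hwm=100)
for i in range(1000):                # burst in faster than anything drains
    sender.send(i)

print(sender.sent, sender.dropped)   # -> 100 900
```

The key property this illustrates is that the sender gets no failure signal: from its point of view every `send` "succeeded", which is why the loss only shows up when you count events at both ends, as in the tests above.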
Even using a blocking ZMQ pattern or TCP/IP directly could result in data loss. TCP/IP send is reliable and blocking: you cannot send messages faster than the target can receive them. But somewhere up the line, messages are being generated by real-world events (users clicking on links, people swiping cards, hearts beating, etc.), and those real-world events are not going to slow down to accommodate how fast your nodes can receive data. If the nodes cannot keep up, a queue somewhere is going to fill up: the legacy stream's send queue, some adapter's receive or send queue, or some S4 node's receive or send queues.
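A toy model of that argument (all numbers here are assumed, purely for illustration): events arrive twice as fast as a blocking, loss-free link can drain them. The link itself drops nothing, but the bounded queue feeding it overflows once it fills, so the loss simply moves upstream of the reliable transport:

```python
import queue

buf = queue.Queue(maxsize=50)   # bounded queue in front of the "TCP" sender
sent = lost = 0

for tick in range(200):
    # Two real-world events arrive per tick...
    for event in (2 * tick, 2 * tick + 1):
        try:
            buf.put_nowait(event)
        except queue.Full:
            lost += 1            # overflow happens before the reliable link
    # ...but the blocking sender only completes one send per tick.
    if not buf.empty():
        buf.get_nowait()
        sent += 1

print(sent, lost)
```

Once the queue reaches its limit (here after about 50 ticks), the system settles into a steady state where exactly the excess arrival rate is lost each tick, no matter how reliable the downstream transport is.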
>Is there any test
>scenario to find out how many messages can be handled
>on specific hardware while guaranteeing no data loss?
For our S4-based applications, we first ran load tests at the maximum expected rates (usually by generating fake events) to ensure we had enough processing power and memory to accommodate those rates. The hardware, number of nodes, and number of adapters needed depend on the application, since some applications might be more CPU-bound, others more memory-bound, and some might generate many new events on the fly.
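A minimal sketch of that kind of load test, assuming nothing about S4's actual harness (the function name, rates, and fake-event shape are all illustrative): fire synthetic events at a fixed target rate for a fixed duration, then compare the achieved rate against the target to see whether the handler kept up:

```python
import time

def run_load_test(rate_per_sec, duration_sec, handler):
    """Generate fake events at a target rate; return (events sent,
    achieved events/sec). A large gap between target and achieved
    rate means the handler cannot keep up at this rate."""
    interval = 1.0 / rate_per_sec
    end = time.monotonic() + duration_sec
    sent = 0
    next_send = time.monotonic()
    while time.monotonic() < end:
        if time.monotonic() >= next_send:
            handler({"id": sent})        # fake event payload
            sent += 1
            next_send += interval
    achieved = sent / duration_sec
    return sent, achieved

# Example: a trivial handler easily sustains 2000 events/sec.
sent, achieved = run_load_test(rate_per_sec=2000, duration_sec=0.25,
                               handler=lambda event: None)
```

In practice you would replace the lambda with your real per-event processing, ramp `rate_per_sec` upward across runs, and also watch memory and queue depths, since the first bottleneck differs by application as noted above.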