Slow throughput when sending files as byte arrays via the event bus

xnike x

Jun 5, 2019, 8:26:56 AM
to vert.x
Hi,
maybe someone could point me to a solution for an issue I've been trying to track down for a few days:

I have 4 nodes / Docker containers in a vert.x cluster with the default configuration (multicast). Each node is started by io.vertx.core.Launcher.
I'd like to send files as byte arrays (byte[]) via the event bus from one node to another, so I started testing with a 20MB file.
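
The test is essentially the following; the address, file path, and class names are illustrative placeholders, not my exact code:

import io.vertx.core.AbstractVerticle;

// Sender side (node A): read the file, send its bytes, and time the reply.
class SenderVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.fileSystem().readFile("/tmp/test-20mb.bin", ar -> {   // illustrative path
      if (ar.succeeded()) {
        byte[] payload = ar.result().getBytes();                // ~20MB byte[]
        long sentAt = System.currentTimeMillis();
        vertx.eventBus().send("file.transfer", payload, reply ->
            System.out.println("delivery + reply took " + (System.currentTimeMillis() - sentAt) + " ms"));
      }
    });
  }
}

// Receiver side (node B): consume the bytes and acknowledge.
class ReceiverVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.eventBus().<byte[]>consumer("file.transfer", msg -> {
      System.out.println("received " + msg.body().length + " bytes");
      msg.reply("ok");
    });
  }
}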

I made several attempts and saw that the message became available to the handler on the second node 15-20 seconds after sending.
Over time the delivery time decreased to 9 seconds, which works out to about 2MB per second.

I checked on 2 different hosts with different host OSes (Linux and Windows) and the behavior is the same, with vert.x version 3.5.4 and with the latest 3.7.1.

I also tried setting the send/receive buffers to 1MB via the command-line options -Dvertx.options.eventBusOptions.receiveBufferSize=1048576 and -Dvertx.options.eventBusOptions.sendBufferSize=1048576, but nothing changed.
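
As far as I understand, those flags should be equivalent to configuring the options programmatically; a minimal sketch, assuming the same 1MB values:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.eventbus.EventBusOptions;

public class ClusterMain {
  public static void main(String[] args) {
    // Programmatic equivalent of the -Dvertx.options.eventBusOptions.* flags.
    VertxOptions options = new VertxOptions()
        .setEventBusOptions(new EventBusOptions()
            .setReceiveBufferSize(1048576)   // 1MB TCP receive buffer
            .setSendBufferSize(1048576));    // 1MB TCP send buffer
    Vertx.clusteredVertx(options, ar -> {
      if (ar.succeeded()) {
        // deploy verticles on the clustered instance here
      }
    });
  }
}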

Julien Viet

Jun 6, 2019, 2:48:30 AM
to ve...@googlegroups.com
Hi,

The event bus is not designed for sending such large payloads. That being said, it seems abnormal that it takes 15-20 seconds to send 20MB as a message payload.

Have you tried running it on the same host without Docker to see if that makes a difference?

Julien


xnike x

Jun 6, 2019, 7:24:22 AM
to vert.x
Hi Julien,

I've checked connectivity between the containers, and it looks pretty good to me:

[root@00d6f18e94c8 /]# iperf3 -c worker-service
Connecting to host worker-service, port 5201
[  4] local 172.19.0.5 port 47318 connected to 172.19.0.6 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   623 MBytes  5.23 Gbits/sec    1    827 KBytes
[  4]   1.00-2.00   sec   832 MBytes  6.98 Gbits/sec    0    882 KBytes
[  4]   2.00-3.00   sec   391 MBytes  3.28 Gbits/sec    0    959 KBytes
[  4]   3.00-4.00   sec   405 MBytes  3.40 Gbits/sec    0    959 KBytes
[  4]   4.00-5.00   sec   494 MBytes  4.14 Gbits/sec    0    959 KBytes
[  4]   5.00-6.00   sec   884 MBytes  7.40 Gbits/sec    0   1003 KBytes
[  4]   6.00-7.00   sec   521 MBytes  4.38 Gbits/sec    0   1003 KBytes
[  4]   7.00-8.00   sec   726 MBytes  6.09 Gbits/sec    0   1.01 MBytes
[  4]   8.00-9.00   sec   905 MBytes  7.59 Gbits/sec  105    727 KBytes
[  4]   9.00-10.00  sec   549 MBytes  4.60 Gbits/sec   50    663 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  6.18 GBytes  5.31 Gbits/sec  156             sender
[  4]   0.00-10.00  sec  6.17 GBytes  5.30 Gbits/sec                  receiver

iperf Done.

[root@997657494c6f /]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.19.0.5, port 47316
[  5] local 172.19.0.6 port 5201 connected to 172.19.0.5 port 47318
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   567 MBytes  4.75 Gbits/sec
[  5]   1.00-2.00   sec   867 MBytes  7.27 Gbits/sec
[  5]   2.00-3.02   sec   393 MBytes  3.25 Gbits/sec
[  5]   3.02-4.01   sec   403 MBytes  3.40 Gbits/sec
[  5]   4.01-5.00   sec   460 MBytes  3.88 Gbits/sec
[  5]   5.00-6.00   sec   887 MBytes  7.44 Gbits/sec
[  5]   6.00-7.01   sec   550 MBytes  4.58 Gbits/sec
[  5]   7.01-8.00   sec   696 MBytes  5.88 Gbits/sec
[  5]   8.00-9.00   sec   912 MBytes  7.64 Gbits/sec
[  5]   9.00-10.02  sec   586 MBytes  4.81 Gbits/sec
[  5]  10.02-10.04  sec  2.25 MBytes  1.18 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.04  sec  6.17 GBytes  5.28 Gbits/sec                  receiver
^Ciperf3: interrupt - the server has terminated


I also started 2 processes outside of Docker and got 8-10 seconds. After I increased the memory available to Docker, the time inside Docker dropped to roughly the same 8-11 seconds.

I understand that the event bus is designed for processing small data packets, but the results for this simple case (no other traffic, one not-too-big file, sent infrequently) look strange.
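
For what it's worth, I know the usual workaround here would be to split the payload into smaller chunks. A rough sketch of that; the 256KB chunk size, the addresses, and the end-of-transfer marker are illustrative choices, not a Vert.x API:

import io.vertx.core.buffer.Buffer;
import io.vertx.core.eventbus.EventBus;

class ChunkedTransfer {
  // Sender: split the file Buffer into 256KB pieces and send each as its own message.
  static void sendInChunks(EventBus bus, Buffer file) {
    final int CHUNK = 256 * 1024;
    for (int pos = 0; pos < file.length(); pos += CHUNK) {
      int end = Math.min(pos + CHUNK, file.length());
      bus.send("file.transfer.chunk", file.getBuffer(pos, end));
    }
    bus.send("file.transfer.done", file.length());  // signal end of transfer
  }

  // Receiver: append chunks until the "done" marker arrives.
  static void receiveChunks(EventBus bus) {
    Buffer assembled = Buffer.buffer();
    bus.<Buffer>consumer("file.transfer.chunk", msg -> assembled.appendBuffer(msg.body()));
    bus.<Integer>consumer("file.transfer.done", msg ->
        System.out.println("received " + assembled.length() + " of " + msg.body() + " bytes"));
  }
}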

Vincent Free

Jun 7, 2019, 7:28:50 AM
to vert.x
Is your Docker setup actually using multicast? I had problems running the event bus in Docker and Docker Swarm, and eventually switched to Kafka for the messaging part, since my company runs a Kafka instance with good performance.

I had problems with Docker and multicast, then switched to Hazelcast with a non-official Swarm integration so we could use DNS round-robin (dns-rr) to find the available nodes, roughly the setup sketched below. Even then the performance was still not that good, with very unstable connections, but that could also have been the company network.
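
For anyone hitting the same multicast issue in Docker, disabling multicast and listing members explicitly through the Hazelcast cluster manager looks roughly like this; the member hostnames are placeholders, not our actual setup:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class NoMulticastMain {
  public static void main(String[] args) {
    // Turn off multicast discovery and use a static TCP-IP member list,
    // which tends to be more reliable inside Docker/Swarm networks.
    Config hzConfig = new Config();
    JoinConfig join = hzConfig.getNetworkConfig().getJoin();
    join.getMulticastConfig().setEnabled(false);
    join.getTcpIpConfig().setEnabled(true)
        .addMember("node-1")    // placeholder hostnames
        .addMember("node-2");
    VertxOptions options = new VertxOptions()
        .setClusterManager(new HazelcastClusterManager(hzConfig));
    Vertx.clusteredVertx(options, ar -> {
      if (ar.succeeded()) {
        // deploy verticles here
      }
    });
  }
}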

Now, with Kafka, our applications can handle close to 100k rps without a lot of problems.
Our normal load is 8-15k rps.

The message size isn't as large as yours, though; for 100k requests it's about 24MB, so yeah, yours is one big message 😅