Only 50% cpu usage in a ping like messaging scenario


Mihai Stanescu

unread,
Mar 30, 2017, 11:22:47 AM3/30/17
to ve...@googlegroups.com
Hi all, 

Context:

Vert.x 3.4.1, 5000 event-loop verticles doing EventBus.send and then repeating on reply, 200 worker verticles doing Message.reply(null).

500 threads in each of the blocking thread pools, Core i7 CPU.

I can obtain a rate of 400k send requests/sec, however my CPU never goes higher than 50%.

Is this expected behavior? For such a CPU-bound test I would have expected a much higher CPU load.

Regards, Mihai

Julien Viet

unread,
Mar 30, 2017, 11:47:58 AM3/30/17
to ve...@googlegroups.com
Hi,

It is not clear what you are trying to achieve or demonstrate.

What do you mean by 5000 event verticles? Do you have 5000 verticles, or one verticle sending 5000 events?

In such tests involving message passing between threads, it is likely that some of the threads are waiting for messages from other threads.

Julien

--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
Visit this group at https://groups.google.com/group/vertx.
To view this discussion on the web, visit https://groups.google.com/d/msgid/vertx/CALuWX9P2i0CGrP4XKKfyJG%2Bay-Gb0337iZex2h-MvehcmkC_UQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

yahim stnsc

unread,
Mar 31, 2017, 3:10:34 AM3/31/17
to vert.x
I will create a reproducer 

Mihai Stanescu

unread,
Mar 31, 2017, 4:56:53 AM3/31/17
to ve...@googlegroups.com
I filed the issue here with a reproducer and some pictures.


I do not really have an explanation. It seems the 8 event-loop threads are fully busy.

My CPU probably has 4 cores with 8 hardware threads, but in another test where I just compute MD5 in worker verticles the CPU is loaded to 100%.





Mihai Stanescu

unread,
Mar 31, 2017, 8:30:27 AM3/31/17
to ve...@googlegroups.com
I have updated the reproducer. Check the issue for the link.

Vert.x 3.4.1, clustered, default settings.

Run the bad test with:

gradle :test --tests io.vertx.test.performance.EventBusTest.testSendAndReply -i

Here is what the test displays:

ThreadInfo:              vert.x-eventloop-thread-11 getBlockedTime: 3521 msec getWaitedTime:  1 
ThreadInfo:               vert.x-eventloop-thread-7 getBlockedTime: 3552 msec getWaitedTime:  1 
ThreadInfo:              vert.x-eventloop-thread-12 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-4 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-1 getBlockedTime: 3527 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-2 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-8 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-10 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-13 getBlockedTime: 3550 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-15 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-0 getBlockedTime: 3471 msec getWaitedTime:  1 
ThreadInfo:              vert.x-eventloop-thread-14 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-5 getBlockedTime: 3525 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-6 getBlockedTime:    0 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-3 getBlockedTime: 3528 msec getWaitedTime:  2 
ThreadInfo:               vert.x-eventloop-thread-9 getBlockedTime: 3481 msec getWaitedTime:  0 


Run the runOnContext test with:

gradle :test --tests io.vertx.test.performance.EventBusTest.testRunOnContext -i

and there is no contention.

Evidently doing an EventBus.send plus a Message.reply is creating contention.

I suspect it's the reply.



Mihai Stanescu

unread,
Mar 31, 2017, 8:31:08 AM3/31/17
to ve...@googlegroups.com
Ah, the test runs for 10 seconds and then prints stats.
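For reference, ThreadInfo lines like the ones in these stats can be produced with the JDK's ThreadMXBean once contention monitoring is enabled; this is a minimal standalone sketch, not the reproducer's actual code (which is not shown in this thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ContentionStats {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (mx.isThreadContentionMonitoringSupported()) {
            // Without this, getBlockedTime()/getWaitedTime() return -1.
            mx.setThreadContentionMonitoringEnabled(true);
        }
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            if (info == null) continue; // thread may have terminated meanwhile
            System.out.printf("ThreadInfo: %40s getBlockedTime: %4d msec getWaitedTime: %2d%n",
                    info.getThreadName(), info.getBlockedTime(), info.getWaitedTime());
        }
    }
}
```

Run it (or the equivalent snippet on a timer) after the load phase to get a per-thread picture of monitor contention.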

Mihai Stanescu

unread,
Mar 31, 2017, 8:45:48 AM3/31/17
to ve...@googlegroups.com
As I increased the number of verticles tenfold, from 32 to 320, the contention increased too.

ThreadInfo:               vert.x-eventloop-thread-9 getBlockedTime: 5278 msec getWaitedTime:  0 

Mihai Stanescu

unread,
Mar 31, 2017, 8:51:25 AM3/31/17
to ve...@googlegroups.com
I have pushed a new test, testSendToItself, which does not do a reply.

It mostly does this in 32 verticles:

vertx.eventBus().consumer(consumerAddress, event -> {
    vertx.eventBus().send(consumerAddress, payload);
});

And the contention is lower than in the variant with the reply:
    ThreadInfo:               vert.x-eventloop-thread-0 getBlockedTime:  870 msec getWaitedTime: 16 
    ThreadInfo:              vert.x-eventloop-thread-12 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:               vert.x-eventloop-thread-1 getBlockedTime:  900 msec getWaitedTime:  0 
    ThreadInfo:               vert.x-eventloop-thread-4 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:               vert.x-eventloop-thread-6 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:              vert.x-eventloop-thread-14 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:              vert.x-eventloop-thread-15 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:               vert.x-eventloop-thread-9 getBlockedTime:  894 msec getWaitedTime:  2 
    ThreadInfo:              vert.x-eventloop-thread-11 getBlockedTime:  822 msec getWaitedTime:  0 
    ThreadInfo:              vert.x-eventloop-thread-13 getBlockedTime:  886 msec getWaitedTime:  8 
    ThreadInfo:               vert.x-eventloop-thread-8 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:               vert.x-eventloop-thread-5 getBlockedTime:  851 msec getWaitedTime:  1 
    ThreadInfo:               vert.x-eventloop-thread-2 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:               vert.x-eventloop-thread-7 getBlockedTime:  883 msec getWaitedTime:  4 
    ThreadInfo:              vert.x-eventloop-thread-10 getBlockedTime:    0 msec getWaitedTime:  0 
    ThreadInfo:               vert.x-eventloop-thread-3 getBlockedTime:  851 msec getWaitedTime:  3 

Mihai Stanescu

unread,
Mar 31, 2017, 8:53:46 AM3/31/17
to ve...@googlegroups.com
And another hint: I did similar performance tests on Vert.x 2.x some time ago, and Vert.x could load the CPU to the max.

Tim Fox

unread,
Mar 31, 2017, 9:09:45 AM3/31/17
to vert.x
Can you please post a stack dump?

Tim Fox

unread,
Mar 31, 2017, 9:27:20 AM3/31/17
to vert.x
Also... any reason you're using a clustered Vert.x for this?

Tim Fox

unread,
Mar 31, 2017, 9:45:44 AM3/31/17
to vert.x
I can't see any synchronization in the non-clustered event bus, but in the ClusteredEventBus I see this:


Just a wild guess, but I believe this is new code, and it looks like it could be a contention point in the case where there is no sending context.

Mihai Stanescu

unread,
Mar 31, 2017, 9:47:46 AM3/31/17
to ve...@googlegroups.com
"vert.x-eventloop-thread-9" #26 prio=5 os_prio=0 tid=0x00007f2dec107800 nid=0x77c4 waiting for monitor entry [0x00007f2df4826000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at sun.security.provider.SecureRandom.engineNextBytes(SecureRandom.java:215)
    - waiting to lock <0x00000006c6227660> (a sun.security.provider.SecureRandom)
    at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:534)
    at sun.security.provider.NativePRNG$RandomIO.access$400(NativePRNG.java:331)
    at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:220)
    at java.security.SecureRandom.nextBytes(SecureRandom.java:468)
    at java.util.UUID.randomUUID(UUID.java:145)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.generateReplyAddress(ClusteredEventBus.java:267)
    at io.vertx.core.eventbus.impl.EventBusImpl.createReplyHandlerRegistration(EventBusImpl.java:401)
    at io.vertx.core.eventbus.impl.EventBusImpl.sendOrPubInternal(EventBusImpl.java:416)
    at io.vertx.core.eventbus.impl.EventBusImpl.send(EventBusImpl.java:94)
    at io.vertx.core.eventbus.impl.EventBusImpl.send(EventBusImpl.java:84)
    at io.vertx.test.performance.EventBusTest$2.doSend(EventBusTest.java:89)
    at io.vertx.test.performance.EventBusTest$2.lambda$doSend$0(EventBusTest.java:94)
    at io.vertx.test.performance.EventBusTest$2$$Lambda$52/1917065682.handle(Unknown Source)
    at io.vertx.core.eventbus.impl.EventBusImpl.lambda$convertHandler$1(EventBusImpl.java:334)
    at io.vertx.core.eventbus.impl.EventBusImpl$$Lambda$53/1951528094.handle(Unknown Source)
    at io.vertx.core.eventbus.impl.HandlerRegistration.deliver(HandlerRegistration.java:212)
    at io.vertx.core.eventbus.impl.HandlerRegistration.handle(HandlerRegistration.java:191)
    at io.vertx.core.eventbus.impl.EventBusImpl.lambda$deliverToHandler$3(EventBusImpl.java:505)
    at io.vertx.core.eventbus.impl.EventBusImpl$$Lambda$57/412796178.handle(Unknown Source)
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:337)
    at io.vertx.core.impl.ContextImpl$$Lambda$12/1597514780.run(Unknown Source)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at java.lang.Thread.run(Thread.java:745)


Mihai Stanescu

unread,
Mar 31, 2017, 9:51:35 AM3/31/17
to ve...@googlegroups.com
I am using the clustered version because my application will also be clustered.

Tim Fox

unread,
Mar 31, 2017, 9:52:09 AM3/31/17
to vert.x
Aha, this is an entropy issue: it's blocking on generating a secure random number.

This is a known issue with SecureRandom... it requires entropy from the keyboard, mouse, disk, or similar, which can be a problem on servers, especially in VMs.

Try running without a clustered Vert.x; I can't see any reason why it needs to be clustered in this test anyway.
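The stack trace above shows all the event loops serializing on the single SecureRandom monitor inside UUID.randomUUID(). As a general JVM-level workaround (an assumption about the environment, not something tested in this thread), one can request a non-blocking PRNG explicitly, or start the JVM with -Djava.security.egd=file:/dev/./urandom:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class NonBlockingRandom {
    public static SecureRandom create() {
        try {
            // NativePRNGNonBlocking reads only /dev/urandom, so it never stalls
            // waiting for kernel entropy (available on Linux/Solaris JDKs).
            return SecureRandom.getInstance("NativePRNGNonBlocking");
        } catch (NoSuchAlgorithmException e) {
            // Fall back to the platform default (may block on some systems).
            return new SecureRandom();
        }
    }

    public static void main(String[] args) {
        byte[] buf = new byte[16];
        create().nextBytes(buf);
        System.out.println("got " + buf.length + " random bytes");
    }
}
```

Note this only removes the blocking on entropy; the engineNextBytes call is still synchronized per SecureRandom instance, so heavy cross-thread use of one instance can still contend.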

Tim Fox

unread,
Mar 31, 2017, 9:57:08 AM3/31/17
to vert.x
But you're not actually sending anything across the cluster, so you're not really testing what your app would actually do. Also, presumably if your app is clustered then the verticles would be on different servers, so they wouldn't be competing for the same entropy source.

In any case, it's worth testing without a clustered Vert.x just to see if it solves the throughput issue.

Mihai Stanescu

unread,
Mar 31, 2017, 9:57:41 AM3/31/17
to ve...@googlegroups.com
With the unclustered version the contention is quite low and does not seem to depend on the number of verticles.

Here I tested with 6000 verticles, unclustered Vert.x:

ThreadInfo:               vert.x-eventloop-thread-2 getBlockedTime:  192 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-13 getBlockedTime:   11 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-8 getBlockedTime:  170 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-5 getBlockedTime:   11 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-6 getBlockedTime:  162 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-12 getBlockedTime:  198 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-7 getBlockedTime:    7 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-3 getBlockedTime:    9 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-0 getBlockedTime:  191 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-9 getBlockedTime:   13 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-11 getBlockedTime:   15 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-14 getBlockedTime:  240 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-10 getBlockedTime:  241 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-4 getBlockedTime:  258 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-1 getBlockedTime:    9 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-15 getBlockedTime:    5 msec getWaitedTime:  0

Tim Fox

unread,
Mar 31, 2017, 9:58:53 AM3/31/17
to vert.x
(Idea for a feature request: don't use secure random for cluster reply addresses; we could instead use something like <host_name>.sequence_number.)
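The suggested scheme can be sketched with a plain AtomicLong; the class name and address format below are hypothetical illustrations of the idea, not Vert.x internals:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ReplyAddressGenerator {
    private final String nodePrefix;            // e.g. host name or cluster node id
    private final AtomicLong sequence = new AtomicLong();

    public ReplyAddressGenerator(String nodePrefix) {
        this.nodePrefix = nodePrefix;
    }

    // Lock-free and unique per node: no SecureRandom, no shared monitor.
    public String next() {
        return nodePrefix + "." + sequence.incrementAndGet();
    }
}
```

The trade-off is that addresses become predictable, which may matter if reply addresses are meant to be unguessable.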

Mihai Stanescu

unread,
Mar 31, 2017, 10:00:52 AM3/31/17
to ve...@googlegroups.com
Well, don't celebrate too fast :)

Now I ran the code without the reply. With 32000 verticles there's still some 40% contention here.

ThreadInfo:               vert.x-eventloop-thread-7 getBlockedTime: 4602 msec getWaitedTime:  6 
ThreadInfo:               vert.x-eventloop-thread-3 getBlockedTime: 4263 msec getWaitedTime:  5 
ThreadInfo:               vert.x-eventloop-thread-8 getBlockedTime:   33 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-12 getBlockedTime:   29 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-9 getBlockedTime: 4961 msec getWaitedTime:  2 
ThreadInfo:               vert.x-eventloop-thread-2 getBlockedTime:   20 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-10 getBlockedTime:   25 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-4 getBlockedTime:   21 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-14 getBlockedTime:   23 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-11 getBlockedTime: 4723 msec getWaitedTime:  1 
ThreadInfo:              vert.x-eventloop-thread-15 getBlockedTime:   24 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-13 getBlockedTime: 4706 msec getWaitedTime:  3 
ThreadInfo:               vert.x-eventloop-thread-6 getBlockedTime:   20 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-0 getBlockedTime: 4606 msec getWaitedTime:  5 
ThreadInfo:               vert.x-eventloop-thread-1 getBlockedTime: 4553 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-5 getBlockedTime: 4559 msec getWaitedTime:  8 


"vert.x-eventloop-thread-13" #49 prio=5 os_prio=0 tid=0x00007f444803b000 nid=0x7c6e waiting for monitor entry [0x00007f443d258000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at io.vertx.core.impl.TaskQueue.execute(TaskQueue.java:74)
    - waiting to lock <0x00000006c62f0160> (a java.util.LinkedList)
    at io.vertx.core.impl.ContextImpl.executeBlocking(ContextImpl.java:291)
    at io.vertx.core.impl.ContextImpl.executeBlocking(ContextImpl.java:250)
    at io.vertx.test.fakecluster.FakeClusterManager$FakeAsyncMultiMap.get(FakeClusterManager.java:347)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.sendOrPub(ClusteredEventBus.java:260)
    at io.vertx.core.eventbus.impl.EventBusImpl$SendContextImpl.next(EventBusImpl.java:450)
    at io.vertx.core.eventbus.impl.EventBusImpl.sendOrPubInternal(EventBusImpl.java:418)
    at io.vertx.core.eventbus.impl.EventBusImpl.send(EventBusImpl.java:94)
    at io.vertx.core.eventbus.impl.EventBusImpl.send(EventBusImpl.java:79)
    at io.vertx.test.performance.EventBusTest$3.lambda$start$0(EventBusTest.java:137)
    at io.vertx.test.performance.EventBusTest$3$$Lambda$42/687819258.handle(Unknown Source)
    at io.vertx.core.eventbus.impl.HandlerRegistration.deliver(HandlerRegistration.java:212)
    at io.vertx.core.eventbus.impl.HandlerRegistration.handle(HandlerRegistration.java:191)
    at io.vertx.core.eventbus.impl.EventBusImpl.lambda$deliverToHandler$3(EventBusImpl.java:505)
    at io.vertx.core.eventbus.impl.EventBusImpl$$Lambda$51/1345882575.handle(Unknown Source)
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:337)
    at io.vertx.core.impl.ContextImpl$$Lambda$12/1597514780.run(Unknown Source)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at java.lang.Thread.run(Thread.java:745)

Mihai Stanescu

unread,
Mar 31, 2017, 10:01:49 AM3/31/17
to ve...@googlegroups.com
Hmm... why is it using this fake thing?

Mihai Stanescu

unread,
Mar 31, 2017, 10:03:27 AM3/31/17
to ve...@googlegroups.com
I will start it with Hazelcast. Sorry for this.

Mihai Stanescu

unread,
Mar 31, 2017, 10:06:14 AM3/31/17
to ve...@googlegroups.com
OK, no problems with HazelcastClusterManager.

32000 verticles sending to themselves show low contention.

ThreadInfo:              vert.x-eventloop-thread-12 getBlockedTime:    1 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-1 getBlockedTime:   18 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-15 getBlockedTime:    2 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-14 getBlockedTime:    1 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-5 getBlockedTime:   13 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-10 getBlockedTime:    2 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-11 getBlockedTime:   10 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-13 getBlockedTime:    9 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-3 getBlockedTime:   17 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-6 getBlockedTime:    2 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-8 getBlockedTime:    1 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-4 getBlockedTime:    1 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-7 getBlockedTime:   11 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-2 getBlockedTime:    1 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-0 getBlockedTime:    9 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-9 getBlockedTime:   15 msec getWaitedTime:  0 

Mihai Stanescu

unread,
Mar 31, 2017, 10:09:07 AM3/31/17
to ve...@googlegroups.com
> (Idea for feature request: don't use secure random for cluster reply addresses, could instead use something like: <host_name>.sequence_number)

Thanks for the help.

It does not sound to me like a feature, more like a bug :). The feature, for me, would be to use all the CPU power :)

Tim Fox

unread,
Mar 31, 2017, 10:37:29 AM3/31/17
to vert.x


On Friday, 31 March 2017 15:09:07 UTC+1, yahim stnsc wrote:
(Idea for feature request: don't use secure random for cluster reply addresses, could instead use something like: <host_name>.sequence_number)

Thanks for the help.

Does not sound to me like a feature, more like a bug. :) . The feature for me would be to use all the cpu power :)

I suspect if you actually ran it really clustered (i.e. with verticles on different machines) you wouldn't see the issue, as you wouldn't be competing for the same entropy source.

So your example is a bit contrived... however, I think the feature is worthwhile anyway.

Mihai Stanescu

unread,
Mar 31, 2017, 10:54:13 AM3/31/17
to ve...@googlegroups.com
> I suspect if you actually ran it really clustered (i.e. with verticles on different machines) you wouldn't see the issue as you wouldn't be competing for the same entropy source.

The case is not so unlikely.

One of the nice features of Vert.x is referential transparency. In my case the application can run on a single box or be split into tiers, so when it is not multi-tier the contention can happen, because everything runs locally.

I agree that in my case the test is doing only messaging and nothing else, so in a real app the "anything else" would probably account for more processing than this contention.




Tim Fox

unread,
Mar 31, 2017, 11:01:07 AM3/31/17
to vert.x


On Friday, 31 March 2017 15:54:13 UTC+1, yahim stnsc wrote:
> The case is not so unlikely.
>
> One of the nice features of Vert.x is referential transparency. In my case the application can run on a single box or be split into tiers, so when it is not multi-tier the contention can happen, because everything runs locally.

This is fine, but presumably if it is running on a single server you'd run it with clustered = false, which wouldn't exhibit the issue.

> I agree that in my case the test is doing only messaging and nothing else, so in a real app the "anything else" would probably account for more processing than this contention.

+1

Mihai Stanescu

unread,
Mar 31, 2017, 11:08:02 AM3/31/17
to ve...@googlegroups.com
> This is fine, but presumably if it is running on a single server you'd run it with clustered = false, which wouldn't exhibit the issue.

It is still a cluster, though, even when not split into multiple tiers. Each tier can also be split horizontally (mostly for HA and/or scale).

And even a single machine can be both a sender and a consumer. The send/reply pattern is used to detect failed requests.

Julien Viet

unread,
Mar 31, 2017, 11:23:03 AM3/31/17
to ve...@googlegroups.com
Can you open an issue with a link to this thread?

Julien Viet

unread,
Mar 31, 2017, 11:25:56 AM3/31/17
to ve...@googlegroups.com
Why do you need 6000 verticles? What difference does it make compared with N verticles, where N is the number of event loops?


Thomas SEGISMONT

unread,
Mar 31, 2017, 11:34:21 AM3/31/17
to ve...@googlegroups.com
No, it's not new; it has been present for a few releases already.


Mihai Stanescu

unread,
Mar 31, 2017, 11:46:40 AM3/31/17
to ve...@googlegroups.com
@Julien The contention was also observed with 32 verticles; 6000 just makes it a bit worse, so it was easier to get the thread dump.

There's bad news though.

Now I am running testSendToItself and sometimes there is still contention.

"vert.x-eventloop-thread-13" #75 prio=5 os_prio=0 tid=0x00007f844c0f7800 nid=0x2405 waiting for monitor entry [0x00007f847e3a5000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at io.vertx.spi.cluster.hazelcast.impl.ChoosableSet.choose(ChoosableSet.java:74)
    - waiting to lock <0x00000006c68ae678> (a io.vertx.spi.cluster.hazelcast.impl.ChoosableSet)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.sendToSubs(ClusteredEventBus.java:347)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.lambda$sendOrPub$4(ClusteredEventBus.java:245)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus$$Lambda$48/1805668161.handle(Unknown Source)
    at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.get(HazelcastAsyncMultiMap.java:101)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.sendOrPub(ClusteredEventBus.java:260)
    at io.vertx.core.eventbus.impl.EventBusImpl$SendContextImpl.next(EventBusImpl.java:450)
    at io.vertx.core.eventbus.impl.EventBusImpl.sendOrPubInternal(EventBusImpl.java:418)
    at io.vertx.core.eventbus.impl.EventBusImpl.send(EventBusImpl.java:94)
    at io.vertx.core.eventbus.impl.EventBusImpl.send(EventBusImpl.java:79)
    at io.vertx.test.performance.EventBusTest$3.lambda$start$0(EventBusTest.java:166)
    at io.vertx.test.performance.EventBusTest$3$$Lambda$45/1579713601.handle(Unknown Source)
    at io.vertx.core.eventbus.impl.HandlerRegistration.deliver(HandlerRegistration.java:212)
    at io.vertx.core.eventbus.impl.HandlerRegistration.handle(HandlerRegistration.java:191)
    at io.vertx.core.eventbus.impl.EventBusImpl.lambda$deliverToHandler$3(EventBusImpl.java:505)
    at io.vertx.core.eventbus.impl.EventBusImpl$$Lambda$55/831140959.handle(Unknown Source)
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:337)
    at io.vertx.core.impl.ContextImpl$$Lambda$15/2388865.run(Unknown Source)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at java.lang.Thread.run(Thread.java:745)


ThreadInfo:               vert.x-eventloop-thread-8 getBlockedTime:    1 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-1 getBlockedTime: 5882 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-6 getBlockedTime:    3 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-14 getBlockedTime:    3 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-5 getBlockedTime: 5877 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-12 getBlockedTime:    2 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-4 getBlockedTime:    8 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-11 getBlockedTime: 5731 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-13 getBlockedTime: 5836 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-3 getBlockedTime: 5844 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-15 getBlockedTime:    4 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-2 getBlockedTime:    3 msec getWaitedTime:  0 
ThreadInfo:              vert.x-eventloop-thread-10 getBlockedTime:    3 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-7 getBlockedTime: 5929 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-0 getBlockedTime: 5864 msec getWaitedTime:  0 
ThreadInfo:               vert.x-eventloop-thread-9 getBlockedTime: 5926 msec getWaitedTime:  0

Mihai Stanescu

unread,
Mar 31, 2017, 11:57:28 AM3/31/17
to ve...@googlegroups.com
I attached the full thread dump here


thread_dump.txt

Tim Fox

unread,
Mar 31, 2017, 12:52:54 PM3/31/17
to vert.x
Well, you're unnecessarily hammering this method, so some contention is likely ;)

I doubt very much this would be an issue in a real application, for the reasons already stated. If it is, we can revisit it; it shouldn't be too hard to create a version of that method that doesn't require synchronization. Or even better, submit a PR! :)

Mihai Stanescu

unread,
Mar 31, 2017, 1:07:36 PM3/31/17
to ve...@googlegroups.com
@Tim, yeah, I'll put this to rest for now as I have other bottlenecks to fish for.

@Julien My understanding is that Vert.x is inspired by the actor model and thus expects a high number of verticles/handlers; otherwise, if I cannot create many, what's the point of Vert.x?

I do not have that many verticles in the app, but there could be hundreds of thousands of handlers.

From my tests so far, the number did not seem to affect Vert.x much.

Julien Viet

unread,
Mar 31, 2017, 2:22:56 PM3/31/17
to ve...@googlegroups.com
On Mar 31, 2017 6:52 PM, "Tim Fox" <timv...@gmail.com> wrote:
Well, you're unnecessarily hammering this method so some contention is likely ;)

I doubt very much this would be an issue in a real application for reasons already stated. If it is, we can revisit it, it shouldn't be too hard to create a version of that method that doesn't require synchronization. Or even better, submit a PR! :)

I’ve come up with such a version and a JMH benchmark a while ago: https://groups.google.com/d/msg/vertx-dev/Z0v8e6TPrqw/X5Tur81ADgAJ

Julien Viet

unread,
Mar 31, 2017, 2:26:39 PM3/31/17
to ve...@googlegroups.com
Can you try with a modified version of the Hazelcast ChoosableSet:

https://github.com/vietj/choosable-perf

to see how it goes?

Even though your case is not realistic (don’t take it badly!), it is interesting to see whether a CAS strategy can improve it.
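The CAS idea can be sketched as follows; this is a simplified stand-in for the synchronized choose() in ChoosableSet, not the actual Vert.x or choosable-perf code, and the class name is made up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class CasChooser<T> {
    private final List<T> items;               // immutable snapshot of subscribers
    private final AtomicInteger pos = new AtomicInteger();

    public CasChooser(List<T> items) {
        this.items = List.copyOf(items);
    }

    // Round-robin without a monitor: getAndIncrement is a single atomic
    // fetch-and-add, so contended event loops never block on a lock.
    public T choose() {
        if (items.isEmpty()) {
            return null;
        }
        // floorMod keeps the index valid even after the counter overflows.
        int i = Math.floorMod(pos.getAndIncrement(), items.size());
        return items.get(i);
    }
}
```

Under heavy contention the rotation order is no longer strictly fair per caller, but as discussed below, approximate round-robin is usually good enough for load spreading.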

Mihai Stanescu

unread,
Mar 31, 2017, 4:50:51 PM3/31/17
to ve...@googlegroups.com
Not sure how unrealistic it is.

If one wants to send a ton of events to an address, then this is a valid scenario. The contention was about 30 percent of CPU even with a low number of verticles, and I guess it increases with the number of verticles.

I would give up the perfect round-robin if that would reduce contention; if it has "glitches" in a random way, things are still balanced well enough. Maybe the load would be 90 percent, but that is still better than now. Perfect round-robin delivers a perfectly even load only if the processing is uniform; in an app with more diversity the handlers will perform processing at different loads anyway. If I injected one slower consumer, it would start to accumulate events on its thread anyway.

I do not have this scenario, I agree, but I could modify the test to use more addresses so there are not many sends to the same address.


Tim Fox

unread,
Apr 1, 2017, 2:53:08 AM4/1/17
to vert.x


On Friday, 31 March 2017 21:50:51 UTC+1, yahim stnsc wrote:
> Not sure how unrealistic it is.
>
> If one wants to send a ton of events to an address, then this is a valid scenario. The contention was about 30 percent of CPU even with a low number of verticles, and I guess it increases with the number of verticles.

Yes, but this only happens with a clustered event bus, and in a real clustered test it would probably be swallowed up by the cost of network transit, so it wouldn't be anywhere near 30%. That's my guess anyway!
 