Worker verticles are never executed concurrently by more than one thread. Worker verticles are also not allowed to use TCP or HTTP clients or servers. Worker verticles normally communicate with other verticles using the vert.x event bus, e.g. receiving work to process.
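For example, a worker verticle typically just registers an event-bus handler and does its blocking work inside it. A rough sketch, assuming the vert.x 2 Python API (the address 'work.queue' is made up):

import time
from core.event_bus import EventBus

def handle_work(message):
    # Blocking calls are fine here: this runs on a worker thread,
    # not on an event loop.
    time.sleep(1)  # stand-in for some slow, blocking computation
    message.reply({'result': 'done'})

EventBus.register_handler('work.queue', handler=handle_work)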
import vertx

vertx.deploy_verticle('http_server.py')
# Deploy 20 instances of the worker verticle; the config is passed as a dict.
vertx.deploy_worker_verticle('worker.py', {'dummy': 'dummy'}, 20)
Is the default pool size > 10?
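For reference, the HTTP-facing verticle deployed above would normally hand work to those worker instances over the event bus and respond when the reply comes back. A rough sketch, again assuming the vert.x 2 Python API (the address 'work.queue' and the payload are made up):

import vertx
from core.event_bus import EventBus

server = vertx.create_http_server()

def request_handler(req):
    def on_reply(reply):
        # Answer the HTTP request once a worker instance has replied.
        req.response.end(str(reply.body['result']))
    # Hand the slow work off to one of the worker.py instances.
    EventBus.send('work.queue', {'payload': req.path}, on_reply)

server.request_handler(request_handler)
server.listen(8080)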
Also remember that for any verticle that calls that "slow" worker, the overall execution time will only be as fast as the slowest worker...
You need to understand what an instance is. If you create ten instances of a worker verticle (I think what Tim referred to as a "worker" is a single instance of a worker verticle; I hope I'm not putting words in his mouth), that's ten potential parallel processors (although the thread pool may limit how many can be active at one time).
Each of those instances can only be accessed by a single thread at a given time. That is not an error, and therefore does not need to be fixed :) Maybe the documentation needs clarifying slightly to say "A given worker verticle instance is never executed concurrently by more than one thread" - if it were worded like that, would you be less confused?
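Put another way: because only one thread ever runs a given instance at a time, state held inside that instance doesn't need locking, even though several instances may be processing in parallel. A small sketch under the same assumptions as above (and assuming, as I understand it, that each deployed instance gets its own copy of the script's top-level state):

from core.event_bus import EventBus

# Top-level state here belongs to this one instance; no other thread
# touches it while a message is being handled.
jobs_handled = 0

def handle_work(message):
    global jobs_handled
    jobs_handled += 1  # safe without a lock inside this instance
    message.reply({'handled_by_this_instance': jobs_handled})

EventBus.register_handler('work.queue', handler=handle_work)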