Hi,
I am running a clustered Vert.x web application on an 8 vCPU / 16 GB RAM VM.
The application receives payloads as POST requests and puts them on the event bus; on the other end a consumer reads those payloads and pushes them to Kafka.
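For reference, the receiving side looks roughly like this (a sketch; the route path, query parameter names, and port are placeholders, not my exact code):

import io.vertx.core.eventbus.DeliveryOptions;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.BodyHandler;

Router router = Router.router(vertx);
router.post("/ingest").handler(BodyHandler.create());
router.post("/ingest").handler(ctx -> {
    // Forward the payload to the event bus; topic and key travel as headers.
    DeliveryOptions opts = new DeliveryOptions()
        .addHeader("topic", ctx.request().getParam("topic"))   // placeholder source
        .addHeader("key", ctx.request().getParam("key"));      // placeholder source
    vertx.eventBus().send("platform", ctx.body().asString(), opts);
    ctx.response().setStatusCode(202).end();
});
vertx.createHttpServer().requestHandler(router).listen(8080);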
Each application is configured as:
VertxOptions vertxOptions = context.vertxOptions();
vertxOptions.setEventLoopPoolSize(16).setWorkerPoolSize(80);

DeploymentOptions deploymentOptions = context.deploymentOptions();
deploymentOptions.setInstances(16);
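For completeness, this is roughly how those options are applied at startup (a sketch; my context helpers just build these objects, and the verticle class name is a placeholder):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

// Bootstrap a clustered Vert.x instance and deploy 16 verticle instances.
VertxOptions vertxOptions = new VertxOptions()
    .setEventLoopPoolSize(16)
    .setWorkerPoolSize(80);

Vertx.clusteredVertx(vertxOptions).onSuccess(vertx ->
    vertx.deployVerticle("com.example.PlatformVerticle", // placeholder name
        new DeploymentOptions().setInstances(16)));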
My Kafka producer is configured as:
KafkaProducer<String, String> producer =
    KafkaProducer.createShared(vertx, "the-producer", config);

vertx.eventBus().<String>consumer("platform", message -> {
    String topic = message.headers().get("topic");
    String key = message.headers().get("key");
    KafkaProducerRecord<String, String> record =
        KafkaProducerRecord.create(topic, key, message.body());
    producer.send(record).onSuccess(...).onFailure(...);
});
As you can see, I am creating a shared producer, so all 16 verticle instances of one deployed application share a single underlying Kafka producer. Does it make sense to create multiple producers in order to increase the throughput of the application?
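If multiple producers would help, something like this sketch is what I have in mind (the pool size and producer name suffixes are placeholders, not tuned values):

import java.util.concurrent.atomic.AtomicInteger;
import io.vertx.kafka.client.producer.KafkaProducer;

// Distinct shared-producer names create distinct underlying producers,
// so the 16 verticle instances would spread across a small pool.
static final int POOL_SIZE = 4; // placeholder, would need benchmarking
static final AtomicInteger NEXT = new AtomicInteger();

// Inside each verticle's start():
int index = NEXT.getAndIncrement() % POOL_SIZE;
KafkaProducer<String, String> producer =
    KafkaProducer.createShared(vertx, "the-producer-" + index, config);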
Thanks
Sachin