Yes, you can use Apache Kafka to communicate between microservices. With Spring Boot you can use Spring Kafka or the plain kafka-clients API. There are various ways to achieve this, and the right one depends entirely on your use case. You can start with the Producer and Consumer APIs: a producer sends records to a topic, and a consumer (or a group of consumers) reads the records from that topic.
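As a minimal sketch of that pattern with Spring Kafka (the topic name, group id, and class names here are illustrative assumptions, not from the original answer):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderMessaging {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderMessaging(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Producer side: send a record to the "orders" topic (assumed name).
    public void send(String orderId, String payload) {
        kafkaTemplate.send("orders", orderId, payload);
    }

    // Consumer side: each member of the "order-group" consumer group
    // receives a share of the topic's partitions.
    @KafkaListener(topics = "orders", groupId = "order-group")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}
```

Spring Boot auto-configures the KafkaTemplate and listener container from the spring.kafka.* properties, so no further wiring is needed for a basic setup.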
Instead of Spring Kafka, you could also use Spring Cloud Stream for Kafka. You can read more about it in this article. Spring Cloud Stream provides several useful features, such as DLQ support, JSON serialization by default, and interactive queries.
We will create a simple system that consists of three microservices. The order-service sends orders to the Kafka topic called orders. The two other microservices, stock-service and payment-service, listen for the incoming events. After receiving them, they verify whether it is possible to execute the order. For example, if there are insufficient funds in the customer's account, the order is rejected. Otherwise, the payment-service accepts the order and sends a response to the payment-orders topic. The stock-service does the same, except that it verifies the number of products in stock and sends its response to the stock-orders topic.
The order-service is the most important microservice in our scenario. It acts as an order gateway and a saga pattern orchestrator. It requires all three topics used in our architecture. In order to create the topics automatically on application startup, we need to define the following beans:
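A sketch of such beans using Spring Kafka's TopicBuilder, assuming the three topic names from the architecture above; the partition and replica counts are illustrative:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // With a KafkaAdmin on the classpath, NewTopic beans are created
    // on the broker automatically at application startup.
    @Bean
    public NewTopic orders() {
        return TopicBuilder.name("orders").partitions(3).replicas(1).build();
    }

    @Bean
    public NewTopic paymentOrders() {
        return TopicBuilder.name("payment-orders").partitions(3).replicas(1).build();
    }

    @Bean
    public NewTopic stockOrders() {
        return TopicBuilder.name("stock-orders").partitions(3).replicas(1).build();
    }
}
```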
Finally, the implementation of our stream. We need to define a KStream bean. We join two streams using the join method of KStream, with a joining window of 10 seconds. As a result, we set the status of the order and send the updated order to the orders topic; we use the same topic as for sending new orders.
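A hedged sketch of what such a KStream bean might look like; only the topic names and the 10-second window come from the text, while the Order type, the OrderManageService merge step, and the serde choices are assumptions:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.serializer.JsonSerde;

@Configuration // assumes @EnableKafkaStreams is declared elsewhere
public class OrdersStreamConfig {

    @Bean
    public KStream<Long, Order> stream(StreamsBuilder builder) {
        JsonSerde<Order> orderSerde = new JsonSerde<>(Order.class);
        KStream<Long, Order> payments = builder
            .stream("payment-orders", Consumed.with(Serdes.Long(), orderSerde));
        KStream<Long, Order> stock = builder
            .stream("stock-orders", Consumed.with(Serdes.Long(), orderSerde));

        // Join responses arriving within a 10-second window; the (assumed)
        // OrderManageService merges the two partial results into a
        // confirmed or rejected order, which goes back to "orders".
        payments.join(
                stock,
                (payment, stockResp) -> new OrderManageService().confirm(payment, stockResp),
                JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofSeconds(10)),
                StreamJoined.with(Serdes.Long(), orderSerde, orderSerde))
            .to("orders");

        return payments;
    }
}
```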
In Spring Boot, the name of the application is by default also the name of the consumer group for Kafka Streams, so we should set it in application.yml. Of course, we also need to set the address of the Kafka bootstrap server. Finally, we have to configure the default key and value serializers for events. This applies to both standard and streams processing.
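As a sketch, the application.yml described above might look like this; the application name and the concrete serializer classes are illustrative assumptions:

```yaml
spring:
  application:
    name: order-service   # used by default as the Kafka Streams consumer group
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.LongSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.LongDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
```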
If we run more than one instance of the order-service on the same machine, it is also important to override the default location of the state store. To do that, we should define the following property with a value unique to each instance:
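For example, via the pass-through spring.kafka.streams.properties map; the directory path here is an arbitrary example:

```yaml
spring:
  kafka:
    streams:
      properties:
        state.dir: /tmp/kafka-streams/order-service-1   # must differ per instance
```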
After running, it will print the address of your node. For me, it is 127.0.0.1:56820. You should set that address as the value of the spring.kafka.bootstrap-servers property. You can also display the list of created topics using the following command:
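With the standard Kafka CLI, the listing command might look like this (using the address printed at startup; yours will differ):

```shell
kafka-topics.sh --bootstrap-server 127.0.0.1:56820 --list
```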
Some test data is inserted while payment-service and stock-service start, so you can set the value of customerId or productId to anything between 1 and 100 and it will work. However, you can also use a method for generating a random stream of data. The following bean is responsible for generating 10,000 random orders:
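A framework-free sketch of such a generator; the Order fields and value ranges are assumptions based on the surrounding text, and in the article this logic would live in a Spring bean that sends each order with a KafkaTemplate:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

public class OrderGenerator {

    // Hypothetical order record; the field names are assumptions.
    public record Order(long id, long customerId, long productId,
                        int productCount, int price, String status) {}

    private static final Random RANDOM = new Random();
    private static final AtomicLong ID = new AtomicLong();

    // Generates `count` random orders with customerId/productId between
    // 1 and 100, matching the test data inserted at startup.
    public static List<Order> generate(int count) {
        List<Order> orders = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            orders.add(new Order(
                ID.incrementAndGet(),
                RANDOM.nextInt(100) + 1,
                RANDOM.nextInt(100) + 1,
                RANDOM.nextInt(5) + 1,
                100 * (RANDOM.nextInt(5) + 1),
                "NEW"));
        }
        return orders;
    }

    public static void main(String[] args) {
        System.out.println(generate(10000).size()); // prints 10000
    }
}
```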
How would you change this solution if `Reservation` were a JPA entity and you had to get the actual account balances from a data source? One problem I see is that the initializer argument for aggregate does not have an id parameter, which would make it impossible to get the current status from the DB.
One of the traditional approaches to communication between microservices is through their REST APIs. However, as your system evolves and the number of microservices grows, communication becomes more complex, and the architecture might start resembling our old friend, the spaghetti anti-pattern: services depend on each other or are tightly coupled, slowing down development teams. This model can exhibit low latency, but only works if services are made highly available.
To overcome this design disadvantage, new architectures aim to decouple senders from receivers, with asynchronous messaging. In a Kafka-centric architecture, low latency is preserved, with additional advantages like message balancing among available consumers and centralized management.
Apache Kafka is a distributed streaming platform. It was initially conceived as a message queue and open-sourced by LinkedIn in 2011. Its community evolved Kafka to provide key capabilities: publishing and subscribing to streams of records, storing streams of records durably, and processing streams of records as they occur.
The Consumer Group in Kafka is an abstraction that combines both models. Record processing can be load-balanced among the members of a consumer group, and Kafka allows you to broadcast messages to multiple consumer groups. It is the same publish-subscribe semantics, but the subscriber is a cluster of consumers instead of a single process.
Create an apps.jdl file that defines the store, alert, and gateway applications in JHipster Domain Language (JDL). Kafka integration is enabled by adding messageBroker kafka to the store and alert app definitions.
The JHipster generator adds a spring-cloud-starter-stream-kafka dependency to applications that declare messageBroker kafka (in JDL), enabling the Spring Cloud Stream programming model with the Apache Kafka binder for using Kafka as the messaging middleware.
Spring Cloud Stream was recently added back to JHipster. Now, instead of working with Kafka Core APIs, we can use the binder abstraction, declaring input/output arguments in the code, and letting the specific binder implementation handle the mapping to the broker destination.
IMPORTANT NOTE: At the moment, JHipster includes Spring Cloud Stream 3.2.4, which deprecates the annotation-based programming model (the @EnableBinding and @StreamListener annotations) in favor of the functional programming model. Stay tuned for future JHipster updates.
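A sketch of the functional style that replaces those annotations; the StoreAlertDTO type and the binding destination are illustrative assumptions:

```java
import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AlertBindingConfig {

    // In the functional model, a Consumer bean replaces @StreamListener.
    // Spring Cloud Stream binds it to a destination by naming convention,
    // e.g. spring.cloud.stream.bindings.handleAlert-in-0.destination=store-alerts
    @Bean
    public Consumer<StoreAlertDTO> handleAlert() {
        return alert -> System.out.println("Received alert: " + alert);
    }
}
```

The `<functionName>-in-0` binding name is the framework's convention for the first input of a functional bean.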
Create an AlertConsumer service to persist a StoreAlert and send the email notification when receiving an alert message through Kafka. Add KafkaProperties, StoreAlertRepository, and EmailService as constructor arguments. Then add a start() method to initialize the consumer and enter the processing loop.
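A hedged sketch of such an AlertConsumer; the topic name, the KafkaProperties accessor, and the repository and e-mail method names are assumptions about the generated JHipster code:

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.springframework.stereotype.Service;

@Service
public class AlertConsumer {

    private static final String TOPIC = "store-alerts"; // assumed topic name

    private final KafkaProperties kafkaProperties;
    private final StoreAlertRepository storeAlertRepository;
    private final EmailService emailService;
    private KafkaConsumer<String, String> kafkaConsumer;

    public AlertConsumer(KafkaProperties kafkaProperties,
                         StoreAlertRepository storeAlertRepository,
                         EmailService emailService) {
        this.kafkaProperties = kafkaProperties;
        this.storeAlertRepository = storeAlertRepository;
        this.emailService = emailService;
    }

    public void start() {
        this.kafkaConsumer = new KafkaConsumer<>(kafkaProperties.getConsumerProps());
        this.kafkaConsumer.subscribe(Collections.singletonList(TOPIC));
        while (true) {
            // Block up to 3 seconds waiting for new alert records.
            for (ConsumerRecord<String, String> record
                    : kafkaConsumer.poll(Duration.ofSeconds(3))) {
                // Persist the alert, then notify by e-mail
                // (JSON deserialization omitted for brevity).
                storeAlertRepository.save(new StoreAlert(record.value()));
                emailService.sendSimpleMessage(record.value());
            }
        }
    }
}
```

In practice the loop would run on its own thread (e.g. via an ExecutorService) so that start() does not block application startup.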
Once everything is up, go to the gateway at :8081 and log in. Create a store entity and then update it. The alert microservice should log entries when processing the received message from the store service.
To enable the login from the alert application, go to your Google Account settings and choose the Security tab. Turn on 2-Step Verification for your account. In the Signing in to Google section, choose App passwords and create a new app password. In the Select app dropdown, set Other (Custom name) and type a name for this password. Click Generate and copy the password. Update docker-compose/.env and set the app password for Gmail authentication.
This tutorial showed how a Kafka-centric architecture allows decoupling microservices to simplify the design and development of distributed systems. To continue learning about these topics check out the following links:
Providing feedback to users is of tremendous importance in a modern web application. E-mail remains the best way to communicate when a user is not active on the platform. Sending e-mail can, however, take time and may fail. In a modern microservice architecture, the need to give feedback can arise in any of many services. A robust design requires a dedicated service for messaging.
I faced this issue on an ambitious project for one of our customers. The customer had an Apache Kafka infrastructure on site. I figured it could be the ideal tool to asynchronously send e-mail from various parts of our software stack.
As described on its website, "Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications." In other words, Kafka is message queue software on steroids. Its flexibility makes it perfect for both simple and complex projects, and it has therefore acquired vast popularity in the IT departments of influential companies.
The goal of this article is not to explain how Kafka works; that is by itself a topic for a whole book. I recommend "Kafka: The Definitive Guide" by Narkhede, Shapira, and Palino. If you have no experience with Kafka, here are a few definitions of its concepts: a broker is a Kafka server that stores records; a topic is a named feed to which records are published; a producer writes records to a topic, while a consumer reads them; and each consumer tracks its position in a topic with an offset.
Our application is composed of several microservices. Each of them has its own responsibility, whether it is managing permissions, customer data, or products. The strength of using Kafka is that it permits several of our microservices to send notifications by pushing messages to a single Kafka topic. On the other end of the queue, a single Spring Boot application is responsible for handling the requests for e-mails of our whole application.
Concerning the Spring Boot application itself, I generated a pom.xml from the automated generation tool, including Kafka, emailing, and Thymeleaf. Alternatively, you can include the following in the dependencies section:
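The dependencies section would then contain entries along these lines (versions are assumed to be managed by the Spring Boot parent POM):

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
```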
In an actual application using a micro-service architecture, the request for an e-mail notification could come from any service. In this example, I set up a simple Spring Boot controller directly in the messaging service. Using Kafka in such a situation is, of course, ridiculous, but will serve my demonstration purpose.
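Such a controller could be sketched as follows; the endpoint path, topic name, and EmailRequest type are hypothetical:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EmailController {

    private static final String TOPIC = "email-notifications"; // assumed name

    private final KafkaTemplate<String, EmailRequest> kafkaTemplate;

    public EmailController(KafkaTemplate<String, EmailRequest> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Push the e-mail request onto the topic instead of sending it
    // synchronously; the consumer handles the actual delivery.
    @PostMapping("/email")
    public void requestEmail(@RequestBody EmailRequest request) {
        kafkaTemplate.send(TOPIC, request.recipient(), request);
    }
}
```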
Note: I used the YAML format for this project, but it should work likewise with a properties file. In a local environment, I use the plain-text security protocol. Naturally, in production, it is advisable to use SSL and a proper configuration of security keys.
The serialization formats are set in the spring.kafka.producer section. I use simple string keys and JSON for the body of the messages. For deserialization, we must set the matching formats. Please note that instead of directly using the StringDeserializer and JsonDeserializer, we use the ErrorHandlingDeserializer class and configure the delegates in its dedicated section. This is extremely effective at avoiding poison-pill situations: if a corrupted object is pushed to the Kafka topic, deserialization will fail with an exception, and the consumer will retry, yield the same result, and effectively block the partition. This class handles deserialization errors for us. More detail about poison pills and deserialization error management can be found in the following article.
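A consumer configuration along these lines might look like the following; the delegate classes mirror the string/JSON formats described above:

```yaml
spring:
  kafka:
    consumer:
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
```

On a deserialization failure, the ErrorHandlingDeserializer returns null and passes the exception to the container's error handler instead of looping forever on the bad record.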