[SCENARIO] Worker-offload (or cooperating consumers)

Brief Description

A microservice performs work requested asynchronously using messaging. Each request is represented by a message published on a topic, a named logical channel for message distribution. The microservice consumes the messages from the topic and performs the work. Multiple instances of the microservice can cooperatively consume the messages from the topic, with each message being processed by one instance of the microservice.

In JMS terms, the cooperating consumers would use a shared durable subscription. The shared subscription acts like a queue, but with the flexibility of publish/subscribe.

By using a topic as opposed to a queue, the same messages can be used for other purposes too. For example, an auditing microservice can consume the same messages using a different subscription.

Extension: Responses to work requests

Although it increases the coupling between requester and worker, it may be desirable for the microservice to send a response to each request. For the sake of resilience, it is preferable if the consumption of responses can also be handled by any of the requesters, as opposed to a single, specific requester. Otherwise, the degree of coupling between requester and worker approaches that of a synchronous call.
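As a concrete (if simplified) illustration of the cooperating-consumers semantics, the sketch below models a shared subscription as an in-memory queue drained by several worker instances, so each message is processed by exactly one of them. This is a plain-Java analogue, not JMS code; with a real broker the queue would be replaced by a shared durable subscription (in JMS 2.0, `JMSContext.createSharedDurableConsumer`).

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

/** In-memory sketch of cooperating consumers: several worker instances
 *  share one subscription, and each message is processed by one instance. */
public class WorkerOffload {
    static final String POISON = "STOP";   // shutdown marker, one per worker

    public static Set<String> run(int workers, int messages) throws InterruptedException {
        BlockingQueue<String> subscription = new LinkedBlockingQueue<>();
        Set<String> processed = ConcurrentHashMap.newKeySet();

        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                try {
                    String msg;
                    while (!(msg = subscription.take()).equals(POISON)) {
                        processed.add(msg);          // "perform the work"
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        for (int i = 0; i < messages; i++) subscription.put("work-" + i);
        for (int w = 0; w < workers; w++) subscription.put(POISON);

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return processed;   // one entry per message: none lost, none duplicated
    }
}
```

Each `take()` removes the message from the shared queue, which is what gives the queue-like "processed by one instance" behaviour while still allowing an independent subscription (e.g. the auditing service) to receive its own copy from the broker.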
Extension: Message ordering and concurrency

When a single microservice instance is performing the work, maintaining order is simple. Once concurrent execution of multiple workers comes into play, it is much more complicated. A pragmatic way of maintaining order where appropriate, while permitting as much concurrency as possible, is for the requesters to indicate which streams of messages have an ordering relationship by using some kind of identifier. The messaging infrastructure can then ensure that messages are processed in order where it matters, while maximising concurrency where it does not.
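One way to realise that identifier is as a partition key: messages sharing an ordering key are routed to the same lane, and each lane is consumed sequentially by one worker. Kafka does this by hashing the message key (with murmur2) to choose a partition; the modulo hash below is just an illustrative stand-in.

```java
/** Sketch: route messages that share an ordering key to the same lane,
 *  so one consumer sees them in order while unrelated keys proceed in
 *  parallel on the other lanes. */
public class OrderedRouting {
    /** Deterministically maps an ordering key to one of n lanes (partitions). */
    public static int laneFor(String orderingKey, int lanes) {
        return Math.floorMod(orderingKey.hashCode(), lanes);
    }
}
```

Messages for, say, order `1234` always land in the same lane and are handled sequentially by whichever instance owns that lane; other orders spread across the remaining lanes.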
Hi,

> ...if you think of processing a message in terms of a transformation where service A emits a stream of messages, service B transforms them, emitting a new stream, and service A consumes that new stream, the requirement for messages to be able to be handled by any node becomes implicit.

I like the transformer analogy. I can imagine the concept being extended to include a notion of routing messages, where a request is routed through multiple transformers between a source and a sink. Request-response can then be expressed as a pipeline where the sink is the same as the source. However, if the sink is a public endpoint accessible via a synchronous interface (a public REST service or a web frontend) or a stateful protocol (SSE or WebSocket), it is not easy to ensure that the messages can be handled by any source node. The messages must be routed to the very node that has access to the context of the original request. Am I wrong?
An example: service A is a REST endpoint, and service B is a transformer that consumes messages from A and emits transformed messages back to A. If a client issues a REST call against service A, the client expects a response from the same node. Service A sends a message to B, which emits a message for A. If the final message can be received by any node of service A, that node can only send the REST response if it is the original node. If it is not, it does not have access to the REST client and must forward the message to the original node, which can then send the REST response back to the client.
--Ondrej
On Friday, 16 December 2016 at 01:50:05 UTC+1, James Roper wrote:

> On Thursday, December 15, 2016 at 9:58:10 PM UTC+11, andrew_s...@uk.ibm.com wrote:
>
> > [quoted scenario and "Responses to work requests" extension snipped; see above]
>
> From what I've encountered, services tend to interact with messaging APIs in one of three ways: as sources, as sinks, or as transformers. The initial use case you've described is a sink (with an implied source somewhere). As for this extension, rather than viewing it as a sink of requests that sends responses, I think in many cases it may be better to view it as a (possibly side-effecting) transformation of messages.
> While the two descriptions cover the same process (we're just transforming requests into responses), if you think of processing a message in terms of a transformation where service A emits a stream of messages, service B transforms them, emitting a new stream, and service A consumes that new stream, the requirement for messages to be able to be handled by any node becomes implicit. That is to say, take the "request/response" concept out of there and you get something that feels more natural and is implicitly resilient, and I think using this terminology will help guide developers to implement their services in such a way.
>
> So for example, the order service emits messages to say that an order has been submitted. A payment service will process these and will emit success or failure messages, which will be consumed by the order service. Effectively a request/response has happened, but by not using the request/response terminology, developers will naturally think more towards location-independent messaging.
>
> > [quoted "Message ordering and concurrency" extension snipped; see above]
>
> I think this requirement is going to be across the board for almost all scenarios.
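The order/payment example can be sketched as a pure transformation over a stream of events: any instance of the payment service can apply the function, which is exactly what makes the style implicitly resilient. A minimal sketch, with hypothetical event types:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

/** The payment service viewed as a transformer: it consumes a stream of
 *  OrderSubmitted events and emits a new stream of PaymentResult events.
 *  There is no request/response coupling: any instance can apply PROCESS. */
public class PaymentTransformer {
    record OrderSubmitted(String orderId, double amount) {}
    record PaymentResult(String orderId, boolean success) {}

    // A real service would charge a card here (a side effect); a stateless
    // rule stands in for it so the sketch stays self-contained.
    static final Function<OrderSubmitted, PaymentResult> PROCESS =
        order -> new PaymentResult(order.orderId(), order.amount() <= 1000.0);

    public static List<PaymentResult> transform(List<OrderSubmitted> orders) {
        return orders.stream().map(PROCESS).collect(Collectors.toList());
    }
}
```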
--
You received this message because you are subscribed to a topic in the Google Groups "MicroProfile" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/microprofile/0yYyOdVrZ_c/unsubscribe.
To unsubscribe from this group and all its topics, send an email to microprofile+unsubscribe@googlegroups.com.
To post to this group, send email to microp...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/microprofile/01c4cbac-4340-499d-8628-e1721d6e7e1b%40googlegroups.com.
On 17 December 2016 at 09:27, Ondrej Mihályi <ondrej....@gmail.com> wrote:

> However, if the sink is a public endpoint accessible by a synchronous interface (public REST service or a web frontend) or stateful protocol (SSE or WebSocket), it is not easy to ensure that the messages can be handled by any source node. It is necessary that the messages are routed to the very node that has access to the context of the original request. Am I wrong?

That's correct. The use cases I was thinking of specifically targeting were ones that support asynchronous messaging: asynchronous in the sense that it does not require the sending and receiving ends to be participating in the communication at the same time.
> An example: Service A is a REST endpoint, service B is a transformer that consumes messages from A and emits transformed messages back to A. [...]

I would consider a chaining of requests like that to be an anti-pattern. Microservices should, as much as practical, be autonomous. A system of services where services call other services in order to fulfill their own requests is better described as a distributed monolith, and in fact it behaves worse than a monolith, since all the network hops between services add latency and higher error rates.
[SCENARIO] Flow control and back pressure

Description - flow control:

Service A needs to retrieve a list of entities from service B and filter it. It only needs the first N of the filtered entities. Clearly, the number of entities to retrieve from service B is unknown at the beginning. Service A issues a message to start retrieval of the entities. Service B will start streaming messages, one for each entity. Service A consumes the messages from B and filters the data. When service A has received enough entities that match the filter, it will ignore subsequent messages. At this point, service B needs to be notified that no further messages will be consumed and therefore may stop streaming data. Service A emits a "terminate" message to inform B that no more data is needed.

A similar scenario applies when the initial request is cancelled by the user/client (e.g. when the browser navigates away from the current page, which is listening to SSE events).
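This scenario maps directly onto Reactive Streams back pressure: service A requests items only as it is ready for them and cancels the subscription once it has enough. A minimal sketch using JDK 9's `java.util.concurrent.Flow` (the class and field names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.function.Predicate;

/** Pulls one item at a time and cancels upstream once it has N matches;
 *  the cancellation plays the role of the scenario's "terminate" message. */
public class FirstNMatching<T> implements Flow.Subscriber<T> {
    private final Predicate<T> filter;
    private final int wanted;
    public final List<T> results = new ArrayList<>();
    public final CountDownLatch done = new CountDownLatch(1);
    private Flow.Subscription subscription;

    public FirstNMatching(Predicate<T> filter, int wanted) {
        this.filter = filter;
        this.wanted = wanted;
    }

    @Override public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        s.request(1);                      // back pressure: ask for one item
    }

    @Override public void onNext(T item) {
        if (filter.test(item)) results.add(item);
        if (results.size() >= wanted) {
            subscription.cancel();         // service B can stop streaming now
            done.countDown();
        } else {
            subscription.request(1);       // still filtering: one more, please
        }
    }

    @Override public void onError(Throwable t) { done.countDown(); }
    @Override public void onComplete()        { done.countDown(); }
}
```

Fed from a `SubmissionPublisher` of 1..100 with a filter for even numbers and N = 3, the subscriber collects 2, 4 and 6 and then cancels, so the publisher's remaining items are never requested.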
My view is that Apache Kafka makes a really good starting point for messaging for MicroProfile. I wouldn't suggest that MicroProfile would adopt the Kafka APIs directly. For one thing, I've seen plenty of people struggle with the APIs and the myriad configuration options. It does however give a nice, concrete example that's used widely.
I would start with the Kafka Streams API (https://kafka.apache.org/documentation/streams) which is a client library for processing streams of messages. A Kafka Streams application is a long-running program that processes streams of messages. The processing logic of a Kafka Streams application consists of a processor topology - basically a directed graph of processor nodes. At the incoming edge of the topology are source processors which read from topics, and at the outgoing edge are sink processors which write to topics. The intervening processor nodes can filter, map, transform, or whatever.
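The shape of such a topology can be sketched without any Kafka dependency; the real KStream DSL chains the same steps, along the lines of `builder.stream("orders").filter(...).mapValues(...).to("orders-processed")`. Topic names below are made up:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** A miniature processor topology: a source reads from an input topic,
 *  intermediate processors filter and transform, and a sink writes the
 *  results to an output topic. */
public class MiniTopology {
    public static void run(Map<String, List<String>> topics) {
        List<String> out = topics.get("orders").stream()   // source processor
            .filter(v -> v.startsWith("submitted:"))       // drop other events
            .map(String::toUpperCase)                      // transform each message
            .collect(Collectors.toList());
        topics.get("orders-processed").addAll(out);        // sink processor
    }
}
```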
I think this is at a much nicer level of abstraction than the Kafka Consumer API, where you actually make calls to poll for batches of messages. It's much nicer to define the code to handle a message and have the library worry about getting the messages.
You can run multiple instances of a Kafka Streams application on a multi-partition topic and the partitions are distributed among the running instances. If an instance crashes, the remaining instances take over that instance's partitions. So, you can scale it up, do rolling upgrades and so on. That's great, but I consider it to be a detail of the implementation, rather than part of a messaging API.
Kafka Streams includes the idea that a stream of events can be viewed as a table, and vice versa. That's really nice, but I suggest putting that aside and looking at the KStream part of the API.
If you are taking a microservices approach and want to use messaging to wire it all together, I think Kafka Streams is a good place to start.
I think you expressed it better than me. I don't want to invent a new stream processing API, but if we can feed messages into and out of existing frameworks in an elegant way, that would be good. Reactive Streams looks like an interesting option.

I don't agree about trying to span transactions across resource managers, which I think is the implication of trying to ensure a message is published if a database update succeeds.
I spent a bit of time asking around about real-world examples of people using messaging with microservices. The response was that we've not seen all that much use of messaging and proper microservices yet. The signal's not that strong for what people are actually using and we often don't get close enough to see the actual code being written.
I've worked with a company where they divided their applications into ones that needed exactly-once delivery with two-phase commit and ones that could cope with at-least-once. They used different messaging solutions for the two cases. The latter were using a non-transactional, eventually-consistent database which is at best atomic to the level of a single document only. In terms of the API calls, that would be publish after update. This was Node.js and Kafka.
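A minimal sketch of that publish-after-update call order, with in-memory stand-ins for the document store and the topic. The point is that the two steps are not atomic: a retry after a failed publish re-emits the event, which is why the consumers need to tolerate at-least-once delivery and the update needs to be idempotent.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the non-transactional "publish after update" pattern:
 *  first write the document store, then publish the event. On failure
 *  the whole operation is retried, so duplicates are possible. */
public class PublishAfterUpdate {
    final Map<String, String> store = new HashMap<>();   // stand-in document store
    final List<String> topic = new ArrayList<>();        // stand-in message topic

    public void handle(String docId, String newState) {
        store.put(docId, newState);                      // 1. update (idempotent)
        topic.add(docId + ":" + newState);               // 2. publish the event
    }
}
```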
Another company I worked with were using Akka and Kafka, implementing an API by using microservices triggered by messages. Definitely not transactional.
We definitely see a variety of languages and runtimes being used for new code. Node.js is the one that gets most attention, probably next are Java (not JEE), Python and Go.
I don't think it's the case that everyone uses Kafka exclusively. At the open-source end of things, I see a lot of ActiveMQ and a bit less RabbitMQ.
Hi Ted,If producers/consumers are talking directly to each other, what happens when all instances of a consumer are down for some reason? Does this cause the failure to propagate to the producer, or is the producer isolated from the consumers failure?One of the biggest problems that we find with microservices is that when you move from a monolith to many services, and you have temporal coupling between your services, the downtime ends up being the aggregate of the downtimes of all the services, which is far greater than the downtime of a single monolith. This is why a brokered messaging system can be desirable.Regards,James
On 15 June 2017 at 06:15, <tr...@redhat.com> wrote:
I'd like to toss a different perspective into this discussion of messaging for microservices by mentioning the AMQP protocol and AMQP routing.
In many microservice scenarios, the extra transactional exchange of a brokered messaging system (JMS, Kafka, etc.) is not necessary and in fact complicates the whole thing. AMQP routing (see the Apache Qpid Dispatch Router project) provides a scalable network at layer 4 (TCP/IP) but allows direct transactional exchange between any two endpoints at layer 7 (AMQP). In a scenario where there are multiple instances of a consumer/service, a producer/client transacts directly with the service instance and knows when the request has been received and processed.
The benefits of scale are also provided without the need to manage topic replicas. Because the AMQP protocol formally handles the settlement of deliveries (in a transactional way), AMQP intermediaries (routers) are aware of the current state of a delivery and are also aware of how many outstanding deliveries are being handled by each service/consumer instance. This allows an AMQP router network to choose a service instance based on its present rate of settlement (i.e. request completion) which is a far more effective method of load balancing than round robin or using out-of-band health checks.
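The selection rule described can be approximated as "fewest outstanding unsettled deliveries wins". This is a simplification of what a router actually tracks, and the names are illustrative:

```java
import java.util.Map;

/** Sketch: pick the consumer instance with the fewest outstanding
 *  (unsettled) deliveries, approximating settlement-based balancing:
 *  instances that settle quickly stay lightly loaded and attract work. */
public class SettlementBalancer {
    public static String pick(Map<String, Integer> outstandingByInstance) {
        return outstandingByInstance.entrySet().stream()
            .min(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElseThrow(() -> new IllegalStateException("no instances"));
    }
}
```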
The management of settlement also promises to aid in the smooth increase and decrease of service scale. If a service instance is no longer needed, the AMQP network can quiesce it by taking it out of routing consideration while it completes its backlog of unsettled requests. The network can then indicate when it is safe to shut down an instance without losing messages.
-Ted Ross
Engineering, Red Hat Inc.
On Thursday, December 15, 2016 at 5:55:15 AM UTC-5, andrew_s...@uk.ibm.com wrote:

> I'd like to kick off a discussion specifically on the scenarios for messaging we expect to see in a microservices architecture. I don't want to get bogged down in API discussions at this point. So, I'll kick this off with a short description of one way of using messaging that I see as appropriate for microservices architectures. I've started with a view that the messaging is "in front" of the microservices, so the messages are kind of commands. I think it's also entirely sensible to view messaging as "behind" the microservices, which I'd characterise as using the messages to represent events.
>
> Comments and additional scenarios are welcome.
>
> Andrew Schofield
> Event Services, IBM Watson and Cloud Platform
I also think using Reactive Streams looks like an interesting option for messaging and wiring things together with different frameworks, particularly given the inclusion of Flow in JDK 9. The request granting looks like a good way to regulate transmission of messages, and I like the ideas outlined previously around using either a Processor or an envelope-based explicit ack/commit to convey that handling (be it receiving confirmation of a send, or completing handling of a received message) has been completed, allowing for at-least-once semantics through different processing stages.
In terms of the envelope idea, that also seems like a good fit for carrying content and related message metadata, supported by many systems/protocols, together as a unit through processing. There is a question for me around how much of that is implementation-specific detail and how much could usefully be handled in a common way to aid interop across implementations.
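A sketch of that envelope idea: the payload and transport metadata travel together, and the consumer explicitly acks once handling completes. All names here are hypothetical, not a proposed API:

```java
import java.util.Map;

/** Sketch of an ack-carrying envelope: payload plus transport metadata
 *  travel together, and ack() commits once handling is complete, giving
 *  at-least-once semantics (unacked messages would be redelivered). */
public class AckEnvelope<T> {
    public final T payload;
    public final Map<String, String> metadata;   // e.g. offsets, message ids
    private final Runnable onAck;                // commit callback from transport
    private boolean acked = false;

    public AckEnvelope(T payload, Map<String, String> metadata, Runnable onAck) {
        this.payload = payload;
        this.metadata = metadata;
        this.onAck = onAck;
    }

    /** Call after the message has been fully handled; safe to call twice. */
    public void ack() {
        if (!acked) { acked = true; onAck.run(); }
    }
}
```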