Messaging beyond JMS


James Roper

Oct 17, 2016, 8:55:30 PM
to MicroProfile
Hi all,

One thing we've found about microservices architectures is that asynchronous messaging (either p2p or through a broker) needs to become a first class communication mechanism, used just as much as, if not more than, REST. In fact, many services deployed in a microservices platform may communicate solely using messaging, and have no REST interface at all. I don't think JMS is up to what microservices demand. So I'd like to talk about messaging beyond JMS. There are quite a number of different aspects here, so let's see how we go.

Higher level abstractions

JMS allows you to work with text or binary messages, plus a few other types, but conceptually, no one actually sends text or binary messages; they send higher level model objects that are serialized to text or binary. The JMS API could be seen as the HttpServletRequest/Response of messaging: it's a low level API that isn't suitable for high level programming. Just as JAX-RS is the high level API on top of HttpServletRequest/Response that handles the serialization and deserialization of request and response messages (among other things), modern microservice frameworks need to provide a mechanism for transparently serializing and deserializing messages sent through a message broker or point to point. I think there is a need for a JAX-RS like API for messaging.
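To make that concrete, here is a purely hypothetical sketch of what such an API could look like. None of these annotations or types exist in any standard today; the point is only that the handler works with domain objects while the framework owns the bytes:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical JAX-RS-like messaging API: the framework, not the user,
// handles (de)serialization, and the handler method receives domain objects.
public class MessagingApiSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Handles {
        String topic();
    }

    // A domain message type; the framework would deserialize into it.
    static final class OrderPlaced {
        final String orderId;
        OrderPlaced(String orderId) { this.orderId = orderId; }
    }

    static class OrderHandler {
        String lastOrder;

        @Handles(topic = "orders")
        void onOrderPlaced(OrderPlaced event) {
            lastOrder = event.orderId; // business logic sees the object, not bytes
        }
    }

    public static void main(String[] args) {
        // A real container would discover the handler via the annotation and
        // wire in serialization; here we just invoke it with a deserialized object.
        OrderHandler handler = new OrderHandler();
        handler.onOrderPlaced(new OrderPlaced("o-7"));
        System.out.println(handler.lastOrder);
    }
}
```

A container would discover such handlers via the annotation and wire in the serialization, much as JAX-RS does for resource methods.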

An interesting consequence of this that we've found is that the serialization/deserialization technology used needs to have first class, idiomatic support for polymorphic domain objects, because very often a single message stream will carry many different types of messages that are subtypes of one parent type. We've found this is almost always the case in messaging, whereas in REST it's relatively rare.
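For example, a stream of order events might carry several subtypes of one parent type, so the wire format needs a discriminator that lets the consumer pick the right subtype. A hand-rolled sketch of the idea (in practice a serialization library such as Jackson does this from annotations like @JsonTypeInfo; the event types and wire format here are made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// One stream, many message types: a hand-rolled "type" discriminator.
public class PolymorphicEnvelope {
    interface OrderEvent {}
    static final class OrderPlaced implements OrderEvent {
        final String orderId;
        OrderPlaced(String orderId) { this.orderId = orderId; }
    }
    static final class OrderCancelled implements OrderEvent {
        final String orderId;
        OrderCancelled(String orderId) { this.orderId = orderId; }
    }

    // Registry mapping the wire-level discriminator to a deserializer.
    static final Map<String, Function<String, OrderEvent>> TYPES = new HashMap<>();
    static {
        TYPES.put("order-placed", OrderPlaced::new);
        TYPES.put("order-cancelled", OrderCancelled::new);
    }

    // A "serialized" message is just discriminator + payload here.
    static String serialize(OrderEvent event) {
        if (event instanceof OrderPlaced) return "order-placed:" + ((OrderPlaced) event).orderId;
        return "order-cancelled:" + ((OrderCancelled) event).orderId;
    }

    static OrderEvent deserialize(String wire) {
        int i = wire.indexOf(':');
        return TYPES.get(wire.substring(0, i)).apply(wire.substring(i + 1));
    }

    public static void main(String[] args) {
        OrderEvent e = deserialize(serialize(new OrderPlaced("o-1")));
        System.out.println(e.getClass().getSimpleName());
    }
}
```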

Support for modern messaging brokers

One of the most common message brokers we see in use today with microservices is Kafka, and similar technologies such as AWS Kinesis are also gaining popularity.  These differ from many traditional message brokers that map to JMS, in the following ways:

* There are no transactions, and definitely no distributed transactions. Transactions are typically used to guarantee exactly once message processing. These modern message brokers, however, do not offer exactly once delivery; they offer at least once delivery, which means transactions don't give you anything: even if a publisher ensures that it only publishes each message once, it could still arrive at a consumer twice. The upshot of this is that what works for message handling when using transactions isn't necessarily a good fit for at least once messaging, and APIs may need to be adjusted accordingly.
* Pub-sub is under the control of consumers. In traditional message brokers, you configure pub-sub in the broker by creating a queue for each consumer and routing messages appropriately. In Kafka and similar technologies, the consumer is in control here: a consumer can consume any message stream without impacting other consumers, and consumers can form groups that ensure messages are distributed among the members of the group. One consequence of this is that you need consumer side APIs for specifying/joining these groups.
* Partitioning. These message brokers partition messages for scaling and load balancing, and if you want any ordering guarantees (you usually do), then the producer needs to control how messages are partitioned. This is done by the producer extracting a key from each message; that key is then hashed to select a partition.
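The producer-side partitioning mechanics can be sketched as follows. This uses plain hashCode for illustration (Kafka's default partitioner hashes with murmur2, but the shape is the same), and the order-event key extraction is a made-up example:

```java
import java.util.function.Function;

// Producer-side partitioning: extract a key from the message and hash it to a
// partition, so all messages for the same key land on the same partition and
// per-key ordering is preserved.
public class Partitioning {
    static <M> int partitionFor(M message, Function<M, String> keyExtractor, int numPartitions) {
        String key = keyExtractor.apply(message);
        // Mask off the sign bit rather than Math.abs, which breaks on Integer.MIN_VALUE
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Hypothetical example: partition order events by the order id prefix.
        Function<String, String> byOrderId = e -> e.split(":")[0];
        int p1 = partitionFor("order-42:placed", byOrderId, 16);
        int p2 = partitionFor("order-42:shipped", byOrderId, 16);
        System.out.println(p1 == p2); // same key, so same partition
    }
}
```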

We've found that these need to be first class concepts in the API for successful use in a microservices architecture.

Streams integration

Sources and sinks for message streams will often come from another API. For example, if using CQRS, very often your source of messages to publish to a broker will be a CQRS read side stream. A microservices messaging solution needs to be compatible with different streaming sources and sinks, so that end users don't need to implement their own adapters between these technologies (which can be very difficult to do, especially if they want to implement robust back pressure propagation). Hence, such a messaging API should use a common interface for streaming, and of course Reactive Streams/JDK9 Flow is the prime candidate here.
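As a minimal illustration of that common interface, here is a JDK 9 Flow source plumbed into a collecting sink, with back pressure carried by Subscription.request(n). A broker adapter would stand in for either side:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowPlumbing {
    // Subscribe a simple collecting sink to the given source, push the items
    // through it, and return what the sink received.
    static List<String> collect(SubmissionPublisher<String> source, List<String> items) {
        CountDownLatch done = new CountDownLatch(1);
        List<String> received = new ArrayList<>();
        source.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // back pressure: ask for one element at a time
            }
            public void onNext(String item) {
                received.add(item);
                subscription.request(1);
            }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete() { done.countDown(); }
        });
        items.forEach(source::submit);
        source.close();
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(collect(new SubmissionPublisher<>(), List.of("a", "b", "c")));
    }
}
```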

Distribution

When plumbing streams together from different libraries to a message broker, distribution needs to be considered, and in our experience it ends up being a first class concept in the end user APIs. A single service may consist of many nodes; publishing the stream from every node is not usually desirable, since that means each message will be published once for each node doing the publishing. Sometimes you want a singleton node doing the publishing; sometimes, if the source stream is sharded, you want to distribute the shards across the cluster so that the publishing load is shared by all nodes in the service. We've found that end user APIs need to give the user control over this in order to implement it successfully.



So, I've said a lot here, I'm interested in whether people agree with my assessment or not.

--
James Roper
Software Engineer

Lightbend – Build reactive apps!
Twitter: @jroper

Mark Little

Oct 18, 2016, 4:27:42 AM
to James Roper, MicroProfile
Hi James.

Thanks for kicking this one off. It’s related to an action item I had coming off the back of the JavaOne BOF we did, which I hadn’t gotten around to yet, so it’s well timed too. Just for completeness, and before I jump into some of your text: I’d said I would kick off a discussion about supporting binary protocols for communication that go beyond HTTP/2. The reason for this is that despite the fact that I understand why REST/HTTP (or more typically just HTTP) is the preferred way in which people talk about communication between microservices, it has its limitations, one of which is performance. As microservices push us (back) towards distributed systems, and they become chattier, the performance of whatever protocol they use becomes an important consideration, and whilst HTTP is a convenient approach, it’s not something I believe many of Red Hat’s customers will want to use to the exclusion of all else. And this probably goes to the heart of my response here: I don’t believe one size fits all in general, and definitely not with distributed communications. So whilst I expect REST/HTTP and JMS to be something some microservices will want to use, I’m sure there will be other implementations in play as well.

On 18 Oct 2016, at 01:55, James Roper <ja...@lightbend.com> wrote:

Hi all,

One thing we've found about microservices architectures is that asynchronous messaging (either p2p or through a broker) needs to become a first class communication mechanism, used just as much as, if not more than, REST.

I agree though I’d be cautious about using the term REST, which is why I called out REST/HTTP and HTTP. I suspect you mean REST/HTTP here rather than just REST because whilst the latter is an architectural approach which doesn’t imply synchronous or asynchronous behaviour, the former is a specific implementation of that architecture (which again does support asynchronous behaviour).

  In fact many services deployed in a microservices platform may communicate solely using messaging, and have no REST interface at all.  I don't think JMS is up to what microservices demand.

I disagree and yet I would agree if the statement was “I don’t think JMS is up to what all microservices demand”.

  So I'd like to talk about messaging beyond JMS.  There are quite a number of different aspects here, so let's see how we go.

Higher level abstractions

JMS allows you to work with text or binary messages, plus a few other types, but conceptually, no one actually sends text or binary messages, they send a higher level model objects that are serialized to text or binary. The JMS API could be seen as the HttpServletRequest/Response of messaging, it's a low level API that isn't suitable for high level programming.  Just as JAX-RS is the high level API on top of HttpServletRequest/Response that handles the serialization and deserialization of request and response messages (among other things), modern microservice frameworks need to provide a mechanism for transparent handling of serializing and deserializing of messages sent through a message broker or point to point.  I think there is a need for a JAX-RS like API for messaging.

We tried it in the early 2000s and the result was the ESB :) Most successful ESBs started with an abstraction for messaging (synchronous and/or asynchronous). Ignoring the JBI standard, which came out of the JCP during that time, there really hasn’t been a lot of effort to standardise this, and I’m not sure if that’s because of lack of interest or because SOAP came along and people got confused between that and what ESBs were attempting to do (ESB != SOAP, but if you ask people these days many of them seem to believe they are equivalent).

Perhaps a more tractable option is not to try to create an API for all messaging approaches, but just for the one(s) that are sufficiently different from JMS, or from others that already have a suitable standard?


An interesting consequence of this that we've found is that the serialization/deserialization technology used needs to have first class, idiomatic support for polymorphic domain objects, because very often the one message stream will have many different types of messages that are sub types of one parent type - we've found this is almost always encountered in messaging, compared to REST where it's relatively more rare.

Support for modern messaging brokers

One of the most common message brokers we see in use today with microservices is Kafka, and similar technologies such as AWS Kinesis are also gaining popularity.  These differ from many traditional message brokers that map to JMS, in the following ways:

* There are no transactions, and definitely no distributed transactions. Transactions are typically used to guarantee exactly once message processing.

Whilst they can be used for that, I certainly wouldn’t say that’s their typical use case. Well, at least not what I’ve observed over the last 30+ years.

These modern message brokers, however, do not offer exactly once delivery; they offer at least once delivery, which means transactions don't give you anything: even if a publisher ensures that it only publishes each message once, it could still arrive at a consumer twice.

I wouldn’t mix transactions into this discussion just yet without at least defining what you mean by a transaction. Again I assume you mean an ACID transaction or perhaps being even more specific and XA?

  The upshot of this is that what works for message handling when using transactions isn't necessarily a good fit for at least once messaging, and APIs may need to be adjusted accordingly.

I agree that changes will likely be needed at a number of levels, including the types of transactions to be supported, if any :)

* Pub-sub is under the control of consumers. In traditional message brokers, you configure pub-sub in the broker by creating a queue for each consumer and routing messages appropriately. In Kafka and similar technologies, the consumer is in control here: a consumer can consume any message stream without impacting other consumers, and consumers can form groups that ensure messages are distributed among the members of the group. One consequence of this is that you need consumer side APIs for specifying/joining these groups.
* Partitioning. These message brokers partition messages for scaling and load balancing, and if you want any ordering guarantees (you usually do), then the producer needs to control how messages are partitioned. This is done by the producer extracting a key from each message; that key is then hashed to select a partition.

We've found that these need to be first class concepts in the API for successful use in a microservices architecture.

Streams integration

Sources and sinks for message streams will often come from another API. For example, if using CQRS, very often your source of messages to publish to a broker will be a CQRS read side stream. A microservices messaging solution needs to be compatible with different streaming sources and sinks, so that end users don't need to implement their own adapters between these technologies (which can be very difficult to do, especially if they want to implement robust back pressure propagation). Hence, such a messaging API should use a common interface for streaming, and of course Reactive Streams/JDK9 Flow is the prime candidate here.

Distribution

When plumbing streams together from different libraries to a message broker, distribution needs to be considered, and in our experience it ends up being a first class concept in the end user APIs. A single service may consist of many nodes; publishing the stream from every node is not usually desirable, since that means each message will be published once for each node doing the publishing. Sometimes you want a singleton node doing the publishing; sometimes, if the source stream is sharded, you want to distribute the shards across the cluster so that the publishing load is shared by all nodes in the service. We've found that end user APIs need to give the user control over this in order to implement it successfully.



So, I've said a lot here, I'm interested in whether people agree with my assessment or not.

I’ll encourage Red Hat’s messaging team to jump in here too and try to give a more detailed response. It would be good to start with a specific use case and grow the discussion around that. We’ve also got our conference application, which we’re using to try to demonstrate these use cases, so perhaps think about how that might be changed to accommodate them?

Mark.


--
James Roper
Software Engineer

Lightbend – Build reactive apps!
Twitter: @jroper

--
You received this message because you are subscribed to the Google Groups "MicroProfile" group.
To unsubscribe from this group and stop receiving emails from it, send an email to microprofile...@googlegroups.com.
To post to this group, send email to microp...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/microprofile/CABY0rKP6J_RoVQ%2Br4DRXvGiUVwQHB7FiUXF7ROdFEOO%2BMy_LPg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

John D. Ament

Oct 18, 2016, 6:47:58 PM
to MicroProfile
I take the JMS argument a bit passionately, and am currently not happy with Oracle's decision.

It seems like JMS is misunderstood - which is ironic, as I don't think I had as good an understanding of it 6 years ago when I first worked on a JMS related project with the JBoss guys.

JMS is purely a client API. It defines some expectations for message headers and features of brokers, but that's it. It doesn't require protocols, or interactions, or clustering. It's simply "you must support P2P and PubSub via this client API." By not tying in a protocol, it makes leveraging components such as Amazon SQS pretty straightforward (and for what it's worth, Amazon SQS does have a JMS client library). The JMS 1.x team built an API that was customary for its point in time. Back then, you dealt a lot with resources, manually opening and closing things, and heavyweight transactions on the container. You didn't deal with fluent APIs. All of that changed in JMS 2.0.

Messaging is pretty critical to cloud based applications. Event streaming is simply a design paradigm derived from decoupled messaging. It's often implemented on top of AMQP brokers. Likewise, the entire IoT movement is powered by MQTT. Real time communication with these devices is not possible, but asynchronous polling and pushing? Great idea. The same goes for publishing cluster state: it's smarter to publish to a topic than it is to have all the clients poll for data in a database table.

James Roper

unread,
Oct 18, 2016, 9:22:25 PM10/18/16
to Mark Little, MicroProfile
On 18 October 2016 at 19:27, Mark Little <markc...@gmail.com> wrote:
Hi James.

Thanks for kicking this one off. It’s related to an action item I had coming off the back of the JavaOne BOF we did, which I hadn’t gotten around to yet, so it’s well timed too. Just for completeness, and before I jump into some of your text: I’d said I would kick off a discussion about supporting binary protocols for communication that go beyond HTTP/2. The reason for this is that despite the fact that I understand why REST/HTTP (or more typically just HTTP) is the preferred way in which people talk about communication between microservices, it has its limitations, one of which is performance. As microservices push us (back) towards distributed systems, and they become chattier, the performance of whatever protocol they use becomes an important consideration, and whilst HTTP is a convenient approach, it’s not something I believe many of Red Hat’s customers will want to use to the exclusion of all else. And this probably goes to the heart of my response here: I don’t believe one size fits all in general, and definitely not with distributed communications. So whilst I expect REST/HTTP and JMS to be something some microservices will want to use, I’m sure there will be other implementations in play as well.

On 18 Oct 2016, at 01:55, James Roper <ja...@lightbend.com> wrote:

Hi all,

One thing we've found about microservices architectures is that asynchronous messaging (either p2p or through a broker) needs to become a first class communication mechanism, used just as much as, if not more than, REST.

I agree though I’d be cautious about using the term REST, which is why I called out REST/HTTP and HTTP. I suspect you mean REST/HTTP here rather than just REST because whilst the latter is an architectural approach which doesn’t imply synchronous or asynchronous behaviour, the former is a specific implementation of that architecture (which again does support asynchronous behaviour).

Actually, the term I should have used is JAX-RS (or JAX-WS); this is the status quo for communication between services coming from a Java EE world, is it not? Both of these technologies are synchronous message passing technologies: they require both parties to be participating in the communication at the same time.

  In fact many services deployed in a microservices platform may communicate solely using messaging, and have no REST interface at all.  I don't think JMS is up to what microservices demand.

I disagree and yet I would agree if the statement was “I don’t think JMS is up to what all microservices demand”.

  So I'd like to talk about messaging beyond JMS.  There are quite a number of different aspects here, so let's see how we go.

Higher level abstractions

JMS allows you to work with text or binary messages, plus a few other types, but conceptually, no one actually sends text or binary messages; they send higher level model objects that are serialized to text or binary. The JMS API could be seen as the HttpServletRequest/Response of messaging: it's a low level API that isn't suitable for high level programming. Just as JAX-RS is the high level API on top of HttpServletRequest/Response that handles the serialization and deserialization of request and response messages (among other things), modern microservice frameworks need to provide a mechanism for transparently serializing and deserializing messages sent through a message broker or point to point. I think there is a need for a JAX-RS like API for messaging.

We tried it in the early 2000s and the result was the ESB :) Most successful ESBs started with an abstraction for messaging (synchronous and/or asynchronous). Ignoring the JBI standard, which came out of the JCP during that time, there really hasn’t been a lot of effort to standardise this, and I’m not sure if that’s because of lack of interest or because SOAP came along and people got confused between that and what ESBs were attempting to do (ESB != SOAP, but if you ask people these days many of them seem to believe they are equivalent).

Perhaps a more tractable option is not to try to create an API for all messaging approaches, but just for the one(s) that are sufficiently different from JMS, or from others that already have a suitable standard?


An interesting consequence of this that we've found is that the serialization/deserialization technology used needs to have first class, idiomatic support for polymorphic domain objects, because very often a single message stream will carry many different types of messages that are subtypes of one parent type. We've found this is almost always the case in messaging, whereas in REST it's relatively rare.

Support for modern messaging brokers

One of the most common message brokers we see in use today with microservices is Kafka, and similar technologies such as AWS Kinesis are also gaining popularity.  These differ from many traditional message brokers that map to JMS, in the following ways:

* There are no transactions, and definitely no distributed transactions. Transactions are typically used to guarantee exactly once message processing.

Whilst they can be used for that, I certainly wouldn’t say that’s their typical use case. Well, at least not what I’ve observed over the last 30+ years.

These modern message brokers, however, do not offer exactly once delivery; they offer at least once delivery, which means transactions don't give you anything: even if a publisher ensures that it only publishes each message once, it could still arrive at a consumer twice.

I wouldn’t mix transactions into this discussion just yet without at least defining what you mean by a transaction. Again I assume you mean an ACID transaction or perhaps being even more specific and XA?

When it comes to transactions on a messaging provider, it's mostly just a question of whether the message is consumed or not. Perhaps, if you want to maintain ordering while eagerly processing multiple messages at once, you may have some more complex transaction logic where one message failing causes all subsequent messages to roll back, but generally, in my experience, it's been primarily about a message either being consumed or not. In that context, it's mostly XA that I'm referring to: tying whether the message is consumed to whether the database transaction associated with the message processing is committed. Failure to tie these two transactions together results in at least once guarantees instead of exactly once (if you send the message inside the database transaction and confirm receipt of the message outside of the receiving end's transaction), or in at most once guarantees instead of exactly once (if you send the message outside of the sending side's database transaction, or confirm receipt inside/before the database transaction on the receiving side). So my argument is that if the message provider only gives you at least once guarantees in the first place, then there's no need for XA transactions between the message provider and the database: you can already achieve at least once guarantees without them, and you can't improve on at least once if that's all the message broker offers.
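To make the at least once point concrete, the consumer-side complement to XA is idempotent processing: record the ids of handled messages and skip redeliveries. A sketch, with an in-memory set standing in for a deduplication table that would in practice be updated in the same database transaction as the business side effect:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class IdempotentConsumer {
    private final Set<String> processed = new HashSet<>();
    private final List<String> effects = new ArrayList<>();

    // In a real system the check-and-insert below runs in the same local
    // database transaction as the side effect, so a redelivered message is
    // skipped even across restarts.
    void onMessage(String messageId, String payload) {
        if (!processed.add(messageId)) {
            return; // duplicate delivery: already handled
        }
        effects.add(payload); // stand-in for the business side effect
    }

    List<String> effects() { return effects; }

    public static void main(String[] args) {
        IdempotentConsumer consumer = new IdempotentConsumer();
        consumer.onMessage("m1", "debit $10");
        consumer.onMessage("m1", "debit $10"); // redelivered, ignored
        consumer.onMessage("m2", "credit $10");
        System.out.println(consumer.effects());
    }
}
```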
  The upshot of this is that what works for message handling when using transactions isn't necessarily a good fit for at least once messaging, and APIs may need to be adjusted accordingly.

I agree that changes will likely be needed at a number of levels, including the types of transactions to be supported, if any :)

* Pub-sub is under the control of consumers. In traditional message brokers, you configure pub-sub in the broker by creating a queue for each consumer and routing messages appropriately. In Kafka and similar technologies, the consumer is in control here: a consumer can consume any message stream without impacting other consumers, and consumers can form groups that ensure messages are distributed among the members of the group. One consequence of this is that you need consumer side APIs for specifying/joining these groups.
* Partitioning. These message brokers partition messages for scaling and load balancing, and if you want any ordering guarantees (you usually do), then the producer needs to control how messages are partitioned. This is done by the producer extracting a key from each message; that key is then hashed to select a partition.

We've found that these need to be first class concepts in the API for successful use in a microservices architecture.

Streams integration

Sources and sinks for message streams will often come from another API. For example, if using CQRS, very often your source of messages to publish to a broker will be a CQRS read side stream. A microservices messaging solution needs to be compatible with different streaming sources and sinks, so that end users don't need to implement their own adapters between these technologies (which can be very difficult to do, especially if they want to implement robust back pressure propagation). Hence, such a messaging API should use a common interface for streaming, and of course Reactive Streams/JDK9 Flow is the prime candidate here.

Distribution

When plumbing streams together from different libraries to a message broker, distribution needs to be considered, and in our experience it ends up being a first class concept in the end user APIs. A single service may consist of many nodes; publishing the stream from every node is not usually desirable, since that means each message will be published once for each node doing the publishing. Sometimes you want a singleton node doing the publishing; sometimes, if the source stream is sharded, you want to distribute the shards across the cluster so that the publishing load is shared by all nodes in the service. We've found that end user APIs need to give the user control over this in order to implement it successfully.



So, I've said a lot here, I'm interested in whether people agree with my assessment or not.

I’ll encourage Red Hat’s messaging team to jump in here too and try to give a more detailed response. It would be good to start with a specific use case and grow the discussion around that. We’ve also got our conference application, which we’re using to try to demonstrate these use cases, so perhaps think about how that might be changed to accommodate them?

Mark.


--
James Roper
Software Engineer

Lightbend – Build reactive apps!
Twitter: @jroper


Mark Little

Oct 19, 2016, 4:46:42 AM
to John D. Ament, MicroProfile
On 18 Oct 2016, at 23:47, John D. Ament <john.d...@gmail.com> wrote:

I take the JMS argument a bit passionately, and am currently not happy with Oracle's decision.

You’re not the only one. However, and for fear of seeming to hijack the thread, under the circumstances (which arguably they put themselves in) I can understand why they’ve made the change they did.


It seems like JMS is misunderstood

+1

- which is ironic, as I don't think I had as good an understanding of it 6 years ago when I first worked on a JMS related project with the JBoss guys.

JMS is purely a client API. It defines some expectations for message headers and features of brokers, but that's it. It doesn't require protocols, or interactions, or clustering. It's simply "you must support P2P and PubSub via this client API." By not tying in a protocol, it makes leveraging components such as Amazon SQS pretty straightforward (and for what it's worth, Amazon SQS does have a JMS client library). The JMS 1.x team built an API that was customary for its point in time. Back then, you dealt a lot with resources, manually opening and closing things, and heavyweight transactions on the container. You didn't deal with fluent APIs. All of that changed in JMS 2.0.

Messaging is pretty critical to cloud based applications. Event streaming is simply a design paradigm derived from decoupled messaging. It's often implemented on top of AMQP brokers. Likewise, the entire IoT movement is powered by MQTT. Real time communication with these devices is not possible, but asynchronous polling and pushing? Great idea. The same goes for publishing cluster state: it's smarter to publish to a topic than it is to have all the clients poll for data in a database table.

+1


On Monday, October 17, 2016 at 8:55:30 PM UTC-4, James Roper wrote:
Hi all,

One thing that we've found about microservices architecture is that asynchronous messaging (either p2p or through a broker) needs to become a first class communication mechanism, used just as much, if not more, than REST.  In fact many services deployed in a microservices platform may communicate solely using messaging, and have no REST interface at all.  I don't think JMS is up to what microservices demand.  So I'd like to talk about messaging beyond JMS.  There are quite a number of different aspects here, so let's see how we go.

Higher level abstractions

JMS allows you to work with text or binary messages, plus a few other types, but conceptually, no one actually sends text or binary messages, they send a higher level model objects that are serialized to text or binary. The JMS API could be seen as the HttpServletRequest/Response of messaging, it's a low level API that isn't suitable for high level programming.  Just as JAX-RS is the high level API on top of HttpServletRequest/Response that handles the serialization and deserialization of request and response messages (among other things), modern microservice frameworks need to provide a mechanism for transparent handling of serializing and deserializing of messages sent through a message broker or point to point.  I think there is a need for a JAX-RS like API for messaging.

An interesting consequence of this that we've found is that the serialization/deserialization technology used needs to have first class, idiomatic support for polymorphic domain objects, because very often the one message stream will have many different types of messages that are sub types of one parent type - we've found this is almost always encountered in messaging, compared to REST where it's relatively more rare.

Support for modern messaging brokers

One of the most common message brokers we see in use today with microservices is Kafka, and similar technologies such as AWS Kinesis are also gaining popularity.  These differ from many traditional message brokers that map to JMS, in the following ways:

* There are no transactions, and definitely no distributed transactions. Transactions are typically used to guarantee exactly once message processing. These modern message brokers however do not offer exactly once delivery, they offer at least once delivery, which means transactions don't give you anything, even if a publisher ensures that it only publishes each message once, it still could arrive at a consumer twice.  The upshot of this is that what works for messaging handling when using transactions isn't necessarily a good fit for at least once messaging, and APIs may need to be adjusted accordingly.
* Pub-sub is in the control of consumers.  In traditional message brokers, you configure pub-sub in the broker, by creating a queue for each consumer, and routing messages appropriately.  In Kafka and similar technologies, the consumer is in control here - a consumer can consume any message stream without impacting other consumers, and consumers can form groups that ensure that messages are distributed among the groups.  One consequence of this is that you need consumer side APIs for specifying/joining these groups.
* Partitioning. These message brokers partition messages for scaling and load balancing, and if you want any ordering guarantees (you usually do), the producer needs to control how messages are partitioned.  This is done by the producer extracting a key from each message; that key is then hashed to select a partition.

We've found that these concepts need to be first class concepts in the API for successful use in a microservices architecture.
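
The key-to-partition mapping in the partitioning point above can be sketched in a few lines. This is illustrative only; real brokers such as Kafka use their own hash functions:

```java
public class Partitioning {
    // Producer-side key extraction: a hash of the key picks the partition,
    // so all messages for one entity stay on one partition and stay ordered.
    static int partitionFor(String key, int partitionCount) {
        // floorMod guards against negative hashCode values
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        int partitions = 10;
        // Two messages with the same key always land on the same partition.
        System.out.println(partitionFor("auction-17", partitions)
                == partitionFor("auction-17", partitions));
    }
}
```

Ordering is therefore guaranteed only per key, never globally across partitions.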

Streams integration

Sources and sinks for message streams will often come from another API.  For example, if using CQRS, very often your source of messages to publish to a broker will be a CQRS read-side stream. A microservices messaging solution needs to be compatible with different streaming sources and sinks, so that end users don't need to implement their own adapters between these technologies (which can be very difficult to do, especially if they want to implement robust back pressure propagation).  Hence, such a messaging API should use a common interface for streaming, and of course Reactive Streams/JDK 9 Flow is the prime candidate here.
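
As a minimal illustration of the JDK 9 Flow API mentioned here, a SubmissionPublisher can stand in for a broker-backed source, with a subscriber that propagates back pressure by requesting one message at a time:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Minimal JDK 9 Flow plumbing, purely illustrative: in a real system the
// publisher side would be the broker adapter.
public class FlowSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder seen = new StringBuilder();

        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);              // back pressure: one at a time
                }
                public void onNext(String item) {
                    seen.append(item);
                    subscription.request(1);   // ready for the next message
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            publisher.submit("a");
            publisher.submit("b");
            publisher.submit("c");
        } // close() signals onComplete to the subscriber
        done.await();
        System.out.println(seen);
    }
}
```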

Distribution

When plumbing streams together from different libraries to a message broker, distribution needs to be considered, and in our experience it ends up being a first class concept in the end user APIs.  A single service may consist of many nodes, and publishing the stream from every node is not usually desirable, since that means each message will be published once for each node doing the publishing.  Sometimes you want a singleton node doing the publishing; sometimes, if the source stream is sharded, you want to distribute the shards across the cluster so that the publishing load is shared by all nodes in the service.  We've found that end user APIs need to give the user control over this in order to implement it successfully.
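
A toy sketch of that shard-distribution idea (the assignment scheme here is invented for illustration): N source partitions spread over the nodes of a service, so that each partition is published from exactly one node:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative only: assigns source partitions to service nodes round-robin,
// so every partition has exactly one publishing node.
public class ShardDistribution {
    static Map<Integer, List<Integer>> assign(int partitions, int nodes) {
        Map<Integer, List<Integer>> byNode = new TreeMap<>();
        for (int p = 0; p < partitions; p++) {
            byNode.computeIfAbsent(p % nodes, n -> new ArrayList<>()).add(p);
        }
        return byNode;
    }

    public static void main(String[] args) {
        // 10 partitions over 3 nodes: each node publishes only its shard.
        System.out.println(assign(10, 3));
    }
}
```

A real implementation (e.g. Akka cluster sharding, which Lagom uses) would also rebalance shards when nodes join or leave; that part is omitted here.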



So, I've said a lot here; I'm interested in whether people agree with my assessment or not.

--
James Roper
Software Engineer

Lightbend – Build reactive apps!
Twitter: @jroper

--
You received this message because you are subscribed to the Google Groups "MicroProfile" group.
To unsubscribe from this group and stop receiving emails from it, send an email to microprofile...@googlegroups.com.

To post to this group, send email to microp...@googlegroups.com.

Mark Little

Oct 19, 2016, 5:21:11 AM
to James Roper, MicroProfile
I’ll delete some text to try to keep it manageable …

On 19 Oct 2016, at 02:22, James Roper <ja...@lightbend.com> wrote:


On 18 Oct 2016, at 01:55, James Roper <ja...@lightbend.com> wrote:

Hi all,

One thing that we've found about microservices architecture is that asynchronous messaging (either p2p or through a broker) needs to become a first class communication mechanism, used just as much, if not more, than REST.

I agree though I’d be cautious about using the term REST, which is why I called out REST/HTTP and HTTP. I suspect you mean REST/HTTP here rather than just REST because whilst the latter is an architectural approach which doesn’t imply synchronous or asynchronous behaviour, the former is a specific implementation of that architecture (which again does support asynchronous behaviour).

Actually the term I should have used is JAX-RS (or JAX-WS); this is the status quo for communication between services coming from a Java EE world, is it not?

Not quite. There’s JMS. There’s also (still) IIOP (with or without RMI). Speaking for Red Hat, I think we see more interactions between services happening over any of these than over HTTP. Of course once you factor in clients, HTTP and SOAP do have a significant role to play. But that said, I do get your point :)

<stuff cut>

  

These modern message brokers, however, do not offer exactly-once delivery; they offer at-least-once delivery, which means transactions don't buy you anything: even if a publisher ensures that it only publishes each message once, it could still arrive at a consumer twice.

I wouldn’t mix transactions into this discussion just yet without at least defining what you mean by a transaction. Again I assume you mean an ACID transaction or perhaps being even more specific and XA?

When it comes to transactions on a messaging provider, it's mostly just that the message is consumed or it isn't. Perhaps, if you want to maintain ordering while eagerly processing multiple messages at once, you may have some more complex transaction logic where one message failing causes all subsequent messages to roll back, but generally in my experience it's been primarily about a message either being consumed or not.

We could certainly go into details of communications protocols dating back to ISIS, Horus and others where transaction-like semantics were used for delivery of messages (though Ken [Birman] was never too keen to talk about them as such). However, I think it risks diverting attention from what you want to do (or at least what I hope you want to do …)

In that context, it's mostly XA that I'm referring to: whether the message is consumed or not is tied to whether the database transaction associated with the message processing is committed or not.  Failure to tie these two transactions together results in at-least-once guarantees instead of exactly-once processing guarantees if you send the message inside the database transaction and confirm receipt of the message outside of the receiving-end transaction, or results in at-most-once guarantees instead of exactly-once if you send the message outside of the sending-side database transaction or confirm receipt inside/before the database transaction on the receiving side.  So my argument is that if the message provider only gives you at-least-once messaging guarantees in the first place, then there's no need for XA transactions between the message provider and the database: you can already achieve at-least-once guarantees without them, and you can't improve on at-least-once if all the message broker offers is at-least-once.

OK so as I mentioned before, I believe more than one way for microservices to communicate is a necessity. Is JMS always going to be the right API or model? Probably not. Does that mean there’s scope for other approaches, with or without transactional semantics? Absolutely. Therefore, I would definitely hope that you and others on this group could get together and thrash out a use case and appropriate API, taking a look at our conference app in the process.

Mark.

Justin Ross

Oct 20, 2016, 10:01:05 AM
to MicroProfile
Hi, James.  Thanks for taking the time to kick off this discussion.


On Monday, October 17, 2016 at 5:55:30 PM UTC-7, James Roper wrote:
Hi all,

One thing that we've found about microservices architecture is that asynchronous messaging (either p2p or through a broker) needs to become a first class communication mechanism, used just as much, if not more, than REST.  In fact many services deployed in a microservices platform may communicate solely using messaging, and have no REST interface at all.  I don't think JMS is up to what microservices demand.  So I'd like to talk about messaging beyond JMS.  There are quite a number of different aspects here, so let's see how we go.

Higher level abstractions

JMS allows you to work with text or binary messages, plus a few other types, but conceptually no one actually sends text or binary messages; they send higher-level model objects that are serialized to text or binary. The JMS API could be seen as the HttpServletRequest/Response of messaging: a low-level API that isn't suitable for high-level programming.  Just as JAX-RS is the high-level API on top of HttpServletRequest/Response that handles the serialization and deserialization of request and response messages (among other things), modern microservice frameworks need to provide a mechanism for transparently serializing and deserializing messages sent through a message broker or point to point.  I think there is a need for a JAX-RS-like API for messaging.

An interesting consequence we've found is that the serialization/deserialization technology used needs to have first-class, idiomatic support for polymorphic domain objects, because very often a single message stream will carry many different types of messages that are subtypes of one parent type. We've found this is almost always the case in messaging, whereas in REST it's relatively rare.

Why is this more commonly encountered in messaging than in REST?  I would think that in both instances you want your unit of code doing just one job.  Messaging APIs typically allow you to select particular message streams and substreams, so the tools exist to get a homogeneous set.
 
Support for modern messaging brokers

One of the most common message brokers we see in use today with microservices is Kafka, and similar technologies such as AWS Kinesis are also gaining popularity.  These differ from many of the traditional message brokers that map to JMS in the following ways:

* There are no transactions, and definitely no distributed transactions. Transactions are typically used to guarantee exactly-once message processing. These modern message brokers, however, do not offer exactly-once delivery; they offer at-least-once delivery, which means transactions don't buy you anything: even if a publisher ensures that it only publishes each message once, it could still arrive at a consumer twice.  The upshot is that what works for message handling when using transactions isn't necessarily a good fit for at-least-once messaging, and APIs may need to be adjusted accordingly.
* Pub-sub is in the control of consumers.  In traditional message brokers, you configure pub-sub in the broker by creating a queue for each consumer and routing messages appropriately.  In Kafka and similar technologies, the consumer is in control here: a consumer can consume any message stream without impacting other consumers, and consumers can form groups that ensure messages are distributed among the members of each group.  One consequence of this is that you need consumer-side APIs for specifying/joining these groups.
* Partitioning. These message brokers partition messages for scaling and load balancing, and if you want any ordering guarantees (you usually do), the producer needs to control how messages are partitioned.  This is done by the producer extracting a key from each message; that key is then hashed to select a partition.

We've found that these concepts need to be first class concepts in the API for successful use in a microservices architecture.

Yes, with Kafka and similar servers, some of the concerns traditionally isolated to the server are pushed out to the clients.  

Modern messaging includes but is a lot bigger than Kafka and similar servers.  In most modern approaches, you get more control over things like ordering and delivery guarantees, persistence, and distribution.  Getting the *right* level of those things for your application is the key to getting better scale.  But you don't have to move a whole new set of concerns to the clients to achieve that.  It's just one of several possible approaches.
 
Streams integration

Sources and sinks for message streams will often come from another API.  For example, if using CQRS, very often your source of messages to publish to a broker will be a CQRS read-side stream. A microservices messaging solution needs to be compatible with different streaming sources and sinks, so that end users don't need to implement their own adapters between these technologies (which can be very difficult to do, especially if they want to implement robust back pressure propagation).  Hence, such a messaging API should use a common interface for streaming, and of course Reactive Streams/JDK 9 Flow is the prime candidate here.

This makes sense to me.

Clebert Suconic

Oct 20, 2016, 2:31:25 PM
to MicroProfile
I just wanted to answer to this specifics point now:
>* There are no transactions

JMS has actually supported working without transactions for years.

The issue in Java EE is that most users do everything through MDBs, which depend heavily on a single transaction per message received, in a distributed fashion; that needs a network round trip to each endpoint, plus the required syncs to make sure the information is in the proper storage.

So it will be up to the implementation how fast it can be in non-transactional cases.


Although I see an issue here: when you need transactions, would you have no option around it?

I would prefer to give control to the user and favor speed when possible (i.e. in non-transactional cases).



>* Pub-sub is in the control of consumers

That's what you get with client acknowledgement, or DUPS_OK_ACKNOWLEDGE.



>* Partitioning. These message brokers partition messages for scaling and load balancing

Now you are talking specifically about one implementation. The API itself may not need to know anything specific about partitions; it just needs a connection point.




I don't want to get into any battle about any specific technology here.. I just want to focus on the API itself.

Clebert Suconic

Oct 20, 2016, 4:54:49 PM
to MicroProfile
Since you pointed to Kafka, I took a look at the Kafka API, which from my view is pretty similar to JMS, with one difference:

there's no concept of Session or Connection. The Consumer is the whole object, and the Producer is the whole object.


I'm pretty sure that if the JMS spec were active, such a simplified consumer or producer would be viable. But if you did that in the JMS spec, that would mean the whole JMS spec would need to be supported.


Some EE specs have the concept of profiles; perhaps someone could specify a simplified profile where a user would simply do:

SimplifiedConsumer consumer = Factory.newConsumer(properties, deserializer);
SimplifiedProducer producer = Factory.newProducer(properties, serializer);

Then you could have messaging providers implement such an API and be free to choose according to their needs.

Even though this is pretty similar to opening a connection, session and producer in JMS 2, this would be a nice feature to add to the JMS spec as a sub-profile.

I'm not sure it makes sense to create another JSR around simplified messaging. IMHO it belongs in the JMS space.

James Roper

Nov 3, 2016, 6:48:07 AM
to MicroProfile
Hi all,

To follow up on this with something more concrete about what we've been thinking, we've created a discussion starter API that sums up how we think messaging should be handled in microservices.  Note this isn't a proposal; we are not married to anything in this API, it's simply something to trigger some discussion.

A big difference between this and JMS is that JMS handles one message at a time, whereas this API deals with streams of messages.  The README in the repo describes the design goals and principles, explains how we came up with this API, and gives some pointers on where to start looking at it.

https://github.com/jroper/java-messaging

Cheers,

James

Mark Little

Nov 3, 2016, 12:51:42 PM
to James Roper, MicroProfile
Hi James.

Thanks for getting back to this. I’m travelling at the moment, so I'm unlikely to take a look at this until the weekend, but I’ve asked some of the Red Hat messaging team to take a look if they have time. Hopefully others in the wider community will also chime in with positive and constructive feedback :)

Mark.



Clebert Suconic

Nov 4, 2016, 5:48:47 PM
to James Roper, MicroProfile
I really like the idea of Reactive Streams. It's an API easy enough to implement.

I am still trying to understand how your API would glue together with Streams

Some questions I have:

- I see that you have the intent of introducing the concept of ACKs to
the API (the commit)?

- Partition would be translated to a node on a cluster of brokers?
What is a partition in concrete terms (thinking of a message broker case)?

- MessageOffset? What is the use: to start a subscription, as in the Kafka case, for N messages? Wouldn't that be too much at the implementation level? Other providers could have different means of starting or resuming a Flow / Subscriber.


This caught my personal interest. I really want to build on this from here.
--
Clebert Suconic

James Roper

Nov 6, 2016, 6:47:18 PM
to Clebert Suconic, MicroProfile
On 5 November 2016 at 08:48, Clebert Suconic <clebert...@gmail.com> wrote:
I really like the idea of Reactive Streams. It's an API easy enough to implement.

I am still trying to understand how your API would glue together with Streams

Some questions I have:

- I see that you have the intent of introducing the concept of ACKs to
the API (the commit)?

Yes, the API provides two different mechanisms (and maybe there should only be one). One is to use MessageEnvelope, which allows you to explicitly ack a message by invoking commit(); the other is to use Processors, which require you to emit one message (of any type) for each message that you receive - each message that you emit is an ack of a message received.
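
A hedged sketch of the explicit-ack style described here. The interface shape below is invented for illustration and is not the proposal's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative only: an envelope that carries a payload plus an explicit ack.
public class EnvelopeSketch {
    interface MessageEnvelope<T> {
        T payload();
        void commit(); // acknowledge this message to the broker
    }

    public static void main(String[] args) {
        List<String> acked = new ArrayList<>();

        // A broker adapter would construct envelopes like this one.
        MessageEnvelope<String> envelope = new MessageEnvelope<String>() {
            public String payload() { return "hello"; }
            public void commit() { acked.add("hello"); }
        };

        Consumer<MessageEnvelope<String>> handler = env -> {
            // Process the payload, then ack only on success.
            String processed = env.payload().toUpperCase();
            System.out.println(processed);
            env.commit();
        };
        handler.accept(envelope);
        System.out.println(acked.size());
    }
}
```

If the handler throws before commit(), the message is never acked and the broker redelivers it, which is exactly the at-least-once behaviour discussed earlier in the thread.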

- Partition would be translated to a node on a cluster of brokers?
What is a partition in concrete terms (thinking of a message broker case)?

Partitioning is more about the stream than the actual nodes, and it's more about integrating the message broker with other technologies that may be partitioned.

Often you want ordered handling of messages.  But that won't scale, since to handle messages in order you have to handle them sequentially, on one node; handling messages in parallel intrinsically undermines the ordering.  As it turns out, you usually don't need global ordering, you only need ordering for a particular entity. For example, in an online auction system, you only care about the ordering of messages within a particular auction (you want to ensure that a bid for $5 doesn't get overtaken by a bid for $10); between auctions the order doesn't matter.  So to do ordering at scale, you partition your streams using the auction ID as a key: you might have 100 partitions, and use a hash of the auction key to assign it to a partition.  As far as how this relates to nodes, each partition will be handled by one, and only one, node.  If you have 100 nodes, there would be a one-to-one relationship of partitions to nodes, but if you had 10 nodes, each node would handle 10 partitions.  By partitioning in this way, you can distribute publishing and subscribing the stream across many different nodes, while still handling messages in order within each auction.  Nodes in this example actually refers to publisher and consumer nodes, not the message broker nodes.

So that's partitioning in general.  There is a specific use case, though, that I'm thinking about here, one that we have to solve in Lagom, but that also has some more general applications.  In Lagom the default persistence mechanism is event sourcing.  Events are stored in a database (not a message broker), and then they often get published to a message broker.  The events may be partitioned for scale.  When publishing them to a message broker, if you have N partitions, you need to create N message publishers.  This API takes on the responsibility of distributing those N publishers across a cluster.  So it's not actually got anything to do with the message broker's partitioning (a number of message brokers, such as Kafka, do the same sort of partitioning themselves); rather, it's about when you have a non message broker source of messages that is partitioned, and you want to create a message publisher that pushes them to the broker - how that partitioning is handled and distributed.

But, the same approach could be used for handling partitioning in the message broker.

Partitioning would be an optional part of the spec.  Especially given current technologies, there are not a lot of technologies out there today that let you easily distribute workloads around a cluster of nodes like we have in Lagom with Akka.  I believe these sorts of features will necessarily become more common in future, especially if event sourcing becomes more popular, but for now I think partitioning would have to be an optional part of the spec, otherwise very few technologies could support it.

- MessageOffset? What is the use: to start a subscription, as in the Kafka case, for N messages? Wouldn't that be too much at the implementation level? Other providers could have different means of starting or resuming a Flow / Subscriber.

Again, the primary use case I had in mind was publishing a non message broker source of messages into a message broker. In our event sourcing example, it's the offset in the persistent store of events, so that a message publisher can be restarted.  Perhaps, though, this shouldn't be a message broker implementation concern - it does require storage to track the offset, and could be implemented purely in the application.  My main reason for including it is that I'm basing this on existing APIs in Lagom, and this is one of the things our APIs do, but it could be removed.

That said, it may be useful to have it there to give more control over the message broker itself.  Either way, it's optional; message offsets don't need to be used, and could be an optional feature of the spec if included.
 
This caught my personal interest. I really want to build on this from here.


On Thu, Nov 3, 2016 at 6:48 AM, James Roper <ja...@lightbend.com> wrote:
> Hi all,
>
> To follow up on this with something more concrete of what we've been
> thinking, we've created a discussion starter API that sums up how we think
> messaging should be handled in microservices.  Note this isn't a proposal,
> we are not married to anything in this API, it's simply something to trigger
> some discussion.
>
> A big difference between this and JMS is JMS handles one message at a time,
> where this API deals with streams of messages.  The README in the repo
> describes the design goals and principles, as well as explains how we came
> up with this API, and gives some pointers of where to start looking at it.
>
> https://github.com/jroper/java-messaging
>
> Cheers,
>
> James
>





Clebert Suconic

Nov 7, 2016, 9:24:20 PM
to James Roper, MicroProfile
I know the commit here doesn't mean transaction (it's a way to ack receipt), but my concern (I'm not sure it's valid) is with transactions.

the moment you introduce something for acks, users will want different ack modes: batched, auto-ack, XA...

what about sending?

What is the general direction on transactions for MicroProfile? I have heard everything is being simplified, so perhaps I need to do some research of my own; if that's the case, my concern is not valid.



--
Clebert Suconic

James Roper

Nov 7, 2016, 10:22:45 PM
to Clebert Suconic, MicroProfile
On 8 November 2016 at 13:24, Clebert Suconic <clebert...@gmail.com> wrote:
I know the commit here doesn't mean transaction (it's a way to ack receipt), but my concern (I'm not sure it's valid) is with transactions.

the moment you introduce something for acks, users will want different ack modes: batched, auto-ack, XA...

what about sending?

The same mechanism can be used: the producer returns an implementation of MessageEnvelope that implements commit.  This is why MessageEnvelope is an interface.

What is the general direction on transactions for MicroProfile? I have heard everything is being simplified, so perhaps I need to do some research of my own; if that's the case, my concern is not valid.

My thoughts are that transactions should be limited to a single piece of middleware; that is to say, XA transactions should not be part of MicroProfile, but certainly you can use a transaction on a relational database.  In distributed systems, the approach we try to encourage at Lightbend is at-least-once messaging with idempotent handling.  Transactions are often not needed to achieve that: if something fails, it's always safe to replay, and there's no need to worry about partial updates since all updates are idempotent.
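
A minimal sketch of that idempotent-handling idea, assuming each message carries a unique id (a real system would persist the processed-id set rather than keep it in memory):

```java
import java.util.HashSet;
import java.util.Set;

// Idempotent processing under at-least-once delivery: redeliveries are
// detected by message id and skipped, so replays are always safe.
// Illustrative only.
public class IdempotentConsumer {
    record Message(String id, long amountCents) {}

    private final Set<String> processed = new HashSet<>();
    private long balance = 0;

    void onMessage(Message m) {
        if (!processed.add(m.id())) {
            return; // duplicate delivery, already applied
        }
        balance += m.amountCents();
    }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        Message deposit = new Message("msg-1", 500);
        c.onMessage(deposit);
        c.onMessage(deposit);            // broker redelivery of the same message
        c.onMessage(new Message("msg-2", 250));
        System.out.println(c.balance);   // the duplicate had no effect
    }
}
```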
 


Mark Little

Nov 7, 2016, 10:27:03 PM
to Clebert Suconic, James Roper, MicroProfile
We've discussed transactions briefly in the context of ACID and extended transactions. No real conclusions yet.

Sent from my iPhone

Clebert Suconic

Nov 7, 2016, 11:27:42 PM
to MicroProfile, clebert...@gmail.com
Maybe you could still support ACID, but keep the API simple and clean?

In case you do expose ACID at any point (not just for messaging), it would be nice to somehow encourage users to batch multiple operations in a single TX.

For example, one of the worst things ever invented for messaging was MDBs, which induce users to receive a message and commit the DB and the messaging in a single TX - one TX per message.



I don't want to hang too much on the transaction aspect, as I still see some space for development before getting there; it's just something that caught my attention now.




--
Clebert Suconic

andrew_s...@uk.ibm.com

Nov 29, 2016, 12:21:04 PM
to MicroProfile, clebert...@gmail.com

Hi,

I've been reading this thread and learning about the technologies in MicroProfile 1.0 and I agree that a messaging API at the abstraction level of JAX-RS is a good idea. I'm happy to get involved in working on this.


I agree with a lot of the principles in the thread too. They match modern messaging systems and use cases nicely. I think the important ones are:

  • API not tied to a specific messaging system - I should be able to implement on top of whatever messaging system I like within reason
  • Much simpler API than JMS - easier to learn, cheaper to implement
  • Topic-based publish/subscribe - with a way of sharing messages on a subscription
  • Message ordering - could be partitioning, but I think the important thing is a key
  • Interoperability with non-Java code - so I can use MicroProfile to implement the Java parts of a multi-language environment
  • No transactions or exactly-once delivery - smart endpoints and dumb pipes

I think the contentious one is the last one. Distributed messaging systems like Apache Kafka and Amazon SQS prioritise availability over consistency. They can't really do exactly-once publish, acknowledgement or delivery (that's a simplification for Kafka, but approximately true). So, I think it would be better to set an expectation of at-least-once delivery to start with. If you have at-least-once delivery, you might get duplication in error situations and consuming a message inside a transaction isn't going to prevent that. You're going to need idempotent processing either way.

 

Thanks,

Andrew Schofield

Event Services, IBM Watson and Cloud Platform

Clebert Suconic

Nov 29, 2016, 3:15:54 PM
to andrew_s...@uk.ibm.com, MicroProfile
> No transactions or exactly-once delivery - smart endpoints and dumb pipes


I agree with no transactions as part of the API, although we all have our users, and some of them care about distributed transactions - e.g. financial institutions.

But perhaps the idea is to implement these things transparently, without the user realizing it's using XA behind the scenes.

James Roper

Nov 29, 2016, 6:24:33 PM
to Clebert Suconic, andrew_s...@uk.ibm.com, MicroProfile
I think this would be possible to do.  Though if a user came to me saying they need XA, I'd tell them that they aren't ready for microservices and should stick to Java EE - if they're not willing to change their architectural approach, they shouldn't force themselves into changing it by switching to microservices.
 

Erin Schnabel

Nov 29, 2016, 6:53:24 PM
to MicroProfile, clebert...@gmail.com, andrew_s...@uk.ibm.com
+1


On Tuesday, November 29, 2016 at 6:24:33 PM UTC-5, James Roper wrote:
On 30 November 2016 at 07:15, Clebert Suconic <clebert...@gmail.com> wrote:
> No transactions or exactly-once delivery - smart endpoints and dumb pipes


I agree with no transactions as part of the API, although we all have our users, and some of them care about distributed transactions - e.g. financial institutions.

But perhaps the idea is to implement these things transparently, without the user realizing it's using XA behind the scenes.

I think this would be possible to do.  Though if a user came to me saying they need XA, I'd tell them that they aren't ready for microservices and should stick to Java EE - if they're not willing to change their architectural approach, they shouldn't force themselves into changing it by switching to microservices.
 
--
You received this message because you are subscribed to a topic in the Google Groups "MicroProfile" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/microprofile/slv8lk_1smU/unsubscribe.
To unsubscribe from this group and all its topics, send an email to microprofile...@googlegroups.com.

To post to this group, send email to microp...@googlegroups.com.

Erin Schnabel

unread,
Nov 29, 2016, 6:56:49 PM11/29/16
to MicroProfile, clebert...@gmail.com
Strongly agree with all points. Especially the last: No transactions or exactly-once guarantee. 

James Roper

unread,
Nov 29, 2016, 7:14:59 PM11/29/16
to Erin Schnabel, MicroProfile, Clebert Suconic
My main question then is where to next.  The API proposal I put together is very rough - it was about 2 hours of thought, taking some of the ideas that we've been introducing in Lagom but trying to apply them to something that is less opinionated than Lagom (Lagom is by design incredibly opinionated; its messaging API, at least, is not likely a suitable candidate for a standardisation effort).  So my main question is: should we use that as a start and work from there, or should we start from scratch?  Or is it too early to work at the concrete level of APIs?  Is there a process that the MicroProfile effort in general is adopting for things like this?

One possibility is that we come up with a use case for messaging in the microprofile example app, and do a rough API with a limited implementation that just demonstrates the feature.  I'm not sure where we'd start with that, if that's something that would be worthwhile doing I'd appreciate any help or guidance that anyone can offer.

Regards,

James


Ken Finnigan

unread,
Nov 29, 2016, 7:31:48 PM11/29/16
to James Roper, Erin Schnabel, MicroProfile, Clebert Suconic
James,

For MicroProfile we've started following an Evolution process defined here: https://github.com/microprofile/evolution

All the steps are outlined within the repository, but it essentially starts with a proposal outlining use cases and reasoning for whatever "thing", in this case reactive messaging (or something more catchy), you propose for inclusion in MicroProfile that can be discussed and reviewed by the wider community.

Regards
Ken


Mark Little

unread,
Nov 30, 2016, 3:59:05 AM11/30/16
to James Roper, Clebert Suconic, andrew_s...@uk.ibm.com, MicroProfile
Let's start by defining "transactions" because as someone once said "you [guys] keep using that word and I don't think it means what you think it means" :)

Sent from my iPhone

Mark Little

unread,
Nov 30, 2016, 4:02:16 AM11/30/16
to James Roper, Clebert Suconic, andrew_s...@uk.ibm.com, MicroProfile
Our industry has been doing transactions of various flavours over reliable and unreliable transports for decades. They don't need to be baked into the API. Flexible context flow and association is sufficient. Failing that, it can be dealt with at the application level.

Sent from my iPhone

On 29 Nov 2016, at 23:24, James Roper <ja...@lightbend.com> wrote:


Erin Schnabel

unread,
Nov 30, 2016, 9:39:45 AM11/30/16
to MicroProfile, erinsc...@gmail.com, clebert...@gmail.com
While Game On! is not a reference microprofile app (because it uses WebSockets to cheaply emulate async bidi communication between third parties that don't know that much about each other), I've harassed most of the people in the community about it at conferences etc. ;)

We do use kafka/events within our core services, so I _have_ a use case for you. Our code (which is a hack, frankly) does pub/sub from different services. The consuming side retrieves using the kafka API, and then uses CDI events (!!) to bridge to RxJava (!!) for reactive goodness. My guess is that Lagom's opinion would already lead us in a better direction than we have right now. However, as a hackable use case that people can observe in action, it exists.


Some of this will likely change, as we realized we made a mistake in how we're emitting/consuming/persisting data in one path, but if we need a use case, one exists, and I'm happy to have the game be a proving ground for approaches, as I have enough running services that we can try several and see which we like better in terms of how the code looks when we're done.

This application is always on, here: https://game-on.org, with info about it here: https://book.game-on.org.

What Ken said for evolution process, but I have a live/event-generating use case that I'm happy to have used as a stomping ground for messaging APIs.

HTH


On Tuesday, November 29, 2016 at 12:21:04 PM UTC-5, andrew_s...@uk.ibm.com wrote:

Hi,

I've been reading this thread and learning about the technologies in MicroProfile 1.0 and I agree that a messaging API at the abstraction level of JAX-RS is a good idea. I'm happy to get involved in working on this.


I agree with a lot of the principles in the thread too. They match modern messaging systems and use cases nicely. I think the important ones are:

  • API not tied to a specific messaging system - I should be able to implement on top of whatever messaging system I like within reason
  • Much simpler API than JMS - easier to learn, cheaper to implement
  • Topic-based publish/subscribe - with a way of sharing messages on a subscription
  • Message ordering - could be partitioning, but I think the important thing is a key
  • Interoperability with non-Java code - so I can use MicroProfile to implement the Java parts of a multi-language environment
  • No transactions or exactly-once delivery - smart endpoints and dumb pipes

I think the contentious one is the last one. Distributed messaging systems like Apache Kafka and Amazon SQS prioritise availability over consistency. They can't really do exactly-once publish, acknowledgement or delivery (that's a simplification for Kafka, but approximately true). So, I think it would be better to set an expectation of at-least-once delivery to start with. If you have at-least-once delivery, you might get duplication in error situations and consuming a message inside a transaction isn't going to prevent that. You're going to need idempotent processing either way.
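A minimal sketch of the idempotent processing Andrew describes, deduplicating redeliveries by message id (the ids and the in-memory seen-set are illustrative; a real service would persist the set alongside its state):

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentHandler {
    private final Set<String> processed = new HashSet<>(); // ids already applied
    private int applied = 0;

    // At-least-once delivery can redeliver a message; carrying the same id
    // makes the second application a no-op instead of a double-processed event.
    public void onMessage(String messageId, String payload) {
        if (!processed.add(messageId)) {
            return; // duplicate delivery, already handled
        }
        applied++; // stand-in for the real business effect
    }

    public int appliedCount() {
        return applied;
    }

    public static void main(String[] args) {
        IdempotentHandler h = new IdempotentHandler();
        h.onMessage("m-1", "debit 10");
        h.onMessage("m-1", "debit 10"); // redelivery after a timeout
        h.onMessage("m-2", "credit 10");
        System.out.println(h.appliedCount()); // prints 2
    }
}
```

Note that a transaction around `onMessage` wouldn't remove the need for the dedup check: the duplicate arrives in a separate, later delivery.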

 

Thanks,

Andrew Schofield

Event Services, IBM Watson and Cloud Platform


Steve Millidge

unread,
Nov 30, 2016, 9:47:13 AM11/30/16
to MicroProfile, erinsc...@gmail.com, clebert...@gmail.com
We've also been messing with CDI events in Payara for distributed messaging.

We have these annotations: @Inbound for subscribers and @Outbound for publishers.

Something like:

    @Inject
    @Outbound
    Event<CustomMessage> event;

To send the event you then just use the standard CDI API:

    CustomMessage message = new CustomMessage("test", "server-1");
    event.fire(message);

To receive:

    public void observe(@Observes @Inbound CustomMessage event) {
        Logger.getLogger(this.getClass().getName()).log(Level.INFO, "MessageReceiverBean received event {0}", event);
    }

Matt Pavlovich

unread,
Nov 30, 2016, 11:20:45 AM11/30/16
to MicroProfile

Great discussion! I agree that the JMS API is misunderstood/demonized. In my 15+ years of working with distributed computing and messaging, I continue to be surprised by how often a messaging edge case is already handled by the JMS API. I agree the “Kafka-like” use case isn’t there currently, but if you break down the Kafka use case, I see a new delivery mode and a subscription strategy/consumer-type as the big gaps.


With Oracle breaking from the JMS API, are there options for the community to do a community-driven JMS 3.0? I haven’t found a mention of a trademark anywhere. Maybe it needs a new name just to rebrand? I think it would be great to strive for something that is earmarked for JCP-like inclusion (not suggesting this should be Java-only) and avoid the “API-defined-outside-and-then-gets-standardized-into-the-JDK” multi-year lag that was REST/JAX-RS. I think the JMS 2.0 API provides that “slim” mode, and would be good input to requirements. There are still use cases that need the connection-session-producer/consumer model for advanced handling and shared-object handling (connection pooling, large numbers of destinations, small message sizes, small message volumes).


Thoughts:


1. I think a messaging API should strive to meet all messaging use cases. While the “kafka”-style messaging model is the-new-cool-thing, it also is just one of many messaging use cases. Using the current JMS feature set as a baseline of requirements ensures a number of use cases that have been used in distributed computing over the past 20+ years are covered.


2. I think not matching the API with at least one reference wire-protocol was a big miss by JMS v1.x and should be considered.


3. A fair way to look at transactions is that they reduce the surface area of potential data loss or undesirable behavior during an unplanned outage. While they won’t ever be 100% correct (b/c distributed computing), transactions provide non-zero value to many use cases.


4. The “Kafka” consumer use case is essentially consumer-driven vs broker-driven when it comes to marking the state of the subscription. I think this could be handled with the addition of a destination + selector syntax. In current JMS it would look like: session.createMarkedConsumer(destination, marker), where “marker” could follow the selector syntax. This would allow consumers to use message id, header key+value, or another query to tell the broker where to start up again.


5. As far as “Kafka”-style partitioning, I’m curious why folks insist on a single destination?  Message Groups / Affinity Groups already exist in most brokers for doing same-queue partitioning. Server side, brokers can be configured to split storage of destinations across multiple storage volumes to scale disk I/O. At some point in distributed computing, you reach a CPU/network/storage limit and need to partition to a separate queue and then on to a separate host. This all seems server-side-ish, and I'm not sure there should be anything API-specific - maybe wire-protocol handling for rebalancing or redirecting of publishers?  Maybe making the JMSXGroupId/JMSXSeqId-like option more of a first-class citizen?


6. The Kafka-style producing use case is essentially async send and a non-sync() store to disk. Most brokers and clients can be configured to do this already. Maybe a new DeliveryMode to call it out as a first-class option?  Something akin to: NON_PERSISTENT, PERSISTENT and LAZY_PERSISTENT?


7. Sharded producing would be another reason (imho) why having at least one reference wire-protocol would be a good idea. The client could be configured to do producer-side load balancing across a list of URLs, and the broker can inform the clients of new/removed brokers to update their list of available brokers. ActiveMQ's OpenWire has something close to this today (missing the load balancing part).

-Matt Pavlovich

On Monday, October 17, 2016 at 8:55:30 PM UTC-4, James Roper wrote:

One of the most common message brokers we see in use today with microservices is Kafka, and similar technologies such as AWS Kinesis are also gaining popularity.  These differ from many traditional message brokers that map to JMS, in the following ways:

* There are no transactions, and definitely no distributed transactions. Transactions are typically used to guarantee exactly-once message processing. These modern message brokers, however, do not offer exactly-once delivery; they offer at-least-once delivery, which means transactions don't give you anything - even if a publisher ensures that it only publishes each message once, it could still arrive at a consumer twice.  The upshot of this is that what works for message handling when using transactions isn't necessarily a good fit for at-least-once messaging, and APIs may need to be adjusted accordingly.
* Pub-sub is in the control of consumers.  In traditional message brokers, you configure pub-sub in the broker, by creating a queue for each consumer, and routing messages appropriately.  In Kafka and similar technologies, the consumer is in control here - a consumer can consume any message stream without impacting other consumers, and consumers can form groups that ensure that messages are distributed among the groups.  One consequence of this is that you need consumer side APIs for specifying/joining these groups.
* Partitioning. These message brokers partition messages for scaling and load balancing, and if you want any ordering guarantees (you usually do), then the producer needs to control how they are partitioned.  This is done by the producer extracting a key from messages, and that key is then hashed to a partition.

We've found that these concepts need to be first class concepts in the API for successful use in a microservices architecture.
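To make the partitioning point above concrete, here is a minimal sketch of producer-side key hashing (the hash function and key names are illustrative - Kafka's default partitioner uses its own hash - but the idea is the same: same key, same partition, preserved ordering):

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class PartitionSketch {
    // Hash the message key to pick a partition, so every message with the
    // same key lands on the same partition and keeps its relative ordering.
    public static int partitionFor(String key, int partitionCount) {
        int hash = 0;
        for (byte b : key.getBytes(StandardCharsets.UTF_8)) {
            hash = 31 * hash + b; // simple polynomial hash, stand-in for the real one
        }
        return Math.floorMod(hash, partitionCount); // always in [0, partitionCount)
    }

    public static void main(String[] args) {
        // Messages for the same customer always map to the same partition.
        for (String key : List.of("customer-1", "customer-2", "customer-1")) {
            System.out.println(key + " -> partition " + partitionFor(key, 4));
        }
    }
}
```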

Streams integration

Sources and sinks for message streams will often be from another API.  For example, if using CQRS, very often your source of messages to publish to a broker will be a CQRS read side stream. A microservices messaging solution needs to be compatible with different streaming sources and sinks, so that end users don't need to implement their own adapters between these technologies (which can be very difficult to do, especially if they want robust back pressure propagation).  Hence, such a messaging API should use a common interface for streaming, and of course Reactive Streams/JDK9 Flow is the prime candidate here.
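As a small sketch of what plumbing against that common interface looks like, here is the JDK 9 Flow API wiring a source to a sink (the event names are illustrative; `SubmissionPublisher` handles the back pressure plumbing):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static List<String> run() {
        List<String> received = new CopyOnWriteArrayList<>();
        SubmissionPublisher<String> topic = new SubmissionPublisher<>();
        // consume() subscribes with unbounded demand and completes the
        // future once the publisher is closed and all items are delivered.
        CompletableFuture<Void> done = topic.consume(received::add);
        topic.submit("OrderCreated");
        topic.submit("OrderShipped");
        topic.close();
        done.join(); // wait for delivery to finish
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints [OrderCreated, OrderShipped]
    }
}
```

Because both ends speak `Flow.Publisher`/`Flow.Subscriber`, the same sink could be fed by a broker adapter or a CQRS read side stream without custom glue.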

Distribution

When plumbing streams together from different libraries to a message broker, distribution needs to be considered, and in our experience ends up being a first class concept in the end user APIs.  A single service may consist of many nodes; publishing the stream from every node is not usually desirable, since that means each message will be published once for each node doing the publishing.  Sometimes you want a singleton node doing the publishing; sometimes, if the source stream is sharded, you want to distribute the shards out across the cluster so that the publishing load is shared by all nodes in the service.  We've found that end user APIs need to give the user control over this in order to implement it successfully.



So, I've said a lot here, I'm interested in whether people agree with my assessment or not.

andrew_s...@uk.ibm.com

unread,
Nov 30, 2016, 11:45:35 AM11/30/16
to MicroProfile, erinsc...@gmail.com, clebert...@gmail.com
I'd like to spend some time digging into Game On and its approach to messaging as background reading.

I'd also like to see a discussion about use cases and how we envisage people actually using messaging with microservices. It would be good to get broad agreement about the use cases for the API before we actually start designing it.

Here's one approach I've seen. Each microservice listens on its own topic. Messages published on the topic are really command messages to invoke the microservice. There can be multiple instances of each microservice, and the messages are shared among them. You can scale the microservice by running more instances. When a microservice needs to invoke another microservice, it publishes a message on that microservice's topic. There are no responses, just state changes as a result of the processing.
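The command-topic approach above can be sketched as a type-keyed dispatcher (the command types and handler shapes are illustrative): every instance of the service runs the same dispatcher, so whichever instance the shared subscription hands a message to can process it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class CommandDispatcher {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    public void register(String type, Consumer<String> handler) {
        handlers.put(type, handler);
    }

    // Invoked for each message consumed from the service's command topic;
    // there is no reply, only the state change the handler performs.
    public void onMessage(String type, String payload) {
        Consumer<String> handler = handlers.get(type);
        if (handler != null) {
            handler.accept(payload);
        }
    }

    public static void main(String[] args) {
        CommandDispatcher dispatcher = new CommandDispatcher();
        dispatcher.register("CreateOrder", payload ->
                System.out.println("creating order: " + payload));
        dispatcher.onMessage("CreateOrder", "order-42");
    }
}
```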

People also do async request/response. This is less alien to people used to synchronous calls, but for microservices I suggest there's an anti-pattern here: when an instance of a microservice makes a request by sending a message, it's a bad idea to depend on the same instance to handle the response. For availability and scalability, it's better to be able to handle the response in any instance of the calling microservice, rather than a specific instance.

There's also a multi-language API called MQ Light that we created with microservices in mind.


It has some characteristics which I'd consider useful in a MicroProfile messaging API:
  • Async, non-blocking interface
  • Flow control for subscribers (but not back pressure)
  • Sharing among subscribers
If you look at the examples, they're very verbose because it doesn't use any of the modern Java features like annotations and CDI. So, I offer it as an example of an interesting interface.

Andrew Schofield

vaquar khan

unread,
Nov 30, 2016, 2:35:24 PM11/30/16
to MicroProfile



Hi All,

I would like to add few points here related to JMS vs Kafka  

Kafka was designed from the beginning to handle both online and batch consumers, so it can handle 1000k+ events per second.
Kafka doesn't have message acknowledgments.

JMS was not designed for large volume; JMS supports acknowledgments and two-phase commit.

AMQP vs JMS:


Now coming to REST/JAX-RS: I can see JAX-RS and JMS are not competitors; both work well together.
Microservices are another form of SOA, and we need async processing for circuit breakers.

It is really unfortunate that JMS is the most misunderstood API in Java.

I would love to see the following features inside the JMS API:
  1. Kafka features under JMS.
  2. Batch support in JMS.
  3. JSON parsing support (currently we can send JSON as a text message).
  4. Streaming

Regards,
Vaquar khan 
 

Clebert Suconic

unread,
Nov 30, 2016, 2:35:33 PM11/30/16
to MicroProfile, erinsc...@gmail.com, clebert...@gmail.com

On Wednesday, November 30, 2016 at 11:45:35 AM UTC-5, andrew_s...@uk.ibm.com wrote:
I'd like to spend some time digging into Game On and its approach to messaging as background reading.

I'd also like to see a discussion about use cases and how we envisage people actually using messaging with microservices. It would be good to get broad agreement about the use cases for the API before we actually start designing it.

I think the scope here could go a bit beyond MicroProfile.

It would be nice to have an API that can be used from a remote client or standalone application.

With that in hand, remote applications could then interact with microservices through a message broker. There's a lot of value to be added to microservices that way, IMO.

And of course, always keeping simplicity in focus. That's so far the use case everybody has agreed upon here.

James Roper

unread,
Nov 30, 2016, 7:01:35 PM11/30/16
to andrew_s...@uk.ibm.com, MicroProfile, Erin Schnabel, Clebert Suconic
On 1 December 2016 at 03:45, <andrew_s...@uk.ibm.com> wrote:
I'd like to spend some time digging into Game On and its approach to messaging as background reading.

I'd also like to see a discussion about use cases and how we envisage people actually using messaging with microservices. It would be good to get broad agreement about the use cases for the API before we actually start designing it.

Here's one approach I've seen. Each microservice listens on its own topic. Messages published on the topic are really command messages to invoke the microservice. There can be multiple instances of each microservice, and the messages are shared among them. You can scale the microservice by running more instances. When a microservice needs to invoke another microservice, it publishes a message on that microservice's topic. There are no responses, just state changes as a result of the processing.

Interestingly, the approach that I've seen (and what we provide the most support for in Lagom) is the other way around - each microservice publishes to its own topic, and those services interested in it subscribe.  This approach works really well when using event sourcing, since each service can just publish its event log, and other services can participate either as remote read sides or use those events to feed commands to themselves.  At-least-once guarantees are straightforward to implement since the source of messages to be published to the broker is persistent and indexed by an offset.
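The offset-based at-least-once publishing described here can be sketched in a few lines (the in-memory log and cursor stand in for a real event journal and a durable offset store):

```java
import java.util.ArrayList;
import java.util.List;

public class EventLogPublisher {
    private final List<String> eventLog = new ArrayList<>(); // persistent, offset-indexed log
    private int publishedOffset = 0; // durable cursor (assumed stored somewhere safe)

    public void append(String event) {
        eventLog.add(event);
    }

    // Publish everything after the saved offset. If the process crashes
    // before the new offset is stored, the same events go out again on
    // restart - which is exactly the at-least-once guarantee.
    public List<String> publishPending() {
        List<String> batch = new ArrayList<>(eventLog.subList(publishedOffset, eventLog.size()));
        publishedOffset = eventLog.size(); // store cursor after a successful publish
        return batch;
    }

    public static void main(String[] args) {
        EventLogPublisher publisher = new EventLogPublisher();
        publisher.append("OrderCreated");
        publisher.append("OrderShipped");
        System.out.println(publisher.publishPending()); // prints [OrderCreated, OrderShipped]
        System.out.println(publisher.publishPending()); // prints []
    }
}
```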

Another approach, I think, is a hybrid of both - Kafka Streams has each service listening on one topic and producing to another.

People also do async request/response. This is less alien to people used to synchronous calls, but for microservices I suggest there's an anti-pattern here: when an instance of a microservice makes a request by sending a message, it's a bad idea to depend on the same instance to handle the response. For availability and scalability, it's better to be able to handle the response in any instance of the calling microservice, rather than a specific instance.

There's also a multi-language API called MQ Light that we created with microservices in mind.


It has some characteristics which I'd consider useful in a MicroProfile messaging API:
  • Async, non-blocking interface
  • Flow control for subscribers (but not back pressure)
  • Sharing among subscribers
If you look at the examples, they're very verbose because it doesn't use any of the modern Java features like annotations and CDI. So, I offer it as an example of an interesting interface.

Andrew Schofield


Rüdiger zu Dohna

unread,
Dec 1, 2016, 6:32:31 AM12/1/16
to MicroProfile
Hi,

Remote distributed transactions are a pain... I think we all agree on this. But for most applications (not only in the financial sector) they are a reality that we can't simply ignore. We have to have some mechanism to handle network or target-system failures. Most often we go with retries and idempotency... and that's a good approach; much better than two-phase commits.

But maybe we can choose where to handle this: in the application code or in the container. There's an awful lot of things you can do wrong and only notice sporadically, so it's better to not do this over and over again, but have one code base to rely on. So it would be nice to keep this by default out of the application code. But how?

Let's dream for a moment: The simplest thing for an application programmer would be to have exactly-once semantics:
  • When I receive a command message to update some data and that update fails for some reason, I want the database to go back to the state before my update and the message to be sent again. And it's über convenient not to care about idempotency, too: If the database update actually completed, I will never receive that message again.
  • When I update my database and then send a message, I also want the message to not get out of the door when, e.g., some optimistic lock on the database fails. And I don't want to have the database updated without my message being sent, just because my machine crashes right between those two operations.
While generations of architects have dreamt of this logic working for distributed systems, the performance impact on synchronous remote invocations has rightfully put a perpetual 'never again' label on remote two-phase commits. REST carefully defines that PUT must be idempotent, that GET must never change any state, etc. So we retry and compensate things in our applications. I've seen some nasty bugs arise from doing it just almost right, but nobody said programming distributed systems would be all unicorns and rainbows. We have to live with that, and I don't think we can or even should change it.

Asynchronous messaging, OTOH (and JMS has proved this to be viable), allows us to do local XA transactions when sending as well as when receiving messages. Only when forwarding messages from one machine to another do we need retries and idempotency, but unlike with synchronous calls, this can go unnoticed by the application code! Isn't this a good thing? No distributed two-phase commits slowing us down, and still the simplest programming model possible.
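The retry-plus-idempotency handling described here can be sketched as a receiver that remembers processed message IDs, so a redelivery after a failed forward becomes a no-op. A minimal in-memory illustration (a real system would persist the ID set in the same local transaction as the data):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Idempotent receiver: applying the same message twice has the effect of
// applying it once, which makes at-least-once delivery safe.
class IdempotentReceiver {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();
    private int balance = 0;

    // Returns true if the message was applied, false if it was a duplicate.
    synchronized boolean onMessage(String messageId, int amount) {
        if (!processed.add(messageId)) {
            return false; // already seen: redelivery after a retry
        }
        balance += amount; // the "business" update
        return true;
    }

    int balance() { return balance; }
}
```

If the broker redelivers after a lost acknowledgement, the duplicate is detected and the business update is not applied twice.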


JMS has its downsides, though. Like other messaging systems (including Akka and Kafka), it's very technology-centric. It's often difficult to see the business logic hidden in all that message handling, converting, and compensating. So you pull that out and have the business code on one side, and on the other side... hmmm... boilerplate?

Maybe this would be a good litmus test for any technology: How many lines of code, annotations, or concepts do you need to read/write/understand to send or receive a business message? This mail is already too long, so just a small teaser of the idea:

@MessageApi
public interface CustomerService {
    public void createCustomer(String firstName, String lastName);
}

Inject this to have a sender, implement it to have a receiver.
For the full story, see https://java.net/projects/messageapi/pages/Home
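As a sketch of that "inject a sender, implement a receiver" idea (illustrative only, not the MessageApi project's actual implementation), a JDK dynamic proxy can turn interface calls into messages:

```java
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.function.Consumer;

interface CustomerService {
    void createCustomer(String firstName, String lastName);
}

class MessageApiSketch {
    // Build a "sender": each method call becomes a serialized message
    // (here just a string payload handed to a transport callback).
    static CustomerService sender(Consumer<String> transport) {
        return (CustomerService) Proxy.newProxyInstance(
            CustomerService.class.getClassLoader(),
            new Class<?>[] { CustomerService.class },
            (proxy, method, args) -> {
                transport.accept(method.getName() + Arrays.toString(args));
                return null; // void business method: fire-and-forget
            });
    }
}
```

On the receiving side, a class implementing `CustomerService` would be invoked with the deserialized arguments; the container owns all the transport plumbing.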

I must admit that the project has totally fallen asleep, as I've moved to a different team where we mainly do REST.

I don't see any reason why this approach shouldn't work with batched messages or without transactions. I'd rather have those aspects visible in the code, as they do have side effects, but that's already going into details. I also would argue that the default should be one message per transaction, as this can scale out very well. Optimizing the last bit must be possible, but it also may be premature.

In the JMS 2.0 and 2.1 EG, I had tried to push into this direction, but I assume standardization and innovation are rightfully two different things. Maybe the playground-before-standardizing approach of MicroProfile is more open to this reduce-to-the-essential approach.


Regards
Rüdiger

BTW: I do think that standardizing the wire format would be a real benefit; but it's completely independent from
standardizing the APIs.

Werner Keil

unread,
Dec 1, 2016, 8:28:15 AM12/1/16
to MicroProfile, erinsc...@gmail.com, clebert...@gmail.com
Because of issues with some of the Diana drivers (Hazelcast and/or MongoDB did not work), I had to replace the fairly MP-compliant (JSON-P, REST, CDI) back end for my Eclipse DemoCamp demo last night (https://wiki.eclipse.org/Eclipse_DemoCamp_Neon_2016/Darmstadt#Agenda) with a small but effective equivalent on top of Spark Framework (http://sparkjava.com/).

I will make it available under https://github.com/unitsofmeasurement/uom-demos soon, but we also hope to migrate that to a standards-based equivalent, so people can choose between a few containers, both MP-compatible and outside (like Spark Framework, maybe Dropwizard and/or Spring Boot).

And on the edge device I will broaden supported devices from Intel Edison to at least Raspberry Pi soon, too.

Werner

Clebert Suconic

unread,
Dec 1, 2016, 9:01:18 AM12/1/16
to Werner Keil, MicroProfile, erinsc...@gmail.com
I think we should try to concentrate on use cases now (a few people have
already said the same thing along this thread). So, how can we focus on
the use cases now?


- First things first, we need to *determine the scope of the API*:
  * Is it really just inside MicroProfile, or could it also be used as
a standalone API?
  * Notice that there are lots of applications which are purely
messaging apps. Financial apps are an old and good example, but a more
modern usage that is in fashion at the moment is IoT, where a great
part of the app is hidden from any sort of UI. An API to interact with
IoT events would be great while being agnostic of protocols.

- After defining the scope, we can then define the use cases for an
initial version.

Would it be better to start that use-case discussion fresh on a new thread?



--
Clebert Suconic

Werner Keil

unread,
Dec 1, 2016, 9:09:06 AM12/1/16
to MicroProfile, werne...@gmail.com, erinsc...@gmail.com
Andrew referred to examples when he talked about Game On, so I mentioned it here, but there were threads on sample apps like the "conference" example earlier, so yes, a more detailed discussion of other example solutions probably best fits there, or create a new thread.

There are also quite a few messaging demos related to IoT, most of them use MQTT or similar protocols, but many are in fact backed by a JMS or MQ like infrastructure ;-)

Werner

Clebert Suconic

unread,
Dec 1, 2016, 9:27:50 AM12/1/16
to Werner Keil, MicroProfile, erinsc...@gmail.com
On Thu, Dec 1, 2016 at 9:09 AM, Werner Keil <werne...@gmail.com> wrote:
> Andrew referred to examples when he talked about Game On, so I mentioned it
> here, but there were threads on sample apps like the "conference" example
> earlier, so yes, a more detailed discussion of other example solutions
> probably best fits there, or create a new thread.
>
> There are also quite a few messaging demos related to IoT, most of them use
> MQTT or similar protocols, but many are in fact backed by a JMS or MQ like
> infrastructure ;-)

The API can be agnostic of the protocol.

I am looking for a list of features, but before we can draw up the
list I think we need to define *the scope* first. I guess the scope
will be defined if we answer this question:

"Is the API targeted at MicroProfile only, or could it be used everywhere?"

From the initial tone of the conversation I had the impression it
would be nice to have an API that would welcome a broader range of
messaging systems and be standalone. But I confess I am now confused
after this thread progressed a bit more.

Ken Finnigan

unread,
Dec 1, 2016, 9:32:11 AM12/1/16
to Clebert Suconic, Werner Keil, MicroProfile, Erin Schnabel
Though it would be great to develop an API solution for messaging that covers a broad scope, I think we should stick with a MicroProfile focus for now.

The danger is we get sucked into API debates trying to cover too many use cases when really this group should only be concerned with use cases that are particular to MicroProfile.

Ken



Werner Keil

unread,
Dec 1, 2016, 9:36:11 AM12/1/16
to MicroProfile, werne...@gmail.com, erinsc...@gmail.com
You mean a REST API?

There have been other threads, especially around performance monitoring. And certain solutions like Dropwizard or Spring Boot (mostly based on Dropwizard Metrics nowadays) define standard endpoints like /health, etc.
Hawkular.org, a Red Hat project, goes a bit beyond that, offering "metric" endpoints not only for performance monitoring but also to gather measurements from IoT devices. I think if we talk about standardizing an API it should be done along those lines, but it really goes beyond the core JMS topic now.

Let's pick up either in a monitoring thread or another one.

Werner

Clebert Suconic

unread,
Dec 1, 2016, 9:43:50 AM12/1/16
to Ken Finnigan, Werner Keil, MicroProfile, Erin Schnabel
On Thu, Dec 1, 2016 at 9:32 AM, Ken Finnigan <k...@kenfinnigan.me> wrote:
> Though it would be great to develop an API solution for messaging that
> covers a broad scope, I think we should stick with a MicroProfile focus for
> now.

I am looking for a compromise where we can expand later :). I believe
in "evolutionary development". I think it was Mark Little who came up
with this term?

@jroper's initial commit shows great potential that's generic enough
for everybody. I would be happy if we made that functional already. It
requires some work; as James said it's only a couple of hours' work so
far, but we can build it from there.

Matt Pavlovich

unread,
Dec 1, 2016, 10:39:22 AM12/1/16
to MicroProfile, werne...@gmail.com, erinsc...@gmail.com
+1 Enumerating use cases

I think a standalone API would be reasonable to tackle. I think the feature set can be enumerated fairly quickly using current JMS features and the Kafka-style use case as the baseline.

Mark Little

unread,
Dec 1, 2016, 10:58:31 AM12/1/16
to Ken Finnigan, Clebert Suconic, Werner Keil, MicroProfile, Erin Schnabel
+1


---
Mark Little

JBoss, by Red Hat
Registered Address: Red Hat Ltd, 6700 Cork Airport Business Park, Kinsale Road, Co. Cork.
Registered in the Companies Registration Office, Parnell House, 14 Parnell Square, Dublin 1, Ireland, No.304873
Directors:Michael Cunningham (USA), Vicky Wiseman (USA), Michael O'Neill, Keith Phelan, Matt Parson (USA)

Werner Keil

unread,
Dec 1, 2016, 11:06:48 AM12/1/16
to MicroProfile, k...@kenfinnigan.me, clebert...@gmail.com, werne...@gmail.com, erinsc...@gmail.com
+1

There should be no API definitions or discussions here, except for using existing APIs (despite occasionally we've seen otherwise ;-)

If JMS 2 is found appropriate as an optional part of a future profile, then why not, but this is not the place to develop JMS 2.1 or alternatives.

Clebert Suconic

unread,
Dec 1, 2016, 2:44:26 PM12/1/16
to Werner Keil, MicroProfile, Ken Finnigan, Erin Schnabel
> There should be no API definitions or discussions (except for using them,
> despite occasionally we saw otherwise;-)

JMS currently imposes heavy semantics on how a messaging system should
behave with regard to acks, transactions, etc. Kafka, Akka and other
systems are different, and they probably don't even want to implement
these semantics.

So, this thread started with the "Messaging *beyond* JMS" subject,
and so far we have been talking about the API for messaging beyond
JMS.

So far I liked the idea of Reactive Streams, with a few methods added
to commit messages, and maybe do transformations, which would be a
development on top of what James Roper has.

Now, if you tell me we are not designing or discussing an API, then
nobody knows what this thread is about.

Werner Keil

unread,
Dec 1, 2016, 3:16:02 PM12/1/16
to MicroProfile, werne...@gmail.com, k...@kenfinnigan.me, erinsc...@gmail.com
MicroProfile is not about standard-setting, so "defining" APIs or standards here does not really seem to be in scope; see plenty of other threads about that. Whether technologies like Kafka, Hystrix, or Archaius (not standardized but fairly popular; some of them could be seen as de-facto standards) can be used, sure, but we should not define standards as such.

Take the likes of MQTT: Eclipse (IoT) and other places use it, but it was at OASIS that it was standardized. Ditto for other aspects.

Messaging "beyond JMS or CDI" was also a topic of the Java EE 9 long-term goals, so while things are still pretty vague in that area, there is a chance of upcoming Java standards (JSRs) for Java EE 9, too.

Ken Finnigan

unread,
Dec 1, 2016, 3:25:04 PM12/1/16
to Werner Keil, MicroProfile, Erin Schnabel
We can discuss/create an API without it being an official standard.

We're all fully aware that MicroProfile is not creating standards, but APIs and specifications that may one day be standardized through one or more standards bodies.

While we need to be cognizant of possible/upcoming JSRs started by Oracle, MicroProfile needs to focus on developing APIs/specifications that its community wants and needs right now. We're not here to wait for Oracle to create new JSRs for these things.

All that being said, I think the discussion has meandered long enough.

The next step should be putting together a proposal for submission to the evolution process and evolving the ideas there for actual APIs.

Ken

Mark Little

unread,
Dec 2, 2016, 4:08:39 AM12/2/16
to Ken Finnigan, Werner Keil, MicroProfile, Erin Schnabel
+1

Just because we've used existing standards to date does not rule out us creating new ones if nothing appropriate exists elsewhere.

Sent from my iPhone

Werner Keil

unread,
Dec 2, 2016, 7:38:08 AM12/2/16
to MicroProfile, k...@kenfinnigan.me, werne...@gmail.com, erinsc...@gmail.com
E.g. performance and health monitoring: although there is a lot of appropriate work out there, especially Red Hat seems to have two or more things in parallel ;-)

Matt Pavlovich

unread,
Dec 2, 2016, 9:16:27 AM12/2/16
to MicroProfile
Maybe this would be a good litmus test for any technology: How many lines of code, annotations, or concepts do you need to read/write/understand to send or receive a business message? This mail is already too long, so just a small teaser of the idea:

@MessageApi
public interface CustomerService {
    public void createCustomer(String firstName, String lastName);
}


As a teaser, I think this looks great. The devil is in the details.. what to do with POJOs?  Force everyone into XML or JSON? What about object serialization?

On the consumer side, does it default to shared subscription (round-robin) or exclusive? How would headers or meta-data (messageId, expiry, etc) be injected?

My concern is that this just ends up moving code into an annotation or some behind the scenes backing file.

@MessageApi(mode=LAZY_PERSIST, expiry=30000 /* ... etc. */)
public interface CustomerService {
    public void createCustomer(String firstName, String lastName);
}

... snip

@MessageApi(consume=shared)
public class CustomerServiceImpl implements CustomerService {
    public void createCustomer(String firstName, String lastName) {
        // ...
    }
}


BTW: I do think that standardizing the wire format would be a real benefit; but it's completely independent from standardizing the APIs.

+1  

James Roper

unread,
Dec 4, 2016, 6:57:11 PM12/4/16
to Matt Pavlovich, MicroProfile
My big question around an API like this is how you would propagate back pressure. Or are you forced to handle only one message at a time, with a single thread? I don't think that's very viable, not today.

A big difference between microservices and monoliths is that messaging is used not as something at the edges for things that happen infrequently, but as a core communication mechanism where throughputs of thousands of messages per second are considered a normal base load for a single node. Since messaging is asynchronous, you don't have the synchronous pushback that you get when you use something like REST, so back pressure becomes incredibly important in order to service such throughputs without fear of running out of memory if you start falling behind. This is why the API I put forward is based on Reactive Streams. An API like the one below, I think, isn't up to the demands of microservices today, and definitely won't help take our users into the future.
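The request(n)-based pull that Reactive Streams uses for back pressure ships with JDK 9 as the java.util.concurrent.Flow API. A minimal sketch (class and method names here are illustrative) of a subscriber that only ever asks for one message at a time:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

class BackpressureDemo {
    // Consume a stream one message at a time: each request(1) is the
    // subscriber signalling it is ready for exactly one more element,
    // so the publisher can never flood it.
    static List<String> run(List<String> messages) {
        List<String> handled = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> pub = new SubmissionPublisher<>()) {
            pub.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription sub;
                public void onSubscribe(Flow.Subscription s) {
                    sub = s;
                    sub.request(1); // pull the first message
                }
                public void onNext(String msg) {
                    handled.add(msg); // "process" the message...
                    sub.request(1);   // ...then ask for one more
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            messages.forEach(pub::submit);
        } // close() completes the stream once buffered items drain
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return handled;
    }
}
```

The same contract lets a library raise the request count to process batches in parallel without ever outrunning the consumer.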


Rüdiger zu Dohna

unread,
Dec 5, 2016, 2:21:23 AM12/5/16
to MicroProfile
On Friday, December 2, 2016 at 3:16:27 PM UTC+1, Matt Pavlovich wrote:
Maybe this would be a good litmus test for any technology: How many lines of code, annotations, or concepts do you need to read/write/understand to send or receive a business message? This mail is already too long, so just a small teaser of the idea:

@MessageApi
public interface CustomerService {
    public void createCustomer(String firstName, String lastName);
}


As a teaser, I think this looks great. The devil is in the details.. what to do with POJOs?  Force everyone into XML or JSON? What about object serialization?

The current implementation converts this call into a POJO named CreateCustomer with the arguments as fields and serializes it by default as an XML text message. But you can change that to Java serialization or a mapped message, and it would be easy to add support for JSON or YAML or whatever.
 
On the consumer side, does it default to shared subscription (round-robin) or exclusive?

It defaults to the exactly-once mode from JMS MDBs... and the JMS implementation is responsible for the implementation details.
 
How would headers or meta-data (messageId, expiry, etc) be injected?

Application headers can be passed and received as annotated parameters. Control parameters can be annotated, as you suggest:
 
My concern is that this just ends up moving code into an annotation or some behind the scenes backing file.

@MessageApi(mode=LAZY_PERSIST, expiry=30000 /* ... etc. */)
public interface CustomerService {
    public void createCustomer(String firstName, String lastName);
}

... snip

@MessageApi(consume=shared)
public class CustomerServiceImpl implements CustomerService {
    public void createCustomer(String firstName, String lastName) {
        // ...
    }
}

 
What's your concern about this? If it's static, I consider annotations with solid defaults to be a very good option. They are close to the code but still allow for easy static analysis; i.e., you can write an annotation processor to warn you when annotations are inconsistent in a way that can't be expressed with type safety.

I had started to add support for some dynamic sending options, but this clutters the business-centric API, and it's especially ugly on the receiver side, which has to ignore those send parameters. But these are really advanced features, so I think it's okay, considering the `make the easy things easy and the complex things possible` motto.



BTW: I do think that standardizing the wire format would be a real benefit; but it's completely independent from standardizing the APIs.

+1  

There seem to be some good options (AMQP and STOMP come to mind), but I haven't worked with any of them yet. Defining a new standard would IMHO only make sense if they are seriously flawed. MicroProfile could pick one, define it to be required, and make it the default for all implementations, but it should be permissible to switch to a different protocol for technical reasons, like throughput, compatibility with third parties, etc.


Does anybody have a suggestion for a messaging use case that could semi-reasonably be built into the MicroProfile conference services?

Werner Keil

unread,
Dec 5, 2016, 11:47:53 AM12/5/16
to MicroProfile
I think there are a few things mixed together here.

Circuit breakers: Hystrix looks like the most popular, so probably not a thing to "standardize" here.

Why would

public interface CustomerService {
    public void createCustomer(String firstName, String lastName);
}

have a method createCustomer that's void, instead of returning the actual Customer POJO?

Werner

Rüdiger zu Dohna

unread,
Dec 5, 2016, 12:31:00 PM12/5/16
to MicroProfile
On Monday, December 5, 2016 at 5:47:53 PM UTC+1, Werner Keil wrote:
Why would
public interface CustomerService {
    public void createCustomer(String firstName, String lastName);
}

have a method createCustomer that's void, instead of returning the actual Customer POJO?
 
The invocation is going to create a message. The response is asynchronous and actually optional.

andrew_s...@uk.ibm.com

unread,
Dec 5, 2016, 1:11:48 PM12/5/16
to MicroProfile, ma...@mediadriver.com
Hi,
I've taken a look at your suggested API here: https://github.com/jroper/java-messaging. It looks quite interesting, and I have some comments.

I agree that the overall idea of considering the messages as a stream is a good one. The ideas in RxJava and Reactive Streams are very nice and expressive, so an API based on these principles seems attractive to me. I'd like to be able to use the Observable pattern for messaging applications and have producers and consumers fit in well with that pattern.

The idea of back pressure in these reactive systems is good. I can see how you could prevent overly ambitious reading from a network connection and keep control of memory use. I'm more sceptical about trying to apply the idea across network connections for a distributed publish/subscribe system.

I'm a bit surprised by the presence of the MessageBroker as a class in the API. I think of the broker as part of the messaging infrastructure, either local or remote to the code using the API, which you'd not actually represent as a class.

Do you have any views on the relative merits of Reactive Streams and RxJava as building blocks for a messaging API?

Andrew Schofield

James Roper

unread,
Dec 5, 2016, 7:42:23 PM12/5/16
to andrew_s...@uk.ibm.com, MicroProfile, Matt Pavlovich
On 6 December 2016 at 05:11, <andrew_s...@uk.ibm.com> wrote:
Hi,
I've taken a look at your suggested API here https://github.com/jroper/java-messaging. It looks quite interesting and have some comments.

I agree that the overall idea of considering the messages as a stream is a good one. The ideas in RxJava and Reactive Streams are very nice and expressive, so an API based on these principles seems attractive to me. I'd like to be able to use the Observable pattern for messaging applications and have producers and consumers fit in well with that pattern.

The idea of back pressure in these reactive systems is good. I can see how you could prevent overly ambitious reading from a network connection and keep control of memory use. I'm more sceptical about trying to apply the idea across network connections for a distributed publish/subscribe system.

I definitely don't think back pressure should propagate through the message broker. What there should be is some monitoring of queue sizes that can be used to trigger, perhaps manually but even better automatically, the provisioning of more (and, just as importantly, fewer) resources to process the message stream. Of course, all this is beyond the scope of a messaging API; it's the overly ambitious reading from a network connection that is my primary concern for back pressure.

I'm a bit surprised by the presence of the MessageBroker as a class in the API. I think of the broker as part of the messaging infrastructure, either local or remote to the code using the API, which you'd not actually represent as a class.

Take all the names I chose with a grain of salt, in fact take the whole API with a grain of salt.  I definitely have no objections to your objections.  MessageBroker is probably a badly chosen name.

Do you have any views on the relative merits of Reactive Streams and RxJava as building blocks for a messaging API?

Reactive Streams itself (which has been included in JDK 9 as the Flow API) is primarily an integration API, not an end-user API. Even though it's just 3 interfaces with a total of 7 methods, it's actually not an easy spec to implement (I implemented the Netty Reactive Streams integration; it was a big task, very easy to get things wrong). An end user should never implement the Reactive Streams interfaces themselves; rather, they would use a Reactive Streams implementation, such as RxJava or Akka Streams (a little-known fact is that the Reactive Streams initiative was primarily driven by the Akka team), to handle the stream.

So, the simplest thing you could do when handling a stream would essentially be a foreach, imperatively handling each message. If that's all you ever do, you're not going to get much value from Reactive Streams. In Akka Streams, for example, what that might look like in practice is doing a mapAsync on the stream: this lets you say how many elements you want to handle in parallel, and the handling of each element returns a future that is redeemed when processing of that element has finished. That processing might be a database update, for example. It's useful and puts control in the end user's hands, but nothing you do with mapAsync couldn't be achieved with a slightly simpler API, perhaps with some configuration to specify things like parallelism.
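This is not Akka Streams' actual mapAsync, but the idea it describes (at most n element-handling futures in flight at once, results collected in input order) can be sketched with plain CompletableFutures and a semaphore:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Function;

class MapAsyncSketch {
    // Apply an async function to each element with at most `parallelism`
    // calls in flight; results are collected in input order.
    static <A, B> List<B> mapAsync(List<A> in, int parallelism,
                                   Function<A, CompletableFuture<B>> f) {
        Semaphore permits = new Semaphore(parallelism);
        List<CompletableFuture<B>> futures = new ArrayList<>();
        for (A a : in) {
            permits.acquireUninterruptibly();        // back pressure: wait for a free slot
            futures.add(f.apply(a).whenComplete((r, t) -> permits.release()));
        }
        List<B> out = new ArrayList<>();
        futures.forEach(fut -> out.add(fut.join())); // in-order results
        return out;
    }
}
```

The `acquire` before each call is exactly the pushback a stream library applies upstream when the configured parallelism is exhausted.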

Where Reactive Streams gets more useful is when you want to start plumbing streams together. For example, perhaps I want to take the messages, look up some data in my database to add to them, then split the stream in two directions: one going to another topic, the other feeding into an analytics loop that may even feed back into my earlier processing in a cycle. How does back pressure get handled now? How many messages can be in flight at once? What happens in case of failure? If I'm sending to two destinations and just one of them is slow, how should the whole stream react? This is where things like Akka Streams and RxJava give you the tools to answer these questions, and to implement the answers in a declarative and simple fashion. Implementing something like this yourself would not likely be feasible.

But the next thing that Reactive Streams really has in its favor is that, because it's a well-specified API underneath, you can choose the best tool for each stage of the processing: you can use Akka Streams in one stage, feed into RxJava for another, perhaps integrate that with Spark Streaming to do your analytics, incorporate a stream of database elements from a database driver that implements Reactive Streams (examples off the top of my head include Slick or ReactiveMongo), and so on. And you can trust that errors will be propagated in a predictable way, and that back pressure will be handled in a predictable way, because these things have all been carefully thought out and well specified in the Reactive Streams spec, verified by the Reactive Streams TCK, etc.

I think it's one of those things where, without Reactive Streams, we can probably cover maybe 70% or even 90% of use cases, but then users would hit a roadblock where it gets incredibly difficult to do anything more. With Reactive Streams, you add a negligible amount of complexity for that first 90% of use cases (in that you have to select a Reactive Streams implementation to handle the stream), but users can now easily implement 100% of their use cases with a complexity commensurate with the complexity of the problem they are looking to solve.

Regards,

-- 

James Roper

unread,
Feb 8, 2018, 1:08:52 PM2/8/18
to MicroProfile
Hi Emily,

I think it would be worthwhile creating a repo. I think perhaps the following would be useful:

* Decide on a first use case to demonstrate.
* Come up with a minimum API that just allows that use case to be demonstrated.
* Implement the use case using that API.
* Do the minimum required to implement that API to get the example working.

I think this might create a good starting point to work with.

Here's a very simple use case which I think is very easy for people to relate to:

* An email notification service needs to keep track of the email address of users, so that it can turn notification events into emails.
* This service will do this by subscribing to user created and user updated messages on a user details topic, and will use those to keep its own local mapping of user ids to emails up to date.

One nice thing about this use case is that it can be extended in future to demonstrate other things, for example, transforming a stream of notifications into emails, and more complex things like notifying groups of users, or fanning out to multiple notification types (SMS, mobile push, etc). Does that sound like a good starting point to you?
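Assuming a hypothetical event shape (a user ID plus the current email on both created and updated events), the local mapping the notification service maintains could be as small as this sketch:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical event carried on the user details topic; the real schema
// would come from the user service.
class UserEvent {
    final String userId, email;
    UserEvent(String userId, String email) { this.userId = userId; this.email = email; }
}

// The notification service's local read side: a projection of the user
// details topic into an id -> email map, updated as events arrive.
class EmailDirectory {
    private final Map<String, String> emails = new ConcurrentHashMap<>();

    void onUserEvent(UserEvent e) {
        emails.put(e.userId, e.email); // created and updated handled alike
    }

    Optional<String> emailFor(String userId) {
        return Optional.ofNullable(emails.get(userId));
    }
}
```

The messaging API's job would be to deliver each topic event to `onUserEvent`; turning notification events into emails then becomes a local lookup.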

Regards,

James

On 7 February 2018 at 17:26, 'Emily Jiang' via Eclipse MicroProfile <microp...@googlegroups.com> wrote:
Trying to reactivate this thread. This is a very useful discussion. What is the next step? Should a repo be created to demonstrate the APIs etc.?

Emily




--
James Roper
Senior Octonaut

James Roper

unread,
Feb 15, 2018, 9:00:40 PM2/15/18
to MicroProfile
Hi Emily,

Before I go and create a project, I'm wondering whether at this stage this belongs as a new MicroProfile project or as a new EE4J project. From what I've seen of MicroProfile, there tends to be a preference for adopting or adapting existing specs, rather than coming up with new ones. In contrast, EE4J is all about creating and maintaining specs. I think we have a much better chance of success with this if we go down the EE4J path, even if that means it may take a little longer for the project to get up and going while we wait for the EE.next working group to bed down the EE4J processes. I think we're at a unique time right now where a new spec like this could ride a wave of enthusiasm for new life in EE4J. This messaging API has the potential to be a fairly important new spec, and from EE4J's perspective it might be a great first new spec to produce, so there's likely to be a lot of enthusiasm for its success, which would likely help.

So I'm thinking at this stage that proposing the minimal APIs, implementing them on top of the existing products we have to ensure that they are realistic and reflect what is already being done, and then creating a new Eclipse project (following the process here: https://www.eclipse.org/community/eclipse_newsletter/2014/july/article2.php) may be a good way to go.

Cheers,

James

On 9 February 2018 at 11:30, 'Emily Jiang' via Eclipse MicroProfile <microp...@googlegroups.com> wrote:
Hi James,

Can you do a PR in https://github.com/eclipse/microprofile-sandbox with what you mentioned in your reply (readme, and/or simple APIs)? Once it is done, we can have a brief discussion. If the content looks good, we will get a dedicated repo created and port the readme over.

Thanks,

Emily



--
James Roper
Senior Octonaut

Lightbend – Build reactive apps!
Twitter: @jroper


Emily Jiang

unread,
Feb 16, 2018, 5:07:45 AM2/16/18
to Eclipse MicroProfile
Hi James,


> From what I've seen of Microprofile, there tends to be a preference of adopting or adapting existing specs, rather than coming up with new specs.

No. MicroProfile exists to define a programming model for cloud-native microservices. In the first release of MicroProfile, we adopted some EE7 technologies: CDI 1.2, JAX-RS 2.0, JSON-P 1.0. In subsequent MicroProfile releases, we defined new programming models, e.g. Config, Fault Tolerance, Metrics, JWT, Open Tracing, Open API, Rest Client, etc. These are new specifications created directly by the MicroProfile community.

As for this topic, I think if a simple API can be introduced in MicroProfile to benefit the development of microservices, it will be great. The reactive aspect is attractive. Focusing on the following, as per your email, sounds good.

* Decide on a first use case to demonstrate.
* Come up with a minimum API that just allows that use case to be demonstrated.
* Implement the use case using that API.
* Do the minimum required to implement that API to get the example working.


However, if the new APIs have dependencies on other Java EE technologies, e.g. JPA, they might be a better fit for the EE4J umbrella.

By the way, since you are uncertain, it is best to use the sandbox (https://github.com/eclipse/microprofile-sandbox) to demonstrate the usage. It will become much clearer once you start putting thoughts and APIs down there.

Thanks
Emily

Kevin Sutter

unread,
Feb 19, 2018, 11:38:56 PM2/19/18
to Eclipse MicroProfile
James,
Let me give you my perspective...  I'm involved with both projects -- as co-project lead for MicroProfile and as a PMC member for EE4J.  From your message thus far, it would seem that you are looking to get started now with experimenting and innovating with a simplified messaging model.  As Emily has pointed out, we have been innovating in the microservices arena since the beginning of MicroProfile.  Yes, we started with the three Java EE specs as a base, but since that time, we have defined eight new (and improved) APIs to aid with the development of microservices.

EE4J is making fantastic progress, but we're still working through the initial contribution process.  We're just now proposing the next set of contributions to Eclipse.  We probably still have another third (approx) of the Java EE components to transfer over.  That would be the API and RI contributions.  We're still working through the TCK/CTS contributions and the specification process.  Like I said, we're making good progress, but we're not quite there...

To that end, if you want to make immediate progress with defining messaging for microservices, I would suggest the MicroProfile route.  This would provide a solid base to start with and build upon.  And, as EE4J solidifies, the eventual integration of MicroProfile features seems likely.  So, starting with MicroProfile does not preclude you from eventually migrating to EE4J.

Hope this helps,
Kevin

James Roper

unread,
Feb 20, 2018, 1:05:42 AM2/20/18
to MicroProfile
Hi Emily and Kevin,

One thing that concerns me with the MicroProfile route is that if you look at this thread, we've been talking about messaging APIs for almost 18 months. I even put forward a possible API for discussion, but as far as next steps are concerned, there's always been an invisible wall where we've been met with unclear answers about how to actually start a project, make progress, etc. I'm worried that going down this route, we will only encounter more of the same. I have no insight at the moment as to how successful any attempt at contribution to MicroProfile will be; I've only got this past experience, which hasn't been all that positive.

In contrast, on the EE4J side, even without EE4J policies themselves being established, we can see very clear policies published for how to start a new Eclipse project, and we've had people involved with EE4J not just ask but push us very strongly into contributing. Of course it remains to be seen whether we'll encounter the same wall we've encountered in MicroProfile, but we feel somewhat more optimistic here.

Timing-wise, we're really interested in a Reactive Streams based approach. Reactive Streams requires JDK9 (since java.util.concurrent.Flow was introduced in JDK9), and I don't think MicroProfile or EE4J are anywhere near adopting JDK9 (or any subsequent JDK version), particularly given the current state of flux that Oracle has created regarding support periods for OpenJDK. So even if we can officially start sooner with MicroProfile, it's going to be some time before any final spec can be published.
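As an aside for readers unfamiliar with it, the java.util.concurrent.Flow API mentioned above is quite small. Here is a minimal, self-contained JDK 9 example (pure JDK; no MicroProfile or messaging types involved) showing a subscriber receiving items with backpressure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    /** Publishes the given items and returns them as received by a simple subscriber. */
    static List<String> collect(String... items) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // backpressure: request one item at a time
                }
                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1);
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            for (String item : items) {
                publisher.submit(item);
            }
        } // close() signals onComplete once pending items are delivered
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect("hello", "world")); // [hello, world]
    }
}
```

SubmissionPublisher is the JDK's reference Publisher implementation; the request(1) calls are how a subscriber signals demand, which is the mechanism a messaging spec built on Flow would inherit for free.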

I'll be happy to start the project in MicroProfile if my fears can be allayed. For example, maybe no one in MicroProfile will be interested in the approach that we'd like to propose; the thing I need to know is how much effort it will take me to get to the stage where we know whether the spec is going to fly. I haven't got a lot of feedback so far on whether the things I've proposed are liked or not. I need to report to my bosses a set of milestones and criteria for continued investment, so that we don't go down the rabbit hole of spending a year investing in a spec that doesn't get adopted.

Cheers,

James


Mark Little

unread,
Feb 20, 2018, 4:45:44 AM2/20/18
to microp...@googlegroups.com
I’m a bit late to this as I’ve been travelling but let me comment on a few things:

On 20 Feb 2018, at 06:05, James Roper <ja...@lightbend.com> wrote:

Hi Emily and Kevin,

One thing that concerns me with the MicroProfile route is that if you look at this thread, we've been talking about messaging APIs for almost 18 months. I even put forward a possible API for discussion, but as far as next steps are concerned, there's always been an invisible wall where we've been met with unclear answers about how to actually start a project, make progress, etc.

I know of no invisible wall. I can't say why things haven't progressed, but it could be a combination of members' priorities, no one else in the group being interested (let's get more people involved), or a lack of persistence on your part :)

Speaking solely for Red Hat, we’ve had similar issues with adding transaction support to MicroProfile but I’ve made sure my team have pushed on this and we are starting to see progress. Therefore, it’s good you are back to rekindle the conversation.

I'm worried that going down this route, we will only encounter more of the same. I have no insight at the moment as to how successful any attempt at contribution to MicroProfile will be, I've only got this past experience which hasn't been all that positive.

In contrast, on the EE4J side, even without EE4J policies themselves being established, we can see very clear policies published for how to start a new Eclipse project,

And they are the same policies for starting any new Eclipse project. With my EE4J PMC hat on, I can tell you we have no extra policies in place yet. This means if you start an Eclipse project, it has as much chance of getting into EE4J eventually as any other pre-existing Eclipse, or non-Eclipse, project. If you want to start an Eclipse project, then have at it :)

and we've had people involved with EE4J not just ask but push us very strongly into contributing.

Reiterating: you can contribute to any existing Eclipse project. You cannot create new projects at this stage which would be guaranteed to become part of EE4J. You can contribute to existing EE4J projects under certain restrictions which were published recently.

MicroProfile is several steps ahead of EE4J at this point. Slightly related: it's also why I'm not pushing the Red Hat team to suggest we merge MicroProfile with EE4J; we're not quite there yet with EE4J, and I don't want to see the excellent momentum we've built up around MicroProfile dampened.

Of course it remains to be seen whether we'll encounter the same wall we've encountered in MicroProfile, but we feel somewhat more optimistic here.

Timing-wise, we're really interested in a Reactive Streams based approach. Reactive Streams requires JDK9 (since java.util.concurrent.Flow was introduced in JDK9), and I don't think MicroProfile or EE4J are anywhere near adopting JDK9 (or any subsequent JDK version), particularly given the current state of flux that Oracle has created regarding support periods for OpenJDK. So even if we can officially start sooner with MicroProfile, it's going to be some time before any final spec can be published.

I would hope that any spec, whether in EE4J or MicroProfile, does not get published until there is experience driving it, and that experience must come from more than one company, group, or individual. Therefore, I'm not sure why the rush.


I'll be happy to start the project in MicroProfile if my fears can be allayed. For example, maybe no one in MicroProfile will be interested in the approach that we'd like to propose - the thing I need to know is how much effort will it take me to get to that stage where we know that the spec is not going to fly?

Check out what we’re doing around transactions for a path forward. It may help.

I haven't got a lot of feedback so far on whether the things I've proposed are liked or not. I need to report to my bosses a set of milestones and criteria for continued investment, so that we don't go down the rabbit hole of spending a year investing in a spec that doesn't get adopted.

Red Hat is interested in this topic but we’ve been somewhat diverted due to other priorities. But I’ve asked some in my team to try to re-engage here.


Mike Croft

unread,
Feb 20, 2018, 11:21:11 AM2/20/18
to Eclipse MicroProfile
Hi James/Mark,

How about this for some practical next steps:

  • Let's use the MP-Sandbox repo more as an incubating space and start a messaging spec there
  • We (I) can start a new Gitter chat room for it to have more immediate discussions
  • We can set up a fortnightly hangout for MP-Sandbox where we can discuss in-progress specs
  • This can either be a single working group or a couple based on the kind of APIs being discussed.

This approach may not scale so well, so I would propose aiming to spend 3 months in an "incubating" cycle and then presenting the spec to the general hangout or this group for inclusion as a more formal spec.

I'm strongly against adding more ceremony to the process, so the above points are NOT requirements, but rather suggested steps to help in the development of new things.


What do you think? I would be happy to contribute to get these things started, but I'm spread quite thin at the moment, so probably couldn't commit to really working on this. There may well be other people keen to get involved with things like messaging, both in the group and in the community, who might want to help, though.

(The same goes for the transaction/LongRunningActions proposal - we've seen other specs move along very well when they have chat/calls etc set up)

Justin Ross

unread,
Feb 20, 2018, 12:05:48 PM2/20/18
to Eclipse MicroProfile
On Tuesday, February 20, 2018 at 8:21:11 AM UTC-8, Mike Croft wrote:

+1 from my perspective.  I think a regular time to convene is a great way to add some rigor and get more engaged.

Kevin Sutter

unread,
Feb 20, 2018, 12:11:31 PM2/20/18
to MicroProfile
Good intro and offer of assistance, Mike.  Thanks.

The other thing to clarify for the audience...  The term "projects" is being used in several contexts.  Let me explain the topology a bit.  EE4J is a top-level project at Eclipse, just like the Technology project.  Individual components of EE4J are separate Eclipse projects, kind of like "child" projects: EE4J is the top-level project with the other Java EE projects (APIs, RIs, etc.) as sub-projects.  If this Messaging proposal would like to go the route of EE4J, then a separate Eclipse project would need to be proposed and accepted.

On the other hand, MicroProfile is a single Eclipse project under the Technology top-level project.  All of our "sub-projects" are actually just components of the MicroProfile umbrella.  We have defined a simple process (simplicity being in the eye of the beholder, I suppose) for defining the APIs, RIs, and TCKs for each of the component features (i.e. Config, OpenAPI, RestClient, etc.).  The only project visible to Eclipse is MicroProfile.  These other components are just considered part of the MicroProfile project.  Thus, in my mind, it's easier to get started as a component of MicroProfile.  IMHO.  :-)

Anyway, I hope that helps.

Thanks, Kevin


Ondro Mihályi

unread,
Feb 20, 2018, 12:31:24 PM2/20/18
to Eclipse MicroProfile
Hi James,

I remember when the discussion started more than a year ago, and I thought it was going in a good direction. A problem was that it was at an early stage of the MicroProfile project: the official process of moving forward with specifications was too formal and cumbersome, and MP was also moving to Eclipse at the time, which slowed everything down.

I'm happy that you've restarted the discussion now that we are well established in Eclipse and the process of moving forward is much lighter.

Feel free to attend the top-level MicroProfile hangout today at 7pm GMT (info to connect here: https://wiki.eclipse.org/MicroProfile/MicroProfileLiveHangouts).

As Mike has explained, we can incubate the ideas in the Sandbox repo: https://github.com/eclipse/microprofile-sandbox. Initially, you can sign the Eclipse CLA and send PRs, and project committers will accept them after a discussion.

To discuss further, teams usually use a separate Gitter chat room and a weekly hangout. You may explore documentation for other specifications in the wiki, e.g. for the REST client: https://wiki.eclipse.org/MicroProfile/RESTClient

When there's a group represented by at least 2 vendors (including the submitter), we can create a dedicated repo and start working towards version 1.0.

I hope it's clear, and I don't see any obstacles, besides that for some steps you'll need to coordinate with the committers until you become a committer yourself. We have no problem accepting new committers once they prove sufficiently dedicated to the project.

Ondro

Clebert Suconic

unread,
Feb 21, 2018, 10:58:16 AM2/21/18
to MicroProfile
When we discussed this back then, the direction on transactions wasn't clear to me... transactions or no transactions? So I didn't know the direction.

So, what is the direction now? Is that defined ground?

I would like to get involved in this again, and I want to be totally independent from JMS. So... where do you guys usually hang out for this?



--
Clebert Suconic

James Roper

unread,
Feb 21, 2018, 6:14:04 PM2/21/18
to MicroProfile
Hi all,

Thanks for the feedback. I'm coming round to starting it on MicroProfile, just have to confirm with those above me. Also, thanks for the invite to the MicroProfile Hangout, unfortunately it was at 6am my time (Australia) and I didn't get the message until 8am when I first checked my emails for the day, but I've put it in my calendar and so should be able to join the next one.

I'll talk more on Gitter to work out specifics of what to do next.

Cheers,

James




Ondro Mihályi

unread,
Feb 21, 2018, 6:57:43 PM2/21/18
to Eclipse MicroProfile
Hi Clebert,


I would like to get involved in this again, and I want to be totally independent from JMS. So... where do you guys usually hang out for this?

We're using gitter.im for chat - I've created a room for the sandbox repository to discuss ideas: https://gitter.im/eclipse/microprofile-sandbox

For telco, we use either Hangouts or, recently, the zoom.us instance at Eclipse: https://eclipse.zoom.us/j/949859967
If you want to discuss messaging there, feel free to propose a time (a Doodle poll has always worked well for us). Or we may discuss it at the next top-level telco on Tuesday in 2 weeks.

--Ondro

Mark Little

unread,
Feb 22, 2018, 5:52:30 AM2/22/18
to microp...@googlegroups.com
You mean transactions in the scope of what James is proposing or transactions in the scope of what Tom Jenkinson has proposed :) ? Would clearly be good if there was some convergence in that area.

Mark.

Clebert Suconic

unread,
Feb 22, 2018, 9:04:13 PM2/22/18
to MicroProfile
I have been away from this discussion for a while... I don't want to drive anything until I've caught up; I was just trying to catch up.

I believe users will always want distributed transactions on anything (as much as it complicates things, and as much as we hate it). How that translates to the APIs is a different subject. I'm hoping it can be transparent, working in cases where the implementation supports it or not.

I will join the discussions, and I'm looking forward to collaborating positively here.

Mark Little

unread,
Feb 23, 2018, 4:28:38 AM2/23/18
to microp...@googlegroups.com

Matt Pavlovich

unread,
Feb 23, 2018, 11:00:34 AM2/23/18
to Eclipse MicroProfile
+1. I'm likewise catching back up after a long period away. I agree with Clebert: eventually, use cases drive the requirement for transactions, and it's generally best to plan for that up front. I think we can take lessons learned from past APIs (JMS) where the transaction use cases were muddled.

Goals:

1. Simple straightforward non-transacted use cases
2. Consistent approach for transacted use cases (per-message, per-session/batch, etc)

James Roper

unread,
Feb 27, 2018, 11:04:52 PM2/27/18
to MicroProfile
For those not in the Gitter microprofile-sandbox channel, here's a proposal that I just made:


It defines an absolute minimum use case, the minimum API necessary for implementing that use case, a working implementation of that API, and a running example app that implements the use case.


Robbie Gemmell

unread,
Feb 28, 2018, 12:24:47 PM2/28/18
to microp...@googlegroups.com
Hi James,

This looks interesting. I had some questions after a quick initial look.

Is there a specific behaviour intended for the commit() method on the
envelope? For example, does it indicate that all previous messages in
the stream are now processed, or simply that that message alone has
been processed?

The impl only covers the Subscriber usage case so far, as the readme
makes clear; however, I wondered whether you had any specific
thoughts on the Publisher side yet?

Robbie

Clebert Suconic

unread,
Feb 28, 2018, 5:43:25 PM2/28/18
to MicroProfile
I had the impression that commit was meant to ack a receipt. Isn't that right?

Emily Jiang

unread,
Feb 28, 2018, 6:42:16 PM2/28/18
to Eclipse MicroProfile
Overall the proposal looks very good. I think this proposal is needed in MicroProfile. Thanks James!

A couple of comments:
1. I know you mentioned your proposal is based on JDK 9. I suggest you use JDK 8, so that the stream spec can be released earlier, as all of our current specs are based on JDK 8.
2. As for commit, if it is just an ack as mentioned by Clebert, changing the method name to ack might be less confusing.
3. As Robbie said, it might be good to add the message producer part as well. I guess it would be similar to the consumer. Maybe Ingest can be updated to cover both producer and consumer.

By the way, it is really good to do the CDI-based model, which fits in well with other specs.

Thanks
Emily

James Roper

unread,
Feb 28, 2018, 7:23:13 PM2/28/18
to MicroProfile
Hi all,

With regards to publishing, I think there are two options that could be offered (and possibly both should be).

First is to offer a Publisher-based API. This is good when the source of messages fits well with a publisher, for example, when the source originates from a stream. One place where this works particularly well is where the database provides a stream of events (possibly implemented through polling, depending on the database). Such an approach fits very naturally with event sourcing, but even with CRUD persistence it can be used to allow a message publish and a CRUD update to be done in a single database transaction without distributed transactions: you publish a message by writing it to a message table, and the message is then published to the broker by polling that table.

But of course, there are many use cases where this is not a good fit: where people want to publish a message as the primary side effect of a REST POST, for example, or publish a message along with a database update (perhaps using transactions, perhaps not), or just emit messages on certain application events in more of a monitoring capacity. For that we do need a non-streams-based, imperative solution. I'd suggest that we allow injecting an interface that provides this method of publishing, perhaps annotated with @Egress.
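To make the imperative option a bit more tangible, here is a rough sketch. Every name in it (Egress, MessagePublisher, the "item-events" channel) is hypothetical, not part of any existing spec; the publish method returns a CompletionStage so the caller can react to the broker acknowledging the message:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Hypothetical shape of an injectable, imperative publishing interface.
public class EgressSketch {
    /** Hypothetical publisher interface; async so callers can observe broker acks. */
    interface MessagePublisher<T> {
        CompletionStage<Void> publish(T message);
    }

    // In CDI this might be injected, e.g.:
    //   @Inject @Egress("item-events") MessagePublisher<ItemUpdated> events;
    // Here we wire a trivial in-memory implementation by hand instead.
    public static void main(String[] args) {
        MessagePublisher<String> events = message -> {
            System.out.println("published: " + message);
            return CompletableFuture.completedFuture(null); // pretend broker ack
        };
        // e.g. the primary side effect of handling a REST POST:
        events.publish("item-created").toCompletableFuture().join();
    }
}
```

The interesting design question is whether publish should complete when the message is handed to the implementation or only once the broker has acknowledged it; the signature above deliberately leaves room for the latter.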

I would be very interested to hear what people think of either/both publishing approaches. I'll implement the publisher use case, as that's quite straightforward, and I'll have a go at implementing an injectable imperative publisher, though that will push my knowledge of CDI.

Also related: in microservices you commonly have use cases that are both ingress and egress, i.e. where you're processing a stream of messages and producing a new stream, so we could offer a Flow.Processor-based solution for that. We may also want to consider whether more complex graphs should be supported, e.g. fan-in/fan-out, balancing, broadcast, etc. I think this is where it could get very interesting, but it's also where it could get very complex, and admittedly I don't have a lot of experience providing framework-level APIs that let users implement these more complex graphs. So I think limiting the scope to just processors would be good, while keeping the more complex graphs in mind.

For the commit method: this is a place where I think we need a lot more discussion. The suggestion to call it ack is a good one, I think. But the approach where committing/acking is done explicitly with a method call (i.e. what I've implemented here) has a number of limitations:

1) It requires the developer to implement that manually, meaning more boilerplate, and more implied knowledge of how to use the API that a developer has to carry around.
2) It may require (depending on the underlying queuing mechanism) messages to be acked in order, since otherwise you might end up acking a message before a prior one fails, causing that prior message to be dropped. This is very easy to get wrong: in the example code, I could have simply acked the message using a thenCompose call on the CompletionStage returned by my database update, but doing so would have potentially allowed acks out of order. Instead, I did it in a subsequent map on the stream, which ensured it was done in order. A solution is to have the underlying implementation put the acks back in order, but this comes with its own limitations: if you don't ack something, for example because you filter the stream and simply drop messages without acking them, then that's going to block the acks of all subsequent messages.
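To make the ordering hazard concrete, here is a small self-contained sketch. Plain CompletableFutures stand in for the async database updates; nothing here comes from any messaging API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AckOrdering {
    /**
     * Simulates the naive pattern: ack each message as soon as its own async
     * "database update" completes, with message 2's update finishing first.
     */
    static List<Integer> naiveAckOrder() {
        List<Integer> acks = Collections.synchronizedList(new ArrayList<>());
        CompletableFuture<Void> update1 = new CompletableFuture<>();
        CompletableFuture<Void> update2 = new CompletableFuture<>();

        // Each message acks independently when its own update completes.
        update1.thenRun(() -> acks.add(1));
        update2.thenRun(() -> acks.add(2));

        update2.complete(null); // the second message's update finishes first
        update1.complete(null);
        return acks;
    }

    public static void main(String[] args) {
        // Message 2 is acked before message 1. If the broker treats an ack as
        // a cumulative "everything up to here is processed" commit and a crash
        // occurs between the two acks, message 1 is silently skipped on restart.
        System.out.println(naiveAckOrder()); // [2, 1]
    }
}
```

Doing the ack in an in-order stage of the stream instead (the "subsequent map" approach described above) would force the acks back to [1, 2].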

Another approach, one that we have used in Lagom but which also comes with downsides, is, instead of offering an explicit ack, to provide an API that takes a Processor (which is both a Subscriber and a Publisher) rather than a Subscriber, with the contract that for each message successfully processed, you emit a message from the Publisher side indicating that that message should be acked. In Lagom, the API allows emitting a singleton stateless object called Done (in plain JDK scenarios, Void is used with a null value). This approach can be lighter weight for users, but it also has its downsides:

1) If using a stateless object, then the application developer *must* ensure a 1:1 relationship between consumed and emitted messages. We've seen problems where people do filters or batching operations on the streams, which means far fewer elements are output, which means committing lags far behind emission, and when the stream restarts it reprocesses a number of messages that have already been published. Ordering is also important: the Done elements emitted must correspond to the same elements, in the same order, that were input, though most operations offered by common streaming libraries do maintain this order.
2) If using a stateful object, for example an object that contains the identifier, then application developers need to carry that identifier through their processing, which can increase complexity, and ordering remains just as important.
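A deliberately simplified, synchronous sketch of the Done-emitting contract. All names here are illustrative (Done stands in for Lagom's marker object), and a real implementation would be a Reactive Streams Processor with backpressure, not a loop; the point is only the 1:1 message-to-Done relationship:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class DoneAckSketch {
    /** Hypothetical stateless marker emitted once per successfully processed message. */
    enum Done { INSTANCE }

    // Hypothetical framework behaviour: messages flow in, and the framework
    // acks one message (in input order) for every Done that comes back out.
    static <T> List<Done> runStream(List<T> messages, Function<T, Done> handler) {
        List<Done> acks = new ArrayList<>();
        for (T msg : messages) {
            acks.add(handler.apply(msg)); // an exception here stops the stream,
                                          // so no later message is acked first
        }
        return acks;
    }

    public static void main(String[] args) {
        List<Done> acks = runStream(List.of("a", "b", "c"), msg -> {
            System.out.println("processed " + msg);
            return Done.INSTANCE; // must stay 1:1 with consumed messages;
                                  // filtering or batching here would make
                                  // commits lag behind what was processed
        });
        System.out.println(acks.size() + " acks emitted"); // 3 acks emitted
    }
}
```

The failure behaviour is the key property: because acks are emitted from the same stream, a processing failure halts ack emission, so out-of-order acking cannot happen by accident.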

Another problem is that this approach is a little different from what your run-of-the-mill enterprise Java developer is used to: they are used to imperative methods for causing side effects (such as an ack() method), rather than a declarative approach of producing a stream that describes what should happen.

The advantage of this approach is that you don't have to worry about forgetting to ack, because the API forces you to return something that emits the acks, and it is also much easier to ensure that you don't ack out of order, since a failure will cause the stream to fail before any subsequent ack messages are emitted.

What I might do is implement the second case there: allow an @Ingest method to return something like a Processor<Envelope<Message>, MessageAck>, so you can see what it looks like to use such an API. At this stage I don't know which approach is best; I've had experience both using and implementing both APIs. I would be very interested in people's thoughts.

Also, any thoughts on the Ingress/Egress naming? This is the first time I've used that naming in an API; I got the idea from a colleague who is using it in an API they are writing for a general streaming framework (which I hope, and will ensure, will implement this spec).

Thanks for the feedback so far.

Cheers,

James


Clebert Suconic

unread,
Feb 28, 2018, 8:37:22 PM2/28/18
to MicroProfile
I have some comments embedded in your original email.

> But of course, there are many use cases where this is not a good fit, where
> people either want to publish a message as the primary side effect of a REST
> POST for example, or they may want to publish a message along with a
> database update, perhaps using transactions or maybe not, or they may just
> want to emit messages on certain events in the application in more of a
> monitoring capacity. For that we do need a non streams based imperative
> solution. I'd suggest that we allow injecting an interface that provides
> this method of publishing, perhaps annotated with @Egress.

I like the idea of CDI as a tool to help with boilerplate, but there
should be a way to instantiate with objects directly as well.
Or is everything else in MicroProfile based on CDI, and that is the
fashion used?


>
> I would be very interested to hear what people thought on either/both
> publishing approaches. I'll implement the publisher use case, as that's
> quite straight forward, and I'll have a go at implementing injecting an
> imperative publisher, though that will be pushing my knowledge of CDI to
> make that work.
>
> Also related, in microservices commonly you have use cases that are both
> ingress and egress, ie where you're processing a stream of messages and
> producing a new stream, and so we could offer a Flow.Processor based
> solution for that.

In my experience, what about just keeping it simple: send a message,
receive a message?
If users want to forward a message, that should be just like forwarding
an email to a new address, IMO.


> We may also want to consider whether more complex graphs
> should be supported, eg fan in/fan out, balancing, broadcast etc.

Shouldn't that be up to the implementations? As far as the client is
concerned all you need is to receive and send messages.



> this is where it could get very interesting, but its also where it could get
> very complex, and admittedly these more complex graphs I don't have a lot of
> experience in providing framework level APIs for allowing users to implement
> them. So I think limiting the scope to just processors would be good, but we
> should keep in mind the more complex graphs.

If we keep the basic use case well done, users can compose complex usage
as they wish, just like a math formula. Give the user a simple tool and
that will be very powerful.


>
> For the commit method - this is a place where I think we need to have a lot
> more discussion. The suggestion to call it ack I think is a good one. But
> the approach where committing/acking is done explicitly with a method call
> (ie, what I've implemented here) has a number of limitations -

I think ack should mean ack. Let's keep it simple?


ACK should do whatever the implementation need to control redeliveries.

If you start mixing semantics, you force the user to implement a
transaction coordinator based on weird semantics.


>
> 1) It requires the developer to manually implement that, meaning more boiler
> plate, and it's more implied knowledge of how to use the API that a
> developer has to know.
> 2) It may require (depending on the underlying queuing mechanism) messages
> to be acked in order, since otherwise you might end up acking a message
> before the prior one fails, causing that message to be dropped. This is very
> easy to get wrong, in the example code, I could have simply acked the
> message using a thenCompose call on the CompletionStage returned by my
> database update, but doing so would have potentially allowed acking to be
> done out of order. Instead, I did it in a subsequent map on the stream,
> which ensured it was done in order. A solution to this is to have the
> underlying implementation put the acks back in order, but this comes with
> its own limitations, if you don't ack something - for example, if you decide
> to filter the stream and you simply drop messages without acking them, then
> that's going to block all subsequent messages to be dropped.

That is why I have been suggesting a simple API to help users
control completion stages or compensations (aka transactions).
If you don't provide such a tool, users will have to do these
compensations themselves.

How can I make a proposal? Just submit a PR to the repository?

Just because it sucked before, it doesn't mean we can't make it simple now.


>
> 1) If using a stateless object, then the application developer *must* ensure
> a 1:1 relationship between consumed and emitted messages. We've seen
> problems where people either do filters, or batching operations, on the
> streams, and this means far less elements are output, which means committing
> lags far behind the emission, and means when the stream restarts, it
> reprocessing a number of messages that have already been published. Ordering
> is also important, that the done elements output correspond to the same
> elements in the same order that were input. Though most operations offered
> by common streaming libraries do maintain this order.
> 2) If using a stateful object, for example, an object that contains the
> identifier, then it does mean that application developers need to carry that
> identify through their processing, which can increase complexity, and also
> then ordering is also very important.


I would prefer stateful. All the message systems I know would benefit
from keeping a connection underneath the implementation. That will help
the scalability of the implementation, and it would make it easier for
users to learn the API.

>
> Another problem is that this approach is a little bit different compared to
> what your run of the mill enterprise Java developer is used to, they are
> used to imperative methods for causing side effects (such as an ack()
> method), rather than a declarative method of producing a stream that
> describes what should happen.
>
> The advantage of this approach is that you don't have to worry about
> forgetting to ack, because the API forces you to return something that emits
> the acks, and also it is much easier to ensure that you don't ack out of
> order, since a failure will cause the stream to fail before any subsequent
> ack messages are emitted.
>
> What I might do is implement the second case there, allow @Ingest method to
> return something like a Processor<Envelope<Message>, MessageAck>, so you can
> see what it looks like to use such an API. At this stage, I don't know which
> approach is best, I've had experience both using and implementing both APIs.
> Would be very interested in peoples thoughts.
>
> Also, any thoughts on the Ingress/Egress naming? This is the first time I've
> used that naming in an API, I got the idea from a colleague who is using
> that naming in an API they are writing for a general streaming framework
> (that I hope/will ensure will implement this spec).

Even though I live in the United States, I'm originally from Brazil.
Ingress and Egress at first sound a bit scary and complicated for a
non-native speaker (perhaps it's just me).

Would @Outgoing and @Incoming sound weird?

If we can't find any other names I'm fine with that... just pointing out
that it feels complex for a non-native speaker.

James Roper

unread,
Feb 28, 2018, 9:36:33 PM2/28/18
to MicroProfile
On 1 March 2018 at 12:37, Clebert Suconic <clebert...@gmail.com> wrote:
I have some comments embedded in your original email.

> But of course, there are many use cases where this is not a good fit, where
> people either want to publish a message as the primary side effect of a REST
> POST for example, or they may want to publish a message along with a
> database update, perhaps using transactions or maybe not, or they may just
> want to emit messages on certain events in the application in more of a
> monitoring capacity. For that we do need a non streams based imperative
> solution. I'd suggest that we allow injecting an interface that provides
> this method of publishing, perhaps annotated with @Egress.

I like the idea of CDI as a tool to help with boilerplate, but there
should be a way to instantiate with objects directly as well.
Or is everything else in MicroProfile based on CDI, and that is the
fashion used?

As I understand it, CDI is fundamental to MicroProfile (and Jakarta EE), and seems to be the number one thing that gets brought up when talking about proposals with MicroProfile. I have no strong opinion on it, but I suspect others do, which is why I've gone with a fundamentally CDI based approach.

> I would be very interested to hear what people thought on either/both
> publishing approaches. I'll implement the publisher use case, as that's
> quite straight forward, and I'll have a go at implementing injecting an
> imperative publisher, though that will be pushing my knowledge of CDI to
> make that work.
>
> Also related, in microservices commonly you have use cases that are both
> ingress and egress, ie where you're processing a stream of messages and
> producing a new stream, and so we could offer a Flow.Processor based
> solution for that.

In my experience, what about just keeping it simple: send a message,
receive a message?
If users want to forward a message, that should be just like forwarding
an email to a new address, IMO.

Where this gets complex is when it comes to back pressure - asynchronously propagating back pressure between receiving a message and sending a message is not a simple thing to do. It is very simple, though, when you can supply an API that lets you declare a map operation on the messages between the receive and the send. Another thing is that optimisations like batching are much harder to do when you're just supplied with send/receive, but when you describe things in terms of streams, batching becomes very easy to apply, both directly by application developers and indirectly by the underlying frameworks.
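As an illustration of the batching point, here is a small sketch. The `batches` method stands in for the grouped()/buffer() operator a streaming library would provide; the name and class are invented for this example:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of why batching is easy once the pipeline is described as a
// stream: it is just one more declarative stage between receive and send.
public class BatchingSketch {

    // Group consecutive elements into batches of at most `size`,
    // preserving order - the declarative equivalent of collecting
    // messages before a bulk send.
    public static <T> List<List<T>> batches(List<T> in, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < in.size(); i += size) {
            out.add(new ArrayList<>(in.subList(i, Math.min(i + size, in.size()))));
        }
        return out;
    }

    public static void main(String[] args) {
        // receive -> map -> batch -> send, expressed as data transformations
        List<String> received = List.of("a", "b", "c", "d", "e");
        List<List<String>> toSend = batches(received, 2);
        System.out.println(toSend); // [[a, b], [c, d], [e]]
    }
}
```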

> We may also want to consider whether more complex graphs
> should be supported, eg fan in/fan out, balancing, broadcast etc.

Shouldn't that be up to the implementations? As far as the client is
concerned all you need is to receive and send messages.

Once again, it comes back to back pressure. A fan out means an implied fan in of the back pressure in the other direction, and that's no simple thing to implement. Using a streaming library that supports fan in and fan out, it becomes a very simple declarative operation.

What we want to do is avoid situations where messages are being produced faster than they can be consumed, because in that situation you end up running out of memory. So back pressure is incredibly important. If this were a synchronous API, you could just block the threads doing the processing to implement the back pressure. But we are looking at asynchronous processing here, and then you need signals in both directions; ensuring they hook up correctly is exactly why the Reactive Streams spec was created, and why using an abstraction like Processor makes this really simple.
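A minimal demonstration of those demand signals, using the JDK's java.util.concurrent.Flow (the class name here is invented): the subscriber requests one element at a time, so the publisher can never flood it with unbounded buffering, regardless of relative speeds.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

// Minimal sketch of Reactive Streams back pressure: the subscriber
// pulls elements by signalling demand via Subscription.request.
public class BackPressureDemo {

    public static List<Integer> consumeAll(List<Integer> input) {
        List<Integer> seen = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // signal demand for exactly one element
                }

                public void onNext(Integer item) {
                    seen.add(item);
                    subscription.request(1); // ready for the next one
                }

                public void onError(Throwable t) { done.countDown(); }

                public void onComplete() { done.countDown(); }
            });
            input.forEach(publisher::submit); // blocks if the subscriber's buffer fills up
        } // close() completes the stream once all items are delivered
        try {
            done.await(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(consumeAll(List.of(1, 2, 3, 4, 5)));
    }
}
```

Wiring this up by hand is already fiddly for one publisher and one subscriber; fanning the demand signals in and out across a graph is where a streaming library earns its keep.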

> this is where it could get very interesting, but its also where it could get
> very complex, and admittedly these more complex graphs I don't have a lot of
> experience in providing framework level APIs for allowing users to implement
> them. So I think limiting the scope to just processors would be good, but we
> should keep in mind the more complex graphs.

If we keep the basic use case well done, users can compose complex usage
as they wish, just like a math formula. Give the user a simple tool and
that will be very powerful.

I believe reactive streams is that simple tool.

> For the commit method - this is a place where I think we need to have a lot
> more discussion. The suggestion to call it ack I think is a good one. But
> the approach where committing/acking is done explicitly with a method call
> (ie, what I've implemented here) has a number of limitations -

I think ack should mean ack. Let's keep it simple?

The problem is, it really isn't that simple. In Kafka (and Pravega, AWS Kinesis, and a number of other message brokers), an ack is not just an ack; it is an ack of the current message and every message before it. Users need to be aware of this when they invoke ack, to make sure that their handling of messages, and invocation of ack, is done in order. This is a hidden complexity of providing a simple ack method in an asynchronous API, and it's why just providing an ack method isn't actually simple.
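To see why, here is a tiny simulation of that cumulative model (a hypothetical class, not a real Kafka API): acking offset N commits N and everything before it, so acking out of order silently skips a failed earlier message.

```java
// Sketch of Kafka-style cumulative acknowledgement: acking offset N
// also acks every offset before N.
public class CumulativeAck {

    private long committedOffset = -1;

    // "Ack offset N" means "commit N and every offset before it".
    public void ack(long offset) {
        if (offset > committedOffset) {
            committedOffset = offset;
        }
    }

    public long committedOffset() {
        return committedOffset;
    }

    public static void main(String[] args) {
        CumulativeAck consumer = new CumulativeAck();
        // Offsets 0 and 1 are in flight; 1 finishes first and acks...
        consumer.ack(1);
        // ...then processing of offset 0 completes (or fails). Too late:
        // offset 0 is already covered by the ack of offset 1 and will
        // never be redelivered after a restart.
        consumer.ack(0);
        System.out.println("committed: " + consumer.committedOffset()); // 1
    }
}
```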
Interestingly, ingress and egress are actually Latin words (or rather, ingressus and egressus). While I would expect most native English speakers to know what the words mean, they aren't used in everyday speech; they tend to be used in more technical contexts (regress is probably the closest word that gets at least a little everyday use). I would have expected Romance language speakers to find them easier than English speakers - I think ingresso/egresso are Portuguese words (at least Google Translate gives them as a possible alternative)? I don't know Portuguese at all, though, so I could be completely wrong. The person who first suggested the name to me is Dutch, but I don't know Dutch either, so I don't know if that makes any difference.

Would @Outgoing and @Incoming sound weird?

I don't know why but it does sound weird to me. I guess I've never seen naming like that in an API, where I have seen ingress/egress. But that doesn't mean I couldn't warm up to it.

If we can't find any other names I'm fine with that... just pointing out
that it feels complex for a non-native speaker.

Thanks for the feedback. If there's a particular name (e.g. outgoing/incoming) that the majority of people here agree on, then I'm certainly not going to argue against it.



Mark Little

unread,
Mar 1, 2018, 4:07:20 AM3/1/18
to microp...@googlegroups.com
MicroProfile has a CDI-first policy for specifications.

Mark.

Ondrej Mihályi

unread,
Mar 1, 2018, 6:22:38 AM3/1/18
to MicroProfile
The CDI-first policy means that any spec should focus on a CDI-based API and add a more general API later. It's not a hard requirement but a strong preference. A spec without a high-quality CDI-based API has a low chance of being accepted into MP.

Ondro

On 1 Mar 2018 at 10:07 AM, "Mark Little" <markc...@gmail.com> wrote:

Mark Little

unread,
Mar 1, 2018, 6:34:04 AM3/1/18
to MicroProfile
It's safe to say it's a CDI-first policy at this time :)

Mark.

On Thu, Mar 1, 2018 at 11:22 AM, Ondrej Mihályi
<ondrej....@gmail.com> wrote:
> The CDI first policy means that any spec should focus on CDI-based API and
> add more general API later. It's not a hard requirement but a strong
> preference. A spec without a high-quality CDI based API has low chance to
> be accepted to MP.
>
> Ondro
>
> On 1 Mar 2018 at 10:07 AM, "Mark Little" <markc...@gmail.com>
> wrote:
>>
>> MicroProfile has a CDI-first policy for specifications.
>>
>> Mark.
>>
>>
>> On 1 Mar 2018, at 01:37, Clebert Suconic <clebert...@gmail.com>
>> wrote:
>>
>> I like the idea of CDI as a tool to help with boilerplates.. but there
>> should be a way to instantiate with objects directly as well.
>> Or is everything else in Microprofile based on CDI? and that is the
>> fashion used?
>>
>>

Rüdiger zu Dohna

unread,
Mar 1, 2018, 7:07:30 AM3/1/18
to microp...@googlegroups.com
I understand that all MP specs must play really, really nicely with CDI. OTOH, if the specs actually depend on features of CDI, they won't be acceptable to, e.g., Spring or Dropwizard. I know that this is not a big concern for most of us, but in the long term it may prove fruitful to have those guys in the game too, by restricting DI to what's in 'javax.inject'. Broad adoption of a spec helps everybody!


Just my 2 ct.
Rüdiger


> On 2018-03-01, at 12:22, Ondrej Mihályi <ondrej....@gmail.com> wrote:
>
> The CDI first policy means that any spec should focus on CDI-based API and add more general API later. It's not a hard requirement but a strong preference. A spec without a high-quality CDI based API has low chance to be accepted to MP.
>
> Ondro
>
> On 1 Mar 2018 at 10:07 AM, "Mark Little" <markc...@gmail.com> wrote:
> MicroProfile has a CDI-first policy for specifications.
>
> Mark.
>
>
>> On 1 Mar 2018, at 01:37, Clebert Suconic <clebert...@gmail.com> wrote:
>>
>> I like the idea of CDI as a tool to help with boilerplates.. but there
>> should be a way to instantiate with objects directly as well.
>> Or is everything else in Microprofile based on CDI? and that is the
>> fashion used?
>
>

Mark Little

unread,
Mar 1, 2018, 7:44:05 AM3/1/18
to MicroProfile
If it becomes an issue then we can discuss it here within the
community and evaluate. So far it hasn't been an issue and we've
released many specs and many revisions of MP since it was started.

Mark.

Ondrej Mihályi

unread,
Mar 1, 2018, 7:48:36 AM3/1/18
to MicroProfile
The CDI-first policy doesn't mean that a non-CDI way isn't provided. It means that all features should be based on CDI first, and a non-CDI way can optionally come second or be added in future versions. The spec doesn't end with version 1 and can be updated in the next release train in 3 months.

I hope it's clear and there's no need to discuss further ;-)

--Ondro

On 1 Mar 2018 at 1:07 PM, "'Rüdiger zu Dohna' via Eclipse MicroProfile" <microp...@googlegroups.com> wrote:
I understand that all MP specs must play really, really nicely with CDI. OTOH, if the specs actually depend on features from CDI, they won’t be acceptable by, e.g., Spring or Dropwizard. I know that this is not a big concern for most of us, but in the long term it may prove fruitful to have those guys in the game, too, by restricting DI to what’s in ‘javax.inject’. A broad adoption of a spec helps everybody!


Just my 2 ct.
Rüdiger


> On 2018-03-01, at 12:22, Ondrej Mihályi <ondrej....@gmail.com> wrote:
>
> The CDI first policy means that any spec should focus on CDI-based API and add more general API later. It's not a hard requirement but a strong preference.  A spec without a high-quality CDI based API has low chance to be accepted to MP.
>
> Ondro
>
> On 1 Mar 2018 at 10:07 AM, "Mark Little" <markc...@gmail.com> wrote:
> MicroProfile has a CDI-first policy for specifications.
>
> Mark.
>
>
>> On 1 Mar 2018, at 01:37, Clebert Suconic <clebert...@gmail.com> wrote:
>>
>> I like the idea of CDI as a tool to help with boilerplates.. but there
>> should be a way to instantiate with objects directly as well.
>> Or is everything else in Microprofile based on CDI? and that is the
>> fashion used?
>
>

Ken Finnigan

unread,
Mar 1, 2018, 7:51:14 AM3/1/18
to MicroProfile
The key point, Ondrej, is that non-CDI usage is purely optional.

I've made my feelings on this clear in the past, so I will leave it at that.


Ondrej Mihályi

unread,
Mar 1, 2018, 8:13:34 AM3/1/18
to MicroProfile
+1

On 1 Mar 2018 at 1:51 PM, "Ken Finnigan" <k...@kenfinnigan.me> wrote:

m.reza.rahman

unread,
Mar 1, 2018, 8:53:57 AM3/1/18
to microp...@googlegroups.com
+1. Fully agree.


James Roper

unread,
Mar 2, 2018, 12:54:52 AM3/2/18
to MicroProfile
Just to make sure it's clear to everyone with regard to this proposal: while the API itself doesn't depend on anything from CDI (so far - we've only just started, so this could well change), the approach for registering message consumers/producers is designed to be done using CDI extensions (and has been in the example impl).


Ondrej Mihályi

unread,
Mar 2, 2018, 2:25:36 AM3/2/18
to MicroProfile
I understand, James.

I think the direction you chose is perfectly fine. At some point we may need to bring a CDI dependency into the API, and it shouldn't be a problem.

--Ondro

On 2 Mar 2018 at 6:54, "James Roper" <ja...@lightbend.com> wrote:

Robbie Gemmell

unread,
Mar 2, 2018, 11:17:33 AM3/2/18
to microp...@googlegroups.com
Agreed.

> But
> the approach where committing/acking is done explicitly with a method call
> (ie, what I've implemented here) has a number of limitations -
>
> 1) It requires the developer to manually implement that, meaning more boiler
> plate, and it's more implied knowledge of how to use the API that a
> developer has to know.
> 2) It may require (depending on the underlying queuing mechanism) messages
> to be acked in order, since otherwise you might end up acking a message
> before the prior one fails, causing that message to be dropped. This is very
> easy to get wrong, in the example code, I could have simply acked the
> message using a thenCompose call on the CompletionStage returned by my
> database update, but doing so would have potentially allowed acking to be
> done out of order. Instead, I did it in a subsequent map on the stream,
> which ensured it was done in order. A solution to this is to have the
> underlying implementation put the acks back in order, but this comes with
> its own limitations, if you don't ack something - for example, if you decide
> to filter the stream and you simply drop messages without acking them, then
> that's going to block all subsequent messages to be dropped.
>

I think the above has some assumptions, which themselves may need discussion.

In some cases, acking messages out of order may not be considered a
problem, and acking a subsequent message need not mean a prior message
is considered acked/dropped. That's certainly the case in many systems,
though obviously not in others.

Perhaps it could be configurable, if a given system can support both
ways, such that you can use it the way desired? The 'client ack'
behaviour is one of the clear annoyances in JMS.

I'm not sure how much advantage I think there is. The API gives a
model that the method approach doesn't so forcefully imply (which may
seem more natural to some), but you can still run into similar issues
with it due to stream manipulation etc., as you covered here, and both
need the user to consider them in their processing. It feels like the
overall requirement on use and the potential for misuse are broadly
similar between the two; it's just that one is slightly less implied.

> What I might do is implement the second case there, allow @Ingest method to
> return something like a Processor<Envelope<Message>, MessageAck>, so you can
> see what it looks like to use such an API. At this stage, I don't know which
> approach is best, I've had experience both using and implementing both APIs.
> Would be very interested in peoples thoughts.
>
> Also, any thoughts on the Ingress/Egress naming? This is the first time I've
> used that naming in an API, I got the idea from a colleague who is using
> that naming in an API they are writing for a general streaming framework
> (that I hope/will ensure will implement this spec).
>

They seem good to me, though I take Clebert's point that they might not
be obvious to everyone. No alternative suggestions at this point,
though.

Ondrej Mihályi

unread,
Mar 2, 2018, 2:47:51 PM3/2/18
to MicroProfile
I agree with Clebert about naming.

I'm also not a native speaker. Words like Ingress and Egress don't mean anything to me and sound like they're from a Martian language.

In Payara, we use @Inbound and @Outbound for a similar thing, and I think those names are clear to anybody.

I think it makes sense for the names to start with "in" and "out". While ingress does, egress is really confusing for non-native speakers, and I've never seen it as a term in a messaging/streaming context.

--Ondro

On 2 Mar 2018 at 17:17, "Robbie Gemmell" <robbie....@gmail.com> wrote:


Emily Jiang

unread,
Mar 5, 2018, 8:55:05 AM3/5/18
to Eclipse MicroProfile
A couple of comments:
Clebert,
Please feel free to create a PR to demonstrate your ideas.

I also suggest not using Ingress and Egress, as they are used by Istio and Kubernetes. People will easily be confused by the same names.

How about @MessageProducer or @MessageConsumer?

Thanks
Emily

Ladislav Thon

unread,
Mar 5, 2018, 10:48:47 AM3/5/18
to MicroProfile
2018-03-05 14:55 GMT+01:00 'Emily Jiang' via Eclipse MicroProfile <microp...@googlegroups.com>:
I also suggest not to use Ingress and Egress as they are used by Istio or Kube. People will be easily confused with the same names.

How about @MessageProducer or @MessageConsumer?

+1, I'd like these terms much better than "ingress", "ingest" and whatnot. Producer/consumer, or publisher/subscriber from j.u.c.Flow, or (worst case) emitter/listener, are much more accessible terms.

On a related note, would it be possible to figure out the role of the method based on its return type? I.e., if a method annotated with @Messaging (for the lack of better term) returned Flow.Subscriber, it would clearly be a consumer, if it returned Flow.Publisher, it would be a producer, and if it returned Flow.Processor, it would be both. Would that even make sense?
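It could work along these lines, sketched with plain reflection (the @Messaging annotation and all class names here are hypothetical, just illustrating the idea of inferring the role from the return type):

```java
import java.lang.reflect.Method;
import java.util.concurrent.Flow;

// Sketch: a single annotation, with the method's role (producer,
// consumer, or both) inferred from its return type.
public class RoleInference {

    public enum Role { PRODUCER, CONSUMER, PROCESSOR, UNKNOWN }

    public static Role roleOf(Method method) {
        Class<?> rt = method.getReturnType();
        // Check Processor first: it extends both Publisher and Subscriber.
        if (Flow.Processor.class.isAssignableFrom(rt)) return Role.PROCESSOR;
        if (Flow.Publisher.class.isAssignableFrom(rt)) return Role.PRODUCER;
        if (Flow.Subscriber.class.isAssignableFrom(rt)) return Role.CONSUMER;
        return Role.UNKNOWN;
    }

    // Convenience lookup by name for the example methods below.
    public static Role roleOf(String methodName) {
        try {
            return roleOf(RoleInference.class.getMethod(methodName));
        } catch (NoSuchMethodException e) {
            return Role.UNKNOWN;
        }
    }

    // Example user methods the container would scan:
    public static Flow.Publisher<String> produce() { return null; }
    public static Flow.Subscriber<String> consume() { return null; }
    public static Flow.Processor<String, String> transform() { return null; }

    public static void main(String[] args) {
        System.out.println(roleOf("produce"));   // PRODUCER
        System.out.println(roleOf("consume"));   // CONSUMER
        System.out.println(roleOf("transform")); // PROCESSOR
    }
}
```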

Also, now that I've firmly established myself as a bikeshedding expert... why isn't Envelope simply called a Message? :-)

LT
 
To unsubscribe from this group and stop receiving emails from it, send an email to microprofile+unsubscribe@googlegroups.com.

To post to this group, send email to microp...@googlegroups.com.

Ondrej Mihályi

unread,
Mar 6, 2018, 8:20:44 AM3/6/18
to MicroProfile
+1 for names like Publisher/Subscriber (as in Flow), or better MessagePublisher/MessageSubscriber, to avoid any confusion. I would also call the wrapper object Message instead of Envelope, because Message is a more concrete term: the subscriber is meant to process a stream of "messages".

As to whether we need 2 annotations or only one, inferring the direction from the return type, I don't have a clear opinion yet. I only think that this would work for stream-based return types, but not if we want to support much simpler method definitions which have e.g. a void return type and are executed once per message (e.g. @MessageSubscriber void onMessage(Message messageReceived) )
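For what it's worth, the simpler per-message style can be bridged to a stream-based one: a sketch using only JDK Flow types, where a plain handler is wrapped in a Subscriber that requests one message at a time, so each completed call acts as an implied ack (the adapt method and names are illustrative, not from any proposal):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.function.Consumer;

public class SimpleSubscriberAdapter {

    // Adapts a plain per-message handler (the "@MessageSubscriber void
    // onMessage(...)" style) to a Flow.Subscriber. Requesting one item at a
    // time after each successful call gives an implied per-message ack.
    static <T> Flow.Subscriber<T> adapt(Consumer<T> handler, CompletableFuture<Void> done) {
        return new Flow.Subscriber<T>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);
            }
            public void onNext(T item) {
                handler.accept(item);      // one invocation per message
                subscription.request(1);   // "ack": ask for the next one
            }
            public void onError(Throwable t) { done.completeExceptionally(t); }
            public void onComplete() { done.complete(null); }
        };
    }

    public static void main(String[] args) throws Exception {
        List<String> seen = new ArrayList<>();
        CompletableFuture<Void> done = new CompletableFuture<>();
        try (SubmissionPublisher<String> queue = new SubmissionPublisher<>()) {
            queue.subscribe(adapt(seen::add, done));
            queue.submit("created");
            queue.submit("updated");
        } // close() signals onComplete once all items are delivered
        done.get();
        System.out.println(seen);
    }
}
```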

--Ondro

To unsubscribe from this group and all its topics, send an email to microprofile+unsubscribe@googlegroups.com.

To post to this group, send email to microp...@googlegroups.com.

James Roper

unread,
Mar 6, 2018, 5:36:49 PM3/6/18
to MicroProfile
Hi all,

In addition to what's mentioned in the previous email, I've pushed another commit to my most recent PR:

https://github.com/eclipse/microprofile-sandbox/pull/8

This adds basic @Outgoing support, in the same fashion as the @Incoming support, and gives one example of using it. Note that for many people this approach to publishing won't be useful, since it assumes you can, at startup, create a stream of events to be published. I'm not that familiar with CDI events and how they're meant to be used, but perhaps CDI could be a good source of these events; another good source would be polling a message queue in the database. In my example, I'm periodically notifying everyone on the system that they're receiving events from this system.

I think what we need in addition to this is the ability to annotate @Inject-annotated parameters with @Outgoing, in order to inject something that can be published to imperatively, as well as perhaps a Subscriber that can be published to in an ad hoc fashion (say, when a WebSocket connects). Likewise, we could annotate injected Publishers with @Incoming, which would allow a message queue to be consumed on an ad hoc basis. By ad hoc, I mean that the lifecycle of the subscription to, or publishing to, the queue is not managed by the CDI container (in the current case, the lifecycle is managed by the CDI container, including restarting when the stream fails).
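A minimal sketch of the imperative-publishing side of that idea, using only JDK types. The Emitter interface here is hypothetical, just something a container could hand to an @Outgoing injection point while wiring the Publisher side to a broker connector; none of these names are from the sandbox PR:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.SubmissionPublisher;

public class EmitterSketch {

    // Hypothetical type an application could have injected in order to
    // publish imperatively.
    interface Emitter<T> {
        void send(T message);
        void complete();
    }

    // Bridge imperative sends onto a Flow.Publisher via SubmissionPublisher.
    static <T> Emitter<T> emitter(SubmissionPublisher<T> publisher) {
        return new Emitter<T>() {
            public void send(T message) { publisher.submit(message); }
            public void complete() { publisher.close(); }
        };
    }

    public static void main(String[] args) throws Exception {
        List<String> received = new ArrayList<>();
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        // Where the container would attach the broker connector, we attach a
        // trivial consumer just to show messages flowing through.
        var done = publisher.consume(received::add);
        Emitter<String> emitter = emitter(publisher);
        emitter.send("user-created");
        emitter.send("user-deleted");
        emitter.complete();
        done.get();
        System.out.println(received);
    }
}
```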

Regards,

James




--
James Roper
Senior Octonaut

Lightbend – Build reactive apps!
Twitter: @jroper

James Roper

unread,
Mar 6, 2018, 5:36:56 PM3/6/18
to MicroProfile
Hi all,

I've just pushed a major update:


So firstly, I renamed @Ingress to @Incoming. I think the naming of this is far from over, and I don't think it's bikeshedding to talk about it - this is a very important part of the API to get right! I think it'll be worth renaming it a few times and having a look at what the code looks like with each name.

What this update adds though is multiple different ways to declare subscribers and to ack messages. Importantly, I'm not saying that we *should* support all these ways, just because we can provide something doesn't mean we should, and having too many different options is a bad thing. But I've provided them so that we can compare them side by side.

If you look at the UserDetailsSubscriber now, you'll see 6 different methods that essentially do exactly the same thing, but all in a slightly different way. The original method of returning a Subscriber is still there.

The second method, handleUserDetails2, takes a UserDetailsEvent and returns CompletionStage<Void>. In this case there's no Envelope; for this purpose there doesn't need to be, since successful completion of the returned CompletionStage (rather than completion with an error) indicates that the message has been successfully processed, and so is an implied ack. I think this option is important, because all the other options need a third party library (I've chosen Akka Streams, but I could have chosen Reactor or RxJava) to build the subscribers, and I think there needs to be a straightforward option that doesn't require one. Work is being done at the moment to provide a library in the JDK that will fill this role, so this won't always be an issue, but for now, it is.
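To make the implied-ack idea concrete, here's a rough, self-contained sketch of what a container might do with such a method: only ack once the returned stage completes successfully. The method and variable names are illustrative, not the sandbox API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.atomic.AtomicInteger;

public class ImpliedAckSketch {

    static final AtomicInteger acked = new AtomicInteger();

    // The handleUserDetails2 style: take the payload, return
    // CompletionStage<Void>. No Envelope, no reactive-streams types,
    // no third party library.
    static CompletionStage<Void> handleUserDetails(String event) {
        // pretend to persist asynchronously
        return CompletableFuture.runAsync(() -> System.out.println("stored " + event));
    }

    // What the container side could look like: successful completion is the
    // implied ack; exceptional completion would trigger nack/redelivery.
    static CompletionStage<Void> deliver(String event) {
        return handleUserDetails(event)
            .thenRun(acked::incrementAndGet)                   // implied ack
            .exceptionally(t -> { /* nack / redeliver */ return null; });
    }

    public static void main(String[] args) throws Exception {
        deliver("user-created").toCompletableFuture().get();
        deliver("user-updated").toCompletableFuture().get();
        System.out.println("acked=" + acked.get());
    }
}
```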

The third method, handleUserDetails3, is like handleUserDetails2, but returns void and takes an Envelope, so the message is acked explicitly. I don't like this method, since it doesn't allow propagation of backpressure (unless the underlying message broker technology allows propagating backpressure through acks, which Kafka, for example, doesn't), and there's also no way to ensure acks are done in order.

The fourth method, handleUserDetails4, I think is really interesting and should be paid close attention to, as it demonstrates why using reactive streams can be very powerful. In this case we are batching messages into groups of up to 20, and then persisting them all using batched statements on Cassandra. Now, Cassandra experts may point out that batched statements don't necessarily increase throughput on Cassandra, but there are many other use cases and databases where doing things in batches can significantly increase throughput. Having this option I think is very desirable, and this is just the beginning of what can be done: at customers that I've worked with, we've used reactive streams to farm work out to a cluster of machines, to cluster groups of similar messages together so they can be deduplicated to prevent unnecessary processing, etc. Also note the different approach to acking here: this method returns a Processor<Envelope<UserEvent>, Ack>, that is, it transforms user messages into acknowledgements of messages, which I think is an elegant way to view the stream handling. And in this case we only acknowledge the last message of each batch, so we significantly reduce the acknowledgement overhead, and it puts the user in direct control of exactly how much acknowledging is or isn't done.
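The batch-acking arithmetic is worth seeing on its own, stripped of the streaming plumbing. The Envelope and Ack records below are stand-ins for the types discussed above, not the sandbox API: group messages into batches of up to 20, persist each batch, and emit a single Ack carrying the offset of the batch's last message:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchAckSketch {

    // Illustrative stand-ins for the Envelope/Ack types discussed above.
    record Envelope<T>(T payload, long offset) {}
    record Ack(long offset) {}

    // Group into batches of up to `size`; one Ack per batch, for the last
    // message of that batch.
    static <T> List<Ack> processInBatches(List<Envelope<T>> envelopes, int size) {
        List<Ack> acks = new ArrayList<>();
        for (int i = 0; i < envelopes.size(); i += size) {
            List<Envelope<T>> batch =
                envelopes.subList(i, Math.min(i + size, envelopes.size()));
            // persist(batch) would go here, e.g. one batched statement
            acks.add(new Ack(batch.get(batch.size() - 1).offset()));
        }
        return acks;
    }

    public static void main(String[] args) {
        List<Envelope<String>> in = new ArrayList<>();
        for (long i = 0; i < 50; i++) in.add(new Envelope<>("event-" + i, i));
        // 50 messages in batches of 20 -> 3 acks instead of 50
        System.out.println(processInBatches(in, 20));
    }
}
```

With an at-least-once broker, acking only the last offset of each batch is safe as long as redelivery of the whole batch is acceptable after a failure.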

The fifth method, handleUserDetails5, is almost identical to the first, with one small difference, it's using org.reactivestreams.Subscriber, rather than java.util.concurrent.Flow.Subscriber. This shows how an implementation can support both versions of the API simultaneously.

Finally, the sixth method, handleUserDetails6, is just there to demonstrate that implementations of this API can provide support for their own types, here I've provided support for Akka Streams Flow. Of course, using this means your code is not portable to different implementations of the API. Another thing about this method is that it doesn't take an Envelope, it just takes the raw message, and emits Done (an Akka unit type like void), one per message, so the acks are implied by the emission of Done.

So there's a lot to mull over here, and as I said, I am in no way proposing that we support all of these methods of consuming streams, they are only provided for side by side comparison.

Regards,

James


