Hi Julien,
In akka-persistence, each processor has its own logical journal. A single processor can only write to and read from its own journal (reading is only done during recovery). If a processor's state depends on the events emitted by another processor, it can write these events (or something derived from them) to its own journal so that both can recover independently of each other. Independent recovery is especially important in a distributed setup where you don't want a processor's ability to recover to depend on the availability of (multiple) other services.
In the current implementation, all logical journals of processors from the same ActorSystem are mapped to a single physical journal (backed by LevelDB). With n ActorSystems (on the same or on different nodes) you'll have n physical journals. This however is an implementation detail and may change. Further optimizations may even recognize that messages/events are redundantly journaled and only write pointers instead of actual message/event data. From an application's perspective, however, only the concept of one logical journal per processor is important. Also, an application never interacts with journals directly, only with processors.
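A conceptual sketch of that mapping in plain Scala (PhysicalJournal, Entry and all other names are invented for illustration, not the actual akka-persistence internals): one physical store multiplexes many logical journals, with sequence numbers kept per processor id.

```scala
// Conceptual sketch only: many logical journals multiplexed onto one
// physical store. Each processor gets its own sequence-number counter.
import scala.collection.mutable

final case class Entry(processorId: String, sequenceNr: Long, payload: Any)

class PhysicalJournal {
  private val entries  = mutable.ArrayBuffer.empty[Entry]
  private val counters = mutable.Map.empty[String, Long].withDefaultValue(0L)

  // Append to a processor's logical journal; sequence numbers are per processor.
  def append(processorId: String, payload: Any): Entry = {
    val nr = counters(processorId) + 1
    counters(processorId) = nr
    val e = Entry(processorId, nr, payload)
    entries += e
    e
  }

  // Replay, used only during recovery: a processor sees only its own entries.
  def replay(processorId: String): Seq[Entry] =
    entries.filter(_.processorId == processorId).sortBy(_.sequenceNr).toSeq
}
```

The point of the sketch is only that interleaved writes from many processors still replay as clean per-processor streams.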
Although you can already develop a distributed application with the current LevelDB-backed journal (having 1 LevelDB instance per node, for example), you can't yet migrate a processor from one node to another (e.g. during failover) because LevelDB only writes to the local disk. To support processor migration, journal replication is needed which will be provided by distributed journal(s) in the near future.
Hope that helps.
Cheers,
Martin
On 07.11.13 20:24, JUL wrote:
Dear community,
When using DDD / Event Sourcing for the nice integration properties between SOA services, a service might need to 'subscribe' to other services' events to be able to process commands. In Akka persistence, this would translate into having one read-write journal (actual service event store), and multiple read-only journals (other services being listened to).
I can see right now that the configuration supports only one journal.
How would you implement such a use case? A custom "aggregating" journal routing events from the processors to the right journal? Or is this something that would likely be provided out of the box in the near future?
Thank you,
Julien
--
>>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>>>>>>> Check the FAQ: http://akka.io/faq/
>>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups "Akka User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscribe@googlegroups.com.
To post to this group, send email to akka...@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/groups/opt_out.
--
Martin Krasser
blog: http://krasserm.blogspot.com
code: http://github.com/krasserm
twitter: http://twitter.com/mrt1nz
Martin, only one active writer per processor-id makes a lot of sense, but is it possible to use one writer and several readers?
I think that can sometimes be useful. Given a distributed journal, an event sourced processor can "publish" the persisted domain events to CQRS read views via the journal, by having other processors with the same processor-id replay events from the latest known sequence number, periodically or triggered by something. These readers only read, possibly storing to another read-view representation.
/Patrik
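The one-writer/many-readers idea could be sketched in plain Scala like this (ReadSide and replayFrom are made-up names; the journal is modelled as a function rather than the real akka-persistence API):

```scala
// Sketch: a read-only consumer that periodically replays events beyond
// the highest sequence number it has already applied.
final case class Event(sequenceNr: Long, payload: String)

class ReadSide(replayFrom: Long => Seq[Event]) {
  private var lastSeqNr = 0L
  private var view = Vector.empty[String]

  // Called periodically, or triggered by a notification: catch up from the journal.
  def update(): Unit =
    replayFrom(lastSeqNr + 1).foreach { e =>
      view = view :+ e.payload // project the event into the read view
      lastSeqNr = e.sequenceNr
    }

  def current: Vector[String] = view
}
```

Because the reader tracks its own sequence number, repeated update() calls are idempotent and several such readers can follow one writer independently.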
On Fri, Nov 8, 2013 at 6:55 AM, Martin Krasser <kras...@googlemail.com> wrote:
It feels like it's the responsibility of Application B to expose its read view(s) to Application A by whatever mechanism it deems fit. In the case that A and B are completely separate applications, is this not where SOA comes in? E.g. read views exposed via RESTful endpoints.
Additionally, there is no reason why a subscriber to journal events could not just forward them over some distributed message queue, allowing any separate applications to listen in.
Please do let me know if I'm missing your point completely!
I do like the idea of an out of the box mechanism for listening into journals rather than forcing processor implementations to explicitly publish events more widely.
Andrew
I understand that, but I was wondering what happens when you want to integrate 2 applications. Events are a great way to integrate 2 applications. Application A can subscribe to application B's event stream and build its own view model out of application B's events. From what I understand, this would be possible with 2 actor systems, but then application A doesn't have access to the view model recovered from application B's journal, right?
I think there is a misunderstanding.
Traditional integration is through a REST (or whatever RPC) service exposing a view model of B, but event sourcing allows you to use events instead. A could consume B's events to build a specific-to-A's-needs view model of B, locally in A.
The first benefit is performance, of course: access to B's view model is local. But more importantly, the view model of B in A is probably very different from the view model of B in B; it is tuned to the specific needs of A. Live events will of course be channelled through external pub/sub, but at startup, the view model of B in A must be recovered first. Akka persistence has 95% of what is needed for that scenario, except access to different journal stores for 2 different local processors (a processor for A, and a processor for the view model of B in A).
An alternative would be pub/sub alone, with A storing B's events locally and replaying from there. The problem is that you are introducing a runtime dependency: what happens when you need to restart A? You need persistent pub/sub to buffer B's incoming events while A is down. I would find it simpler to use pub/sub at runtime and plug directly into B's journal at recovery time. It is also much more resilient to pub/sub issues: just restart the view model of B in A in case of corruption or inconsistency.
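The recover-from-journal-then-go-live handoff described here could look roughly like this in plain Scala (Evt, ViewOfB and the dedup-by-sequence-number scheme are all illustrative assumptions):

```scala
// Sketch: rebuild the local view of B from B's journal first, then apply
// live pub/sub events, dropping anything replay has already delivered.
final case class Evt(seqNr: Long, data: String)

class ViewOfB {
  private var lastSeqNr = 0L
  private var model = Vector.empty[String]

  // Recovery time: replay B's journal in sequence order.
  def recover(journal: Seq[Evt]): Unit = journal.sortBy(_.seqNr).foreach(apply)

  // Runtime: live events may overlap with what replay already delivered.
  def onLive(e: Evt): Unit = apply(e)

  private def apply(e: Evt): Unit =
    if (e.seqNr > lastSeqNr) { model = model :+ e.data; lastSeqNr = e.seqNr }

  def state: Vector[String] = model
}
```

Deduplicating on the sequence number is what makes the replay/live overlap harmless, and a full rebuild after corruption is just recover() again.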
This is exactly what was described in this post. Two processors with the same processor id reside on different nodes (A and B), where the processor on B reads/writes to its journal and the processor on A reads only from the same journal. These processors may have different implementations, with the result that the processor on A creates a view model of B that is specific to A.
Yes, except that on A, the read/write processor for A would recover from journal store A, and the read-only processor for B would recover from the unrelated journal store B. Obviously, A and B are 2 different applications that have 2 completely different journal stores in different locations.
How would one deal with view models that are a projection of events from multiple processors? Is it a case of having multiple read-only processors, each reading from a different journal, aggregating data to form a separate projection stored elsewhere?
One other factor: is it really a given that view models have to be recovered on startup? This is assuming view model state can't be persisted and survive restarts? I suppose this is the point you make, Julien, with regard to avoiding losing messages published by B while A is offline. It does feel like I'm missing something here, though. Surely it's not always going to be practical to be forced to rebuild views on every restart of an application?
Andrew
A separate view model would be built out of each event stream. A view model would not be projected from more than one stream.
This is correct. I am assuming I want to use a similar architecture for the view model as well. As Martin mentioned, this can be achieved with multiple processors with the same id recovering from the same journal. Having an in-memory view model seems attractive to me, of course with its own snapshots to speed up view-model recovery.
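A minimal sketch of snapshot-assisted recovery (the Snapshot type and word-count state are invented for illustration): restore the latest snapshot, then replay only the events with a higher sequence number.

```scala
// Sketch: recovery = latest snapshot + replay of the events after it.
final case class Snapshot(seqNr: Long, counts: Map[String, Int])

def recover(snapshot: Option[Snapshot], events: Seq[(Long, String)]): Map[String, Int] = {
  // Start either from the snapshotted state or from empty state.
  val (fromSeq, init) =
    snapshot.map(s => (s.seqNr, s.counts)).getOrElse((0L, Map.empty[String, Int]))
  events
    .collect { case (nr, word) if nr > fromSeq => word } // only the tail after the snapshot
    .foldLeft(init)((m, w) => m.updated(w, m.getOrElse(w, 0) + 1))
}
```

The invariant to test is that snapshot-assisted recovery produces exactly the same state as a full replay, just faster.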
An example would be system B = CRM and system A = order fulfilment. To complete an order, you might need to know whether the supplied customer ID actually exists, and what kind of deals the customer negotiated (all information that is part of the CRM system). So system A would build a custom view model of B consisting of, for example, a dictionary from customer id to type of deal.
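That CRM projection can be written as a simple fold over B's events (the event names and deal types here are made up):

```scala
// Hypothetical events emitted by the CRM system (B); the order-fulfilment
// system (A) keeps only the projection it needs: customer id -> deal type.
sealed trait CrmEvent
final case class CustomerCreated(customerId: String) extends CrmEvent
final case class DealNegotiated(customerId: String, dealType: String) extends CrmEvent

def dealsView(events: Seq[CrmEvent]): Map[String, String] =
  events.foldLeft(Map.empty[String, String]) {
    case (m, CustomerCreated(id))      => m.updated(id, "standard") // default deal
    case (m, DealNegotiated(id, deal)) => m.updated(id, deal)
  }
```

A command handler in A can then validate an order with a plain map lookup: a missing key means the customer ID doesn't exist in the CRM.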
On Friday, 8 November 2013 13:24:43 UTC-5, Andrew Easter wrote:
Not necessarily to the same Akka cluster but have access to the same distributed journal.
The live events propagation could also be done via the journal (either push or pull based).
I am curious, how would you push across applications?
On a side note, I've been thinking a lot over the past couple of days about how to implement the DDD/CQRS concept of a Process Manager in the framework I'm working on. This "new persistent actor" would be a very good fit, in theory. A Process Manager consumes events, possibly from multiple aggregates, and issues commands based on those events. And the ability to snapshot state naturally makes sense. Just an interesting thought.
When a processor writes a message/event it can then be pushed to view models with a matching processor id.
Making a distributed journal a full blown pub / sub middleware? That would be cool.
You mean like Apache Kafka? I've been reading a lot about Apache Kafka and it seems like the Kafka/Akka combination is a potent one (even without Storm). I don't see Akka persistence taking on the Kafka role, do you?
Yes. It can also be used like a queue with a single consumer. I'm no Kafka or DDD expert, but I would think Pub/Sub messaging plays a role in sagas or communicating between bounded contexts. But if the Akka folks think Pub/Sub is redundant I'd like to know about it.
So, I think the pub/sub plugin approach works nicely. For example, if publishing to a Kafka topic, any application (not just an akka based one) could subscribe to live event feeds and project view(s) in whatever way makes sense.
This gives the implementor the choice as to whether to use persistent pub/sub or not (obviously Kafka has this by design). In the case of views that are only projected from a live event stream, this is going to be important such that events aren't missed when the read model consumer goes offline temporarily.
This discussion has referred to creating a read only processor with the same processor id as a read/write counterpart.
This is nice but, at least in the use case I'm looking at, my read views are interested in receiving updates from multiple processors (each processor representing an aggregate root of a particular type).
I wonder whether there would have to be some flexibility in the way messages are routed from journal to read only views? For example, rather than a straight match on processor id, a regular expression could be used. In that way, one could choose to add a common prefix (i.e. aggregate root type) to processors of the same type and have a view that defines a regular expression that matches on that prefix (ignoring the unique part of the id).
Andrew
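The prefix/regex routing idea might be sketched like this (the PatternView API is invented for illustration; nothing like it exists in akka-persistence out of the box today):

```scala
// Sketch: a view subscribes with a pattern instead of one exact processor
// id, e.g. matching every "claim-earnings-<n>" processor.
import scala.util.matching.Regex

final case class Journaled(processorId: String, payload: String)

class PatternView(pattern: Regex) {
  private var received = Vector.empty[Journaled]

  // Offer every journaled message; keep only those whose processor id matches.
  def offer(msg: Journaled): Unit =
    if (pattern.pattern.matcher(msg.processorId).matches()) received = received :+ msg

  def events: Vector[Journaled] = received
}
```

With a shared prefix as the aggregate-root type, one view can then follow a whole family of processors while ignoring the unique suffix of each id.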
Would it make sense to have read model functionality that made it possible to aggregate from 1-n processors of the *same* type with different processor ids, e.g.:
pid = claim-earnings-1
pid = claim-earnings-..
pid = claim-earnings-n
In this case there would be no causal dependencies between those streams.
Hi Patrik,
The order could, in our case, just as well be claim-earnings-2 events followed by claim-earnings-1 events.
> The order could, in our case, as well be claim-earnings-2 events followed by claim-earnings-1 events.
Ok, then it can be implemented on top of the existing processor building block.
Hi Patrik,
On Wednesday, 13 November 2013 03:19:08 UTC+8, Patrik Nordwall wrote:
> The order could, in our case, as well be claim-earnings-2 events followed by claim-earnings-1 events.
> Ok, then it can be implemented on top of the existing processor building block.
Can you please give me a quick idea of how this could be implemented effectively now (i.e. a View with a known set of Processors to follow)?
I am thinking of how to create a denormalised view for, say, a Department that contains Staff. If the DepartmentView follows the DepartmentESP, then it has a list of updated Staff processor ids. Are you suggesting that the DepartmentView will message each Staff member's ESP or View to get their state? Will it do this every n seconds as the DepartmentView is updated? Or something else?
Sorry to keep going on about this - I know Akka Streams may be the answer for this in the near future, but I'm just keen to know what the alternative approaches are (e.g. using an event bus, publish-subscribe).
Thanks,
Ashley
> Can you please give me a quick idea of how this could be implemented effectively now (i.e. a View with a known set of Processors to follow)?
A View corresponds to one processorId, but you can have, for example, an aggregate actor that has two Views as children, receiving updates from both (and supervising them) and doing the aggregation. Of course, you should be able to collate messages coming from the two views to avoid inconsistencies.
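The collation step could be as simple as buffering the updates from both child Views and releasing them in one consistent order (the Update type and timestamp-based ordering are assumptions, not an akka-persistence feature):

```scala
// Sketch: merge updates from two child views into one consistently
// ordered stream before applying them to the aggregate.
final case class Update(source: String, timestamp: Long, data: String)

def collate(a: Seq[Update], b: Seq[Update]): Seq[Update] =
  (a ++ b).sortBy(u => (u.timestamp, u.source)) // source id as a stable tie-breaker
```

Any total order works as long as both consumers agree on it; timestamps are just the simplest stand-in here.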