Hi Volker,

On 09.04.13 18:24, Volker Stampa wrote:
> Hi,
> since I also plan to use this feature I had a quick look at the API
> and can say that it looks pretty convenient for me. I especially like
> the fact that I do not have to fiddle around with snapshots and
> sequence numbers when it comes to recovery but a simple
>
> extension.recover(replayParams.allWithSnapshot)
>
> will do.
Yes, removing the burden of dealing with sequence numbers from the
user, during both snapshot creation and recovery, was a design goal.
> I still have some comments/questions. I only looked at the sample
> application and not at the actual implementation, so please excuse me
> if some of them look trivial.
>
> - As far as I understand, the snapshot is saved to the journal. Is
>   this done in parallel to saving normal messages, or are normal
>   messages queued and written once the snapshot is finished? Or does
>   this depend on the journal implementation?
This depends on the journal implementation, but writing the snapshot
concurrently with normal messages makes the most sense, especially
when snapshots are large. The SWRS in the current prototype supports
this. With the AWRS, writes are done concurrently anyway.
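To illustrate the concurrent case, here is a minimal sketch (not the actual SWRS/AWRS code; the journal and its method names are assumptions for illustration): message writes are appended directly, while a potentially large snapshot is written on a separate thread so the message stream is not blocked behind it.

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical journal state: a thread-safe log shared by both writers.
val log = new ConcurrentLinkedQueue[String]()

// Normal message writes happen on the caller's thread.
def writeMessage(msg: String): Unit = log.add(s"msg:$msg")

// The snapshot write runs on a separate thread and may overlap with
// subsequent message writes instead of queueing them behind it.
def writeSnapshot(state: String): Future[Unit] =
  Future { log.add(s"snapshot:$state") }

writeMessage("a")
val pending = writeSnapshot("state-1")
writeMessage("b") // not queued behind the snapshot write
Await.result(pending, 5.seconds)
// log now contains msg:a, msg:b and snapshot:state-1, in some interleaving
```

A queueing journal would instead complete the snapshot write before accepting further message writes, which is simpler but stalls the stream for large snapshots.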
> - The SnapshotRequest as well as the Snapshot in SnapshotOffer
>   contain details like processorId or requestor that an eventsourced
>   actor (I assume) does not care about. I wonder if it is a big deal
>   to hide these?
Agreed, it would be cleaner to hide them. It shouldn't be a big deal.
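Hiding them could look roughly like the following sketch. The field names (processorId, sequenceNr) follow the discussion above but are assumptions about the real API, as is the wrapper itself:

```scala
// Journal-level representation, carrying bookkeeping metadata.
case class SnapshotMetadata(processorId: Int, sequenceNr: Long)
case class Snapshot(metadata: SnapshotMetadata, state: Any)

// What the eventsourced actor would actually see: just the state.
case class SnapshotOffer(state: Any)

// The extension strips the metadata before offering the snapshot.
def toOffer(s: Snapshot): SnapshotOffer = SnapshotOffer(s.state)

val offer = toOffer(Snapshot(SnapshotMetadata(1, 42L), "counter=7"))
// offer.state == "counter=7"; processorId and sequenceNr stay hidden
```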
> - As far as I understand, there is only one snapshot per eventsourced
>   actor. I wonder if it could make sense to support several snapshots
>   per actor and to select which one to base the recovery on. I do not
>   have any requirements for this right now, so I am basically just
>   thinking aloud.
This would definitely make sense. Replaying from an older snapshot can
avoid issues like those described in this post, for example. It would
then make sense to have time-based snapshot selection criteria, which
would require snapshot timestamps and an extension of ReplayParams.
I added your comments to ticket #8. Thanks for your valuable feedback.
Cheers,
Martin