Hi Michael,
Your question is very interesting.
Assuming multiple devices can modify the same state in a multi-master scenario, I do not think event sourcing is the right tool for the job here. By definition, an aggregate is responsible for its own integrity. Given the decentralized nature of your application, multiple "disconnected" copies of what is supposed to be the same aggregate could (and will!) produce conflicting event streams.
The event stream is THE source of truth, so conflict resolution has to happen before events are committed. I think the only way to solve this satisfactorily is to store a pipeline of commands on the device and submit it to a centralized server, which then handles conflict resolution.
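To make that idea concrete, here is a minimal sketch (TypeScript, purely hypothetical names) of what such a locally stored command pipeline could look like: commands accumulate on the device and are only turned into events once the central server has accepted them.

```typescript
// Minimal sketch of an offline command pipeline (all names are hypothetical).
// Commands are queued locally while the device is offline and later submitted
// to a central server, which remains the only place where events are committed.

interface QueuedCommand {
  commandId: string;   // client-generated id so the server can deduplicate retries
  aggregateId: string;
  type: string;        // e.g. "RenameItem"
  payload: unknown;
  issuedAt: string;    // ISO timestamp, informational only
}

class OfflineCommandQueue {
  private pending: QueuedCommand[] = [];

  enqueue(command: QueuedCommand): void {
    this.pending.push(command);
  }

  // Flush the queue once connectivity is restored. The server decides, per
  // command, whether it can be applied against the current event stream.
  async flush(
    submit: (c: QueuedCommand) => Promise<"accepted" | "rejected">
  ): Promise<QueuedCommand[]> {
    const rejected: QueuedCommand[] = [];
    while (this.pending.length > 0) {
      const next = this.pending[0];
      const outcome = await submit(next);
      if (outcome === "rejected") {
        rejected.push(next); // surface to the user or trigger local compensation
      }
      this.pending.shift(); // purge the command either way; the server owns the truth
    }
    return rejected;
  }
}
```

The key design choice is that the queue never writes to the event store itself; rejected commands are merely surfaced so the user (or a compensating step) can deal with them.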
Merging event streams that were created asynchronously only works if the following conditions are met:
1) At any one time, (a) only one device/client can modify a given aggregate, and this needs to be enforced extremely aggressively, or (b) you are willing to build a conflict resolution layer which rewrites the incoming stream and appends it to the event store once it has been checked for conflicts. Similarly, an integration layer will have to be written on the client, able to discard the read model back to the branching point and replay the event stream from there.
Solution (a) is very difficult to implement and assumes that your problem domain makes it possible to completely segregate ALL the aggregates that are modified by the mobile client, which means your problem domain has no cooperative component. Even if that is the case, I know of no easy mechanism to enforce it, unless you control the device hardware, and even then the risk of two devices modifying the same aggregate separately is not completely eliminated. It should also be noted that one of the main reasons to use CQRS is that it makes complex interactions between contexts easier (through sagas, for example). If your problem domain has no cooperative component, CQRS may not be the right tool in the first place.
Solution (b) is also very difficult to implement. It requires you to abandon one of the core principles of DDD (and hence CQRS): the aggregate is the transactional boundary, and the aggregate root is responsible for ensuring consistency within it. It also violates one of the most valuable properties of the event stream, namely that it is append-only. On top of that, you have to write an integration layer and design your read model so it can be rewound to a given point in time, or accept deleting and rebuilding the read model entirely when necessary. This is not impossible, but it is certainly not trivial. It also assumes that all the side effects of events are reversible, which is bound to cause problems with hard-to-reverse side effects such as financial transactions, or really anything that touches external services. A rough sketch of what such a merge step could look like follows this list.
2) The integrity of the data can be protected on the mobile device to the level your requirements demand. Either you accept that someone may tamper with the local event stream and send you garbage, or you write a verification layer to catch it, in which case rejected events will need to be "unapplied" on the mobile device.
3) Your server code and object model are forever backward compatible, since old versions of the client will always be running and nothing can force an offline device to update its software and stop producing new events until the update occurs (short of a pre-set mandatory time limit after which an outdated client refuses to work). The kind of upcasting layer this implies is sketched after the list.
4) You are willing to accept that old code can and will, forever, execute and create events that may not respect newly established contracts, OR you accept that certain events may be rejected, in which case they will need to be reverted on the client. That means writing a handler for every event that modifies the client read model, so that its changes can be undone on the read side (i.e. compensating actions; a sketch of such handlers also follows the list).
5) Other stuff that I have not thought about.
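For what it is worth, here is a rough sketch (TypeScript, invented names) of the kind of merge/rebase step that option 1(b) implies: the client uploads the events it produced offline together with the version it branched from, and the server decides whether they can be appended after the events it has committed in the meantime. The hard part, the `conflictsWith` rules, is entirely domain-specific and is exactly where the pain lives.

```typescript
// Sketch of a server-side merge step for option 1(b) (names are hypothetical).
// The client uploads the events it produced offline together with the version
// of the aggregate it branched from. The server compares that branching point
// with the committed stream and either appends the events or rejects the batch.

interface DomainEvent {
  aggregateId: string;
  type: string;
  payload: unknown;
}

interface OfflineBatch {
  aggregateId: string;
  branchedAtVersion: number; // last committed version the client had seen
  events: DomainEvent[];
}

type MergeResult =
  | { kind: "appended" }
  | { kind: "rejected"; reason: string };

// `conflictsWith` encodes the domain-specific rules: which concurrent events
// are compatible and which are not. This is the hand-written, painful part.
function merge(
  committed: DomainEvent[],
  batch: OfflineBatch,
  conflictsWith: (serverEvent: DomainEvent, clientEvent: DomainEvent) => boolean,
  append: (events: DomainEvent[]) => void
): MergeResult {
  const divergent = committed.slice(batch.branchedAtVersion); // events the client never saw

  for (const serverEvent of divergent) {
    for (const clientEvent of batch.events) {
      if (conflictsWith(serverEvent, clientEvent)) {
        return {
          kind: "rejected",
          reason: `${clientEvent.type} conflicts with ${serverEvent.type}`,
        };
      }
    }
  }

  // No conflicts detected: the offline events are appended after the divergent
  // ones, i.e. the client's branch is effectively rebased onto the stream.
  append(batch.events);
  return { kind: "appended" };
}
```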
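Point 3 usually translates into an upcasting layer on the server: every historical event shape that an old client can still emit must be translatable to the current shape before anything else processes it. A minimal sketch, again with invented event names:

```typescript
// Sketch of an upcasting layer for point 3 (hypothetical event shapes). Old
// clients will keep emitting old event versions forever, so the server
// translates every known historical version to the current one on arrival.

interface VersionedEvent {
  type: string;
  schemaVersion: number;
  payload: Record<string, unknown>;
}

type Upcaster = (event: VersionedEvent) => VersionedEvent;

// One upcaster per (type, version) hop; they are chained until the current
// version is reached.
const upcasters: Record<string, Upcaster> = {
  "ItemRenamed:1": (e) => ({
    ...e,
    schemaVersion: 2,
    // v1 had a single "name" field; v2 splits it into old/new values.
    payload: { previousName: null, newName: e.payload["name"] },
  }),
};

function upcast(event: VersionedEvent): VersionedEvent {
  let current = event;
  let step = upcasters[`${current.type}:${current.schemaVersion}`];
  while (step !== undefined) {
    current = step(current);
    step = upcasters[`${current.type}:${current.schemaVersion}`];
  }
  return current;
}
```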
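And for point 4, a sketch of the compensating handlers the client would need: for every event that was applied optimistically to the local read model, there must be a matching "undo" that reverts its effect when the server rejects it.

```typescript
// Sketch for point 4 (hypothetical names): every client-side projection that
// applied an event optimistically needs a matching compensating handler that
// undoes its effect on the local read model when the server rejects the event.

interface RejectedEvent {
  type: string;
  payload: unknown;
}

type CompensatingHandler = (event: RejectedEvent) => void;

class ClientReadModelCompensator {
  private handlers = new Map<string, CompensatingHandler>();

  register(eventType: string, undo: CompensatingHandler): void {
    this.handlers.set(eventType, undo);
  }

  // Called with the events the server refused, so the local read model is
  // unwound in reverse order of application.
  revert(rejected: RejectedEvent[]): void {
    for (const event of [...rejected].reverse()) {
      const undo = this.handlers.get(event.type);
      if (undo === undefined) {
        throw new Error(`No compensating handler registered for ${event.type}`);
      }
      undo(event);
    }
  }
}
```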
I could go on and on about all the assumptions, trade-offs and headaches you would have to endure to solve your offline problem by merging event streams. Overall, it doesn't seem like a sound approach.
Maybe a command queue that is purged once the client goes back online is more appropriate?