For a more painful option... :)
We run active/active(/active) and duplicate all important messages to at least 2 core clusters.
The core clusters process in parallel, including database operations against a global Cassandra cluster, and then exchange outputs.
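
Roughly, the ingress side amounts to something like this (a Python sketch with made-up names, not our actual code); the important part is stamping one stable message id before the fan-out so everything downstream can dedup on the same key:

    import json
    import uuid

    CORE_CLUSTERS = ["core-eu.example.net", "core-us.example.net"]  # made-up hosts

    def send_to_cluster(cluster: str, body: str) -> None:
        # Stand-in for the real transport (queue producer, RPC, whatever).
        print(f"-> {cluster}: {body}")

    def ingress(payload: dict) -> None:
        # One id assigned at the edge, before duplication; both cores and
        # the gateways dedup on this same key later.
        body = json.dumps({"msg_id": str(uuid.uuid4()), "payload": payload})
        for cluster in CORE_CLUSTERS:
            send_to_cluster(cluster, body)

    ingress({"type": "order", "qty": 1})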
As a final step, each core dedups against Cassandra (coarse-grained) and passes messages on to the gateways, which dedup again (fine-grained).
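
One way to sketch the two dedup layers (not necessarily exactly what we run; keyspace, table, and hosts below are made up): the coarse-grained check can be a Cassandra LWT insert keyed on the message id, and the gateway's fine-grained check a small in-memory window of recently delivered ids.

    from collections import OrderedDict
    from cassandra.cluster import Cluster  # DataStax Python driver

    # Assumed schema: CREATE TABLE msgs.seen (msg_id text PRIMARY KEY)
    # (in practice you'd also put a TTL on the rows so the table doesn't
    # grow without bound).
    session = Cluster(["cassandra-seed.example.net"]).connect("msgs")

    def coarse_dedup(msg_id: str) -> bool:
        # The LWT insert only applies for the first core to claim this id;
        # the other core sees was_applied == False and drops its copy.
        rs = session.execute(
            "INSERT INTO seen (msg_id) VALUES (%s) IF NOT EXISTS",
            (msg_id,),
        )
        return rs.was_applied

    class GatewayDedup:
        # Fine-grained, per-gateway: a bounded window of recently delivered
        # ids. Ids aging out of the window are how the occasional duplicate
        # can still reach a client.
        def __init__(self, max_size: int = 100_000) -> None:
            self._seen = OrderedDict()
            self._max = max_size

        def is_new(self, msg_id: str) -> bool:
            if msg_id in self._seen:
                return False
            self._seen[msg_id] = True
            if len(self._seen) > self._max:
                self._seen.popitem(last=False)  # evict the oldest id
            return True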
In extreme cases, a client may still receive a duplicate. They expect this.
In our case, core clusters are on different continents.
We can run on one core cluster if necessary and have done so. No client, external or internal, noticed.
ml