Hi Truc Le,
Thanks for the response!
We are ultimately trying to ensure we have sufficient disaster recovery in place for our application. For context, we use a dataset per customer in a single data store to back a SaaS application.
Ultimately I am trying to figure out:
1. If, say, the GCP Healthcare API has a major issue, what protections are in place?
2. If an entire dataset for one of our customers was deleted in production, either maliciously or accidentally, could we restore that dataset?
3. If a customer's data was corrupted in some other way, can we recover it back to a point in time?
> Our FHIR store comes equipped with a built-in internal backup method for data protection.
This sounds like it would cover situation (1)? Would it also help us restore data if we deleted a customer's dataset (2)? Are these backups available for us to selectively restore from, or are they only about overall system stability/protection? Could you say more about how this works?
> Additionally, we offer a rollback feature that allows you to restore resources to a specified point in time.
I have taken a look at this. It looks like it would help with (3), but unless I am mistaken it wouldn't help us with (2), i.e. recovering an entire store if it's deleted? (A sketch of how I'm picturing using it is below.)
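In case it's useful to see what I mean for (3), here is roughly how I'm picturing a point-in-time rollback call. The project/store path is a placeholder, and the API version and request fields (e.g. rollbackTime) are my best guess from the docs, so please correct me if I'm off:

```python
# Rough sketch of a point-in-time rollback call.
# The store path is a placeholder, and the API version and request fields
# (e.g. "rollbackTime") are my guesses from the docs, not verified.
import google.auth
import google.auth.transport.requests
import requests

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

store = ("projects/my-project/locations/us-central1/"
         "datasets/customer-a/fhirStores/customer-a-fhir")  # placeholder path

resp = requests.post(
    f"https://healthcare.googleapis.com/v1/{store}:rollback",
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"rollbackTime": "2024-05-01T00:00:00Z"},  # point in time to restore to
)
resp.raise_for_status()
print(resp.json())  # long-running operation I would then poll
```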
> For more comprehensive backup strategies, we also have exportHistory and importHistory APIs available in public preview. These can be valuable tools for your needs.
Thanks, I'll take a deeper look at these. Will the importHistory endpoint merge histories for the same resource if they are split across multiple incremental exports, or is it still last-import-wins like the import API? Also, I tried the exportHistory API on a tiny store (fewer than 10 resources total) and it took ~10 minutes vs ~4 seconds for `export` - is that a limitation of the beta? Some of our production stores are much, much larger (100k+ resources). The rough calls I ran are below.
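For reference, this is approximately what I ran for that comparison. The project/store/bucket names are placeholders, and the exportHistory path and request body are just my reading of the preview docs, so let me know if I'm holding it wrong:

```python
# Minimal sketch of the two calls I timed. Project/store/bucket names are
# placeholders; the :exportHistory path and body reflect my reading of the
# preview docs and may not be exactly right.
import google.auth
import google.auth.transport.requests
import requests

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())
headers = {"Authorization": f"Bearer {credentials.token}"}

store = ("projects/my-project/locations/us-central1/"
         "datasets/customer-a/fhirStores/customer-a-fhir")  # placeholder path

# Regular export (v1) -- finished in a few seconds on the tiny store.
export_op = requests.post(
    f"https://healthcare.googleapis.com/v1/{store}:export",
    headers=headers,
    json={"gcsDestination": {"uriPrefix": "gs://my-backup-bucket/export"}},
)
export_op.raise_for_status()

# History export (public preview) -- the same store took ~10 minutes.
history_op = requests.post(
    f"https://healthcare.googleapis.com/v1beta1/{store}:exportHistory",
    headers=headers,
    json={"gcsDestination": {"uriPrefix": "gs://my-backup-bucket/history"}},
)
history_op.raise_for_status()

# Both return long-running operations, which I polled separately until done.
print(export_op.json(), history_op.json())
```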
Thanks for anything you can provide to help me out!
Philip