Snapshot/Restore Process


Tapas Mohapatra

May 27, 2019, 1:25:52 PM
to victorametrics-users
Hi, I want to understand how I can prevent data loss.

  • Can I take snapshots, store them on separate storage and restore them as needed? If yes, I didn't find any commands to restore.
  • Will snapshot/restore work on a newly built server if the existing server is dead?

Thanks,
-Tapas

Aliaksandr Valialkin

May 27, 2019, 1:43:02 PM
to Tapas Mohapatra, victorametrics-users
Hi Tapas,

On Mon, May 27, 2019 at 8:25 PM Tapas Mohapatra <tapasmo...@gmail.com> wrote:
Hi, I want to understand how I can prevent data loss.

  • Can I take snapshots, store them on separate storage and restore them as needed? If yes, I didn't find any commands to restore.
Snapshots may be stored (archived) on separate storage with any suitable tool that follows symlinks, for instance `cp -L`, `scp -r` or `rsync -L`. A snapshot contains an entire copy of the data directory pointed to by the `-storageDataPath` command-line flag, so the restoration process is quite simple:
- stop VictoriaMetrics
- remove all data from the directory pointed to by `-storageDataPath`, then copy the snapshot contents there
- start VictoriaMetrics
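The steps above can be sketched end-to-end with throwaway paths (everything under `/tmp/vm-demo` is hypothetical; a real snapshot is created via VictoriaMetrics' `/snapshot/create` HTTP endpoint and appears under the `-storageDataPath` directory). The key point is that the archive step must follow symlinks, because a snapshot is a directory of symlinks into the live data files:

```shell
# Clean start for the demo
rm -rf /tmp/vm-demo
mkdir -p /tmp/vm-demo/data /tmp/vm-demo/snapshot

# A "live" data file and a snapshot entry that symlinks to it
echo "datapoint" > /tmp/vm-demo/data/part1
ln -s /tmp/vm-demo/data/part1 /tmp/vm-demo/snapshot/part1

# Archive the snapshot with a symlink-following copy (-L), so the
# archive holds real file contents rather than dangling links
cp -rL /tmp/vm-demo/snapshot /tmp/vm-demo/backup

# Restore: wipe the -storageDataPath directory, copy the archive back
rm -rf /tmp/vm-demo/data
cp -r /tmp/vm-demo/backup /tmp/vm-demo/data

cat /tmp/vm-demo/data/part1   # -> datapoint
```

In a real setup VictoriaMetrics must be stopped before the wipe-and-copy step and started again afterwards, as listed above.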
 
  • Will snapshot/restore work on a newly built server if the existing server is dead?

We maintain backwards compatibility for the on-disk data format, so snapshots created by old VictoriaMetrics versions should work with new versions. Note that the single-node data format is incompatible with the cluster data format, so snapshots from the single-node version won't work on a cluster version and vice versa. We plan to create migration tools for converting single-node data to the cluster format.

 

Thanks,
-Tapas

--
You received this message because you are subscribed to the Google Groups "victorametrics-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to victorametrics-u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/victorametrics-users/2c4194a4-b978-49d5-9723-9fef5d3d3980%40googlegroups.com.


--
Best Regards,

Aliaksandr

Tapas Mohapatra

May 27, 2019, 2:06:29 PM
to Aliaksandr Valialkin, victorametrics-users
You were super quick, thanks. I'm impressed with VictoriaMetrics and am evaluating it. I need a suggestion here.

I have 4k servers to monitor, 2k in each DC.
There are 2 to 3 Prometheus pairs in each DC (both instances in a pair scrape the same targets to provide HA on Prometheus; no cross-DC setup).
I want to use VictoriaMetrics for long-term retention (15 months). Running a single server on one Prometheus pair is about 6 GB of data.

So can I use the following approach?

Say prom01 and prom01_HA are a pair, and likewise prom02 and prom02_HA.

Can I have 2 VictoriaMetrics DBs, with prom01, prom02 and prom03 writing to DB01 and prom01_HA, prom02_HA and prom03_HA writing to DB02?
Is this model going to work? The thought was to gain HA from the VictoriaMetrics DB perspective (at least data points for trending).

Thanks,
-Tapas

Aliaksandr Valialkin

May 27, 2019, 2:18:52 PM
to Tapas Mohapatra, victorametrics-users
On Mon, May 27, 2019 at 9:06 PM Tapas Mohapatra <tapasmo...@gmail.com> wrote:
You were super quick, thanks. I'm impressed with VictoriaMetrics and am evaluating it. I need a suggestion here.

I have 4k servers to monitor, 2k in each DC.
There are 2 to 3 Prometheus pairs in each DC (both instances in a pair scrape the same targets to provide HA on Prometheus; no cross-DC setup).
I want to use VictoriaMetrics for long-term retention (15 months). Running a single server on one Prometheus pair is about 6 GB of data.

So can I use the following approach?

Say prom01 and prom01_HA are a pair, and likewise prom02 and prom02_HA.

Can I have 2 VictoriaMetrics DBs, with prom01, prom02 and prom03 writing to DB01 and prom01_HA, prom02_HA and prom03_HA writing to DB02?
Is this model going to work? The thought was to gain HA from the VictoriaMetrics DB perspective (at least data points for trending).

Yes, this model should work if DB01 and DB02 are placed in distinct availability zones (datacenters). You can put Promxy in front of DB01 and DB02 so it can perform automatic merging and de-duplication of the collected data. This approach should survive the following events:
* Temporary unavailability of a single Prometheus from each HA pair. In this case the remaining Prometheus instances will continue writing data to their DB.
* Temporary unavailability of DB01 or DB02. The data will continue flowing into the remaining DB.
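The double-write side of this setup is plain Prometheus `remote_write` configuration. A minimal sketch, assuming hypothetical hostnames `db01`/`db02` for DB01/DB02 (single-node VictoriaMetrics accepts Prometheus remote write at `/api/v1/write` on port 8428):

```yaml
# prom01, prom02, prom03 — write to DB01.
# The *_HA replicas would use http://db02:8428/api/v1/write instead.
remote_write:
  - url: http://db01:8428/api/v1/write
```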

Promxy should handle gaps in the data from DB01 or DB02 by filling them with data from the other DB.
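The Promxy side could be sketched as one server group per DB, so both are queried and merged. This is an illustrative fragment only; hostnames are the same assumptions as above, and the exact schema should be checked against the Promxy documentation:

```yaml
# Hypothetical promxy config: one server group per VictoriaMetrics DB.
# Promxy queries both groups, merges and de-duplicates the results, and
# fills gaps in one DB with data from the other.
promxy:
  server_groups:
    - static_configs:
        - targets:
            - db01:8428   # assumed hostname for DB01
    - static_configs:
        - targets:
            - db02:8428   # assumed hostname for DB02
```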