Prometheus with TimescaleDB


sreehari M V

Oct 1, 2020, 5:44:26 AM
to Prometheus Users
Hi All,

Greetings,

Is anybody using Prometheus with TimescaleDB? How is your experience in terms of performance and storage size? (I have 500+ servers with monitor, node, process, and JMX exporters.)


Thanks and Regards,
Vinod M V

chris....@gmail.com

Oct 1, 2020, 6:42:08 PM
to Prometheus Users
Curious as to what you mean by "using it with". At one time we used remote_write to send all Prometheus data to Timescale, but Timescale did have a tendency to use a lot of disk space.
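
For context, the wiring on the Prometheus side was just a remote_write stanza pointing at the adapter. A minimal sketch, assuming the adapter's default port 9201 (the host name is a placeholder, not our actual setup):

    # prometheus.yml -- ship all samples to the Timescale adapter
    # host and port are assumptions; adjust to your deployment
    remote_write:
      - url: "http://timescale-adapter.example.com:9201/write"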

sreehari M V

Oct 4, 2020, 6:10:39 AM
to chris....@gmail.com, Prometheus Users
Hi Chris,

In fact, I am looking for a scalable and highly available monitoring setup, and I am planning to implement TimescaleDB for this. Currently node, process, and JMX exporters are installed on the nodes, and I expect the node count to grow to 500+.

Prometheus storage retention is currently 30 days. Can you please suggest the best solution for this? If TimescaleDB is the best solution, can you please share implementation ideas? We are facing the issues below with TimescaleDB.

1. Storage size is huge.
2. Not able to fetch data from TimescaleDB through Prometheus.

Regards,
Vinod M V


Ben Kochie

Oct 4, 2020, 6:26:36 AM
to sreehari M V, chris....@gmail.com, Prometheus Users
We're using Thanos for clustering and HA of our Prometheus servers.
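
The HA side is mostly just running two identical Prometheus replicas and letting Thanos Query deduplicate on a replica label. A rough sketch (the label names are our convention, not something Thanos mandates):

    # prometheus.yml on replica A; replica B sets replica: "b"
    global:
      external_labels:
        cluster: prod
        replica: "a"   # Thanos Query dedups via --query.replica-label=replica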

Brian Candler

Oct 4, 2020, 8:24:05 AM
to Prometheus Users
On Sunday, 4 October 2020 11:10:39 UTC+1, sreehari M V wrote:
In fact, I am looking for a scalable and highly available monitoring setup, and I am planning to implement TimescaleDB for this. Currently node, process, and JMX exporters are installed on the nodes, and I expect the node count to grow to 500+.

Prometheus storage retention is currently 30 days. Can you please suggest the best solution for this? If TimescaleDB is the best solution, can you please share implementation ideas? We are facing the issues below with TimescaleDB.

1. Storage size is huge.
2. Not able to fetch data from TimescaleDB through Prometheus.


I think the storage size problem is fundamental to TimescaleDB: it's a row-based storage engine, and hence will use much more storage than a column-based system, as well as being much slower to query for typical use cases. However, you *should* be able to read and write to it via Prometheus: the postgres/timescale adapter supports both remote read and remote write. If you can't, then it's probably just a configuration issue.
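
If remote write works but queries come back empty, the usual suspect is a missing remote_read stanza. A sketch, assuming the adapter's default /read endpoint on port 9201 (the host is a placeholder):

    # prometheus.yml -- fan queries out to the adapter as well
    remote_read:
      - url: "http://timescale-adapter.example.com:9201/read"
        read_recent: true   # also consult the remote store for recent data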

If you're looking for a more scalable solution, I'd recommend you look at VictoriaMetrics: it can start as a simple single-process system, which may be all you need, but you can move to a horizontally scalable distributed system later if you need to. There are some benchmarks here (from the author of VictoriaMetrics):

Thanos/Cortex are other well-known big players in this space.
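
If you want to trial VictoriaMetrics, the change on the Prometheus side is small. A sketch, assuming a single-node instance on its default port 8428 (the host is a placeholder):

    # prometheus.yml -- mirror samples to VictoriaMetrics
    remote_write:
      - url: "http://victoriametrics.example.com:8428/api/v1/write"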

chris....@gmail.com

Oct 4, 2020, 11:35:22 AM
to Prometheus Users
We did do a lot of compare/contrast of different solutions. The real question is what you "desire". Our Timescale deployment came about because someone felt there was a need for SQL. We did make the argument that yes, SQL is nice, but there are a lot of different ways to get data out.

Timescale: yes, it is row based, and the storage requirements were larger than for "other solutions". If you are just using it to store data, then take a look at the following; it might be a better solution.

The ones we began to look at were VictoriaMetrics, Thanos, and M3.

Harkishen Singh

Oct 4, 2020, 1:19:47 PM
to Prometheus Users
I have been using TimescaleDB for quite some time and everything is going great! No problems so far, even at scale. So it should work well for you too.

Ben Kochie

Oct 4, 2020, 1:23:51 PM
to Harkishen Singh, Prometheus Users
What is "at scale" to you? Number of metrics? Samples per second?

Harkishen Singh

Oct 4, 2020, 1:43:46 PM
to Prometheus Users
Both: number of metrics and samples per second, in terms of ingestion rate. I have personally seen TimescaleDB ingest a billion rows a second, and that's a good scale.

Ben Kochie

Oct 4, 2020, 3:12:29 PM
to Harkishen Singh, Prometheus Users
No, I meant what are your actual production numbers for samples per second and number of metrics, not "which one".

For 1 billion samples per second, how many servers and how many CPUs are required? How much memory is required? How long can that ingestion rate be sustained?


m...@timescale.com

Oct 6, 2020, 3:11:43 PM
to Prometheus Users
Hi, (TimescaleDB dev here)

Promscale (https://github.com/timescale/promscale), the latest TimescaleDB adapter for Prometheus, does support our time-series-optimized compression (https://blog.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/), so data size should be much less of an issue now. In fact, we expect compression to be similar to Prometheus.

Promscale also now natively supports PromQL and the same HTTP endpoints for querying as Prometheus (in addition to remote_read), so querying should be much faster as well.
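
To illustrate (the host name below is a placeholder), the same instant query you would send to Prometheus can go straight to Promscale's Prometheus-compatible HTTP API:

    # instant PromQL query against Promscale's default port 9201
    curl 'http://promscale.example.com:9201/api/v1/query?query=up'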

Are you perhaps still using the old adapter? If so, we'd highly recommend switching.

Please reach out with any additional questions you may have.

Thanks,
Mat Arye
Promscale team lead
