to OpenTSDB
Hello gents,
I have a large volume of very granular metrics and need to reduce storage usage.
My idea is to resample old data to a lower granularity to save storage space. However, I need to exclude some metrics and some time intervals so that data is archived at maximum granularity.
Resampling would work as follows:
When a resampling job runs, it will look at metrics older than X, take all samples within a window (e.g. 30 minutes), and apply an aggregation (e.g. mean, quantile). The aggregated value will be recorded as a single value for that 30-minute period and the other samples will be deleted. A rough sketch of what I mean is below.
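Roughly this logic (plain Python sketch, not OpenTSDB-specific; the 30-minute window and mean aggregator are just placeholder choices):

    # Sketch of the windowed resampling idea. Raw samples are
    # (unix_timestamp_seconds, value) pairs.
    from statistics import mean

    WINDOW = 30 * 60  # 30 minutes in seconds

    def resample(samples, window=WINDOW, agg=mean):
        """Collapse raw samples into one aggregated value per window."""
        buckets = {}
        for ts, value in samples:
            bucket_start = ts - (ts % window)  # align to window boundary
            buckets.setdefault(bucket_start, []).append(value)
        # one (timestamp, value) pair per window; the raw points would then be deleted
        return sorted((ts, agg(vals)) for ts, vals in buckets.items())

    # samples falling in the same half hour collapse to a single point
    print(resample([(1000, 2.0), (1100, 4.0), (2900, 6.0)]))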
Questions:
a - is there any mechanism in OpenTSDB to set up a resampling job as described above?
b - how can I find out how much capacity a specific metric is taking up?
Thanks!
BB
ManOLamancha
May 22, 2018, 1:37:58 PM
to OpenTSDB
On Friday, February 23, 2018 at 1:14:21 AM UTC-8, BeefBot wrote:
Hello gents,
Questions:
a - is there any mechanism in OpenTSDB to set up a resampling job as described above?
We don't have a built-in job yet but we do have the capability, in 2.4, to store these "rolled-up" metrics and query over them with downsamplers.
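For example, something along these lines should work against a 2.4 instance that has rollups configured. The /api/rollup payload fields are from my reading of the 2.4 HTTP API docs, and the metric name, tags, and host are made up, so double-check against your version:

    # Sketch: write a pre-aggregated (rolled-up) point, then query it back with
    # a downsampler. Assumes an OpenTSDB 2.4 server on localhost:4242 with a
    # rollup table/config already set up; verify field names against the docs.
    import requests

    TSDB = "http://localhost:4242"

    # Store one 30-minute average as a rollup data point (hypothetical metric/tags).
    rollup_point = {
        "metric": "sys.cpu.user",
        "timestamp": 1519372800,
        "value": 42.5,
        "tags": {"host": "web01"},
        "interval": "30m",
        "aggregator": "avg",
    }
    requests.post(f"{TSDB}/api/rollup", json=[rollup_point]).raise_for_status()

    # Query with a 30-minute average downsampler; 2.4 can answer this from the
    # rollup data instead of (or as a fallback to) the raw points.
    query = {
        "start": "2018/02/23-00:00:00",
        "queries": [{
            "metric": "sys.cpu.user",
            "aggregator": "avg",
            "downsample": "30m-avg",
            "tags": {"host": "web01"},
        }],
    }
    resp = requests.post(f"{TSDB}/api/query", json=query)
    print(resp.json())

The rollup generation itself still has to be done by an external job (e.g. something like the resampling sketch above feeding /api/rollup).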
b - how can I find out how much capacity a specific metric is taking up?
Unfortunately we don't have a really easy way of computing that. You kind of have to guesstimate based on the number of time series for a metric + the recording interval + the type of value (integer vs. float).
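If it helps, a back-of-the-envelope version of that guess looks like the following. The bytes-per-point figure is a rough assumption, not an exact HBase number, since compaction and compression change the real on-disk footprint:

    # Rough storage estimate for one metric:
    # series count x points per day x bytes per stored value.
    def estimate_bytes_per_day(num_series, interval_seconds, bytes_per_point):
        points_per_day = 86400 / interval_seconds
        return num_series * points_per_day * bytes_per_point

    # e.g. 5,000 series reporting every 10 seconds, ~4 bytes per float value
    daily = estimate_bytes_per_day(5000, 10, 4)
    print(f"~{daily / 1024**2:.0f} MiB/day before compression")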