Repeated Scrape Values From Prometheus With Range Selection


Suvendu Nayak

Mar 5, 2025, 6:04:27 AM
to Prometheus Users
Hi - Since Prometheus collects metrics every 15 seconds, a range query for some metrics returns repeated values (which is expected). Is there a way to filter out everything except the samples where the value changes?

Example (one sample every 15 seconds):

Time      Label  Value
10:10:15  a b c  1
10:10:30  a b c  1
10:10:45  a b c  1
10:11:00  d b c  2
10:11:15  d b c  2

What we want:

Time      Label  Value
10:10:15  a b c  1
10:11:00  d b c  2

Brian Candler

Mar 5, 2025, 1:00:42 PM
to Prometheus Users
Not a good one that I can think of. Can you do this on the client side (the API consumer)?

Any complex PromQL expression on a range involves resampling it with a subquery, which is usually not desirable, and you'll lose the exact timestamps at which the scrapes took place.

e.g.    (foo != foo offset 15s)[10m:15s]
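As a sketch of the client-side approach: the range-query API (`/api/v1/query_range`) returns each series as a `values` array of `[timestamp, value]` pairs, and the consumer can drop every sample whose value equals the previous one. The helper name below is our own invention, not part of any Prometheus client library:

```python
# Client-side deduplication of a Prometheus range-query result.
# `values` is the [[timestamp, value], ...] array returned for one
# series by /api/v1/query_range (values arrive as strings).

def drop_repeats(values):
    """Keep only the first sample of each run of equal values."""
    out = []
    prev = object()  # sentinel that never equals a real value
    for ts, val in values:
        if val != prev:
            out.append([ts, val])
            prev = val
    return out

# The example from the original question, as query_range-style pairs:
samples = [
    [1741169415, "1"],  # 10:10:15
    [1741169430, "1"],  # 10:10:30
    [1741169445, "1"],  # 10:10:45
    [1741169460, "2"],  # 10:11:00
    [1741169475, "2"],  # 10:11:15
]
print(drop_repeats(samples))
# [[1741169415, '1'], [1741169460, '2']]
```

Unlike the subquery above, this keeps the exact scrape timestamps, since it never resamples the data.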

Suvendu Nayak

Mar 6, 2025, 3:28:01 AM
to Prometheus Users
Thanks Brian. I can do this on the client side with Oracle SQL, but the problem is that Prometheus scrapes the data every 15 seconds, and most of the values stay constant for long periods. So I don't want the repeated duplicate data: even a query over just 10 minutes returns the same value over and over, 15 seconds apart.

Ben Kochie

Mar 6, 2025, 3:39:34 AM
to Suvendu Nayak, Prometheus Users
> So I don't want the repeated duplicate data

Why? What problem are you trying to solve?


Suvendu Nayak

Mar 6, 2025, 3:48:42 AM
to Prometheus Users
Because with 15-second samples over a span of 1 hour, I get 240 records, of which only 2-3 are useful. The data volume grows as the time range increases, and I have to scroll through all of it.

In a graph it's fine, maybe, but in a table it's huge and I have to go through it all.
