Error on ingesting samples with different value but same timestamp


bruno bourdolle

Nov 16, 2020, 9:35:33 AM
to Prometheus Users
hello,

I don't understand how to fix the WARN about "Error on ingesting samples". I'm running on Kubernetes, I upgraded to the latest version of Prometheus, and I cleared the data. Every time I start, this warning line appears every second.

Any idea?

best,
bruno

level=info ts=2020-11-16T14:29:36.045Z caller=main.go:353 msg="Starting Prometheus" version="(version=2.22.1, branch=HEAD, revision=00f16d1ac3a4c94561e5133b821d8e4d9ef78ec2)"

level=info ts=2020-11-16T14:29:36.046Z caller=main.go:358 build_context="(go=go1.15.3, user=root@516b109b1732, date=20201105-14:02:25)"

level=info ts=2020-11-16T14:29:36.046Z caller=main.go:359 host_details="(Linux 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 prometheus-0 (none))"

level=info ts=2020-11-16T14:29:36.046Z caller=main.go:360 fd_limits="(soft=1048576, hard=1048576)"

level=info ts=2020-11-16T14:29:36.046Z caller=main.go:361 vm_limits="(soft=unlimited, hard=unlimited)"

level=info ts=2020-11-16T14:29:36.050Z caller=web.go:516 component=web msg="Start listening for connections" address=0.0.0.0:9090

level=info ts=2020-11-16T14:29:36.050Z caller=main.go:712 msg="Starting TSDB ..."

level=info ts=2020-11-16T14:29:36.058Z caller=head.go:642 component=tsdb msg="Replaying on-disk memory mappable chunks if any"

level=info ts=2020-11-16T14:29:36.093Z caller=head.go:656 component=tsdb msg="On-disk memory mappable chunks replay completed" duration=35.773378ms

level=info ts=2020-11-16T14:29:36.093Z caller=head.go:662 component=tsdb msg="Replaying WAL, this may take a while"

level=info ts=2020-11-16T14:29:37.850Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=17

level=info ts=2020-11-16T14:29:38.244Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=1 maxSegment=17

level=info ts=2020-11-16T14:29:39.839Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=2 maxSegment=17

level=info ts=2020-11-16T14:29:39.839Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=3 maxSegment=17

level=info ts=2020-11-16T14:29:39.839Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=4 maxSegment=17

level=info ts=2020-11-16T14:29:39.839Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=5 maxSegment=17

level=info ts=2020-11-16T14:29:39.840Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=6 maxSegment=17

level=info ts=2020-11-16T14:29:39.840Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=7 maxSegment=17

level=info ts=2020-11-16T14:29:39.840Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=8 maxSegment=17

level=info ts=2020-11-16T14:29:39.840Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=9 maxSegment=17

level=info ts=2020-11-16T14:29:39.840Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=10 maxSegment=17

level=info ts=2020-11-16T14:29:39.840Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=11 maxSegment=17

level=info ts=2020-11-16T14:29:39.841Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=12 maxSegment=17

level=info ts=2020-11-16T14:29:39.841Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=13 maxSegment=17

level=info ts=2020-11-16T14:29:39.841Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=14 maxSegment=17

level=info ts=2020-11-16T14:29:39.841Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=15 maxSegment=17

level=info ts=2020-11-16T14:29:39.842Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=16 maxSegment=17

level=info ts=2020-11-16T14:29:39.842Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=17 maxSegment=17

level=info ts=2020-11-16T14:29:39.842Z caller=head.go:719 component=tsdb msg="WAL replay completed" checkpoint_replay_duration=97.404µs wal_replay_duration=3.748284098s total_replay_duration=3.78422202s

level=info ts=2020-11-16T14:29:39.944Z caller=main.go:732 fs_type=XFS_SUPER_MAGIC

level=info ts=2020-11-16T14:29:39.944Z caller=main.go:735 msg="TSDB started"

level=info ts=2020-11-16T14:29:39.944Z caller=main.go:861 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml

level=info ts=2020-11-16T14:29:39.945Z caller=kubernetes.go:263 component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"

level=info ts=2020-11-16T14:29:39.947Z caller=kubernetes.go:263 component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"

level=info ts=2020-11-16T14:29:40.030Z caller=main.go:892 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=85.979364ms remote_storage=5.031µs web_handler=653ns query_engine=1.227µs scrape=250.298µs scrape_sd=2.56867ms notify=33.201µs notify_sd=11.894µs rules=82.217333ms

level=info ts=2020-11-16T14:29:40.030Z caller=main.go:684 msg="Server is ready to receive web requests."

level=warn ts=2020-11-16T14:30:00.551Z caller=scrape.go:1372 component="scrape manager" scrape_pool=kermit-proxy target=https://userprom.xxx:443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=30

level=warn ts=2020-11-16T14:30:11.443Z caller=scrape.go:1372 component="scrape manager" scrape_pool=kermit-proxy target=https://userprom.xxx:443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=30

Matthias Rampke

Nov 16, 2020, 10:48:25 AM
to bruno bourdolle, Prometheus Users
This can happen in a few ways:

1. Whatever exports the metrics does so with an explicit timestamp, but changes the value between scrapes without updating the timestamp. This is relatively unlikely unless it's something very specialized.
1.1. Or it actually exposes the same metric twice within one /metrics response – but the official client libraries actively prevent that.
2. (More likely) after all relabeling, you end up with metrics from multiple targets without any distinguishing labels between them.
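For case 2, here is a sketch of what Prometheus ends up seeing (metric name is hypothetical): two targets that, after relabeling, share the identical label set, so their samples land on one series:

```
# Both coalesced targets report this exact series in the same interval:
http_requests_total{job="my-proxy",instance="userprom.xxx:443"} 5
http_requests_total{job="my-proxy",instance="userprom.xxx:443"} 7
```

Prometheus then has two different values for the same series at the same timestamp, drops the extras, and logs the warning with num_dropped.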

Are you scraping through some kind of proxy? In general, Prometheus expects to discover and access each target individually, so that it can separately collect the data from all of them.

Look at the raw metrics endpoints that Prometheus would scrape – if they do not have duplicates or timestamps, it must be 2. Remove label drop or replacement rules that may accidentally coalesce multiple targets into having the same labels, especially if you are messing with the "instance" label. Looking at Prometheus' target page may also help in identifying targets that have the exact same label set.
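A quick way to do that first check is a shell pipeline over the raw metrics. This is only a sketch with inline sample data (in practice the input would be the scraped /metrics body, e.g. from `curl -sk https://userprom.xxx:443/metrics`), and it assumes label values contain no spaces so that the first field is the full series identifier:

```shell
# Sample metrics dump standing in for the real scrape output.
metrics=$(cat <<'EOF'
# HELP http_requests_total Total requests (sample data).
http_requests_total{backend="a"} 5
http_requests_total{backend="a"} 7
http_requests_total{backend="b"} 3
EOF
)

# Drop comments, keep only name+labels, and list series seen more than once.
dups=$(printf '%s\n' "$metrics" | grep -v '^#' | awk '{print $1}' | sort | uniq -d)
echo "$dups"   # prints http_requests_total{backend="a"}
```

If this prints anything, the endpoint itself is exposing duplicates (case 1.1); if it prints nothing, the duplication more likely comes from relabeling coalescing targets (case 2).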

/MR




bruno bourdolle

Nov 17, 2020, 9:07:00 AM
to Prometheus Users
hi,
I located the part of the config that generates the error, but I'm not sure I understand how to fix it. What's strange is that the same config on the dev environment works without these warnings.

- job_name: 'my-proxy'
  metrics_path: '/metrics'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  static_configs:
    - targets:
        - '${SCRAPE_CONFIGS_TARGET}'

On the target page, I see:

group="production" instance="userprom.xxx.intraxxx:443" job="my-proxy"

On dev there are no warnings; on prod there are a lot.

If I remove this block, prod runs without warnings and the other configs are fine; but if I add this one back, the warnings on prod return.

Any idea how to fix it, and why this happens? I'm a newbie.

Matthias Rampke

Nov 18, 2020, 4:00:12 AM
to bruno bourdolle, Prometheus Users
Is this proxying to more than one backend? And in dev, there is only one?

/MR

bruno bourdolle

Nov 18, 2020, 4:52:48 AM
to Prometheus Users
Maybe that's the explanation, but I'm not the owner of xxx-proxy... How do I add a distinct label/annotation to differentiate them?

Matthias Rampke

Nov 18, 2020, 7:13:29 AM
to bruno bourdolle, Prometheus Users
Actually, I may have misunderstood – do you want to scrape the proxy itself, or something behind it?

In the former case, and assuming there is no further load balancing involved, you can use a service discovery mechanism that is appropriate in your environment. In a static config you directly provide the target(s). With service discovery, you provide Prometheus with a way to resolve each individual instance and automatically generate the targets. Unfortunately I cannot go into more detail without knowing the specifics of your environment.
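As a sketch of what Matthias describes — assuming the instances to scrape are pods in the same cluster; the job name and the pod label used in the keep rule are hypothetical — Kubernetes service discovery gives Prometheus one target per pod, so each one keeps a distinct instance label and the duplicate-sample warning cannot arise from coalesced targets:

```yaml
- job_name: 'my-proxy-backends'   # hypothetical job name
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: pod                   # discover one target per pod
  relabel_configs:
    # Keep only the relevant pods (the label name is an assumption;
    # adjust to whatever labels the pods actually carry).
    - source_labels: [__meta_kubernetes_pod_label_app]
      regex: my-proxy
      action: keep
```

The static_configs block from earlier in the thread would be replaced by this, so Prometheus resolves the individual pods itself instead of scraping a single front-end address.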

/MR
