Metrics not bound to an exporter instance


Paul van der Linden

Feb 27, 2020, 8:33:16 AM
to Prometheus Users
I currently have a pod running in Kubernetes exporting some metrics to Prometheus. If we have alerts on them, they get retriggered (resolved and then fired again shortly after each other) every time I update the software, because the new pod has a different name/IP. I came to the conclusion that this is caused by the unique pod and instance labels on these metrics. I added a config to drop these labels, and while that solves the issue, it seems to cause the error "persist head block: write compaction: add series: out-of-order series added with label set". What is the correct way to solve this?
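For illustration, a minimal (hypothetical) alert rule on such a metric might look like this; the metric name and threshold are made up:

  groups:
  - name: example
    rules:
    - alert: HighErrorRate
      # my_app_errors_total is a placeholder metric name
      expr: rate(my_app_errors_total[5m]) > 5
      for: 5m

Because the firing alert inherits the pod and instance labels of the series, a deploy that replaces the pod changes the label set: the old alert resolves and a new one fires.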

Paul van der Linden

Feb 27, 2020, 8:37:31 AM
to Prometheus Users
The config probably causing the issue:
  metric_relabel_configs:
  - separator: ;
    regex: .*(pod)|(instance).*
    replacement: $1
    action: labeldrop


Julien Pivotto

Feb 27, 2020, 8:43:05 AM
to Paul van der Linden, Prometheus Users
Is the label set in the error empty? Are you exposing metrics with timestamps?

The correct way to deal with this would probably be to keep ingesting the
instance and pod labels and improve your alert rule to ignore them
instead.

Thanks

--
(o- Julien Pivotto
//\ Open-Source Consultant
V_/_ Inuits - https://www.inuits.eu

Paul van der Linden

Feb 27, 2020, 8:48:03 AM
to Prometheus Users
The label set in the error is empty. There are still a bunch of labels on the metric though. The metrics don't have timestamps; they are just generated every time Prometheus scrapes them.

How do I improve the alert rule to ignore these? Looking at the docs, is the only way to wrap these in two label_replace calls, or otherwise to sum and specify every relevant label?

Julien Pivotto

Feb 27, 2020, 8:51:31 AM
to Paul van der Linden, Prometheus Users
On 27 Feb 05:48, Paul van der Linden wrote:
> The label set in the error is empty. There are still a bunch of labels on
> the metric though. The metrics don't have timestamps, just generated
> everytime prometheus scrapes them.
>
> How do I improve the alert rule to ignore these? Looking at the docs, is
> the only way to wrap these in 2 label_replace, or otherwise sum and specify
> every relevant label?

You can use `without(pod, instance)` as well.
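
For example, with the hypothetical rule sketched in the first message, the expression could aggregate those labels away:

    - alert: HighErrorRate
      # aggregating without(pod, instance) keeps the alert's label set stable across pod restarts
      expr: sum without(pod, instance) (rate(my_app_errors_total[5m])) > 5
      for: 5m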

Paul van der Linden

Feb 27, 2020, 9:14:35 AM
to Prometheus Users
Thanks, I completely missed that option. The error doesn't disappear though; does that mean it keeps happening until the metrics ingested with the labeldrop get deleted because of retention?

Julien Pivotto

Feb 27, 2020, 9:25:31 AM
to Paul van der Linden, Prometheus Users
On 27 Feb 06:14, Paul van der Linden wrote:
> Thanks, completely missed that option. The error doesn't disappear though,
> does it mean this error keeps happening until the ingested metrics with the
> labelsdrop get deleted because of retention?

I don't know what the cause of the original issue is. It does not sound
like the relabel config you have shown, because there would still be a
'job' label.

Paul van der Linden

Feb 27, 2020, 9:48:48 AM
to Prometheus Users
There are indeed multiple labels on those metrics. How can I figure out what is causing this? Looking at the git changes for the deployment scripts, the only thing that changed was this metrics job. I still get a continuous stream of these in my logs, and they started around the time I added the labeldrop:
level=error ts=2020-02-27T14:38:24.235Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=error ts=2020-02-27T14:39:26.332Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=error ts=2020-02-27T14:40:26.960Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=warn ts=2020-02-27T14:40:36.044Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 95336324 (95337598)"
level=error ts=2020-02-27T14:41:28.247Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=warn ts=2020-02-27T14:41:34.021Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 95336132 (95337864)"
level=warn ts=2020-02-27T14:42:18.027Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 95336301 (95338065)"
level=error ts=2020-02-27T14:42:28.877Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=error ts=2020-02-27T14:43:30.582Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=error ts=2020-02-27T14:44:32.665Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=error ts=2020-02-27T14:45:33.427Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=warn ts=2020-02-27T14:46:32.003Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 95338072 (95339190)"
level=error ts=2020-02-27T14:46:34.061Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""
level=error ts=2020-02-27T14:47:35.562Z caller=db.go:617 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{}\""

Julien Pivotto

Feb 27, 2020, 9:52:41 AM
to Paul van der Linden, Prometheus Users
Can you tell me what your Prometheus version is?

Paul van der Linden

Feb 27, 2020, 10:07:05 AM
to Prometheus Users
I updated today to 2.16.0; before that it was 2.9.0 (if I'm not mistaken).

Julien Pivotto

Feb 27, 2020, 5:19:28 PM
to Paul van der Linden, Prometheus Users
On 27 Feb 07:07, Paul van der Linden wrote:
> I have updated today to 2.16.0, before that it was 2.9.0 (if I'm not
> mistaken).
>


It looks like you have ingested series with an empty label set, probably
due to a wrong config.

Please see
https://github.com/prometheus/prometheus/issues/6891