metric_relabel_configs not dropping metrics


Jihui Yang

Feb 20, 2023, 5:21:33 AM
to Prometheus Users
I'm using prometheus-operator's additionalScrapeConfigs to add metric drop rules. Example:

```
- job_name: drop_response_metrics 
  honor_timestamps: true 
  scrape_interval: 30s 
  scrape_timeout: 10s 
  metrics_path: /metrics 
  scheme: http 
  follow_redirects: true 
  metric_relabel_configs: 
  - source_labels: [__name__] 
    separator: ; 
    regex: (response_total|response_latency_ms_count|response_latency_ms_sum) 
    replacement: $1 
    action: drop
```

The config is successfully loaded into Prometheus and I can view it at the `/config` endpoint, but for some reason I can still see the metrics. Can you let me know what to do?

Stuart Clark

Feb 20, 2023, 10:14:23 AM
to Jihui Yang, Prometheus Users

Is that the full config? I'm not seeing a service discovery section (e.g. Kubernetes or file-based) to tell Prometheus where to scrape from.
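
For reference, a scrape job normally pairs its relabelling rules with a service discovery (or static target) block, roughly like this; the `endpoints` role and the example target below are only placeholders:

```
  kubernetes_sd_configs:
  - role: endpoints
  # or file/static based, e.g.:
  # static_configs:
  # - targets: ['example-host:9100']
```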

-- 
Stuart Clark

Jihui Yang

Feb 20, 2023, 2:10:15 PM
to Prometheus Users
Hi, so I added this section to match all namespaces:
```
kubernetes_sd_configs:
- role: endpoints
  kubeconfig_file: ""
  follow_redirects: true
  namespaces:
    names:
    - example1
    - example2
    - example3
```
as well as 
```
authorization:
  type: Bearer
  credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```
I turned on debug logging, and I'm getting:
```
ts=2023-02-20T19:08:30.169Z caller=scrape.go:1292 level=debug component="scrape manager" scrape_pool=drop_response_metrics target=http://10.10.188.252:25672/metrics msg="Scrape failed" err="Get \"http://10.10.188.252:25672/metrics\": EOF"
ts=2023-02-20T19:08:30.465Z caller=scrape.go:1292 level=debug component="scrape manager" scrape_pool=drop_response_metrics target=http://10.10.152.96:10043/metrics msg="Scrape failed" err="server returned HTTP status 500 Internal Server Error"
ts=2023-02-20T19:08:30.510Z caller=scrape.go:1292 level=debug component="scrape manager" scrape_pool=drop_response_metrics target=http://10.10.241.97:9100/metrics msg="Scrape failed" err="server returned HTTP status 400 Bad Request"
```

The metrics are still not being dropped.

Stuart Clark

Feb 20, 2023, 5:16:38 PM
to Jihui Yang, Prometheus Users
I'm not really following exactly what your config is.

Those errors suggest that at least some of the scrapes are failing.

When you say "the metrics are still not dropped", are these metrics
being scraped by this job?

--
Stuart Clark

Jihui Yang

Feb 20, 2023, 5:33:25 PM
to Stuart Clark, Prometheus Users
I think these metrics are being scraped from another job. What I want is to drop any scraped metrics whose names match the regex I provided.

Stuart Clark

Feb 20, 2023, 6:07:25 PM
to Jihui Yang, Prometheus Users
On 20/02/2023 22:33, Jihui Yang wrote:
> I think these metrics are being scraped from another job. What I want
> is to drop any scraped metrics whose names match the regex I provided.
Then you need to add the relabel config to that other job.
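
For illustration, the drop rule would sit under whichever job actually scrapes those series, roughly like this (the job name and discovery role below are placeholders):

```
- job_name: example_existing_job    # placeholder for the job that scrapes these series
  kubernetes_sd_configs:
  - role: endpoints
  metric_relabel_configs:
  - source_labels: [__name__]
    regex: (response_total|response_latency_ms_count|response_latency_ms_sum)
    action: drop
```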

--
Stuart Clark

Jihui Yang

Feb 20, 2023, 6:14:58 PM
to Prometheus Users
I'm using prometheus-operator. It only allows loading additionalScrapeConfigs, which are appended to the end of the config file. The other scrape jobs were added as part of installing prometheus-operator. I'm not sure I can change those.

Stuart Clark

Feb 20, 2023, 6:21:48 PM
to Jihui Yang, Prometheus Users
On 20/02/2023 23:14, Jihui Yang wrote:
> I'm using prometheus-operator. It only allows loading additionalScrapeConfigs, which are appended to the end of the config file. The other scrape jobs were added as part of installing prometheus-operator. I'm not sure I can change those.

The other jobs are probably from PodMonitor & ServiceMonitor objects, so you'd need to adjust those.
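
As a rough sketch, such a job is generated from an object like the one below (name, namespace, labels and port are made-up placeholders); per-endpoint relabelling rules go under `metricRelabelings`:

```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # placeholder
  namespace: example1        # placeholder
spec:
  selector:
    matchLabels:
      app: example-app       # placeholder
  endpoints:
  - port: metrics            # placeholder port name
```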

-- 
Stuart Clark

Jihui Yang

Feb 21, 2023, 12:29:01 PM
to Prometheus Users
I didn't find a way to adjust those. If I append scrape config jobs to the end of the config file, they should be able to overwrite the existing jobs, right?

Stuart Clark

Feb 21, 2023, 12:51:19 PM
to Jihui Yang, Prometheus Users
On 21/02/2023 17:29, Jihui Yang wrote:
> I didn't find a way to adjust those. If I append scrape config jobs to
> the end of the config file, they should be able to overwrite the
> existing jobs, right?

No. Job configurations are self-contained, so metrics scraped by a
particular job only have that job's relabelling rules applied. The
only way to set relabelling rules for a job is by editing that job's
config. For the Prometheus Operator this is done by adjusting the
PodMonitor/ServiceMonitor objects.

--
Stuart Clark

Jihui Yang

Feb 21, 2023, 3:40:59 PM
to Prometheus Users
Thanks, I found the place to make the changes! Basically, for each `kind: ServiceMonitor`, add `metricRelabelings` to drop the metrics. Appreciate the help!
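
The equivalent of the original drop rule on a ServiceMonitor endpoint would look roughly like this (the port name is a placeholder; note the camelCase field names used by the Operator CRD):

```
spec:
  endpoints:
  - port: metrics                   # placeholder port name
    metricRelabelings:
    - sourceLabels: [__name__]
      regex: (response_total|response_latency_ms_count|response_latency_ms_sum)
      action: drop
```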