[Alertmanager] match_re alerts with any label value


Anton Tokarev

Dec 30, 2020, 10:30:38 AM
to Prometheus Users
Hey! 0/

I have a Prometheus source with a huge number of alerts, but I'm only interested in some of them.

I wanted to separate the alerts I'm interested in via 'match_re' on a certain label.

The thing is that the values of this label vary, so I can't just set 'my_label: value1' to single out the alerts that carry the 'my_label' label.

I've tried the regex 'my_label: .+', but it doesn't seem to work.

```
...
route:
  group_by:
  - '...'
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: 'blackhole'
  routes:
  - match_re:
      opsgenie_team_id: '.+'
    receiver: 'opsgenie'
receivers:
- name: 'blackhole'
- name: 'opsgenie'
...
```
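
For reference, here is a sketch of how this tree gets evaluated; the receiver names come from the config above, and the annotations are my reading of the routing behaviour rather than anything confirmed in this thread: alerts are tried against the child routes in order, the first match wins because `continue` defaults to false, and anything unmatched falls back to the top-level receiver. The `match_re` values are anchored regular expressions.

```
# Sketch of the evaluation order for the tree above; the comments are my
# reading of the routing behaviour, not something stated in this thread.
route:
  receiver: 'blackhole'          # catch-all for anything no child route claims
  routes:
  - match_re:
      opsgenie_team_id: '.+'     # anchored regex: label present and non-empty
    receiver: 'opsgenie'         # continue defaults to false, so a match stops here
```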

Could you guys please help?

I have no idea how to split the alert flow and get only the alerts that have the 'my_label' label (regardless of its value) :(

Thanks!

Bjoern Rabenstein

Jan 7, 2021, 10:57:02 AM
to Anton Tokarev, Prometheus Users
On 30.12.20 07:30, Anton Tokarev wrote:
>
> I've tried the regex 'my_label: .+', but it doesn't seem to work.
>
> ```
> ...
> route:
>   group_by:
>   - '...'
>   group_wait: 30s
>   group_interval: 5m
>   repeat_interval: 12h
>   receiver: 'blackhole'
>   routes:
>   - match_re:
>       opsgenie_team_id: '.+'
>     receiver: 'opsgenie'
> receivers:
> - name: 'blackhole'
> - name: 'opsgenie'
> ...
> ```
>
> Could you guys please help?

At first glance, I'd say it should work.

Could you describe more precisely what you are seeing? Like, show us
an example alert that goes down a route that you didn't expect?

--
Björn Rabenstein
[PGP-ID] 0x851C3DA17D748D03
[email] bjo...@rabenste.in

Anton Tokarev

Feb 9, 2021, 5:16:21 PM
to Prometheus Users
Hey, Björn! 0/

I'll try to describe the issue.

I have a bunch of K8s clusters that send alerts to my Alertmanager. Every cluster has its own Prometheus in it. The infrastructure guys share these Prometheuses with the devs and send alerts to two Alertmanagers simultaneously (weird, I know, but there is no chance to set up a separate Prometheus instance for the devs in a cluster).

The thing is that I want to separate the devs' alerts from the infrastructure ones and send the infrastructure alerts to "/dev/null", while the devs' alerts should be routed to a certain receiver.

For this, all the devs' alerts are marked with a special label: team_name="team1|team2|teamN".

The number of dev teams will grow without any control, so I can't define static regexp rules for `match_re`.

And here's the main issue - a `match_re` rule like `team_name: '.*'` just doesn't work.

Here's my Alertmanager config:

```
global:
  ...
route:
  group_by:
  - '...'
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: blackhole
  routes:
  - match_re:
      team_name: '.*'
    receiver: pageduty
receivers:
- name: blackhole
- name: pageduty
  ...
```
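
One detail worth noting about the `'.*'` matcher (this is my reading of the matcher semantics, not something stated in the thread): a label that is absent from an alert behaves like an empty value when matching, and `.*` matches the empty string, so a `team_name: '.*'` route would also claim infrastructure alerts that have no `team_name` label at all. The usual way to say "label present and non-empty" is `'.+'`:

```
routes:
- match_re:
    team_name: '.*'    # also matches alerts without a team_name label,
  receiver: pageduty   # since a missing label behaves like an empty value
# versus
- match_re:
    team_name: '.+'    # label must be present and non-empty
  receiver: pageduty
```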

Thanks in advance!

Anton Tokarev

Feb 10, 2021, 6:31:14 AM
to Prometheus Users
Almost forgot, the problem behaves in a strange way.

There are these statements in the logs:
...
level=error ts=2021-02-10T10:47:20.270Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="cancelling notify retry for \"pageduty\" due to unrecoverable error: unexpected status code 422: {\"message\":\"Request body is not processable. Please check the errors.\",\"errors\":{\"message\":\"Message can not be empty.\"},\"took\":0.001,\"requestId\":\"ee7f2788-b567-4134-bcde-96080025b392\"}" context_err=null
level=error ts=2021-02-10T10:47:20.271Z caller=dispatch.go:301 component=dispatcher msg="Notify for alerts failed" num_alerts=246 err="cancelling notify retry for \"pageduty\" due to unrecoverable error: unexpected status code 422: {\"message\":\"Request body is not processable. Please check the errors.\",\"errors\":{\"message\":\"Message can not be empty.\"},\"took\":0.001,\"requestId\":\"ee7f2788-b567-4134-bcde-96080025b392\"}"
...

PagerDuty answers that it received an empty message and can't process such a request.

This is damn weird. How can my config affect the message body and the whole payload of the alert?

P.S. I've already tried these regexps:
.*
.+
.*$
^(.*)$
^([a-zA-Z0-9-]*)&

and many more :(

It only works if I set 'pagerduty' as the default receiver and route the unwanted alerts via 'blackhole' configured with a looooong 'match_re' expression, but this is madness. I don't want to stalk every new infrastructure alert and add it to the expression :(
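
To make that workaround concrete, it roughly amounts to the following tree; the alertname values are placeholders I made up, the real list isn't shown in the thread:

```
# Rough shape of the workaround described above; the alertname values are
# invented for illustration.
route:
  receiver: pagerduty                  # default: everything pages
  routes:
  - match_re:
      alertname: 'KubeNodeNotReady|NodeDiskPressure|TargetDown'
    receiver: blackhole                # has to grow with every new infra alert
```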

Bjoern Rabenstein

Feb 19, 2021, 1:31:26 PM
to Anton Tokarev, Prometheus Users
On 10.02.21 03:31, Anton Tokarev wrote:
>
> PagerDuty answers that it received an empty message and can't process
> such a request.
>
> This is damn weird. How can my config affect the message body and the
> whole payload of the alert?

The regexp you are using to route the alert should indeed not affect
the message body.

This looks more like you have misconfigured your receiver so that
notifications with an empty message body are the result.

Your config sometimes says "pagerduty" and sometimes
"pageduty". Perhaps the latter got configured without the proper
notification templates?
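
For what it's worth, the "Message can not be empty" wording in the quoted log reads like the error an Opsgenie-style endpoint returns when the notification's message field expands to nothing; that is an assumption on my part, since the actual receiver settings never appear in the thread. A minimal sketch of a receiver with the field spelled out explicitly (the API key and the template expression are placeholders, not values from the thread):

```
receivers:
- name: pageduty
  opsgenie_configs:
  - api_key: <your-api-key>            # placeholder
    # make sure this expands to something non-empty for every alert group
    message: '{{ .GroupLabels.alertname }}'
```

If the field was left at its default, Björn's suggestion is the natural suspect: a notification-templates setup in which the message template ends up expanding to an empty string.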