Hi,
the simplest adjustment seems to be the following:
Change your first rule so that it stops applying to toto2 and change the
second one so that it only applies to toto2, e.g.:
disk_used_percent{instance!="toto2:port"} > 80
disk_used_percent{instance="toto2:port"} > 70
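
For completeness, your two rule files could then look roughly like this
(only a sketch -- I'm assuming the usual groups/rules wrapper around the
snippets you posted and keeping your "toto2:port" placeholder):

rules.d/base.yml:

groups:
- name: base
  rules:
  - alert: disk_space
    # every instance except toto2 keeps the 80% threshold
    expr: disk_used_percent{instance!="toto2:port"} > 80
    annotations:
      description: Disk space {{ $labels.path }} on {{ $labels.host }} is used more than 80%
      value: '{{ humanize $value }}%'

rules.d/disk.yml:

groups:
- name: disk
  rules:
  - alert: disk_space
    # toto2 only, with the lower 70% threshold
    expr: disk_used_percent{instance="toto2:port"} > 70
    annotations:
      description: Disk space {{ $labels.path }} on {{ $labels.host }} is used more than 70%
      value: '{{ humanize $value }}%'
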
While this may solve your immediate problem, I don't recommend going
with this solution as it doesn't scale.
Instead, I suggest using this pattern to set machine-specific thresholds:
https://www.robustperception.io/using-time-series-as-alert-thresholds
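
The gist of that pattern, as a rough sketch only (the file name
rules.d/thresholds.yml and the metric names are placeholders I made up):
you record one threshold series per instance and let a single alert
compare against it.

rules.d/thresholds.yml:

groups:
- name: disk_thresholds
  rules:
  # per-instance override; "toto2:port" stands in for your real target
  - record: disk_used_percent_threshold_override
    labels:
      instance: "toto2:port"
    expr: vector(70)
  # one threshold series per production target: the override where it
  # exists, otherwise a default of 80
  - record: disk_used_percent_threshold
    expr: |
      disk_used_percent_threshold_override
      or on(instance)
      (up{job="production"} * 0 + 80)
- name: disk_alerts
  rules:
  # a single alert for all machines; the threshold comes from the
  # recorded series instead of being hardcoded
  - alert: disk_space
    expr: |
      disk_used_percent
      > on(instance) group_left()
      disk_used_percent_threshold
    annotations:
      description: Disk space {{ $labels.path }} on {{ $labels.host }} is above its threshold
      value: '{{ humanize $value }}%'

Adding another machine-specific threshold then only means adding one more
override rule instead of touching every alert.
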
Another side note: I see that you are using a "host" label. You might
want to look into dropping that and changing the instance label to one
of your liking instead, for example using relabeling or using labels in
your service discovery configs:
https://www.robustperception.io/controlling-the-instance-label
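
For example (just a sketch against your static_configs; the env label and
the regex are my own placeholders):

scrape_configs:
- job_name: production
  static_configs:
  - targets:
    - toto1:port          # your placeholder target
    labels:
      env: production     # example of an extra label attached at scrape time
  relabel_configs:
  # drop the port so the instance label becomes just the host name,
  # which should make the separate "host" label unnecessary
  - source_labels: [__address__]
    regex: '([^:]+):.*'
    target_label: instance
    replacement: '$1'

Your alert annotations could then use {{ $labels.instance }} instead of
{{ $labels.host }}.
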
Kind regards,
Christian
On 6/9/20 5:42 PM, Frederic Arnould wrote:
> I have one rule that I would like to apply to one server, and another
> rule that I would like to apply to another server.
>
> Example,
>
> My actual prometheus.yml
>
> alertmanagers:
> - static_configs:
> - targets:
> - monitor_alertmanager:9093
> global:
> evaluation_interval: 60s
> scrape_interval: 60s
> rule_files:
> - rules.d/*.yml
> scrape_configs:
> - job_name: prometheus
> static_configs:
> - targets:
> - localhost:9090
> - job_name: production
> static_configs:
> - targets:
> toto1:port
>
> *rules.d/base.yml*
>
> - alert: disk_space
> expr: disk_used_percent > 80
> annotations:
> description: Disk space {{ $labels.path }} on {{ $labels.host }}
> is used more than 80%
> value: '{{ humanize $value }}%'
>
> but I must add a different rule for another specific server
>
> My second file *rules.d/disk.yml*
>
> - alert: disk_space
> expr: disk_used_percent > 70
> annotations:
> description: Disk space {{ $labels.path }} on {{ $labels.host }}
> is used more than 70%
> value: '{{ humanize $value }}%'
>
>
> ~~~~~~
>
> but the second rule, the one for toto2, should not be applied to toto1
>
> My new prometheus.yml
>
> alertmanagers:
> - static_configs:
> - targets:
> - monitor_alertmanager:9093
> global:
> evaluation_interval: 60s
> scrape_interval: 60s
> rule_files:
> *- rules.d/base.yml*
> scrape_configs:
> - job_name: prometheus
> static_configs:
> - targets:
> - localhost:9090
> - job_name: production
> static_configs:
> - targets:
> *toto1:port*
> global:
> evaluation_interval: 60s
> scrape_interval: 60s
> rule_files:
> *- rules.d/disk.yml*
> scrape_configs:
> - job_name: prometheus
> static_configs:
> - targets:
> - localhost:9090
> - job_name: production
> static_configs:
> - targets:
> *toto2:port*
>
> Do you understand?
>
> On Tuesday, June 9, 2020 at 16:56:19 UTC+2, Christian Hoffmann wrote:
>
> Hi,
>
> On 6/9/20 4:32 PM, Frederic Arnould wrote:
> > I would like to do different rules for different targets
> Are you talking about alert and/or recording rules in rule_files?
> These rules are always global.
>
> However, each rule can be restricted to specific series only. Such
> restrictions can be made upon arbitrary labels, including the job and
> instance labels. These might enable you to do exactly what you want.
>
> Maybe this helps already. If not, we might be able to provide more
> specific guidance if you share some more details on what you are trying
> to do.
>
> Kind regards,
> Christian
>