I want to use Prometheus + Alertmanager as a health manager. What is the lowest value I can use for the scrape interval (I hope it can be configured per rule), so that an alert is sent as soon as it fires? I need near-realtime alerting. Is this possible with Prometheus + Alertmanager?
I have a sample config that works now, but is it possible to use 1s or something similar, so that Prometheus sends the alert as soon as the metric is scraped?
serverFiles:
  alerts:
    groups:
      - name: Instances
        rules:
          - alert: InstanceDown
            expr: up == 0
            for: 10s
            labels:
              severity: page
            annotations:
              description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 10 seconds.'
              summary: 'Instance {{ $labels.instance }} down'

alertmanagerFiles:
  alertmanager.yml:
    route:
      receiver: default-receiver
      group_wait: 5s
      group_interval: 10s
    receivers:
      - name: default-receiver
        webhook_configs:
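
To be concrete, what I have in mind on the Prometheus side is something like the fragment below (values are hypothetical, and I don't know whether 1s is actually allowed or advisable; the job name and target are just placeholders):

    global:
      scrape_interval: 1s       # how often targets are scraped (assumed value)
      evaluation_interval: 1s   # how often alerting rules are evaluated (assumed value)

    scrape_configs:
      - job_name: 'critical-service'   # placeholder job name
        scrape_interval: 1s            # per-job override of the global interval
        static_configs:
          - targets: ['localhost:9100']  # placeholder target

My understanding is that `scrape_interval` can be overridden per scrape job, and `evaluation_interval` controls how often rules are checked, but I'd like to confirm whether per-rule (or per rule group) intervals are supported and what the practical lower bound is.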