You can leave out "for: 5s" since you're only scraping and evaluating rules every 60s.
If you don't want an immediate alert on a single probe failure (such as a single dropped packet), then set "for: 1m" or "for: 2m" as required. The alert will then only fire if the condition is continuously present for that whole duration; see the rule sketch below.
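As a rough sketch only (the group name, alert name, labels and annotations are placeholders, not taken from your setup), a rule with a "for" clause might look like this:

```yaml
groups:
  - name: blackbox                      # placeholder group name
    rules:
      - alert: HostProbeFailed          # placeholder alert name
        # blackbox_exporter sets probe_success to 0 when a probe fails
        expr: probe_success == 0
        # the expression must stay true for a full minute before the alert fires
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Probe to {{ $labels.instance }} is failing"
```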
Alert manager config:
...
In your original post you said "but black box exporter detect the recover behavior after about 5mins". Are you talking about when you receive the "send_resolved" message from alertmanager?
There are various delays that can occur between Prometheus raising an alert and Alertmanager sending the notification, and likewise between Prometheus withdrawing an alert and Alertmanager sending the resolved message.
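Part of that latency on the Alertmanager side is governed by the timing settings on the route. A minimal illustration (the receiver name and the values here are only examples, not your config):

```yaml
route:
  receiver: ops-email    # placeholder receiver name
  group_wait: 30s        # wait before sending the first notification for a new alert group
  group_interval: 5m     # wait before notifying about new alerts added to an existing group
  repeat_interval: 4h    # wait before re-sending a notification that is still firing
```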
If I understand correctly: Prometheus doesn't explicitly "resolve" an alert, rather it just stops sending that alert. The alert comes with an "endsAt" time, which is explained here:
"3x the greater of the evaluation_interval or resend-delay values"
Since you have an evaluation_interval of 60s, I believe this means there will be at least a 3 minute delay (3 × 60s = 180s) between an alert ceasing to fire and the resolved message being sent.
See also:
# ResolveTimeout is the default value used by alertmanager if the alert does
# not include EndsAt, after this time passes it can declare the alert as resolved if it has not been updated.
# This has no impact on alerts from Prometheus, as they always include EndsAt.
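That setting is resolve_timeout in the global section of alertmanager.yml (5m is its default; the value below is only illustrative), and as the comment says it is ignored for alerts coming from Prometheus:

```yaml
global:
  # Only applies to alerts that arrive without an EndsAt timestamp.
  # Alerts sent by Prometheus always include EndsAt, so this setting does not affect them.
  resolve_timeout: 5m
```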
Really I think you need to separate your problem into two parts:
1. Making sure that blackbox_exporter is probing ICMP and SSH successfully. Check that "probe_success" (the metric blackbox_exporter exposes) goes to 0 or 1 at the correct times: graph the history of the probe_success metric in Prometheus to confirm this (see the query sketch after this list). Ignore alerts for now.
2. Then look at your alerting configuration to see exactly when it sends messages.
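For step 1, something like the following in the Prometheus expression browser will show exactly when each probe flipped between 1 and 0; the job and alertname label values are assumptions (substitute whatever your blackbox scrape job and alert rule are actually called). The built-in ALERTS series then helps with step 2 by showing when Prometheus itself considered the alert pending or firing:

```
# Step 1: raw probe results over time (1 = success, 0 = failure)
probe_success{job="blackbox"}

# Step 2: when Prometheus had the alert in "pending" or "firing" state
ALERTS{alertname="HostProbeFailed"}
```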