I was configuring Prometheus to scrape Kong and expected the target to show as UP on /targets, but it shows DOWN with the error:
context deadline exceeded. I can reach the /metrics endpoint directly over HTTP, and it takes about 5-10 seconds to load.
I tried setting a scrape_timeout in the global section without any positive effect, and I also put a scrape_timeout under the job_name section. The AWS security group is correctly defined.
Any ideas? I've opened a GitHub issue, but the response was to ask here...
Environment
System information:
Linux 4.4.0-1088-aws x86_64
Prometheus version:
prometheus, version 2.1.0+ds (branch: debian/sid, revision: 2.1.0+ds-1)
build user: pkg-go-ma...@lists.alioth.debian.org
build date: 20180121-21:30:42
go version: go1.9.2
Prometheus configuration file:
global:
  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100', 'my_public_ip_4:9100', 'my_public_ip_3:9100']

  - job_name: 'Kong_test'
    metrics_path: /metrics
    static_configs:
      - targets: ['my_public_ip:8001']

  - job_name: 'Kong_test_2'
    metrics_path: /metrics
    static_configs:
      - targets: ['my_public_ip_2:8001']
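For reference, this is roughly what a per-job scrape_timeout for the Kong jobs would look like (the 14s value is only an assumed example; Prometheus's default scrape_timeout is 10s, and the timeout must not exceed the job's scrape_interval):

  - job_name: 'Kong_test'
    metrics_path: /metrics
    scrape_timeout: 14s   # assumed example value; must be <= scrape_interval (15s here)
    static_configs:
      - targets: ['my_public_ip:8001']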
> Still sounds like you are hitting the timeout.
> Can you query scrape_duration_seconds for this job? I suspect it will be identical to the scrape_timeout.
> Which scrape_timeout did you try? Was the reload successful?
> Kind regards
> Christian
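For illustration, the scrape duration Christian asks about could be checked with a PromQL query like the following (the job label value is taken from the configuration above):

  scrape_duration_seconds{job="Kong_test"}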