Prometheus server closes automatically within seconds when I start it


Dashrath yadav

Oct 4, 2020, 8:27:00 AM
to Prometheus Users
I am trying to show a .NET API's metrics in Grafana. For that I first export them into Prometheus and then into Grafana.
But when I edit the configuration in the yml file, add the host, and save, and then start the server again, it closes automatically.
I am not able to sort out the issue.
I am using the Windows platform.
Please help me resolve this issue.

Brian Candler

Oct 4, 2020, 11:14:35 AM
to Prometheus Users
Your problem statement is too vague.  See this document: http://www.catb.org/~esr/faqs/smart-questions.html#intro

All you have said is that prometheus doesn't start after you have edited prometheus.yml. If you want help, then you should provide at minimum:

1. The actual contents of the prometheus.yml file (exact copy-paste, not screenshot); and

2. The error message(s) which are given by prometheus when you start it.  You may need to start it at the command line, rather than as a service, to see what messages it outputs.  I don't use Windows, so I can't give you any advice about Windows service management.  If necessary, find a local Windows administrator to help you.
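For reference, one way to do this on Windows (assuming the Prometheus release zip was unpacked into a folder containing prometheus.exe and promtool.exe) is to open a command prompt in that folder and run:

    promtool.exe check config prometheus.yml
    prometheus.exe --config.file=prometheus.yml

promtool reports YAML and indentation errors without starting the server, and running prometheus.exe in the foreground prints its startup log directly to the console.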

Dashrath yadav

Oct 4, 2020, 11:59:53 AM
to Prometheus Users
Error: context deadline exceeded, can't detect host.
I increased the scrape timeout.
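For reference, scrape_timeout can be raised globally or per job, and it must not be larger than the scrape interval in effect for that job. A minimal sketch (the values here are only illustrative):

    scrape_configs:
      - job_name: 'product'
        scrape_interval: 60s
        scrape_timeout: 30s          # per-job override; must not exceed scrape_interval
        metrics_path: '/metrics-text'
        static_configs:
          - targets: ['localhost:4437']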

Dashrath yadav

Oct 4, 2020, 12:13:41 PM
to Prometheus Users

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

 - job_name:'product'
 static_configs:
 - targets: ['localhost:4437']
    metrics_path: '/metrics-text'

Brian Candler

Oct 4, 2020, 12:27:20 PM
to Prometheus Users
"Context deadline exceeded" would not stop prometheus from starting.  If the problem really is that the prometheus server is shutting down, then there will be some other error message logged.

The formatting of the config file you posted was lost, and in particular, the metrics_path is placed wrongly (it should align with job_name and static_configs).  Hence the "scrape_configs" section should look something like this:

  - job_name: 'prometheus'
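The rest of this example appears to have been cut off. Assuming the 'product' job name, port 4437 and the /metrics-text path from the earlier post, the corrected section would look roughly like this, with metrics_path at the same indentation level as job_name and static_configs:

    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']

      - job_name: 'product'
        metrics_path: '/metrics-text'
        static_configs:
          - targets: ['localhost:4437']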

Dashrath yadav

Oct 4, 2020, 1:03:40 PM
to Prometheus Users
I am able to start the server only when I make the config like this:
  static_configs:
    - targets: ['localhost:9090', 'localhost:4437']
but then the "context deadline exceeded" error comes for the target. When I do it like below, it shuts down as I mentioned.

I made the following changes as you said. Here is the config:

# my global config
global:
  scrape_interval:     50s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to  '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

     - job_name: 'Product'
      static_configs:
        - targets: ['localhost:'xxxx']
      metrics_path: '/metrics-text'

# i am adding the required spaces which are valid for yml files
# still not able to resolve the server shutdown error
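In this last config two things still stand out: the 'Product' job is nested under the first job's static_configs instead of starting at the same column as "- job_name: 'prometheus'", and its target contains a stray quote ('localhost:'xxxx'). A minimal sketch of that part, keeping xxxx as a placeholder since the port was masked in the post:

      - job_name: 'Product'
        metrics_path: '/metrics-text'
        static_configs:
          - targets: ['localhost:xxxx']   # replace xxxx with the actual port (4437 earlier in the thread)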