Azure Discovery & YML Config Error


russel...@gmail.com

Aug 22, 2017, 2:49:12 PM
to Prometheus Users
Hello

I have just started testing Prometheus to see if it is the right fit for us.

I am trying to add Azure discovery to the config file. I have set up an Azure app in our Azure AD to get the client ID, secret, etc. However, I am obviously making a silly mistake somewhere, because when I try to run Prometheus I get an error about the config file. The output is as follows.


C:\prometheus>prometheus.exe
INFO[0000] Starting prometheus (version=2.0.0-beta.2, branch=HEAD, revision=a52f082939a566d5269671e98be06fc6bdf61d09)  source="main.go:204"
INFO[0000] Build context (go=go1.8.3, user=REMOVEDFROMPUBLICPOST, date=20170818-08:25:08)  source="main.go:205"
INFO[0000] Host details (windows)                        source="main.go:206"
INFO[0000] Starting tsdb                                 source="main.go:218"
INFO[0000] tsdb started                                  source="main.go:224"
INFO[0000] Loading configuration file prometheus.yml     source="main.go:357"
ERRO[0000] Error loading config: couldn't load configuration (--config.file=prometheus.yml): unknown fields in config: static_configs, azure_sd_configs  source="main.go:273"



My config file is as follows (I removed the security keys for this public post):


# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

static_configs:
      - targets: ['localhost:9182']

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'Azure Production'

azure_sd_configs:
   # List of Azure service discovery configurations.
    subscription_id: '0REMOVEDFROMPUBLICPOSTc'
    tenant_id: '2REMOVEDFROMPUBLICPOST4'
    client_id: '4REMOVEDFROMPUBLICPOST1'
    client_secret: 'nREMOVEDFROMPUBLICPOST='    
      


Any advice on what I am missing would be really appreciated.

From

Russell

Tobias Schmidt

Aug 22, 2017, 3:03:25 PM
to russel...@gmail.com, Prometheus Users
scrape_configs should only appear once. It should contain a list of job definitions; each job is required to have a job_name attribute and at least one of the *_configs sections (so either static_configs or azure_sd_configs).

- indent static_configs to the same level as job_name: 'prometheus'
- remove the second scrape_configs: line
- indent azure_sd_configs to the same level as job_name: 'Azure Production'

While it uses different SD mechanisms, this example file should give you an idea of the indentation: https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L18
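Applying those three fixes to the config posted above gives roughly the following scrape section (a sketch only; the redacted IDs are kept as placeholders exactly as in the original post):

```yaml
scrape_configs:
  # Each "- job_name:" entry is one item in the scrape_configs list.
  - job_name: 'prometheus'
    # static_configs is indented to the same level as job_name.
    static_configs:
      - targets: ['localhost:9182']

  - job_name: 'Azure Production'
    # azure_sd_configs belongs inside the job, not at the top level.
    azure_sd_configs:
      - subscription_id: '0REMOVEDFROMPUBLICPOSTc'
        tenant_id: '2REMOVEDFROMPUBLICPOST4'
        client_id: '4REMOVEDFROMPUBLICPOST1'
        client_secret: 'nREMOVEDFROMPUBLICPOST='
```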


russel...@gmail.com

Aug 22, 2017, 4:25:44 PM
to Prometheus Users, russel...@gmail.com
Hello

Thank you, that worked, and it all makes sense.

Now to work out why it thinks my credentials are wrong, as I am getting "failed with 401 Unauthorized: StatusCode=401" on the Azure OAuth login URL.

Once again thank you.

From

Russell

russel...@gmail.com

Aug 29, 2017, 8:40:29 AM
to Prometheus Users, russel...@gmail.com
Hello

I am hoping I can ask for a bit more advice. I now have a working config file, but I get an error on the Azure part at startup. I am guessing it comes from incorrect details.

Unable to refresh during Azure discovery: could not list virtual machines: autorest#WithErrorUnlessStatusCode: POST https://login.microsoftonline.com/2REMOVEDFROMPUBLICPOST4/oauth2/token?api-version=1.0 failed with 401 Unauthorized: StatusCode=401  source="azure.go:96"

My config file:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9182']

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'Azure German Production'

    azure_sd_configs:
   # List of Azure service discovery configurations.
      - subscription_id: '0REMOVEDFROMPUBLICPOSTc'
      - tenant_id: '2REMOVEDFROMPUBLICPOST4'
      - client_id: '4REMOVEDFROMPUBLICPOST1'
      - client_secret: 'nREMOVEDFROMPUBLICPOST='    
      

- I tried subscription_id, tenant_id, client_id and client_secret with and without dashes
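One thing worth noting about the dashes: in YAML each leading "-" starts a new list item, so the four dashed lines in the config above define four separate, incomplete azure_sd_configs entries rather than one entry with four keys. A single complete entry would look like this (a sketch, with the IDs redacted as in the original post):

```yaml
azure_sd_configs:
  # One list item ("-") containing all four credential keys;
  # the subsequent keys are aligned under the first, with no extra dashes.
  - subscription_id: '0REMOVEDFROMPUBLICPOSTc'
    tenant_id: '2REMOVEDFROMPUBLICPOST4'
    client_id: '4REMOVEDFROMPUBLICPOST1'
    client_secret: 'nREMOVEDFROMPUBLICPOST='
```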

I got the values from the numbered locations in the screenshot below.

subscription_id = came from the portal; not visible in the screenshot, but taken from the subscription overview page.
tenant_id = 1
client_id = 2
client_secret = 3




Hopefully, this is something obvious I am missing?

From

Russell
