Getting 'Unauthorized: missing Authorization header...' using basic auth to scraped metrics


CD Truong

Aug 5, 2019, 12:24:00 PM
to Prometheus Users
Hi,

I can't seem to find more information about this unauthorized message.  If the prometheus.io/scrape annotation is set to 'false', the 'Unauthorized:...' messages stop.

The job in prometheus.yml uses basic auth and authenticates successfully when scraping the custom metrics.  Has anyone seen this message before?

template:
    metadata:
      annotations:
          prometheus.io/path: /metrics
          prometheus.io/port: "my_port"
          prometheus.io/scrape: "true"


prometheus.yml:
   - job_name: my_scraping_job
     metrics_path: /metrics
     scheme: https
     static_configs:
       - targets: ['some_target_endpoints']
     basic_auth:
       username: 'prometheus_username'
       password: 'my_password'

{"_source":"middleware/auth.go:56","message":"Unauthorized: missing Authorization header, path: /metrics","severity":"info","time":"2019-08-05T15:30:14.529188149Z","timestamp":"2019-08-05T15:30:14.52917795Z"}
{"_source":"middleware/auth.go:56","message":"Unauthorized: missing Authorization header, path: /metrics","severity":"info","time":"2019-08-05T15:32:14.528995514Z","timestamp":"2019-08-05T15:32:14.528985014Z"}
{"_source":"middleware/auth.go:56","message":"Unauthorized: missing Authorization header, path: /metrics","severity":"info","time":"2019-08-05T15:34:14.528733779Z","timestamp":"2019-08-05T15:34:14.528722379Z"}
{"_source":"middleware/auth.go:56","message":"Unauthorized: missing Authorization header, path: /metrics","severity":"info","time":"2019-08-05T15:36:14.529052837Z","timestamp":"2019-08-05T15:36:14.529002138Z"}
{"_source":"middleware/auth.go:56","message":"Unauthorized: missing Authorization header, path: /metrics","severity":"info","time":"2019-08-05T15:38:14.529140299Z","timestamp":"2019-08-05T15:38:14.5291299Z"}

Thanks,
DT.

Chris Marchbanks

Aug 5, 2019, 12:41:45 PM
to CD Truong, Prometheus Users
Judging by your description there is probably a second job, likely using Kubernetes service discovery, that is trying to scrape your service and not providing auth credentials. You can look at the targets page and see if there are any Unauthorized responses to confirm that is the case.
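As an aside, a service-discovery job can also carry credentials at the job level, though they would then be sent to every target that job discovers. A minimal sketch, assuming a kubernetes-pods style job (names are placeholders):

```yaml
- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  # basic_auth at the job level applies to every discovered target,
  # so this only makes sense if all endpoints share the credentials.
  basic_auth:
    username: prometheus_username
    password: my_password
```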
--
You received this message because you are subscribed to the Google Groups "Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to prometheus-use...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/prometheus-users/0ae5bddf-7c04-4bc5-a097-f45ea444b145%40googlegroups.com.

CD Truong

Aug 5, 2019, 12:57:23 PM
to Prometheus Users
Thanks, Chris, for responding.  I can see the kubernetes-pods job is also scraping the service.  There are also other endpoints in kubernetes-pods.  How do I set the auth credentials for only the intended service?

Chris Marchbanks

Aug 5, 2019, 1:22:06 PM
to CD Truong, Prometheus Users
I believe you will need a separate job in order to specify the auth credentials for only one service. It sounds like you already have a second job specifying the credentials, so just removing the prometheus.io/scrape annotation should work for you. When you tried removing that annotation earlier, were you still getting the expected metrics from your service?
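For reference, opting the pod out of annotation-driven discovery is a one-line change in the pod template (a sketch using the annotation names from earlier in the thread):

```yaml
template:
  metadata:
    annotations:
      # "false" tells the annotation-driven job to skip this pod;
      # the static job that carries basic_auth will still scrape it.
      prometheus.io/scrape: "false"
```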

Hope that helps!

Chris

CD Truong

Aug 5, 2019, 2:19:04 PM
to Prometheus Users
Yes.  I've already tried setting prometheus.io/scrape to false and was not getting metrics from the service afterward.

Chris Marchbanks

Aug 5, 2019, 3:06:42 PM
to CD Truong, Prometheus Users
What does the targets page say for the job that specifically adds the basic auth? In your example that job uses static_configs; is that actually the case, or are you using Kubernetes service discovery for that job as well?

CD Truong

Aug 5, 2019, 3:52:09 PM
to Prometheus Users
It is a static job.  The state is 'UP', Last Scrape is '1m36.178s ago', Scrape Duration is '43.42ms', and the Labels contain the instance endpoint and job name.

CD Truong

Aug 5, 2019, 4:56:34 PM
to Prometheus Users
Also, clicking the static job's endpoint URL on the targets page returns the error message below in the browser.

"error":{"message":"missing Authorization header, path: /metrics","code":401}

Chris Marchbanks

Aug 5, 2019, 7:36:07 PM
to CD Truong, Prometheus Users
If the job is up then you should be getting some metrics from that job. You could take a look at the following queries to get an idea of what is happening:
scrape_samples_scraped{job="<job-name>"}
scrape_samples_post_metric_relabeling{job="<job-name>"}

CD Truong

Aug 6, 2019, 1:42:00 PM
to Prometheus Users
The two queries above return data successfully.  I'm trying to make sense of why the static job's endpoint URL on the targets page is returning a 401.

CD Truong

Aug 8, 2019, 12:45:23 PM
to Prometheus Users
Is there a way to suppress the endpoint error below? Why is the endpoint valid and collecting metrics, yet at the same time logging this error to the container?
"error":{"message":"missing Authorization header, path: /metrics","code":401}

Chris Marchbanks

Aug 8, 2019, 11:10:56 PM
to CD Truong, Prometheus Users
I am pretty sure that if you get a 401 (or anything besides a 200) from a scrape, no metrics will be ingested from your service. There could still be some metrics, such as the ones I listed earlier, that are automatically generated by Prometheus. The only way you could be getting metrics from your service while also getting a 401 would be if you have two jobs scraping your service, one receiving a 401 and one receiving a 200. You can look through the /targets page to see if your pods are listed more than once.

Would you be able to post more of your configuration to help us understand what is happening? Also, are the queries I posted earlier returning more than 0?

Chris

CD Truong

Aug 9, 2019, 11:08:23 AM
to Prometheus Users
Yes.  The queries return a value of 45.  The configuration you mentioned, is that the prometheus.yml?

Chris Marchbanks

Aug 9, 2019, 11:21:17 AM
to CD Truong, Prometheus Users
Yes, the scrape configs for the generic pods job and your custom job from prometheus.yml would be helpful.
Message has been deleted
Message has been deleted

CD Truong

Aug 13, 2019, 5:22:51 PM
to Prometheus Users
I was able to locate the kubernetes job that was logging the 'Unauthorized:...' message.  Now I need to add an 'action' to drop/ignore the job endpoint "http://10.233.81.38:4001/metrics".  In the 'Before relabeling' labels, can I use one of the labels to drop the endpoint?

    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: kubernetes_pod_name

Chris Marchbanks

Aug 15, 2019, 2:03:18 PM
to CD Truong, Prometheus Users
Yes, you can use labels to drop specific pods. You can do this like:
- action: drop
  regex: <label-value>
  source_labels: [__meta_kubernetes_pod_label_<label-name>]

That will accomplish the same thing as setting the prometheus.io/scrape annotation to false on your pod. It might be easier just to set the annotation. I think you had a different issue when you tried to set the annotation to false though?
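For example, to drop only the one target from your log by its scrape address rather than a pod label (a sketch; the IP and port are the ones you posted, adjust to match your pod):

```yaml
# relabel_config regexes are anchored, so this matches the exact
# host:port and nothing else.
- action: drop
  regex: 10\.233\.81\.38:4001
  source_labels: [__address__]
```

Note this rule would need to come after the replace rule that rewrites __address__ from the prometheus.io/port annotation, so the regex matches the final scrape address.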