I keep getting "server returned HTTP status 503 Service Unavailable" from my JMX Exporter scrape jobs for the pods that have the Istio sidecar enabled. Has anyone encountered a similar issue before?
1. Scraping works fine if I disable the Istio sidecar;
2. I can fetch the metrics fine with curl from any pod in the same k8s cluster (even from the Prometheus pod itself).
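For reference, this is roughly how I check the endpoint from inside the cluster; <pod-ip> and <jmx-exporter-port> are placeholders for my actual values (the port is the one named http-jmx on the container):

# run curl from any other pod against the JMX Exporter port (placeholders in angle brackets)
kubectl exec -n <namespace> <any-other-pod> -- curl -s http://<pod-ip>:<jmx-exporter-port>/metrics | head

That returns the metrics without any error.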
I tried creating scrape jobs through the Pod, Service, and Endpoints roles, but none of them makes any difference.
- job_name: 'den-jmxexport-pod'
  scheme: http
  scrape_interval: 15s
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_container_name, __meta_kubernetes_pod_container_port_name]
      action: keep
      regex: (audit|web|loader|scheduler|environment|analysis);http-jmx
    - source_labels: [__meta_kubernetes_pod_name]
      target_label: pod_name
    - source_labels: [__meta_kubernetes_namespace]
      target_label: namespace
    - source_labels: [__meta_kubernetes_pod_node_name]
      target_label: pod_node
    - source_labels: [__meta_kubernetes_pod_phase]
      target_label: pod_phase
    - source_labels: [__meta_kubernetes_pod_host_ip]
      target_label: pod_host_ip
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
- job_name: 'den-jmxexport-service'
  scheme: http
  scrape_interval: 15s
  kubernetes_sd_configs:
    - role: service
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_service_port_name]
      action: keep
      regex: (audit-service|web-service|loader-service|scheduler-service|environment-service|analysis-service);http-jmx
    - source_labels: [__meta_kubernetes_namespace]
      target_label: namespace
    - source_labels: [__meta_kubernetes_service_cluster_ip]
      target_label: service_ip
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
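The Endpoints variant I tried was along these lines (sketched here with the same service names and the http-jmx port name as above; the exact relabeling in the job I used may have differed slightly):

# endpoints-role sketch: keep only the http-jmx port of the listed services
- job_name: 'den-jmxexport-endpoints'
  scheme: http
  scrape_interval: 15s
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: (audit-service|web-service|loader-service|scheduler-service|environment-service|analysis-service);http-jmx
    - source_labels: [__meta_kubernetes_namespace]
      target_label: namespace
    - source_labels: [__meta_kubernetes_pod_name]
      target_label: pod_name
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)

In all three cases the result is the same 503 from the sidecar-injected pods.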
As you can see from the following table, none of the pods with the Istio sidecar enabled is being scraped successfully; the one without the Istio sidecar works fine.