Prometheus reports "Error on ingesting results from rule evaluation with different value but same timestamp" and keeps restarting due to "Readiness probe failed"

yongdao xuy

Aug 19, 2019, 6:00:53 AM8/19/19
to Prometheus Users

Today we found that Prometheus was not stable, even though it had been deployed for weeks. We deleted all files under /prometheus/data and /prometheus/data/wal, but Prometheus still keeps rebooting because the k8s readiness probe fails.

We turned on INFO-level logging but did not find any hint except "Error on ingesting results from rule evaluation with different value but same timestamp". We are pretty sure the "error creating HTTP client: unable to load specified CA" error log does not impact Prometheus's stability.


1. prometheus, version 2.10.0 (branch: HEAD, revision: d20e84d0fb64aff2f62a977adc8cfb656da4e286)


2. The k8s-Resource rule group and the Prometheus config are attached below. We are not sure how to figure out which series have a different value but the same timestamp. Thanks for any help.


level=warn ts=2019-08-19T09:37:26.439Z caller=manager.go:553 component="rule manager" group=k8s-Resource msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=64

===========================================
groups:
- name: k8s-Resource
  rules:
  - expr: |
        sum(rate(container_cpu_usage_seconds_total{job=~".*kubernetes.*", image!="", container_name!=""}[5m])) by (namespace)
    record: namespace:container_cpu_usage_seconds_total:sum_rate
  - expr: |
        sum by (namespace, pod_name, container_name) (
          rate(container_cpu_usage_seconds_total{job=~".*kubernetes.*", image!="", container_name!=""}[5m])
        )
    record: namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate
  - expr: |
        sum(container_memory_usage_bytes{job=~".*kubernetes.*", image!="", container_name!=""}) by (namespace)
    record: namespace:container_memory_usage_bytes:sum
  - expr: |
         sum by (namespace, label_name) (
           sum(rate(container_cpu_usage_seconds_total{job=~".*kubernetes.*", image!="", container_name!=""}[5m])) by (namespace, pod_name)
         * on (namespace, pod_name) group_left(label_name) label_replace( (label_replace(kube_pod_labels{job="kube-state-metrics"}, "pod_name", "$1", "exported_pod", "(.*)")), "namespace", "$1", "exported_namespace", "(.*)" ) )
    record: namespace:container_cpu_usage_seconds_total:sum_rate
  - expr: |
        sum by (namespace, label_name) (
          sum(container_memory_usage_bytes{job=~".*kubernetes.*",image!="", container_name!=""}) by (pod_name, namespace)
        * on (namespace, pod_name) group_left(label_name)
          label_replace( (label_replace(kube_pod_labels{job="kube-state-metrics"}, "pod_name", "$1", "exported_pod", "(.*)")), "namespace", "$1", "exported_namespace", "(.*)" ) )
    record: namespace:container_memory_usage_bytes:sum
  - expr: |
        sum by (namespace, label_name) (
            sum (kube_pod_container_resource_requests_cpu_cores{job="kube-state-metrics"} * on (endpoint, instance, job, namespace, pod, exported_pod ,service) group_left(phase) (kube_pod_status_phase{phase=~"^(Pending|Running)$"} == 1)) by (namespace, exported_pod, pod)
          * on (namespace, exported_pod, pod)
            group_left(label_name) kube_pod_labels{job="kube-state-metrics"} )
    record: namespace:kube_pod_container_resource_requests_cpu_cores:sum 

  - expr: |
        sum by (namespace, label_name) (
            sum(kube_pod_container_resource_requests_memory_bytes{job="kube-state-metrics"} * on (endpoint, instance, job, namespace, pod, exported_pod, service) group_left(phase) (kube_pod_status_phase{phase=~"^(Pending|Running)$"} == 1)) by (namespace, exported_pod, pod)
          * on (namespace, exported_pod, pod)
            group_left(label_name) kube_pod_labels{job="kube-state-metrics"} )
    record: namespace:kube_pod_container_resource_requests_memory_bytes:sum

=======================================
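For reference: this warning is emitted when two rule results in one evaluation produce an identical metric name and label set with different values, and note that the group above defines both `namespace:container_cpu_usage_seconds_total:sum_rate` and `namespace:container_memory_usage_bytes:sum` twice each. A minimal Python sketch (my own helper, not part of Prometheus) that scans a rules file for record names defined more than once:

```python
import re
from collections import Counter

# Count how often each record name appears in a rules file.
# Duplicate record names whose expressions aggregate to the same
# label set are one plausible source of the "different value but
# same timestamp" warning.
def duplicate_records(rules_text):
    names = re.findall(r"^\s*record:\s*(\S+)", rules_text, re.MULTILINE)
    return sorted(n for n, c in Counter(names).items() if c > 1)

example = """
groups:
- name: k8s-Resource
  rules:
  - expr: vector(1)
    record: namespace:container_cpu_usage_seconds_total:sum_rate
  - expr: vector(2)
    record: namespace:container_cpu_usage_seconds_total:sum_rate
"""
print(duplicate_records(example))
# -> ['namespace:container_cpu_usage_seconds_total:sum_rate']
```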


3. Prometheus config

global:
  scrape_interval: 15s
rule_files:
  - "/etc/prometheus/*.rules"

scrape_configs:
- job_name: 'etcd-stats'
  static_configs:
  - targets: ['1.1.2.3:2379']

- job_name: 'prometheus'
  static_configs:
  - targets: 
    - "prometheus:9090"

- job_name: 'istio-mesh'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-telemetry;prometheus

# Scrape config for envoy stats
- job_name: 'envoy-stats'
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
  - role: pod

  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    action: keep
    regex: '.*-envoy-prom'
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:15090
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod_name
  - source_labels: [__meta_kubernetes_pod_label_app]
    action: keep
    regex: '.*'

  metric_relabel_configs:
  # Exclude some of the envoy metrics that have massive cardinality
  # This list may need to be pruned further moving forward, as informed
  # by performance and scalability testing.
  #- source_labels: [ cluster_name ]
  #  regex: '(outbound|inbound|prometheus_stats).*'
  #  action: keep
  - source_labels: [ tcp_prefix ]
    regex: '(outbound|inbound|prometheus_stats).*'
    action: drop
  - source_labels: [ listener_address ]
    regex: '(.+)'
    action: drop
  - source_labels: [ http_conn_manager_listener_prefix ]
    regex: '(.+)'
    action: drop
  - source_labels: [ http_conn_manager_prefix ]
    regex: '(.+)'
    action: drop
  - source_labels: [ __name__ ]
    regex: 'envoy_tls.*'
    action: drop
  - source_labels: [ __name__ ]
    regex: 'envoy_tcp_downstream.*'
    action: drop
  #- source_labels: [ __name__ ]
  #  regex: 'envoy_http_(stats|admin).*'
  #  action: keep
  #- source_labels: [ __name__ ]
  #  regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
  #  action: keep

- job_name: 'istio-policy'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system


  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-policy;http-monitoring

- job_name: 'istio-telemetry'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-telemetry;http-monitoring

- job_name: 'pilot'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-pilot;http-monitoring

- job_name: 'galley'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-galley;http-monitoring

- job_name: 'citadel'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-citadel;http-monitoring

# scrape config for API servers
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: kubernetes;https

# scrape config for nodes (kubelet)
- job_name: 'kubernetes-nodes'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics

- job_name: 'kube-scheduler'
  honor_labels: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: endpoints
    namespaces:
      names:
      - kube-system
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_component]
    separator: ;
    regex: kube-scheduler
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: http-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace

- job_name: 'kube-controller-manager'
  honor_labels: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: endpoints
    namespaces:
      names:
      - kube-system
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_component]
    separator: ;
    regex: kube-controller-manager
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: http-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace

# scrape config for service endpoints.
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job.
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # Keep target if there's no sidecar or if prometheus.io/scheme is explicitly set to "http"
  - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: keep
    regex: ((;.*)|(.*;http))
  - source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
    action: drop
    regex: (true)
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod_name

- job_name: 'kubernetes-pods-istio-secure'
  scheme: https
  tls_config:
    ca_file: /etc/istio-certs/root-cert.pem
    cert_file: /etc/istio-certs/cert-chain.pem
    key_file: /etc/istio-certs/key.pem
    insecure_skip_verify: true # prometheus does not support secure naming.
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # sidecar status annotation is added by sidecar injector and
  # istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
  - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
    action: keep
    regex: (([^;]+);([^;]*))|(([^;]*);(true))
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: drop
    regex: (http)
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__] # Only keep address that is host:port
    action: keep # otherwise an extra target with ':443' is added for https scheme
    regex: ([^:]+):(\d+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod_name

- job_name: node-exporter
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - devops-tenant-monitor
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: node-exporter.*
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 9100
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
- job_name: mysqld-exporter
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - devops-tenant-monitor
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: mysqldexporter.*
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 9104
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)  
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
- job_name: redis-exporter
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - devops-tenant-monitor
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: redisexporter.*
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 9121
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)    
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
- job_name: kube-state-metrics
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names: []
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: kube-state-metrics.*
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 8080
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
- job_name: kafka-exporter
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - devops-tenant-monitor
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: kafka-exporter.*
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 9308
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
- job_name: core-dns
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - kube-system
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_labels_harmonycloud_cn_local_dns]
    separator: ;
    regex: true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    regex: 9253
    action: keep    
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
- job_name: kube-dns
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - kube-system
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_labels_harmonycloud_cn_kube_dns]
    separator: ;
    regex: true
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 10054
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
- job_name: net-data
  scrape_timeout: 10s
  metrics_path: /api/v1/allmetrics
  params:
    # format: prometheus | prometheus_all_hosts
    # You can use `prometheus_all_hosts` if you want Prometheus to set the `instance` to your hostname instead of IP 
    format: [prometheus]
  scheme: http
  honor_labels: true
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - devops-tenant-monitor
      - basics-tenant-hivip
  relabel_configs:  
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    regex: 19999
    action: keep    
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
- job_name: 'jvm-stats'
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  honor_labels: true
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - basics-tenant-hivip
  relabel_configs:  
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    regex: 7676
    action: keep    
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
- job_name: elasticsearch-exporter 
  scrape_timeout: 15s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - devops-tenant-monitor
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: elasticsearch-exporter.*
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 9114
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
- job_name: alertmanager
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names:
      - devops-tenant-monitor
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: alertmanager.*
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: 9093
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: ${1}
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - "alertmanager:9093"
  - kubernetes_sd_configs:
      - role: pod
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
    - source_labels: [__meta_kubernetes_namespace]
      regex: kube-system
      action: keep
    - source_labels: [__meta_kubernetes_pod_label_k8s_app]
      regex: alertmanager
      action: keep
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      regex:
      action: drop







level=warn ts=2019-08-19T09:33:43.012Z caller=main.go:275 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2019-08-19T09:33:43.013Z caller=main.go:322 msg="Starting Prometheus" version="(version=2.10.0, branch=HEAD, revision=d20e84d0fb64aff2f62a977adc8cfb656da4e286)"
level=info ts=2019-08-19T09:33:43.013Z caller=main.go:323 build_context="(go=go1.12.5, user=root@a49185acd9b0, date=20190525-12:28:13)"
level=info ts=2019-08-19T09:33:43.013Z caller=main.go:324 host_details="(Linux 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 prometheus-4202522698-1z6rv (none))"
level=info ts=2019-08-19T09:33:43.013Z caller=main.go:325 fd_limits="(soft=655360, hard=655360)"
level=info ts=2019-08-19T09:33:43.013Z caller=main.go:326 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-08-19T09:33:43.021Z caller=main.go:645 msg="Starting TSDB ..."
level=info ts=2019-08-19T09:33:43.022Z caller=web.go:417 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=warn ts=2019-08-19T09:34:32.047Z caller=head.go:506 component=tsdb msg="encountered WAL error, attempting repair" err="read records: corruption in segment data/wal/00000020 at 48726016: last record is torn"
level=warn ts=2019-08-19T09:34:32.048Z caller=wal.go:300 component=tsdb msg="starting corruption repair" segment=20 offset=48726016
level=warn ts=2019-08-19T09:34:32.148Z caller=wal.go:308 component=tsdb msg="deleting all segments newer than corrupted segment" segment=20
level=warn ts=2019-08-19T09:34:32.148Z caller=wal.go:330 component=tsdb msg="rewrite corrupted segment" segment=20
level=info ts=2019-08-19T09:34:37.501Z caller=main.go:660 fs_type=65735546
level=info ts=2019-08-19T09:34:37.501Z caller=main.go:661 msg="TSDB started"
level=info ts=2019-08-19T09:34:37.501Z caller=main.go:730 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2019-08-19T09:34:37.519Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.521Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.523Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.524Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.526Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.527Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.528Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.530Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.531Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.533Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.534Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.539Z caller=kubernetes.go:192 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:34:37.657Z caller=main.go:758 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2019-08-19T09:34:37.657Z caller=main.go:614 msg="Server is ready to receive web requests."
level=error ts=2019-08-19T09:34:48.028Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:34:52.738Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:34:57.727Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:35:06.032Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:35:08.030Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:35:13.130Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:35:21.834Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:35:23.034Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:35:28.151Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:35:33.634Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=warn ts=2019-08-19T09:37:26.439Z caller=manager.go:553 component="rule manager" group=k8s-Resource msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=64
level=warn ts=2019-08-19T09:37:26.636Z caller=manager.go:553 component="rule manager" group=k8s-Resource msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=1
level=error ts=2019-08-19T09:37:27.727Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:37:32.658Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:37:37.724Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:37:42.727Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:37:54.233Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:37:58.328Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:03.028Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:08.028Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=warn ts=2019-08-19T09:38:09.946Z caller=manager.go:553 component="rule manager" group=k8s-Resource msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=63
level=warn ts=2019-08-19T09:38:10.239Z caller=manager.go:553 component="rule manager" group=k8s-Resource msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=1
level=error ts=2019-08-19T09:38:13.028Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:18.132Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:26.933Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:39.030Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:47.436Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:48.539Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:53.028Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:38:58.027Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:39:03.027Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:39:08.030Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=warn ts=2019-08-19T09:39:40.879Z caller=main.go:275 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2019-08-19T09:39:40.880Z caller=main.go:322 msg="Starting Prometheus" version="(version=2.10.0, branch=HEAD, revision=d20e84d0fb64aff2f62a977adc8cfb656da4e286)"
level=info ts=2019-08-19T09:39:40.880Z caller=main.go:323 build_context="(go=go1.12.5, user=root@a49185acd9b0, date=20190525-12:28:13)"
level=info ts=2019-08-19T09:39:40.880Z caller=main.go:324 host_details="(Linux 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 prometheus-4202522698-1z6rv (none))"
level=info ts=2019-08-19T09:39:40.880Z caller=main.go:325 fd_limits="(soft=655360, hard=655360)"
level=info ts=2019-08-19T09:39:40.880Z caller=main.go:326 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-08-19T09:39:40.889Z caller=web.go:417 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-08-19T09:39:40.889Z caller=main.go:645 msg="Starting TSDB ..."
level=warn ts=2019-08-19T09:39:40.981Z caller=wal.go:116 component=tsdb msg="last page of the wal is torn, filling it with zeros" segment=data/wal/00000024
level=warn ts=2019-08-19T09:40:43.428Z caller=head.go:458 component=tsdb msg="unknown series references" count=17913100
level=info ts=2019-08-19T09:40:45.620Z caller=main.go:660 fs_type=65735546
level=info ts=2019-08-19T09:40:45.620Z caller=main.go:661 msg="TSDB started"
level=info ts=2019-08-19T09:40:45.620Z caller=main.go:730 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2019-08-19T09:40:45.688Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.690Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.691Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.692Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.694Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.695Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.700Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.701Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.702Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.703Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.703Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.707Z caller=kubernetes.go:192 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-08-19T09:40:45.781Z caller=main.go:758 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2019-08-19T09:40:45.781Z caller=main.go:614 msg="Server is ready to receive web requests."
level=error ts=2019-08-19T09:41:00.931Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:41:05.927Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure
level=error ts=2019-08-19T09:41:10.781Z caller=manager.go:123 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/istio-certs/root-cert.pem: open /etc/istio-certs/root-cert.pem: no such file or directory" scrape_pool=kubernetes-pods-istio-secure

Simon Pasquier

Aug 19, 2019, 11:35:45 AM
to yongdao xuy, Prometheus Users
See https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/230
You need to remove the second recording rule for
'namespace:container_cpu_usage_seconds_total:sum_rate'.
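The mechanism behind the warning: when two rules in the same group record to the same metric name and produce identical label sets, each evaluation emits two samples for the same series at the same timestamp with (usually) different values, and the second sample is dropped. A quick way to spot such duplicates is to grep for repeated `record:` names — a minimal sketch below, where the file path `/tmp/rules.yml` and the trimmed expressions are illustrative stand-ins, not the poster's actual rules file:

```shell
# Build a stand-in rules file reproducing the kubernetes-mixin bug:
# two rules in one group recording to the same metric name.
cat > /tmp/rules.yml <<'EOF'
groups:
- name: k8s-Resource
  rules:
  - record: namespace:container_cpu_usage_seconds_total:sum_rate
    expr: sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (namespace)
  - record: namespace:container_cpu_usage_seconds_total:sum_rate
    expr: sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)
EOF

# Print any record: name that appears more than once -- these are the
# series that can collide with "different value but same timestamp".
grep -oE 'record: *[^ ]+' /tmp/rules.yml | sort | uniq -d
# -> record: namespace:container_cpu_usage_seconds_total:sum_rate
```

Note that duplicate record names are not necessarily rejected by `promtool check rules`, since recording the same name from expressions with different output label sets is legal; the collision only bites when the resulting series are identical, as here.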

yongdao xuy

Aug 19, 2019, 9:47:33 PM
to Prometheus Users
Thanks Simon for the hint. Appreciate your help.

On Monday, August 19, 2019 at 11:35:45 PM UTC+8, Simon Pasquier wrote: