stable/prometheus-operator one node-exporter pod always in pending


Zhang Zhao

Jun 23, 2020, 1:46:51 PM
to Prometheus Users
I am using stable/prometheus-operator to deploy Prometheus. I added an additional scrape config so that Prometheus can scrape some Linux VMs. However, one of the node-exporter pods is always Pending; the events say it doesn't match the node selector. I added app: prometheus-node-exporter in the additional scrape config... Any advice?



additionalScrapeConfigs:
    - job_name: node-exporter-vm
      static_configs:
        - targets:
          - xx.xx.xx.xx:9100
          labels:
            app: prometheus-node-exporter
            namespace: espr-prometheus-nonprod
            jobLabel: node-exporter
            release: prometheus



prometheus-prometheus-node-exporter-df7bq                1/1     Running   1          8d
prometheus-prometheus-node-exporter-j6bmt                1/1     Running   1          8d
prometheus-prometheus-node-exporter-npg6f                0/1     Pending   0          9m54s





Events:
  Type     Reason             Age                    From                Message
  ----     ------             ----                   ----                -------
  Normal   NotTriggerScaleUp  7m4s (x4 over 7m34s)   cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) didn't match node selector
  Normal   NotTriggerScaleUp  2m57s (x2 over 3m8s)   cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) didn't match node selector, 1 node(s) didn't have free ports for the requested pod ports
  Normal   NotTriggerScaleUp  107s (x29 over 6m48s)  cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) didn't match node selector
  Warning  FailedScheduling   37s (x6 over 7m43s)    default-scheduler   0/3 nodes are available: 2 node(s) didn't match node selector, 3 node(s) didn't have free ports for the requested pod ports.
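[Editor's note, not part of the original post: a hedged diagnostic sketch for the two scheduler complaints above. The node-exporter chart typically runs with hostPort 9100 and may carry a nodeSelector in its pod template, so both "didn't match node selector" and "didn't have free ports" can be checked directly with standard kubectl commands. The namespace, DaemonSet, and pod names are taken from the output above.]

```shell
# Print the nodeSelector (if any) from the DaemonSet's pod template,
# to compare against the labels actually present on the nodes.
kubectl -n espr-prometheus-nonprod get daemonset prometheus-prometheus-node-exporter \
  -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'

# List every node with its labels; the Pending pod can only land on a
# node whose labels satisfy the selector printed above.
kubectl get nodes --show-labels

# Inspect the Pending pod's events again, including the host ports it
# requests; hostPort 9100 must be free on the target node.
kubectl -n espr-prometheus-nonprod describe pod prometheus-prometheus-node-exporter-npg6f
```

Note that the labels under static_configs in the scrape config only annotate the scraped time series; they have no effect on pod scheduling, which is governed by the DaemonSet's pod template.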









apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus
    meta.helm.sh/release-namespace: espr-prometheus-nonprod
  creationTimestamp: "2020-06-02T06:23:34Z"
  generation: 2
  labels:
    app: prometheus-node-exporter
    app.kubernetes.io/managed-by: Helm
    chart: prometheus-node-exporter-1.10.0
    heritage: Helm
    jobLabel: node-exporter
    release: prometheus
  name: prometheus-prometheus-node-exporter
  namespace: espr-prometheus-nonprod
  resourceVersion: "4231680"
  selfLink: /apis/extensions/v1beta1/namespaces/espr-prometheus-nonprod/daemonsets/prometheus-prometheus-node-exporter
  uid: fbc88e6b-f806-4cbb-8903-bb360bb0a855
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prometheus-node-exporter
      release: prometheus
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus-node-exporter
        chart: prometheus-node-exporter-1.10.0
        heritage: Helm
        jobLabel: node-exporter
        release: prometheus




Zhang
