Problem Configuring Prometheus with CollectD


Shapath Neupane

Apr 12, 2015, 1:54:54 AM
to prometheus...@googlegroups.com
I'm trying to configure Prometheus with CollectD so I can see the stats from CollectD in Prometheus, but I'm having trouble with it.

I have installed CollectD on Ubuntu 14.04, enabled some of the default plugins (CPU, RAM), and disabled the plugins for software that isn't installed, like Apache.

I have enabled the write_http plugin and basically copied what is in the README of the CollectD exporter's GitHub repo:


LoadPlugin write_http
<Plugin write_http>
  <URL "http://localhost:9090/collectd-post">
    Format "JSON"
    Interval 5s
    StoreRates false
  </URL>
</Plugin>


Then, moving on to the Prometheus configuration file, I have this:

# Global default settings.
global {
  scrape_interval: "15s"     # By default, scrape targets every 15 seconds.
  evaluation_interval: "15s" # By default, evaluate rules every 15 seconds.

  # Attach these extra labels to all timeseries collected by this Prometheus instance.
  labels: {
    label: {
      name: "monitor"
      value: "codelab-monitor"
    }
  }

  # Load and evaluate rules in this file every 'evaluation_interval' seconds. This field may be repeated.
  #rule_file: "prometheus.rules"
}

job {
  # This job will be named "collectd", so a job="collectd" label will be
  # added to all time series scraped from it.
  name: "collectd"
  # Scrape this job every 5s, overriding the global default.
  scrape_interval: "5s"
  # Configure a group of static HTTP targets
  target_group {
    target: "http://localhost:9090/collectd-post"
  }
}



# A job definition containing exactly one endpoint to scrape: Here it's prometheus itself.
job: {
  # The job name is added as a label `job={job-name}` to any timeseries scraped from this job.
  name: "prometheus"
  # Override the global default and scrape targets from this job every 5 seconds.
  scrape_interval: "5s"

  # Let's define a group of targets to scrape for this job. In this case, only one.
  target_group: {
    # These endpoints are scraped via HTTP.
  }
}


However, the target is showing as Unhealthy on the Prometheus status page. Link

Is there something I am missing?

Brian Brazil

Apr 12, 2015, 3:24:54 AM
to Shapath Neupane, prometheus-developers
On 12 April 2015 at 06:54, Shapath Neupane <neupane...@gmail.com> wrote:
I'm trying to configure Prometheus with CollectD so I can see the stats from CollectD in Prometheus, but I'm having trouble with it.

I have installed CollectD on Ubuntu 14.04, enabled some of the default plugins (CPU, RAM), and disabled the plugins for software that isn't installed, like Apache.


If you're looking for machine stats, the Node Exporter is the best way to do that. See http://www.boxever.com/monitoring-your-machines-with-prometheus
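If you go that route, a scrape job for it in the same config format would look roughly like this, assuming the node exporter runs on the same host on its default port (9100 in current versions):

job {
  # Adds a job="node" label to all series scraped from the node exporter.
  name: "node"
  target_group {
    # The node exporter serves its metrics at /metrics.
    target: "http://localhost:9100/metrics"
  }
}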
Your scrape target path should be /metrics, and the default port for the collectd exporter is 9103. Visit it in your browser to check.
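Concretely, assuming collectd and the collectd exporter run on the same host with the exporter's defaults, the wiring would look roughly like this:

# collectd.conf: write_http posts to the collectd exporter (default port 9103),
# not to Prometheus itself.
LoadPlugin write_http
<Plugin write_http>
  <URL "http://localhost:9103/collectd-post">
    Format "JSON"
    StoreRates false
  </URL>
</Plugin>

# prometheus.conf: Prometheus then scrapes the exporter's /metrics endpoint.
job {
  name: "collectd"
  scrape_interval: "5s"
  target_group {
    target: "http://localhost:9103/metrics"
  }
}

Once collectd is posting, curl http://localhost:9103/metrics should return plain-text metrics that Prometheus can scrape.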

Brian
