Hey Patrick,
Thanks for the Prometheus integration. While I personally prefer running statically built binaries like the node exporter for host metrics, I can imagine this coming in handy for some people.
I tried running metrics.sh locally, but noticed some issues:
1) Requesting http://localhost:9100/metrics seems to clear any metric state for subsequent requests, so the next scrape shows an empty or reduced set of metrics until more metrics "accumulate" again. In Prometheus, a scrape should have no effect on the data (two parallel scrapes at the same time should return the same data). For example, counters should simply be in-memory values that are continuously incremented, and a scrape just exports their current state. The same goes for the other metric types.
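For illustration, here is a minimal sketch of those semantics using the Go client library (the metric name is made up, and obviously metrics.sh isn't Go; this is just to show the intended behavior):

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// The counter lives in memory and only ever goes up; it is never reset by a scrape.
var bytesOut = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myapp_network_transmit_bytes_total", // hypothetical name
	Help: "Total number of bytes transmitted.",
})

func main() {
	prometheus.MustRegister(bytesOut)

	// Wherever the event happens, just increment the in-memory value.
	bytesOut.Add(1024)

	// The /metrics handler only renders the current state; scraping it
	// (even from several Prometheus servers in parallel) changes nothing.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9100", nil)
}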
2) The metrics.sh metrics are of the form:
metrics_sh{metric="network_io.out"}
This doesn't follow Prometheus's metric data model (http://prometheus.io/docs/concepts/data_model/): the metric name should be the identifier before the curly braces, and the labels should differentiate dimensional aspects of the same metric. For example, Prometheus's node exporter exports the comparable network metric as:
node_network_receive_bytes{device="eth0"}
node_network_receive_bytes{device="bond0"}
[...]
A job="<exporter-job-name>" label will be automatically added by Prometheus, telling you where the metric came from.
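To make that concrete (the exact name and labels are of course up to you, this is just one hypothetical way to map it), the metrics.sh example above could become something like:

metrics_sh_network_io_bytes_total{direction="out"}
metrics_sh_network_io_bytes_total{direction="in"}

That way the metric name says what is being measured, and the label values only distinguish the dimensions of it.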
3) You set client-side timestamps on the samples, which should only be done in very rare power-user circumstances. If you just omit them, Prometheus will automatically attach scrape-time timestamps.
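In the text exposition format, that's the difference between emitting a sample with a trailing timestamp (milliseconds since epoch) and leaving it off. Using the hypothetical metric name from above:

metrics_sh_network_io_bytes_total{direction="out"} 12345 1434567890000
metrics_sh_network_io_bytes_total{direction="out"} 12345

The first form pins the sample to a client-chosen time; the second lets Prometheus assign the scrape time, which is what you want here.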
Hope this helps!
Cheers,
Julius