Writing multiple Custom Collectors using Python

Ashwin kumar

May 22, 2020, 4:07:55 AM
to Prometheus Users
Hi everyone,

I am new to Prometheus and I am writing a Custom Collector to get CPU and Memory values of two firewalls. I can expand it to as many devices as I want.

Now I am opening an HTTP server on port 8000 to expose these values, so my metrics are available at http://localhost:8000

My script looks something like this:

==================================================

import time

from prometheus_client import start_http_server, REGISTRY


class CustomCollector(object):
    def collect(self):
        devices = []  # the firewalls I want to poll
        for device in devices:
            # collect CPU and yield a metric family
            # collect Memory and yield a metric family
            pass


if __name__ == '__main__':
    start_http_server(8000)
    REGISTRY.register(CustomCollector())
    while True:
        time.sleep(1)
===================================================

I have one Custom Collector that returns both Memory and CPU for the devices, which I have stored in a list.

Now when I add about 30 devices the response would become untidy and difficult to interpret.

I want to have the CPU values and the Memory values at separate endpoints, namely:

http://localhost:8000/CPU 
http://localhost:8000/Memory

So is there a way to do this?

I have two questions:

1) Can we write multiple Custom Collector functions in the same script and somehow modify their endpoints, with the port remaining the same?
2) Can we write multiple Custom Collector functions in the same script, exposing them on different ports? (Not sure if this is efficient, though.)

Any other ideas apart from these would also help. (A rough sketch of what I have in mind for question 2 is below.)
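
Just to make that concrete — this is only a sketch, with placeholder class names, metric names, ports and the 'fw1' device label, assuming prometheus_client's CollectorRegistry, GaugeMetricFamily and start_http_server(registry=...):

==================================================

import time

from prometheus_client import CollectorRegistry, start_http_server
from prometheus_client.core import GaugeMetricFamily


class CPUCollector(object):
    def collect(self):
        cpu = GaugeMetricFamily('firewall_cpu_percent', 'CPU usage per device', labels=['device'])
        cpu.add_metric(['fw1'], 12.5)  # placeholder value; real code would poll the firewall here
        yield cpu


class MemoryCollector(object):
    def collect(self):
        memory = GaugeMetricFamily('firewall_memory_percent', 'Memory usage per device', labels=['device'])
        memory.add_metric(['fw1'], 42.0)  # placeholder value
        yield memory


if __name__ == '__main__':
    # one registry per collector, each exposed on its own port
    cpu_registry = CollectorRegistry()
    cpu_registry.register(CPUCollector())
    start_http_server(8001, registry=cpu_registry)

    memory_registry = CollectorRegistry()
    memory_registry.register(MemoryCollector())
    start_http_server(8002, registry=memory_registry)

    while True:
        time.sleep(1)

==================================================

For question 1 (same port, different paths) I believe prometheus_client also has make_wsgi_app(registry), which could be mounted at /CPU and /Memory behind a WSGI server, but I am not sure if that is the right approach.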

Brian Brazil

May 22, 2020, 4:25:21 AM
to Ashwin kumar, Prometheus Users
On Fri, 22 May 2020 at 09:08, Ashwin kumar <ashwin...@gmail.com> wrote:
Hi everyone,

I am new to Prometheus and I am writing a Custom Collector to get CPU and Memory values of two firewalls.
I'd suggest looking at https://github.com/prometheus/snmp_exporter, which is meant for getting metrics from network devices.

Brian

 
Brian Candler

May 22, 2020, 4:45:22 AM
to Prometheus Users
Now when I add about 30 devices the response would become untidy and difficult to interpret.

That doesn't matter. If you split it into two endpoints, then you'll need two scrape jobs in Prometheus, and you'll still collect the same number of metrics.

If you look at (say) node_exporter, you'll see there are a ton of metrics:

curl localhost:9100/metrics

But there's no need to separate them into different endpoints. As long as the metrics are sensibly named and labeled, anyone using PromQL will be able to find the data they want.
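
For example (just a sketch — the metric names and placeholder values are mine, not from your script), a single collector can attach a "device" label to each sample, so one endpoint covers all 30 firewalls:

==================================================

from prometheus_client.core import GaugeMetricFamily


class FirewallCollector(object):
    def __init__(self, devices):
        self.devices = devices

    def collect(self):
        cpu = GaugeMetricFamily('firewall_cpu_usage_percent', 'CPU usage of the firewall', labels=['device'])
        memory = GaugeMetricFamily('firewall_memory_usage_percent', 'Memory usage of the firewall', labels=['device'])
        for device in self.devices:
            cpu.add_metric([device], 12.5)     # placeholder; query the device here
            memory.add_metric([device], 42.0)  # placeholder; query the device here
        yield cpu
        yield memory

==================================================

Then in PromQL you just filter by label, e.g. firewall_cpu_usage_percent{device="fw1"}, no matter how many devices are behind the one endpoint.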

There are some hints on metric naming in the Prometheus docs, e.g. https://prometheus.io/docs/practices/naming/
