Integrate Kafka as a remote storage backend.

georg...@gmail.com

Jan 3, 2017, 9:02:19 AM
to Prometheus Developers
Hi,

I integrated Kafka into the existing remote storage structure in pull request #2315.

I know there is Vulcan and so on, but I basically just needed a fast way to send the data to our central Kafka entry point, and this was the quickest approach.

What do you think?

Brian Brazil

Jan 3, 2017, 9:04:55 AM
to georg...@gmail.com, Prometheus Developers
We want to keep all remote storage options external to Prometheus itself, particularly those not suitable to use as storage such as Kafka. See https://www.robustperception.io/using-the-remote-write-path/ for how to integrate. 
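For reference, pointing Prometheus at an external endpoint like this is just a remote_write block in prometheus.yml (the exact shape has varied slightly between Prometheus 1.x and 2.x); the URL below is a placeholder for wherever your adapter or bridge listens, not a real endpoint:

remote_write:
  - url: "http://localhost:9201/receive"   # placeholder: your Kafka adapter / bridge endpoint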


Georg Öttl

Jan 3, 2017, 9:26:28 AM
to Brian Brazil, Prometheus Developers
Thanks for the link; that seems to be the more elegant solution, and it moves the problem away from Prometheus.

For Kafka to work, you'd need to implement a protobuf consumer and enable the REST API. Am I right?

Regards,
Georg

georg...@gmail.com

Jan 23, 2017, 8:48:39 AM
to Prometheus Developers, brian....@robustperception.io, georg...@gmail.com
We tested remote_write in its current implementation a bit. Getting everything to work with the default settings isn't easy: the default, Protocol Buffers over HTTP with snappy compression, isn't the easiest way for a user to get started, and it makes starting with remote_write more complicated than it should be.

For my liking, I'd prefer to start with everything in plain text, without protobufs and snappy. The documentation also lacks the protobuf schema, so it is hard to actually implement consumers for the data.

I hope this uninvited feedback doesn't do any harm. If it does, please ignore it :-)

Regards,
Georg

Tom Wilkie

Jan 23, 2017, 9:25:41 AM
to georg...@gmail.com, Prometheus Developers, Brian Brazil
Hi Georg

Thanks for the feedback, and sorry you found it difficult. May I ask what language you were using?

There is an example remote write 'server' implementation (linked from Brian's post) [1], and the definition of the protobufs is checked into the repo [2].

For a plain-text, uncompressed view of the data, you could have used the federation interface to pull data out of Prometheus. Other than that, what else could we have done to make this easier? Perhaps providing examples in Java etc. would help.
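To make that concrete, here is a minimal sketch of such a receiver in Go. It assumes the prompb package layout used by current Prometheus releases (the package path has moved around over time) and the snappy-compressed protobuf framing described above; it is not the example server from [1]:

package main

import (
	"io/ioutil"
	"log"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	http.HandleFunc("/receive", func(w http.ResponseWriter, r *http.Request) {
		// remote_write sends a snappy-compressed protobuf WriteRequest in the POST body.
		compressed, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		data, err := snappy.Decode(nil, compressed)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		var req prompb.WriteRequest
		if err := proto.Unmarshal(data, &req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// Log every sample with its label set; a real consumer would forward
		// these to Kafka or wherever they need to go.
		for _, ts := range req.Timeseries {
			for _, s := range ts.Samples {
				log.Printf("labels=%v value=%g timestamp_ms=%d", ts.Labels, s.Value, s.Timestamp)
			}
		}
	})
	log.Fatal(http.ListenAndServe(":9201", nil))
}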

Thanks

Tom



Georg Öttl

Jan 23, 2017, 9:38:32 AM
to Tom Wilkie, Prometheus Developers, Brian Brazil
The federation interface, writing a custom server to translate the messages, or writing a new Kafka exporter inside Prometheus (which I did) - every option needs some kind of implementation on the user's side. If remote_write allowed sending the events as JSON over HTTP, I think it would be easier to integrate with existing systems, at least for proofs of concept: no need to compile protobufs or write custom implementations.




Tom Wilkie

Jan 23, 2017, 9:47:45 AM
to Georg Öttl, Prometheus Developers, Brian Brazil
Hi Georg

We're trying to keep the Prometheus server as "light" as possible, and avoiding support for multiple different formats is the point of the "generic" remote write path - previously, Prometheus embedded a bunch of different remote writers for things like InfluxDB, OpenTSDB, and Graphite.

Having said that, it would be quite straightforward to write a small proxy which translates from the proto/snappy format to JSON - would that help?
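As an illustration of what such a proxy might look like: the JSON shape below is made up purely for the example, the downstream URL is hypothetical, and the prompb/snappy usage is the same assumption as in the earlier sketch.

package main

import (
	"bytes"
	"encoding/json"
	"io/ioutil"
	"log"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

// jsonSample is an illustrative shape only, not any standard format.
type jsonSample struct {
	Labels      map[string]string `json:"labels"`
	Value       float64           `json:"value"`
	TimestampMs int64             `json:"timestamp_ms"`
}

func main() {
	downstream := "http://localhost:8080/metrics" // hypothetical JSON consumer

	http.HandleFunc("/receive", func(w http.ResponseWriter, r *http.Request) {
		// Decode the snappy-compressed protobuf WriteRequest, as before.
		compressed, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		data, err := snappy.Decode(nil, compressed)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		var req prompb.WriteRequest
		if err := proto.Unmarshal(data, &req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		// Flatten the WriteRequest into a simple JSON array of samples.
		var out []jsonSample
		for _, ts := range req.Timeseries {
			labels := make(map[string]string, len(ts.Labels))
			for _, l := range ts.Labels {
				labels[l.Name] = l.Value
			}
			for _, s := range ts.Samples {
				out = append(out, jsonSample{Labels: labels, Value: s.Value, TimestampMs: s.Timestamp})
			}
		}

		body, err := json.Marshal(out)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		resp, err := http.Post(downstream, "application/json", bytes.NewReader(body))
		if err != nil {
			log.Printf("forwarding failed: %v", err)
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		resp.Body.Close()
	})

	log.Fatal(http.ListenAndServe(":9201", nil))
}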

Tom


Georg Öttl

Jan 23, 2017, 9:48:53 AM
to Tom Wilkie, Prometheus Developers, Brian Brazil

Sounds good to me if this is the way to go.



guido.garc...@gmail.com

Oct 3, 2018, 11:23:29 AM
to Prometheus Developers
We have recently written a remote write endpoint for Kafka. It receives metrics and writes them to a Kafka topic in JSON format.

You might find it useful: https://github.com/Telefonica/prometheus-kafka-adapter

I think that the protobuf-to-JSON conversion could still be improved. Contributions are welcome.


Siddhesh Divekar

Dec 3, 2019, 12:44:36 PM
to Prometheus Developers
Hi Guido,

Just curious: what do you do on the consumer side once the data goes to Kafka?