Monitoring Druid


Maxime Vanmeerbeck

Nov 13, 2013, 10:43:49 AM
to druid-de...@googlegroups.com
Hello,

My team and I have been working with Druid for a few months now; it's been great so far! We want to monitor all the node metrics and the external services.
How do you properly monitor Druid nodes? I believe you have to use the "emitter", but the link on the wiki points to the homepage.

We use Munin (http://munin-monitoring.org/), which can use plugins to retrieve the data.

What method should I use?
Build an emitter class to send the data to Munin? How?
Build a Munin plugin that connects to log4j in some way (if so, which way?)
Build a Munin plugin that listens for the events sent by the HTTP emitter? If so, can we keep the file log?
Or another way I didn't think of?

Our current partial configuration:

runtime.properties
com.metamx.emitter.logging=true
com.metamx.emitter.logging.level=info

log4j.properties
log4j.rootLogger=DEBUG, file

log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=/home/logs/druid/druid-master.log
log4j.appender.file.Append=true
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.file.Threshold=DEBUG


Thanks for your help!

Maxime

Fangjin Yang

Nov 15, 2013, 1:22:30 AM
to druid-de...@googlegroups.com
Hi Maxime,

Druid can emit metrics through HTTP to an endpoint that can ingest them. For us, we emit metrics to Kafka and create a Druid datasource that represents these metrics. If Munin can accept HTTP requests, you should be able to use the HTTP event emitter; otherwise, you will have to extend the emitting code for Munin. If you want to build an emitter class, you can look at the Emitter interface, as well as LoggingEmitter and HttpPostEmitter as example emitters.
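As a bare-bones illustration, a custom emitter might look something like this - a minimal sketch, assuming the com.metamx.emitter.core.Emitter interface of that era (method set from memory), with a hypothetical sendToMunin transport:

import java.io.IOException;
import java.util.Map;

import com.metamx.emitter.core.Emitter;
import com.metamx.emitter.core.Event;

public class MuninEmitter implements Emitter
{
  @Override
  public void start()
  {
    // Open the connection to your munin collector here.
  }

  @Override
  public void emit(Event event)
  {
    // Each event carries the metric name, value, service, host, etc.
    Map<String, Object> payload = event.toMap();
    sendToMunin(payload);
  }

  @Override
  public void flush() throws IOException
  {
    // Push out anything you have buffered.
  }

  @Override
  public void close() throws IOException
  {
    // Tear the connection down.
  }

  private void sendToMunin(Map<String, Object> payload)
  {
    // Hypothetical: write the values wherever your munin plugin reads them.
  }
}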

If you are using Druid 0.6.x, you can check out the docs we have on the Emitter module here:

The same logic applies for Druid 0.5.x, but some of the configs were renamed for Druid 0.6.x.

Let me know if you have more questions, I hope this can help you get started.

Otis Gospodnetic

Nov 16, 2013, 2:10:34 AM
to druid-de...@googlegroups.com
Hi,

If you guys end up exposing Druid's metrics via JMX, we could easily
add monitoring for it to SPM - http://sematext.com/spm . In the meantime,
you can use SPM to monitor other things Druid uses, like Hadoop,
ZooKeeper, and Kafka.

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/

Eric Tschetter

Nov 18, 2013, 10:49:25 AM
to druid-de...@googlegroups.com
Otis,

Does sematext have a push-based mechanism for getting data into it?

I do not see us exposing metrics via JMX soon (though someone could do it by creating an emitter if they wanted to), largely because exposing metrics via JMX generally implies a local aggregation/averaging of results.  For example, on every query, the compute nodes and broker emit metrics about query response time: the broker for the whole query, and the compute nodes on a per-segment basis.  In order to expose this via JMX, we would have to average out the response times local to the process, rather than aggregating it somewhere and averaging/computing percentiles/etc. at the aggregation point.  Does that make sense?

Maxime,

Hopefully Fangjin's response was helpful.  I'm not too familiar with munin, but we've intentionally tried to make it so that you can implement whatever you like by just creating your own Emitter class :).  I generally believe you'll be happier if you implement an Emitter that goes directly to munin (and if you do, that'd be a cool contribution to have if you are ok sharing it).  If you choose to keep things in the logs like they are now, be careful with your log rotation because the log files will grow fairly quickly.

--Eric


Otis Gospodnetic

Nov 18, 2013, 11:30:26 AM
to druid-de...@googlegroups.com
Hi Eric,

On Mon, Nov 18, 2013 at 10:49 AM, Eric Tschetter <eche...@gmail.com> wrote:
> Otis,
>
> Does sematext have a push-based mechanism for getting data into it?

It does, via Custom Metrics (Java and Ruby libs available, else one
can hit an HTTP API with data in JSON) -
https://sematext.atlassian.net/wiki/display/PUBSPM/Custom+Metrics

But having direct access to poll metrics from something like JMX would
be better, as we could build Druid-specific reports by default,
organized in a way that tends to make the most sense for Druid,
sliceable and diceable by dimensions that make sense for each Druid
metric. SPM does this for Hadoop, HBase, Elasticsearch, Solr, Kafka, etc.

> I do not see us exposing metrics via JMX soon (though someone could do it by
> creating an emitter if they wanted to) largely because exposing metrics via
> JMX generally implies a local aggregation/averaging of results. For
> example, on every query, the compute nodes and broker emit metrics about
> query response time. The broker on the whole query and the compute nodes on
> a per-segment basis. In order to expose this via jmx, we would have to
> average out the response times local to the process rather than aggregating
> it somewhere and averaging/computing percentiles/etc. at the aggregation
> point. Does that make sense?

Yes, but no :)
That is, wouldn't one want to know both the broker and compute node
metrics, and even see them separately? Take Kafka as a working example
- we have Kafka Producer, Kafka Broker, and Kafka Consumer metrics.
The agent is attached to each of these node (types) and provides
metrics about its node. In the UI one can then see all these metrics
separately, filter by individual host if needed, or by any other
dimension that makes sense for filtering.

Hit https://apps.sematext.com/demo and look for the SA.Prod.Kafka app to
check out how its metrics are represented. Based on what you wrote, I
think one would want the same for Druid.

Re JMX exporting - Coda's metrics package makes that easy.

> Maxime,
>
> Hopefully Fangjin's response was helpful. I'm not too familiar with munin,
> but we've intentionally tried to make it so that you can implement whatever
> you like by just creating your own Emitter class :). I generally believe
> you'll be happier if you implement an Emitter that goes directly to munin
> (and if you do, that'd be a cool contribution to have if you are ok sharing
> it). If you choose to keep things in the logs like they are now, be careful
> with your log rotation because the log files will grow fairly quickly.

One can also use Coda's metrics lib with a custom Reporter. There is
one for SPM, and that pushes metrics into SPM as Custom Metrics, which
is great for people's custom apps. But when you have a "known app",
like Druid, it's better to have pre-built reports for (new) Druid
users to get immediately, without having to understand what they need
to collect, how to aggregate it, what matters, and so on.

Eric Tschetter

Nov 18, 2013, 11:42:35 AM
to druid-de...@googlegroups.com
> Otis,
>
> Does sematext have a push-based mechanism for getting data into it?

> It does, via Custom Metrics (Java and Ruby libs available, else one
> can hit an HTTP API with data in JSON) -
> https://sematext.atlassian.net/wiki/display/PUBSPM/Custom+Metrics
>
> But having direct access to poll metrics from something like JMX would
> be better, as we could build Druid-specific reports by default,
> organized in a way that tends to make the most sense for Druid,
> sliceable and diceable by dimensions that make sense for each Druid
> metric.  SPM does this for Hadoop, HBase, Elasticsearch, Solr, Kafka, etc.

You can do this with a well-known push-based stream as well.  Metamarkets does this for its own monitoring: essentially, the http emitter is used to emit metrics to an HTTP server that pushes into Kafka, which is then loaded back into Druid, and a dashboard is built on top of it.  The node itself is a dimension of the metrics that are emitted via the emitter, so all you have to do is filter by one of those and you are looking at a specific node.  The druid.service property is also emitted with every metric, so you can filter by that to look at specific clusters.
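For reference, the Druid side of that pipeline is just the HTTP emitter config in runtime.properties. A minimal sketch for 0.6.x (the collector URL here is hypothetical):

druid.emitter=http
druid.emitter.http.recipientBaseUrl=http://metrics-collector.example.com:8080/druid-metrics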
 

> I do not see us exposing metrics via JMX soon (though someone could do it by
> creating an emitter if they wanted to) largely because exposing metrics via
> JMX generally implies a local aggregation/averaging of results.  For
> example, on every query, the compute nodes and broker emit metrics about
> query response time.  The broker on the whole query and the compute nodes on
> a per-segment basis.  In order to expose this via jmx, we would have to
> average out the response times local to the process rather than aggregating
> it somewhere and averaging/computing percentiles/etc. at the aggregation
> point.  Does that make sense?

> Yes, but no :)
> That is, wouldn't one want to know both the broker and compute node
> metrics and even see them separately?  Take Kafka as a working example
> - we have Kafka Producer, Kafka Broker, and Kafka Consumer metrics.
> The agent is attached to each of these node (types) and provides
> metrics about its node.  In the UI one can then see all these metrics
> separately, filter by individual host if needed, or by any other
> dimension that makes sense for filtering.
>
> Hit https://apps.sematext.com/demo and look for the SA.Prod.Kafka app to
> check out how its metrics are represented.  Based on what you wrote, I
> think one would want the same for Druid.

Hrm, to try to show what I'm talking about: what is the 99th-percentile response time for producer submissions to Kafka?  The 95th?  What about the 90th percentile of the number of bytes per message submitted?  I might've missed them, but when aggregating via JMX, you can only compute these numbers local to a specific node, not globally for the whole cluster.
 
> Re JMX exporting - Coda's metrics package makes that easy.

Yeah, my point wasn't so much about the local computing of the aggregation, but the fact that the aggregation happens locally and then gets aggregated again upon poll.

If your custom metrics allow for arbitrary slice-n-dice of dimensions emitted via JSON, though, then just taking things in via the http endpoint and filtering accordingly should actually provide meaningful per-node, per-cluster, and per-node-type views.

--Eric


Nicolas F.

Dec 5, 2013, 5:22:23 AM
to druid-de...@googlegroups.com
Eric,
an unrelated question: when you say "the http emitter is used to emit metrics to an HTTP server that pushes into kafka", I wonder what tools you use for that. A Netty-based HTTP server? An Apache or nginx plugin?
Thx

-- Nicolas

Eric Tschetter

Dec 9, 2013, 7:59:22 PM
to druid-de...@googlegroups.com
Async Jetty.  You could easily do it via a Netty server or whatever, though; you basically just need some server to take HTTP and act like a Kafka producer.  Could probably do it with an Apache or nginx plugin as well, but I know Java, so that's the tool I tend to use ;).
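Roughly, such a relay could be as small as this - a simplified, synchronous sketch, not the actual Metamarkets code; the topic name, port, and broker address are hypothetical, and it uses a later Kafka producer API for brevity:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class MetricsRelay extends AbstractHandler
{
  private final KafkaProducer<String, String> producer;

  public MetricsRelay(KafkaProducer<String, String> producer)
  {
    this.producer = producer;
  }

  @Override
  public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException
  {
    // The Druid http emitter POSTs batches of JSON events; forward them verbatim.
    String body = new String(request.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
    producer.send(new ProducerRecord<>("druid-metrics", body)); // hypothetical topic name
    response.setStatus(HttpServletResponse.SC_OK);
    baseRequest.setHandled(true);
  }

  public static void main(String[] args) throws Exception
  {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // assumption: local Kafka broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    Server server = new Server(8080); // the port your recipientBaseUrl points at
    server.setHandler(new MetricsRelay(new KafkaProducer<>(props)));
    server.start();
    server.join();
  }
}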

--Eric


Himanshu Gupta

Sep 2, 2014, 10:56:40 PM
to druid-de...@googlegroups.com
Did anyone write an Emitter implementation to push metrics to Codahale Metrics (http://metrics.dropwizard.io/)?

Thanks,
Himanshu

JagadeeshM

Apr 25, 2016, 3:26:06 PM
to Druid Development
Hi Fangjin -

Would you mind sharing a copy of the JSON spec that is being used to create a Druid datasource from the metrics emitted into Kafka?

I hope it would be a straightforward one, because the emitted metrics are in some generic format.

Thanks,
Jagadeesh

Gian Merlino

Apr 25, 2016, 7:29:03 PM
to druid-de...@googlegroups.com

Gian


JagadeeshM

Apr 25, 2016, 10:44:43 PM
to Druid Development
Thanks Gian! This is helpful to get started.

JagadeeshM

Apr 26, 2016, 10:12:25 AM
to Druid Development
Why is the dimensions: [] list empty?

JM

Fangjin Yang

Apr 26, 2016, 1:00:27 PM
to Druid Development
Setting the dimensions list to empty enables Druid to treat everything that is not the timestamp or a metric as a dimension.
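For illustration, the relevant fragment of such an ingestion spec would look roughly like this (a sketch; the timestamp column name is whatever your metric events use):

"parseSpec" : {
  "format" : "json",
  "timestampSpec" : { "column" : "timestamp", "format" : "auto" },
  "dimensionsSpec" : { "dimensions" : [] }
}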

JagadeeshM

Apr 26, 2016, 2:52:24 PM
to Druid Development
Oh, good to know. Thanks Fangjin!

Sunita Koppar

Aug 12, 2016, 3:05:18 PM
to Druid Development
Hi All,

I am trying to set up some monitoring for the Druid services. I'm relatively new to Druid, so pardon my ignorance. From what I understand from this thread as well as the online docs, Druid provides the emitter API, configured by specifying something like the below in common.runtime.properties:
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=composing
druid.emitter.logging.logLevel=info

druid.emitter.http.recipientBaseUrl=http://localhost:8082/druid_mon
#druid.monitoring.emissionPeriod=PT5m
#druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor","io.druid.client.cache.CacheMonitor","io.druid.server.metrics.HistoricalMetricsMonitor","io.druid.segment.realtime.RealtimeMetricsMonitor","io.druid.server.metrics.EventReceiverFirehoseMonitor"]

I can get the required metrics pushed to an HTTP endpoint. I have set up Kafka REST on my laptop; however, for production this will be a different Kafka cluster than the one which connects the data sources.
Please note, kafka-rest by default listens on 8082 and I couldn't change the port, so I changed the Druid broker to be on a different port. So http://localhost:8082 is actually the kafka-rest URL, and both Druid and kafka-rest seem to be working fine.
I do not see any activity or errors w.r.t. logging. Do I have to set up Tranquility as well, or should the emitter class be able to push metrics to this endpoint and I am missing some additional setup? Appreciate your guidance.

regards
Sunita

Sunita Koppar

Aug 12, 2016, 5:37:38 PM
to Druid Development
Internal communication between the services seems to be happening over HTTP. Here is what the historic.log looks like:
HTTP/1.1 200 OK
Date: Fri, 12 Aug 2016 21:03:56 GMT
Content-Type: application/x-jackson-smile
X-Druid-Query-Id: 085bd9f3-1ff5-4238-bce6-11de6a541abe
X-Druid-Response-Context: {}
Vary: Accept-Encoding, User-Agent
Transfer-Encoding: chunked
Server: Jetty(9.2.5.v20141112)
2016-08-12T21:03:56,708 DEBUG [HttpClient-Netty-Worker-3] com.metamx.http.client.NettyHttpClient - [POST http://100.64.1.62:8083/druid/v2/] Got response: 200 OK
2016-08-12T21:03:56,708 DEBUG [HttpClient-Netty-Worker-3] io.druid.client.DirectDruidClient - Initial response from url[http://100.64.1.62:8083/druid/v2/] for queryId[085bd9f3-1ff5-4238-bce6-11de6a541abe]

But nothing seems to be pushed outside of the Druid services.
regards

Slim Bouguerra

Aug 12, 2016, 6:21:36 PM
to Druid Development
Hi, have you set this?
druid.emitter.composing.emitters=["http"]

(From the docs: list of emitter modules to load, e.g. ["logging","http"]; the default is [].)
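Put together, a composing setup along the lines of the properties posted earlier in this thread would be:

druid.emitter=composing
druid.emitter.composing.emitters=["logging","http"]
druid.emitter.logging.logLevel=info
druid.emitter.http.recipientBaseUrl=http://localhost:8082/druid_mon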

Sunita Koppar

Aug 12, 2016, 7:09:05 PM
to Druid Development
Thanks for taking a look. Yes, I have it set; I missed putting it in the email.
Also, two attributes I added while troubleshooting are:

druid.request.logging.type=emitter

druid.request.logging.feed=druid_requests

Not sure if it makes a difference, though.

Sunita Koppar

Aug 12, 2016, 7:11:57 PM
to Druid Development
I just saw an error in broker.log which probably indicates the issue:
2016-08-12T23:09:07,338 DEBUG [HttpPostEmitter-1-0] com.metamx.http.client.NettyHttpClient - [POST http://localhost:8082/topics/druid_mon] starting
2016-08-12T23:09:07,346 DEBUG [HttpClient-Netty-Worker-0] com.metamx.http.client.NettyHttpClient - [POST http://localhost:8082/topics/druid_mon] messageReceived:
DefaultHttpResponse(chunked: false)
HTTP/1.1 500 Internal Server Error
Content-Length: 52
Content-Type: application/vnd.kafka.v1+json
Server: Jetty(8.1.16.v20140903)
2016-08-12T23:09:07,346 DEBUG [HttpClient-Netty-Worker-0] com.metamx.http.client.NettyHttpClient - [POST http://localhost:8082/topics/druid_mon] Got response: 500
 Internal Server Error
2016-08-12T23:09:07,346 WARN [HttpPostEmitter-1-0] com.metamx.emitter.core.HttpPostEmitter - Got exception when posting events to urlString[http://localhost:8082/
topics/druid_mon]. Resubmitting.
com.metamx.common.ISE: Emissions of events not successful[500 Internal Server Error], with message[{"error_code":500,"message":"Internal Server Error"}].
        at com.metamx.emitter.core.HttpPostEmitter$EmittingRunnable.run(HttpPostEmitter.java:304) [emitter-0.3.6.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [?:1.7.0_79]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_79]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [?:1.7.0_79]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [?:1.7.0_79]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_79]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_79]
        at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]

Will try a simple http server instead of kafka-rest.

Sunita Koppar

Aug 12, 2016, 7:43:06 PM
to Druid Development
I was able to push to a test web server - http://posttestserver.com/data/2016/08/12/16.38.37791831443
So it's not an issue on the Druid end.
regards
Sunita

Slim Bouguerra

Aug 12, 2016, 7:56:52 PM
to druid-de...@googlegroups.com
I guess you need to have a wrapper around the emitter.
Otherwise you can use this http collector, and it will send the http feed to the Kafka queue



Fangjin Yang

Aug 15, 2016, 6:07:59 PM
to Druid Development
Hi Sunita, are you running a webserver locally which can collect these events being emitted from Druid? Right now you are pushing the events to localhost, not to posttestserver.

Sunita Koppar

Aug 15, 2016, 6:14:52 PM
to Druid Development
Thanks for your response Slim.

I was actually trying to use the statsd emitter. Kafka REST is not supported on the server side, while statsd is, hence this makes a lot of sense for our use case.
I was able to create a one-jar (a jar including the StatsDClient classes) and put it in /usr/local/lib/imply-1.3.0/dist/druid/extensions/statsd-emitter/statsd-emitter-0.9.2-SNAPSHOT.one-jar.jar

Attaching an exception that shows up in the logs.
Here are the steps I am taking:
1. From the cloned druid git repository, in extensions-contrib/statsd-emitter/ I did mvn package. The jar from this was causing "ClassNotFound" for StatsDClient, which happens to be a dependency for the statsd emitter (basically the statsd client library). That was a sort of relief, since at least the plugin was being picked up.
2. I modified the pom.xml to create a one-jar, which packages all the dependencies into a fat jar. With this jar, the ClassNotFound errors were gone.
    I placed this jar in /usr/local/lib/imply-1.3.0/dist/druid/extensions/statsd-emitter
3. I modified the common.runtime.properties here: /usr/local/lib/imply-1.3.0/conf-quickstart/druid/_common
    as below:
    
druid.request.logging.type=emitter
druid.request.logging.feed=druid_requests
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=statsd  --> assuming this is right
druid.emitter.logging.logLevel=info
druid.emitter.statsd.hostname=139.49.192.112  --> tried local host also but since its mac, I have a VM with collectd. This is the IP of the VM
druid.emitter.statsd.port=8125
druid.emitter.statsd.prefix="sk"
#druid.emitter.statsd.separator="."
druid.emitter.statsd.includeHost=true

I have added it to the loadList by specifying -Ddruid.extensions.directory=dist/druid/extensions -Ddruid.extensions.loadList='["statsd-emitter"]' in jvm.conf
in the dir /usr/local/lib/imply-1.3.0/conf-quickstart/druid/broker.
Am I missing anything? How do I ensure statsd is sending messages? I changed the code to add some debug statements in StatsDEmitter.java and I don't see them in the logs, which makes me wonder if the plugin is in effect. Appreciate any help in troubleshooting.
regards
Sunita
broker.log_withStatsdEnabled

Sunita Koppar

Aug 15, 2016, 6:33:34 PM
to Druid Development
Yes Fangjin, not a webserver, but I was running kafka-rest locally. At that point, I was only trying to ensure Druid was emitting the metrics. I later changed the configs to send to posttestserver to confirm Druid was emitting metrics, and that worked well. So the issue was in the kafka-rest setup, which I didn't need anyway, so I abandoned it. After that, based on internal processes, we zeroed in on the statsd-emitter for production use. I am having some issues with the statsd-emitter now and have posted my question on this thread. Can you help me with that?

regards
Sunita

Sunita Koppar

Aug 15, 2016, 6:35:43 PM
to Druid Development
I have collectd with the statsd plugin enabled, running in a VM on my host. The IP in the config is the IP of the VM.

regards
Sunita

Sunita Koppar

Aug 15, 2016, 7:03:43 PM
to Druid Development
Also, I tried 2 runs with the logger class set to one of the below, and I still see "Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial"

druid.emitter.logging.loggerClass=StatsDEmitter

or

druid.emitter.logging.loggerClass=StatsDEmitterModule

Neither change has any effect.

Fangjin Yang

Aug 15, 2016, 8:46:04 PM
to Druid Development
Hi Sunita, did you include the statsd extension as part of your extensions loadList?

Just an FYI, statsd is not an officially supported Druid module; it is a community-contributed module.

Slim Bouguerra

Aug 15, 2016, 11:47:19 PM
to druid-de...@googlegroups.com
Can you attach the logs from the broker or historicals, especially the part where the Druid process starts?
That way we can see if it is loaded and whether there are any exceptions.


Sunita Koppar

Aug 16, 2016, 11:36:33 AM
to Druid Development
Fangjin, thanks for your thoughts. I had not added statsd-emitter to the loadList, as I did not see it take effect for httpLogger. However, based on your inputs I added it now, and I still don't see any difference.
Also, I was under the impression that statsd-emitter was authored by an Imply employee (Gian Merlino), hence I thought it would be supported currently or in the near future.

Slim, attaching the logs from the 2 runs I had, one with druid.emitter.logging.loggerClass=StatsDEmitter and
the next with druid.emitter.logging.loggerClass=StatsDEmitterModule.
Also, for one of the runs I had the collectd daemon with the statsd plugin running, and for the other I ran after killing the process. The logs appear the same with or without the process, which makes me think there is no attempt to establish a connection with the statsd daemon. I have also tried with a standalone statsd daemon and see no difference.

Let me know if logs from any other service would help. I have the statsd emitter enabled for the coordinator and historical nodes as well, but they provide almost the same information. Appreciate your help.
regards
Sunita
broker.log_withStatsDEmitter
broker.log_withStatsDEmitterModule

Fangjin Yang

Aug 16, 2016, 12:40:12 PM
to Druid Development
Sunita, StatsD is not authored by Imply; why do you think it is? You can look at the Git history to find the original author.

Second, can you please attach your common.runtime.properties?

You will need to set the following configuration in that file:
druid.emitter=statsd
druid.emitter.logging.logLevel=info

That is in addition to the required configuration listed in http://druid.io/docs/0.9.1.1/development/extensions-contrib/statsd.html
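Putting it together, a minimal common.runtime.properties for the statsd emitter might look like this (a sketch assembled from the properties already quoted in this thread; the host and port should point at whatever your statsd daemon listens on):

druid.extensions.loadList=["statsd-emitter"]
druid.emitter=statsd
druid.emitter.logging.logLevel=info
druid.emitter.statsd.hostname=127.0.0.1
druid.emitter.statsd.port=8125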

Sunita Koppar

Aug 16, 2016, 1:17:38 PM
to Druid Development
Sorry for the confusion regarding the author. Yes, I already have the settings you mentioned and have gone through the link. Attached is the common.runtime.properties file from /usr/local/lib/imply-1.3.0/conf-quickstart/druid/_common

The way I trigger druid services is:
cd /usr/local/lib/imply-1.3.0
bin/supervise -c conf/supervise/quickstart.conf

I am OK with the default dimensionMapPath.

regards
Sunita
common.runtime.properties

Sunita Koppar

Aug 16, 2016, 4:52:41 PM
to Druid Development
I was able to get this to work. The problem was using the one-jar plugin, with which the StatsDEmitter classes were not found; the shade (shaded-assembly) plugin resolved the issue.
Attaching the pom.xml for statsd-emitter which resolved the issues for me.
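For anyone hitting the same thing, the relevant change is roughly this fragment (a sketch of the standard maven-shade-plugin setup, not necessarily the exact contents of the attached pom.xml):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>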

regards
Sunita
pom.xml

Fangjin

Aug 16, 2016, 5:15:25 PM
to druid-de...@googlegroups.com
Great to hear you got this working, Sunita!


Sindhu Rajesh

Nov 1, 2017, 4:02:32 PM
to Druid Development
Hi,

I am looking for a sample dimensionMapPath configuration; a very basic example would be sufficient. Thank you.

Gian Merlino

Nov 2, 2017, 2:09:57 AM
to druid-de...@googlegroups.com
This is the default configuration: https://github.com/druid-io/druid/blob/master/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json

You could modify it and then provide the modified file as a dimensionMapPath.
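For a sense of the shape, entries in that file map each metric name to the dimensions worth keeping and a statsd metric type, roughly like this (an illustrative fragment from memory, not a verbatim copy of the file):

"query/time" : { "dimensions" : ["dataSource", "type"], "type" : "timer" },
"segment/count" : { "dimensions" : ["dataSource"], "type" : "gauge" }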

Gian


Sindhu Rajesh

Nov 6, 2017, 8:13:18 PM
to Druid Development
Thank You Gian,

My use case here is to get datasource metadata.
I can run this query against the broker node for that:
{
    "queryType" : "dataSourceMetadata",
    "dataSource": "sample_datasource"
}
But I haven't been able to modify the default metric dimensions file to add a normal Druid-related query. Really appreciate your help on this!


ravi ranjan

Apr 27, 2018, 7:37:09 AM
to Druid Development
Hi Folks,

I am new to the Druid database. I want to set up monitoring for Druid.
Can someone let me know the step-by-step process of configuring monitoring for the Druid database?

Regards
Ravi Ranjan