Benchmarking Druid 0.7.0


Samy Chambi

Dec 1, 2014, 6:00:53 PM
to druid-de...@googlegroups.com
Hi guys,

I've been able to reproduce these benchmarks: http://druid.io/blog/2014/03/17/benchmarking-druid.html, on Druid version 0.6.146. However, when I tried to use the new Druid 0.7.0 version, I ran into some trouble loading data into Druid. Can you guys please confirm that this indexing service task spec: https://github.com/druid-io/druid-benchmark/blob/master/lineitem_small.task.json, is compatible with the 0.7.0 version of Druid?

I'm using a Hadoop 0.20.2 cluster started in local mode, and this is the error I'm encountering when running the load command:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /druid/indexer/v1/task. Reason:
<pre>    javax.servlet.ServletException: com.fasterxml.jackson.databind.JsonMappingException: Instantiation of [simple type, class io.druid.indexing.common.task.HadoopIndexTask] value failed: null</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>
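(For reference, the load command is roughly equivalent to POSTing the task JSON to the overlord endpoint shown in the error above; the host and port here are assumptions for a local setup:)

curl -X POST -H 'Content-Type: application/json' \
     --data @lineitem_small.task.json \
     http://localhost:8080/druid/indexer/v1/task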

Thanks,

Greetings!
Samy.
  

Charles Allen

Dec 1, 2014, 7:00:42 PM
to druid-de...@googlegroups.com
Hadoop batch ingestion is broken in master right now. For some reason the config does not get properly persisted to the internal hadoop task. There's something funky going on in the object mapper when it's reading the values. Hopefully the internal hadoop indexer task is simply missing a magic binding to allow it to get the configs read properly, but I haven't been able to dig into it much yet.

Samy Chambi

Dec 1, 2014, 11:29:46 PM
to druid-de...@googlegroups.com
Ok. Thanks. 

Is there any other solution to get data into Druid 0.7.0? I've tried the ingestion spec you published here: https://groups.google.com/forum/#!searchin/druid-development/roaring$20bitmap/druid-development/VBNYgoafQwk/rQANLBwdmKYJ, after changing the data location path and the data source name, but it is still returning an error:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /druid/indexer/v1/task. Reason:
<pre>    javax.servlet.ServletException: com.fasterxml.jackson.databind.JsonMappingException: Unexpected token (END_OBJECT), expected FIELD_NAME: missing property 'type' that is to contain type id  (for class io.druid.indexing.common.task.Task)
 at [Source: HttpInputOverHTTP@2b744a22; line: 1, column: 2817]</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>

Thanks,

Samy.

Charles Allen

Dec 2, 2014, 2:27:44 PM
to druid-de...@googlegroups.com
Sambi, I just pushed a fix for the CLI indexer, hopefully it alleviates the errors you are seeing. Can you please try again with a fresh pull of master?

Charles Allen

Dec 2, 2014, 2:31:14 PM
to druid-de...@googlegroups.com
Worth noting: the error in your original post was the one I was seeing in the CLI hadoop indexer. The patch should fix that particular error. I haven't tried to reproduce the JSON parse one, but I suspect it has something to do with errors not being propagated correctly.

Any cases where this JSON one shows up would be great to document.

Charles Allen

Dec 2, 2014, 2:31:51 PM
to druid-de...@googlegroups.com
and one of these days I'll stop calling you Sambi  >.<



Samy Chambi

Dec 2, 2014, 4:29:00 PM
to druid-de...@googlegroups.com
Hi Charles,

Thanks for the fix. I'm still getting the same errors after running a new Druid cluster using the current code on GitHub.

I'm still running the indexing service as described in the documentation: java -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:..hadoop-0.20.2/conf:config/overlord io.druid.cli.Main server overlord. After that, when the load command is launched, lineitem_small.task.json is sent to the indexing service. The following error is encountered at that point:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /druid/indexer/v1/task. Reason:
<pre>    javax.servlet.ServletException: com.fasterxml.jackson.databind.JsonMappingException: Instantiation of [simple type, class io.druid.indexing.common.task.HadoopIndexTask] value failed: null</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>   

>> and one of these days I'll stop calling you Sambi  >.<

Cool!

Thanks,

Samy.

Charles Allen

Dec 3, 2014, 1:41:30 PM
to druid-de...@googlegroups.com
Hi Samy, the good news is that I can reproduce the error. Investigating now.

Charles Allen

Dec 3, 2014, 2:11:22 PM
to druid-de...@googlegroups.com
Hi Samy, can you please try the following ingestion spec:

You will need to adjust 
ioConfig -> inputSpec -> paths
and
dataSchema -> dataSource

{
    "hadoopCoordinates": "org.apache.hadoop:hadoop-core:0.20.205",
    "spec": {
        "dataSchema": {
            "dataSource": "test_small",
            "granularitySpec": {
                "intervals": [
                    "1980-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z"
                ],
                "queryGranularity": "day",
                "type": "arbitrary"
            },
            "metricsSpec": [
                {
                    "name": "count",
                    "type": "count"
                },
                {
                    "fieldName": "L_QUANTITY",
                    "name": "L_QUANTITY",
                    "type": "longSum"
                },
                {
                    "fieldName": "L_EXTENDEDPRICE",
                    "name": "L_EXTENDEDPRICE",
                    "type": "doubleSum"
                },
                {
                    "fieldName": "L_DISCOUNT",
                    "name": "L_DISCOUNT",
                    "type": "doubleSum"
                },
                {
                    "fieldName": "L_TAX",
                    "name": "L_TAX",
                    "type": "doubleSum"
                }
            ],
            "parser": {
                "parseSpec": {
                    "columns": [
                        "l_orderkey",
                        "l_partkey",
                        "l_suppkey",
                        "l_linenumber",
                        "l_quantity",
                        "l_extendedprice",
                        "l_discount",
                        "l_tax",
                        "l_returnflag",
                        "l_linestatus",
                        "l_shipdate",
                        "l_commitdate",
                        "l_receiptdate",
                        "l_shipinstruct",
                        "l_shipmode",
                        "l_comment"
                    ],
                    "delimiter": "|",
                    "dimensionsSpec": {
                        "dimensionExclusions": [
                            "l_shipdate",
                            "L_TAX",
                            "count",
                            "L_QUANTITY",
                            "L_DISCOUNT",
                            "L_EXTENDEDPRICE"
                        ],
                        "dimensions": [
                            "l_orderkey",
                            "l_suppkey",
                            "l_linenumber",
                            "l_returnflag",
                            "l_linestatus",
                            "l_commitdate",
                            "l_receiptdate",
                            "l_shipinstruct",
                            "l_shipmode",
                            "l_comment"
                        ]
                    },
                    "format": "tsv",
                    "timestampSpec": {
                        "column": "l_shipdate",
                        "format": "yyyy-MM-dd"
                    }
                },
                "type": "string"
            }
        },
        "ioConfig": {
            "inputSpec": {
                "paths": "lineitem.tbl.small.gz",
                "type": "static"
            },
            "type": "hadoop"
        }
    },
    "type": "index_hadoop"
}

Charles Allen

Dec 3, 2014, 6:27:18 PM
to druid-de...@googlegroups.com
Hi Sammy, with a recent patch the hadoop stuff "should" work. I've been able to get ingestion to go with the following hadoop task:

{
    "hadoopCoordinates": "org.apache.hadoop:hadoop-client:2.3.0",
    "spec": {
        "dataSchema": {
            "dataSource": "test_small2",
            ...
        },
        "ioConfig": {
            "inputSpec": {
                "paths": "lineitem.small.tbl",
                "type": "static"
            },
            "type": "hadoop"
        }
    },
    "type": "index_hadoop"
}

And I can get the Firehose ingestion started with the following, although performance is... bad for the local firehose right now:

{
    "spec": {
        "dataSchema": {
            "dataSource": "test_small2",
            "granularitySpec": {
                "intervals": [
                    "1980-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z"
                ],
                "queryGranularity": "day",
                "type": "uniform"
            },
            ...
        },
        "ioConfig": {
            "type": "index",
            "firehose": {"baseDir": "/Users/charlesallen/bin/wrk", "filter": "lineitem.small.tbl", "type": "local"}
        }
    },
    "type": "index"
}


Samy Chambi

Dec 3, 2014, 6:28:18 PM
to druid-de...@googlegroups.com
Hi Charles,

The ingestion spec worked. I got the task id. Good job!

Actually, I'm having some trouble ingesting new tasks that are already present in TaskLock. After processing the received task, the indexing service returns a task failed log: 2014-12-03 23:04:19,129 INFO [pool-6-thread-1] io.druid.indexing.overlord.TaskQueue - Task FAILED: HadoopIndexTask{id=index_hadoop_tpch_lineitem_small_2014-12-03T23:04:09.899Z, type=index_hadoop, dataSource=tpch_lineitem_small} (3369 run duration)

Is there any way to drop the tasks present in TaskLock?

Thanks,

Samy.  

Samy Chambi

Dec 3, 2014, 6:33:41 PM
to druid-de...@googlegroups.com
Please note also that the new tasks are not added to the druid_segments table in the MySQL db.

Charles Allen

Dec 3, 2014, 6:36:39 PM
to druid-de...@googlegroups.com
If you're using the hadoop one and haven't applied this patch: https://github.com/metamx/druid/pull/934/files (not yet in master as of this message), then it is probably the error that merge request fixes. You can probably cherry-pick 2e6c25493738e45daf4b71fc852a6c3d62cd73f3 if you just want to get your local master branch functional for hadoop ingestion.

xrvl is cleaning up the PR a bit more before it goes into master, but 2e6c25493738e45daf4b71fc852a6c3d62cd73f3 seems to at least be stable enough to do some local testing.
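(For reference, assuming a local clone of the repo, the cherry-pick plus rebuild would look something like this:)

git checkout master
git cherry-pick 2e6c25493738e45daf4b71fc852a6c3d62cd73f3
mvn clean install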

Samy Chambi

Dec 4, 2014, 2:30:04 PM
to druid-de...@googlegroups.com
Hi Charles,

I've pulled the current code on GitHub, which has the 2e6c25493738e45daf4b71fc852a6c3d62cd73f3 change merged. I'm still getting a failed status when ingesting the data segment. The metadata wasn't ingested either. Please see below:

2014-12-04 18:33:20,775 INFO [pool-6-thread-2] io.druid.indexing.overlord.ForkingTaskRunner - Logging task index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z output to: /tmp/persistent/task/index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z/a67d0315-9806-47bf-bfe5-9ed5a09cecb1/log
2014-12-04 18:33:26,053 INFO [qtp264498757-24] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z]: LockTryAcquireAction{interval=1980-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z}
2014-12-04 18:33:26,054 INFO [qtp264498757-24] io.druid.indexing.overlord.TaskLockbox - Task[index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z] already present in TaskLock[index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z]
2014-12-04 18:33:29,369 INFO [qtp264498757-23] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z]: LockListAction{}
2014-12-04 18:33:29,818 INFO [pool-6-thread-2] io.druid.indexing.overlord.ForkingTaskRunner - Process exited with status[0] for task: index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z
2014-12-04 18:33:29,819 INFO [pool-6-thread-2] io.druid.indexing.common.tasklogs.FileTaskLogs - Wrote task log to: log/index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z.log
2014-12-04 18:33:29,819 INFO [pool-6-thread-2] io.druid.indexing.overlord.ForkingTaskRunner - Removing temporary directory: /tmp/persistent/task/index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z/a67d0315-9806-47bf-bfe5-9ed5a09cecb1
2014-12-04 18:33:29,820 INFO [pool-6-thread-2] io.druid.indexing.overlord.TaskQueue - Received FAILED status for task: index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z
2014-12-04 18:33:29,820 INFO [pool-6-thread-2] io.druid.indexing.overlord.ForkingTaskRunner - Ignoring request to cancel unknown task: index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z
2014-12-04 18:33:29,821 INFO [pool-6-thread-2] io.druid.indexing.overlord.HeapMemoryTaskStorage - Updating task index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z to status: TaskStatus{id=index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z, status=FAILED, duration=3317}
2014-12-04 18:33:29,821 INFO [pool-6-thread-2] io.druid.indexing.overlord.TaskLockbox - Removing task[index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z] from TaskLock[index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z]
2014-12-04 18:33:29,821 INFO [pool-6-thread-2] io.druid.indexing.overlord.TaskLockbox - TaskLock is now empty: TaskLock{groupId=index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z, dataSource=tpch_lineitem_small, interval=1980-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z, version=2014-12-04T18:33:20.768Z}
2014-12-04 18:33:29,821 INFO [pool-6-thread-2] io.druid.indexing.overlord.TaskQueue - Task done: HadoopIndexTask{id=index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z, type=index_hadoop, dataSource=tpch_lineitem_small}
2014-12-04 18:33:29,822 INFO [pool-6-thread-2] io.druid.indexing.overlord.TaskQueue - Task FAILED: HadoopIndexTask{id=index_hadoop_tpch_lineitem_small_2014-12-04T18:33:20.767Z, type=index_hadoop, dataSource=tpch_lineitem_small} (3317 run duration)
2014-12-04 18:33:37,525 INFO [TaskQueue-StorageSync] io.druid.indexing.overlord.TaskQueue - Synced 0 tasks from storage (0 tasks added, 0 tasks removed).


Thanks,

Samy.

Samy Chambi

Dec 11, 2014, 5:35:38 PM
to druid-de...@googlegroups.com
Hi,

Even with a log stating "Task FAILED", I found that the new segments were successfully created and persisted in local deep storage, and I've been able to query them.

Greetings,
Samy. 

Charles Allen

Dec 12, 2014, 8:19:40 PM
to druid-de...@googlegroups.com
If you happen to have the log files from the run, can you send them my way?

I've made a few fixes to various ingestion stuff over the last few weeks/days. I have not run into the issue you describe where a segment reports failure but still shows up in deep storage.

Samy Chambi

Dec 12, 2014, 10:19:31 PM
to druid-de...@googlegroups.com

I'm running the current code on the master branch. Here is the indexing service log after sending a new Hadoop indexing task:

2014-12-13 02:56:44,087 INFO [pool-6-thread-1] io.druid.indexing.overlord.ForkingTaskRunner - Logging task index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z output to: /tmp/persistent/task/index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z/14ac891f-2007-4cec-b64b-df560e727035/log
2014-12-13 02:56:49,386 INFO [qtp1681108815-22] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z]: LockTryAcquireAction{interval=1980-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z}
2014-12-13 02:56:49,389 INFO [qtp1681108815-22] io.druid.indexing.overlord.TaskLockbox - Task[index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z] already present in TaskLock[index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z]
2014-12-13 02:56:52,772 INFO [qtp1681108815-25] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z]: LockListAction{}
2014-12-13 02:56:53,224 INFO [pool-6-thread-1] io.druid.indexing.overlord.ForkingTaskRunner - Process exited with status[0] for task: index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z
2014-12-13 02:56:53,227 INFO [pool-6-thread-1] io.druid.indexing.common.tasklogs.FileTaskLogs - Wrote task log to: log/index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z.log
2014-12-13 02:56:53,231 INFO [pool-6-thread-1] io.druid.indexing.overlord.ForkingTaskRunner - Removing temporary directory: /tmp/persistent/task/index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z/14ac891f-2007-4cec-b64b-df560e727035
2014-12-13 02:56:53,238 INFO [pool-6-thread-1] io.druid.indexing.overlord.TaskQueue - Received FAILED status for task: index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z
2014-12-13 02:56:53,238 INFO [pool-6-thread-1] io.druid.indexing.overlord.ForkingTaskRunner - Ignoring request to cancel unknown task: index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z
2014-12-13 02:56:53,239 INFO [pool-6-thread-1] io.druid.indexing.overlord.HeapMemoryTaskStorage - Updating task index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z to status: TaskStatus{id=index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z, status=FAILED, duration=3382}
2014-12-13 02:56:53,239 INFO [pool-6-thread-1] io.druid.indexing.overlord.TaskLockbox - Removing task[index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z] from TaskLock[index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z]
2014-12-13 02:56:53,239 INFO [pool-6-thread-1] io.druid.indexing.overlord.TaskLockbox - TaskLock is now empty: TaskLock{groupId=index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z, dataSource=tpch_lineitem_small, interval=1980-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z, version=2014-12-13T02:56:43.998Z}
2014-12-13 02:56:53,240 INFO [pool-6-thread-1] io.druid.indexing.overlord.TaskQueue - Task done: HadoopIndexTask{id=index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z, type=index_hadoop, dataSource=tpch_lineitem_small}
2014-12-13 02:56:53,241 INFO [pool-6-thread-1] io.druid.indexing.overlord.TaskQueue - Task FAILED: HadoopIndexTask{id=index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z, type=index_hadoop, dataSource=tpch_lineitem_small} (3382 run duration)

When I check in "/tmp/persistent/task/", I find that the "index_hadoop_tpch_lineitem_small_2014-12-13T02:56:43.993Z" file has been added.

Samy.

Samy Chambi

Dec 18, 2014, 5:08:50 PM
to druid-de...@googlegroups.com
Hi,

>> I have not run into the issue you describe where a segment reports failure but still shows up in deep storage.

I think it was due to a conflict caused by using two different versions of Druid in local mode on the same server. After removing the segment's row from the MySQL druid_segments table and restarting a Druid cluster using the 0.7.0 version, I was still not able to ingest the data segment by sending a new indexing task.

When I checked the task's log file, I found that it was due to insufficient DirectMemorySize on my server. Please see below the error reported in the task's log file:

1) Not enough direct memory.  Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[3 728 211 968], memoryNeeded[8 589 934 592] = druid.processing.buffer.sizeBytes[1 073 741 824] * ( druid.processing.numThreads[7] + 1 )
  at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:82)
  at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:82)
  while locating io.druid.collections.StupidPool<java.nio.ByteBuffer> annotated with @io.druid.guice.annotations.Global()
    for parameter 1 at io.druid.query.groupby.GroupByQueryEngine.<init>(GroupByQueryEngine.java:79)
  at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:82)
  while locating io.druid.query.groupby.GroupByQueryEngine
    for parameter 0 at io.druid.query.groupby.GroupByQueryRunnerFactory.<init>(GroupByQueryRunnerFactory.java:79)
  at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:79)
  while locating io.druid.query.groupby.GroupByQueryRunnerFactory
  while locating io.druid.query.QueryRunnerFactory annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=24, type=MAPBINDER)
  at io.druid.guice.DruidBinders.queryRunnerFactoryBinder(DruidBinders.java:38)
  while locating java.util.Map<java.lang.Class<? extends io.druid.query.Query>, io.druid.query.QueryRunnerFactory>
    for parameter 0 at io.druid.query.DefaultQueryRunnerFactoryConglomerate.<init>(DefaultQueryRunnerFactoryConglomerate.java:36)
  while locating io.druid.query.DefaultQueryRunnerFactoryConglomerate
  at io.druid.guice.StorageNodeModule.configure(StorageNodeModule.java:55)
  while locating io.druid.query.QueryRunnerFactoryConglomerate
    for parameter 9 at io.druid.indexing.common.TaskToolboxFactory.<init>(TaskToolboxFactory.java:78)
  at io.druid.cli.CliPeon$1.configure(CliPeon.java:129)
  while locating io.druid.indexing.common.TaskToolboxFactory
    for parameter 0 at io.druid.indexing.overlord.ThreadPoolTaskRunner.<init>(ThreadPoolTaskRunner.java:69)
  at io.druid.cli.CliPeon$1.configure(CliPeon.java:155)
  while locating io.druid.indexing.overlord.ThreadPoolTaskRunner
  while locating io.druid.query.QuerySegmentWalker
    for parameter 3 at io.druid.server.QueryResource.<init>(QueryResource.java:93)
  while locating io.druid.server.QueryResource

Actually, I have 10 GB of free RAM. Even when setting the server parameter to -XX:MaxDirectMemorySize=10G, I'm still getting this error; perhaps Druid needs a lot more space than memoryNeeded[8 589 934 592] to ingest the lineitem.tbl.gz data?
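(Working through the sizing formula from that error, with the intended settings — assuming the peon actually picks up numThreads=1:)

# formula, as printed by the error:
#   memoryNeeded = druid.processing.buffer.sizeBytes * (druid.processing.numThreads + 1)
# with numThreads=7 (detected from the CPUs): 1073741824 * 8 = 8589934592 bytes (~8 GiB)
# with numThreads=1 (intended):               1073741824 * 2 = 2147483648 bytes (~2 GiB)
# note: the peon runs as a separate forked JVM, so the overlord's own
# -XX:MaxDirectMemorySize=10G does not apply to it (see druid.indexer.runner.javaOpts),
# which would explain maxDirectMemory[3 728 211 968] despite the 10G setting.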

Another thing I wanted to point out is that I'm specifying only 1 thread in the overlord config, but Druid calculates the needed memory size based on the number of available processors on the machine (please see the link), which is 7 in this case, instead of the number of threads specified in the overlord config file. Shouldn't this be fixed?

Thanks,

Samy. 

Fangjin Yang

Dec 18, 2014, 8:26:35 PM
to druid-de...@googlegroups.com
Samy, can you share your config?

Samy Chambi

Dec 18, 2014, 10:49:09 PM
to druid-de...@googlegroups.com
Hi Fangjin,

Here is the overlord config file :

-server
-Xmx2g
-Xms2g
-XX:MaxDirectMemorySize=10G
-Ddruid.extensions.coordinates=[\"io.druid.extensions:mysql-metadata-storage:0.7.0-SNAPSHOT\"]
-Duser.timezone=UTC
-Dfile.encoding=UTF-8

-Ddruid.host=localhost
-Ddruid.port=8080
-Ddruid.service=overlord

-Ddruid.zk.service.host=localhost

-Ddruid.db.connector.connectURI=jdbc:mysql://localhost:3306/druid
-Ddruid.db.connector.user=druid
-Ddruid.db.connector.password=diurd

-Ddruid.selectors.indexing.serviceName=overlord
-Ddruid.indexer.queue.startDelay=PT0M
-Ddruid.indexer.runner.javaOpts="-server -Xmx2g"
-Ddruid.indexer.runner.startPort=8088
-Ddruid.indexer.fork.property.druid.processing.numThreads=1
-Ddruid.indexer.fork.property.druid.computation.buffer.size=1000000000

Samy.

Charles Allen

Dec 18, 2014, 11:55:53 PM
to druid-de...@googlegroups.com
I usually encounter this error when druid.processing.numThreads isn't set correctly. You can see in the log that it *thinks* it's supposed to be 7, but the forking properties you have should set it to 1.

I was encountering some other peon settings issues today where some tasks weren't picking up settings I expected. No hard evidence either way yet though.

Charles Allen

Dec 18, 2014, 11:58:35 PM
to druid-de...@googlegroups.com
Can you try setting the druid.processing.numThreads to 1 in the overlord as well? 

Also, can you look at the command line that gets output in the overlord/middleManager log right when the task is run, to make sure the peon's properties are populated correctly (specifically druid.processing.numThreads)?
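(Something along these lines against the overlord log should surface it — the exact log wording and file path here are assumptions:)

grep -i "command" overlord.log | grep -o "druid.processing.numThreads=[0-9]*"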

Samy Chambi

Dec 19, 2014, 11:25:16 AM
to druid-de...@googlegroups.com
I've added the property druid.processing.numThreads=1 to the overlord config and sent a new indexing task, and it was successfully processed:
2014-12-19 01:15:48,616 INFO [task-runner-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_tpch_lineitem_small_2014-12-19T00:28:56.252-05:00",
  "status" : "SUCCESS",
  "duration" : 2806565
}

However, I observed that the segment's row wasn't added to the druid_segments table in the MySQL database. When checking the task log file, I found this log output:
2014-12-19 00:29:05,559 INFO [task-runner-0] io.druid.indexer.HadoopDruidIndexerJob - No metadataStorageUpdaterJob set in the config. This is cool if you are running a hadoop index task, otherwise nothing will be uploaded to database.
 
How can I fix that so that Druid sends the segment's row to the druid_segments table?

One more thing: should I add this property: druid.processing.bitmap.type='roaring' to the overlord config to switch to Roaring bitmap compression?

Thanks,

Samy.

Samy Chambi

Dec 19, 2014, 11:30:32 AM
to druid-de...@googlegroups.com
Also,

>> Also, can you look at the command line that gets output from the overlord/middleManager log right when the task is run and look to make sure the peon's properties are populated correctly (specifically druid.processing.numThreads)?

I checked that, and the druid.processing.numThreads=1 property was correctly picked up.

Samy.

Charles Allen

Dec 30, 2014, 3:52:46 PM
to druid-de...@googlegroups.com
Hi Samy, back at it after a winter break with sporadic availability.

Have you added this option to the overlord?

-Ddruid.indexer.runner.type=local


Samy Chambi

Dec 30, 2014, 4:02:04 PM
to druid-de...@googlegroups.com
Hi Charles,

>> ...after a winter break with sporadic availability.

Happy for you!

>> Have you added this option to the overlord?
>> -Ddruid.indexer.runner.type=local

No, not yet. I'll try that soon...

Thanks,

Samy.

Charles Allen

Dec 30, 2014, 6:36:55 PM
to druid-de...@googlegroups.com
FYI, the overlord / middle manager relationship is kind of annoying to configure currently. If you set the runner to local (instead of remote), then the overlord should spawn the tasks directly, and the tasks should be easier to configure since they inherit druid.* properties (see io.druid.indexing.overlord.ForkingTaskRunner).
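A minimal sketch of the relevant overlord properties for that setup (values illustrative):

# run peons from the overlord itself instead of a remote middle manager
druid.indexer.runner.type=local
# forked peons then inherit the overlord's druid.* properties;
# per-peon overrides can still be passed via the fork prefix:
druid.indexer.fork.property.druid.processing.numThreads=1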

Samy Chambi

Dec 30, 2014, 8:47:27 PM
to druid-de...@googlegroups.com
I've added the option -Ddruid.indexer.runner.type=local to the overlord; however, no row is inserted into the druid_segments table after the ingestion succeeds.

Samy Chambi

Dec 31, 2014, 10:09:26 AM
to druid-de...@googlegroups.com
Hi guys,

I've launched an indexing task using the overlord of an old version of Druid (0.6.146), and the segment's row was correctly inserted into the druid_segments table after the indexing task completed successfully. However, with the current Druid version, no row related to the processed segment is added to the druid_segments table after the indexing task completes successfully.

Can you give me some direction on how to fix that problem?

Thanks.  

Nishant Bangarwa

Dec 31, 2014, 11:30:26 AM
to druid-de...@googlegroups.com
Hi Samy, 

Can you share the runtime.props for the overlord that you are using with 0.7?
When running with Druid 0.7, have you added the mysql-metadata-storage module and updated the metadata properties in runtime.props?

FWIW, you will need to rename the druid.db.* properties to druid.metadata.storage.*.
Here are sample metadata storage configs:

# Metadata Storage
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd

you can also look at sample configs for 0.7 from https://github.com/druid-io/druid/tree/master/examples/config



Samy Chambi

Dec 31, 2014, 1:12:03 PM
to druid-de...@googlegroups.com
Hi Nishant,

I'm using this config for the overlord:

-server
-Xmx2g
-Xms2g
-XX:MaxDirectMemorySize=2g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8

-Ddruid.host=localhost
-Ddruid.port=8087
-Ddruid.service=overlord

-Ddruid.zk.service.host=localhost

-Ddruid.extensions.coordinates=[\"io.druid.extensions:mysql-metadata-storage:0.7.0-SNAPSHOT\"]

-Ddruid.db.connector.connectURI=jdbc:mysql://localhost:3306/druid
-Ddruid.db.connector.user=druid
-Ddruid.db.connector.password=diurd

-Ddruid.selectors.indexing.serviceName=overlord
-Ddruid.indexer.queue.startDelay=PT0M
-Ddruid.indexer.runner.javaOpts="-server -Xmx2g"
-Ddruid.indexer.runner.startPort=8088
druid.processing.numThreads=1
-Ddruid.indexer.runner.type=local
-Ddruid.indexer.fork.property.druid.processing.numThreads=1
-Ddruid.indexer.fork.property.druid.computation.buffer.size=100000000

When I change the prefixes of the metadata storage stuff to druid.metadata.storage, all seems to be correct; however, adding this option: druid.metadata.storage.type=mysql causes errors when starting an overlord node:

~/druid-services-0.7.0-SNAPSHOT$ java -classpath lib/*:/home/samytto/druid-services-0.6.146/hadoop-1.0.3/conf:config/overlord io.druid.cli.Main server overlord
2014-12-31 12:57:46,296 INFO [main] io.druid.guice.PropertiesModule - Loading properties from runtime.properties
2014-12-31 12:57:46,325 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.0.1.Final
2014-12-31 12:57:46,742 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[class io.druid.guice.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, coordinates=[], defaultVersion='0.7.0-SNAPSHOT', localRepository='/home/samytto/.m2/repository', remoteRepositories=[http://repo1.maven.org/maven2/, https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local]}]
Exception in thread "main" com.google.inject.CreationException: Guice creation errors:

1) Unknown provider[mysql] of Key[type=io.druid.metadata.MetadataStorageConnector, annotation=[none]], known options[[derby]]
  at io.druid.guice.PolyBind.createChoiceWithDefault(PolyBind.java:67)
  while locating io.druid.metadata.MetadataStorageConnector
  at io.druid.guice.JacksonConfigManagerModule.getConfigManager(JacksonConfigManagerModule.java:52)
  at io.druid.guice.JacksonConfigManagerModule.getConfigManager(JacksonConfigManagerModule.java:52)
  while locating io.druid.common.config.ConfigManager
    for parameter 0 at io.druid.common.config.JacksonConfigManager.<init>(JacksonConfigManager.java:43)
  at io.druid.guice.JacksonConfigManagerModule.configure(JacksonConfigManagerModule.java:41)
  while locating io.druid.common.config.JacksonConfigManager
    for parameter 0 at io.druid.guice.JacksonConfigProvider.configure(JacksonConfigProvider.java:80)
  at io.druid.guice.JacksonConfigProvider.bind(JacksonConfigProvider.java:38)

1 error
        at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:448)
        at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:176)
        at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:110)
        at com.google.inject.Guice.createInjector(Guice.java:96)
        at com.google.inject.Guice.createInjector(Guice.java:73)
        at com.google.inject.Guice.createInjector(Guice.java:62)
        at io.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:371)
        at io.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:56)
        at io.druid.cli.ServerRunnable.run(ServerRunnable.java:39)
        at io.druid.cli.Main.main(Main.java:90)   

I'm sending this indexing task:

{
    "spec": {
        "dataSchema": {
            "dataSource": "tpch_lineitem_small",
            "granularitySpec": {
                "intervals": [
                    "1980/2020"
                ...
                    "dimensionsSpec": {
                        "dimensions": [
                            "l_orderkey",
                            "l_partkey",
                            "l_suppkey",
                            "l_linenumber",
                            "l_returnflag",
                            "l_linestatus",
                            "l_shipdate",
                            "l_commitdate",
                            "l_receiptdate",
                            "l_shipinstruct",
                            "l_shipmode",
                            "l_comment"
                        ]
                    },
                    "format": "tsv",
                    "timestampSpec": {
                        "column": "l_shipdate",
                        "format": "yyyy-MM-dd"
                    }
                },
                "type": "string"
            }
        },
        "ioConfig": {
            "inputSpec": {
                "paths": "/home/druid-benchmark/lineitem.tbl.gz",
                "type": "static"
            },
            "type": "hadoop"
        }
    },
    "type": "index_hadoop"
}


Samy.

Fangjin Yang

Dec 31, 2014, 1:14:45 PM
to druid-de...@googlegroups.com
-Ddruid.db.connector.connectURI=jdbc:mysql://localhost:3306/druid
-Ddruid.db.connector.user=druid
-Ddruid.db.connector.password=diurd

Replace druid.db.* with druid.metadata.storage.*
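i.e., applied to the lines quoted above:

-Ddruid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
-Ddruid.metadata.storage.connector.user=druid
-Ddruid.metadata.storage.connector.password=diurd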

Samy Chambi

Dec 31, 2014, 1:16:21 PM
to druid-de...@googlegroups.com
After a little while, I found these errors in the overlord console:

2014-12-31 13:12:43,062 WARN [config-manager-0] io.druid.common.config.ConfigManager - Exception when checking property[worker.config]
org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: java.sql.SQLException: Cannot create JDBC driver of class 'org.apache.derby.jdbc.ClientDriver' for connect URL 'jdbc:mysql://localhost:3306/druid'
        at org.skife.jdbi.v2.DBI.open(DBI.java:210)
        at org.skife.jdbi.v2.DBI.withHandle(DBI.java:257)
        at io.druid.metadata.SQLMetadataConnector.lookup(SQLMetadataConnector.java:335)
        at io.druid.common.config.ConfigManager.poll(ConfigManager.java:108)
        at io.druid.common.config.ConfigManager.access$600(ConfigManager.java:44)
        at io.druid.common.config.ConfigManager$PollingCallable.call(ConfigManager.java:250)
        at io.druid.common.config.ConfigManager$PollingCallable.call(ConfigManager.java:234)
        at com.metamx.common.concurrent.ScheduledExecutors$2.run(ScheduledExecutors.java:99)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Cannot create JDBC driver of class 'org.apache.derby.jdbc.ClientDriver' for connect URL 'jdbc:mysql://localhost:3306/druid'
        at org.apache.commons.dbcp2.BasicDataSource.createConnectionFactory(BasicDataSource.java:2023)
        at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:1897)
        at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1413)
        at org.skife.jdbi.v2.DataSourceConnectionFactory.openConnection(DataSourceConnectionFactory.java:36)
        at org.skife.jdbi.v2.DBI.open(DBI.java:192)
        ... 14 more

Samy Chambi

Dec 31, 2014, 1:24:50 PM
to druid-de...@googlegroups.com
I did, and I'm getting this error after starting an overlord node:

2014-12-31 13:23:25,885 WARN [config-manager-0] io.druid.common.config.ConfigManager - Exception when checking property[worker.config]
org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: java.sql.SQLException: Cannot create JDBC driver of class 'org.apache.derby.jdbc.ClientDriver' for connect URL 'jdbc:mysql://localhost:3306/druid'
        at org.skife.jdbi.v2.DBI.open(DBI.java:210)
        at org.skife.jdbi.v2.DBI.withHandle(DBI.java:257)
        at io.druid.metadata.SQLMetadataConnector.lookup(SQLMetadataConnector.java:335)
        at io.druid.common.config.ConfigManager.poll(ConfigManager.java:108)
        at io.druid.common.config.ConfigManager.access$600(ConfigManager.java:44)
        at io.druid.common.config.ConfigManager$PollingCallable.call(ConfigManager.java:250)
        at io.druid.common.config.ConfigManager$PollingCallable.call(ConfigManager.java:234)
        at com.metamx.common.concurrent.ScheduledExecutors$2.run(ScheduledExecutors.java:99)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Cannot create JDBC driver of class 'org.apache.derby.jdbc.ClientDriver' for connect URL 'jdbc:mysql://localhost:3306/druid'
        at org.apache.commons.dbcp2.BasicDataSource.createConnectionFactory(BasicDataSource.java:2023)
        at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:1897)
        at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1413)
        at org.skife.jdbi.v2.DataSourceConnectionFactory.openConnection(DataSourceConnectionFactory.java:36)
        at org.skife.jdbi.v2.DBI.open(DBI.java:192)
        ... 14 more
Caused by: java.sql.SQLException: No suitable driver
        at org.apache.commons.dbcp2.BasicDataSource.createConnectionFactory(BasicDataSource.java:2014)
        ... 18 more

Here are the new overlord configs:

-server
-Xmx2g
-Xms2g
-XX:MaxDirectMemorySize=2g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8

-Ddruid.host=localhost
-Ddruid.port=8087
-Ddruid.service=overlord

-Ddruid.zk.service.host=localhost

-Ddruid.extensions.coordinates=[\"io.druid.extensions:mysql-metadata-storage:0.7.0-SNAPSHOT\"]

druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd

-Ddruid.selectors.indexing.serviceName=overlord
-Ddruid.indexer.queue.startDelay=PT0M
-Ddruid.indexer.runner.javaOpts="-server -Xmx2g"
-Ddruid.indexer.runner.startPort=8088
druid.processing.numThreads=1
-Ddruid.indexer.runner.type=local
-Ddruid.indexer.fork.property.druid.processing.numThreads=1
-Ddruid.indexer.fork.property.druid.computation.buffer.size=100000000



Charles Allen

Dec 31, 2014, 3:50:02 PM
to druid-de...@googlegroups.com
druid.metadata.storage.type=mysql

is missing

Samy Chambi

Dec 31, 2014, 5:45:02 PM
to druid-de...@googlegroups.com
This option: druid.metadata.storage.type=mysql generates the following error when I add it to the configs:
Exception in thread "main" com.google.inject.CreationException: Guice creation errors:

1) Unknown provider[mysql] of Key[type=io.druid.metadata.MetadataStorageConnector, annotation=[none]], known options[[derby]]
  at io.druid.guice.PolyBind.createChoiceWithDefault(PolyBind.java:67)
  while locating io.druid.metadata.MetadataStorageConnector
  at io.druid.guice.JacksonConfigManagerModule.getConfigManager(JacksonConfigManagerModule.java:52)
  at io.druid.guice.JacksonConfigManagerModule.getConfigManager(JacksonConfigManagerModule.java:52)
  while locating io.druid.common.config.ConfigManager
    for parameter 0 at io.druid.common.config.JacksonConfigManager.<init>(JacksonConfigManager.java:43)
  at io.druid.guice.JacksonConfigManagerModule.configure(JacksonConfigManagerModule.java:41)
  while locating io.druid.common.config.JacksonConfigManager
    for parameter 0 at io.druid.guice.JacksonConfigProvider.configure(JacksonConfigProvider.java:80)
  at io.druid.guice.JacksonConfigProvider.bind(JacksonConfigProvider.java:38)

1 error
        at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:448)
        at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:176)
        at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:110)
        at com.google.inject.Guice.createInjector(Guice.java:96)
        at com.google.inject.Guice.createInjector(Guice.java:73)
        at com.google.inject.Guice.createInjector(Guice.java:62)
        at io.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:371)
        at io.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:56)
        at io.druid.cli.ServerRunnable.run(ServerRunnable.java:39)
        at io.druid.cli.Main.main(Main.java:90)

Fangjin Yang

Dec 31, 2014, 6:34:05 PM
to druid-de...@googlegroups.com
Samy, did you run 'mvn clean install' beforehand to install the module?

If so, can you share the full logs of your coordinator? It seems like the module isn't being registered.

Charles Allen

Dec 31, 2014, 8:53:44 PM
to druid-de...@googlegroups.com
The other option is simply copying the jar into the classpath and ensuring 
druid.extensions.searchCurrentClassloader=true


Samy Chambi

Jan 1, 2015, 2:21:59 AM
to druid-de...@googlegroups.com
Hi guys,

The indexing task completed successfully, and a new row corresponding to the ingested segment was added to the druid_segments table.

Two jar files were added to the classpath in order to successfully ingest the metadata: mysql-metadata-storage-0.7.0-SNAPSHOT.jar and a MySQL JDBC connector, like mysql-connector-java-5.1.34-bin.jar. I think it would be nice if these two jars could be found in the lib folder of the self-contained Druid distribution right after the build process.
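(For anyone following along, that amounts to something like this, with the jar names from above and the paths assumed:)

cp mysql-metadata-storage-0.7.0-SNAPSHOT.jar mysql-connector-java-5.1.34-bin.jar lib/
java -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath "lib/*:config/overlord" io.druid.cli.Main server overlord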

The overlord config used is:

-server
-Xmx2g
-Xms2g
-XX:MaxDirectMemorySize=2g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8


-Ddruid.host=localhost
-Ddruid.port=8087
-Ddruid.service=overlord


-Ddruid.zk.service.host=localhost


-Ddruid.extensions.coordinates=["io.druid.extensions:mysql-metadata-storage:0.7.0-SNAPSHOT"]
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
druid.extensions.searchCurrentClassloader=true

-Ddruid.selectors.indexing.serviceName=overlord
-Ddruid.indexer.queue.startDelay=PT0M
-Ddruid.indexer.runner.javaOpts="-server -Xmx2g"
-Ddruid.indexer.runner.startPort=8088
druid.processing.numThreads=1
-Ddruid.indexer.runner.type=local
-Ddruid.indexer.fork.property.druid.processing.numThreads=1
-Ddruid.indexer.fork.property.druid.computation.buffer.size=100000000

Thanks!

Samy. 

Govind Bhone

Apr 14, 2015, 8:40:00 AM
to druid-de...@googlegroups.com
Hi All,
I am sending the Wikipedia example events from Tranquility to the overlord service for indexing, and for 6M records it took 8 minutes.
How can we tune the indexing service to get a good performance benchmark?

Gian Merlino

Apr 14, 2015, 10:38:50 AM
to druid-de...@googlegroups.com
Hi Govind, I replied to your question in the other thread where you posted it: https://groups.google.com/d/msg/druid-development/eIiuSS-fM8I/T4jRBtguTVUJ