Questions around Tranquility


Deepak Jain

Aug 25, 2014, 11:09:26 AM
to druid-de...@googlegroups.com
1. Can someone share the steps to build and run Tranquility?
2. How do I configure it to discover a Druid cluster?
3. How do I configure it to read data from an Apache Kafka cluster?
4. What are the recommended production and hardware configurations, if any, for Tranquility?
5. How do I build a production Tranquility cluster that is not a single point of failure?

Appreciate any help.

Regards,
Deepak

Gian Merlino

Aug 25, 2014, 7:43:04 PM
to druid-de...@googlegroups.com
1, 2- These are covered in the docs in the readme at: https://github.com/metamx/tranquility

3- Tranquility is a library that you use in your own program and is not a standalone service. The easiest way to use it to send data from kafka to druid is to either write a program that reads from kafka and writes to druid in a loop, or to use the Storm bolt (if you are already using Storm).

4- The system requirements are pretty low, it should be a good bit less than the hardware you need for the Druid indexing service for the same dataset.

5- On the writing-to-druid side, you can distribute the work to as many machines as you want. As long as they all have the same tranquility configuration, they will coordinate amongst themselves and everything should just work. On the consuming-from-kafka side, if you are using the high level kafka consumer, or if you are using storm, you will be resilient to the loss of any one reader if you have multiple readers. On the druid side itself, make sure to set "replicants" higher than 1 in your tranquility configuration, to make sure you can sustain a middle manager failure.
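To make the "replicants" knob concrete, here is a hedged tuning fragment (not standalone code; it assumes the ClusteredBeamTuning.create(segmentGranularity, warmingPeriod, windowPeriod, partitions, replicants) argument order used elsewhere in this thread):

```java
// Configuration fragment only; the final argument is assumed to be replicants.
.tuning(ClusteredBeamTuning.create(
    Granularity.HOUR,
    new Period("PT0M"),   // warming period
    new Period("PT10M"),  // windowPeriod
    1,                    // partitions
    2))                   // replicants > 1: survive one middle manager failure
```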

Deepak Jain

Aug 26, 2014, 12:43:34 AM
to druid-de...@googlegroups.com
Hello Gian,
thanks for the response.

Tranquility is a library.
1) It's written in Scala; can it be used from a Kafka consumer written in Java? Is the POM dependency mentioned in the documentation the way to import the Tranquility library and use its APIs?
2) If I am going to use it from a Java project, is the direct Java API at https://github.com/metamx/tranquility the right way to use it?
3) Could you please include the import statements for the Java API? Once I drop in the code, there are multiple conflicts and I am not sure which package classes I need to import.
4) What is Finagle? Do I need to read about it to use the Java API? Please point me to some docs.
5) What is Curator? More docs would help.
6) A few questions about the Java program.

I)
Where can I locate the overlord service name? http://druid.io/docs/latest/Production-Cluster-Configuration.html has druid.discovery.curator.path=/prod/discovery, but it has no mention of the string "overlord".
In my runtime.properties I do not have the above property; I have this:
# If you choose to compress ZK announcements, you must do so for every node type
druid.announcer.type=batch
druid.curator.compress=true


II) What is firehosePattern and what should I set it to?
III) final List<String> dimensions = ImmutableList.of("column");
What is dimensions? Do I need to include the list of dimensions here, and if so, in what format? (comma separated?)
Same question for aggregators.
Earlier we used to write a realtime task spec and submit it to the overlord. Will Tranquility create such a spec and submit it? And is this the reason to include the dimension list and aggregators in the above format?

IV)
Can you explain more about this next line?
final Service<List<Map<String, Object>>, Integer> druidService = DruidBeams <x y z>.
Why is the timestamp method overridden, and what is the string "timestamp"? Should it match the timestamp column, and does it need to be in lower case?


V) Sending events to druid.
API: druidService.apply() takes List<Map<String, Object>>.
My single line looks like this:
{"TSTAMP":"1407858586333","GUID":"c0dee19813e0a568e7e514b5ffff549a","CAL_DT":"1900-01-01"}
So do I need to read such a line from Kafka, create a key/value HashMap, and send it to apply()?
This looks like a lot of string parsing. Can you suggest APIs, or your method of breaking an incoming JSON line into a HashMap that can be passed to apply()?

Regards,
Deepak

Deepak Jain

Aug 26, 2014, 5:00:04 AM
to druid-de...@googlegroups.com
I found the class test/java/com/metamx/tranquility/javatests/JavaApiTest.java, which has a few JUnit test cases.
Questions
1) How do I run a Java test case inside a Scala project? Since it's not a Maven project, I cannot use mvn test or import it into Eclipse. Can someone guide me on how to run that particular Java unit test case?
2) I see that it instantiates a TestingCluster. In my case I have a Druid cluster. How does the Tranquility lib know where to find the cluster? The information is in ZK, and I do not see any way of specifying the ZK info.
I see a "curator" object that accepts a connectionString. How do I get this string (I cannot use TestingCluster)?
3) I modified that test case, and I see the below exception with an empty ("") connectionString:

log4j:WARN Please initialize the log4j system properly.

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:169)




Code:
import io.druid.granularity.QueryGranularity;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.query.aggregation.CountAggregatorFactory;

import java.util.List;
import java.util.Map;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryOneTime;
import org.joda.time.DateTime;
import org.joda.time.Period;

import com.google.common.collect.ImmutableList;
import com.metamx.common.Granularity;
import com.metamx.tranquility.beam.Beam;
import com.metamx.tranquility.beam.ClusteredBeamTuning;
import com.metamx.tranquility.druid.DruidBeams;
import com.metamx.tranquility.druid.DruidEnvironment;
import com.metamx.tranquility.druid.DruidLocation;
import com.metamx.tranquility.druid.DruidRollup;
import com.metamx.tranquility.typeclass.Timestamper;
import com.twitter.finagle.Service;

public class TranquilityProducer {
    final String indexService = "druid:overlord"; // Your overlord's service
                                                    // name.
    final String firehosePattern = "druid:firehose:%s"; // Make up a service
                                                        // pattern, include %s
                                                        // somewhere in it.
    final String discoveryPath = "/discovery"; // Your overlord's
                                                // druid.discovery.curator.path
    final String dataSource = "hey";
    private static final List<String> dimensions = ImmutableList.of("column");
    private static final List<AggregatorFactory> aggregators = ImmutableList
            .<AggregatorFactory> of(new CountAggregatorFactory("cnt"));

    public static void main(String[] args) {
        try {
            final CuratorFramework curator = CuratorFrameworkFactory.builder()
                    .connectString("").retryPolicy(new RetryOneTime(1000))
                    .build();
            curator.start();

            final String dataSource = "hey";

            final DruidBeams.Builder<Map<String, Object>> builder = DruidBeams
                    .builder(new Timestamper<Map<String, Object>>() {
                        public DateTime timestamp(Map<String, Object> theMap) {
                            return new DateTime(theMap.get("timestamp"));
                        }
                    })
                    .curator(curator)
                    .discoveryPath("/prod/discovery")
                    .location(
                            new DruidLocation(new DruidEnvironment(
                                    "druid:local:indexer",
                                    "druid:local:firehose:%s"), dataSource))
                    .rollup(DruidRollup.create(dimensions, aggregators,
                            QueryGranularity.MINUTE))
                    .tuning(ClusteredBeamTuning.create(Granularity.HOUR,
                            new Period("PT0M"), new Period("PT10M"), 1, 1));

            final Service<List<Map<String, Object>>, Integer> service = builder
                    .buildJavaService();
            final Beam<Map<String, Object>> beam = builder.buildBeam();
        } catch (Exception e) {
            // Log
        }
    }
}

Gian Merlino

Aug 26, 2014, 1:51:37 PM
to druid-de...@googlegroups.com
Yep, tranquility works fine when called from Java code. You don't need to write any scala code to use it. The easiest way to include it in a maven project is to use the POM snippet from the documentation. The jars are published to the same repository as the druid jars.

The direct API is the best way to use it if you are not running inside storm. The imports are all in the package com.metamx.tranquility; the JavaApiTest (https://github.com/metamx/tranquility/blob/master/src/test/java/com/metamx/tranquility/javatests/JavaApiTest.java) has some more detailed code surrounding creation of the objects, including imports.

Finagle is a client/server networking library based around futures. Tranquility uses it internally to do network communications with druid, and can also expose its own functionality as a finagle Service. If you want to compose tranquility things with other futures-based code, you can read more about finagle here: https://twitter.github.io/finagle/. If you don't, then you don't need to get into the details of finagle. All you need to do is call com.twitter.util.Await.result(...) on the result of druidService.apply(...) (which waits for the future to resolve). There is an example of this in the readme.
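The send-and-wait pattern described above can be sketched as a fragment (not standalone; druidService and listOfEvents are assumed to exist, built as elsewhere in this thread):

```java
// Fragment only: apply() returns a Finagle/Twitter Future; Await.result blocks
// until the future resolves and returns the number of events sent.
final Future<Integer> numSentFuture = druidService.apply(listOfEvents);
final Integer numSent = Await.result(numSentFuture);
```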

Curator is a zookeeper library. If you are curious, you can read more about it at: https://curator.apache.org/. If you just want to get it working, the easiest way to create one is the CuratorFrameworkFactory.builder(). You will need to provide a zookeeper connect string and a retry policy.
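A minimal Curator setup along those lines (a sketch; the ZK hostnames are hypothetical, and ExponentialBackoffRetry is one of several retry policies Curator ships):

```java
// Fragment only: the connect string and retry policy are the two required pieces.
CuratorFramework curator = CuratorFrameworkFactory.builder()
    .connectString("zk1.example.com:2181,zk2.example.com:2181") // hypothetical hosts
    .retryPolicy(new ExponentialBackoffRetry(1000, 20))          // baseSleepMs, maxRetries
    .build();
curator.start();
```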

The overlord service name should be your overlord's "druid.service" with any slashes replaced by colons. Your firehosePattern can be anything you want as long as it contains a %s (it will be used to announce things in service discovery). Something like "druid:firehose:%s" will work.
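The slash-to-colon rule can be illustrated with plain string code (runnable on its own; the service names match the configs posted later in this thread):

```java
public class ServiceNameExample {
    public static void main(String[] args) {
        // druid.service as configured on the overlord, with slashes replaced by colons:
        String indexService = "druid/prod/overlord".replace('/', ':');
        System.out.println(indexService); // prints "druid:prod:overlord"

        // firehosePattern just needs a %s somewhere; it is filled in per task:
        String firehosePattern = "druid:firehose:%s";
        System.out.println(String.format(firehosePattern, "hey-001"));
        // prints "druid:firehose:hey-001"
    }
}
```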

You don't need to include the dimensions if you don't want to. You can either use DruidDimensions.specific(...), if you want to provide the list of dimensions, or DruidDimensions.schemalessWithExclusions(...) if you want to provide exclusions. In either case you should provide a List of strings, not a single comma separated string.
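The two options can be sketched as fragments (not standalone; the column names are hypothetical, and aggregators is assumed to be defined as elsewhere in this thread):

```java
// Fragment only. Option 1: list dimensions explicitly.
DruidRollup.create(
    DruidDimensions.specific(ImmutableList.of("GUID", "CHNL_ID")),
    aggregators, QueryGranularity.MINUTE);

// Option 2: schemaless, excluding fields that should not become dimensions.
DruidRollup.create(
    DruidDimensions.schemalessWithExclusions(ImmutableList.of("TSTAMP")),
    aggregators, QueryGranularity.MINUTE);
```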

The Timestamper is something that knows how to extract a timestamp from your object type. This is necessary to correctly route objects to the proper druid tasks.

In our case, we almost always modify the event before sending it to druid, so we need to deserialize and reserialize the events anyway. The default behavior is designed to be easiest for that use case. If you just want to send events directly, you can set your event type to be a byte array, and provide your own JsonWriter to the DruidBeams builder (through .eventWriter) that just writes out the bytes as-is. You will still need to partially parse the event in order to figure out the timestamp, though.
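For the common case of parsing each Kafka line into the Map that apply() expects, Jackson (already on the classpath via Druid) can do it in one call. A fragment, not standalone code:

```java
// Fragment only: turn one JSON line into the Map<String, Object> event type.
ObjectMapper mapper = new ObjectMapper();
Map<String, Object> event = mapper.readValue(line,
    new TypeReference<Map<String, Object>>() {});
```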

Gian Merlino

Aug 26, 2014, 1:53:30 PM
to druid-de...@googlegroups.com
You don't need to be able to build it to use it, since we do publish jars. But if you want to build it and run the tests yourself, it is built with SBT, so if you have sbt locally you can run "sbt test" from the command line. I am not sure if Eclipse has SBT integration, but if it does, then getting the appropriate Eclipse plugin should let you run them from within the IDE.

Deepak Jain

Aug 27, 2014, 3:03:54 AM
to druid-de...@googlegroups.com
Thanks Gian.
More questions

I am able to write a program that reads data from Kafka and builds a Tranquility service. This program connects to ZK, and I do not see any exceptions until

final Future<Integer> numSentFuture = druidService.apply(listOfEvents);

but it blocks forever at final Integer numSent = Await.result(numSentFuture);

Before the data is sent, there is one overlord and 8 middle managers; I can see them from the overlord console. I did not start a realtime task like I used to do earlier; with Tranquility this needs to happen automatically.
Questions:

1) There were no realtime tasks started.
2) There is no movement in the overlord logs while apply() and Await.result() are executing.
3) How do I add more aggregators?
I have one count.

final List<AggregatorFactory> aggregators = ImmutableList.<AggregatorFactory> of(new CountAggregatorFactory("count"));


My realtime task file had the following.


"aggregators": [

                {

                    "type": "count",

                    "name": "count"

                },

                {

                    "type": "doubleSum",

                    "fieldName": "GMB_USD_OL_SC",

                    "name": "gmb_usd"

                },

                {

                    "type": "doubleSum",

                    "fieldName": "FP_GMB_USD_OL_SC",

                    "name": "gmb_fp"

                },

                {

                    "type": "doubleSum",

                    "fieldName": "AUCT_GMB_USD_OL_SC",

                    "name": "gmb_act"

                },

                {

                    "type": "doubleSum",

                    "fieldName": "QTY_BOUGHT_OL_SC",

                    "name": "bi"

                },

                {

                    "type": "doubleSum",

                    "fieldName": "FP_QTY_BOUGHT_OL_SC",

                    "name": "bi_fp"

                },

                {

                    "type": "doubleSum",

                    "fieldName": "AUCT_QTY_BOUGHT_OL_SC",

                    "name": "bi_act"

                }

            ],
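For reference, a sketch of how the same aggregators might be declared through the Java API (a fragment, not standalone code; it assumes Druid 0.6's DoubleSumAggregatorFactory takes (name, fieldName) in that order, alongside the CountAggregatorFactory already used above):

```java
// Fragment only: Java equivalent of the JSON aggregator list above (assumed API).
final List<AggregatorFactory> aggregators = ImmutableList.<AggregatorFactory>of(
    new CountAggregatorFactory("count"),
    new DoubleSumAggregatorFactory("gmb_usd", "GMB_USD_OL_SC"),
    new DoubleSumAggregatorFactory("gmb_fp", "FP_GMB_USD_OL_SC"),
    new DoubleSumAggregatorFactory("gmb_act", "AUCT_GMB_USD_OL_SC"),
    new DoubleSumAggregatorFactory("bi", "QTY_BOUGHT_OL_SC"),
    new DoubleSumAggregatorFactory("bi_fp", "FP_QTY_BOUGHT_OL_SC"),
    new DoubleSumAggregatorFactory("bi_act", "AUCT_QTY_BOUGHT_OL_SC"));
```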




Logs in tranquility program 

77 [pool-1-thread-1] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
87 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
87 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=localhost
87 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.7.0_45
87 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
87 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre
87 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/Users/dvasthimal/ebay/projects/experimentation/poc/kafka/druid_producer/target/test-  PATH PATH

88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/Users/dvasthimal/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/var/folders/0v/446flf5d3gddq7tnhljk15pc3910pj/T/
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Mac OS X
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=x86_64
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=10.9.4
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=dvasthimal
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/Users/dvasthimal
88 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/Users/dvasthimal/ebay/projects/experimentation/poc/kafka/druid_producer
89 [pool-1-thread-1] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper-296415.phx-os1.stratus.dev.ebay.com:2182 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@184ce1cd
808 [pool-1-thread-1-SendThread(zookeeper-296415.phx-os1.stratus.dev.ebay.com:2182)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper-296415.phx-os1.stratus.dev.ebay.com/10.9.249.162:2182. Will not attempt to authenticate using SASL (unknown error)
1058 [pool-1-thread-1-SendThread(zookeeper-296415.phx-os1.stratus.dev.ebay.com:2182)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zookeeper-296415.phx-os1.stratus.dev.ebay.com/10.9.249.162:2182, initiating session
1307 [pool-1-thread-1-SendThread(zookeeper-296415.phx-os1.stratus.dev.ebay.com:2182)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper-296415.phx-os1.stratus.dev.ebay.com/10.9.249.162:2182, sessionid = 0x14817b4dc230014, negotiated timeout = 40000
1313 [pool-1-thread-1-EventThread] INFO org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
1314 [ConnectionStateManager-0] WARN org.apache.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
log4j:WARN No appenders could be found for logger (org.jboss.logging).
log4j:WARN Please initialize the log4j system properly.
2067 [pool-1-thread-1] INFO io.druid.guice.JsonConfigurator - Loaded class[class io.druid.server.initialization.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, coordinates=[], localRepository='/Users/dvasthimal/.m2/repository', remoteRepositories=[http://repo1.maven.org/maven2/, https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local]}]
2014-08-27 12:26:13,569|main| INFO|KAFKA_FILE_PRODUCER|com.ebay.metricstore.producer.TranquilityProducer|79|Exit:FileKafkaProducer.main()
2547 [pool-1-thread-1] INFO io.druid.guice.JsonConfigurator - Loaded class[class com.metamx.emitter.core.LoggingEmitterConfig] from props[druid.emitter.logging.] as [LoggingEmitterConfig{loggerClass='com.metamx.emitter.core.LoggingEmitter', logLevel='info'}]
2564 [pool-1-thread-1] INFO io.druid.guice.JsonConfigurator - Loaded class[class io.druid.server.metrics.DruidMonitorSchedulerConfig] from props[druid.monitoring.] as [io.druid.server.metrics.DruidMonitorSchedulerConfig@d1947d5]
2580 [pool-1-thread-1] INFO io.druid.guice.JsonConfigurator - Loaded class[class io.druid.server.metrics.MonitorsConfig] from props[druid.monitoring.] as [MonitorsConfig{monitors=[]}]
2728 [pool-1-thread-1] INFO com.metamx.emitter.core.LoggingEmitter - Start: started [true]
2867 [pool-1-thread-1] INFO com.twitter.finagle - Finagle version 6.16.0 (rev=cb019fbe670d16dc8076494e315b4a8a6aa53111) built at 20140515-141646
4462 [pool-1-thread-1] INFO com.metamx.common.scala.net.finagle.DiscoResolver - Updating instances for service[druid/prod/overlord] to Set()
4470 [pool-1-thread-1] WARN finagle - Name resolution is pending
4514 [pool-1-thread-1] INFO com.metamx.tranquility.finagle.FinagleRegistry - Created client for service: druid/prod/overlord

Promise@840878573(state=Transforming(List(),Promise@1610118289(state=Transforming(List(<function1>),Promise@816647376(state=Interruptible(List(<function1>),<function1>))))))
13602 [ClusteredBeam-ZkFuturePool-385693b2-dab5-4bcd-bbc2-6ff5b117457c] INFO com.metamx.tranquility.beam.ClusteredBeam - Creating new beams for identifier[druid/prod/overlord/expt_real] timestamp[2014-08-27T12:00:00.000+05:30] (target = 1, actual = 0)


Regards,
Deepak

Deepak Jain

Aug 27, 2014, 4:07:16 AM
to druid-de...@googlegroups.com
Overlord runtime:
cat /opt/druid/druid-services-0.6.145/config/overlord/runtime.properties 
druid.port=8080
druid.service=druid/prod/overlord

druid.zk.paths.base=/druid/prod

druid.discovery.curator.path=/prod/discovery

druid.extensions.coordinates=["io.druid.extensions:druid-kafka-eight:0.6.145"]

druid.db.connector.connectURI=jdbc:mysql://mysql-phx-os1.stratus.dev.com:3306/druid
druid.db.connector.user=druid
druid.db.connector.password=diurd
druid.db.connector.useValidationQuery=true
druid.db.tables.base=prod

#Run in remote mode
druid.indexer.runner.type=remote
druid.indexer.runner.compressZnodes=true
druid.indexer.runner.minWorkerVersion=1

# Store all task state in MySQL
druid.indexer.storage.type=db

#druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]

# If you choose to compress ZK announcements, you must do so for every node type
druid.announcer.type=batch
druid.curator.compress=true

[dvasthimal@druid-overlord-296467 expt]$ 


MM runtime:
$ cat /opt/druid/druid-services-0.6.145/config/overlord/runtime.properties 
druid.port=8080
druid.service=druid/prod/middlemanager

druid.zk.paths.base=/druid/prod

druid.discovery.curator.path=/prod/discovery

druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.6.145","io.druid.extensions:druid-kafka-eight:0.6.145"]

# Dedicate more resources to peons
druid.indexer.runner.javaOpts=-server -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
druid.indexer.task.baseTaskDir=/opt/druid/working/localCache/
druid.indexer.task.chathandler.type=announce

druid.worker.capacity=10
druid.worker.ip=10.9.207.194
druid.worker.version=1

druid.selectors.indexing.serviceName=druid:prod:overlord

#druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]

# If you choose to compress ZK announcements, you must do so for every node type
druid.announcer.type=batch
druid.curator.compress=true


#Realtime configurations
druid.request.logging.type=file
druid.request.logging.dir=/opt/druid/working/request_logs/

# Choices: db (hand off segments), noop (do not hand off segments).
druid.publish.type=db

druid.db.connector.connectURI=jdbc:mysql://mysql-phx-os1.stratus.dev.com:3306/druid
druid.db.connector.user=druid
druid.db.connector.password=diurd
druid.db.connector.useValidationQuery=true
druid.db.tables.base=prod

druid.processing.numThreads=3

druid.segmentCache.locations=[{"path": "/opt/druid/working/druid-cache/indexCache", "maxSize": 0}]
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://apollocom:8020/user/dvasthimal/druid/segments/expt

Deepak Jain

Aug 27, 2014, 4:10:27 AM
to druid-de...@googlegroups.com
Druid cluster is using zookeeper 3.4.6
Druid version: 0.6.145

Tranquility code POM dependency:

<dependency>
  <groupId>com.metamx</groupId>
  <artifactId>tranquility_2.10</artifactId>
  <!-- Or for scala 2.9: -->
  <!-- <artifactId>tranquility_2.9.1</artifactId> -->
  <version>0.2.1</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.6</version>
</dependency>

...

Deepak Jain

Aug 27, 2014, 4:57:11 AM
to druid-de...@googlegroups.com
After changing to 

new Timestamper<Map<String, Object>>() {
                    public DateTime timestamp(Map<String, Object> theMap) {
                        //return new DateTime(theMap.get("timestamp")); === Commented.
                        Long date = Long.parseLong(theMap.get("TSTAMP")
                                .toString());
                        return new DateTime(date.longValue());
                    }
                })


In my data, TSTAMP is the map entry that contains the timestamp. I see that

final Integer numSent = Await.result(numSentFuture);

returns 0


Regards,

Deepak

...

Deepak Jain

Aug 27, 2014, 6:10:17 AM
to druid-de...@googlegroups.com
Hi,
Attached are the code, runtime.properties of MM and overlord.

The attached log4j.xml is present in the CP of the standalone Tranquility Java program. It produces debug logs for the program itself, but does not produce any logs for the metamx or druid packages. Hence the only lines I see are attached in log.txt.


Regards,
Deepak
log4j.xml
mm.overlord.properties
mm.runtime.properties
TranquilityProducer.java
log.txt

Deepak Jain

Aug 27, 2014, 6:25:54 AM
to druid-de...@googlegroups.com
I referred to many posts related to "DruidBeams" and "tranquility". (I think this requires documentation :)

I do have the following
1. All MMs have: druid.selectors.indexing.serviceName=druid:prod:overlord
2. All MMs have:  druid.indexer.task.chathandler.type=announce
3. All MMs have :  druid.publish.type=db
4. Overlord has this:  druid.service=druid/prod/overlord
5. druid.discovery.curator.path=/prod/discovery is present in all nodes except historical.
6. With

.rollup(DruidRollup.create(DruidDimensions.schemaless(), aggregators, QueryGranularity.MINUTE))

or

final List<String> dimensions = ImmutableList.of("EXPRMNT_ID", "CHNL_ID", "GUID", "CUMM_START_DT");
.rollup(DruidRollup.create(dimensions, aggregators, QueryGranularity.MINUTE))

I still see the same output.

Promise@630856129(state=Done(Return(0)))

0

after executing these statements:

// Send events to Druid:
final Future<Integer> numSentFuture = druidService.apply(listOfEvents);
System.out.println(numSentFuture);
// Wait for confirmation:
final Integer numSent = Await.result(numSentFuture);
System.out.println(numSent);


Regards,

Deepak

Deepak Jain

Aug 27, 2014, 6:27:00 AM
to druid-de...@googlegroups.com
#7. I even tried adding this

.timestampSpec(new TimestampSpec("TSTAMP", "posix"))

but it does not help.

Example TSTAMP value: TSTAMP=1404210353133


Deepak Jain

Aug 27, 2014, 10:13:21 PM
to druid-de...@googlegroups.com
It looks like Tranquility creates the realtime task spec and submits it to the overlord. What is the firehose? It has to be an event receiver (a REST endpoint waiting for events); is this done automatically?

Deepak Jain

Aug 27, 2014, 10:14:10 PM
to druid-de...@googlegroups.com

Fangjin Yang

Aug 28, 2014, 8:27:20 AM
to druid-de...@googlegroups.com
Tranquility automatically creates real-time tasks with the correct firehoses. The actual firehoses it uses are event receiver, timed shutoff, delegating, and possibly more. I can't actually access VPN from my current location to view an actual task json that gets sent :P

Deepak Jain

Aug 28, 2014, 1:13:54 PM
to druid-de...@googlegroups.com
Thanks for your response. I have attached several details and am still unable to send data to Druid from Tranquility. Help is appreciated.

Gian Merlino

Aug 28, 2014, 4:27:53 PM
to druid-de...@googlegroups.com
We discussed a bit in IRC and found that the test data was a few weeks old (well outside the windowPeriod) and the "indexService" variable did not match up with what the overlord was announcing (needed to replace slashes with colons). Deepak, please let us know if you have luck with a longer windowPeriod and adjusted indexService config. Thanks!

Deepak Jain

Aug 29, 2014, 7:35:11 AM
to druid-de...@googlegroups.com
It did not help.


I tried two things.

1)
.tuning(ClusteredBeamTuning.create(Granularity.HOUR,
    new Period("PT0M"), new Period("PT0M"), 1, 1))
TO
.tuning(ClusteredBeamTuning.create(Granularity.HOUR,
    new Period("PT0M"), new Period("P120D"), 1, 1))

AND (used : instead of /)

.location(new DruidLocation(new DruidEnvironment("druid:prod:overlord", "druid:firehose:%s"), dataSource))



Tranquility logs
===============
Exception:
2805 [pool-1-thread-1] INFO com.twitter.finagle - Finagle version 6.16.0 (rev=cb019fbe670d16dc8076494e315b4a8a6aa53111) built at 20140515-141646
org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "name" (Class org.apache.curator.x.discovery.ServiceInstance), not marked as ignorable
at [Source: [B@68171e49; line: 1, column: 10] (through reference chain: org.apache.curator.x.discovery.ServiceInstance["name"])
at org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
at org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
at org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
at com.ebay.metricstore.producer.TranquilityFileProducer.createDruidService(TranquilityProducer.java:193)
at com.ebay.metricstore.producer.TranquilityFileProducer.run(TranquilityProducer.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

At .buildJavaService();

2) As there was an exception, I switched the service name to use / instead of :. (Very confusing; which form is needed?)

This time it's stuck at

final Integer numSent = Await.result(numSentFuture);


In either case, there is no movement in the overlord logs and no RT task is created.

4694 [pool-1-thread-1] INFO com.metamx.common.scala.net.finagle.DiscoResolver - Updating instances for service[druid/prod/overlord] to Set()

4702 [pool-1-thread-1] WARN finagle - Name resolution is pending

4746 [pool-1-thread-1] INFO com.metamx.tranquility.finagle.FinagleRegistry - Created client for service: druid/prod/overlord

6973 [ClusteredBeam-ZkFuturePool-4c5b8788-7ea4-4e69-96dd-3526f6d44d18] INFO com.metamx.tranquility.beam.ClusteredBeam - Creating new beams for identifier[druid/prod/overlord/expt_real] timestamp[2014-07-01T15:00:00.000+05:30] (target = 1, actual = 0)



More guidance is required.

Regards,
Deepak



Deepak Jain

Aug 29, 2014, 8:02:00 AM
to druid-de...@googlegroups.com
Tried a third option.
I modified druid.selectors.indexing.serviceName=druid/prod/overlord OR druid.selectors.indexing.serviceName=druid:prod:overlord in all coordinators and middle managers, and

new DruidLocation(new DruidEnvironment("druid:prod:overlord", "druid:firehose:%s"),

I still see the exception

org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "name" (Class org.apache.curator.x.discovery.ServiceInstance), not marked as ignorable at [Source: [B@42e68d00; line: 1, column: 10] (through reference chain: org.apache.curator.x.discovery.ServiceInstance["name"])


Gian Merlino

Aug 29, 2014, 11:46:47 AM
to druid-de...@googlegroups.com
That looks like some strange Jackson/Curator interaction. I'm having difficulty reproducing it here even with a few different versions of Jackson. Do you mind sharing your pom or the output of "mvn dependency:list"? I'm wondering if you are getting different versions of things, and they are incompatible in some way.

Deepak Jain

Aug 29, 2014, 12:54:56 PM
to druid-de...@googlegroups.com

$ mvn dependency:list
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building producer 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-dependency-plugin:2.8:list (default-cli) @ producer ---
[INFO]
[INFO] The following files have been resolved:
[INFO]    io.druid:druid-server:jar:0.6.121:compile
[INFO]    org.apache.maven:maven-aether-provider:jar:3.1.1:compile
[INFO]    com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO]    com.sun.jersey:jersey-json:jar:1.9:provided
[INFO]    com.twitter:finagle-core_2.10:jar:6.16.0:compile
[INFO]    stax:stax-api:jar:1.0.1:provided
[INFO]    javax.mail:mail:jar:1.4:compile
[INFO]    com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.2.3:compile
[INFO]    commons-dbcp:commons-dbcp:jar:1.4:compile
[INFO]    com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.2.3:compile
[INFO]    commons-daemon:commons-daemon:jar:1.0.13:provided
[INFO]    commons-io:commons-io:jar:2.1:compile
[INFO]    io.airlift:airline:jar:0.5:compile
[INFO]    com.google.guava:guava:jar:11.0.2:compile
[INFO]    org.mozilla:rhino:jar:1.7R4:compile
[INFO]    io.druid:druid-indexing-service:jar:0.6.121:compile
[INFO]    com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO]    com.yammer.metrics:metrics-annotation:jar:2.2.0:compile
[INFO]    org.apache.curator:curator-framework:jar:2.4.0:compile
[INFO]    org.eclipse.aether:aether-api:jar:0.9.0.M2:compile
[INFO]    org.scala-lang:scala-compiler:jar:2.10.1:compile
[INFO]    com.twitter:util-hashing_2.10:jar:6.16.0:compile
[INFO]    commons-cli:commons-cli:jar:1.2:compile
[INFO]    org.joda:joda-convert:jar:1.6:compile
[INFO]    com.twitter:util-logging_2.10:jar:6.16.0:compile
[INFO]    org.slf4j:slf4j-api:jar:1.7.5:compile
[INFO]    com.ning:compress-lzf:jar:0.8.4:compile
[INFO]    com.davekoelle:alphanum:jar:1.0.3:compile
[INFO]    org.apache.zookeeper:zookeeper:jar:3.4.6:compile
[INFO]    tomcat:jasper-compiler:jar:5.5.23:provided
[INFO]    org.glassfish.grizzly:grizzly-rcm:jar:2.1.2:provided
[INFO]    org.apache.hadoop:hadoop-yarn-server-common:jar:2.2.0.2.0.6.0-76:provided
[INFO]    org.codehaus.jettison:jettison:jar:1.1:provided
[INFO]    org.eclipse.jetty:jetty-http:jar:9.1.5.v20140505:compile

[INFO]    org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.2.0.2.0.6.0-76:provided

[INFO]    org.apache.commons:commons-compress:jar:1.4.1:provided

[INFO]    javax.validation:validation-api:jar:1.1.0.Final:compile

[INFO]    net.sf.kosmosfs:kfs:jar:0.3:provided

[INFO]    org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile

[INFO]    io.tesla.aether:aether-connector-okhttp:jar:0.0.9:compile

[INFO]    commons-codec:commons-codec:jar:1.4:compile

[INFO]    org.eclipse.aether:aether-connector-file:jar:0.9.0.M2:compile

[INFO]    log4j:log4j:jar:1.2.15:compile

[INFO]    com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.2.3:compile

[INFO]    com.metamx:scala-util_2.10:jar:1.8.15:compile

[INFO]    com.amazonaws:aws-java-sdk:jar:1.6.0.1:compile

[INFO]    org.apache.maven:maven-repository-metadata:jar:3.1.1:compile

[INFO]    com.sun.jersey:jersey-grizzly2:jar:1.9:provided

[INFO]    org.eintr.loglady:loglady_2.10:jar:1.1.0:compile

[INFO]    io.tesla.aether:tesla-aether:jar:0.0.5:compile

[INFO]    org.abego.treelayout:org.abego.treelayout.core:jar:1.0.1:compile

[INFO]    com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:provided

[INFO]    commons-net:commons-net:jar:3.1:provided

[INFO]    commons-el:commons-el:jar:1.0:provided

[INFO]    ant:ant:jar:1.6.5:provided

[INFO]    org.apache.pig:pig:jar:0.12.0.2.0.6.0-76:provided

[INFO]    org.apache.hadoop:hadoop-auth:jar:2.2.0.2.0.6.0-76:provided

[INFO]    asm:asm:jar:3.1:compile

[INFO]    org.skife.config:config-magic:jar:0.9:compile

[INFO]    org.eclipse.jdt:core:jar:3.1.1:provided

[INFO]    io.druid:druid-api:jar:0.2.3:compile

[INFO]    org.antlr:ST4:jar:4.0.4:provided

[INFO]    org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.2.0.2.0.6.0-76:provided

[INFO]    org.eclipse.jetty:jetty-security:jar:9.1.5.v20140505:compile

[INFO]    org.apache.curator:curator-recipes:jar:2.4.0:compile

[INFO]    io.druid:druid-processing:jar:0.6.121:compile

[INFO]    commons-logging:commons-logging:jar:1.1.1:compile

[INFO]    org.eclipse.aether:aether-impl:jar:0.9.0.M2:compile

[INFO]    com.fasterxml.jackson.datatype:jackson-datatype-guava:jar:2.2.3:compile

[INFO]    org.mortbay.jetty:jsp-2.1:jar:6.1.14:provided

[INFO]    com.sun.jersey:jersey-core:jar:1.9:compile

[INFO]    com.google.http-client:google-http-client-jackson2:jar:1.15.0-rc:compile

[INFO]    io.netty:netty:jar:3.6.2.Final:compile

[INFO]    org.codehaus.plexus:plexus-utils:jar:3.0.15:compile

[INFO]    commons-digester:commons-digester:jar:1.8:provided

[INFO]    javax.activation:activation:jar:1.1:compile

[INFO]    com.metamx:bytebuffer-collections:jar:0.0.2:compile

[INFO]    org.eclipse.jetty:jetty-util:jar:9.1.5.v20140505:compile

[INFO]    com.maxmind.geoip2:geoip2:jar:0.4.0:compile

[INFO]    com.fasterxml.jackson.core:jackson-databind:jar:2.2.2:compile

[INFO]    aopalliance:aopalliance:jar:1.0:compile

[INFO]    com.yammer.metrics:metrics-core:jar:2.2.0:compile

[INFO]    org.jboss.logging:jboss-logging:jar:3.1.1.GA:compile

[INFO]    com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.2.2:compile

[INFO]    org.apache.maven:maven-settings:jar:3.1.1:compile

[INFO]    com.metamx:emitter:jar:0.2.12:compile

[INFO]    hsqldb:hsqldb:jar:1.8.0.10:provided

[INFO]    org.apache.httpcomponents:httpclient:jar:4.2:compile

[INFO]    com.fasterxml.jackson.core:jackson-annotations:jar:2.2.2:compile

[INFO]    org.apache.maven:maven-model-builder:jar:3.1.1:compile

[INFO]    c3p0:c3p0:jar:0.9.1.2:compile

[INFO]    com.google.code.findbugs:jsr305:jar:1.3.9:compile

[INFO]    com.ibm.icu:icu4j:jar:4.8.1:compile

[INFO]    org.apache.avro:avro:jar:1.7.4:provided

[INFO]    org.scala-tools.time:time_2.10:jar:0.6-mmx1:compile

[INFO]    org.mortbay.jetty:servlet-api-2.5:jar:6.1.14:provided

[INFO]    javax.xml.bind:jaxb-api:jar:2.2.2:provided

[INFO]    joda-time:joda-time:jar:2.1:compile

[INFO]    org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:provided

[INFO]    org.mortbay.jetty:jsp-api-2.1:jar:6.1.14:provided

[INFO]    com.twitter:util-collection_2.10:jar:6.16.0:compile

[INFO]    org.apache.maven:maven-settings-builder:jar:3.1.1:compile

[INFO]    com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:provided

[INFO]    javax.servlet:javax.servlet-api:jar:3.1.0:compile

[INFO]    commons-configuration:commons-configuration:jar:1.6:provided

[INFO]    org.glassfish.external:management-api:jar:3.0.0-b012:provided

[INFO]    javax.inject:javax.inject:jar:1:compile

[INFO]    commons-pool:commons-pool:jar:1.6:compile

[INFO]    com.twitter:util-app_2.10:jar:6.16.0:compile

[INFO]    com.sun.jersey.contribs:jersey-guice:jar:1.9:compile

[INFO]    org.apache.commons:commons-math:jar:2.1:provided

[INFO]    javax.servlet.jsp:jsp-api:jar:2.1:provided

[INFO]    org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.2.0.2.0.6.0-76:provided

[INFO]    org.glassfish:javax.servlet:jar:3.1:provided

[INFO]    io.druid:druid-common:jar:0.6.121:compile

[INFO]    org.slf4j:jul-to-slf4j:jar:1.7.2:compile

[INFO]    oro:oro:jar:2.0.8:provided

[INFO]    com.google.code.simple-spring-memcached:spymemcached:jar:2.8.4:compile

[INFO]    net.sf.jopt-simple:jopt-simple:jar:3.2:compile

[INFO]    org.codehaus.jackson:jackson-xc:jar:1.8.3:provided

[INFO]    org.apache.curator:curator-x-discovery:jar:2.4.0:compile

[INFO]    com.google.inject:guice:jar:4.0-beta:compile

[INFO]    org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:2.2.0.2.0.6.0-76:provided

[INFO]    jline:jline:jar:0.9.94:compile

[INFO]    commons-beanutils:commons-beanutils:jar:1.7.0:provided

[INFO]    org.apache.httpcomponents:httpcore:jar:4.2:compile

[INFO]    com.metamx:java-util:jar:0.25.1:compile

[INFO]    commons-httpclient:commons-httpclient:jar:3.1:compile

[INFO]    com.jcraft:jsch:jar:0.1.42:provided

[INFO]    org.slf4j:slf4j-simple:jar:1.6.4:compile

[INFO]    org.jdbi:jdbi:jar:2.27:compile

[INFO]    org.apache.curator:curator-client:jar:2.4.0:compile

[INFO]    org.glassfish.grizzly:grizzly-http-server:jar:2.1.2:provided

[INFO]    org.apache.hadoop:hadoop-hdfs:jar:2.2.0.2.0.6.0-76:provided

[INFO]    com.twitter:finagle-http_2.10:jar:6.16.0:compile

[INFO]    com.fasterxml:classmate:jar:0.8.0:compile

[INFO]    com.fasterxml.jackson.datatype:jackson-datatype-joda:jar:2.2.2:compile

[INFO]    com.google.inject.extensions:guice-servlet:jar:3.0:compile

[INFO]    com.maxmind.maxminddb:maxminddb:jar:0.2.0:compile

[INFO]    org.antlr:stringtemplate:jar:3.2.1:provided

[INFO]    com.ircclouds.irc:irc-api:jar:1.0-0011:compile

[INFO]    org.glassfish.grizzly:grizzly-http:jar:2.1.2:provided

[INFO]    javax.servlet:servlet-api:jar:2.5:provided

[INFO]    org.glassfish.grizzly:grizzly-framework:jar:2.1.2:provided

[INFO]    org.antlr:antlr-runtime:jar:3.4:provided

[INFO]    org.eclipse.aether:aether-spi:jar:0.9.0.M2:compile

[INFO]    com.twitter:util-core_2.10:jar:6.16.0:compile

[INFO]    org.eclipse.jetty:jetty-continuation:jar:9.1.5.v20140505:compile

[INFO]    org.scala-lang:scala-library:jar:2.10.1:compile

[INFO]    com.metamx:server-metrics:jar:0.0.9:compile

[INFO]    org.antlr:antlr4-runtime:jar:4.0:compile

[INFO]    com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:provided

[INFO]    org.apache.maven.wagon:wagon-provider-api:jar:2.4:compile

[INFO]    org.glassfish.grizzly:grizzly-http-servlet:jar:2.1.2:provided

[INFO]    org.xerial.snappy:snappy-java:jar:1.0.4.1:compile

[INFO]    org.apache.maven:maven-model:jar:3.1.1:compile

[INFO]    org.glassfish.gmbal:gmbal-api-only:jar:3.0.0-b023:provided

[INFO]    org.scala-lang:scala-reflect:jar:2.10.1:compile

[INFO]    org.tukaani:xz:jar:1.0:provided

[INFO]    junit:junit:jar:3.8.1:test

[INFO]    org.eclipse.jetty:jetty-server:jar:9.1.5.v20140505:compile

[INFO]    org.apache.hadoop:hadoop-yarn-client:jar:2.2.0.2.0.6.0-76:provided

[INFO]    com.metamx:http-client:jar:0.9.6:compile

[INFO]    com.fasterxml.jackson.module:jackson-module-scala_2.10:jar:2.2.2:compile

[INFO]    com.google.http-client:google-http-client:jar:1.15.0-rc:compile

[INFO]    org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.2.0.2.0.6.0-76:provided

[INFO]    org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile

[INFO]    org.eclipse.jetty:jetty-io:jar:9.1.5.v20140505:compile

[INFO]    org.apache.kafka:kafka_2.10:jar:0.8.0:compile

[INFO]    org.mortbay.jetty:jetty-util:jar:6.1.26:provided

[INFO]    org.hyperic:sigar:jar:1.6.5.132:compile

[INFO]    org.apache.hadoop:hadoop-common:jar:2.2.0.2.0.6.0-76:provided

[INFO]    org.hibernate:hibernate-validator:jar:5.0.1.Final:compile

[INFO]    com.metamx:tranquility_2.10:jar:0.2.1:compile

[INFO]    net.jpountz.lz4:lz4:jar:1.1.2:compile

[INFO]    commons-lang:commons-lang:jar:2.5:compile

[INFO]    xmlenc:xmlenc:jar:0.52:provided

[INFO]    org.apache.hadoop:hadoop-yarn-common:jar:2.2.0.2.0.6.0-76:provided

[INFO]    commons-beanutils:commons-beanutils-core:jar:1.8.0:provided

[INFO]    antlr:antlr:jar:2.7.7:provided

[INFO]    com.squareup.okhttp:okhttp:jar:1.0.2:compile

[INFO]    com.twitter:util-codec_2.10:jar:6.16.0:compile

[INFO]    mysql:mysql-connector-java:jar:5.1.18:compile

[INFO]    org.codehaus.plexus:plexus-interpolation:jar:1.19:compile

[INFO]    it.uniroma3.mat:extendedset:jar:1.3.4:compile

[INFO]    com.twitter:util-jvm_2.10:jar:6.16.0:compile

[INFO]    net.java.dev.jets3t:jets3t:jar:0.6.1:compile

[INFO]    org.apache.hadoop:hadoop-annotations:jar:2.2.0.2.0.6.0-76:provided

[INFO]    commons-collections:commons-collections:jar:3.2.1:compile

[INFO]    org.eclipse.aether:aether-util:jar:0.9.0.M2:compile

[INFO]    org.yaml:snakeyaml:jar:1.11:compile

[INFO]    com.sun.jersey:jersey-client:jar:1.9:provided

[INFO]    org.eclipse.jetty:jetty-servlet:jar:9.1.5.v20140505:compile

[INFO]    com.101tec:zkclient:jar:0.3:compile

[INFO]    org.slf4j:slf4j-log4j12:jar:1.7.5:compile

[INFO]    org.eclipse.jetty:jetty-servlets:jar:9.1.5.v20140505:compile

[INFO]    com.h2database:h2:jar:1.3.158:compile

[INFO]    net.sf.opencsv:opencsv:jar:2.3:compile

[INFO]    tomcat:jasper-runtime:jar:5.5.23:provided

[INFO]    org.apache.hadoop:hadoop-yarn-api:jar:2.2.0.2.0.6.0-76:provided

[INFO]    com.google.inject.extensions:guice-multibindings:jar:4.0-beta:compile

[INFO]    xpp3:xpp3:jar:1.1.4c:compile

[INFO]    io.druid:druid-indexing-hadoop:jar:0.6.121:compile

[INFO]    org.mortbay.jetty:jetty:jar:6.1.26:provided

[INFO]    com.sun.jersey:jersey-server:jar:1.9:compile

[INFO]    com.fasterxml.jackson.core:jackson-core:jar:2.2.2:compile

[INFO] 

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

[INFO] Total time: 3.010s

[INFO] Finished at: Fri Aug 29 22:23:17 GMT+05:30 2014

[INFO] Final Memory: 13M/81M

[INFO] ------------------------------------------------------------------------

:druid_producer dvasthimal$ 

AND

Building producer 1.0-SNAPSHOT

[INFO] ------------------------------------------------------------------------

[INFO] 

[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ producer ---

[INFO] com.ebay.metricstore:producer:jar:1.0-SNAPSHOT

[INFO] +- junit:junit:jar:3.8.1:test

[INFO] +- org.apache.kafka:kafka_2.10:jar:0.8.0:compile

[INFO] |  +- org.scala-lang:scala-library:jar:2.10.1:compile

[INFO] |  +- log4j:log4j:jar:1.2.15:compile

[INFO] |  |  \- javax.mail:mail:jar:1.4:compile

[INFO] |  |     \- javax.activation:activation:jar:1.1:compile

[INFO] |  +- net.sf.jopt-simple:jopt-simple:jar:3.2:compile

[INFO] |  +- org.slf4j:slf4j-simple:jar:1.6.4:compile

[INFO] |  +- org.scala-lang:scala-compiler:jar:2.10.1:compile

[INFO] |  |  \- org.scala-lang:scala-reflect:jar:2.10.1:compile

[INFO] |  +- com.101tec:zkclient:jar:0.3:compile

[INFO] |  +- org.xerial.snappy:snappy-java:jar:1.0.4.1:compile

[INFO] |  +- com.yammer.metrics:metrics-core:jar:2.2.0:compile

[INFO] |  \- com.yammer.metrics:metrics-annotation:jar:2.2.0:compile

[INFO] +- org.apache.hadoop:hadoop-common:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  +- org.apache.hadoop:hadoop-annotations:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  +- com.google.guava:guava:jar:11.0.2:compile

[INFO] |  |  \- com.google.code.findbugs:jsr305:jar:1.3.9:compile

[INFO] |  +- commons-cli:commons-cli:jar:1.2:compile

[INFO] |  +- org.apache.commons:commons-math:jar:2.1:provided

[INFO] |  +- xmlenc:xmlenc:jar:0.52:provided

[INFO] |  +- commons-httpclient:commons-httpclient:jar:3.1:compile

[INFO] |  +- commons-codec:commons-codec:jar:1.4:compile

[INFO] |  +- commons-io:commons-io:jar:2.1:compile

[INFO] |  +- commons-net:commons-net:jar:3.1:provided

[INFO] |  +- javax.servlet:servlet-api:jar:2.5:provided

[INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:provided

[INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:provided

[INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:compile

[INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:provided

[INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:provided

[INFO] |  |  |  \- stax:stax-api:jar:1.0.1:provided

[INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:provided

[INFO] |  |  |  \- javax.xml.bind:jaxb-api:jar:2.2.2:provided

[INFO] |  |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:provided

[INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:provided

[INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:compile

[INFO] |  |  \- asm:asm:jar:3.1:compile

[INFO] |  +- tomcat:jasper-compiler:jar:5.5.23:provided

[INFO] |  +- tomcat:jasper-runtime:jar:5.5.23:provided

[INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:provided

[INFO] |  +- commons-el:commons-el:jar:1.0:provided

[INFO] |  +- commons-logging:commons-logging:jar:1.1.1:compile

[INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile

[INFO] |  +- commons-lang:commons-lang:jar:2.5:compile

[INFO] |  +- commons-configuration:commons-configuration:jar:1.6:provided

[INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile

[INFO] |  |  +- commons-digester:commons-digester:jar:1.8:provided

[INFO] |  |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:provided

[INFO] |  |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:provided

[INFO] |  +- org.slf4j:slf4j-api:jar:1.7.5:compile

[INFO] |  +- org.slf4j:slf4j-log4j12:jar:1.7.5:compile

[INFO] |  +- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile

[INFO] |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile

[INFO] |  +- org.apache.avro:avro:jar:1.7.4:provided

[INFO] |  |  \- com.thoughtworks.paranamer:paranamer:jar:2.3:compile

[INFO] |  +- com.google.protobuf:protobuf-java:jar:2.5.0:compile

[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  +- com.jcraft:jsch:jar:0.1.42:provided

[INFO] |  \- org.apache.commons:commons-compress:jar:1.4.1:provided

[INFO] |     \- org.tukaani:xz:jar:1.0:provided

[INFO] +- org.apache.hadoop:hadoop-hdfs:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  \- commons-daemon:commons-daemon:jar:1.0.13:provided

[INFO] +- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  +- org.apache.hadoop:hadoop-yarn-common:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  |  +- com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:provided

[INFO] |  |  |  +- com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:provided

[INFO] |  |  |  |  \- com.sun.jersey:jersey-client:jar:1.9:provided

[INFO] |  |  |  \- com.sun.jersey:jersey-grizzly2:jar:1.9:provided

[INFO] |  |  |     +- org.glassfish.grizzly:grizzly-http:jar:2.1.2:provided

[INFO] |  |  |     |  \- org.glassfish.grizzly:grizzly-framework:jar:2.1.2:provided

[INFO] |  |  |     |     \- org.glassfish.gmbal:gmbal-api-only:jar:3.0.0-b023:provided

[INFO] |  |  |     |        \- org.glassfish.external:management-api:jar:3.0.0-b012:provided

[INFO] |  |  |     +- org.glassfish.grizzly:grizzly-http-server:jar:2.1.2:provided

[INFO] |  |  |     |  \- org.glassfish.grizzly:grizzly-rcm:jar:2.1.2:provided

[INFO] |  |  |     +- org.glassfish.grizzly:grizzly-http-servlet:jar:2.1.2:provided

[INFO] |  |  |     \- org.glassfish:javax.servlet:jar:3.1:provided

[INFO] |  |  \- com.sun.jersey.contribs:jersey-guice:jar:1.9:compile

[INFO] |  +- com.google.inject.extensions:guice-servlet:jar:3.0:compile

[INFO] |  \- io.netty:netty:jar:3.6.2.Final:compile

[INFO] +- org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  +- org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  |  +- org.apache.hadoop:hadoop-yarn-client:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  |  \- org.apache.hadoop:hadoop-yarn-server-common:jar:2.2.0.2.0.6.0-76:provided

[INFO] |  \- org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.2.0.2.0.6.0-76:provided

[INFO] |     \- org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:2.2.0.2.0.6.0-76:provided

[INFO] +- org.apache.pig:pig:jar:0.12.0.2.0.6.0-76:provided

[INFO] |  +- org.mortbay.jetty:jsp-api-2.1:jar:6.1.14:provided

[INFO] |  +- org.mortbay.jetty:jsp-2.1:jar:6.1.14:provided

[INFO] |  |  +- org.eclipse.jdt:core:jar:3.1.1:provided

[INFO] |  |  \- ant:ant:jar:1.6.5:provided

[INFO] |  +- org.mortbay.jetty:servlet-api-2.5:jar:6.1.14:provided

[INFO] |  +- net.sf.kosmosfs:kfs:jar:0.3:provided

[INFO] |  +- hsqldb:hsqldb:jar:1.8.0.10:provided

[INFO] |  +- oro:oro:jar:2.0.8:provided

[INFO] |  +- org.antlr:antlr-runtime:jar:3.4:provided

[INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:provided

[INFO] |  |  \- antlr:antlr:jar:2.7.7:provided

[INFO] |  +- org.antlr:ST4:jar:4.0.4:provided

[INFO] |  \- joda-time:joda-time:jar:2.1:compile

[INFO] +- com.metamx:tranquility_2.10:jar:0.2.1:compile

[INFO] |  +- com.metamx:scala-util_2.10:jar:1.8.15:compile

[INFO] |  |  +- org.eintr.loglady:loglady_2.10:jar:1.1.0:compile

[INFO] |  |  +- com.metamx:java-util:jar:0.25.1:compile

[INFO] |  |  |  \- net.sf.opencsv:opencsv:jar:2.3:compile

[INFO] |  |  +- com.metamx:http-client:jar:0.9.6:compile

[INFO] |  |  +- com.metamx:emitter:jar:0.2.12:compile

[INFO] |  |  +- com.metamx:server-metrics:jar:0.0.9:compile

[INFO] |  |  |  \- org.hyperic:sigar:jar:1.6.5.132:compile

[INFO] |  |  +- org.joda:joda-convert:jar:1.6:compile

[INFO] |  |  +- org.scala-tools.time:time_2.10:jar:0.6-mmx1:compile

[INFO] |  |  +- org.skife.config:config-magic:jar:0.9:compile

[INFO] |  |  +- org.yaml:snakeyaml:jar:1.11:compile

[INFO] |  |  +- org.jdbi:jdbi:jar:2.27:compile

[INFO] |  |  +- mysql:mysql-connector-java:jar:5.1.18:compile

[INFO] |  |  +- com.h2database:h2:jar:1.3.158:compile

[INFO] |  |  +- c3p0:c3p0:jar:0.9.1.2:compile

[INFO] |  |  +- org.apache.curator:curator-framework:jar:2.4.0:compile

[INFO] |  |  |  \- org.apache.curator:curator-client:jar:2.4.0:compile

[INFO] |  |  +- org.apache.curator:curator-recipes:jar:2.4.0:compile

[INFO] |  |  +- org.apache.curator:curator-x-discovery:jar:2.4.0:compile

[INFO] |  |  +- com.twitter:util-core_2.10:jar:6.16.0:compile

[INFO] |  |  +- com.twitter:finagle-core_2.10:jar:6.16.0:compile

[INFO] |  |  |  +- com.twitter:util-app_2.10:jar:6.16.0:compile

[INFO] |  |  |  +- com.twitter:util-collection_2.10:jar:6.16.0:compile

[INFO] |  |  |  +- com.twitter:util-hashing_2.10:jar:6.16.0:compile

[INFO] |  |  |  +- com.twitter:util-jvm_2.10:jar:6.16.0:compile

[INFO] |  |  |  \- com.twitter:util-logging_2.10:jar:6.16.0:compile

[INFO] |  |  \- com.twitter:finagle-http_2.10:jar:6.16.0:compile

[INFO] |  |     \- com.twitter:util-codec_2.10:jar:6.16.0:compile

[INFO] |  +- org.slf4j:jul-to-slf4j:jar:1.7.2:compile

[INFO] |  +- com.fasterxml.jackson.core:jackson-core:jar:2.2.2:compile

[INFO] |  +- com.fasterxml.jackson.core:jackson-annotations:jar:2.2.2:compile

[INFO] |  +- com.fasterxml.jackson.core:jackson-databind:jar:2.2.2:compile

[INFO] |  +- com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.2.2:compile

[INFO] |  +- com.fasterxml.jackson.datatype:jackson-datatype-joda:jar:2.2.2:compile

[INFO] |  +- com.fasterxml.jackson.module:jackson-module-scala_2.10:jar:2.2.2:compile

[INFO] |  +- io.druid:druid-server:jar:0.6.121:compile

[INFO] |  |  +- io.druid:druid-processing:jar:0.6.121:compile

[INFO] |  |  |  +- com.metamx:bytebuffer-collections:jar:0.0.2:compile

[INFO] |  |  |  +- com.ning:compress-lzf:jar:0.8.4:compile

[INFO] |  |  |  +- it.uniroma3.mat:extendedset:jar:1.3.4:compile

[INFO] |  |  |  +- com.ibm.icu:icu4j:jar:4.8.1:compile

[INFO] |  |  |  +- org.mozilla:rhino:jar:1.7R4:compile

[INFO] |  |  |  \- com.davekoelle:alphanum:jar:1.0.3:compile

[INFO] |  |  +- javax.inject:javax.inject:jar:1:compile

[INFO] |  |  +- com.amazonaws:aws-java-sdk:jar:1.6.0.1:compile

[INFO] |  |  |  \- org.apache.httpcomponents:httpclient:jar:4.2:compile

[INFO] |  |  +- com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.2.3:compile

[INFO] |  |  |  +- com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.2.3:compile

[INFO] |  |  |  \- com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.2.3:compile

[INFO] |  |  +- org.eclipse.jetty:jetty-server:jar:9.1.5.v20140505:compile

[INFO] |  |  |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile

[INFO] |  |  |  +- org.eclipse.jetty:jetty-http:jar:9.1.5.v20140505:compile

[INFO] |  |  |  \- org.eclipse.jetty:jetty-io:jar:9.1.5.v20140505:compile

[INFO] |  |  +- io.tesla.aether:tesla-aether:jar:0.0.5:compile

[INFO] |  |  |  +- org.eclipse.aether:aether-spi:jar:0.9.0.M2:compile

[INFO] |  |  |  +- org.eclipse.aether:aether-util:jar:0.9.0.M2:compile

[INFO] |  |  |  +- org.eclipse.aether:aether-impl:jar:0.9.0.M2:compile

[INFO] |  |  |  +- org.eclipse.aether:aether-connector-file:jar:0.9.0.M2:compile

[INFO] |  |  |  +- io.tesla.aether:aether-connector-okhttp:jar:0.0.9:compile

[INFO] |  |  |  |  +- com.squareup.okhttp:okhttp:jar:1.0.2:compile

[INFO] |  |  |  |  \- org.apache.maven.wagon:wagon-provider-api:jar:2.4:compile

[INFO] |  |  |  +- org.apache.maven:maven-aether-provider:jar:3.1.1:compile

[INFO] |  |  |  |  +- org.apache.maven:maven-model:jar:3.1.1:compile

[INFO] |  |  |  |  +- org.apache.maven:maven-model-builder:jar:3.1.1:compile

[INFO] |  |  |  |  +- org.apache.maven:maven-repository-metadata:jar:3.1.1:compile

[INFO] |  |  |  |  \- org.codehaus.plexus:plexus-utils:jar:3.0.15:compile

[INFO] |  |  |  +- org.apache.maven:maven-settings-builder:jar:3.1.1:compile

[INFO] |  |  |  |  \- org.codehaus.plexus:plexus-interpolation:jar:1.19:compile

[INFO] |  |  |  \- org.apache.maven:maven-settings:jar:3.1.1:compile

[INFO] |  |  +- org.eclipse.aether:aether-api:jar:0.9.0.M2:compile

[INFO] |  |  +- org.antlr:antlr4-runtime:jar:4.0:compile

[INFO] |  |  |  \- org.abego.treelayout:org.abego.treelayout.core:jar:1.0.1:compile

[INFO] |  |  +- com.google.code.simple-spring-memcached:spymemcached:jar:2.8.4:compile

[INFO] |  |  +- net.jpountz.lz4:lz4:jar:1.1.2:compile

[INFO] |  |  +- org.eclipse.jetty:jetty-servlet:jar:9.1.5.v20140505:compile

[INFO] |  |  |  \- org.eclipse.jetty:jetty-security:jar:9.1.5.v20140505:compile

[INFO] |  |  +- org.eclipse.jetty:jetty-servlets:jar:9.1.5.v20140505:compile

[INFO] |  |  |  +- org.eclipse.jetty:jetty-continuation:jar:9.1.5.v20140505:compile

[INFO] |  |  |  \- org.eclipse.jetty:jetty-util:jar:9.1.5.v20140505:compile

[INFO] |  |  +- com.ircclouds.irc:irc-api:jar:1.0-0011:compile

[INFO] |  |  \- com.maxmind.geoip2:geoip2:jar:0.4.0:compile

[INFO] |  |     +- com.maxmind.maxminddb:maxminddb:jar:0.2.0:compile

[INFO] |  |     +- com.google.http-client:google-http-client:jar:1.15.0-rc:compile

[INFO] |  |     |  \- xpp3:xpp3:jar:1.1.4c:compile

[INFO] |  |     \- com.google.http-client:google-http-client-jackson2:jar:1.15.0-rc:compile

[INFO] |  +- io.druid:druid-indexing-service:jar:0.6.121:compile

[INFO] |  |  +- io.druid:druid-common:jar:0.6.121:compile

[INFO] |  |  |  +- io.druid:druid-api:jar:0.2.3:compile

[INFO] |  |  |  |  \- io.airlift:airline:jar:0.5:compile

[INFO] |  |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile

[INFO] |  |  |  +- commons-pool:commons-pool:jar:1.6:compile

[INFO] |  |  |  +- org.hibernate:hibernate-validator:jar:5.0.1.Final:compile

[INFO] |  |  |  |  +- org.jboss.logging:jboss-logging:jar:3.1.1.GA:compile

[INFO] |  |  |  |  \- com.fasterxml:classmate:jar:0.8.0:compile

[INFO] |  |  |  \- com.fasterxml.jackson.datatype:jackson-datatype-guava:jar:2.2.3:compile

[INFO] |  |  \- io.druid:druid-indexing-hadoop:jar:0.6.121:compile

[INFO] |  |     \- org.apache.httpcomponents:httpcore:jar:4.2:compile

[INFO] |  +- com.google.inject:guice:jar:4.0-beta:compile

[INFO] |  |  \- aopalliance:aopalliance:jar:1.0:compile

[INFO] |  +- com.google.inject.extensions:guice-multibindings:jar:4.0-beta:compile

[INFO] |  \- javax.validation:validation-api:jar:1.1.0.Final:compile

[INFO] \- org.apache.zookeeper:zookeeper:jar:3.4.6:compile

[INFO]    \- jline:jline:jar:0.9.94:compile

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

[INFO] Total time: 3.117s

[INFO] Finished at: Fri Aug 29 22:23:43 GMT+05:30 2014

[INFO] Final Memory: 13M/81M


AND

pom.xml



<url>http://maven.apache.org</url>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.10</artifactId>
            <version>0.8.0</version>
            <exclusions>
                <exclusion>
                    <groupId>com.sun.jmx</groupId>
                    <artifactId>jmxri</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.sun.jdmk</groupId>
                    <artifactId>jmxtools</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>javax.jms</groupId>
                    <artifactId>jms</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.2.0.2.0.6.0-76</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.2.0.2.0.6.0-76</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.2.0.2.0.6.0-76</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.2.0.2.0.6.0-76</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.pig</groupId>
            <artifactId>pig</artifactId>
            <version>0.12.0.2.0.6.0-76</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.metamx</groupId>
            <artifactId>tranquility_2.10</artifactId>
            <!-- Or for scala 2.9: -->
            <!-- <artifactId>tranquility_2.9.1</artifactId> -->
            <version>0.2.1</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.zookeeper</groupId>
                    <artifactId>zookeeper</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.4.6</version>
        </dependency>
    </dependencies>

ÐΞ€ρ@Ҝ (๏̯͡๏)

unread,
Sep 1, 2014, 9:11:55 PM9/1/14
to druid-de...@googlegroups.com
Hi Gian
Did you get a chance to look into it?
Regards
Deepak 
--
You received this message because you are subscribed to a topic in the Google Groups "Druid Development" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/druid-development/eIiuSS-fM8I/unsubscribe.
To unsubscribe from this group and all its topics, send an email to druid-developm...@googlegroups.com.
To post to this group, send email to druid-de...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-development/ae0aebae-ed79-4f60-a077-cecfce62e691%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


--
Deepak


Gian Merlino

unread,
Sep 2, 2014, 2:35:17 PM9/2/14
to druid-de...@googlegroups.com
The problem seems to be an incompatibility between Jackson 1.8.x and Curator 2.4.x. Can you try upgrading to tranquility 0.2.16? This one depends on the newer version of Jackson.

Depending on how your pom is structured, you might also have to specify 1.9.x versions of org.codehaus.jackson:jackson-core-asl and org.codehaus.jackson:jackson-mapper-asl (if something else trumps tranquility's attempt to pull those in).
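For instance, a pinned pair of dependencies could look like this (a sketch only; 1.9.13 is an assumed 1.9.x release, not a version named in this thread):

```xml
<!-- Sketch: pin the Jackson 1.9.x (Codehaus) artifacts explicitly so an older
     1.8.x copy pulled in transitively cannot win Maven's dependency mediation.
     The 1.9.13 version is an assumption; use whichever 1.9.x you standardize on. -->
<dependency>
    <groupId>org.codehaus.jackson</groupId>
    <artifactId>jackson-core-asl</artifactId>
    <version>1.9.13</version>
</dependency>
<dependency>
    <groupId>org.codehaus.jackson</groupId>
    <artifactId>jackson-mapper-asl</artifactId>
    <version>1.9.13</version>
</dependency>
```

After changing the pom, re-running "mvn dependency:tree" confirms which versions actually end up on the classpath.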
...

Deepak Jain

unread,
Sep 3, 2014, 4:08:46 AM9/3/14
to druid-de...@googlegroups.com
https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local/com/metamx/ does not have version 0.2.16 of the tranquility_*.jar. How do I get it, and how do I use it from a Java project?
Currently I have:

<dependency>
    <groupId>com.metamx</groupId>
    <artifactId>tranquility_2.16</artifactId>
    <version>0.2.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.6</version>
</dependency>


...

Deepak Jain

unread,
Sep 3, 2014, 4:43:50 AM9/3/14
to druid-de...@googlegroups.com
1) I checked out the code and built it using sbt (clean, compile and package). It created target/scala-2.9.1/tranquility_2.9.1-0.2.17-SNAPSHOT.jar.
2) There is no corresponding pom.xml

3) This will not help me resolve the problem of conflicting libs, and I can't verify the versions of jackson/curator since it's just a JAR.
4) The jar only has com/metamx files and is not a big jar.

I will need someone from the metamx team to build tranquility (and share the steps) and push the 0.2.16 version to the Maven repo, so that it can be picked up as a dependency. That way I can verify the versions of jackson/curator and include the right ones.

Regards,
Deepak
...

Deepak Jain

unread,
Sep 3, 2014, 10:40:45 AM9/3/14
to druid-de...@googlegroups.com
I am still using the old version of tranquility; I removed the jackson dependencies and added them explicitly.
I don't see any exception, but tranquility is waiting indefinitely at final Integer numSent = Await.result(numSentFuture);
The overlord has no activity, and no RT tasks are spawned. The MMs are running.
So it looks like I will have to switch to the latest version of tranquility?
If so, could you please publish it to the metamx Maven repo? It's not available there.
Deepak
...

Gian Merlino

unread,
Sep 3, 2014, 2:29:20 PM9/3/14
to druid-de...@googlegroups.com
The 0.2.16 artifacts are located in https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local/com/metamx/tranquility_2.10/0.2.16/ and the pom stanza would be:

  <dependency>
    <groupId>com.metamx</groupId>
    <artifactId>tranquility_2.10</artifactId>
    <version>0.2.16</version>
  </dependency>

(The 2.10 in tranquility_2.10 is a scala version, not tranquility version; it is a naming convention for packages written in scala.)

There could be a variety of reasons for it getting stuck at Await.result, probably the most common being inability to find the indexing service in service discovery. Do you have the full logs for the program on this run? And have you changed the code at all since last time you posted it? That information would help debug.
...

Deepak Jain

unread,
Sep 4, 2014, 12:55:17 PM9/4/14
to druid-de...@googlegroups.com
1)
My pom: http://pastebin.mozilla.org/6315231
This time it started a realtime task, and after coming out of

final Integer numSent = Await.result(numSentFuture);

it got a timeout exception.


18557 [ClusteredBeam-ZkFuturePool-aa484fde-6b16-43d8-b10d-b842ced7f4a8] INFO com.metamx.tranquility.beam.ClusteredBeam - Beams already created for identifier[druid:prod:overlord/expt_real] timestamp[2014-07-01T15:00:00.000+05:30], with sufficient partitions (target = 1, actual = 1)

18865 [ClusteredBeam-ZkFuturePool-aa484fde-6b16-43d8-b10d-b842ced7f4a8] INFO com.metamx.tranquility.beam.ClusteredBeam - Adding beams for identifier[druid:prod:overlord/expt_real] timestamp[2014-07-01T15:00:00.000+05:30]: List(Map(timestamp -> 2014-07-01T15:00:00.000+05:30, partition -> 0, tasks -> Buffer(Map(id -> index_realtime_expt_real_2014-07-01T15:00:00.000+05:30_0_0_kddfgime, firehoseId -> expt_real-15-0000-0000))))

21134 [ClusteredBeam-ZkFuturePool-aa484fde-6b16-43d8-b10d-b842ced7f4a8] INFO com.metamx.common.scala.net.finagle.DiscoResolver - Updating instances for service[druid:firehose:expt_real-15-0000-0000] to Set()

21135 [ClusteredBeam-ZkFuturePool-aa484fde-6b16-43d8-b10d-b842ced7f4a8] WARN finagle - Name resolution is pending

21136 [ClusteredBeam-ZkFuturePool-aa484fde-6b16-43d8-b10d-b842ced7f4a8] INFO com.metamx.tranquility.finagle.FinagleRegistry - Created client for service: druid:firehose:expt_real-15-0000-0000

111269 [Hashed wheel timer #1] WARN com.metamx.tranquility.beam.ClusteredBeam - Emitting alert: [anomaly] Failed to propagate events: druid:prod:overlord/expt_real

{

  "eventCount" : 1,

  "timestamp" : "2014-07-01T15:00:00.000+05:30",

  "beams" : "HashPartitionBeam(DruidBeam(timestamp = 2014-07-01T15:00:00.000+05:30, partition = 0, tasks = [index_realtime_expt_real_2014-07-01T15:00:00.000+05:30_0_0_kddfgime/expt_real-15-0000-0000]))"

}

com.twitter.finagle.GlobalRequestTimeoutException: exceeded 1.minutes+30.seconds to druid:firehose:expt_real-15-0000-0000 while waiting for a response for the request, including retries (if applicable)

    at com.twitter.finagle.NoStacktrace(Unknown Source)

111293 [Hashed wheel timer #1] INFO com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2014-09-04T22:17:52.510+05:30","service":"tranquility","host":"localhost","severity":"anomaly","description":"Failed to propagate events: druid:prod:overlord/expt_real","data":{"exceptionType":"com.twitter.finagle.GlobalRequestTimeoutException","exceptionStackTrace":"com.twitter.finagle.GlobalRequestTimeoutException: exceeded 1.minutes+30.seconds to druid:firehose:expt_real-15-0000-0000 while waiting for a response for the request, including retries (if applicable)\n\tat com.twitter.finagle.NoStacktrace(Unknown Source)\n","timestamp":"2014-07-01T15:00:00.000+05:30","beams":"HashPartitionBeam(DruidBeam(timestamp = 2014-07-01T15:00:00.000+05:30, partition = 0, tasks = [index_realtime_expt_real_2014-07-01T15:00:00.000+05:30_0_0_kddfgime/expt_real-15-0000-0000]))","eventCount":1,"exceptionMessage":"exceeded 1.minutes+30.seconds to druid:firehose:expt_real-15-0000-0000 while waiting for a response for the request, including retries (if applicable)"}}]


count query returned []

2) Killed the tranquility client; the RT task is still running from the previous attempt. I submitted one event in both cases and I see the same exception:



(The log output was identical to the first attempt above.)



1. Please help me to move ahead.
2. Does it take this much time to ingest a single event (> 45 seconds)?
3. Please review my tranquility code: http://pastebin.mozilla.org/6315499
Deepak 


...

Fangjin Yang

unread,
Sep 4, 2014, 6:48:39 PM9/4/14
to druid-de...@googlegroups.com
Deepak: just a heads up that Gian will be out on vacation until late Sept. If you are still unable to resolve these problems, I can help you take a look.
...

Deepak Jain

unread,
Sep 4, 2014, 9:05:25 PM9/4/14
to druid-de...@googlegroups.com
Hello FJ,
Could you please look into the issue? My earlier email had all the details.
Regards,
Deepak
...

Deepak Jain

unread,
Sep 5, 2014, 12:26:13 AM9/5/14
to druid-de...@googlegroups.com
Exception in RT logs:
2014-09-05 04:24:42,546 INFO [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Running task: index_realtime_expt_real_2014-07-01T03:00:00.000-07:00_0_0_gnfndggh
2014-09-05 04:24:42,547 INFO [task-runner-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Performing action for task[index_realtime_expt_real_2014-07-01T03:00:00.000-07:00_0_0_gnfndggh]: LockListAction{}
2014-09-05 04:24:42,559 ERROR [task-runner-0] io.druid.curator.discovery.ServerDiscoverySelector - No server instance found
2014-09-05 04:24:42,560 WARN [task-runner-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Exception submitting action for task[index_realtime_expt_real_2014-07-01T03:00:00.000-07:00_0_0_gnfndggh]
java.io.IOException: Failed to locate service uri
	at io.druid.indexing.common.actions.RemoteTaskActionClient.submit(RemoteTaskActionClient.java:83)
	at io.druid.indexing.common.task.AbstractTask.getTaskLocks(AbstractTask.java:160)
	at io.druid.indexing.common.task.RealtimeIndexTask.run(RealtimeIndexTask.java:195)
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:219)
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:198)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.metamx.common.ISE: Cannot find instance of indexer to talk to!
	at io.druid.indexing.common.actions.RemoteTaskActionClient.getServiceUri(RemoteTaskActionClient.java:134)
	at io.druid.indexing.common.actions.RemoteTaskActionClient.submit(RemoteTaskActionClient.java:80)
	... 8 more
2014-09-05 04:24:42,562 INFO [task-runner-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Will try again in [PT60S].
Deepak

...

Deepak Jain

unread,
Sep 5, 2014, 12:26:46 AM9/5/14
to druid-de...@googlegroups.com
Exception in Tranquility client

4009 [ClusteredBeam-ZkFuturePool-035edd5c-b8bd-45c6-b98e-4c77ed8bba54] WARN finagle - Name resolution is pending

4011 [ClusteredBeam-ZkFuturePool-035edd5c-b8bd-45c6-b98e-4c77ed8bba54] INFO com.metamx.tranquility.finagle.FinagleRegistry - Created client for service: druid:firehose:expt_real-03-0000-0000

94058 [Hashed wheel timer #1] WARN com.metamx.tranquility.beam.ClusteredBeam - Emitting alert: [anomaly] Failed to propagate events: druid:prod:overlord/expt_real

{

  "eventCount" : 1,

  "timestamp" : "2014-07-01T03:00:00.000-07:00",

  "beams" : "HashPartitionBeam(DruidBeam(timestamp = 2014-07-01T03:00:00.000-07:00, partition = 0, tasks = [index_realtime_expt_real_2014-07-01T03:00:00.000-07:00_0_0_gnfndggh/expt_real-03-0000-0000]))"

}

com.twitter.finagle.GlobalRequestTimeoutException: exceeded 1.minutes+30.seconds to druid:firehose:expt_real-03-0000-0000 while waiting for a response for the request, including retries (if applicable)

at com.twitter.finagle.NoStacktrace(Unknown Source)

94085 [Hashed wheel timer #1] INFO com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2014-09-04T21:26:05.172-07:00","service":"tranquility","host":"localhost","severity":"anomaly","description":"Failed to propagate events: druid:prod:overlord/expt_real","data":{"exceptionType":"com.twitter.finagle.GlobalRequestTimeoutException","exceptionStackTrace":"com.twitter.finagle.GlobalRequestTimeoutException: exceeded 1.minutes+30.seconds to druid:firehose:expt_real-03-0000-0000 while waiting for a response for the request, including retries (if applicable)\n\tat com.twitter.finagle.NoStacktrace(Unknown Source)\n","timestamp":"2014-07-01T03:00:00.000-07:00","beams":"HashPartitionBeam(DruidBeam(timestamp = 2014-07-01T03:00:00.000-07:00, partition = 0, tasks = [index_realtime_expt_real_2014-07-01T03:00:00.000-07:00_0_0_gnfndggh/expt_real-03-0000-0000]))","eventCount":1,"exceptionMessage":"exceeded 1.minutes+30.seconds to druid:firehose:expt_real-03-0000-0000 while waiting for a response for the request, including retries (if applicable)"}}]

0

...

Nishant Bangarwa

unread,
Sep 5, 2014, 12:39:23 AM9/5/14
to druid-de...@googlegroups.com
Hi Deepak, 
It looks like your middle manager's runtime.properties is missing the indexing service name, or the name is misconfigured, e.g.:
druid.selectors.indexing.serviceName=overlord




Deepak Jain

unread,
Sep 5, 2014, 1:00:32 AM9/5/14
to druid-de...@googlegroups.com
This is my ZK tree:

[zk: localhost:2182(CONNECTED) 9] ls /prod/discovery
[druid:prod:coordinator, druid, druid:prod:broker, druid:prod:overlord]
[zk: localhost:2182(CONNECTED) 10] ls /prod/discovery/druid
[prod]
[zk: localhost:2182(CONNECTED) 11] ls /prod/discovery/druid/prod
[overlord]
[zk: localhost:2182(CONNECTED) 12] ls /prod/discovery/druid/prod/overlord
[]
[zk: localhost:2182(CONNECTED) 13] ls /prod/discovery/druid:prod:overlord
[5178febd-18a6-4975-b5be-abc3dc6bea78]
[zk: localhost:2182(CONNECTED) 14] ls /prod/discovery/druid:prod:overlord/5178febd-18a6-4975-b5be-abc3dc6bea78
[]
[zk: localhost:2182(CONNECTED) 15] get /prod/discovery/druid:prod:overlord/5178febd-18a6-4975-b5be-abc3dc6bea78
{"name":"druid:prod:overlord","id":"5178febd-18a6-4975-b5be-abc3dc6bea78","address":"druid-overlord.dev.com","port":8080,"sslPort":null,"payload":null,"registrationTimeUTC":1409916626601,"serviceType":"DYNAMIC","uriSpec":null}
cZxid = 0x5b
ctime = Fri Sep 05 04:53:33 GMT-07:00 2014
mZxid = 0x5b
mtime = Fri Sep 05 04:53:33 GMT-07:00 2014
pZxid = 0x5b
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x14845a7a280000e
dataLength = 254
numChildren = 0
[zk: localhost:2182(CONNECTED) 16] 



And the runtime.props of MM is

druid.selectors.indexing.serviceName=druid/prod/overlord

...

Deepak Jain

unread,
Sep 5, 2014, 1:05:24 AM9/5/14
to druid-de...@googlegroups.com
The overlord's runtime.properties has

druid.service=druid/prod/overlord

...

Deepak Jain

unread,
Sep 5, 2014, 5:57:00 AM9/5/14
to druid-de...@googlegroups.com
I am able to ingest data through tranquility.

Lessons
1. The MM runtime.properties must have druid.selectors.indexing.serviceName=druid:prod:overlord while the overlord has druid.service=druid/prod/overlord (discovery appears to register the service name with colons in place of slashes).
2. Druid dimension and aggregate names must be in lower case (in the tranquility client; convert the incoming string message to lower case).
3. The timestamp of an event must be within the windowPeriod.
4. Tranquility client: .timestampSpec(new TimestampSpec("tstamp", "auto")) must have "auto" as the format.
5. POM dependency:
        <dependency>
            <groupId>com.metamx</groupId>
            <artifactId>tranquility_2.10</artifactId>
            <version>0.2.16</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.zookeeper</groupId>
                    <artifactId>zookeeper</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.4.6</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-jaxrs</artifactId>
            <version>1.9.13</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-xc</artifactId>
            <version>1.9.13</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-core-asl</artifactId>
            <version>1.9.13</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.9.13</version>
        </dependency>

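The colon-vs-slash service-name lesson above can be sketched in a few lines. The replace rule here is an assumption inferred from the ZK tree posted earlier in this thread (druid.service=druid/prod/overlord showing up in discovery as druid:prod:overlord), not taken from the Druid source:

```java
// Hypothetical sketch: Druid's discovery announcer appears to register
// service names with ':' in place of '/' (inferred from the ZK tree in
// this thread, not from the Druid source).
public class DiscoveryName {
    static String discoveryName(String serviceName) {
        return serviceName.replace('/', ':');
    }

    public static void main(String[] args) {
        // Overlord config: druid.service=druid/prod/overlord
        System.out.println(discoveryName("druid/prod/overlord"));
        // prints the node name visible under /prod/discovery:
        // druid:prod:overlord
    }
}
```

So the client (and the MM's druid.selectors.indexing.serviceName) must use the announced colon form, while the overlord's own config keeps the slash form.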

Deepak

...

Deepak Jain

unread,
Sep 5, 2014, 5:58:32 AM9/5/14
to druid-de...@googlegroups.com
And the most important of all

.tuning(ClusteredBeamTuning.create(Granularity.HOUR, new Period("PT0M"), new Period("PT60M"), 1, 1))

Here PT60M is the windowPeriod. The PT2Y (year) format is not supported; it will cause your task to start and then be killed. Once the task is killed (or shut down), Druid deletes its logs under the default settings (this could have been avoided).
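The period-format point can be sanity-checked with ISO-8601 duration parsing. java.time is used here purely for illustration (tranquility itself takes Joda-Time Period objects, whose grammar differs slightly), so treat this as a sketch of the syntax rule, not of the tranquility API:

```java
import java.time.Duration;
import java.time.format.DateTimeParseException;

// In ISO-8601 durations, time-based fields come after the "T" designator,
// so "PT60M" (60 minutes) parses while "PT2Y" (years after "T") does not.
public class WindowPeriodCheck {
    static boolean parses(String iso) {
        try {
            Duration.parse(iso);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(parses("PT60M")); // true  - a valid windowPeriod
        System.out.println(parses("PT2Y"));  // false - years cannot follow "T"
    }
}
```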


Regards,

deepak

...

Tamil selvan R.S

unread,
Mar 14, 2015, 8:24:56 AM3/14/15
to druid-de...@googlegroups.com
Hey, may I have the final working version of the code you have shared?
...

Govind Bhone

unread,
Apr 9, 2015, 10:23:20 AM4/9/15
to druid-de...@googlegroups.com
Hi All,
I am facing the below issues while working with Druid.
1. Getting the below error while starting broker and realtime nodes:
 
Caused by: com.fasterxml.jackson.databind.JsonMappingException: No content to map due to end-of-input
 at [Source: [B@611fd81f; line: 1, column: 1]
 at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148) ~[jackson-databind-2.4.4.jar:2.4.4]
 at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3110) ~[jackson-databind-2.4.4.jar:2.4.4]
 at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3051) ~[jackson-databind-2.4.4.jar:2.4.4]
 at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2230) ~[jackson-databind-2.4.4.jar:2.4.4]
 at io.druid.client.ServerInventoryView$
 
2. Getting the below error while sending data from the Tranquility client using finagle:
 
[error] (run-main) java.lang.IllegalStateException: Failed to save new beam for identifier[broker/wikipedia] timestamp[2015-04-09T22:00:00.000+08:00]
java.lang.IllegalStateException: Failed to save new beam for identifier[broker/wikipedia] timestamp[2015-04-09T22:00:00.000+08:00]
 at com.metamx.tranquility.beam.ClusteredBeam$$anonfun$2.applyOrElse(ClusteredBeam.scala:264)
 at com.metamx.tranquility.beam.ClusteredBeam$$anonfun$2.applyOrElse(ClusteredBeam.scala:261)
 at com.twitter.util.Future$$anonfun$rescue$1.apply(Future.scala:843)
 at com.twitter.util.Future$$anonfun$rescue$1.apply(Future.scala:841)
 at com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:100)
 at com.twitter.util.Promise$Transformer.k(Promise.scala:100)
 at com.twitter.util.Promise$Transformer.apply(Promise.scala:110)
 at com.twitter.util.Promise$Transformer.apply(Promise.scala:91)
 at com.twitter.util.Promise$$anon$2.run(Promise.scala:345)
 at com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:186)
 at com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:157)
 at com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:212)
 at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:86)
 at com.twitter.util.Promise.runq(Promise.scala:331)
 at com.twitter.util.Promise.updateIfEmpty(Promise.scala:642)
 at com.twitter.util.ExecutorServiceFuturePool$$anon$2.run(FuturePool.scala:112)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)

 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
Caused by: com.metamx.tranquility.druid.IndexServicePermanentException: Failed to parse response
 at com.metamx.tranquility.druid.IndexService$$anonfun$call$1$$anonfun$apply$7$$anonfun$apply$1.applyOrElse(IndexService.scala:91)
 at com.metamx.tranquility.druid.IndexService$$anonfun$call$1$$anonfun$apply$7$$anonfun$apply$1.applyOrElse(IndexService.scala:90)
 at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
 at com.metamx.common.scala.exception$ExceptionOps.mapException(exception.scala:63)
 at com.metamx.tranquility.druid.IndexService$$anonfun$call$1$$anonfun$apply$7.apply(IndexService.scala:90)
 at com.metamx.tranquility.druid.IndexService$$anonfun$call$1$$anonfun$apply$7.apply(IndexService.scala:86)
 at com.twitter.util.Future$$anonfun$map$1$$anonfun$apply$6.apply(Future.scala:863)
 at com.twitter.util.Try$.apply(Try.scala:13)
 at com.twitter.util.Future$.apply(Future.scala:90)
 at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:863)
 at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:863)
 at com.twitter.util.Future$$anonfun$flatMap$1.apply(Future.scala:824)
 at com.twitter.util.Future$$anonfun$flatMap$1.apply(Future.scala:823)
 at com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:100)
 at com.twitter.util.Promise$Transformer.k(Promise.scala:100)
 at com.twitter.util.Promise$Transformer.apply(Promise.scala:110)
 at com.twitter.util.Promise$Transformer.apply(Promise.scala:91)
 at com.twitter.util.Promise$$anon$2.run(Promise.scala:345)
 at com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:186)
 at com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:157)
 at com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:212)
 at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:86)
 at com.twitter.util.Promise.runq(Promise.scala:331)
 at com.twitter.util.Promise.updateIfEmpty(Promise.scala:642)
 at com.twitter.util.Promise.update(Promise.scala:615)
 at com.twitter.util.Promise.setValue(Promise.scala:591)
 at com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:76)
 at com.twitter.finagle.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:45)
 at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
 at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
 at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
 at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
 at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
 at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
 at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
 at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
 at com.twitter.finagle.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:86)
 at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
 at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
 at com.twitter.finagle.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:35)
 at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
 at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
 at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
 at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
 at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
 at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
 at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
 at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
 at [Source: <html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 404 </title>
</head>
<body>
<h2>HTTP ERROR: 404</h2>
<p>Problem accessing /druid/indexer/v1/task. Reason:
<pre>    Not Found</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>
 
Please help me to solve this issue.
Thanks in advance.

 
 
 
 

Prajwal Tuladhar

unread,
Apr 9, 2015, 10:34:05 AM4/9/15
to druid-de...@googlegroups.com
Hi Govind,

The first error is probably happening because of an empty line somewhere in your ingested data stream.

The second error is related to JSON serialization; the way you are serializing druid event data is not working: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null').
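A minimal sketch of the first point: strip blank lines from the stream before they reach the JSON parser. The class and method names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// An empty payload is what produces "No content to map due to end-of-input",
// so drop empty and whitespace-only records before parsing or sending them.
public class BlankLineFilter {
    static List<String> clean(List<String> raw) {
        List<String> out = new ArrayList<>();
        for (String line : raw) {
            String trimmed = line.trim();
            if (!trimmed.isEmpty()) out.add(trimmed);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> raw = List.of("{\"page\":\"a\"}", "", "{\"page\":\"b\"}", "   ");
        System.out.println(clean(raw)); // only the two JSON records remain
    }
}
```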




--
--
Cheers,
Praj

Govind Bhone

unread,
Apr 14, 2015, 1:54:44 AM4/14/15
to druid-de...@googlegroups.com
Hi All,
I am able to send data through tranquility to the indexing overlord service, but performance is very slow:
I am able to send 6 million messages in 15 minutes.
How can we improve the performance?
 
Below is the message which I am sending through tranquility:
 
SimpleEvent(now + 17, Map("page" -> "Striker Eureka", "language" -> "en", "user" -> "speed", "unpatrolled" -> "false", "newPage" -> "true", "robot" -> "true", "anonymous" -> "false", "namespace" -> "wikipedia", "continent" -> "Australia", "country" -> "Australia", "region" -> "Cantebury", "city" -> "Syndey", "added" -> 459, "deleted" -> 129, "delta" -> 330))
 
and the builder is defined as follows:
  def newBuilder(index: String, firehose: String, dataSource: String, discoveryPath: String, curator: CuratorFramework, timekeeper: Timekeeper): DruidBeams.Builder[SimpleEvent] = {
    val tuning = ClusteredBeamTuning(
      segmentGranularity = Granularity.HOUR,
      windowPeriod = new Period("PT10M"),
      partitions = 1,
      replicants = 1)
    val rollup = DruidRollup(
      SpecificDruidDimensions(
        Vector("page")),
      IndexedSeq(new CountAggregatorFactory("sum")),
      QueryGranularity.MINUTE)
    val druidEnvironment = new DruidEnvironment(index, firehose)
    val druidLocation = DruidLocation.create(druidEnvironment, dataSource)
    DruidBeams.builder[SimpleEvent]()
      .curator(curator)
      .location(druidLocation)
      .discoveryPath(discoveryPath)
      .rollup(rollup)
      .tuning(tuning)
      .timekeeper(timekeeper)
      .timestampSpec(new TimestampSpec(TranquilityWriter.TimeColumn, TranquilityWriter.TimeFormat))
      .beamMergeFn(beams => new RoundRobinBeam(beams.toIndexedSeq))
  }
 
where:
  val timekeeper = new TestingTimekeeper
  val curator = CuratorFrameworkFactory.builder()
    .connectString("localhost:2181")
    .retryPolicy(new ExponentialBackoffRetry(500, 15, 10000))
    .build()
  val indexService = "overlord" // Your overlord's service name.
  val firehosePattern = "druid:firehose:%s" // Make up a service pattern, include %s somewhere in it.
  val discoveryPath = "/druid/discovery"
  val dataSource = "wikipedia"
 
Thanks in Advance

Gian Merlino

unread,
Apr 14, 2015, 10:25:22 AM4/14/15
to druid-de...@googlegroups.com
Hi Govind,

Some things that you could try:

- If your bottleneck is on the sending side (tranquility) then having more threads sending to the same Beam/Service will give you better throughput. They're thread safe.

- If your bottleneck is on the receiving side (druid) then you could either try tuning the ingestion-related parameters (mostly: maxRowsInMemory, intermediatePersistPeriod, and heap size) or add more partitions. Each partition corresponds to one Druid indexing service task, and you will get more throughput by launching more tasks.
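The first suggestion can be sketched as below. sendBatch here is a stand-in counter, not the real tranquility Beam API; the point is only that one shared, thread-safe sender can be driven by many concurrent producer threads:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

// Many threads sending through one shared, thread-safe sender. A real
// program would hold a single Beam instance and call it from every thread.
public class ParallelSenders {
    static final AtomicLong sent = new AtomicLong();

    static void sendBatch(int batchSize) {
        sent.addAndGet(batchSize); // stand-in for propagating a batch of events
    }

    static long sendAll(int batches, int batchSize) throws Exception {
        sent.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < batches; i++) {
            futures.add(pool.submit(() -> sendBatch(batchSize)));
        }
        for (Future<?> f : futures) f.get(); // wait for all sends to finish
        pool.shutdown();
        return sent.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendAll(8, 1000)); // 8000
    }
}
```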

Govind Bhone

unread,
Apr 17, 2015, 2:12:32 AM4/17/15
to druid-de...@googlegroups.com
Hi All,
Suppose I have to push data from my application to Druid; which is the best way?

1. Sending data via the Tranquility client to the overlord service
2. Sending data to a realtime node using a firehose setting like the below:
 
  "ioConfig": {
    "type": "realtime",
    "firehose": {
      "type": "irc",
      "nick": "wiki1234567890",
      "host": "irc.wikimedia.org",
      "channels": [
        "#en.wikipedia",
        "#fr.wikipedia",
        "#de.wikipedia",
        "#ja.wikipedia"
      ]
    },
    "plumber": {
      "type": "realtime"
    }
 
If I send it to the overlord service, how does the workflow work? Will it hand off those messages to a realtime node?
 
 
Thanks,
Govind Bhone
 
 
 

Gian Merlino

unread,
Apr 17, 2015, 2:17:15 PM4/17/15
to druid-de...@googlegroups.com
I think the best way to push data to Druid from an application is to either use Kafka as an intermediate message bus (if you want to decouple things to make operations easier) or to push directly with tranquility.
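The "read from kafka, write to druid in a loop" shape Gian described earlier in the thread can be sketched like this. A BlockingQueue stands in for the Kafka consumer and a plain list for the druid sender; the names and the STOP sentinel are purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Consume loop sketch: block for the next message, stop on a sentinel.
// A real consumer would loop forever, batch messages, and send each batch
// to druid through the tranquility beam instead of collecting it in a list.
public class ConsumeLoop {
    static List<String> drainUntilSentinel(BlockingQueue<String> queue)
            throws InterruptedException {
        List<String> delivered = new ArrayList<>();
        while (true) {
            String msg = queue.take();      // blocks, like a Kafka poll
            if (msg.equals("STOP")) break;  // illustrative shutdown signal
            delivered.add(msg);             // stand-in for the druid send
        }
        return delivered;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("{\"page\":\"a\"}");
        queue.put("{\"page\":\"b\"}");
        queue.put("STOP");
        System.out.println(drainUntilSentinel(queue)); // the two JSON records
    }
}
```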