Error compiling 0.6.129-snapshot with hadoop 0.23.9


hmx...@gmail.com

Jul 12, 2014, 3:05:40 AM
to druid-de...@googlegroups.com
I am trying to compile the source to make it work with hadoop version 0.23.9.

I am getting the following error:

[INFO] 4 errors
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] druid ............................................. SUCCESS [0.002s]
[INFO] druid-common ...................................... SUCCESS [8.021s]
[INFO] druid-processing .................................. SUCCESS [33.532s]
[INFO] druid-server ...................................... SUCCESS [52.085s]
[INFO] druid-examples .................................... SUCCESS [11.219s]
[INFO] druid-indexing-hadoop ............................. FAILURE [2.933s]
[INFO] druid-indexing-service ............................ SKIPPED
[INFO] druid-services .................................... SKIPPED
[INFO] druid-cassandra-storage ........................... SKIPPED
[INFO] druid-hdfs-storage ................................ SKIPPED
[INFO] druid-s3-extensions ............................... SKIPPED
[INFO] druid-kafka-seven ................................. SKIPPED
[INFO] druid-kafka-eight ................................. SKIPPED
[INFO] druid-rabbitmq .................................... SKIPPED
[INFO] druid-histogram ................................... SKIPPED

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project druid-indexing-hadoop: Compilation failure: Compilation failure:
[ERROR] /home/xli/druid/indexing-hadoop/src/main/java/io/druid/indexer/IndexGeneratorJob.java:[64,44] error: cannot find symbol
[ERROR] symbol:   class CombineTextInputFormat
[ERROR] location: package org.apache.hadoop.mapreduce.lib.input
[ERROR] /home/xli/druid/indexing-hadoop/src/main/java/io/druid/indexer/DetermineHashedPartitionsJob.java:[51,44] error: cannot find symbol
[ERROR] symbol:   class CombineTextInputFormat
[ERROR] location: package org.apache.hadoop.mapreduce.lib.input
[ERROR] /home/xli/druid/indexing-hadoop/src/main/java/io/druid/indexer/IndexGeneratorJob.java:[151,32] error: cannot find symbol
[ERROR] symbol:   class CombineTextInputFormat
[ERROR] location: class IndexGeneratorJob


Does the current version even work with older versions of Hadoop?

Is there a way to bypass it?

The reason I am trying to compile from source is that I got the following error with 0.6.121 while trying to ingest a Hadoop file with the overlord service:

2014-07-12 04:23:45,703 ERROR [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_fact_test_2014-07-12T04:21:38.892Z, type=index_hadoop, dataSource=fact_test}]
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:234)
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:219)
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:198)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.ExceptionInInitializerError
	at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1342)
	at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1295)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1015)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:972)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:227)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:216)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:838)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:819)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:718)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:707)
	at io.druid.indexer.JobHelper$1.getOutput(JobHelper.java:87)
	at io.druid.indexer.JobHelper$1.getOutput(JobHelper.java:83)
	at com.google.common.io.ByteStreams$7.openStream(ByteStreams.java:1000)
	at com.google.common.io.ByteSource.copyTo(ByteSource.java:203)
	at com.google.common.io.ByteStreams.copy(ByteStreams.java:157)
	at io.druid.indexer.JobHelper.setupClasspath(JobHelper.java:80)
	at io.druid.indexer.IndexGeneratorJob.run(IndexGeneratorJob.java:173)
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:135)
	at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:80)
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:273)
	... 12 more
Caused by: java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
	at com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
	at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.getSerializedSize(DataTransferProtos.java:6657)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader.<clinit>(PacketHeader.java:37)
	... 32 more
2014-07-12 04:23:45,709 INFO [task-runner-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_fact_test_2014-07-12T04:21:38.892Z",
  "status" : "FAILED",
  "duration" : 120222
}

Gian Merlino

Jul 13, 2014, 11:05:10 AM
to druid-de...@googlegroups.com
It looks like hadoop 0.23.9 doesn't contain CombineTextInputFormat. You can try editing IndexGeneratorJob and DetermineHashedPartitionsJob to just throw an exception if config.isCombineText() is true; it's optional and not used by default, so you might not notice much of a difference.
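A minimal, self-contained sketch of that guard (the class and method names below are hypothetical stand-ins for the real `config.isCombineText()` check inside those jobs; the actual edit would replace the `CombineTextInputFormat` code path):

```java
// Hypothetical stand-in for the combineText guard Gian describes: instead of
// referencing CombineTextInputFormat (absent in hadoop 0.23.9), fail fast
// when the optional combineText setting is enabled.
public class CombineTextGuard {
    // stand-in for config.isCombineText() from the real job config
    static void checkCombineText(boolean isCombineText) {
        if (isCombineText) {
            throw new UnsupportedOperationException(
                "combineText requires CombineTextInputFormat, which hadoop 0.23.9 does not provide");
        }
    }

    public static void main(String[] args) {
        checkCombineText(false); // default: combineText is off, nothing happens
        try {
            checkCombineText(true); // combineText enabled: fail fast with a clear message
        } catch (UnsupportedOperationException e) {
            System.out.println("guard fired: " + e.getMessage());
        }
    }
}
```

Since combineText is off by default, the guard would never fire in a default configuration.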

hmx...@gmail.com

Jul 15, 2014, 3:00:42 PM
to druid-de...@googlegroups.com
Thanks Gian.

Now I am having the following issue trying to start the historical node:

I have put druid-hdfs-storage-0.6.129-SNAPSHOT.jar under the lib directory, and the lib dir is on the classpath. It seems it went to the remote repository to fetch that jar anyway. Is there a way to override that?


2014-07-15 18:54:59,363 INFO [main] io.druid.initialization.Initialization - Loading extension[io.druid.extensions:druid-hdfs-storage:0.6.129-SNAPSHOT] for class[io.druid.cli.CliCommandCreator]
2014-07-15 18:55:00,870 ERROR [main] io.druid.initialization.Initialization - Unable to resolve artifacts for [io.druid.extensions:druid-hdfs-storage:jar:0.6.129-SNAPSHOT (runtime) -> [] < [central (http://repo1.maven.org/maven2/, releases+snapshots),  (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, releases+snapshots)]].
java.lang.NullPointerException
        at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:361)
        at io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
        at io.druid.initialization.Initialization.getClassLoaderForCoordinates(Initialization.java:199)
        at io.druid.initialization.Initialization.getFromExtensions(Initialization.java:141)
        at io.druid.cli.Main.main(Main.java:78)
Exception in thread "main" java.lang.NullPointerException
        at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:361)
        at io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
        at io.druid.initialization.Initialization.getClassLoaderForCoordinates(Initialization.java:199)
        at io.druid.initialization.Initialization.getFromExtensions(Initialization.java:141)
        at io.druid.cli.Main.main(Main.java:78)

Fangjin Yang

Jul 15, 2014, 3:52:47 PM
to druid-de...@googlegroups.com
Hi,

When you start the historical node, there should be some information about the modules it found to load. Can you share those logs?

Thanks,
FJ

hmx...@gmail.com

Jul 15, 2014, 4:30:59 PM
to druid-de...@googlegroups.com
This is the full log:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/diy_stg/druid-services-0.6.129-SNAPSHOT/lib/slf4j-log4j12-1.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/y/share/hadoop-0.23.9.11.1403031814/share/hadoop/common/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2014-07-15 18:54:58,609 INFO [main] io.druid.guice.PropertiesModule - Loading properties from runtime.properties
2014-07-15 18:54:58,648 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.0.1.Final
2014-07-15 18:54:59,213 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[class io.druid.guice.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, coordinates=[io.druid.extensions:druid-hdfs-storage:0.6.129-SNAPSHOT], localRepository='/home/diy_stg/.m2/repository', remoteRepositories=[http://repo1.maven.org/maven2/, https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local]}]
2014-07-15 18:54:59,363 INFO [main] io.druid.initialization.Initialization - Loading extension[io.druid.extensions:druid-hdfs-storage:0.6.129-SNAPSHOT] for class[io.druid.cli.CliCommandCreator]
2014-07-15 18:55:00,870 ERROR [main] io.druid.initialization.Initialization - Unable to resolve artifacts for [io.druid.extensions:druid-hdfs-storage:jar:0.6.129-SNAPSHOT (runtime) -> [] < [central (http://repo1.maven.org/maven2/, releases+snapshots),  (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, releases+snapshots)]].
java.lang.NullPointerException
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:361)
at io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
at io.druid.initialization.Initialization.getClassLoaderForCoordinates(Initialization.java:199)
at io.druid.initialization.Initialization.getFromExtensions(Initialization.java:141)
at io.druid.cli.Main.main(Main.java:78)
Exception in thread "main" java.lang.NullPointerException
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:361)
at io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
at io.druid.initialization.Initialization.getClassLoaderForCoordinates(Initialization.java:199)
at io.druid.initialization.Initialization.getFromExtensions(Initialization.java:141)
at io.druid.cli.Main.main(Main.java:78)
Heap

Fangjin Yang

Jul 15, 2014, 6:19:20 PM
to druid-de...@googlegroups.com
You can set empty remote repositories with:
druid.extensions.remoteRepositories=[]

and set a local repository with:
druid.extensions.localRepository=
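Put together, the two lines in runtime.properties would look something like this (the localRepository path below is taken from the earlier log in this thread; adjust it to your environment):

```properties
# Resolve extensions from the local repository only; never fetch from remote repos.
druid.extensions.remoteRepositories=[]
druid.extensions.localRepository=/home/diy_stg/.m2/repository
```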

Although any jars on the classpath should be picked up if you specify the extensions config. Do you have the full logs from when you started the node? e.g. You should see logs such as:

2014-07-15 19:59:22,324 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.0.1.Final
2014-07-15 19:59:22,907 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[class io.druid.guice.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, coordinates=[io.druid.extensions:druid-s3-extensions:0.6.128, io.druid.extensions:druid-histogram:0.6.129-SNAPSHOT], localRepository='/Users/fangjin/.m2/repository', remoteRepositories=[http://repo1.maven.org/maven2/, https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local]}]
2014-07-15 19:59:23,062 INFO [main] io.druid.initialization.Initialization - Loading extension[io.druid.extensions:druid-s3-extensions:0.6.128] for class[io.druid.cli.CliCommandCreator]

hmx...@gmail.com

Jul 15, 2014, 6:39:00 PM
to druid-de...@googlegroups.com
Thanks Fangjin.

After changing my runtime.properties with those two lines, the historical server is up.

However, now I get a different error when trying to start the overlord and middle manager services.

Here is the full command line and the full error log:

DRUID_HOME=/home/diy_stg/druid-services-0.6.129-SNAPSHOT
DRUID_CONF=$DRUID_HOME/config
export JAVA_HOME=/home/y/share/gridjdk64-1.7.0_17

mkdir -p /home/y/logs/druid/ &> /dev/null

$JAVA_HOME/bin/java -server -Xmx4g -Xms4g -XX:NewSize=256m -XX:MaxNewSize=256m -XX:+UseConcMarkSweepGC \
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/home/y/tmp \
-classpath $DRUID_HOME/lib/*:$DRUID_CONF/overlord:$(hadoop classpath) \
io.druid.cli.Main server overlord &> /home/y/logs/druid/overlord.log &

-bash-4.1$ cat /home/y/logs/druid/overlord.log
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/diy_stg/druid-services-0.6.129-SNAPSHOT/lib/slf4j-log4j12-1.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/y/share/hadoop-0.23.9.11.1403031814/share/hadoop/common/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2014-07-15 22:34:31,144 INFO [main] io.druid.guice.PropertiesModule - Loading properties from runtime.properties
2014-07-15 22:34:31,192 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.0.1.Final
2014-07-15 22:34:31,799 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[class io.druid.guice.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, coordinates=[], localRepository='', remoteRepositories=[http://repo1.maven.org/maven2/, https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local]}]
2014-07-15 22:34:32,135 INFO [main] io.druid.initialization.Initialization - Adding local module[class io.druid.storage.hdfs.HdfsStorageDruidModule]
Exception in thread "main" com.google.inject.CreationException: Guice creation errors:

1) Could not find a suitable constructor in io.druid.indexing.common.config.TaskConfig. Classes must have either one (and only one) constructor annotated with @Inject or a zero-argument constructor that is not private.
  at io.druid.indexing.common.config.TaskConfig.class(TaskConfig.java:31)
  while locating io.druid.indexing.common.config.TaskConfig
    for parameter 1 at io.druid.indexing.overlord.ForkingTaskRunnerFactory.<init>(ForkingTaskRunnerFactory.java:54)
  at io.druid.cli.CliOverlord$1.configureRunners(CliOverlord.java:193)

1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:448)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:155)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at io.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:349)
at io.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:56)
at io.druid.cli.ServerRunnable.run(ServerRunnable.java:39)
at io.druid.cli.Main.main(Main.java:90)

Fangjin Yang

Jul 15, 2014, 7:13:34 PM
to druid-de...@googlegroups.com


--
You received this message because you are subscribed to the Google Groups "Druid Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-developm...@googlegroups.com.
To post to this group, send email to druid-de...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-development/328e1887-253b-4ae9-8668-81a0b5fd7afe%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

Fangjin Yang

Jul 16, 2014, 12:59:34 AM
to druid-de...@googlegroups.com
BTW hmxxyy, I often don't see msgs in the IRC channel unless someone pings me directly. I am under @fj in the channel.

hmx...@gmail.com

Jul 16, 2014, 1:37:11 AM
to druid-de...@googlegroups.com
Thanks, I will ping you next time.

The fix only fixed the overlord.

The middle manager now has the same kind of error; could you please take a look?

Exception in thread "main" 2.311: [GC 2.311: [ParNew: 19647K->1712K(19648K), 0.0063860 secs] 24728K->7611K(63360K), 0.0064470 secs] [Times: user=0.04 sys=0.00, real=0.00 secs]
com.google.inject.CreationException: Guice creation errors:

1) Could not find a suitable constructor in io.druid.indexing.common.config.TaskConfig. Classes must have either one (and only one) constructor annotated with @Inject or a zero-argument constructor that is not private.
  at io.druid.indexing.common.config.TaskConfig.class(TaskConfig.java:31)
  while locating io.druid.indexing.common.config.TaskConfig
    for parameter 1 at io.druid.indexing.overlord.ForkingTaskRunner.<init>(ForkingTaskRunner.java:98)
  at io.druid.cli.CliMiddleManager$1.configure(CliMiddleManager.java:81)

1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:448)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:155)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at io.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:349)
at io.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:56)
at io.druid.cli.ServerRunnable.run(ServerRunnable.java:39)
at io.druid.cli.Main.main(Main.java:90)

Fangjin Yang

Jul 16, 2014, 1:51:00 AM
to druid-de...@googlegroups.com

hmx...@gmail.com

Jul 16, 2014, 2:22:09 PM
to druid-de...@googlegroups.com
Now got the following with the latest code:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project druid-server: Compilation failure: Compilation failure:
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/NumberedShardSpec.java:[59,9] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class NumberedShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/LinearShardSpec.java:[49,9] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class LinearShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/HashBasedNumberedShardSpec.java:[78,9] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class HashBasedNumberedShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/SingleDimensionShardSpec.java:[99,9] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class SingleDimensionShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/NumberedShardSpec.java:[61,15] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class NumberedShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/NumberedShardSpec.java:[58,2] error: method does not override or implement a method from a supertype
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/LinearShardSpec.java:[51,15] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class LinearShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/LinearShardSpec.java:[48,2] error: method does not override or implement a method from a supertype
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/HashBasedNumberedShardSpec.java:[80,15] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class HashBasedNumberedShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/SingleDimensionShardSpec.java:[101,15] error: cannot find symbol
[ERROR] symbol:   class ShardSpecLookup
[ERROR] location: class SingleDimensionShardSpec
[ERROR] /home/lih/druid/server/src/main/java/io/druid/timeline/partition/SingleDimensionShardSpec.java:[98,2] error: method does not override or implement a method from a supertype

Fangjin Yang

Jul 16, 2014, 2:29:32 PM
to druid-de...@googlegroups.com
Hi, are you sure you've pulled the latest master? Did you also run a maven clean? We have continuous tests running on master every 5 mins and they seem to be okay. FWIW, this process may be a lot easier if you just worked off of stable, as master will have constant pull requests merged in that may disrupt things.


hmx...@gmail.com

Jul 16, 2014, 6:16:00 PM
to druid-de...@googlegroups.com
Thanks, I will check out 0.6.121 and give it a try.


hmx...@gmail.com

Jul 16, 2014, 9:01:41 PM
to druid-de...@googlegroups.com
I compiled 0.6.121 with hadoop 0.23.9. After I start up the coordinator and go to the web console, I get an error like the one below.

What am I missing here?

Thanks.

HTTP ERROR: 500

Problem accessing /cluster.html. Reason:

    java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.getServletContext()Ljavax/servlet/ServletContext;

Within the server log, it has

java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.getServletContext()Ljavax/servlet/ServletContext;
        at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:315)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1622)
        at io.druid.server.http.RedirectFilter.doFilter(RedirectFilter.java:71)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1622)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:549)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:219)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1111)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:478)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:462)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:279)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
        at java.lang.Thread.run(Thread.java:722)
2014-07-17 00:41:42,471 WARN [qtp531696287-99] org.eclipse.jetty.util.thread.QueuedThreadPool -
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.isAsyncStarted()Z
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:648)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:219)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1111)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:478)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:462)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:279)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
        at java.lang.Thread.run(Thread.java:722)

Fangjin Yang

Jul 16, 2014, 9:05:43 PM
to druid-de...@googlegroups.com
How are you accessing the coordinator console? Are you sure you have the right port? Also, can you try accessing / ?



Gian Merlino

Jul 16, 2014, 9:11:28 PM
to druid-de...@googlegroups.com
Can you try excluding javax.servlet:servlet-api from hadoop when you depend on it in the pom? I think that version of hadoop pulls in an older version than the one that druid needs. If that doesn't help, try running mvn dependency:list with the stock druid and with your hadoop changes, and see if anything else non-hadoop-related has suspiciously changed versions.
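For reference, such an exclusion in the pom would look roughly like this (the version is the one from this thread; the exact dependency block in druid's pom may differ):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>0.23.9</version>
  <exclusions>
    <!-- hadoop 0.23.x pulls in an old servlet-api that conflicts with
         the servlet 3.0 API that druid's jetty expects -->
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```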

hmx...@gmail.com

Jul 16, 2014, 9:25:56 PM
to druid-de...@googlegroups.com
I am sure I am using the right port; otherwise the error would not appear in the server log, and the error would be different. The coordinator only listens on one port.

hmx...@gmail.com

Jul 16, 2014, 9:28:01 PM
to druid-de...@googlegroups.com
Thanks, I will try it out.

hmx...@gmail.com

Jul 17, 2014, 3:02:56 AM
to druid-de...@googlegroups.com
Yes, that was the reason.

Yet I've got another error when trying to run an index task via the overlord. I think it is due to a protobuf version conflict. Druid uses 2.5.0:

            <dependency>
                <groupId>com.google.protobuf</groupId>
                <artifactId>protobuf-java</artifactId>
                <version>2.5.0</version>
            </dependency>

and hadoop 0.23.9 uses 2.4.0a:

            <dependency>
                <groupId>com.google.protobuf</groupId>
                <artifactId>protobuf-java</artifactId>
                <version>2.4.0a</version>
            </dependency>

task[HadoopIndexTask{id=index_hadoop_fact_test_2014-07-17T06:49:10.246Z, type=index_hadoop, dataSource=fact_test}]
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:234)
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:219)
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:198)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.ExceptionInInitializerError
	at org.apache.hadoop.hdfs.DFSOutputStream.computePacketChunkSize(DFSOutputStream.java:1321)
	at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1274)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1015)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:972)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:227)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:216)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:838)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:819)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:718)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:707)
	at io.druid.indexer.JobHelper$1.getOutput(JobHelper.java:87)
	at io.druid.indexer.JobHelper$1.getOutput(JobHelper.java:83)
	at com.google.common.io.ByteStreams$7.openStream(ByteStreams.java:1000)
	at com.google.common.io.ByteSource.copyTo(ByteSource.java:203)
	at com.google.common.io.ByteStreams.copy(ByteStreams.java:157)
	at io.druid.indexer.JobHelper.setupClasspath(JobHelper.java:80)
	at io.druid.indexer.IndexGeneratorJob.run(IndexGeneratorJob.java:174)
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:135)
	at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:80)
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:273)
	... 12 more
Caused by: java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
	at com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
	at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.getSerializedSize(DataTransferProtos.java:6657)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader.<clinit>(PacketHeader.java:37)
	... 32 more
2014-07-17 06:49:19,978 INFO [task-runner-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_fact_test_2014-07-17T06:49:10.246Z",
  "status" : "FAILED",
  "duration" : 3722
}

Nishant Bangarwa

Jul 17, 2014, 9:03:53 AM
to druid-de...@googlegroups.com
Hi, 
Yep, older versions of Hadoop need protobuf 2.4.0a, which is incompatible with protobuf 2.5.0. Downgrading the protobuf version and recompiling should resolve this.



hmx...@gmail.com

Jul 17, 2014, 10:57:49 AM
to druid-de...@googlegroups.com
What would be the changes needed to downgrade to 2.4.0a?

I have changed the pom.xml, and now it won't compile.

Could you please list the other modifications needed?

hmx...@gmail.com

Jul 17, 2014, 1:07:26 PM
to druid-de...@googlegroups.com
This is the error message:
[ERROR] /home/xli/druid/processing/src/test/java/io/druid/data/input/ProtoTestEventWrapper.java:[91,27] error: getUnknownFields() in ProtoTestEvent cannot override getUnknownFields() in GeneratedMessage


hmx...@gmail.com

Jul 17, 2014, 3:06:10 PM
to druid-de...@googlegroups.com
I managed to make the build pass after commenting out the override method. Still no luck, though.

The indexing task keeps using org.apache.hadoop:hadoop-client as the "hadoopDependencyCoordinates".

I tried replacing it with hadoop-mapreduce-client, hadoop-common, and hadoop-hdfs; all attempts ended with the same error.

Not sure which jar it is using now to communicate with hadoop.
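For context, the coordinates mentioned above come from the Hadoop index task spec. A partial, hypothetical fragment (other required task fields omitted; the version string is illustrative):

```json
{
  "type": "index_hadoop",
  "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:0.23.9"]
}
```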

hmx...@gmail.com

Jul 17, 2014, 5:50:53 PM
to druid-de...@googlegroups.com
After I removed ~/.m2 and did a clean reinstallation, it worked.

Thanks guys for the direction and patience. Really appreciate it.