Exception in HiveSplitQueue

martin....@liquidm.com

Nov 8, 2013, 11:34:55 AM
to presto...@googlegroups.com
Hey guys,

Nice release! Quick question: I get an exception inside the Presto server when I run a SELECT query (DESCRIBE works). hadoop1 is our namenode, but I don't understand why I get "Response is null" as an exception (Hadoop and Hive work without any issues on these machines). Any hints?

(IP/hostname are anonymised)

java.lang.RuntimeException: java.io.IOException: Failed on local exception: java.io.IOException: Response is null.; Host Details : local host is: "dev3/111.23.123.123"; destination host is: "hadoop1.XYZ.net":9000;
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-15.0.jar:na]
at com.facebook.presto.hive.HiveSplitIterable$HiveSplitQueue.computeNext(HiveSplitIterable.java:433) ~[na:na]
at com.facebook.presto.hive.HiveSplitIterable$HiveSplitQueue.computeNext(HiveSplitIterable.java:392) ~[na:na]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-15.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-15.0.jar:na]
at com.facebook.presto.execution.SqlStageExecution.startTasks(SqlStageExecution.java:463) [presto-main-0.52.jar:0.52]
at com.facebook.presto.execution.SqlStageExecution.access$300(SqlStageExecution.java:80) [presto-main-0.52.jar:0.52]
at com.facebook.presto.execution.SqlStageExecution$5.run(SqlStageExecution.java:435) [presto-main-0.52.jar:0.52]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_40]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_40]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_40]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_40]
at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]

Ciao,
Martin

Dain Sundstrom

Nov 8, 2013, 12:01:05 PM
to presto...@googlegroups.com, martin....@liquidm.com
That is an error from within the HDFS client (org.apache.hadoop.ipc.Client:941 in my code), and after a quick review of that code, it looks like it means the client could not parse the server response. My guess is that the client we bundle with the presto-hive-cdh4 plugin is not compatible with your version of Hadoop. That bundle includes Cloudera Hadoop version 2.0.0-cdh4.3.0. What version are you using?
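For anyone triaging the same mismatch, the quickest check is to compare the Hadoop version running on the cluster against the client Presto bundles. A minimal sketch (`hadoop version` is the standard Hadoop CLI; the Presto plugin path below is a typical layout, not guaranteed for your installation):

```shell
# Version of Hadoop actually running on the cluster; its RPC wire format
# must be compatible with the client Presto bundles (2.0.0-cdh4.3.0 here).
hadoop version

# Hadoop client jars that Presto loaded, taken from the plugin directory
# (path is an assumption -- adjust to where your Presto is installed).
ls /usr/lib/presto/plugin/hive-cdh4/ | grep hadoop
```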

martin....@liquidm.com

Nov 8, 2013, 12:31:52 PM
to presto...@googlegroups.com, martin....@liquidm.com
Thanks for the quick reply. Indeed we run an older version. We are in the process of upgrading anyway, so I will give it a try after the upgrade!

Ciao,
Martin

baldo.f...@gmail.com

Nov 10, 2013, 6:00:19 PM
to presto...@googlegroups.com, martin....@liquidm.com
Hi,
Thanks for the information. I was actually trying to run it on AWS EMR (Hadoop 2.2.0), and I am getting the same exception.

Ciao,
Federico

Federico

Nov 11, 2013, 9:23:17 AM
to presto...@googlegroups.com, martin....@liquidm.com
Hi,
I have tried rebuilding presto-hadoop-cdh4 against Hadoop 2.2.0 instead of the Cloudera release by modifying the following in the pom file:

<dep.hadoop.version>2.2.0</dep.hadoop.version>
<dep.slf4j.version>1.7.5</dep.slf4j.version>

The build succeeds; however, I am getting the same result:

2013-11-11T14:13:06.602+0000    ERROR   Stage-20131111_141305_00004_nij5n.1-197 com.facebook.presto.execution.SqlStageExecution Error while starting stage 20131111_141305_00004_nij5n.1
java.lang.RuntimeException: java.io.IOException: Failed on local exception: com.facebook.presto.hadoop.shaded.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "ip-10-104-240-47.eu-west-1.compute.internal/10.104.240.47"; destination host is: "ip-10-104-240-47.eu-west-1.compute.internal":9000;
        at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-15.0.jar:na]
        at com.facebook.presto.hive.HiveSplitIterable$HiveSplitQueue.computeNext(HiveSplitIterable.java:433) ~[na:na]
        at com.facebook.presto.hive.HiveSplitIterable$HiveSplitQueue.computeNext(HiveSplitIterable.java:392) ~[na:na]
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-15.0.jar:na]
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-15.0.jar:na]
        at com.facebook.presto.execution.SqlStageExecution.startTasks(SqlStageExecution.java:463) [presto-main-0.54-SNAPSHOT.jar:0.54-SNAPSHOT]
        at com.facebook.presto.execution.SqlStageExecution.access$300(SqlStageExecution.java:80) [presto-main-0.54-SNAPSHOT.jar:0.54-SNAPSHOT]
        at com.facebook.presto.execution.SqlStageExecution$5.run(SqlStageExecution.java:435) [presto-main-0.54-SNAPSHOT.jar:0.54-SNAPSHOT]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_45]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_45]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_45]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_45]
        at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
Caused by: java.io.IOException: Failed on local exception: com.facebook.presto.hadoop.shaded.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "ip-10-104-240-47.eu-west-1.compute.internal/10.104.240.47"; destination host is: "ip-10-104-240-47.eu-west-1.compute.internal":9000;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763) ~[na:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1229) ~[na:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) ~[na:na]
        at com.sun.proxy.$Proxy155.getListing(Unknown Source) ~[na:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) ~[na:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) ~[na:na]
        at com.sun.proxy.$Proxy155.getListing(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:441) ~[na:na]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1526) ~[na:na]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1509) ~[na:na]
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:406) ~[na:na]
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1462) ~[na:na]
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1502) ~[na:na]
        at com.facebook.presto.hive.ForwardingFileSystem.listStatus(ForwardingFileSystem.java:298) ~[na:na]
        at com.facebook.presto.hive.ForwardingFileSystem.listStatus(ForwardingFileSystem.java:298) ~[na:na]
        at com.facebook.presto.hive.FileSystemWrapper$3.listStatus(FileSystemWrapper.java:146) ~[na:na]
        at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1778) ~[na:na]
        at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1777) ~[na:na]
        at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1760) ~[na:na]
        at com.facebook.presto.hive.util.AsyncRecursiveWalker$1.run(AsyncRecursiveWalker.java:58) ~[na:na]
        at com.facebook.presto.hive.util.SuspendingExecutor$1.run(SuspendingExecutor.java:67) ~[na:na]
        at com.facebook.presto.hive.util.BoundedExecutor.executeOrMerge(BoundedExecutor.java:82) ~[na:na]
        at com.facebook.presto.hive.util.BoundedExecutor.access$000(BoundedExecutor.java:41) ~[na:na]
        at com.facebook.presto.hive.util.BoundedExecutor$1.run(BoundedExecutor.java:53) ~[na:na]
        ... 3 common frames omitted
com.facebook.presto.hadoop.shaded.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status
        at com.facebook.presto.hadoop.shaded.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:81) ~[na:na]
        at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto$Builder.buildParsed(RpcPayloadHeaderProtos.java:1094) ~[na:na]
        at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto$Builder.access$1300(RpcPayloadHeaderProtos.java:1028) ~[na:na]
        at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:986) ~[na:na]
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:938) ~[na:na]
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:836) ~[na:na]
2013-11-11T14:13:06.683+0000    DEBUG   task-notification-0     com.facebook.presto.execution.TaskStateMachine  Task 20131111_141305_00004_nij5n.0.0 is CANCELED

Any suggestions on what to check next? Otherwise I will start digging into it in the next few days.

Cheers,
Federico

Federico

Nov 11, 2013, 11:28:50 AM
to presto...@googlegroups.com, martin....@liquidm.com
I have just noticed that the protobuf version in AWS Hadoop is 2.4.1, while Hadoop 2.2.0 in the Maven repository depends on 2.5.0.
Maybe this is some sort of incompatibility between the two, but I am just guessing here.
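One way to verify that guess is to ask Maven which protobuf artifact the rebuilt plugin actually resolves, and compare it with the jar Hadoop ships on the cluster. A sketch (`dependency:tree` is the standard Maven goal; the lib path on the EMR master is a common default, not guaranteed):

```shell
# In the presto-hadoop source tree, show the protobuf-java version that the
# configured dep.hadoop.version pulls in transitively.
mvn dependency:tree -Dincludes=com.google.protobuf:protobuf-java

# On the EMR master, check the protobuf jar Hadoop itself ships with
# (path is an assumption -- adjust to your Hadoop installation).
ls $HADOOP_HOME/share/hadoop/common/lib/ | grep protobuf
```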

Cheers,
Federico

Dain Sundstrom

Nov 12, 2013, 1:08:26 PM
to presto...@googlegroups.com, martin....@liquidm.com
Unfortunately, there are many incompatible versions of Hadoop. Our current plan is to add pre-built connectors for the current stable Apache 1.2 and 2.2 releases. If the stable 2.2 connector doesn't work for AWS, we could add one more plugin for AWS.

-dain

jans...@gmail.com

Nov 15, 2013, 4:12:19 AM
to presto...@googlegroups.com, martin....@liquidm.com
On Tuesday, November 12, 2013 7:08:26 PM UTC+1, Dain Sundstrom wrote:
> Unfortunately, there are many incompatible versions of hadoop.  Our current plan is to add pre-built connectors for the current stable Apache 1.2 and 2.2  releases.  If the stable 2.2 doesn't work for AWS we could add one more plugin for AWS.
>
>
> -dain

We are running into the same problem with Apache Hadoop 2.2.0. It would be very nice if you could provide a pre-built connector for this (and perhaps other) releases.

Best regards,
Jan Sipke van der Veen

Dain Sundstrom

Nov 15, 2013, 11:41:10 AM
to presto...@googlegroups.com, martin....@liquidm.com, jans...@gmail.com
This is something we are working on.

Janardhan Bobba

Feb 25, 2014, 2:33:55 AM
to presto...@googlegroups.com, martin....@liquidm.com
Presto is awesome, but I am still waiting for an HBase connector for Presto.
