Re: [hypertable-user] DFSBroker: Unable to establish connection to HDFS.

Doug Judd

Aug 27, 2012, 11:00:22 AM
to hyperta...@googlegroups.com
This is most likely due to a Hadoop version mismatch.  Stop Hypertable and then make the following jar file replacement in HYPERTABLE_INSTALL_DIR/lib/java:

hadoop-0.20.2-cdh3u3-core.jar -> hadoop-core-1.0.3.jar

Also add the following jar file to that same directory from your distribution:

commons-configuration-1.6.jar

Be sure to manually remove the hadoop-0.20.2-cdh3u3-core.jar file from the HYPERTABLE_INSTALL_DIR/lib/java directory on all Hypertable machines, and push the new jar files out to that directory on all machines as well.
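In shell terms, the swap amounts to something like the sketch below. It runs in a throwaway directory with empty stand-in files so it is safe to try anywhere; on a real installation HT_LIB would be HYPERTABLE_INSTALL_DIR/lib/java, the jars would come from your Hadoop 1.0.3 distribution, and the swap must be repeated on every Hypertable machine.

```shell
# Demonstration of the jar swap in a throwaway directory.
HT_LIB=$(mktemp -d)
touch "$HT_LIB/hadoop-0.20.2-cdh3u3-core.jar"   # stand-in for the old CDH3 jar

# 1. Remove the CDH3 jar so two Hadoop cores never share the classpath.
rm -f "$HT_LIB/hadoop-0.20.2-cdh3u3-core.jar"

# 2. Drop in the Apache 1.0.3 core jar and commons-configuration-1.6.jar
#    (stand-in files here; copy the real jars from your distribution).
touch "$HT_LIB/hadoop-core-1.0.3.jar"
touch "$HT_LIB/commons-configuration-1.6.jar"

ls "$HT_LIB"
```

After the real swap on all machines, restart Hypertable with cap start as usual.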

- Doug

On Mon, Aug 27, 2012 at 4:21 AM, Sassan Haradji <sas...@gmail.com> wrote:
Hi,
I've installed two versions of Hadoop (1.0.3 and 0.20.2) and tried to connect Hypertable 0.9.6.1 to them. I followed exactly what is written in these instructions:
http://hypertable.com/documentation/installation/quick_start_cluster_installation/
except that I installed Hadoop manually instead of using Cloudera,
but cap start fails with the following output:
  * executing `start'
 ** transaction: start
  * executing `start_hyperspace'
  * executing "/opt/hypertable/current/bin/start-hyperspace.sh       --config=/opt/hypertable/0.9.6.1/conf/hypertable.cfg"
    servers: ["master"]
    [master] executing command
 ** [out :: master] Hyperspace appears to be running (14619):
 ** [out :: master] root 14619 14617 0 09:54 ? 00:00:01 /opt/hypertable/current/bin/Hyperspace.Master --pidfile /opt/hypertable/current/run/Hyperspace.pid --verbose --config=/opt/hypertable/0.9.6.1/conf/hypertable.cfg
    command finished in 483ms
  * executing `start_master'
  * executing "/opt/hypertable/current/bin/start-dfsbroker.sh hadoop       --config=/opt/hypertable/0.9.6.1/conf/hypertable.cfg &&\\\n   /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.6.1/conf/hypertable.cfg &&\\\n   /opt/hypertable/current/bin/start-monitoring.sh"
    servers: ["master"]
    [master] executing command
 ** [out :: master] DFS broker: available file descriptors: 1024

 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] ERROR: DFS Broker (hadoop) did not come up
    command finished in 126483ms
failed: "sh -c '/opt/hypertable/current/bin/start-dfsbroker.sh hadoop       --config=/opt/hypertable/0.9.6.1/conf/hypertable.cfg &&\\\n   /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.6.1/conf/hypertable.cfg &&\\\n   /opt/hypertable/current/bin/start-monitoring.sh'" on master


and this is what I found in DfsBroker.hadoop.log:
Num CPUs=1
HdfsBroker.Port=38030
HdfsBroker.Reactors=1
HdfsBroker.Workers=20
HdfsBroker.Hadoop.ConfDir=/usr/local/hadoop/conf
Adding hadoop configuration file /usr/local/hadoop/conf/hdfs-site.xml
Adding hadoop configuration file /usr/local/hadoop/conf/core-site.xml
HdfsBroker.dfs.client.read.shortcircuit=false
HdfsBroker.dfs.replication=1
HdfsBroker.Server.fs.default.name=hdfs://master:54310
12/08/27 11:06:33 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
Aug 27, 2012 11:06:34 AM org.hypertable.DfsBroker.hadoop.HdfsBroker <init>
SEVERE: ERROR: Unable to establish connection to HDFS.
ShutdownHook called
Exception in thread "Thread-1" java.lang.NullPointerException
        at org.hypertable.DfsBroker.hadoop.main$ShutdownHook.run(main.java:73)

and it's the same for both Hadoop 1.0.3 and 0.20.2.

In the Hadoop 1.0.3 logs nothing happens when I run cap start, but 0.20.2 reports:
2012-08-27 10:24:25,866 WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch from 10.116.89.74:45847 got version 4 expected version 3


Does anybody know what the problem is?

--
You received this message because you are subscribed to the Google Groups "Hypertable User" group.
To view this discussion on the web visit https://groups.google.com/d/msg/hypertable-user/-/730x1NMqDaUJ.
To post to this group, send email to hyperta...@googlegroups.com.
To unsubscribe from this group, send email to hypertable-us...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/hypertable-user?hl=en.




--
Doug Judd
CEO, Hypertable Inc.

Sassan Haradji

Sep 23, 2012, 8:12:40 AM
to hyperta...@googlegroups.com, do...@hypertable.com
Thanks Doug, I didn't notice your answer; I will check it as soon as possible and report the results.

Binish Xavier

Nov 5, 2012, 1:26:20 AM
to hyperta...@googlegroups.com, do...@hypertable.com
I am having the same error. I followed your instructions but was not successful. I am getting the following error in DfsBroker.hadoop.log:


Num CPUs=2
HdfsBroker.Port=38030
HdfsBroker.Reactors=2
HdfsBroker.Workers=20
HdfsBroker.Server.fs.default.name=hdfs://motherlode001:9000
5 Nov, 2012 11:48:23 AM org.hypertable.DfsBroker.hadoop.HdfsBroker <init>

Shinobi_Jack

Nov 19, 2013, 3:26:08 AM
to hyperta...@googlegroups.com, do...@hypertable.com
My error is as follows:

[jack@hyt210 hypertable_install]$ cap start
  * executing `start'
 ** transaction: start
  * executing `start_hyperspace'
  * executing "/home/jack/hytcluster/hypertable/current/bin/start-hyperspace.sh       --config=/home/jack/hytcluster/hypertable/0.9.7.8/conf/hypertable.cfg"
    servers: ["hyt210"]
    [hyt210] executing command
 ** [out :: hyt210] Started Hyperspace
    command finished in 5316ms
  * executing `start_master'
  * executing "/home/jack/hytcluster/hypertable/current/bin/start-dfsbroker.sh hadoop       --config=/home/jack/hytcluster/hypertable/0.9.7.8/conf/hypertable.cfg &&\\\n   /home/jack/hytcluster/hypertable/current/bin/start-master.sh --config=/home/jack/hytcluster/hypertable/0.9.7.8/conf/hypertable.cfg &&\\\n   /home/jack/hytcluster/hypertable/current/bin/start-monitoring.sh"
    servers: ["hyt210"]
    [hyt210] executing command
 ** [out :: hyt210] DFS broker: available file descriptors: 1024
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: hyt210] ERROR: DFS Broker (hadoop) did not come up
    command finished in 129185ms
failed: "sh -c '/home/jack/hytcluster/hypertable/current/bin/start-dfsbroker.sh hadoop       --config=/home/jack/hytcluster/hypertable/0.9.7.8/conf/hypertable.cfg &&\\\n   /home/jack/hytcluster/hypertable/current/bin/start-master.sh --config=/home/jack/hytcluster/hypertable/0.9.7.8/conf/hypertable.cfg &&\\\n   /home/jack/hytcluster/hypertable/current/bin/start-monitoring.sh'" on hyt210
[jack@hyt210 hypertable_install]$ cap shell
  * executing `shell'
====================================================================
Welcome to the interactive Capistrano shell! This is an experimental
feature, and is liable to change in future releases. Type 'help' for
a summary of how to use the shell.
--------------------------------------------------------------------
cap> which java
[establishing connection(s) to hyt210, hyt211]
 ** [out :: hyt210] /usr/bin/java
 ** [out :: hyt211] /usr/bin/java
cap> java -version               
 ** [out :: hyt210] java version "1.5.0"
 ** [out :: hyt210] gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-3)
 ** [out :: hyt210] 
 ** [out :: hyt210] Copyright (C) 2007 Free Software Foundation, Inc.
 ** [out :: hyt210] This is free software; see the source for copying conditions.  There is NO
 ** [out :: hyt210] warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
*** [err :: hyt211] java version "1.6.0_31"
*** [err :: hyt211] Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
*** [err :: hyt211] Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

How can I fix it? Any advice would be appreciated.
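One thing stands out in the cap shell output above: hyt210's /usr/bin/java is gij from GNU libgcj (a Java 1.5-era interpreter), while hyt211 has a real HotSpot JVM. Hypertable's Java DFS broker will not run under gij. A small helper that classifies a java -version banner, using the banners from this thread as samples (the helper itself is illustrative, not part of Hypertable):

```shell
# Classify a `java -version` banner: the broker needs a real JVM
# (HotSpot/OpenJDK), not GNU gij/libgcj.
classify_jvm() {
  case "$1" in
    *gij*|*libgcj*)      echo "gcj - install a real JDK and repoint /usr/bin/java" ;;
    *HotSpot*|*OpenJDK*) echo "ok" ;;
    *)                   echo "unknown" ;;
  esac
}

# Banners taken from the cap shell output above:
classify_jvm 'gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-3)'
# -> gcj - install a real JDK and repoint /usr/bin/java
classify_jvm 'Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)'
# -> ok
```

On a Red Hat-style system the usual fix is to install a Sun/Oracle or OpenJDK package and switch /usr/bin/java away from gcj (for example via the alternatives system); the exact mechanism depends on the distribution.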
On Monday, November 5, 2012 at 2:26:20 PM UTC+8, Binish Xavier wrote:

Shinobi_Jack

Nov 19, 2013, 3:56:41 AM
to hyperta...@googlegroups.com, do...@hypertable.com
The DfsBroker.hadoop.log is as follows:
No Hadoop distro is configured.  Run the following script to
configure:

/home/jack/hytcluster/hypertable/current/bin/set-hadoop-distro.sh

So I did what it suggested and ran cap start again; then the error changed to the following:

Hypertable successfully configured for Hadoop cdh4
Exception in thread "main" java.lang.NoClassDefFoundError: org.hypertable.DfsBroker.hadoop.main
   at gnu.java.lang.MainThread.run(libgcj.so.10)
Caused by: java.lang.ClassNotFoundException: org.hypertable.DfsBroker.hadoop.main not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:/home/jack/hytcluster/hypertable/current/,file:/home/jack/hytcluster/hypertable/current/lib/java/libthrift-0.8.0.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/commons-cli-1.2.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/commons-configuration-1.6.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/commons-httpclient-3.1.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/commons-logging-1.0.4.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/guava-11.0.2.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/guava-r09-jarjar.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hadoop-auth-2.0.0-cdh4.1.3.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hadoop-common-2.0.0-cdh4.1.3.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hadoop-hdfs-2.0.0-cdh4.1.3.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hadoop-mapreduce-client-core.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hbase-0.90.6-cdh3u5.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hive-exec-0.7.1-cdh3u5.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hive-metastore-0.7.1-cdh3u5.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/hive-serde-0.7.1-cdh3u5.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/junit-4.3.1.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/libthrift-0.8.0.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/log4j-1.2.13.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/protobuf-java-2.4.0a.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/slf4j-api-1.5.8.jar,file:/home/jack/hytcluster/hypertable/current/lib/java/slf4j-log4j12-1.5.8.jar], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
   at java.net.URLClassLoader.findClass(libgcj.so.10)
   at java.lang.ClassLoader.loadClass(libgcj.so.10)
   at java.lang.ClassLoader.loadClass(libgcj.so.10)
   at gnu.java.lang.MainThread.run(libgcj.so.10)


By the way, I built hypertable-0.9.7.8 from source myself.
After cap install_package, the path /hypertable/current/lib/java looks like this:
drwxrwxr-x 2 jack jack    4096 Nov 18 23:37 cdh3
drwxrwxr-x 2 jack jack    4096 Nov 18 23:37 cdh4
-rw-r--r-- 1 jack jack   41123 Jul  1 20:24 commons-cli-1.2.jar
-rw-r--r-- 1 jack jack  298829 Jul  1 20:24 commons-configuration-1.6.jar
-rw-r--r-- 1 jack jack  305001 Jul  1 20:24 commons-httpclient-3.1.jar
-rw-r--r-- 1 jack jack   38015 Jul  1 20:24 commons-logging-1.0.4.jar
-rw-r--r-- 1 jack jack 1648200 Nov 19 00:45 guava-11.0.2.jar
-rw-r--r-- 1 jack jack 1218645 Jul  1 20:24 guava-r09-jarjar.jar
-rw-r--r-- 1 jack jack   45332 Nov 19 00:45 hadoop-auth-2.0.0-cdh4.1.3.jar
-rw-r--r-- 1 jack jack 2236855 Nov 19 00:45 hadoop-common-2.0.0-cdh4.1.3.jar
-rw-r--r-- 1 jack jack 4426821 Nov 19 00:45 hadoop-hdfs-2.0.0-cdh4.1.3.jar
-rw-r--r-- 1 jack jack 1423733 Nov 19 00:45 hadoop-mapreduce-client-core.jar
-rw-r--r-- 1 jack jack 2598017 Jul  1 20:24 hbase-0.90.6-cdh3u5.jar
-rw-r--r-- 1 jack jack 3359555 Jul  1 20:24 hive-exec-0.7.1-cdh3u5.jar
-rw-r--r-- 1 jack jack 1538397 Jul  1 20:24 hive-metastore-0.7.1-cdh3u5.jar
-rw-r--r-- 1 jack jack  483159 Jul  1 20:24 hive-serde-0.7.1-cdh3u5.jar
-rw-r--r-- 1 jack jack  106547 Jul  1 20:24 junit-4.3.1.jar
-rw-r--r-- 1 jack jack   15162 Jul  1 20:24 junit-4.3.1.LICENSE.txt
-rw-r--r-- 1 jack jack  336577 Jul  1 20:24 libthrift-0.8.0.jar
-rw-r--r-- 1 jack jack  358180 Jul  1 20:24 log4j-1.2.13.jar
-rw-r--r-- 1 jack jack  449818 Nov 19 00:45 protobuf-java-2.4.0a.jar
-rw-r--r-- 1 jack jack   23445 Jul  1 20:24 slf4j-api-1.5.8.jar
-rw-r--r-- 1 jack jack    9679 Jul  1 20:24 slf4j-log4j12-1.5.8.jar
[jack@hyt210 java]$ cd cdh3
[jack@hyt210 cdh3]$ ll
total 3772
-rw-r--r-- 1 jack jack 3859002 Jul  1 20:24 hadoop-core-0.20.2-cdh3u5.jar

[jack@hyt210 cdh3]$ cd ../cdh4
[jack@hyt210 cdh4]$ ll
total 10004
-rw-r--r-- 1 jack jack 1648200 Jul  1 20:24 guava-11.0.2.jar
-rw-r--r-- 1 jack jack   45332 Jul  1 20:24 hadoop-auth-2.0.0-cdh4.1.3.jar
-rw-r--r-- 1 jack jack 2236855 Jul  1 20:24 hadoop-common-2.0.0-cdh4.1.3.jar
-rw-r--r-- 1 jack jack 4426821 Jul  1 20:24 hadoop-hdfs-2.0.0-cdh4.1.3.jar
-rw-r--r-- 1 jack jack 1423733 Jul  1 20:24 hadoop-mapreduce-client-core.jar
-rw-r--r-- 1 jack jack  449818 Jul  1 20:24 protobuf-java-2.4.0a.jar


My cluster is using cdh4, so why does this happen? What more should I do? I need your help; any advice would be appreciated. Thanks!

On Tuesday, November 19, 2013 at 4:26:39 PM UTC+8, Shinobi_Jack wrote:

Doug Judd

Nov 19, 2013, 12:37:24 PM
to hypertable-user
I recommend that you upgrade to 0.9.7.13.  A lot of work to improve Hadoop support went into 0.9.7.10.  Give that a try, and if it doesn't correct the problem, report back and I'll continue to dig into it.

- Doug



Shinobi_Jack

Nov 19, 2013, 8:19:15 PM
to hyperta...@googlegroups.com, do...@hypertable.com
Dear Doug:
Thank you for the reply. I will try upgrading to 0.9.7.13.

By the way, my cluster now uses cdh4.1.2, and I have modified the Hypertable 0.9.7.8 source code. I want it to work well, as I hoped.


On Wednesday, November 20, 2013 at 1:37:24 AM UTC+8, Doug Judd wrote: