Problem running Hypertable on Hadoop


vive...@gmail.com

Nov 25, 2009, 12:42:24 AM
to Hypertable Development
Hi,

We are using Hypertable 0.9.2.3 and Hadoop 0.20.0. We installed
Hadoop successfully, and Hypertable starts up on top of it. But when
we try to create a table, Hypertable hangs, even though the table
name does appear in the output of "show tables". If we then try to
delete that table, Hypertable hangs again.

If we format the namenode and try to bring Hadoop up again, the
jobtracker doesn't come up, saying its port is already in use, and
the namenode reports 0 live nodes.
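
For reference, here is a sketch of how these symptoms can be checked
with the standard Hadoop 0.20 tooling (the /tmp data path is the
default one, matching the log below):

# from the Hadoop install directory:
jps                           # should list NameNode, DataNode, JobTracker, TaskTracker
bin/hadoop dfsadmin -report   # how many datanodes the namenode sees as live

# A namenode reformat with old datanode data left behind makes the
# datanode fail to register (namespaceID mismatch), which shows up as
# 0 live nodes. Wiping the datanode dir before reformatting avoids
# this; it destroys all HDFS data, so only do it on a disposable test setup:
bin/stop-all.sh
rm -rf /tmp/hadoop-root/dfs/data
bin/hadoop namenode -format
bin/start-all.sh

# "Address already in use" on 9001 usually means a stale JobTracker
# process is still holding the port; find it with jps and kill it
# before restarting.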

Has anyone come across this issue before?

Please help.

Thanks

Here is the log for hadoop-ems-jobtracker-localhost.localdomain
=================================================================================
2009-11-24 18:29:58,556 WARN org.apache.hadoop.hdfs.DFSClient: NotReplicatedYetException sleeping /tmp/hadoop-root/mapred/system/jobtracker.info retries left 1
2009-11-24 18:30:01,761 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:739)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy4.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2873)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2755)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2046)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2232)

2009-11-24 18:30:01,761 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
2009-11-24 18:30:01,761 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/hadoop-root/mapred/system/jobtracker.info" - Aborting...
2009-11-24 18:30:01,762 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager. The Recovery manager failed to access the system files in the system dir (hdfs://localhost:9000/tmp/hadoop-root/mapred/system).
2009-11-24 18:30:01,765 WARN org.apache.hadoop.mapred.JobTracker: It might be because the JobTracker failed to read/write system files (hdfs://localhost:9000/tmp/hadoop-root/mapred/system/jobtracker.info / hdfs://localhost:9000/tmp/hadoop-root/mapred/system/jobtracker.info.recover) or the system file hdfs://localhost:9000/tmp/hadoop-root/mapred/system/jobtracker.info is missing!
2009-11-24 18:30:01,766 WARN org.apache.hadoop.mapred.JobTracker: Bailing out...
2009-11-24 18:30:01,766 WARN org.apache.hadoop.mapred.JobTracker: Error starting tracker: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:739)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy4.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2873)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2755)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2046)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2232)

2009-11-24 18:30:02,768 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to localhost/127.0.0.1:9001 : Address already in use
    at org.apache.hadoop.ipc.Server.bind(Server.java:190)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1537)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:174)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3528)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:137)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
    at org.apache.hadoop.ipc.Server.bind(Server.java:188)
    ... 7 more

2009-11-24 18:30:02,769 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at localhost.localdomain/127.0.0.1
************************************************************/

=================================================================================

Doug Judd

Nov 25, 2009, 12:54:17 AM
to hyperta...@googlegroups.com
Is there a reason you are using older versions of both Hypertable and Hadoop? Hypertable 0.9.2.7 is the most stable version, considerably more stable than 0.9.2.3, and Hadoop 0.20.1 is much more stable than Hadoop 0.20.0. I would upgrade your software first and then try to get things up and running. Feel free to post again to this list if you're still having problems.

- Doug


vive...@gmail.com

Nov 25, 2009, 1:53:10 AM
to Hypertable Development
Thanks Doug for the quick reply.
I will update both Hypertable and Hadoop and let you know.


vive...@gmail.com

Nov 26, 2009, 5:29:24 AM
to Hypertable Development
Hi Doug,

As you suggested, we updated Hypertable and Hadoop to their latest
versions, but it still doesn't help: we are still unable to create
any table in Hypertable running on Hadoop.

This is what we get when trying to create a table:
===========================================================================================
Error: Hypertable::Exception: Master 'create table' error, tablename=HadoopTest - HYPERTABLE request timeout
at void Hypertable::MasterClient::create_table(const char*, const char*, Hypertable::Timer*) (/root/src/hypertable-0.9.2.7-alpha/src/cc/Hypertable/Lib/MasterClient.cc:104) - HYPERTABLE request timeout
===========================================================================================
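
For context, we issue the create from the Hypertable shell, roughly
as below (the single column family "col" is only a placeholder, not
our real schema):

/opt/hypertable/0.9.2.7/bin/hypertable

hypertable> CREATE TABLE HadoopTest ( col );
hypertable> SHOW TABLES;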

Please help us out. If you need anything else to dig deeper into the
problem, please ask.

Thanks



Sanjit Jhala

Nov 26, 2009, 7:48:30 AM
to hyperta...@googlegroups.com
Hi Vivek,

How are you trying to create the table: via the Hypertable shell, or
using the C++/Thrift client interface? Can you confirm that all your
servers are running? You can do this by running the serverup tool as
follows: $HT_INSTALL_DIR/bin/serverup
[master|rangeserver|hyperspace|dfsbroker|thriftbroker].
If so, can you grep for "ERROR" and "FATAL" in the log directory
under your installation dir (e.g. /opt/hypertable/0.9.2.7/log/)?
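
Concretely, the check might look something like this (a sketch,
assuming the default install prefix):

# ask each server whether it is up; serverup prints true/false
for server in hyperspace master rangeserver dfsbroker thriftbroker; do
  echo -n "$server: "
  /opt/hypertable/0.9.2.7/bin/serverup $server
done

# scan all log files for problems
grep -rE "ERROR|FATAL" /opt/hypertable/0.9.2.7/log/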

-Sanjit

Doug Judd

Nov 26, 2009, 11:48:35 AM
to hyperta...@googlegroups.com
Can you archive and upload your Hypertable log files to the file upload area so we can take a look?

- Doug


vive...@gmail.com

Nov 27, 2009, 1:37:13 AM
to Hypertable Development
The output of serverup is true, which probably means that all the
servers are up and running.


vive...@gmail.com

Nov 27, 2009, 1:38:13 AM
to Hypertable Development
I have shared the log archive in the files section. Please have a
look.


Doug Judd

Nov 28, 2009, 8:35:56 PM
to hyperta...@googlegroups.com
It looks like what may have happened here is that you ran the system in "local" mode and then switched to running it on top of Hadoop.  This could cause Hyperspace to get out of sync.  Can you try starting the servers in "hadoop" mode and then running the following program:

/opt/hypertable/current/bin/clean-database.sh

Then stop the servers, start them back up again, and let us know if that works.
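
In practice the sequence would look something like this, assuming the
start/stop script names shipped with 0.9.2.x (adjust to your layout):

# start the servers on top of HDFS
/opt/hypertable/current/bin/start-all-servers.sh hadoop

# wipe the Hyperspace and database state left over from "local" mode
/opt/hypertable/current/bin/clean-database.sh

# restart from a clean slate
/opt/hypertable/current/bin/stop-servers.sh
/opt/hypertable/current/bin/start-all-servers.sh hadoop

As the name suggests, clean-database.sh wipes the database, so it is
only appropriate on an install with no data worth keeping.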

- Doug

vive...@gmail.com

Nov 30, 2009, 12:34:18 AM
to Hypertable Development
Thanks Doug,

It solved the problem. Until now we were running a pseudo-distributed
setup; next we will try a full cluster setup for Hadoop, and if any
problem occurs again I will let you know.
Thanks again.
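
For the cluster setup, as far as we understand it, the main changes
from the pseudo-distributed Hadoop configuration are these
("master-host" is a placeholder for our real master hostname):

# conf/core-site.xml   : fs.default.name    = hdfs://master-host:9000
# conf/mapred-site.xml : mapred.job.tracker = master-host:9001
# conf/hdfs-site.xml   : dfs.replication    = 3
# conf/slaves          : one worker hostname per line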


Doug Judd

Nov 30, 2009, 11:14:14 AM
to hyperta...@googlegroups.com
Thanks for reporting back. I just filed issue 356 for this one.

- Doug


