** [out :: 192.168.1.125] Waiting for Hypertable.Master (localhost:38050) to come up...


Binish Xavier

Nov 23, 2012, 12:01:02 AM
to hyperta...@googlegroups.com

I updated my Hypertable installation and then tried to start Hypertable; this time a new error is displayed. The error is as follows:

   [192.168.1.125] executing command
 ** [out :: 192.168.1.125] DFS broker: available file descriptors: 1024
 ** [out :: 192.168.1.125] DfsBroker.hadoop appears to be running (6406):
 ** [out :: 192.168.1.125] root 6406 6405 0 10:23 ? 00:00:01 java -classpath /opt/hypertable/current:/opt/hypertable/current/lib/java/libthrift-0.8.0.jar:/opt/hypertable/current/lib/java/commons-cli-1.2.jar:/opt/hypertable/current/lib/java/commons-httpclient-3.1.jar:/opt/hypertable/current/lib/java/commons-logging-1.0.4.jar:/opt/hypertable/current/lib/java/guava-r09-jarjar.jar:/opt/hypertable/current/lib/java/hadoop-0.20.2-cdh3u3-core.jar:/opt/hypertable/current/lib/java/hbase-0.90.4-cdh3u2.jar:/opt/hypertable/current/lib/java/hive-exec-0.7.0-cdh3u0.jar:/opt/hypertable/current/lib/java/hive-metastore-0.7.0-cdh3u0.jar:/opt/hypertable/current/lib/java/hive-serde-0.7.0-cdh3u0.jar:/opt/hypertable/current/lib/java/hypertable-0.9.6.5-examples.jar:/opt/hypertable/current/lib/java/hypertable-0.9.6.5.jar:/opt/hypertable/current/lib/java/junit-4.3.1.jar:/opt/hypertable/current/lib/java/libthrift-0.8.0.jar:/opt/hypertable/current/lib/java/log4j-1.2.13.jar:/opt/hypertable/current/lib/java/slf4j-api-1.5.8.jar:/opt/hypertable/current/lib/java/slf4j-log4j12-1.5.8.jar:/opt/hypertable/current/lib/java/zookeeper-3.3.3-cdh3u2.jar:/opt/hypertable/current/lib/jetty-ext/*.jar org.hypertable.DfsBroker.hadoop.main --verbose --config=/opt/hypertable/0.9.6.5/conf/hypertable.cfg
 ** [out :: 192.168.1.125] Waiting for Hypertable.Master (localhost:38050) to come up...



Log file: Hypertable.Master.log

1353646493 NOTICE Hypertable.Master : (/root/src/hypertable/src/cc/Common/Config.cc:548) Initializing Hypertable.Master (Hypertable 0.9.6.5 (v0.9.6.5-0-g118649a))...
CPU cores count=2
CephBroker.MonAddr=10.0.1.245:6789
DfsBroker.Local.Root=192.168.1.125
DfsBroker.Port=38030
HdfsBroker.Hadoop.ConfDir=/home/binish/hadoop/hadoop-0.22.0/conf
Hyperspace.GracePeriod=200000
Hyperspace.KeepAlive.Interval=30000
Hyperspace.Lease.Interval=1000000
Hyperspace.Replica.Dir=hyperspace
Hyperspace.Replica.Host=[192.168.1.125]
Hyperspace.Replica.Port=38040
Hypertable.Master.Port=38050
Hypertable.Master.Reactors=2
Hypertable.RangeServer.Port=38060
Hypertable.Verbose=true
ThriftBroker.Port=38080
config=/opt/hypertable/0.9.6.5/conf/hypertable.cfg
dfs-port=38030
grace-period=200000
hs-host=[192.168.1.125]
hs-port=38040
keepalive=30000
lease-interval=1000000
pidfile=/opt/hypertable/current/run/Hypertable.Master.pid
port=38050
reactors=2
verbose=true
1353646493 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:336) Obtained lock on '/hypertable/master'
1353646493 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:360) Successfully Initialized.
1353646493 INFO Hypertable.Master : (/root/src/hypertable/src/cc/AsyncComm/ConnectionManager.cc:359) Event: type=DISCONNECT "COMM connect error" from=127.0.0.1:38030; Problem connecting to DFS Broker, will retry in 600000 milliseconds...
1353646493 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/Monitoring.cc:70) Created monitoring dir /opt/hypertable/0.9.6.5/run/monitoring
1353646493 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/Monitoring.cc:70) Created monitoring dir /opt/hypertable/0.9.6.5/run/monitoring/tables
1353646493 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/Monitoring.cc:70) Created monitoring dir /opt/hypertable/0.9.6.5/run/monitoring/rangeservers
1353646493 WARN Hypertable.Master : (/root/src/hypertable/src/cc/AsyncComm/Comm.cc:246) No connection for 127.0.0.1:38030 - COMM not connected
1353646493 ERROR Hypertable.Master : main (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:282): Hypertable::Exception: Error checking existence of DFS path: /hypertable/servers/master/log/mml - COMM not connected
at virtual bool Hypertable::DfsBroker::Client::exists(const Hypertable::String&) (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:655)
at void Hypertable::DfsBroker::Client::send_message(Hypertable::CommBufPtr&, Hypertable::DispatchHandler*) (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:731): DFS send_request to 127.0.0.1:38030 failed
1353646557 NOTICE Hypertable.Master : (/root/src/hypertable/src/cc/Common/Config.cc:548) Initializing Hypertable.Master (Hypertable 0.9.6.5 (v0.9.6.5-0-g118649a))...
CPU cores count=2
CephBroker.MonAddr=10.0.1.245:6789
DfsBroker.Local.Root=192.168.1.125
DfsBroker.Port=38030
HdfsBroker.Hadoop.ConfDir=/home/binish/hadoop/hadoop-0.22.0/conf
Hyperspace.GracePeriod=200000
Hyperspace.KeepAlive.Interval=30000
Hyperspace.Lease.Interval=1000000
Hyperspace.Replica.Dir=hyperspace
Hyperspace.Replica.Host=[192.168.1.125]
Hyperspace.Replica.Port=38040
Hypertable.Master.Port=38050
Hypertable.Master.Reactors=2
Hypertable.RangeServer.Port=38060
Hypertable.Verbose=true
ThriftBroker.Port=38080
config=/opt/hypertable/0.9.6.5/conf/hypertable.cfg
dfs-port=38030
grace-period=200000
hs-host=[192.168.1.125]
hs-port=38040
keepalive=30000
lease-interval=1000000
pidfile=/opt/hypertable/current/run/Hypertable.Master.pid
port=38050
reactors=2
verbose=true
1353646557 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:336) Obtained lock on '/hypertable/master'
1353646557 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:360) Successfully Initialized.
1353646557 INFO Hypertable.Master : (/root/src/hypertable/src/cc/AsyncComm/ConnectionManager.cc:359) Event: type=DISCONNECT "COMM connect error" from=127.0.0.1:38030; Problem connecting to DFS Broker, will retry in 600000 milliseconds...
1353646557 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/Monitoring.cc:73) rangeservers monitoring stats dir /opt/hypertable/0.9.6.5/run/monitoring exists 
1353646557 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/Monitoring.cc:73) rangeservers monitoring stats dir /opt/hypertable/0.9.6.5/run/monitoring/tables exists 
1353646557 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/Monitoring.cc:73) rangeservers monitoring stats dir /opt/hypertable/0.9.6.5/run/monitoring/rangeservers exists 
1353646557 WARN Hypertable.Master : (/root/src/hypertable/src/cc/AsyncComm/Comm.cc:246) No connection for 127.0.0.1:38030 - COMM not connected
1353646557 ERROR Hypertable.Master : main (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:282): Hypertable::Exception: Error checking existence of DFS path: /hypertable/servers/master/log/mml - COMM not connected
at virtual bool Hypertable::DfsBroker::Client::exists(const Hypertable::String&) (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:655)
at void Hypertable::DfsBroker::Client::send_message(Hypertable::CommBufPtr&, Hypertable::DispatchHandler*) (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:731): DFS send_request to 127.0.0.1:38030 failed


--
Thanks and Regards,
Binish Xavier

Christoph Rupp

Nov 23, 2012, 5:32:52 AM
to hyperta...@googlegroups.com
- Double-check that the DfsBroker is running on 192.168.1.125 (see the command sketch below)
- Check the DfsBroker log files on 192.168.1.125
- Last resort: run "cap cleandb", then try to start the cluster again. Make sure that "cap cleandb" does not return any errors.
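
For the first two checks, a rough sketch of what you could run on 192.168.1.125 (the pid and log file names here are assumptions, inferred from the Hypertable.Master.pid naming visible in your config; adjust them to your installation):

    # Is the broker process from the pid file still alive? (pid file name assumed)
    ps -fp $(cat /opt/hypertable/current/run/DfsBroker.hadoop.pid)
    # Is anything listening on the broker port from your config (38030)?
    netstat -tln | grep 38030
    # Inspect the tail of the broker log for errors (log file name assumed)
    tail -n 100 /opt/hypertable/current/log/DfsBroker.hadoop.log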

2012/11/23 Binish Xavier <binish...@nesote.com>

Binish Xavier

Nov 23, 2012, 5:43:56 AM
to hyperta...@googlegroups.com

Hi Christoph Rupp,

I ran the "cap cleandb" command, then ran "cap start ..." again, but the error occurred again.

The error is given below:


 * executing "/opt/hypertable/current/bin/start-dfsbroker.sh hadoop       --config=/opt/hypertable/0.9.6.5/conf/hypertable.cfg &&\\\n   /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.6.5/conf/hypertable.cfg &&\\\n   /opt/hypertable/current/bin/start-monitoring.sh"
    servers: ["192.168.1.125"]
    [192.168.1.125] executing command
 ** [out :: 192.168.1.125] DFS broker: available file descriptors: 1024
 ** [out :: 192.168.1.125] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...

Christoph Rupp

Nov 23, 2012, 5:44:41 AM
to hyperta...@googlegroups.com
The DFS broker fails to start. You have to check its logs and make sure that HDFS is running, etc.
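
A minimal sketch of those checks (the broker log path is an assumption based on the /opt/hypertable layout in your output; "hadoop dfsadmin -report" is the stock HDFS health report):

    # Scan the broker log for errors (path assumed)
    grep -i error /opt/hypertable/current/log/DfsBroker.hadoop.log | tail
    # Ask the namenode for a cluster report; this fails quickly if HDFS is down
    hadoop dfsadmin -report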

2012/11/23 Binish Xavier <binish...@nesote.com>

Binish Xavier

Nov 23, 2012, 5:46:23 AM
to hyperta...@googlegroups.com
DFS Broker logs:

Num CPUs=2
HdfsBroker.Port=38030
HdfsBroker.Reactors=2
HdfsBroker.Workers=20
HdfsBroker.Hadoop.ConfDir=/home/binish/hadoop/hadoop-0.22.0/conf
Adding hadoop configuration file /home/binish/hadoop/hadoop-0.22.0/conf/hdfs-site.xml
Adding hadoop configuration file /home/binish/hadoop/hadoop-0.22.0/conf/core-site.xml
Unable to get dfs.replication value; using default
HdfsBroker.dfs.client.read.shortcircuit=false
HdfsBroker.dfs.replication=-1
HdfsBroker.Server.fs.default.name=hdfs://localhost:9000
12/11/23 10:23:04 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
23 Nov, 2012 4:06:15 PM org.hypertable.DfsBroker.hadoop.HdfsBroker <init>
SEVERE: ERROR: Unable to establish connection to HDFS.
ShutdownHook called
Exception in thread "Thread-1" java.lang.NullPointerException
at org.hypertable.DfsBroker.hadoop.main$ShutdownHook.run(main.java:73)
Num CPUs=2
HdfsBroker.Port=38030
HdfsBroker.Reactors=2
HdfsBroker.Workers=20
HdfsBroker.Hadoop.ConfDir=/home/binish/hadoop/hadoop-0.22.0/conf
Adding hadoop configuration file /home/binish/hadoop/hadoop-0.22.0/conf/hdfs-site.xml
Adding hadoop configuration file /home/binish/hadoop/hadoop-0.22.0/conf/core-site.xml
Unable to get dfs.replication value; using default
HdfsBroker.dfs.client.read.shortcircuit=false
HdfsBroker.dfs.replication=-1
HdfsBroker.Server.fs.default.name=hdfs://localhost:9000
12/11/23 16:10:21 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.

Christoph Rupp

Nov 23, 2012, 5:47:51 AM
to hyperta...@googlegroups.com
SEVERE: ERROR: Unable to establish connection to HDFS.

Make sure that your Hadoop and HDFS datanodes/namenode are up and running and that you use a supported version (CDH3).
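
To check both points, something along these lines works with any stock Hadoop install:

    # Print the installed version; a supported build reports 0.20.2-cdh3uX
    hadoop version
    # List the running Hadoop JVMs; NameNode and DataNode should both appear
    jps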

2012/11/23 Binish Xavier <binish...@nesote.com>

Binish Xavier

Nov 23, 2012, 5:51:01 AM
to hyperta...@googlegroups.com
How do I get the CDH3 version?

My Hadoop is currently running.

I have attached a screenshot of my running Hadoop; please check.




On Fri, Nov 23, 2012 at 4:17 PM, Christoph Rupp <ch...@hypertable.com> wrote:
CDH3
(Attachment: Screenshot-1.png)

Binish Xavier

Nov 23, 2012, 7:29:00 AM
to hyperta...@googlegroups.com


1. How do I get the CDH3 version?

2. How do I check whether Hadoop is running properly?

3. My Hadoop is currently running.

Christoph Rupp

Nov 23, 2012, 8:07:30 AM
to hyperta...@googlegroups.com
The screenshot just shows that you started Hadoop; it does not necessarily mean that it is running. Check the log files of the namenode and datanode for any problems. The logs may also contain the version information, in case you do not remember what you installed. Otherwise look into your package manager or into the installed files for any hints.

If HDFS is running, then use the "hadoop" command line tool to test it, e.g. by copying and fetching a file.
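
For example, a quick round-trip test could look like this (the file names are arbitrary):

    # Write a local file into HDFS, read it back, then clean up
    echo hello > /tmp/ht-test.txt
    hadoop fs -put /tmp/ht-test.txt /ht-test.txt
    hadoop fs -cat /ht-test.txt
    hadoop fs -rm /ht-test.txt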

If HDFS works but the DfsBroker still does not start, then the Hypertable configuration may be wrong.

2012/11/23 Binish Xavier <binish...@nesote.com>


Binish Xavier

Nov 27, 2012, 1:10:50 AM
to hyperta...@googlegroups.com

I have attached the Hadoop log file. Please check it and reply about any error in the configuration.


Hadoop log file:

2012-11-27 11:36:48,984 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = binish-desktop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.22.0
STARTUP_MSG:   classpath = /home/binish/hadoop/hadoop-0.22.0/bin/../conf:/usr/lib/jvm/java-6-sun-1.6.0.26/lib/tools.jar:/home/binish/hadoop/hadoop-0.22.0/bin/..:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-common-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-common-test-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-hdfs-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-hdfs-0.22.0-sources.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-hdfs-ant-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-hdfs-test-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-hdfs-test-0.22.0-sources.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-mapred-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-mapred-0.22.0-sources.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-mapred-examples-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-mapred-test-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../hadoop-mapred-tools-0.22.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/ant-1.6.5.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/ant-1.7.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/ant-launcher-1.7.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/asm-3.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/aspectjrt-1.6.5.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/aspectjtools-1.6.5.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/avro-1.5.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/avro-compiler-1.5.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/avro-ipc-1.5.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-cli-1.2.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-codec-1.4.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-collections-3.2.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-daemon-1.0.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-el-1.0.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-httpclient-3.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-lang-2.5.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-logging-1.1.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-logging-api-1.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/commons-net-1.4.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/core-3.1.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/ecj-3.5.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/guava-r09.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/hsqldb-1.8.0.10.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jackson-core-asl-1.7.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jackson-mapper-asl-1.7.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jasper-compiler-5.5.12.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jasper-runtime-5.5.12.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jdiff-1.0.9.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jets3t-0.7.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jetty-6.1.26.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jetty-util-6.1.26.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jsch-0.1.42.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jsp-2.1-glassfish-2.1.v20091210.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jsp-2.1-jetty-6.1.26.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jsp-api-2.1-glassfish-2.1.v20091210.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/junit-4.8.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/kfs-0.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/log4j-1.2.16.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/mockito-all-1.8.2.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/mockito-all-1.8.5.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/oro-2.0.8.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/paranamer-2.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/paranamer-ant-2.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/paranamer-generator-2.3.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/qdox-1.12.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/servlet-api-2.5-20081211.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/slf4j-api-1.6.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/snappy-java-1.0.3.2.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/velocity-1.6.4.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/xmlenc-0.52.jar:/home/binish/hadoop/hadoop-0.22.0/bin/../lib/jsp-2.1/*.jar:/home/binish/hadoop/hadoop-0.22.0/hdfs/bin/../conf:/home/binish/hadoop/hadoop-0.22.0/hdfs/bin/../hadoop-hdfs-*.jar:/home/binish/hadoop/hadoop-0.22.0/hdfs/bin/../lib/*.jar:/home/binish/hadoop/hadoop-0.22.0/hdfs/bin/../hadoop-hdfs-*.jar:/home/binish/hadoop/hadoop-0.22.0/hdfs/bin/../lib/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/common -r 1207774; compiled by 'jenkins' on Sun Dec  4 00:57:22 UTC 2011
************************************************************/
2012-11-27 11:36:49,249 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2012-11-27 11:36:49,252 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-11-27 11:36:49,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: defaultReplication = 3
2012-11-27 11:36:49,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: maxReplication = 512
2012-11-27 11:36:49,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: minReplication = 1
2012-11-27 11:36:49,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: maxReplicationStreams = 2
2012-11-27 11:36:49,302 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: shouldCheckForEnoughRacks = false
2012-11-27 11:36:49,304 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2012-11-27 11:36:49,304 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2012-11-27 11:36:49,304 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2012-11-27 11:36:49,304 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2012-11-27 11:36:49,323 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=binish
2012-11-27 11:36:49,323 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-11-27 11:36:49,324 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-11-27 11:36:49,328 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
2012-11-27 11:36:49,732 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-11-27 11:36:49,733 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2012-11-27 11:36:49,762 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2012-11-27 11:36:49,947 INFO org.apache.hadoop.hdfs.server.common.Storage: Loading image file /tmp/hadoop-binish/dfs/name/current/fsimage using no compression
2012-11-27 11:36:49,953 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2012-11-27 11:36:49,974 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2012-11-27 11:36:49,977 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 113 loaded in 0 seconds.
2012-11-27 11:36:49,981 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-binish/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2012-11-27 11:36:49,984 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2012-11-27 11:36:49,984 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 690 msecs
2012-11-27 11:36:49,986 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2012-11-27 11:36:50,069 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2012-11-27 11:36:50,144 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to localhost/127.0.0.1:9000 : Address already in use
at org.apache.hadoop.ipc.Server.bind(Server.java:224)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1582)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:394)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:331)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:291)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:47)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:382)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:368)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:576)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1534)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1543)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:222)
... 12 more

2012-11-27 11:36:50,146 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at binish-desktop/127.0.1.1
************************************************************/








(Attachment: hadoop-binish-namenode-binish-desktop.log)

Christoph Rupp

Nov 27, 2012, 2:50:41 AM
to hyperta...@googlegroups.com
As you can see, your Hadoop does not start because the port is already in use; maybe a stale process was still running. Make sure that Hadoop works correctly. If you need help, the Cloudera documentation is a good start. But this is not a Hypertable problem.
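
A rough sketch of how you could find and clear whatever is holding port 9000 (tool availability varies by distribution):

    # See which process is listening on the namenode port (may need root for -p)
    netstat -tlnp | grep 9000    # alternatively: lsof -i :9000
    # List leftover Hadoop JVMs and stop any stale NameNode before restarting
    jps
    kill <PID>    # <PID> is a placeholder; use the pid reported by jps/netstat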

bye
Christoph

2012/11/27 Binish Xavier <binish...@nesote.com>

Binish Xavier

Nov 27, 2012, 3:08:50 AM
to hyperta...@googlegroups.com
Please send the Cloudera documentation.


On Tue, Nov 27, 2012 at 1:20 PM, Christoph Rupp <ch...@hypertable.com> wrote:
If you need help then the cloudera documentation is a good start



Binish Xavier

Nov 27, 2012, 3:10:02 AM
to hyperta...@googlegroups.com
Is it possible to integrate it in a local network?

On Tue, Nov 27, 2012 at 1:38 PM, Binish Xavier <binish...@nesote.com> wrote:
cloudera documentation 

Christoph Rupp

Nov 27, 2012, 3:14:12 AM
to hyperta...@googlegroups.com
Did you try to Google for it or look on cloudera.com?

2012/11/27 Binish Xavier <binish...@nesote.com>