Namenode and datanode not shown when using jps

sindhu hosamane

May 9, 2014, 11:18:45 AM
to chenn...@googlegroups.com
Hello friends,

I set up Hadoop 1.2.1 on my local machine. It worked perfectly, and jps showed the NameNode, DataNode, JobTracker and TaskTracker.
Next I set up Hadoop on my college's remote server over SSH.
When I start Hadoop on the remote server, it shows only:

25065 TaskTracker
24766 JobTracker
25160 Jps
24634 SecondaryNameNode


The NameNode and DataNode are missing.

Weird, since I had the same configuration and values on both the local and remote Hadoop.

Please tell me what could be the reason!

I have to work with Hadoop on my remote server, not on my local machine.



Sb Gowtham

May 9, 2014, 2:32:37 PM
to chenn...@googlegroups.com
Hi, check your core-site.xml and hdfs-site.xml, or simply check your log files.
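For example, assuming Hadoop is installed at /users/sindhuht/hadoop-1.2.1 (a sketch; adjust the path to your install), something like this should surface any startup errors in the logs:

$ grep -iE "error|fatal|exception" /users/sindhuht/hadoop-1.2.1/logs/hadoop-*-namenode-*.log | tail -n 20
$ grep -iE "error|fatal|exception" /users/sindhuht/hadoop-1.2.1/logs/hadoop-*-datanode-*.log | tail -n 20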





--
Thanks & Regards
SB.Gowtham
Data Engineer @ Datadotz

sindhu hosamane

May 9, 2014, 4:18:48 PM
to chenn...@googlegroups.com

I checked both. They look like below:

core-site.xml

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/users/sindhuht/hdfstmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>


hdfs-site.xml

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>


mapred-site.xml

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>


It is the same on my local machine, where it works fine, but on the remote server Hadoop doesn't start all the daemons correctly.

Where do I see the log files? Something is wrong, please let me know.

Sb Gowtham

May 10, 2014, 8:49:24 AM
to chenn...@googlegroups.com
You can see the log files in the hadoop-1.2.1/logs folder. Please attach the logs folder.
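For example (a rough sketch; the path assumes Hadoop is installed at /users/sindhuht/hadoop-1.2.1, adjust if yours differs):

$ ls -l /users/sindhuht/hadoop-1.2.1/logs/
$ cd /users/sindhuht/hadoop-1.2.1
$ zip -r logs.zip logs/    # package the folder so it can be attached (zip may need to be installed)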



sindhu hosamane

May 10, 2014, 5:53:31 PM
to chenn...@googlegroups.com
Hello friend,
I attached my logs folder. I also tried to format my NameNode; still, not all daemons start.


logs.zip

Venkat Ankam

May 10, 2014, 7:55:47 PM
to chenn...@googlegroups.com
Sindhu,

This looks like a port issue. Make the changes below.

  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>

  <name>mapred.job.tracker</name>
  <value>localhost:8021</value>


Regards,

Venkat



On Sat, May 10, 2014 at 2:53 PM, sindhu hosamane <sind...@gmail.com> wrote:
Hello friend,
I attached my logs folder. I also tried to format my NameNode; still, not all daemons start.


sindhu hosamane

May 11, 2014, 6:45:25 AM
to chenn...@googlegroups.com
I also tried the port numbers Venkat suggested; still, not all daemons are started.
But it shows all of them starting, like below:


starting namenode, logging to /users/sindhuht/hadoop-1.2.1/libexec/../logs/hadoop-sindhuht-namenode-localhost.out
localhost: starting datanode, logging to /users/sindhuht/hadoop-1.2.1/libexec/../logs/hadoop-sindhuht-datanode-localhost.out
localhost: starting secondarynamenode, logging to /users/sindhuht/hadoop-1.2.1/libexec/../logs/hadoop-sindhuht-secondarynamenode-localhost.out
starting jobtracker, logging to /users/sindhuht/hadoop-1.2.1/libexec/../logs/hadoop-sindhuht-jobtracker-localhost.out
localhost: starting tasktracker, logging to /users/sindhuht/hadoop-1.2.1/libexec/../logs/hadoop-sindhuht-tasktracker-localhost.out


But when I do jps, not all of them are started. Still struggling, I could not fix it!
This Hadoop is on my remote server; I am simply using SSH to work with it. Does that matter?

swapnil joshi

May 11, 2014, 2:45:34 PM
to chenn...@googlegroups.com
Hi Sindhu,
I think it's an IP address problem. First get the IP address of your machine and replace the text "localhost" with that value in all of Hadoop's conf files.
For example, if your system has the IP address 192.168.1.56, then:
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.56:8020</value>

  <name>mapred.job.tracker</name>
  <value>192.168.1.56:8021</value>
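If you are not sure of the server's address, something like this should print it on Ubuntu (a sketch; the exact commands and output vary by system):

$ hostname -I                      # lists the machine's IP addresses
$ ip addr show | grep 'inet '      # alternative: per-interface addresses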








--
Regards,
Swapnil K. Joshi

sindhu hosamane

May 11, 2014, 5:31:41 PM
to chenn...@googlegroups.com

I used ifconfig, got the IP address, and put the IP address instead of localhost in all the conf files.
Still my efforts are in vain; the NameNode and DataNode are not started.

As I read in some other thread, I recreated the hadoop folder, copied back the backed-up conf folder, recreated the tmp folder, and formatted the NameNode again.
It still didn't work.

Could somebody please attach their conf folder if your Hadoop is running perfectly? That would be really helpful, so that I can cross-check!
I have been scratching my head for 2 days! Your help would be appreciated.

swapnil joshi

May 12, 2014, 1:08:36 AM
to chenn...@googlegroups.com
Are you installing Hadoop on a single node or on multiple nodes?
Which operating system are you using?
What is the output of the "jps" command?
And give me the output of the following command:
ls -l /users/sindhuht/

Give me this information. As per your configuration, I think you are installing Hadoop on a single node.



sindhu hosamane

May 12, 2014, 1:48:54 AM
to chenn...@googlegroups.com

***   I am installing single-node Hadoop.
***   Ubuntu machine (but it is a remote server, a remote Ubuntu machine; I am working with Hadoop on a remote Ubuntu server).
***   Output of ls -l /users/sindhuht:

    sindhuht@localhost:~$ ls -l /users/sindhuht

total 62396
drwxr-xr-x  2 sindhuht user     4096 May  6 22:45 Downloads
drwxr-xr-x  2 sindhuht user     4096 May 11 22:23 conf
-rw-r--r--  1 sindhuht user      179 Oct 19  2012 examples.desktop
drwxr-xr-x 16 sindhuht user     4096 May 11 22:34 hadoop-1.2.1
-rw-r--r--  1 sindhuht user 63851630 Jul 23  2013 hadoop-1.2.1.tar.gz
drwxr-xr-x  2 sindhuht user     4096 May  9 14:58 hdfsdata
drwxr-xr-x  4 sindhuht user     4096 May 11 22:35 hdfstmp
-rwxr-xr-x  1 sindhuht user    11440 May  7 15:35 lein.sh
drwxr-xr-x  2 sindhuht user     4096 May  9 11:13 useful data


***   Output of jps:

9859 Jps
9334 SecondaryNameNode
9763 TaskTracker


(Sometimes the TaskTracker starts too; sometimes it doesn't.)

But the NameNode and DataNode are never started.



sindhu hosamane

May 12, 2014, 1:55:01 AM
to chenn...@googlegroups.com

In addition to my previous mail, I am attaching my recent logs folder.
Thanks


logs.zip

swapnil joshi

May 12, 2014, 2:34:16 AM
to chenn...@googlegroups.com
Hi Sindhu,
Can you format the NameNode?


On Mon, May 12, 2014 at 11:25 AM, sindhu hosamane <sind...@gmail.com> wrote:

In addition to my previous mail, I am attaching my recent logs folder.
Thanks



swapnil joshi

May 12, 2014, 2:46:19 AM
to chenn...@googlegroups.com
Use the following commands to format the NameNode:

sindhuht@localhost:~$ cd /users/sindhuht/hadoop-1.2.1
sindhuht@localhost:~$ bin/hadoop namenode -format
sindhuht@localhost:~$ bin/stop-all.sh
sindhuht@localhost:~$ bin/start-all.sh

Give me the terminal output of all these commands.

Garvit Bansal

May 12, 2014, 2:59:44 AM
to chenn...@googlegroups.com
Hello, 

I also went through the same problem but finally found a way out of it. Format the NameNode after stopping it. Try deleting the temp folder and doing the steps again, as sketched below. This might solve your problem.
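A rough sequence for that (a sketch; the paths assume the hadoop.tmp.dir and install location shown earlier in this thread, and clearing the temp dir erases any existing HDFS data):

$ cd /users/sindhuht/hadoop-1.2.1
$ bin/stop-all.sh                    # stop all daemons first
$ rm -rf /users/sindhuht/hdfstmp/*   # clear the temp dir configured as hadoop.tmp.dir
$ bin/hadoop namenode -format        # re-initialize the NameNode metadata
$ bin/start-all.sh
$ jps                                # NameNode and DataNode should now be listed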

Thanks.

Ashwanth Kumar

May 12, 2014, 2:59:41 AM
to chenn...@googlegroups.com
I saw your logs just now, and I see the following problems:
- Your NameNode doesn't start because something is already using its port (54310):
2014-05-12 07:47:16,032 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)

Find the process that uses this port --
$ netstat -ptlen | grep 54310 

Your output will be something like:
tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN      1001       3505789     11586/java      

Get the process ID from the last column (11586 in this example). If you can't recognize the process, or don't want it running, kill it with:
$ kill -9 11586

- Regarding the DataNode, it doesn't start because it can't find the NameNode running. 
- Your JobTracker also has the same issue. Port 50030 is already being used. 
2014-05-12 07:47:20,411 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
2014-05-12 07:47:20,413 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)


Something tells me you are already running the NameNode / JobTracker as a different user. Remember, "jps" returns only the active Java processes of the current user.
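One way to check for daemons started under other accounts (a sketch; the bracketed pattern just keeps grep from matching itself):

$ ps aux | grep -i '[n]amenode'     # NameNode processes owned by any user
$ ps aux | grep -i '[j]obtracker'   # same for the JobTracker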

PS: Just to be sure, before you try the above methods, navigate to http://<remote-machine-ip>:50030/ or http://<remote-machine-ip>:50070/ in your browser to check whether they are accessible.



On Mon, May 12, 2014 at 11:25 AM, sindhu hosamane <sind...@gmail.com> wrote:

In addition to my previous mail, I am attaching my recent logs folder.
Thanks





--

Ashwanth Kumar / ashwanthkumar.in

sindhu hosamane

May 16, 2014, 4:41:40 AM
to chenn...@googlegroups.com
Thanks all for your help.
Got it working now. The ports I was trying to access were free, though.
I changed my hdfs-site.xml to look like this:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/users/ch.neuensta/hadoop/tmp/dfs/name/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/users/ch.neuensta/hadoop/tmp/dfs/name</value>
  <final>true</final>
</property>
</configuration>

And I recreated the tmp folder and formatted the NameNode.
It worked.
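As a quick check that the new directories are actually in use (a sketch, using the dfs.name.dir and dfs.data.dir paths from the config above):

$ ls /users/ch.neuensta/hadoop/tmp/dfs/name/current/        # NameNode metadata appears here after the format
$ ls /users/ch.neuensta/hadoop/tmp/dfs/name/data/current/   # DataNode block storage appears here once it starts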