Zero running map tasks and running reduce tasks in UI


Taniya Tom

Aug 3, 2014, 10:46:33 PM
to hadoop-learner-tutori...@googlegroups.com


Hi,

I am a Hadoop student from Udemy. I followed all the installation instructions without much trouble. At the end, I ran "bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'" to test Hadoop. But I see 0 'running map tasks' and 0 'running reduce tasks' in the UI in the browser. Attaching a screenshot as well.

Is there a problem with my installation, or is it OK for these values to be zero? Please help me.

Thanks,

Taniya

screenshot.PNG

Nitesh Jain

Aug 4, 2014, 2:50:21 AM
to hadoop-learner-tutori...@googlegroups.com
Hi Taniya,

The installation process is a little tricky. Could you please send me the following so that I can better understand the problem?

1. The output of the command:
jps
2. The output/error message from the command:
bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

Best,
Nitesh 

Taniya Tom

Aug 4, 2014, 6:59:08 PM
to hadoop-learner-tutori...@googlegroups.com
Thank you for the quick response, Nitesh.
Here is what I got for the jps command:
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ jps
2497 SecondaryNameNode
2723 TaskTracker
2872 Jps
2584 JobTracker
2221 NameNode
2351 DataNode

The output/error I got from bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+' is:
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
14/08/04 15:15:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/08/04 15:15:20 WARN snappy.LoadSnappy: Snappy native library not loaded
14/08/04 15:15:20 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:9000/tmp/hadoop-taniya/mapred/staging/taniya/.staging/job_201408041513_0001
14/08/04 15:15:20 ERROR security.UserGroupInformation: PriviledgedActionException as:taniya cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/taniya/input
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/taniya/input
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
at org.apache.hadoop.examples.Grep.run(Grep.java:69)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.Grep.main(Grep.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ ls output/*
output/part-00000   output/_SUCCESS


The two localhost UI pages are also attached as documents. I see the _SUCCESS file when I do 'ls output/*' after running "bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'".
But I still see only 0 running tasks in the UI.

Thanks,
Taniya
namenode_localhost.pdf
mapreduce_administration.pdf

Nitesh Jain

Aug 5, 2014, 12:47:38 AM
to hadoop-learner-tutori...@googlegroups.com
Hi Taniya,

First of all, congratulations! You have got the installation right: I can see all the daemons running. The issue here is that the command cannot find the 'input' path. You can find that message in the output of the command if you read it closely.

If you have already made the changes to the configuration files core-site.xml, mapred-site.xml and hdfs-site.xml and are in pseudo-distributed mode (which I think you are), all you need to do is run the following command to create the input directory on HDFS:
hadoop fs -mkdir input

and then run the test command again. However, if you were in standalone mode, you would create a directory on the local file system with a simple:
mkdir input
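In case it is useful, here is a tiny local sketch of the standalone-mode idea. The file name and contents are made-up examples (not your actual config), and plain grep stands in here for the Hadoop grep example job:

```shell
# 'input' is just a directory of XML files; the example job effectively
# greps them for tokens matching the pattern dfs[a-z.]+
mkdir -p input
printf '<name>dfs.replication</name>\n<name>dfs.name.dir</name>\n' > input/sample.xml
grep -hEo 'dfs[a-z.]+' input/sample.xml
# prints: dfs.replication and dfs.name.dir
```

In pseudo-distributed mode the same idea applies, except the input directory and its files live on HDFS rather than on the local disk.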

Hope this works for you!

Best,
Nitesh

Taniya Tom

Aug 5, 2014, 2:10:48 AM
to hadoop-learner-tutori...@googlegroups.com
Hi Nitesh,

I created an input folder using the command 'hadoop fs -mkdir input' as you suggested. Now this is what I get when I run the command to test the Hadoop framework: an ERROR message about an existing output directory. Should I delete this directory before executing a Hadoop query every time? Also, I am not able to locate the correct directory on my system.
There are two output folders in my system in these locations:
/home/taniya/hadoop/hadoop-1.2.1/docs/api/org/apache/hadoop/mapreduce/lib/output
/home/taniya/hadoop/hadoop-1.2.1/output                 
But the error message gives this path: Output directory hdfs://localhost:9000/user/taniya/output

I'm not sure what to do.

This is the output/error message that I get.
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
14/08/04 22:40:28 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/08/04 22:40:28 WARN snappy.LoadSnappy: Snappy native library not loaded
14/08/04 22:40:28 INFO mapred.FileInputFormat: Total input paths to process : 0
14/08/04 22:40:29 INFO mapred.JobClient: Running job: job_201408042239_0001
14/08/04 22:40:30 INFO mapred.JobClient:  map 0% reduce 0%
14/08/04 22:40:42 INFO mapred.JobClient:  map 0% reduce 100%
14/08/04 22:40:44 INFO mapred.JobClient: Job complete: job_201408042239_0001
14/08/04 22:40:44 INFO mapred.JobClient: Counters: 19
14/08/04 22:40:44 INFO mapred.JobClient:   Map-Reduce Framework
14/08/04 22:40:44 INFO mapred.JobClient:     Combine output records=0
14/08/04 22:40:44 INFO mapred.JobClient:     Spilled Records=0
14/08/04 22:40:44 INFO mapred.JobClient:     Reduce input records=0
14/08/04 22:40:44 INFO mapred.JobClient:     Reduce output records=0
14/08/04 22:40:44 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1848279040
14/08/04 22:40:44 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/08/04 22:40:44 INFO mapred.JobClient:     Physical memory (bytes) snapshot=65650688
14/08/04 22:40:44 INFO mapred.JobClient:     Combine input records=0
14/08/04 22:40:44 INFO mapred.JobClient:     CPU time spent (ms)=250
14/08/04 22:40:44 INFO mapred.JobClient:     Reduce input groups=0
14/08/04 22:40:44 INFO mapred.JobClient:     Total committed heap usage (bytes)=33423360
14/08/04 22:40:44 INFO mapred.JobClient:   FileSystemCounters
14/08/04 22:40:44 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=55776
14/08/04 22:40:44 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=86
14/08/04 22:40:44 INFO mapred.JobClient:   File Output Format Counters 
14/08/04 22:40:44 INFO mapred.JobClient:     Bytes Written=86
14/08/04 22:40:44 INFO mapred.JobClient:   Job Counters 
14/08/04 22:40:44 INFO mapred.JobClient:     Launched reduce tasks=1
14/08/04 22:40:44 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=5751
14/08/04 22:40:44 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/08/04 22:40:44 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6970
14/08/04 22:40:44 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/08/04 22:40:44 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:9000/tmp/hadoop-taniya/mapred/staging/taniya/.staging/job_201408042239_0002
14/08/04 22:40:44 ERROR security.UserGroupInformation: PriviledgedActionException as:taniya cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:9000/user/taniya/output already exists
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:9000/user/taniya/output already exists
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:121)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:975)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
at org.apache.hadoop.examples.Grep.run(Grep.java:84)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.Grep.main(Grep.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ jps
4689 SecondaryNameNode
4771 JobTracker
4547 DataNode
5379 Jps
4406 NameNode
4924 TaskTracker
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ bin/stop-all.sh

Kindly help.

Thanks,
Taniya



Nitesh Jain

Aug 5, 2014, 5:54:02 AM
to hadoop-learner-tutori...@googlegroups.com
Hey Taniya,

You need to change the output directory name, because there are no overwrites in HDFS. This is a precautionary mechanism put in place to save the results created by a long-running job, so that you do not (even accidentally) overwrite the results of previous runs.

What happened is that when you typed in the earlier commands, the job didn't run but the output directory got created, so now you get the message that the output directory already exists. All you need to do is change the name of the output directory from output to output1 (or anything else you like) and things should run just fine. Something like this:

bin/hadoop jar hadoop-examples-*.jar grep input output1 'dfs[a-z.]+'
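The no-overwrite behaviour is the same idea as a plain mkdir refusing to clobber an existing directory. A quick local illustration (directory names here are just examples):

```shell
# Creating a directory that already exists fails, so you pick a
# fresh name for each run instead of overwriting the old results.
mkdir output_demo && echo "output_demo created"
mkdir output_demo 2>/dev/null || echo "output_demo exists -- pick a new name"
mkdir output_demo1 && echo "output_demo1 created"
```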

Do let me know if you still face issues!

Cheers,
Nitesh

Taniya Tom

Aug 5, 2014, 2:43:37 PM
to hadoop-learner-tutori...@googlegroups.com
Hi Nitesh,

It looks like the new output folder "out01" was not created.
I am copying the typescript that I generated.

taniya@taniya-VirtualBox:~$ ssh localhost
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

*** System restart required ***
Last login: Tue Aug  5 11:16:59 2014 from localhost
taniya@taniya-VirtualBox:~$ cd hadoop/hadoop-1.2.1/
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ bin/start-all.sh
starting namenode, logging to /home/taniya/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-taniya-namenode-taniya-VirtualBox.out
localhost: starting datanode, logging to /home/taniya/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-taniya-datanode-taniya-VirtualBox.out
localhost: starting secondarynamenode, logging to /home/taniya/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-taniya-secondarynamenode-taniya-VirtualBox.out
starting jobtracker, logging to /home/taniya/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-taniya-jobtracker-taniya-VirtualBox.out
localhost: starting tasktracker, logging to /home/taniya/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-taniya-tasktracker-taniya-VirtualBox.out
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ jps
10457 SecondaryNameNode
10314 DataNode
10539 JobTracker
10172 NameNode
10829 Jps
10687 TaskTracker
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ bin/hadoop jar hadoop-examples-*.jar grep input out01 'dfs[a-z.]+'
14/08/05 11:26:48 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/08/05 11:26:49 WARN snappy.LoadSnappy: Snappy native library not loaded
14/08/05 11:26:49 INFO mapred.FileInputFormat: Total input paths to process : 0
14/08/05 11:26:49 INFO mapred.JobClient: Running job: job_201408051125_0001
14/08/05 11:26:50 INFO mapred.JobClient:  map 0% reduce 0%
14/08/05 11:27:02 INFO mapred.JobClient:  map 0% reduce 100%
14/08/05 11:27:04 INFO mapred.JobClient: Job complete: job_201408051125_0001
14/08/05 11:27:04 INFO mapred.JobClient: Counters: 19
14/08/05 11:27:04 INFO mapred.JobClient:   Map-Reduce Framework
14/08/05 11:27:04 INFO mapred.JobClient:     Combine output records=0
14/08/05 11:27:04 INFO mapred.JobClient:     Spilled Records=0
14/08/05 11:27:04 INFO mapred.JobClient:     Reduce input records=0
14/08/05 11:27:04 INFO mapred.JobClient:     Reduce output records=0
14/08/05 11:27:04 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1848279040
14/08/05 11:27:04 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/08/05 11:27:04 INFO mapred.JobClient:     Physical memory (bytes) snapshot=65585152
14/08/05 11:27:04 INFO mapred.JobClient:     Combine input records=0
14/08/05 11:27:04 INFO mapred.JobClient:     CPU time spent (ms)=260
14/08/05 11:27:04 INFO mapred.JobClient:     Reduce input groups=0
14/08/05 11:27:04 INFO mapred.JobClient:     Total committed heap usage (bytes)=33423360
14/08/05 11:27:04 INFO mapred.JobClient:   FileSystemCounters
14/08/05 11:27:04 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=55778
14/08/05 11:27:04 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=86
14/08/05 11:27:04 INFO mapred.JobClient:   File Output Format Counters 
14/08/05 11:27:04 INFO mapred.JobClient:     Bytes Written=86
14/08/05 11:27:04 INFO mapred.JobClient:   Job Counters 
14/08/05 11:27:04 INFO mapred.JobClient:     Launched reduce tasks=1
14/08/05 11:27:04 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=5638
14/08/05 11:27:04 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/08/05 11:27:04 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6778
14/08/05 11:27:04 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/08/05 11:27:05 INFO mapred.FileInputFormat: Total input paths to process : 1
14/08/05 11:27:05 INFO mapred.JobClient: Running job: job_201408051125_0002
14/08/05 11:27:06 INFO mapred.JobClient:  map 0% reduce 0%
14/08/05 11:27:14 INFO mapred.JobClient:  map 100% reduce 0%
14/08/05 11:27:24 INFO mapred.JobClient:  map 100% reduce 33%
14/08/05 11:27:25 INFO mapred.JobClient:  map 100% reduce 100%
14/08/05 11:27:27 INFO mapred.JobClient: Job complete: job_201408051125_0002
14/08/05 11:27:27 INFO mapred.JobClient: Counters: 29
14/08/05 11:27:27 INFO mapred.JobClient:   Map-Reduce Framework
14/08/05 11:27:27 INFO mapred.JobClient:     Spilled Records=0
14/08/05 11:27:27 INFO mapred.JobClient:     Map output materialized bytes=6
14/08/05 11:27:27 INFO mapred.JobClient:     Reduce input records=0
14/08/05 11:27:27 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=3696443392
14/08/05 11:27:27 INFO mapred.JobClient:     Map input records=0
14/08/05 11:27:27 INFO mapred.JobClient:     SPLIT_RAW_BYTES=117
14/08/05 11:27:27 INFO mapred.JobClient:     Map output bytes=0
14/08/05 11:27:27 INFO mapred.JobClient:     Reduce shuffle bytes=6
14/08/05 11:27:27 INFO mapred.JobClient:     Physical memory (bytes) snapshot=245657600
14/08/05 11:27:27 INFO mapred.JobClient:     Map input bytes=0
14/08/05 11:27:27 INFO mapred.JobClient:     Reduce input groups=0
14/08/05 11:27:27 INFO mapred.JobClient:     Combine output records=0
14/08/05 11:27:27 INFO mapred.JobClient:     Reduce output records=0
14/08/05 11:27:27 INFO mapred.JobClient:     Map output records=0
14/08/05 11:27:27 INFO mapred.JobClient:     Combine input records=0
14/08/05 11:27:27 INFO mapred.JobClient:     CPU time spent (ms)=1350
14/08/05 11:27:27 INFO mapred.JobClient:     Total committed heap usage (bytes)=179900416
14/08/05 11:27:27 INFO mapred.JobClient:   File Input Format Counters 
14/08/05 11:27:27 INFO mapred.JobClient:     Bytes Read=86
14/08/05 11:27:27 INFO mapred.JobClient:   FileSystemCounters
14/08/05 11:27:27 INFO mapred.JobClient:     HDFS_BYTES_READ=203
14/08/05 11:27:27 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=110049
14/08/05 11:27:27 INFO mapred.JobClient:     FILE_BYTES_READ=6
14/08/05 11:27:27 INFO mapred.JobClient:   File Output Format Counters 
14/08/05 11:27:27 INFO mapred.JobClient:     Bytes Written=0
14/08/05 11:27:27 INFO mapred.JobClient:   Job Counters 
14/08/05 11:27:27 INFO mapred.JobClient:     Launched map tasks=1
14/08/05 11:27:27 INFO mapred.JobClient:     Launched reduce tasks=1
14/08/05 11:27:27 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=10509
14/08/05 11:27:27 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/08/05 11:27:27 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=9941
14/08/05 11:27:27 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/08/05 11:27:27 INFO mapred.JobClient:     Data-local map tasks=1
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ ls out01/*
ls: cannot access out01/*: No such file or directory
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ bin/stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
taniya@taniya-VirtualBox:~/hadoop/hadoop-1.2.1$ exit
logout
Connection to localhost closed.


Thank you for reaching out to help me.

Taniya

Nitesh Jain

Aug 6, 2014, 1:26:47 AM
to hadoop-learner-tutori...@googlegroups.com
Hey Taniya,

First of all, congratulations! That is a successful run of the program!

The out01 folder got created on HDFS, not on the local file system. So when you do an ls on HDFS with the command:
hadoop fs -ls

you should see the out01 folder. Inside it you will find other files, such as the logs and the output file. How to view the output files and interpret the output of a successfully completed command is covered later in the course.

Hadoop installation is a little tricky. I appreciate your patience in getting it installed successfully!

Best,
Nitesh

Taniya Tom

Aug 6, 2014, 12:46:03 PM
to hadoop-learner-tutori...@googlegroups.com
Thank you Nitesh.

Taniya

