Hi all
I have a two-node Hadoop cluster on Windows: DEV144 is the master and DEV140 is the slave.
My Map/Reduce program is written in Python and lives on the master's (DEV144) local disk, under [C:\Python33\..]
I run the streaming job with the following command:
hadoop jar /HDP/hadoop-1.2.0.1.3.0.0-0380/contrib/streaming/hadoop-streaming-1.2.0.1.3.0.0-0380.jar -mapper "python C:\Python33\mapper.py" -reducer "python C:\Python33\redu.py" -input "/user/sornalingam/input/input.txt" -output "/user/sornalingam/output/out20131113_15"
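I also came across the streaming jar's -file option, which is supposed to ship local files with the job so every task node gets a copy. A sketch of how I think my command would look with it (unverified, same paths as above, and assuming python is on the PATH of each node):

```shell
# Same streaming job, but shipping the scripts with the job via -file
# so each task node receives mapper.py and redu.py in its working dir.
STREAM_JAR=/HDP/hadoop-1.2.0.1.3.0.0-0380/contrib/streaming/hadoop-streaming-1.2.0.1.3.0.0-0380.jar

hadoop jar "$STREAM_JAR" \
    -file "C:\Python33\mapper.py" -mapper  "python mapper.py" \
    -file "C:\Python33\redu.py"   -reducer "python redu.py" \
    -input  "/user/sornalingam/input/input.txt" \
    -output "/user/sornalingam/output/out20131113_15"
```

Note the mapper/reducer arguments now reference just the file names, since the shipped copies land in each task's working directory. Is this the right approach?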
Every task that runs on DEV144 succeeds, but on DEV140 I get this error in the log file:
stderr logspython: can't open file 'C:\Python33\mapper.py': [Errno 2] No such file or directory
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
My questions:
1. Do I need to copy my Map/Reduce program to every node in the cluster?
2. How can I solve this problem?
Kindly help; I'm a newbie to Hadoop.
Thanks
Sornalingam