Error from attempt_201309051457_0001_m_000000_0: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1


Kunlei

Sep 5, 2013, 3:38:26 PM
to mr...@googlegroups.com
Dear all,

I am trying to run MapReduce jobs (Python code) using mrjob on my university's Hadoop cluster.

PS: I do not have root privileges; I must run all processes on compute nodes requested through PBS.

When I ran the following:
python ${HOME}/mrSVM_Pegasos/mrSVM.py -r hadoop -v --hadoop-bin "$HADOOP_HOME/bin/hadoop --config $HADOOP_JOB_CONF" < ${HOME}/mrSVM_Pegasos/kickStart.txt > ${HOME}//mrSVM_Pegasos/mrSVM_output.txt

I got the error (details attached below): "Job not successful. Error: # of failed Map Tasks exceeded allowed limit"
----------------------------------------------------------------------------------------------------------------------------------------------------------
looking for configs in /wsu/home/fk/fk02/fk0287/.mrjob.conf
using configs in /wsu/home/fk/fk02/fk0287/.mrjob.conf
Active configuration:
{'base_tmp_dir': '/wsu/home/fk/fk02/fk0287/mrjob/tmp',
 'bootstrap_mrjob': False,
 'cleanup': ['NONE'],
 'cleanup_on_failure': ['NONE'],
 'hadoop_bin': ['/wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop',
                '--config',
                '/wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf'],
 'hadoop_extra_args': [],
 'hadoop_home': '/wsu/arch/x86_64/hadoop/hadoop-1.0.4',
 'hadoop_streaming_jar': None,
 'hadoop_version': '1.0.4',
 'hdfs_scratch_dir': 'tmp/mrjob',
 'interpreter': ['python'],
 'jobconf': {'mapred.map.max.attempts': 2,
             'mapred.skip.attempts.to.start.skipping': 2,
             'mapred.skip.map.max.skip.records': 1,
             'mapred.skip.mode.enabled': True},
 'label': None,
 'owner': 'fk0287',
 'python_archives': [],
 'python_bin': ['python'],
 'setup_cmds': [],
 'setup_scripts': [],
 'steps_interpreter': ['/wsu/home/fk/fk02/fk0287/python275/bin/python'],
 'steps_python_bin': ['/wsu/home/fk/fk02/fk0287/python275/bin/python'],
 'upload_archives': [],
 'upload_files': []}
Looking for hadoop streaming jar in /wsu/arch/x86_64/hadoop/hadoop-1.0.4
Hadoop streaming jar is /wsu/arch/x86_64/hadoop/hadoop-1.0.4/contrib/streaming/hadoop-streaming-1.0.4.jar
reading from STDIN
creating tmp directory /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130905.191703.708619
dumping stdin to local file /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130905.191703.708619/STDIN
Making directory hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/ on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf fs -mkdir hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/
Copying local files into hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/
Uploading /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130905.191703.708619/STDIN -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/STDIN on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf fs -put /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130905.191703.708619/STDIN hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/STDIN
Uploading /wsu/home/fk/fk02/fk0287/trialsZKL/hadoopTry/mrSVM_Pegasos/mrSVM.py -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/mrSVM.py on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf fs -put /wsu/home/fk/fk02/fk0287/trialsZKL/hadoopTry/mrSVM_Pegasos/mrSVM.py hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/mrSVM.py
> /wsu/home/fk/fk02/fk0287/python275/bin/python /wsu/home/fk/fk02/fk0287/trialsZKL/hadoopTry/mrSVM_Pegasos/mrSVM.py --steps
running step 1 of 2
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf version
Using Hadoop version 1.0.4
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf jar /wsu/arch/x86_64/hadoop/hadoop-1.0.4/contrib/streaming/hadoop-streaming-1.0.4.jar -files 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/mrSVM.py#mrSVM.py' -D mapred.map.max.attempts=2 -D mapred.skip.attempts.to.start.skipping=2 -D mapred.skip.map.max.skip.records=1 -D mapred.skip.mode.enabled=True -cmdenv TZ=America/Detroit -input hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/STDIN -output hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/step-output/1 -mapper 'python mrSVM.py --step-num=0 --mapper' -reducer 'python mrSVM.py --step-num=0 --reducer'
HADOOP: packageJobJar: [/wsu/home/fk/fk02/fk0287/hadoop/tmp/hadoop-unjar1639639520478391263/] [] /tmp/streamjob4208268757340114556.jar tmpDir=null
HADOOP: Loaded the native-hadoop library
HADOOP: Snappy native library not loaded
HADOOP: Total input paths to process : 1
HADOOP: getLocalDirs(): [/wsu/home/fk/fk02/fk0287/hadoop/tmp/dfs/data1/mapred/local]
HADOOP: Running job: job_201309051517_0001
HADOOP: To kill this job, run:
HADOOP: /wsu/arch/amd64/hadoop/hadoop-1.0.4/libexec/../bin/hadoop job  -Dmapred.job.tracker=hdfs://dad1:9001 -kill job_201309051517_0001
HADOOP:  map 0%  reduce 0%
HADOOP:  map 100%  reduce 100%
HADOOP: To kill this job, run:
HADOOP: /wsu/arch/amd64/hadoop/hadoop-1.0.4/libexec/../bin/hadoop job  -Dmapred.job.tracker=hdfs://dad1:9001 -kill job_201309051517_0001
HADOOP: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201309051517_0001_m_000001
HADOOP: killJob...
HADOOP: Streaming Command Failed!
Job failed with return code 256: ['/wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop', '--config', '/wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf', 'jar', '/wsu/arch/x86_64/hadoop/hadoop-1.0.4/contrib/streaming/hadoop-streaming-1.0.4.jar', '-files', 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/mrSVM.py#mrSVM.py', '-D', 'mapred.map.max.attempts=2', '-D', 'mapred.skip.attempts.to.start.skipping=2', '-D', 'mapred.skip.map.max.skip.records=1', '-D', 'mapred.skip.mode.enabled=True', '-cmdenv', 'TZ=America/Detroit', '-input', 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/STDIN', '-output', 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/step-output/1', '-mapper', 'python mrSVM.py --step-num=0 --mapper', '-reducer', 'python mrSVM.py --step-num=0 --reducer']
Scanning logs for probable cause of failure
Traceback (most recent call last):
  File "/wsu/home/fk/fk02/fk0287/mrSVM_Pegasos/mrSVM.py", line 79, in <module>
    MRsvm.run()
  File "/wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/mrjob/job.py", line 483, in run
    mr_job.execute()
  File "/wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/mrjob/job.py", line 501, in execute
    super(MRJob, self).execute()
  File "/wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/mrjob/launch.py", line 146, in execute
    self.run_job()
  File "/wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/mrjob/launch.py", line 207, in run_job
    runner.run()
  File "/wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/mrjob/runner.py", line 450, in run
    self._run()
  File "/wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/mrjob/hadoop.py", line 238, in _run
    self._run_job_in_hadoop()
  File "/wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/mrjob/hadoop.py", line 356, in _run_job_in_hadoop
    raise CalledProcessError(returncode, streaming_args)
subprocess.CalledProcessError: Command '['/wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop', '--config', '/wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642050.vpbs1/conf', 'jar', '/wsu/arch/x86_64/hadoop/hadoop-1.0.4/contrib/streaming/hadoop-streaming-1.0.4.jar', '-files', 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/mrSVM.py#mrSVM.py', '-D', 'mapred.map.max.attempts=2', '-D', 'mapred.skip.attempts.to.start.skipping=2', '-D', 'mapred.skip.map.max.skip.records=1', '-D', 'mapred.skip.mode.enabled=True', '-cmdenv', 'TZ=America/Detroit', '-input', 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/files/STDIN', '-output', 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130905.191703.708619/step-output/1', '-mapper', 'python mrSVM.py --step-num=0 --mapper', '-reducer', 'python mrSVM.py --step-num=0 --reducer']' returned non-zero exit status 256
----------------------------------------------------------------------------------------------------------------------------------------------------------



Then I checked the logs and found this in the jobtracker log: Error from attempt_201309051457_0001_m_000000_0: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1

----------------------------------------------------------------------------------------------------------------------------------------------------------
2013-09-05 14:57:45,207 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201309051457_0001_m_000000_0: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
----------------------------------------------------------------------------------------------------------------------------------------------------------

Can anyone help me out? Thanks.

Best regards
Kunlei

Steve Johnson

Sep 5, 2013, 3:42:30 PM
to mr...@googlegroups.com
A very, very (very) quick Google search turned up this Stack Overflow answer, which is relevant to you.

Steve Johnson

Sep 5, 2013, 3:43:29 PM
to mr...@googlegroups.com
The link's debugging advice is about Elastic MapReduce, though; in your case, just grep through the stderr logs for tracebacks, or check them by hand for errors.

Kunlei

Sep 5, 2013, 3:49:44 PM
to mr...@googlegroups.com
Thanks, Steve, for the very quick reply and suggestions.

Before posting here, I had already searched the Internet and tried several suggestions, but I still could not solve the problem.

Best regards
Kunlei

Steve Johnson

Sep 5, 2013, 3:53:01 PM
to mr...@googlegroups.com
No one on this list will be able to solve your problem using the information you provided, because the relevant error message is in the logs somewhere.

Kunlei

Sep 5, 2013, 3:59:31 PM
to mr...@googlegroups.com
Yes, I found this in the jobtracker log: Error from attempt_201309051457_0001_m_000000_0: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1

I posted to the list hoping that someone had encountered the same or a similar problem before and could offer helpful suggestions or comments.

I will check all the logs carefully again, and hope to find more related error information.

Thanks again.

Best regards
Kunlei

Steve Johnson

Sep 5, 2013, 4:07:18 PM
to mr...@googlegroups.com
That's not a helpful error; it just says that the task failed without giving the reason. What you should actually be looking for is the stderr output of the subprocess, not of the thing that tried to run the process.
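[Editor's note: a minimal sketch of what Steve is suggesting, assuming the Hadoop 1.x log layout where each task attempt gets its own stderr file under a userlogs directory. The demo builds a stand-in directory tree so the script is self-contained; on a real cluster you would point it at the TaskTracker's actual userlogs path.]

```python
import os
import tempfile

def find_tracebacks(userlogs_dir):
    """Return {stderr_path: contents} for every task attempt whose
    stderr log contains a Python traceback."""
    hits = {}
    for dirpath, _dirnames, filenames in os.walk(userlogs_dir):
        if "stderr" in filenames:
            path = os.path.join(dirpath, "stderr")
            with open(path) as f:
                text = f.read()
            if "Traceback" in text:
                hits[path] = text
    return hits

# Stand-in for e.g. <hadoop.log.dir>/userlogs/attempt_.../stderr
root = tempfile.mkdtemp()
attempt = os.path.join(root, "userlogs",
                       "attempt_201309051517_0001_m_000001_0")
os.makedirs(attempt)
with open(os.path.join(attempt, "stderr"), "w") as f:
    f.write("Traceback (most recent call last):\n"
            "ImportError: No module named numpy\n")

for path, text in find_tracebacks(os.path.join(root, "userlogs")).items():
    print(path)
    print(text)
```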

Kunlei

Sep 8, 2013, 3:56:02 PM
to mr...@googlegroups.com
Hi Steve,

After modifying the configuration according to the stderr logs, the program finally ran successfully with correct results (details attached). Thanks.
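[Editor's note: the message doesn't spell out the fix, but judging from the extra -files, -archives, and -cmdenv PYTHONPATH arguments in the successful run below, it amounts to shipping the Python shared libraries and a site-packages tarball to the task nodes. A hypothetical .mrjob.conf excerpt along those lines, using the upload_files and python_archives options visible in the active configuration above; paths are this cluster's:]

```yaml
runners:
  hadoop:
    upload_files:
      - /wsu/home/fk/fk02/fk0287/python275/lib/libpython2.7.so.1.0
      - /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libatlas.so
    python_archives:
      - /wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/site-packages.tar.gz
```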

I have one more silly question: why can't I access http://dad1.grid.wayne.edu:50030/jobdetails.jsp?jobid=job_201309081504_0001 to track the tasks?

I have set mapred.job.tracker in mapred-site.xml as follows:
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://dad1:9001</value>
  </property>

PS: dad1 is the master node for this task.
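[Editor's note on the web UI question: in Hadoop 1.x, mapred.job.tracker is the JobTracker's RPC address (conventionally a bare host:port, not an hdfs:// URI), while the :50030 web UI is bound separately by mapred.job.tracker.http.address. A sketch of the relevant properties; the values here are assumptions for this cluster, and the UI also has to be network-reachable from your browser, which a firewalled PBS compute node often is not:]

```xml
<property>
  <name>mapred.job.tracker</name>
  <value>dad1:9001</value>
</property>
<property>
  <name>mapred.job.tracker.http.address</name>
  <value>0.0.0.0:50030</value>
</property>
```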

Best regards
Kunlei

---------------------------------------------------------------------------------------------------------------------------------------------
Looking for hadoop streaming jar in /wsu/arch/x86_64/hadoop/hadoop-1.0.4
Hadoop streaming jar is /wsu/arch/x86_64/hadoop/hadoop-1.0.4/contrib/streaming/hadoop-streaming-1.0.4.jar
reading from STDIN
creating tmp directory /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130908.190428.152426
dumping stdin to local file /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130908.190428.152426/STDIN
Making directory hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/ on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -mkdir hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/
Copying local files into hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/
Uploading /wsu/home/fk/fk02/fk0287/python275/lib/libpython2.7.so.1.0 -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libpython2.7.so.1.0 on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/home/fk/fk02/fk0287/python275/lib/libpython2.7.so.1.0 hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libpython2.7.so.1.0
Uploading /wsu/home/fk/fk02/fk0287/trialsZKL/hadoopTry/mrSVM_Pegasos/mrSVM.py -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/mrSVM.py on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/home/fk/fk02/fk0287/trialsZKL/hadoopTry/mrSVM_Pegasos/mrSVM.py hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/mrSVM.py
Uploading /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libcblas.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libcblas.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libcblas.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libcblas.so
Uploading /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libf77blas.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libf77blas.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libf77blas.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libf77blas.so
Uploading /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libblas.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libblas.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libblas.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libblas.so
Uploading /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/liblapack.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/liblapack.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/liblapack.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/liblapack.so
Uploading /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libptcblas.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libptcblas.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libptcblas.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libptcblas.so
Uploading /wsu/home/fk/fk02/fk0287/python275/lib/libpython2.7.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libpython2.7.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/home/fk/fk02/fk0287/python275/lib/libpython2.7.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libpython2.7.so
Uploading /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130908.190428.152426/STDIN -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/STDIN on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/home/fk/fk02/fk0287/mrjob/tmp/mrSVM.fk0287.20130908.190428.152426/STDIN hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/STDIN
Uploading /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libatlas.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libatlas.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libatlas.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libatlas.so
Uploading /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libptf77blas.so -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libptf77blas.so on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/arch/amd64/lib/ATLAS/ATLAS_amd64/lib/libptf77blas.so hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libptf77blas.so
Uploading /wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/site-packages.tar.gz -> hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/site-packages.tar.gz on HDFS
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -put /wsu/home/fk/fk02/fk0287/python275/lib/python2.7/site-packages/site-packages.tar.gz hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/site-packages.tar.gz
> /wsu/home/fk/fk02/fk0287/python275/bin/python /wsu/home/fk/fk02/fk0287/trialsZKL/hadoopTry/mrSVM_Pegasos/mrSVM.py --steps
running step 1 of 2
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf version
Using Hadoop version 1.0.4
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf jar /wsu/arch/x86_64/hadoop/hadoop-1.0.4/contrib/streaming/hadoop-streaming-1.0.4.jar -files 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libpython2.7.so.1.0#libpython2.7.so.1.0,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/liblapack.so#liblapack.so,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libcblas.so#libcblas.so,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libblas.so#libblas.so,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libpython2.7.so#libpython2.7.so,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libptcblas.so#libptcblas.so,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libf77blas.so#libf77blas.so,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/mrSVM.py#mrSVM.py,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libptf77blas.so#libptf77blas.so,hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/libatlas.so#libatlas.so' -archives 'hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/site-packages.tar.gz#site-packages.tar.gz' -D mapred.map.max.attempts=2 -D mapred.skip.attempts.to.start.skipping=2 -D mapred.skip.map.max.skip.records=1 -D mapred.skip.mode.enabled=True -cmdenv PYTHONPATH=site-packages.tar.gz -input hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/files/STDIN -output hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/step-output/1 -mapper 'python mrSVM.py --step-num=0 --mapper' -reducer 'python mrSVM.py --step-num=0 --reducer'
HADOOP: packageJobJar: [/tmp/zkl1/hadoop-unjar604010539993525218/] [] /tmp/streamjob1831737190425371931.jar tmpDir=null
HADOOP: Loaded the native-hadoop library
HADOOP: Snappy native library not loaded
HADOOP: Total input paths to process : 1
HADOOP: getLocalDirs(): [/tmp/zkl1/dfs/data1/mapred/local]
HADOOP: Running job: job_201309081504_0001
HADOOP: To kill this job, run:
HADOOP: /wsu/arch/amd64/hadoop/hadoop-1.0.4/libexec/../bin/hadoop job  -Dmapred.job.tracker=hdfs://dad1:9001 -kill job_201309081504_0001
HADOOP:  map 0%  reduce 0%
HADOOP:  map 50%  reduce 0%
HADOOP:  map 100%  reduce 0%
HADOOP:  map 100%  reduce 17%
HADOOP:  map 100%  reduce 100%
HADOOP: Job complete: job_201309081504_0001
HADOOP: Output: hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/output
Streaming final output from hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/output
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -lsr hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/output
> /wsu/arch/x86_64/hadoop/hadoop-1.0.4/bin/hadoop --config /wsu/home/fk/fk02/fk0287/hadoop/mrSVM/642880.vpbs1/conf fs -cat hdfs:///user/fk0287/tmp/mrjob/mrSVM.fk0287.20130908.190428.152426/output/part-00000
---------------------------------------------------------------------------------------------------------------------------------------------

Steve Johnson

Sep 8, 2013, 4:08:36 PM
to mr...@googlegroups.com
Please start using gist.github.com or pastebin.com to dump long blocks of text in the future.
 
I'm not sure what the issue is. This is more of a general Hadoop question.

István Szukács

Nov 26, 2013, 2:12:10 PM
to mr...@googlegroups.com
Out of curiosity, how did you capture the STDERR of your Python mapper and reducer?

Thank you in advance,
Istvan

Poornesh B.D

Feb 4, 2015, 7:30:49 AM
to mr...@googlegroups.com
The stderr will be in /app/hadoop/tmp/mapred/userlogs.
The subprocess failed with code 1 because your code has a syntax error, or because a library it needs is not available on the task nodes.
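[Editor's note: to make the "subprocess failed with code 1" connection concrete: Hadoop Streaming runs the mapper as a child process, and any uncaught Python exception (syntax error, missing import, ...) makes the interpreter exit with status 1, which PipeMapRed then reports as the RuntimeException above. The real cause only appears in the attempt's stderr log. A self-contained sketch; the missing module name is made up:]

```python
import subprocess
import sys

# A mapper that dies on import, the way mrSVM.py would if numpy
# (or any other dependency) were missing on the task node.
bad_mapper = "import no_such_module_xyz\n"

proc = subprocess.Popen([sys.executable, "-c", bad_mapper],
                        stderr=subprocess.PIPE)
_, stderr = proc.communicate()

# The interpreter exits with status 1; the reason is only in stderr,
# which Hadoop writes to the task attempt's stderr log.
print("exit code:", proc.returncode)
print(stderr.decode())
```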