Error while running program in Hadoop


Piyush Pawar

Apr 24, 2014, 12:43:38 PM
to chenn...@googlegroups.com
Hello everyone,
I am trying to execute the code (posted below), but it gives the following errors.
Please reply if you know how to resolve them.

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/24 18:20:47 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/04/24 18:20:48 INFO input.FileInputFormat: Total input paths to process : 607
14/04/24 18:20:48 INFO mapred.JobClient: Running job: job_local597148126_0001
14/04/24 18:20:48 ERROR mapred.FileOutputCommitter: Mkdirs failed to create /home/hduser/small/out/_temporary

14/04/24 18:20:48 WARN mapred.LocalJobRunner: job_local597148126_0001
java.lang.Exception: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
at org.idryman.combinefiles.CFInputFormat.createRecordReader(CFInputFormat.java:26)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:488)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
... 12 more
Caused by: java.io.FileNotFoundException: /home/hduser/small/indata/test85 - Copy (3).txt (Permission denied)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.hadoop.fs.RawLocalFileSystem$TrackingFileInputStream.<init>(RawLocalFileSystem.java:71)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:107)
at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:182)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:126)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:436)
at org.idryman.combinefiles.CFRecordReader.<init>(CFRecordReader.java:34)
... 17 more

14/04/24 18:20:49 INFO mapred.JobClient:  map 0% reduce 0%
14/04/24 18:20:49 INFO mapred.JobClient: Job complete: job_local597148126_0001
14/04/24 18:20:49 INFO mapred.JobClient: Counters: 0


Thanks
Piyush

Senthil Kumar

Apr 24, 2014, 5:34:06 PM
to chenn...@googlegroups.com
Hi Piyush,
It seems you are running the program locally.
Can you tell me the permissions of the local folder ''? If they are not 755, can you change them to 755 recursively?
Let me know the result after executing with the above change.

Thanks
Senthil
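The recursive permission change suggested above can be sketched as follows. This demo runs on a throwaway directory; on the actual machine, `$DIR` would be `/home/hduser/small`, the path from the stack trace (the file and directory names below are stand-ins):

```shell
# Demo on a scratch directory; replace "$DIR" with /home/hduser/small
# (the path from the stack trace) on the real machine.
DIR=$(mktemp -d)
mkdir -p "$DIR/indata"
echo "sample" > "$DIR/indata/test85.txt"
chmod -R 700 "$DIR"        # simulate the restrictive permissions

chmod -R 755 "$DIR"        # owner rwx, group/others r-x, applied recursively

# Every entry should now report mode 755
stat -c '%a %n' "$DIR" "$DIR/indata" "$DIR/indata/test85.txt"
```

The `-R` flag matters: the local job runner must be able to both list the directories and open each input file, so every level of the tree needs the permission change.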

piyush pawar

Apr 25, 2014, 1:38:44 PM
to chenn...@googlegroups.com
Thank you so much for your reply.
After changing the permissions, the "Permission denied" error is gone, but there are still several errors.
I have searched on Google too, but have not found a proper way to resolve them.
The errors are as follows:



14/04/25 21:53:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/25 21:53:09 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/04/25 21:53:09 INFO input.FileInputFormat: Total input paths to process : 6
14/04/25 21:53:09 INFO mapred.JobClient: Running job: job_local1850497627_0001
14/04/25 21:53:09 ERROR mapred.FileOutputCommitter: Mkdirs failed to create /home/hduser/tmp/putput/_temporary
14/04/25 21:53:09 INFO mapred.LocalJobRunner: Waiting for map tasks
14/04/25 21:53:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1850497627_0001_m_000000_0
14/04/25 21:53:10 INFO util.ProcessTree: setsid exited with exit code 0
14/04/25 21:53:10 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@13442f1
14/04/25 21:53:10 INFO mapred.MapTask: Processing split: Paths:/home/hduser/tmp/dfs:0+4096,/home/hduser/tmp/t2.txt~:0+12,/home/hduser/tmp/mapred:0+4096,/home/hduser/tmp/t3.txt:0+4594,/home/hduser/tmp/t2.txt:0+112866,/home/hduser/tmp/t1.txt:0+116
14/04/25 21:53:10 INFO mapred.LocalJobRunner: Map task executor complete.
14/04/25 21:53:10 WARN mapred.LocalJobRunner: job_local1850497627_0001
java.lang.Exception: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
at p1.org.idryman.combinefiles.CFInputFormat.createRecordReader(CFInputFormat.java:21)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:488)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
... 12 more
Caused by: java.io.FileNotFoundException: /home/hduser/tmp/dfs (Is a directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.hadoop.fs.RawLocalFileSystem$TrackingFileInputStream.<init>(RawLocalFileSystem.java:71)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:107)
at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:182)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:126)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:436)
at p1.org.idryman.combinefiles.CFRecordReader.<init>(CFRecordReader.java:34)
... 17 more
14/04/25 21:53:10 INFO mapred.JobClient: map 0% reduce 0%
14/04/25 21:53:10 INFO mapred.JobClient: Job complete: job_local1850497627_0001
14/04/25 21:53:10 INFO mapred.JobClient: Counters: 0
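Reading the split listing above, the input path contains the directories dfs and mapred plus an editor backup t2.txt~, and the record reader then tries to open a directory as a file, which produces the "Is a directory" exception. One way out (sketched here on a scratch tree; the real base would be /home/hduser/tmp, and the indata name is hypothetical) is to keep the data files in a directory of their own and pass that directory to the job:

```shell
# Scratch reproduction of the layout from the split listing; on the
# real machine "$BASE" would be /home/hduser/tmp.
BASE=$(mktemp -d)
mkdir -p "$BASE/dfs" "$BASE/mapred"
touch "$BASE/t1.txt" "$BASE/t2.txt" "$BASE/t3.txt" "$BASE/t2.txt~"

# Keep a directory holding only the real data files, and pass it to the
# job as args[0] instead of the mixed directory.
mkdir -p "$BASE/indata"
mv "$BASE"/t[123].txt "$BASE/indata/"

ls "$BASE/indata"    # only t1.txt, t2.txt, t3.txt end up here
```

The glob `t[123].txt` deliberately excludes the `t2.txt~` backup, which would otherwise be picked up as a 12-byte input split again.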

Senthil Kumar

Apr 25, 2014, 1:43:37 PM
to chenn...@googlegroups.com
Piyush

Can you send me the code? I could take the same from the web, but I need to look at your code with your changes.

Thanks
Senthil

piyush pawar

Apr 26, 2014, 11:07:23 AM
to chenn...@googlegroups.com
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.idryman.combinefiles.CFInputFormat;
import org.idryman.combinefiles.FileLineWritable;


public class TestMain extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    Configuration conf = getConf();
    Job job = new Job(conf);
    job.setJobName("CombineFile");
    job.setJarByClass(TestMain.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    job.setInputFormatClass(CFInputFormat.class);
    job.setMapperClass(TestMapper.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setReducerClass(IntSumReducer.class);
    job.setNumReduceTasks(13);
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.out.println("Length: " + args.length);
    // waitForCompletion submits the job itself; no separate submit() needed.
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static class TestMapper extends Mapper<FileLineWritable, Text, Text, IntWritable> {
    private Text txt = new Text();
    private IntWritable count = new IntWritable(1);

    @Override
    public void map(FileLineWritable key, Text val, Context context)
        throws IOException, InterruptedException {
      StringTokenizer st = new StringTokenizer(val.toString());
      while (st.hasMoreTokens()) {
        txt.set(st.nextToken());
        context.write(txt, count);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    int res = ToolRunner.run(new Configuration(), new TestMain(), args);
    System.exit(res);
  }
}

piyush pawar

Apr 26, 2014, 11:09:44 AM
to chenn...@googlegroups.com
CFInputFormat.java


package org.idryman.combinefiles;

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

public class CFInputFormat extends CombineFileInputFormat<FileLineWritable, Text> {

  public CFInputFormat() {
    super();
    setMaxSplitSize(67108864); // 64 MB, the default block size on Hadoop
  }

  @Override
  public RecordReader<FileLineWritable, Text> createRecordReader(InputSplit split,
      TaskAttemptContext context) throws IOException {
    return new CombineFileRecordReader<FileLineWritable, Text>(
        (CombineFileSplit) split, context, CFRecordReader.class);
  }

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false;
  }
}




CFRecordReader.java

package org.idryman.combinefiles;

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;
import org.apache.hadoop.util.LineReader;


public class CFRecordReader extends RecordReader<FileLineWritable, Text> {
  private long startOffset;
  private long end;
  private long pos;
  private FileSystem fs;
  private Path path;
  private FileLineWritable key;
  private Text value;

  private FSDataInputStream fileIn;
  private LineReader reader;

  public CFRecordReader(CombineFileSplit split, TaskAttemptContext context, Integer index)
      throws IOException {
    this.path = split.getPath(index);
    fs = this.path.getFileSystem(context.getConfiguration());
    this.startOffset = split.getOffset(index);
    this.end = startOffset + split.getLength(index);

    fileIn = fs.open(path);
    reader = new LineReader(fileIn);
    this.pos = startOffset;
  }

  @Override
  public void initialize(InputSplit arg0, TaskAttemptContext arg1)
      throws IOException, InterruptedException {
    // Won't be called; the custom constructor
    // CFRecordReader(CombineFileSplit, TaskAttemptContext, Integer) is used instead.
  }

  @Override
  public void close() throws IOException {}

  @Override
  public float getProgress() throws IOException {
    if (startOffset == end) {
      return 0;
    }
    return Math.min(1.0f, (pos - startOffset) / (float) (end - startOffset));
  }

  @Override
  public FileLineWritable getCurrentKey() throws IOException, InterruptedException {
    return key;
  }

  @Override
  public Text getCurrentValue() throws IOException, InterruptedException {
    return value;
  }

  @Override
  public boolean nextKeyValue() throws IOException {
    if (key == null) {
      key = new FileLineWritable();
      key.fileName = path.getName();
    }
    key.offset = pos;
    if (value == null) {
      value = new Text();
    }
    int newSize = 0;
    if (pos < end) {
      newSize = reader.readLine(value);
      pos += newSize;
    }
    if (newSize == 0) {
      key = null;
      value = null;
      return false;
    } else {
      return true;
    }
  }
}





FileLineWritable.java
package org.idryman.combinefiles;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

public class FileLineWritable implements WritableComparable<FileLineWritable> {
  public long offset;
  public String fileName;

  public void readFields(DataInput in) throws IOException {
    this.offset = in.readLong();
    this.fileName = Text.readString(in);
  }

  public void write(DataOutput out) throws IOException {
    out.writeLong(offset);
    Text.writeString(out, fileName);
  }

  public int compareTo(FileLineWritable that) {
    int cmp = this.fileName.compareTo(that.fileName);
    if (cmp != 0) return cmp;
    return (int) Math.signum((double) (this.offset - that.offset));
  }

  @Override
  public int hashCode() { // generated hashCode()
    final int prime = 31;
    int result = 1;
    result = prime * result + ((fileName == null) ? 0 : fileName.hashCode());
    result = prime * result + (int) (offset ^ (offset >>> 32));
    return result;
  }

  @Override
  public boolean equals(Object obj) { // generated equals()
    if (this == obj)
      return true;
    if (obj == null)
      return false;
    if (getClass() != obj.getClass())
      return false;
    FileLineWritable other = (FileLineWritable) obj;
    if (fileName == null) {
      if (other.fileName != null)
        return false;
    } else if (!fileName.equals(other.fileName))
      return false;
    if (offset != other.offset)
      return false;
    return true;
  }
}
piyush pawar

Apr 26, 2014, 11:10:36 AM
to chenn...@googlegroups.com
I'm using Hadoop 1.2.1 on Ubuntu, with Eclipse Kepler as the IDE.
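Running from Eclipse also explains the "No job jar file set" warning in the logs above: setJarByClass can only locate a jar when the class is actually loaded from one, so one option is to export the project as a jar and launch it through the hadoop command. A sketch, with a hypothetical jar name and hypothetical input/output paths:

```shell
# Jar name and paths are placeholders; export the project from Eclipse
# (or build it with ant/maven) before running this.
# The output directory must not already exist, or FileOutputFormat fails.
hadoop jar combinefiles.jar TestMain /home/hduser/tmp/indata /home/hduser/tmp/output
```

When launched this way, the jar is shipped with the job, so the mapper and input-format classes are found even outside the local runner.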