Lab 3 Sample code - running wordcount


Yafang

Nov 28, 2011, 3:39:06 AM
to nthu-201...@googlegroups.com
Since many people have questions about running the attached wordcount example,
here are the steps to compile and run the MapReduce sample code attached to the Hadoop Lab.

[xxx@cloudcomputing01 wordcount]$ cd Sample\ code/MR/wordcount
[xxx@cloudcomputing01 wordcount]$ javac wordcount.java mymapper.java myreducer.java
[xxx@cloudcomputing01 wordcount]$ ls
mymapper.class  myreducer.class  wordcount.class
mymapper.java   myreducer.java   wordcount.java
[xxx@cloudcomputing01 wordcount]$ cd ..
[xxx@cloudcomputing01 MR]$ jar cvf wordcount.jar -C wordcount/ .
added manifest
adding: mymapper.java(in = 657) (out= 326)(deflated 50%)
adding: myreducer.class(in = 1682) (out= 710)(deflated 57%)
adding: wordcount.java(in = 1222) (out= 484)(deflated 60%)
adding: wordcount.class(in = 1722) (out= 937)(deflated 45%)
adding: mymapper.class(in = 1677) (out= 728)(deflated 56%)
adding: myreducer.java(in = 571) (out= 283)(deflated 50%)
[xxx@cloudcomputing01 MR]$ hadoop jar wordcount.jar wordcount input output

Make sure there is a directory "input" and no directory "output" in your HDFS.

Note that our sample code does not declare "package wordcount;" at the top of each Java file,
so you do not need to run "hadoop jar wordcount.jar wordcount.wordcount input output".
If you do want to use a package, please check the correct way to run it.
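For reference, preparing HDFS and handling the package case might look like the following sketch (the local file name input.txt is a placeholder; -rmr is the 0.20-era removal command used elsewhere in this thread):

```shell
# Put an input file into HDFS and make sure no old output directory remains
hadoop fs -mkdir input
hadoop fs -put input.txt input/
hadoop fs -rmr output          # a job fails if "output" already exists

# If you DO uncomment "package wordcount;" in every .java file, compile with
# -d so the classes land in a package-matching directory, then run with the
# fully qualified class name:
javac -d . wordcount.java mymapper.java myreducer.java
jar cvf wordcount.jar wordcount/
hadoop jar wordcount.jar wordcount.wordcount input output
```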

郭舟東 (Zhoudong Guo)

Nov 28, 2011, 9:10:23 AM
to nthu-201...@googlegroups.com
TA, after running my modified wordcount, the output contains no data. What could be the cause? Also, the output file seems to still be in use.
Below is the log from the run. There does not seem to be any error, but it looks as if the output file was never closed, because after I run ls its size shows as 0.
11/11/28 14:05:44 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/11/28 14:05:44 INFO input.FileInputFormat: Total input paths to process : 1
11/11/28 14:05:45 INFO mapred.JobClient: Running job: job_201111231315_0112
11/11/28 14:05:46 INFO mapred.JobClient:  map 0% reduce 0%
11/11/28 14:06:00 INFO mapred.JobClient:  map 100% reduce 0%
11/11/28 14:06:12 INFO mapred.JobClient:  map 100% reduce 66%
11/11/28 14:06:15 INFO mapred.JobClient:  map 100% reduce 100%
11/11/28 14:06:20 INFO mapred.JobClient: Job complete: job_201111231315_0112
11/11/28 14:06:20 INFO mapred.JobClient: Counters: 29
11/11/28 14:06:20 INFO mapred.JobClient:   Job Counters
11/11/28 14:06:20 INFO mapred.JobClient:     Launched reduce tasks=3
11/11/28 14:06:20 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=11843
11/11/28 14:06:20 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
11/11/28 14:06:20 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
11/11/28 14:06:20 INFO mapred.JobClient:     Rack-local map tasks=1
11/11/28 14:06:20 INFO mapred.JobClient:     Launched map tasks=1
11/11/28 14:06:20 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=30067
11/11/28 14:06:20 INFO mapred.JobClient:   File Output Format Counters
11/11/28 14:06:20 INFO mapred.JobClient:     Bytes Written=636
11/11/28 14:06:20 INFO mapred.JobClient:   FileSystemCounters
11/11/28 14:06:20 INFO mapred.JobClient:     FILE_BYTES_READ=10454
11/11/28 14:06:20 INFO mapred.JobClient:     HDFS_BYTES_READ=1916221
11/11/28 14:06:20 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=91979
11/11/28 14:06:20 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=636
11/11/28 14:06:20 INFO mapred.JobClient:   File Input Format Counters
11/11/28 14:06:20 INFO mapred.JobClient:     Bytes Read=1916105
11/11/28 14:06:20 INFO mapred.JobClient:   Map-Reduce Framework
11/11/28 14:06:20 INFO mapred.JobClient:     Map output materialized bytes=764
11/11/28 14:06:20 INFO mapred.JobClient:     Map input records=29626
11/11/28 14:06:20 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/11/28 14:06:20 INFO mapred.JobClient:     Spilled Records=770
11/11/28 14:06:20 INFO mapred.JobClient:     Map output bytes=9496350
11/11/28 14:06:20 INFO mapred.JobClient:     CPU time spent (ms)=10470
11/11/28 14:06:20 INFO mapred.JobClient:     Total committed heap usage (bytes)=795672576
11/11/28 14:06:20 INFO mapred.JobClient:     Combine input records=1583308
11/11/28 14:06:20 INFO mapred.JobClient:     SPLIT_RAW_BYTES=116
11/11/28 14:06:20 INFO mapred.JobClient:     Reduce input records=93
11/11/28 14:06:20 INFO mapred.JobClient:     Reduce input groups=93
11/11/28 14:06:20 INFO mapred.JobClient:     Combine output records=677
11/11/28 14:06:20 INFO mapred.JobClient:     Physical memory (bytes) snapshot=609906688
11/11/28 14:06:20 INFO mapred.JobClient:     Reduce output records=93
11/11/28 14:06:20 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=12752052224
11/11/28 14:06:20 INFO mapred.JobClient:     Map output records=1582724

周宗毅

Nov 28, 2011, 9:19:33 AM
to nthu-201...@googlegroups.com
Hi

Based on the log you posted,

"Bytes Written=636" shows that something was indeed written.

Please take a look at the files under the "output" directory

(output itself is a directory)

They should be the part-r-0000X files
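For reference, reading those files from HDFS might look like this (paths follow the thread's example; -getmerge concatenates all parts into one local file):

```shell
hadoop fs -ls output                   # lists part-r-0000X, one file per reducer
hadoop fs -cat output/part-r-00000     # print one reducer's word counts
hadoop fs -getmerge output merged.txt  # combine all parts into a local file
```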

TA

郭舟東 (Zhoudong Guo)

Nov 28, 2011, 10:57:06 AM
to nthu-201...@googlegroups.com
This output is not a directory; I wrote the output to a TXT file. But from the listing below, output1.txt has size 0, and it cannot be deleted with hadoop fs -rm, only with -rmr. It feels like it was never closed.
$ hadoop fs -ls
Found 6 items
-rw-r--r--   1 supergroup    1916105 2011-11-28 13:51 /user/m100062467/1.txt
-rw-r--r--   1 supergroup         23 2011-11-28 02:49 /user/m100062467/100062467
-rw-r--r--   1 supergroup       3863 2011-11-28 07:15 /user/m100062467/hdfs.jar
drwxr-xr-x   - supergroup          0 2011-11-28 13:49 /user/m100062467/mr
-rw-r--r--   1 supergroup       3113 2011-11-28 13:51 /user/m100062467/mr.jar
drwx------   - supergroup          0 2011-11-28 14:06 /user/m100062467/output1.txt

周宗毅

Nov 28, 2011, 11:00:39 AM
to nthu-201...@googlegroups.com
Hi

It really is a directory

The leading d in drwx------ indicates a directory, which is why you can only delete it with rmr
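As a minimal, runnable illustration of reading that mode string: the first character of an ls -l style listing encodes the entry type.

```shell
# First character of a POSIX ls -l mode string: 'd' = directory, '-' = regular file
mode='drwx------'
case "$mode" in
  d*) echo "directory" ;;
  -*) echo "regular file" ;;
esac
# prints: directory
```

Run it with mode='-rw-r--r--' and it prints "regular file" instead.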

Try this command: hadoop fs -ls /user/m100062467/output1.txt

It will show all the files under that directory

Your output is in there :)

TA

郭舟東 (Zhoudong Guo)

Nov 28, 2011, 11:29:29 AM
to nthu-201...@googlegroups.com
I see, thank you TA.

鴻昌 吳

Dec 6, 2011, 10:20:32 PM
to nthu-201...@googlegroups.com
Hi TA,

I typed the commands exactly as shown,

but it still does not seem to work.


Here is what I did:
===================================================
[m100062530@cloudcomputing01 wordcount]$ ls
mymapper.java  myreducer.java  wordcount.java
[m100062530@cloudcomputing01 wordcount]$ javac wordcount.java mymapper.java myreducer.java
[m100062530@cloudcomputing01 wordcount]$ ls
mymapper.class  mymapper.java  myreducer.class  myreducer.java  wordcount.class  wordcount.java
[m100062530@cloudcomputing01 wordcount]$ cd ..
[m100062530@cloudcomputing01 mr]$ ls
bMakefile  mymapper.class  myreducer.class  wordcount        wordcount.java
Makefile   mymapper.java   myreducer.java   wordcount.class
[m100062530@cloudcomputing01 mr]$ jar cvf wordcount.jar -C wordcount/ .

added manifest
adding: mymapper.java(in = 657) (out= 326)(deflated 50%)
adding: myreducer.class(in = 1682) (out= 710)(deflated 57%)
adding: wordcount.java(in = 1222) (out= 484)(deflated 60%)
adding: wordcount.class(in = 1722) (out= 937)(deflated 45%)
adding: mymapper.class(in = 1677) (out= 728)(deflated 56%)
adding: myreducer.java(in = 571) (out= 283)(deflated 50%)
[m100062530@cloudcomputing01 mr]$ hadoop jar wordcount.jar wordcount input output3
11/12/07 03:11:50 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
11/12/07 03:11:50 INFO input.FileInputFormat: Total input paths to process : 1
11/12/07 03:11:50 INFO mapred.JobClient: Running job: job_201112060944_0168
11/12/07 03:11:51 INFO mapred.JobClient:  map 0% reduce 0%
11/12/07 03:12:03 INFO mapred.JobClient: Task Id : attempt_201112060944_0168_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: mymapper
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:866)
        at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: mymapper
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:819)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:864)
        ... 8 more

11/12/07 03:12:11 INFO mapred.JobClient: Task Id : attempt_201112060944_0168_m_000000_1, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: mymapper
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:866)
        at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: mymapper
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:819)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:864)
        ... 8 more

11/12/07 03:12:17 INFO mapred.JobClient: Task Id : attempt_201112060944_0168_m_000000_2, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: mymapper
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:866)
        at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: mymapper
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:819)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:864)
        ... 8 more

11/12/07 03:12:29 INFO mapred.JobClient: Job complete: job_201112060944_0168
11/12/07 03:12:29 INFO mapred.JobClient: Counters: 8
11/12/07 03:12:29 INFO mapred.JobClient:   Job Counters
11/12/07 03:12:29 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=24418
11/12/07 03:12:29 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
11/12/07 03:12:29 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
11/12/07 03:12:29 INFO mapred.JobClient:     Rack-local map tasks=3
11/12/07 03:12:29 INFO mapred.JobClient:     Launched map tasks=4
11/12/07 03:12:29 INFO mapred.JobClient:     Data-local map tasks=1
11/12/07 03:12:29 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
11/12/07 03:12:29 INFO mapred.JobClient:     Failed map tasks=1

周宗毅

Dec 6, 2011, 11:07:02 PM
to nthu-201...@googlegroups.com
Hi,

Is this the sample code?

Try whether the unmodified sample code hits the same problem,

because the error message is "java.lang.ClassNotFoundException: mymapper"

It looks like your mymapper cannot be found..

Please check that the class's path settings, names, and so on are correct
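One quick check along those lines (jar tf simply lists the archive contents; since the sample code has no package, mymapper.class must appear at the jar root, not under any subdirectory):

```shell
jar tf wordcount.jar                     # list everything inside the jar
jar tf wordcount.jar | grep mymapper     # expect a top-level mymapper.class
```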

鴻昌 吳

Dec 7, 2011, 12:32:15 AM
to nthu-201...@googlegroups.com
Hi TA,

The code is all sample code; I did not change anything (the package line is commented out).

I then tried the following operation
(adding the exact location of the class files to CLASSPATH).
Result: same error as before :(....
=============================
[m100062530@cloudcomputing01 mr]$ ls
bMakefile  mymapper.class  myreducer.class  wordcount        wordcount.jar
Makefile   mymapper.java   myreducer.java   wordcount.class  wordcount.java
[m100062530@cloudcomputing01 mr]$ pwd
/home/student/m100062530/mr
[m100062530@cloudcomputing01 mr]$ export CLASSPATH=/home/student/m100062530/mr;
[m100062530@cloudcomputing01 mr]$ hadoop jar wordcount.jar wordcount input output7
11/12/07 05:19:57 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
11/12/07 05:19:57 INFO input.FileInputFormat: Total input paths to process : 1
11/12/07 05:19:58 INFO mapred.JobClient: Running job: job_201112060944_0182
11/12/07 05:19:59 INFO mapred.JobClient:  map 0% reduce 0%
11/12/07 05:20:11 INFO mapred.JobClient: Task Id : attempt_201112060944_0182_m_000000_0, Status : FAILED

java.lang.RuntimeException: java.lang.ClassNotFoundException: mymapper
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:866)
        at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: mymapper
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:819)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:864)
        ... 8 more

===============================================================

Below is the sample code I am currently using:

==============================================================
//package wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class mymapper extends Mapper<Object, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);
    }
  }
}

=================================================================
//package wordcount;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class myreducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}


===============================================================
//package wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;


public class wordcount {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args)
        .getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(wordcount.class);
    job.setMapperClass(mymapper.class);
    job.setCombinerClass(myreducer.class);
    job.setReducerClass(myreducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}


============================================================



周宗毅

Dec 7, 2011, 1:04:50 AM
to nthu-201...@googlegroups.com
Hi,

I fetched your files and recompiled them without any problem :)

My guess is that you forgot to remove the previously built wordcount.jar...

Otherwise re-upload a fresh copy... because I have also confirmed that the sample code is fine ~"~
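A clean rebuild that rules out a stale jar could look like this sketch (run in the directory holding the three .java files; output3 matches the earlier run):

```shell
rm -f *.class wordcount.jar        # drop stale build artifacts
javac wordcount.java mymapper.java myreducer.java
jar cvf wordcount.jar *.class      # pack only the freshly compiled classes
hadoop jar wordcount.jar wordcount input output3
```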

TA

鴻昌 吳

Dec 7, 2011, 4:49:57 AM
to nthu-201...@googlegroups.com
TA, I found the problem.
Thanks for your help~

To share the root cause of my error:

I had moved the compiled class files to a different absolute path (e.g. into another folder),
and packing them into a jar afterwards caused the problem.

If you do not mind, could you clarify something for me?
Why can the compiled class files not be moved elsewhere?
Is it related to the contents of the XXX.java code?

周宗毅

Dec 7, 2011, 5:12:26 AM
to nthu-201...@googlegroups.com
Hi,

You would need to ask someone more familiar with Java about this :)

My guess is that moving the class files after compiling changed the original relationships between the classes?
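For what it's worth, the usual explanation is plain Java class loading rather than anything Hadoop-specific: the JVM resolves a class by appending its package-derived path to each classpath (or jar-root) entry, so a .class file only loads from a directory layout that matches its package declaration. Moving the files breaks that mapping. A sketch of both layouts:

```shell
# No package declaration: classes must sit at the jar root
jar cvf wordcount.jar mymapper.class myreducer.class wordcount.class

# With "package wordcount;": let javac -d build the matching layout,
# then pack from the build root so the wordcount/ directory is kept
javac -d build *.java
jar cvf wordcount.jar -C build .
```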

黃詩閔

Dec 8, 2011, 8:03:21 AM
to nthu-201...@googlegroups.com
Sorry, I would like to borrow this thread to ask a wordcount-related question.
TA,

When I run javac wordcount.java mymapper.java myreducer.java,
it reports this error: class file for org.apache.commons.cli.Options not found
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

I then followed a fix I found online and typed javac -classpath hadoop-0.20.2-core.jar:lib/commons-cli-1.2.jar wordcount.java, but that did not work either;
it actually produced even more errors.


So I would like to ask everyone here. Thanks.

周宗毅

Dec 8, 2011, 8:24:17 AM
to nthu-201...@googlegroups.com
Hi,

Where are you running this?

If it is on your own machine, please check that these libraries match your Hadoop version~

The server machines should all have them correctly added already :)

黃詩閔

Dec 8, 2011, 9:46:52 AM
to nthu-201...@googlegroups.com
I am doing this on my own machine.
May I ask how the libraries were correctly added on the server machines?

Thank you, TA

周宗毅

Dec 8, 2011, 9:49:04 AM
to nthu-201...@googlegroups.com
Hi,

You should be able to compile directly on the server, right?

javac *.java

Does that give any errors?

劉郁蘭

Dec 10, 2011, 9:40:32 AM
to nthu-201...@googlegroups.com
I also ran into this problem doing it on my own machine.

The fix is indeed the command you found:
javac -classpath hadoop-0.20.2-core.jar:lib/commons-cli-1.2.jar wordcount.java

You just need to check that the Hadoop path is correct and that the core jar filename matches the version you installed.

The command I use on my machine is:
javac -classpath /opt/hadoop/hadoop-core-0.20.203.0.jar:/opt/hadoop/lib/commons-cli-1.2.jar wordcount.java

Here /opt/hadoop is the Hadoop install directory, hadoop-core-0.20.203.0.jar is the Hadoop core jar (run ls on the install directory to see the exact filename for your version), and wordcount.java is the file to compile.
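The same fix, applied to all three source files at once (the paths are examples from this thread; substitute your own install locations):

```shell
HADOOP_HOME=/opt/hadoop    # adjust to your install
javac -classpath "$HADOOP_HOME/hadoop-core-0.20.203.0.jar:$HADOOP_HOME/lib/commons-cli-1.2.jar" \
      wordcount.java mymapper.java myreducer.java
```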
