Hi Julia,
Thanks for reaching out. Here is an explanation of the two errors.
Explanation for bash: bin/hadoop: No such file or directory
Linux reports "No such file or directory" when it cannot find the script we are asking it to run. When we invoke bin/hadoop, we must be in the Hadoop installation directory (the one we marked as $HADOOP_INSTALL) at the time we run the command. By specifying bin/hadoop, we are asking Linux to execute the script "hadoop" that lives in the bin folder of the Hadoop installation. Since this is a relative path pointing into bin, it only resolves when our current directory is the folder directly above bin.
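As a quick illustration (using a throwaway dummy script rather than a real Hadoop install, so the paths here are hypothetical), the relative path only resolves from the parent of bin:

```shell
# Dummy layout standing in for a Hadoop install (for illustration only)
mkdir -p hadoop-demo/bin
printf '#!/bin/sh\necho ok\n' > hadoop-demo/bin/hadoop
chmod +x hadoop-demo/bin/hadoop

# From inside the directory that contains bin/, the relative path resolves:
(cd hadoop-demo && bin/hadoop)    # prints "ok"

# From anywhere else, the same relative path fails to resolve:
bin/hadoop 2>/dev/null || echo "bash: bin/hadoop: No such file or directory"
```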
However, this complexity is resolved once we add the bin folder to the search path, which we do by adding the following two lines to the profile script:
HADOOP_INSTALL=/home/{user_name}/hadoop/hadoop-1.2.1
PATH=$PATH:$HADOOP_INSTALL/bin
By doing this, the scripts in the hadoop/bin folder become accessible from any working directory, because bin is now included in the search path.
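A minimal sketch of the effect, again using a dummy script and a made-up install location (in practice the two lines above go in your profile with your real path):

```shell
# Hypothetical install location for demonstration; substitute your real path
mkdir -p "$HOME/hadoop-demo/bin"
printf '#!/bin/sh\necho hadoop-demo\n' > "$HOME/hadoop-demo/bin/hadoop"
chmod +x "$HOME/hadoop-demo/bin/hadoop"

# The same two lines as in the profile script, with the demo path
HADOOP_INSTALL="$HOME/hadoop-demo"
PATH=$PATH:$HADOOP_INSTALL/bin

# The script is now found from any working directory:
cd /tmp
hadoop    # prints "hadoop-demo"
```

Note that changes made to the profile script only take effect in new shells (or after sourcing the file), which is why the demo sets the variables directly.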
Explanation for Not a valid JAR:
Again, the concept is the same: at this point Linux cannot find the jar file from the current directory. So the idea is to be in the directory where the jar file is present when you run the command. First, get into the Hadoop directory with
cd $HADOOP_INSTALL (if this doesn't work, try explicitly giving the hadoop path)
and then run the command
mkdir input (because the next command needs an input directory, which can be empty for now)
and then run the command