SPARK_JAR=./core/target/spark-core-assembly-0.6.0.jar ./run spark.deploy.yarn.Client \
  --jar examples/target/scala-2.9.2/spark-examples_2.9.2-0.6.0.jar \
  --class spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 3 \
  --worker-cores 1
13/08/08 05:44:49 INFO yarn.Client: Connecting to ResourceManager at namemaster/10.0.0.106:9080
13/08/08 05:44:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/08/08 05:44:50 INFO yarn.Client: Got Cluster metric info from ASM, numNodeManagers=3
13/08/08 05:44:50 INFO yarn.Client: Queue info .. queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=10, queueChildQueueCount=0
13/08/08 05:44:50 INFO yarn.Client: Requesting new Application
13/08/08 05:44:50 INFO yarn.Client: Got new ApplicationId: application_1375929175938_0011
13/08/08 05:44:50 INFO yarn.Client: Max mem capabililty of resources in this cluster 8192
13/08/08 05:44:50 INFO yarn.Client: Setting up application submission context for ASM
13/08/08 05:44:50 INFO yarn.Client: Preparing Local resources
13/08/08 05:44:50 INFO yarn.Client: Uploading core/target/spark-core-assembly-0.6.0.jar to hdfs://namemaster:9000/user/root/spark/11spark.jar
13/08/08 05:44:56 INFO yarn.Client: Uploading examples/target/scala-2.9.2/spark-examples_2.9.2-0.6.0.jar to hdfs://namemaster:9000/user/root/spark/11app.jar
13/08/08 05:44:56 INFO yarn.Client: Setting up the launch environment
13/08/08 05:44:56 INFO yarn.Client: Setting up container launch context
13/08/08 05:44:56 INFO yarn.Client: Command for the ApplicationMaster: java -server -Xmx640m spark.deploy.yarn.ApplicationMaster --class spark.examples.SparkPi --jar examples/target/scala-2.9.2/spark-examples_2.9.2-0.6.0.jar --args 'yarn-standalone' --worker-memory 1024 --worker-cores 1 --num-workers 3 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
13/08/08 05:44:56 INFO yarn.Client: Submitting application to ASM
13/08/08 05:44:57 INFO yarn.Client: Application report from ASM:
application identifier: application_1375929175938_0011
appId: 11
clientToken: null
appDiagnostics:
appMasterHost: N/A
appQueue: default
appMasterRpcPort: 0
appStartTime: 1375940696450
yarnAppState: ACCEPTED
distributedFinalState: UNDEFINED
appTrackingUrl: namemaster:8088/proxy/application_1375929175938_0011/
appUser: root
13/08/08 05:44:58 INFO yarn.Client: Application report from ASM:
application identifier: application_1375929175938_0011
appId: 11
clientToken: null
appDiagnostics: Application application_1375929175938_0011 failed 1 times due to AM Container for appattempt_1375929175938_0011_000001 exited with exitCode: 127 due to: .Failing this attempt.. Failing the application.
appMasterHost: N/A
appQueue: default
appMasterRpcPort: 0
appStartTime: 1375940696450
yarnAppState: FAILED
distributedFinalState: FAILED
appTrackingUrl: namemaster:8088/proxy/application_1375929175938_0011/
appUser: root
I checked the YARN logs and found "/bin/bash: java: command not found" in the stderr file.
However, I have already set JAVA_HOME in .bashrc and hadoop-env.sh (and even in spark-env.sh), and the MapReduce examples run with no problems.
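For reference, JAVA_HOME is exported along the following lines in each of those files (the JDK path here is only an example of my layout, adjust for the actual install location):

    # same export in ~/.bashrc, $HADOOP_HOME/etc/hadoop/hadoop-env.sh and conf/spark-env.sh
    export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk   # example path
    export PATH=$JAVA_HOME/bin:$PATH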