final String stringType = "string";
Tap fileSink = createProductConfigTap(SinkMode.REPLACE);
List<Tap> sources = new ArrayList<>();
HiveTableDescriptor layoutDesc = new HiveTableDescriptor("ele_defn",
new String[]{"ele_defn_id", "ele_nme", "ele_typ", "ele_prnt_id", "ele_path"},
new String[]{stringType, stringType, stringType, stringType, stringType}
);
sources.add(new HiveTap(layoutDesc, layoutDesc.toScheme(), SinkMode.KEEP, true));
String[] query = new String[]{"SELECT edf.ele_nme, edf.ele_defn_id FROM ele_defn edf"};
Flow extractPC = new HiveFlow("PC", query, sources, fileSink);
new CascadeConnector().connect(extractPC).complete();
--
You received this message because you are subscribed to the Google Groups "cascading-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cascading-use...@googlegroups.com.
To post to this group, send email to cascadi...@googlegroups.com.
Visit this group at http://groups.google.com/group/cascading-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/cascading-user/4a7ee4d5-1d14-4bea-947a-aba6ec3612f1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
OK, makes sense. Is the defined sink also used to create the resulting Hive table, or is that done by Hive itself?
I'm able to get it working with a simple select from a single table; however, as soon as I try a join I am back to the same failure as before:

15/08/05 16:52:49 INFO mr.MapredLocalTask: Executing: /usr/lib/hadoop/bin/hadoop jar /jobs/productConfig/lib/productConfig-1.0.26-SNAPSHOT.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/tmp/vagrant/hive_2015-08-05_16-52-48_422_7625493396239307589-1/-local-10005/plan.xml -jobconffile file:/tmp/vagrant/hive_2015-08-05_16-52-48_422_7625493396239307589-1/-local-10006/jobconf.xml
Exception in thread "main" java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:90)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:64)
at org.apache.hadoop.util.RunJar.main(RunJar.java:188)
Execution failed with exit status: 1
15/08/05 16:52:55 ERROR exec.Task: Execution failed with exit status: 1
Obtaining error information
15/08/05 16:52:55 ERROR exec.Task: Obtaining error information
Task failed!
Task ID: Stage-5
Logs:
15/08/05 16:52:55 ERROR exec.Task: Task failed!
Task ID: Stage-5
Logs:
...............
Job Submission failed with exception 'java.io.FileNotFoundException(File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist)'
15/08/05 16:53:04 ERROR exec.Task: Job Submission failed with exception 'java.io.FileNotFoundException(File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist)'
java.io.FileNotFoundException: File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist
............
cascading.CascadingException: hive error 'FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask' while running query CREATE TABLE otmu_product_config as SELECT edf.ele_nme, edf.ele_defn_id, pcd.prod_cnfg_dtl_desc FROM ele_defn edf, prod_cnfg_dtl pcd
...........
In this use case, Hive will do that. If you use a HiveTap for writing in a Cascading flow, we will handle it (including registering it in the metastore).
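To illustrate that second case, here is a minimal sketch of writing through a HiveTap from a plain Cascading flow (no HiveFlow), so that cascading-hive creates the table and registers it in the metastore. This is not from the thread: "someSourceTap" is a hypothetical placeholder, package names are assumed from the cascading-hive project, and only the HiveTableDescriptor/HiveTap usage mirrors the snippets above.

```java
import cascading.flow.Flow;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.pipe.Pipe;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.hive.HiveTableDescriptor;
import cascading.tap.hive.HiveTap;

// Describe the target table, as in the snippets above.
HiveTableDescriptor outDesc = new HiveTableDescriptor("otmu_product_config",
    new String[]{"ele_nme", "ele_defn_id", "prod_cnfg_dtl_desc"},
    new String[]{"string", "string", "string"});

// Same constructor shape as the sinks in this thread.
Tap sink = new HiveTap(outDesc, outDesc.toScheme(), SinkMode.REPLACE, false);

Pipe copy = new Pipe("copy"); // pass tuples through unchanged
Flow flow = new HadoopFlowConnector().connect(someSourceTap, sink, copy);
flow.complete(); // on success the table should be visible in the metastore
```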
You seem to run out of local disk space.
Can you walk me through how you do this vs. the way you do it in Hive itself?
HiveTableDescriptor confDesc = new HiveTableDescriptor("otmu_product_config",
new String[]{"ele_nme", "ele_defn_id", "prod_cnfg_dtl_desc"},
new String[]{stringType, stringType, stringType}
);
Tap sink = new HiveTap(confDesc, confDesc.toScheme(), SinkMode.REPLACE, false);
List<Tap> sources = createProdConfigSources();
String[] query = new String[]{"DROP TABLE otmu_product_config ", "CREATE TABLE otmu_product_config as " +
"SELECT edf.ele_nme, edf.ele_defn_id, pcd.prod_cnfg_dtl_desc FROM ele_defn edf, prod_cnfg_dtl pcd"};
Flow extractPC = new HiveFlow("Extract Otmu Product Config", query, sources, sink);
new CascadeConnector().connect(extractPC).complete();
public static List<Tap> createProdConfigSources() {
List<Tap> sources = new ArrayList<>();
final String stringType = "string";
final String bigint = "bigint";
HiveTableDescriptor layoutDesc = new HiveTableDescriptor("ele_defn",
new String[]{"ele_defn_id", "ele_nme", "ele_typ", "ele_prnt_id", "ele_path"},
new String[]{stringType, stringType, stringType, stringType, stringType});
sources.add(new HiveTap(layoutDesc, layoutDesc.toScheme(), SinkMode.KEEP, true));
HiveTableDescriptor cnfg_dtl_desc = new HiveTableDescriptor("prod_cnfg_dtl",
new String[]{"prod_cnfg_dtl_id", "prod_cnfg_id", "prod_cnfg_dtl_cd", "prod_cnfg_dtl_desc", "prod_cnfg_dtl_val"},
new String[]{bigint, bigint, bigint, stringType, stringType});
sources.add(new HiveTap(cnfg_dtl_desc, cnfg_dtl_desc.toScheme(), SinkMode.KEEP, true));
return sources;
}
Out of interest I opened another shell and repeatedly ran df -h while the Cascading job ran, and it does indeed appear that I am somehow running out of local space...
props.setProperty("hive.exec.mode.local.auto", "false");
props.setProperty("mapred.job.tracker", "cluster:8088");
in the CascadeConnector, but I still see that it's trying to run as a local task (mr.MapredLocalTask).
Has anyone successfully run similar joins using cascading-hive?
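A hedged suggestion on the local-task question (not verified in this thread): hive.exec.mode.local.auto controls whether Hive runs the whole job locally, but the mr.MapredLocalTask seen here is Hive's automatic map-join optimization preparing a hash table on the client, which runs locally regardless of that setting. If cascading-hive forwards session properties the way the snippets above assume, disabling automatic map-join conversion should make Hive plan a plain reduce-side join and skip the local task entirely:

```java
// Assumption: this property reaches the Hive session the same way the
// properties above do. hive.auto.convert.join is a standard Hive setting;
// setting it to false disables the automatic map-join that spawns the local
// MapredLocalTask and the HashTable-Stage-* scratch files seen in the error.
props.setProperty("hive.auto.convert.join", "false");
```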
OK, it doesn't look like that's the real problem. I switched to run against a partition with 12GB free and the job still dies:
ERROR mr.MapredLocalTask: Execution failed with exit status: 137
It seems like this is memory related, but why is the job running for so long under Cascading (5+ minutes, compared to a couple of seconds in the Hive shell)?
I've tried setting
props.setProperty("hive.exec.mode.local.auto", "false");
props.setProperty("mapred.job.tracker", "cluster:8088");
in the CascadeConnector, but I still see that it's trying to run as a local task (mr.MapredLocalTask).
Has anyone successfully run similar joins using cascading-hive?
On Wednesday, August 5, 2015 at 5:59:20 PM UTC+1, PaulON wrote:
Out of interest I opened another shell and repeatedly ran df -h while the Cascading job ran, and it does indeed appear that I am somehow running out of local space... Any ideas why a small query like this would need 2GB of disk space? (And why the same is not required when run in the Hive shell?)
Also, should this not be using HDFS rather than local disk? I don't think we can guarantee that we will always have tens of GBs of free disk on the edge node of our cluster...
Paul
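To see where the local space goes, one can watch the partition backing Hive's local scratch directory while the flow runs. This is a generic sketch: /tmp matches the file:/tmp/vagrant/hive_* paths in the logs above, and hive.exec.local.scratchdir is the standard Hive property for relocating that directory.

```shell
#!/bin/sh
# Sketch: report free space on the partition Hive's local task writes to.
# The failing plan/HashTable files live under /tmp/vagrant/hive_*, so /tmp
# is the partition to watch; adjust SCRATCH if you relocate it via the
# hive.exec.local.scratchdir property.
SCRATCH="${SCRATCH:-/tmp}"
df -h "$SCRATCH"
# Largest Hive scratch dirs so far, if any exist yet:
du -sh "$SCRATCH"/hive_* 2>/dev/null | sort -h | tail -n 5
```

Running this in a loop (for example under watch -n 5) during the flow shows whether the local hash-join files are what consumes the 2GB.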
On Wed, Aug 5, 2015 at 10:07 PM, PaulON <pone...@gmail.com> wrote:
OK, it doesn't look like that's the real problem. I switched to run against a partition with 12GB free and the job still dies.

Which Hive and Hadoop version is this? I'd like to replicate the problem over here.
MapredLocalTask is not a local MapReduce job. It is a task that Hive runs locally before it kicks off a mapred job; Hive uses it for preparing hash-join data and similar things: https://hive.apache.org/javadocs/r0.12.0/api/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.html
2016-06-16 12:23:48,569 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 5 2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 5 submitted by user jmill383 2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1466017117730_0005 2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 IP=127.0.0.1 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1466017117730_0005 2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from NEW to NEW_SAVING 2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1466017117730_0005 2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from NEW_SAVING to SUBMITTED 2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1466017117730_0005 user: jmill383 leaf-queue of parent: root #applications: 1 2016-06-16 12:23:49,043 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1466017117730_0005 from user: jmill383, in queue: default 2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from SUBMITTED to ACCEPTED 2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0005_000001 2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
appattempt_1466017117730_0005_000001 State change from NEW to SUBMITTED 2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0005 from user: jmill383 activated in queue: default 2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0005 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@47d16aaa, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1 2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0005_000001 to scheduler from user jmill383 in queue default 2016-06-16 12:23:49,048 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from SUBMITTED to SCHEDULED 2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_01_000001 Container Transitioned from NEW to ALLOCATED 2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0005 CONTAINERID=container_1466017117730_0005_01_000001 2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0005_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation 2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application 
attempt=appattempt_1466017117730_0005_000001 container=Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8> 2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8> 2016-06-16 12:23:49,406 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0005_01_000001 2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_01_000001 Container Transitioned from ALLOCATED to ACQUIRED 2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0005_000001 2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0005 AttemptId: appattempt_1466017117730_0005_000001 MasterContainer: Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, 
Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from SCHEDULED to ALLOCATED_SAVING 2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from ALLOCATED_SAVING to ALLOCATED 2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0005_000001 2016-06-16 12:23:49,408 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000001 2016-06-16 12:23:49,409 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0005_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 2016-06-16 12:23:49,409 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0005_000001 2016-06-16 12:23:49,409 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0005_000001 2016-06-16 12:23:49,415 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: 
container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000001 2016-06-16 12:23:49,415 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from ALLOCATED to LAUNCHED 2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_01_000001 Container Transitioned from ACQUIRED to COMPLETED 2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0005_01_000001 in state: COMPLETED event:FINISHED 2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0005 CONTAINERID=container_1466017117730_0005_01_000001 2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0005_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true 2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0005_000001 with final state: FAILED, and exit status: 1 2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0> 2016-06-16 12:23:50,406 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from LAUNCHED to FINAL_SAVING 2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8> 2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8> 2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0005_000001 2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0005_000001 released container container_1466017117730_0005_01_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED 2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0005_000001 2016-06-16 12:23:50,407 
INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from FINAL_SAVING to FAILED 2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2 2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0005_000001 is done. finalState=FAILED 2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0005_000002 2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0005 requests cleared 2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from NEW to SUBMITTED 2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0005 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0 2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0005 from user: jmill383 activated in queue: default 2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0005 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@4f769ece, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1 2016-06-16 12:23:50,408 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0005_000002 to scheduler from user jmill383 in queue default 2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from SUBMITTED to SCHEDULED 2016-06-16 12:23:51,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_02_000001 Container Transitioned from NEW to ALLOCATED 2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0005 CONTAINERID=container_1466017117730_0005_02_000001 2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0005_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation 2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0005_000002 container=Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8> 2016-06-16 12:23:51,407 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8> 2016-06-16 12:23:51,408 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0005_02_000001 2016-06-16 12:23:51,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_02_000001 Container Transitioned from ALLOCATED to ACQUIRED 2016-06-16 12:23:51,408 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0005_000002 2016-06-16 12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0005 AttemptId: appattempt_1466017117730_0005_000002 MasterContainer: Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 2016-06-16 12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from SCHEDULED to ALLOCATED_SAVING 2016-06-16 12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from ALLOCATED_SAVING to ALLOCATED 2016-06-16 
12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0005_000002 2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000002 2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0005_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0005_000002 2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0005_000002 2016-06-16 12:23:51,415 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000002 2016-06-16 12:23:51,416 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from ALLOCATED to LAUNCHED 2016-06-16 12:23:52,408 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_02_000001 Container Transitioned from ACQUIRED to COMPLETED 2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0005_02_000001 in state: COMPLETED event:FINISHED 2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0005 CONTAINERID=container_1466017117730_0005_02_000001 2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0005_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true 2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0005_000002 with final state: FAILED, and exit status: 1 2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0> 2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from LAUNCHED to FINAL_SAVING 2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 
queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8> 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8> 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0005_000002 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0005_000002 released container container_1466017117730_0005_02_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0005_000002 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from FINAL_SAVING to FAILED 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. 
The max attempts is 2 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1466017117730_0005 with final state: FAILED 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from ACCEPTED to FINAL_SAVING 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1466017117730_0005 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0005_000002 is done. finalState=FAILED 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1466017117730_0005 failed 2 times due to AM Container for appattempt_1466017117730_0005_000002 exited with exitCode: 1 For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/Then, click on links to logs of each attempt. Diagnostics: Exception from container-launch. 
Container id: container_1466017117730_0005_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0005 requests cleared 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from FINAL_SAVING to FAILED 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0005 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0 2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1466017117730_0005 user: jmill383 leaf-queue of parent: root #applications: 0 2016-06-16 12:23:52,409 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1466017117730_0005 failed 2 times due to AM Container for appattempt_1466017117730_0005_000002 exited with exitCode: 1 For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/Then, click on links to logs of each attempt. Diagnostics: Exception from container-launch. 
Container id: container_1466017117730_0005_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application. APPID=application_1466017117730_0005
2016-06-16 12:23:52,410 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1466017117730_0005,name=select distinct(warctype) from commoncrawl18(Stage-1),user=jmill383,queue=default,state=FAILED,trackingUrl=http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0005,appMasterHost=N/A,startTime=1466094229042,finishTime=1466094232409,finalStatus=FAILED
2016-06-16 12:23:53,205 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 IP=127.0.0.1 OPERATION=Kill Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1466017117730_0005
2016-06-16 12:23:53,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed..
I suspect a Java heap issue, but I don't see any errors in my logs pointing to a memory problem, just an exit code 1.
Please advise if you can assist.
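For what it's worth, the launch command in the log above starts the MRAppMaster with `-Xmx768m` inside a 2048 MB container, so if this really is a heap problem, one thing worth trying is raising the ApplicationMaster memory before submitting the job. The sketch below is not a confirmed fix for this failure; the property names are the standard MRv2/YARN ones, and the concrete values are illustrative only:

```java
import java.util.Properties;

public class AmMemorySettings {

    // Illustrative values only -- tune them to your cluster. The log above
    // shows the MRAppMaster launched with -Xmx768m in a <memory:2048> container.
    public static Properties amMemoryProps() {
        Properties props = new Properties();
        // Container size requested for the MR ApplicationMaster, in MB
        props.setProperty("yarn.app.mapreduce.am.resource.mb", "4096");
        // JVM heap for the AM process; keep it roughly 80% of the container size
        props.setProperty("yarn.app.mapreduce.am.command-opts", "-Xmx3276m");
        return props;
    }

    public static void main(String[] args) {
        amMemoryProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

These properties could be merged into the configuration you pass to the Cascading connector. Before changing anything, though, it's worth confirming the actual cause: exit code 1 only tells you the AM JVM died at launch, and the real error should be in the container's stderr, which you can pull with `yarn logs -applicationId application_1466017117730_0006` (assuming log aggregation is enabled on the cluster).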
2016-06-16 12:23:53,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2016-06-16 13:52:41,457 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 6 2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 6 submitted by user jmill383 2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1466017117730_0006 2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 IP=127.0.0.1 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1466017117730_0006 2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from NEW to NEW_SAVING 2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1466017117730_0006 2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from NEW_SAVING to SUBMITTED 2016-06-16 13:52:41,931 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1466017117730_0006 user: jmill383 leaf-queue of parent: root #applications: 1 2016-06-16 13:52:41,931 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1466017117730_0006 from user: jmill383, in queue: default 2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from SUBMITTED to ACCEPTED 2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : 
appattempt_1466017117730_0006_000001 2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from NEW to SUBMITTED 2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0006 from user: jmill383 activated in queue: default 2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0006 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@d041557, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1 2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0006_000001 to scheduler from user jmill383 in queue default 2016-06-16 13:52:41,936 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from SUBMITTED to SCHEDULED 2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_01_000001 Container Transitioned from NEW to ALLOCATED 2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0006 CONTAINERID=container_1466017117730_0006_01_000001 2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0006_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after 
allocation 2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0006_000001 container=Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8> 2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8> 2016-06-16 13:52:42,365 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0006_01_000001 2016-06-16 13:52:42,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_01_000001 Container Transitioned from ALLOCATED to ACQUIRED 2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0006_000001 2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0006 AttemptId: appattempt_1466017117730_0006_000001 MasterContainer: Container: [ContainerId: 
container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from SCHEDULED to ALLOCATED_SAVING 2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from ALLOCATED_SAVING to ALLOCATED 2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0006_000001 2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000001 2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0006_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0006_000001 2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0006_000001 2016-06-16 
13:52:42,373 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000001 2016-06-16 13:52:42,374 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from ALLOCATED to LAUNCHED 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_01_000001 Container Transitioned from ACQUIRED to COMPLETED 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0006_01_000001 in state: COMPLETED event:FINISHED 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0006 CONTAINERID=container_1466017117730_0006_01_000001 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0006_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0006_000001 with final state: FAILED, and exit status: 1 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 
user=jmill383 user-resources=<memory:0, vCores:0> 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from LAUNCHED to FINAL_SAVING 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8> 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8> 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0006_000001 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0006_000001 released container container_1466017117730_0006_01_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, 
removing password for appattempt_1466017117730_0006_000001 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from FINAL_SAVING to FAILED 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0006_000001 is done. finalState=FAILED 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0006_000002 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0006 requests cleared 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from NEW to SUBMITTED 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0006 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0006 from user: jmill383 activated in queue: default 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0006 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@7da36a0f, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 
#queue-active-applications: 1 2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0006_000002 to scheduler from user jmill383 in queue default 2016-06-16 13:52:43,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from SUBMITTED to SCHEDULED 2016-06-16 13:52:44,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2016-06-16 13:52:44,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_02_000001 Container Transitioned from NEW to ALLOCATED 2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0006 CONTAINERID=container_1466017117730_0006_02_000001 2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0006_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation 2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0006_000002 container=Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8> 2016-06-16 13:52:44,366 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8> 2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0006_02_000001 2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_02_000001 Container Transitioned from ALLOCATED to ACQUIRED 2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0006_000002 2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0006 AttemptId: appattempt_1466017117730_0006_000002 MasterContainer: Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from SCHEDULED to ALLOCATED_SAVING 2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from ALLOCATED_SAVING to ALLOCATED 2016-06-16 
13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0006_000002 2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000002 2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0006_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0006_000002 2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0006_000002 2016-06-16 13:52:44,375 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000002 2016-06-16 13:52:44,375 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from ALLOCATED to LAUNCHED 2016-06-16 13:52:45,366 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_02_000001 Container Transitioned from ACQUIRED to COMPLETED 2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0006_02_000001 in state: COMPLETED event:FINISHED 2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0006 CONTAINERID=container_1466017117730_0006_02_000001 2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0006_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true 2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0006_000002 with final state: FAILED, and exit status: 1 2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0> 2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from LAUNCHED to FINAL_SAVING 2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 
queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8> 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8> 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0006_000002 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0006_000002 released container container_1466017117730_0006_02_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0006_000002 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from FINAL_SAVING to FAILED 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. 
The max attempts is 2 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1466017117730_0006 with final state: FAILED 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from ACCEPTED to FINAL_SAVING 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0006_000002 is done. finalState=FAILED 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1466017117730_0006 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0006 requests cleared 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1466017117730_0006 failed 2 times due to AM Container for appattempt_1466017117730_0006_000002 exited with exitCode: 1 For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0006/Then, click on links to logs of each attempt. Diagnostics: Exception from container-launch. 
Container id: container_1466017117730_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0006 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from FINAL_SAVING to FAILED 2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1466017117730_0006 user: jmill383 leaf-queue of parent: root #applications: 0 2016-06-16 13:52:45,367 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1466017117730_0006 failed 2 times due to AM Container for appattempt_1466017117730_0006_000002 exited with exitCode: 1 For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0006/Then, click on links to logs of each attempt. Diagnostics: Exception from container-launch. 
Container id: container_1466017117730_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application. APPID=application_1466017117730_0006
2016-06-16 13:52:45,368 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1466017117730_0006,name=insert overwrite table keyvalue selec...dual(Stage-1),user=jmill383,queue=default,state=FAILED,trackingUrl=http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0006,appMasterHost=N/A,startTime=1466099561930,finishTime=1466099565367,finalStatus=FAILED
2016-06-16 13:52:46,136 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 IP=127.0.0.1 OPERATION=Kill Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1466017117730_0006
2016-06-16 13:52:46,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
> 16/06/16 13:52:46 WARN mapreduce.Counters: Group FileSystemC...
2016-06-17 08:11:50,851 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 7 2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 7 submitted by user jmill383 2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1466017117730_0007 2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from NEW to NEW_SAVING 2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 IP=127.0.0.1 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1466017117730_0007 2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1466017117730_0007 2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from NEW_SAVING to SUBMITTED 2016-06-17 08:11:51,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1466017117730_0007 user: jmill383 leaf-queue of parent: root #applications: 1 2016-06-17 08:11:51,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1466017117730_0007 from user: jmill383, in queue: default 2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from SUBMITTED to ACCEPTED 2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0007_000001 2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
appattempt_1466017117730_0007_000001 State change from NEW to SUBMITTED 2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0007 from user: jmill383 activated in queue: default 2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0007 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@78ebdb9f, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1 2016-06-17 08:11:51,327 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0007_000001 to scheduler from user jmill383 in queue default 2016-06-17 08:11:51,327 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from SUBMITTED to SCHEDULED 2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_01_000001 Container Transitioned from NEW to ALLOCATED 2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_01_000001 2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0007_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation 2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application 
attempt=appattempt_1466017117730_0007_000001 container=Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8> 2016-06-17 08:11:52,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 2016-06-17 08:11:52,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8> 2016-06-17 08:11:52,258 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0007_01_000001 2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_01_000001 Container Transitioned from ALLOCATED to ACQUIRED 2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0007_000001 2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0007 AttemptId: appattempt_1466017117730_0007_000001 MasterContainer: Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, 
Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from SCHEDULED to ALLOCATED_SAVING 2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from ALLOCATED_SAVING to ALLOCATED 2016-06-17 08:11:52,261 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0007_000001 2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000001 2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0007_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0007_000001 2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0007_000001 2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: 
container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000001 2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from ALLOCATED to LAUNCHED 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_01_000001 Container Transitioned from ACQUIRED to COMPLETED 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0007_01_000001 in state: COMPLETED event:FINISHED 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_01_000001 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0007_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0007_000001 with final state: FAILED, and exit status: 1 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0> 2016-06-17 08:11:53,257 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from LAUNCHED to FINAL_SAVING 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8> 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8> 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0007_000001 2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0007_000001 released container container_1466017117730_0007_01_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0007_000001 2016-06-17 08:11:53,258 
INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from FINAL_SAVING to FAILED 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0007_000002 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0007_000001 is done. finalState=FAILED 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from NEW to SUBMITTED 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0007 requests cleared 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0007 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0 2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0007 from user: jmill383 activated in queue: default 2016-06-17 08:11:53,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0007 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@625b5f17, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1 2016-06-17 08:11:53,259 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0007_000002 to scheduler from user jmill383 in queue default 2016-06-17 08:11:53,259 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from SUBMITTED to SCHEDULED 2016-06-17 08:11:54,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 2016-06-17 08:11:54,258 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_02_000001 Container Transitioned from NEW to ALLOCATED 2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_02_000001 2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0007_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation 2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0007_000002 container=Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8> 2016-06-17 08:11:54,259 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8> 2016-06-17 08:11:54,261 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0007_02_000001 2016-06-17 08:11:54,262 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_02_000001 Container Transitioned from ALLOCATED to ACQUIRED 2016-06-17 08:11:54,262 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0007_000002 2016-06-17 08:11:54,262 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0007 AttemptId: appattempt_1466017117730_0007_000002 MasterContainer: Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 2016-06-17 08:11:54,263 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from SCHEDULED to ALLOCATED_SAVING 2016-06-17 08:11:54,263 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from ALLOCATED_SAVING to ALLOCATED 2016-06-17 
08:11:54,263 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0007_000002 2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000002 2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0007_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0007_000002 2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0007_000002 2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000002 2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from ALLOCATED to LAUNCHED 2016-06-17 08:11:55,259 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_02_000001 Container Transitioned from ACQUIRED to COMPLETED 2016-06-17 08:11:55,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0007_02_000001 in state: COMPLETED event:FINISHED 2016-06-17 08:11:55,259 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_02_000001 2016-06-17 08:11:55,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0007_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0007_000002 with final state: FAILED, and exit status: 1 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0> 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from LAUNCHED to FINAL_SAVING 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] 
queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8> 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8> 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0007_000002 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0007_000002 released container container_1466017117730_0007_02_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0007_000002 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from FINAL_SAVING to FAILED 2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. 
The max attempts is 2
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1466017117730_0007 with final state: FAILED
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from ACCEPTED to FINAL_SAVING
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0007_000002 is done. finalState=FAILED
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1466017117730_0007
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0007 requests cleared
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0007 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1466017117730_0007 failed 2 times due to AM Container for appattempt_1466017117730_0007_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0007/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0007_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-06-17 08:11:55,261 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from FINAL_SAVING to FAILED
2016-06-17 08:11:55,261 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1466017117730_0007 user: jmill383 leaf-queue of parent: root #applications: 0
2016-06-17 08:11:55,261 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1466017117730_0007 failed 2 times due to AM Container for appattempt_1466017117730_0007_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0007/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0007_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application. APPID=application_1466017117730_0007
2016-06-17 08:11:55,261 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1466017117730_0007,name=insert overwrite table keyvalue selec...dual(Stage-1),user=jmill383,queue=default,state=FAILED,trackingUrl=http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0007,appMasterHost=N/A,startTime=1466165511321,finishTime=1466165515260,finalStatus=FAILED
2016-06-17 08:11:55,652 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383 IP=127.0.0.1 OPERATION=Kill Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1466017117730_0007
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
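The ResourceManager log above only reports exit code 1 without a cause; the actual error should be in the stderr of the failed AM container. Assuming log aggregation is enabled on the cluster, the aggregated container logs for this run could be pulled with something like:

```shell
# Fetch the aggregated container logs (including the AM's stdout/stderr)
# for the failed application from the RM log above.
yarn logs -applicationId application_1466017117730_0007
```

The real failure reason (e.g. a missing class on the AM classpath, or an OOM inside the 768 MB AM heap shown in the launch command) typically surfaces there rather than in the ResourceManager log.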
Below is a copy of the NodeManager log:
2016-06-17 08:11:52,270 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1466017117730_0007_000001 (auth:SIMPLE)
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1466017117730_0007_01_000001 by user jmill383
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1466017117730_0007
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 IP=10.40.190.207 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_01_000001
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from NEW to INITING
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1466017117730_0007_01_000001 to application application_1466017117730_0007
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from INITING to RUNNING
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from NEW to LOCALIZING
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1466017117730_0007
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.jar transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.splitmetainfo transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.split transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.xml transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hive/jmill383/7aaa4ab5-bffc-4a5e-9b25-626e34603153/hive_2016-06-17_08-11-50_298_1846359449944448840-1/-mr-10004/60053f51-d5c2-4eb7-a55a-472b0cc36df8/map.xml transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1466017117730_0007_01_000001
2016-06-17 08:11:52,280 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-jmill383/nm-local-dir/nmPrivate/container_1466017117730_0007_01_000001.tokens. Credentials list:
2016-06-17 08:11:52,288 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user jmill383
2016-06-17 08:11:52,294 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-jmill383/nm-local-dir/nmPrivate/container_1466017117730_0007_01_000001.tokens to /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_01_000001.tokens
2016-06-17 08:11:52,295 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007 = file:/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007
2016-06-17 08:11:52,383 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.jar(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/10/job.jar) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,399 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.splitmetainfo(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/11/job.splitmetainfo) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,414 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.split(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/12/job.split) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,430 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.xml(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/13/job.xml) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,446 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hive/jmill383/7aaa4ab5-bffc-4a5e-9b25-626e34603153/hive_2016-06-17_08-11-50_298_1846359449944448840-1/-mr-10004/60053f51-d5c2-4eb7-a55a-472b0cc36df8/map.xml(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/filecache/17/map.xml) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,446 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from LOCALIZING to LOCALIZED
2016-06-17 08:11:52,461 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from LOCALIZED to RUNNING
2016-06-17 08:11:52,476 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_01_000001/default_container_executor.sh]
2016-06-17 08:11:52,588 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1466017117730_0007_01_000001 is : 1
2016-06-17 08:11:52,588 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1466017117730_0007_01_000001 and exit code: 1
ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_1466017117730_0007_01_000001
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 1
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Stack trace: ExitCodeException exitCode=1:
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell.run(Shell.java:455)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:52,588 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1466017117730_0007_01_000001
2016-06-17 08:11:52,601 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_01_000001
2016-06-17 08:11:52,601 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_01_000001
2016-06-17 08:11:52,601 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from EXITED_WITH_FAILURE to DONE
2016-06-17 08:11:52,601 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1466017117730_0007_01_000001 from application application_1466017117730_0007
2016-06-17 08:11:52,602 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1466017117730_0007
2016-06-17 08:11:52,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1466017117730_0007_01_000001
2016-06-17 08:11:52,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1466017117730_0007_01_000001
2016-06-17 08:11:54,258 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1466017117730_0007_01_000001]
2016-06-17 08:11:54,270 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1466017117730_0007_000002 (auth:SIMPLE)
2016-06-17 08:11:54,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1466017117730_0007_02_000001 by user jmill383
2016-06-17 08:11:54,275 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 IP=10.40.190.207 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_02_000001
2016-06-17 08:11:54,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1466017117730_0007_02_000001 to application application_1466017117730_0007
2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from NEW to LOCALIZING
2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1466017117730_0007
2016-06-17 08:11:54,277 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from LOCALIZING to LOCALIZED
2016-06-17 08:11:54,303 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from LOCALIZED to RUNNING
2016-06-17 08:11:54,318 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_02_000001/default_container_executor.sh]
2016-06-17 08:11:54,426 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1466017117730_0007_02_000001 is : 1
2016-06-17 08:11:54,426 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1466017117730_0007_02_000001 and exit code: 1
ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_1466017117730_0007_02_000001
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 1
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Stack trace: ExitCodeException exitCode=1:
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell.run(Shell.java:455)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 	at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:54,427 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1466017117730_0007_02_000001
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_02_000001
2016-06-17 08:11:54,438 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_02_000001
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1466017117730_0007_02_000001 from application application_1466017117730_0007
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1466017117730_0007
2016-06-17 08:11:55,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1466017117730_0007_02_000001
2016-06-17 08:11:55,611 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1466017117730_0007_02_000001
2016-06-17 08:11:56,259 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1466017117730_0007_02_000001]
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1466017117730_0007
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1466017117730_0007, with delay of 10800 seconds
> 16/06/16 13:52:46 WARN mapreduce.Counters: Group FileSystemC...