Problems using HiveFlow


PaulON

Aug 5, 2015, 8:07:20 AM
to cascading-user
Hey,

I'm trying something I thought would be pretty simple, but I'm hitting some pretty basic blockers.

We have a bunch of Hive tables; I want to query them via Cascading, perform some logic, and dump the result to HDFS.

As a basic test I use a simple SELECT query and try to sink to HDFS, but I never get any output in the file (in fact the file/folder isn't even created).

The query runs directly in Hive without any problems; in fact I can even see the expected output in the tmp files when I run the Cascading job.

Any ideas what I am doing wrong here?

(Note: I can use a regular HadoopFlowConnector and just sink to HDFS without any problems.)

final String stringType = "string";

Tap fileSink = createProductConfigTap(SinkMode.REPLACE);

List<Tap> sources = new ArrayList<>();
HiveTableDescriptor layoutDesc = new HiveTableDescriptor("ele_defn",
        new String[]{"ele_defn_id", "ele_nme", "ele_typ", "ele_prnt_id", "ele_path"},
        new String[]{stringType, stringType, stringType, stringType, stringType});
sources.add(new HiveTap(layoutDesc, layoutDesc.toScheme(), SinkMode.KEEP, true));

String[] query = new String[]{"SELECT edf.ele_nme, edf.ele_defn_id FROM ele_defn edf"};

Flow extractPC = new HiveFlow("PC", query, sources, fileSink);
new CascadeConnector().connect(extractPC).complete();


Also, when I try to run the real query (which joins across multiple tables), the job seems to run repeatedly (not visible in Hadoop or Hue as jobs) and then exits with a "No Space Left On Device" error, which seems incorrect.

15/08/05 12:17:38 INFO mr.MapredLocalTask: Executing: /usr/lib/hadoop/bin/hadoop jar /jobs/productConfig-1.0.26-SNAPSHOT.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/tmp/vagrant/hive_2015-08-05_12-17-36_136_2113008607249771120-1/-local-10005/plan.xml   -jobconffile file:/tmp/vagrant/hive_2015-08-05_12-17-36_136_2113008607249771120-1/-local-10006/jobconf.xml
Exception in thread "main" java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:90)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:64)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:188)
Execution failed with exit status: 1
15/08/05 12:17:43 ERROR exec.Task: Execution failed with exit status: 1
Obtaining error information
15/08/05 12:17:43 ERROR exec.Task: Obtaining error information

Task failed!
Task ID:
  Stage-4

Logs:

15/08/05 12:17:43 ERROR exec.Task:
Task failed!
Task ID:
  Stage-4


Looking at the demo samples, I don't see anyone doing what we are trying; are we approaching this the wrong way altogether?

Cheers!

Andre Kelpe

Aug 5, 2015, 8:52:13 AM
to cascading-user
Hi Paul,

this is not how HiveFlow currently works. We currently don't support binding the output of a query to the sink Tap, since that would mean we have to modify the SQL passed by the user, which is a big no-no. You can modify the query to be something like this:

CREATE TABLE <sink> AS <your query>

You could also read from the table via Cascading instead of using SQL. You can see all the features supported by cascading-hive in the demo sub-project: https://github.com/Cascading/cascading-hive/tree/2.0/demo
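For the second option, reading the table via Cascading and sinking straight to HDFS, a minimal sketch could look like the following; the output path and the retained columns are just illustrative assumptions, not taken from your setup:

// Minimal sketch, assuming cascading-core, cascading-hadoop and cascading-hive are on the classpath.
import cascading.flow.Flow;
import cascading.flow.FlowDef;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.pipe.Pipe;
import cascading.pipe.assembly.Retain;
import cascading.scheme.hadoop.TextDelimited;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tap.hive.HiveTableDescriptor;
import cascading.tap.hive.HiveTap;
import cascading.tuple.Fields;

public class EleDefnExtract {
    public static void main(String[] args) {
        // describe the existing Hive table, same descriptor as in the snippet above
        HiveTableDescriptor layoutDesc = new HiveTableDescriptor("ele_defn",
                new String[]{"ele_defn_id", "ele_nme", "ele_typ", "ele_prnt_id", "ele_path"},
                new String[]{"string", "string", "string", "string", "string"});

        // read the table through its HiveTap, no SQL involved
        Tap source = new HiveTap(layoutDesc, layoutDesc.toScheme(), SinkMode.KEEP, true);

        // write two columns as tab-delimited text to HDFS ("/tmp/ele_defn_extract" is made up)
        Tap sink = new Hfs(new TextDelimited(new Fields("ele_nme", "ele_defn_id"), "\t"),
                "/tmp/ele_defn_extract", SinkMode.REPLACE);

        // keep only the two columns of interest
        Pipe pipe = new Retain(new Pipe("extract"), new Fields("ele_nme", "ele_defn_id"));

        Flow flow = new HadoopFlowConnector().connect(FlowDef.flowDef()
                .setName("ele_defn extract")
                .addSource(pipe, source)
                .addTailSink(pipe, sink));
        flow.complete();
    }
}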

- André

PaulON

Aug 5, 2015, 10:18:28 AM
to cascading-user
Ah... OK, thanks André!

So does the sink Tap in a HiveFlow serve any purpose?

I was using the examples in the demo, but I wasn't aware of this restriction; might it be worthwhile to make it more explicit? (Or is it just me? :()

Cheers,
Paul

Andre Kelpe

Aug 5, 2015, 10:23:11 AM
to cascading-user
Yes, it enables the CascadeConnector to figure out which flows can run in which order. One of the big use cases that drove the creation of cascading-hive was being able to combine Hive and Cascading flows. In order to do that, we have to have taps, so that the CascadeConnector knows when to run what.
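As a rough sketch of how that looks (the table, field names and helper taps here are purely illustrative), the HiveTap describing the table the query creates is the HiveFlow's sink and is then reused as the source of the follow-on Cascading flow; that shared tap is what the CascadeConnector uses to order the two:

// Sketch only; sourceTableTap and hdfsSinkTap stand for taps built as in the
// other snippets in this thread (imports: java.util.Arrays plus the Cascading
// classes used there).
HiveTableDescriptor intermediateDesc = new HiveTableDescriptor("intermediate",
        new String[]{"key", "value"}, new String[]{"string", "string"});
Tap intermediateTap = new HiveTap(intermediateDesc, intermediateDesc.toScheme(),
        SinkMode.REPLACE, false);

// Hive runs the query and creates the table; the tap only describes it.
Flow hiveFlow = new HiveFlow("create intermediate",
        new String[]{"CREATE TABLE intermediate AS SELECT key, value FROM source_table"},
        Arrays.<Tap>asList(sourceTableTap), intermediateTap);

// A plain Cascading flow that consumes the table the query produced.
Pipe pipe = new Pipe("post-process");
Flow cascadingFlow = new HadoopFlowConnector().connect(FlowDef.flowDef()
        .addSource(pipe, intermediateTap)
        .addTailSink(pipe, hdfsSinkTap));

// The shared tap tells the CascadeConnector to run the Hive flow first.
new CascadeConnector().connect(hiveFlow, cascadingFlow).complete();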

If you think the docs can be enhanced, please send us a PR and I will incorporate it.

- André



PaulON

Aug 5, 2015, 12:07:52 PM
to cascading-user
OK, makes sense. Is the defined sink also used to create the resulting Hive table, or is that done by Hive itself?

I'm able to get it working with a simple SELECT from a single table; however, as soon as I try a join I am back to the same failure as before:

15/08/05 16:52:49 INFO mr.MapredLocalTask: Executing: /usr/lib/hadoop/bin/hadoop jar /jobs/productConfig/lib/productConfig-1.0.26-SNAPSHOT.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/tmp/vagrant/hive_2015-08-05_16-52-48_422_7625493396239307589-1/-local-10005/plan.xml   -jobconffile file:/tmp/vagrant/hive_2015-08-05_16-52-48_422_7625493396239307589-1/-local-10006/jobconf.xml
Exception in thread "main" java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:90)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:64)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:188)
Execution failed with exit status: 1
15/08/05 16:52:55 ERROR exec.Task: Execution failed with exit status: 1
Obtaining error information
15/08/05 16:52:55 ERROR exec.Task: Obtaining error information

Task failed!
Task ID:
  Stage-5

Logs:

15/08/05 16:52:55 ERROR exec.Task:
Task failed!
Task ID:
  Stage-5

Logs:
.....
.....
.....

Job Submission failed with exception 'java.io.FileNotFoundException(File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist)'
15/08/05 16:53:04 ERROR exec.Task: Job Submission failed with exception 'java.io.FileNotFoundException(File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist)'
java.io.FileNotFoundException: File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist
....
....
....
cascading.CascadingException: hive error 'FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask' while running query CREATE TABLE otmu_product_config as SELECT edf.ele_nme, edf.ele_defn_id, pcd.prod_cnfg_dtl_desc FROM ele_defn edf, prod_cnfg_dtl pcd
......
.....


Any ideas? This exact same query runs as expected in Hive; could it be a file permissions issue?

Andre Kelpe

Aug 5, 2015, 12:18:13 PM
to cascading-user

On Wed, Aug 5, 2015 at 6:07 PM, PaulON <pone...@gmail.com> wrote:
OK, makes sense. Is the defined sink also used to create the resulting Hive table, or is that done by Hive itself?

In this use case, Hive will do that. If you use a HiveTap for writing in a Cascading flow, we will handle it (including registering it in the metastore).
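As a sketch of that second case (the table name, field names and input path here are made up for illustration), the HiveTap is simply used as the sink of a plain Cascading flow, and the table gets created and registered when the flow writes to it:

// Sketch only: "example_output" and "/some/hdfs/input" are illustrative.
HiveTableDescriptor outDesc = new HiveTableDescriptor("example_output",
        new String[]{"key", "value"},
        new String[]{"string", "string"});

// writing through the HiveTap creates the table and registers it in the metastore
Tap hiveSink = new HiveTap(outDesc, outDesc.toScheme(), SinkMode.REPLACE, false);
Tap hdfsSource = new Hfs(new TextDelimited(new Fields("key", "value"), "\t"),
        "/some/hdfs/input");

Pipe pipe = new Pipe("load example_output");
new HadoopFlowConnector().connect(FlowDef.flowDef()
        .addSource(pipe, hdfsSource)
        .addTailSink(pipe, hiveSink))
        .complete();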
 

I'm able to get it working with a simple SELECT from a single table; however, as soon as I try a join I am back to the same failure as before:

15/08/05 16:52:49 INFO mr.MapredLocalTask: Executing: /usr/lib/hadoop/bin/hadoop jar /jobs/productConfig/lib/productConfig-1.0.26-SNAPSHOT.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/tmp/vagrant/hive_2015-08-05_16-52-48_422_7625493396239307589-1/-local-10005/plan.xml   -jobconffile file:/tmp/vagrant/hive_2015-08-05_16-52-48_422_7625493396239307589-1/-local-10006/jobconf.xml
Exception in thread "main" java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:90)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:64)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:188)
Execution failed with exit status: 1
15/08/05 16:52:55 ERROR exec.Task: Execution failed with exit status: 1
Obtaining error information
15/08/05 16:52:55 ERROR exec.Task: Obtaining error information


You seem to be running out of local disk space.
 
Task failed!
Task ID:
  Stage-5

Logs:

15/08/05 16:52:55 ERROR exec.Task:
Task failed!
Task ID:
  Stage-5

Logs:
.....
.....
.....

Job Submission failed with exception 'java.io.FileNotFoundException(File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist)'
15/08/05 16:53:04 ERROR exec.Task: Job Submission failed with exception 'java.io.FileNotFoundException(File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist)'
java.io.FileNotFoundException: File file:/tmp/vagrant/hive_2015-08-05_16-51-19_567_3005785431880350821-1/-local-10003/HashTable-Stage-4 does not exist
....
....
....
cascading.CascadingException: hive error 'FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask' while running query CREATE TABLE otmu_product_config as SELECT edf.ele_nme, edf.ele_defn_id, pcd.prod_cnfg_dtl_desc FROM ele_defn edf, prod_cnfg_dtl pcd
......
.....



Can you walk me through how you do this vs. the way you do it in Hive itself?

 


PaulON

Aug 5, 2015, 12:39:46 PM
to cascading-user


On Wednesday, August 5, 2015 at 5:18:13 PM UTC+1, Andre Kelpe wrote:

In this use case, Hive will do that. If you use a HiveTap for writing in a Cascading flow, we will handle it (including registering it in the metastore).
Makes sense, cheers. 
 
 
You seem to be running out of local disk space.
It looks like that from the error message, but I can't see any evidence of it on the filesystem; I have GBs of free space.
 
Can you walk me through how you do this vs. the way you do it in Hive itself?

In Hive I am literally running the statement in the Hive shell (also from Hue) and it just runs (and creates the table) as expected.

In my sources, do I need to define all fields/columns or just the ones I care about? (I have tried both.)

From Cascading I have the following:

HiveTableDescriptor confDesc = new HiveTableDescriptor("otmu_product_config",
        new String[]{"ele_nme", "ele_defn_id", "prod_cnfg_dtl_desc"},
        new String[]{stringType, stringType, stringType});

Tap sink = new HiveTap(confDesc, confDesc.toScheme(), SinkMode.REPLACE, false);

List<Tap> sources = createProdConfigSources();

String[] query = new String[]{"DROP TABLE otmu_product_config ",
        "CREATE TABLE otmu_product_config as " +
        "SELECT edf.ele_nme, edf.ele_defn_id, pcd.prod_cnfg_dtl_desc FROM ele_defn edf, prod_cnfg_dtl pcd"};

Flow extractPC = new HiveFlow("Extract Otmu Product Config", query, sources, sink);
new CascadeConnector().connect(extractPC).complete();

Where

    public static List<Tap> createProdConfigSources() {
        final String stringType = "string";
        final String bigint = "bigint";

        List<Tap> sources = new ArrayList<>();

        HiveTableDescriptor layoutDesc = new HiveTableDescriptor("ele_defn",
                new String[]{"ele_defn_id", "ele_nme", "ele_typ", "ele_prnt_id", "ele_path"},
                new String[]{stringType, stringType, stringType, stringType, stringType});
        sources.add(new HiveTap(layoutDesc, layoutDesc.toScheme(), SinkMode.KEEP, true));

        HiveTableDescriptor cnfg_dtl_desc = new HiveTableDescriptor("prod_cnfg_dtl",
                new String[]{"prod_cnfg_dtl_id", "prod_cnfg_id", "prod_cnfg_dtl_cd", "prod_cnfg_dtl_desc", "prod_cnfg_dtl_val"},
                new String[]{bigint, bigint, bigint, stringType, stringType});
        sources.add(new HiveTap(cnfg_dtl_desc, cnfg_dtl_desc.toScheme(), SinkMode.KEEP, true));

        return sources;
    }



PaulON

Aug 5, 2015, 12:59:20 PM
to cascading-user
Out of interest I opened another shell and repeatedly ran df -h while the Cascading job ran, and it does indeed appear that I am somehow running out of local space...

Any ideas why a small query like this would need 2 GB of disk space?
(And why the same is not required when run in the Hive shell?)

Also, should this not be using HDFS rather than local disk? I don't think we can guarantee that we will always have tens of GBs of free disk on the edge node of our cluster...

Paul

PaulON

Aug 5, 2015, 4:07:30 PM
to cascading-user
OK, it doesn't look like that's the real problem.

I switched to running against a partition with 12 GB free and the job still dies:

ERROR mr.MapredLocalTask: Execution failed with exit status: 137

It seems like this is memory related, but why is the job running for so long under Cascading (5+ minutes, compared to a couple of seconds in the Hive shell)?

I've tried setting
props.setProperty("hive.exec.mode.local.auto", "false");
props.setProperty("mapred.job.tracker", "cluster:8088");

in the CascadeConnector, but I still see that it's trying to run as a local task (mr.MapredLocalTask).

Has anyone successfully run similar joins using cascading-hive?

Andre Kelpe

Aug 6, 2015, 6:04:04 AM
to cascading-user
On Wed, Aug 5, 2015 at 10:07 PM, PaulON <pone...@gmail.com> wrote:
OK, it doesn't look like that's the real problem.

I switched to running against a partition with 12 GB free and the job still dies.

Which Hive and Hadoop versions are these? I'd like to replicate the problem over here.

 
ERROR mr.MapredLocalTask: Execution failed with exit status: 137

It seems like this is memory related, but why is the job running for so long under Cascading (5+ minutes, compared to a couple of seconds in the Hive shell)?

I've tried setting
props.setProperty("hive.exec.mode.local.auto", "false");
props.setProperty("mapred.job.tracker", "cluster:8088");

in the CascadeConnector, but I still see that it's trying to run as a local task (mr.MapredLocalTask).

MapredLocalTask is not a local MapReduce job. It is a task that Hive runs locally before it kicks off a MapReduce job; they use it for preparing hash-join data and similar things: https://hive.apache.org/javadocs/r0.12.0/api/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.html
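If it is that local hash-table step that keeps failing, one thing that might be worth experimenting with (not something verified in this thread) is turning off Hive's automatic map-join conversion so the join runs as a plain shuffle join and no local task is needed; hive.auto.convert.join is the standard Hive switch for that, passed via properties the same way as the settings you already tried:

// Sketch: pass Hive settings through the CascadeConnector properties.
// Whether they actually reach the embedded Hive driver depends on how things
// are wired up, so treat this as an experiment rather than a confirmed fix.
Properties props = new Properties();                      // java.util.Properties
props.setProperty("hive.auto.convert.join", "false");     // skip the local hash-table (map-join) stage
props.setProperty("hive.exec.mode.local.auto", "false");

// query, sources and sink as in your earlier snippet
Flow extractPC = new HiveFlow("Extract Otmu Product Config", query, sources, sink);
new CascadeConnector(props).connect(extractPC).complete();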

 
Has anyone successfully run similar joins using cascading-hive?


On Wednesday, August 5, 2015 at 5:59:20 PM UTC+1, PaulON wrote:
Out of interest I opened another shell and repeatedly ran df -h while the Cascading job ran, and it does indeed appear that I am somehow running out of local space...

Any ideas why a small query like this would need 2 GB of disk space?
(And why the same is not required when run in the Hive shell?)

Also, should this not be using HDFS rather than local disk? I don't think we can guarantee that we will always have tens of GBs of free disk on the edge node of our cluster...

Paul


PaulON

Aug 7, 2015, 8:41:55 AM
to cascading-user


On Thursday, August 6, 2015 at 11:04:04 AM UTC+1, Andre Kelpe wrote:
On Wed, Aug 5, 2015 at 10:07 PM, PaulON <pone...@gmail.com> wrote:
OK, it doesn't look like that's the real problem.

I switched to running against a partition with 12 GB free and the job still dies.

Which Hive and Hadoop versions are these? I'd like to replicate the problem over here.

I'm using

Hive 0.13.1-cdh5.3.1
Hadoop 2.5.0-cdh5.3.1
 
MapredLocalTask is not a local MapReduce job. It is a task that Hive runs locally before it kicks off a MapReduce job; they use it for preparing hash-join data and similar things: https://hive.apache.org/javadocs/r0.12.0/api/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.html

Ah, OK, thanks for the explanation.
It does seem that this is the bit that is causing problems, as I never see any jobs appear in the job tracker (it doesn't get to that point).
 

JOHN MILLER

Jun 9, 2016, 2:37:09 PM
to cascading-user
Greetings

I am writing to inquire about an error I can't seem to get rid of when executing a HiveFlow:

 cascading.CascadingException: hive error 'FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask'

I can send the logs for this transaction if needed.

The documentation that I have been reading states that this particular error is associated with exceeding the Java heap space, but I do not see any "Java heap space" errors anywhere in my listings:
https://blogs.msdn.microsoft.com/bigdatasupport/2014/11/11/some-commonly-used-yarn-memory-settings/

Please advise if you can assist.

John M

Andre Kelpe

Jun 10, 2016, 6:52:00 AM
to cascading-user
Hi John,

please tell us a bit more about your set-up and when and how you see
the problem. Also, a stack trace would be helpful.

- André

JOHN MILLER

Jun 16, 2016, 12:31:10 PM
to cascading-user
Greetings

I am actually attempting to run MapReduce via a Hive query:

SELECT DISTINCT(field name) from table name;

Upon executing this query, I receive the following:

hive> select distinct(warctype) from commoncrawl18;
Query ID = jmill383_20160616122347_d611d884-2c87-4bc2-bfe6-497590da7085
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1466017117730_0005, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/
Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466017117730_0005
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-06-16 12:23:53,166 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1466017117730_0005 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0005

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
hive>



The Hadoop logs from this execution do not reveal anything much different. Below is the Hadoop log of this query:

2016-06-16 12:23:48,569 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 5
2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 5 submitted by user jmill383
2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1466017117730_0005
2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	IP=127.0.0.1	OPERATION=Submit Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1466017117730_0005
2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from NEW to NEW_SAVING
2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1466017117730_0005
2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from NEW_SAVING to SUBMITTED
2016-06-16 12:23:49,042 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1466017117730_0005 user: jmill383 leaf-queue of parent: root #applications: 1
2016-06-16 12:23:49,043 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1466017117730_0005 from user: jmill383, in queue: default
2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from SUBMITTED to ACCEPTED
2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0005_000001
2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from NEW to SUBMITTED
2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0005 from user: jmill383 activated in queue: default
2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0005 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@47d16aaa, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-06-16 12:23:49,047 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0005_000001 to scheduler from user jmill383 in queue default
2016-06-16 12:23:49,048 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from SUBMITTED to SCHEDULED
2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_01_000001 Container Transitioned from NEW to ALLOCATED
2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0005	CONTAINERID=container_1466017117730_0005_01_000001
2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0005_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0005_000001 container=Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2016-06-16 12:23:49,405 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2016-06-16 12:23:49,406 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0005_01_000001
2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0005_000001
2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0005 AttemptId: appattempt_1466017117730_0005_000001 MasterContainer: Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ]
2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from SCHEDULED to ALLOCATED_SAVING
2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from ALLOCATED_SAVING to ALLOCATED
2016-06-16 12:23:49,407 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0005_000001
2016-06-16 12:23:49,408 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000001
2016-06-16 12:23:49,409 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0005_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 
2016-06-16 12:23:49,409 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0005_000001
2016-06-16 12:23:49,409 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0005_000001
2016-06-16 12:23:49,415 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000001
2016-06-16 12:23:49,415 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from ALLOCATED to LAUNCHED
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_01_000001 Container Transitioned from ACQUIRED to COMPLETED
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0005_01_000001 in state: COMPLETED event:FINISHED
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0005	CONTAINERID=container_1466017117730_0005_01_000001
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0005_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0005_000001 with final state: FAILED, and exit status: 1
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0>
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from LAUNCHED to FINAL_SAVING
2016-06-16 12:23:50,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0005_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0005_000001
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0005_000001 released container container_1466017117730_0005_01_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0005_000001
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000001 State change from FINAL_SAVING to FAILED
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2
2016-06-16 12:23:50,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0005_000001 is done. finalState=FAILED
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0005_000002
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0005 requests cleared
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from NEW to SUBMITTED
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0005 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0005 from user: jmill383 activated in queue: default
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0005 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@4f769ece, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0005_000002 to scheduler from user jmill383 in queue default
2016-06-16 12:23:50,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from SUBMITTED to SCHEDULED
2016-06-16 12:23:51,406 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_02_000001 Container Transitioned from NEW to ALLOCATED
2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0005	CONTAINERID=container_1466017117730_0005_02_000001
2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0005_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0005_000002 container=Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2016-06-16 12:23:51,407 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2016-06-16 12:23:51,408 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0005_02_000001
2016-06-16 12:23:51,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-06-16 12:23:51,408 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0005_000002
2016-06-16 12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0005 AttemptId: appattempt_1466017117730_0005_000002 MasterContainer: Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ]
2016-06-16 12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from SCHEDULED to ALLOCATED_SAVING
2016-06-16 12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from ALLOCATED_SAVING to ALLOCATED
2016-06-16 12:23:51,409 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0005_000002
2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000002
2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0005_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 
2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0005_000002
2016-06-16 12:23:51,410 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0005_000002
2016-06-16 12:23:51,415 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0005_000002
2016-06-16 12:23:51,416 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from ALLOCATED to LAUNCHED
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0005_02_000001 Container Transitioned from ACQUIRED to COMPLETED
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0005_02_000001 in state: COMPLETED event:FINISHED
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0005	CONTAINERID=container_1466017117730_0005_02_000001
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0005_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0005_000002 with final state: FAILED, and exit status: 1
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0>
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from LAUNCHED to FINAL_SAVING
2016-06-16 12:23:52,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0005_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0005_000002
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0005_000002 released container container_1466017117730_0005_02_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0005_000002
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0005_000002 State change from FINAL_SAVING to FAILED
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1466017117730_0005 with final state: FAILED
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from ACCEPTED to FINAL_SAVING
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1466017117730_0005
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0005_000002 is done. finalState=FAILED
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1466017117730_0005 failed 2 times due to AM Container for appattempt_1466017117730_0005_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0005_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0005 requests cleared
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0005 State change from FINAL_SAVING to FAILED
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0005 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-06-16 12:23:52,409 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1466017117730_0005 user: jmill383 leaf-queue of parent: root #applications: 0
2016-06-16 12:23:52,409 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=Application Finished - Failed	TARGET=RMAppManager	RESULT=FAILURE	DESCRIPTION=App failed with state: FAILED	PERMISSIONS=Application application_1466017117730_0005 failed 2 times due to AM Container for appattempt_1466017117730_0005_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0005_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.	APPID=application_1466017117730_0005
2016-06-16 12:23:52,410 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1466017117730_0005,name=select distinct(warctype) from commoncrawl18(Stage-1),user=jmill383,queue=default,state=FAILED,trackingUrl=http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0005,appMasterHost=N/A,startTime=1466094229042,finishTime=1466094232409,finalStatus=FAILED
2016-06-16 12:23:53,205 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	IP=127.0.0.1	OPERATION=Kill Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1466017117730_0005
2016-06-16 12:23:53,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed..


I suspect a Java heap issue, but I do not receive any errors in my logs regarding a memory problem, just an exit code 1.

Please advise if you can assist.

Andre Kelpe

Jun 16, 2016, 12:54:39 PM
to cascading-user
That has really nothing to do with Cascading. You should ask on the
Hive mailing list or ask your Hadoop vendor:
http://hive.apache.org/mailing_lists.html

- André

JOHN MILLER

Jun 16, 2016, 2:00:27 PM
to cascading-user
Greetings

My apologies, I sent you the wrong one.

Trying to execute one of the cascading-hive examples posted on GitHub gives me the same error as the previous execution (hive error 'FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask').
Below is the log from the cascading-hive example:

[jmill383@starchild demo]$ /opt/hadoop/bin/hadoop jar  build/libs/cascading-hive-demo-1.0.jar cascading.hive.HiveDemo
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/06/16 13:52:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/16 13:52:35 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:35 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:35 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/hive/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/tmp/hadoop-unjar6128009439718134380/lib/datanucleus-rdbms-3.2.9.jar."
16/06/16 13:52:35 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/tmp/hadoop-unjar6128009439718134380/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/hive/lib/datanucleus-api-jdo-3.2.6.jar."
16/06/16 13:52:35 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/tmp/hadoop-unjar6128009439718134380/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/hive/lib/datanucleus-core-3.2.10.jar."
16/06/16 13:52:35 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/06/16 13:52:35 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/06/16 13:52:36 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/06/16 13:52:37 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/16 13:52:37 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/16 13:52:38 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/16 13:52:38 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/16 13:52:38 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:38 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: Added admin role in metastore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: Added public role in metastore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dual
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dual
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:38 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:38 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:38 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:38 INFO property.AppProps: using app.id: E2BFC17E3733458B8BD3479235775E2D
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:38 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:38 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:38 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:38 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:38 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:38 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue2
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:38 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:38 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:38 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue2
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:38 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:38 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:38 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:38 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/16 13:52:38 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:38 INFO util.Util: resolving application jar from found main method on: cascading.hive.HiveDemo
16/06/16 13:52:38 INFO planner.HadoopPlanner: using application jar: /home/jmill383/cascading-hive/demo/build/libs/cascading-hive-demo-1.0.jar
16/06/16 13:52:39 INFO flow.Flow: [uppercase kv -> kv2 ] executed rule registry: MapReduceHadoopRuleRegistry, completed as: SUCCESS, in: 00:00.050
16/06/16 13:52:39 INFO flow.Flow: [uppercase kv -> kv2 ] rule registry: MapReduceHadoopRuleRegistry, supports assembly with steps: 1, nodes: 1
16/06/16 13:52:39 INFO flow.Flow: [uppercase kv -> kv2 ] rule registry: MapReduceHadoopRuleRegistry, result was selected using: 'default comparator: selects plan with fewest steps and fewest nodes'
16/06/16 13:52:39 INFO Configuration.deprecation: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
16/06/16 13:52:39 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
16/06/16 13:52:39 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
16/06/16 13:52:39 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
16/06/16 13:52:39 INFO Configuration.deprecation: mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
16/06/16 13:52:39 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
16/06/16 13:52:39 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
16/06/16 13:52:39 INFO util.Version: Concurrent, Inc - Cascading 3.1.0-wip-60
16/06/16 13:52:39 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] starting
16/06/16 13:52:39 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]  parallel execution of flows is enabled: false
16/06/16 13:52:39 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]  executing total flows: 3
16/06/16 13:52:39 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]  allocating management threads: 1
16/06/16 13:52:39 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] starting flow: load data into dual
16/06/16 13:52:39 INFO flow.Flow: [load data into dual] at least one sink is marked for delete
16/06/16 13:52:39 INFO flow.Flow: [load data into dual] sink oldest modified date: Wed Dec 31 18:59:59 EST 1969
16/06/16 13:52:39 INFO metastore.HiveMetaStore: 1: get_table : db=default tbl=dual
16/06/16 13:52:39 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:39 INFO metastore.HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:39 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:39 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:39 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:39 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:39 INFO hive.HiveTap: strict mode: comparing existing hive table with table descriptor
16/06/16 13:52:39 INFO metastore.HiveMetaStore: 1: Shutting down the object store...
16/06/16 13:52:39 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:39 INFO metastore.HiveMetaStore: 1: Metastore shutdown complete.
16/06/16 13:52:39 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:39 INFO metastore.HiveMetaStore: 2: get_all_databases
16/06/16 13:52:39 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_all_databases   
16/06/16 13:52:39 INFO metastore.HiveMetaStore: 2: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:39 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:39 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:39 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:39 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:39 INFO metastore.HiveMetaStore: 2: get_functions: db=default pat=*
16/06/16 13:52:39 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_functions: db=default pat=*   
16/06/16 13:52:39 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/06/16 13:52:39 INFO session.SessionState: Created local directory: /tmp/005e1e3b-7c20-4007-9068-85c3c37e804a_resources
16/06/16 13:52:39 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/005e1e3b-7c20-4007-9068-85c3c37e804a
16/06/16 13:52:39 INFO session.SessionState: Created local directory: /tmp/jmill383/005e1e3b-7c20-4007-9068-85c3c37e804a
16/06/16 13:52:39 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/005e1e3b-7c20-4007-9068-85c3c37e804a/_tmp_space.db
16/06/16 13:52:39 INFO hive.HiveQueryRunner: running hive query: 'load data local inpath 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt' overwrite into table dual'
16/06/16 13:52:39 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:39 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:39 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:39 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:39 INFO parse.ParseDriver: Parsing command: load data local inpath 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt' overwrite into table dual
16/06/16 13:52:40 INFO parse.ParseDriver: Parse Completed
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=parse start=1466099559782 end=1466099560269 duration=487 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:40 INFO ql.Driver: Semantic Analysis Completed
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1466099560271 end=1466099560384 duration=113 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=compile start=1466099559759 end=1466099560389 duration=630 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO ql.Driver: Starting command(queryId=jmill383_20160616135239_b341a258-10ad-4d41-aec5-594b06e7e52c): load data local inpath 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt' overwrite into table dual
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1466099559759 end=1466099560391 duration=632 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO ql.Driver: Starting task [Stage-0:MOVE] in serial mode
Loading data to table default.dual
16/06/16 13:52:40 INFO exec.Task: Loading data to table default.dual from file:/home/jmill383/cascading-hive/demo/src/main/resources/data.txt
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:40 INFO common.FileUtils: deleting  hdfs://localhost:8025/user/hive/warehouse/dual/data.txt
16/06/16 13:52:40 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
16/06/16 13:52:40 INFO metadata.Hive: Replacing src:file:/home/jmill383/cascading-hive/demo/src/main/resources/data.txt, dest: hdfs://localhost:8025/user/hive/warehouse/dual/data.txt, Status:true
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 2: alter_table: db=default tbl=dual newtbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=dual newtbl=dual   
16/06/16 13:52:40 INFO hive.log: Updating table stats fast for dual
16/06/16 13:52:40 INFO hive.log: Updated size of table dual to 2
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=task.STATS.Stage-1 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO ql.Driver: Starting task [Stage-1:STATS] in serial mode
16/06/16 13:52:40 INFO exec.StatsTask: Executing stats task
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 2: alter_table: db=default tbl=dual newtbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=dual newtbl=dual   
16/06/16 13:52:40 INFO hive.log: Updating table stats fast for dual
16/06/16 13:52:40 INFO hive.log: Updated size of table dual to 2
Table default.dual stats: [numFiles=1, numRows=0, totalSize=2, rawDataSize=0]
16/06/16 13:52:40 INFO exec.Task: Table default.dual stats: [numFiles=1, numRows=0, totalSize=2, rawDataSize=0]
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=runTasks start=1466099560391 end=1466099560786 duration=395 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1466099560389 end=1466099560786 duration=397 from=org.apache.hadoop.hive.ql.Driver>
OK
16/06/16 13:52:40 INFO ql.Driver: OK
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1466099560787 end=1466099560787 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1466099559759 end=1466099560787 duration=1028 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1466099560788 end=1466099560788 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] completed flow: load data into dual
16/06/16 13:52:40 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] starting flow: select data from dual into keyvalue
16/06/16 13:52:40 INFO flow.Flow: [select data from dual ...] at least one sink is marked for delete
16/06/16 13:52:40 INFO flow.Flow: [select data from dual ...] sink oldest modified date: Wed Dec 31 18:59:59 EST 1969
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 1: get_table : db=default tbl=keyvalue
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:40 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:40 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:40 INFO hive.HiveTap: strict mode: comparing existing hive table with table descriptor
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 1: Shutting down the object store...
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 1: Metastore shutdown complete.
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/16 13:52:40 INFO session.SessionState: Created local directory: /tmp/a34f8971-3b4a-4e27-9fba-a006da4ce041_resources
16/06/16 13:52:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041
16/06/16 13:52:40 INFO session.SessionState: Created local directory: /tmp/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041
16/06/16 13:52:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/_tmp_space.db
16/06/16 13:52:40 INFO hive.HiveQueryRunner: running hive query: 'insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual'
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO parse.ParseDriver: Parsing command: insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual
16/06/16 13:52:40 INFO parse.ParseDriver: Parse Completed
16/06/16 13:52:40 INFO log.PerfLogger: </PERFLOG method=parse start=1466099560881 end=1466099560889 duration=8 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:40 INFO parse.CalcitePlanner: Starting Semantic Analysis
16/06/16 13:52:40 INFO parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
16/06/16 13:52:40 INFO parse.CalcitePlanner: Get metadata for source tables
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=dual
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 3: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/16 13:52:40 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/16 13:52:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/16 13:52:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/16 13:52:40 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/16 13:52:40 INFO parse.CalcitePlanner: Get metadata for subqueries
16/06/16 13:52:40 INFO parse.CalcitePlanner: Get metadata for destination tables
16/06/16 13:52:40 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/16 13:52:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/16 13:52:40 INFO parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
16/06/16 13:52:40 INFO parse.BaseSemanticAnalyzer: Not invoking CBO because the statement has too few joins
16/06/16 13:52:40 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-16_13-52-40_881_3574852600279506565-1
16/06/16 13:52:41 INFO parse.CalcitePlanner: Set stats collection dir : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-16_13-52-40_881_3574852600279506565-1/-ext-10001
16/06/16 13:52:41 INFO ppd.OpProcFactory: Processing for FS(2)
16/06/16 13:52:41 INFO ppd.OpProcFactory: Processing for SEL(1)
16/06/16 13:52:41 INFO ppd.OpProcFactory: Processing for TS(0)
16/06/16 13:52:41 INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/16 13:52:41 INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1466099561099 end=1466099561100 duration=1 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/16 13:52:41 INFO optimizer.GenMRFileSink1: using CombineHiveInputformat for the merge job
16/06/16 13:52:41 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/16 13:52:41 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/16 13:52:41 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/16 13:52:41 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/16 13:52:41 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/16 13:52:41 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/16 13:52:41 INFO parse.CalcitePlanner: Completed plan generation
16/06/16 13:52:41 INFO ql.Driver: Semantic Analysis Completed
16/06/16 13:52:41 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1466099560889 end=1466099561115 duration=226 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:41 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:key, type:string, comment:null), FieldSchema(name:value, type:string, comment:null)], properties:null)
16/06/16 13:52:41 INFO log.PerfLogger: </PERFLOG method=compile start=1466099560881 end=1466099561116 duration=235 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:41 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/06/16 13:52:41 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:41 INFO ql.Driver: Starting command(queryId=jmill383_20160616135240_d1a2d880-80c2-4337-8ba9-834f5b81f057): insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual
Query ID = jmill383_20160616135240_d1a2d880-80c2-4337-8ba9-834f5b81f057
16/06/16 13:52:41 INFO ql.Driver: Query ID = jmill383_20160616135240_d1a2d880-80c2-4337-8ba9-834f5b81f057
Total jobs = 3
16/06/16 13:52:41 INFO ql.Driver: Total jobs = 3
16/06/16 13:52:41 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1466099560881 end=1466099561116 duration=235 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:41 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:41 INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>
Launching Job 1 out of 3
16/06/16 13:52:41 INFO ql.Driver: Launching Job 1 out of 3
16/06/16 13:52:41 INFO ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
Number of reduce tasks is set to 0 since there's no reduce operator
16/06/16 13:52:41 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
16/06/16 13:52:41 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1
16/06/16 13:52:41 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
16/06/16 13:52:41 INFO exec.Utilities: Processing alias dual
16/06/16 13:52:41 INFO exec.Utilities: Adding input file hdfs://localhost:8025/user/hive/warehouse/dual
16/06/16 13:52:41 INFO exec.Utilities: Content Summary not cached for hdfs://localhost:8025/user/hive/warehouse/dual
16/06/16 13:52:41 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1
16/06/16 13:52:41 INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/16 13:52:41 INFO exec.Utilities: Serializing MapWork via kryo
16/06/16 13:52:41 INFO log.PerfLogger: </PERFLOG method=serializePlan start=1466099561180 end=1466099561246 duration=66 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/16 13:52:41 INFO Configuration.deprecation: mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
16/06/16 13:52:41 ERROR mr.ExecDriver: yarn
16/06/16 13:52:41 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/16 13:52:41 INFO fs.FSStatsPublisher: created : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-16_13-52-40_881_3574852600279506565-1/-ext-10001
16/06/16 13:52:41 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/16 13:52:41 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1/-mr-10004/d4382cbb-9d51-4550-a1dd-52196510eee9/map.xml
16/06/16 13:52:41 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1/-mr-10004/d4382cbb-9d51-4550-a1dd-52196510eee9/reduce.xml
16/06/16 13:52:41 INFO exec.Utilities: ***************non-local mode***************
16/06/16 13:52:41 INFO exec.Utilities: local path = hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1/-mr-10004/d4382cbb-9d51-4550-a1dd-52196510eee9/reduce.xml
16/06/16 13:52:41 INFO exec.Utilities: Open file to read in plan: hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1/-mr-10004/d4382cbb-9d51-4550-a1dd-52196510eee9/reduce.xml
16/06/16 13:52:41 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1/-mr-10004/d4382cbb-9d51-4550-a1dd-52196510eee9/reduce.xml
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

16/06/16 13:52:41 INFO exec.Utilities: No plan file found: hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1/-mr-10004/d4382cbb-9d51-4550-a1dd-52196510eee9/reduce.xml
16/06/16 13:52:41 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/16 13:52:41 INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/16 13:52:41 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/a34f8971-3b4a-4e27-9fba-a006da4ce041/hive_2016-06-16_13-52-40_881_3574852600279506565-1/-mr-10004/d4382cbb-9d51-4550-a1dd-52196510eee9/map.xml
16/06/16 13:52:41 INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
16/06/16 13:52:41 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://localhost:8025/user/hive/warehouse/dual; using filter path hdfs://localhost:8025/user/hive/warehouse/dual
16/06/16 13:52:41 INFO input.FileInputFormat: Total input paths to process : 1
16/06/16 13:52:41 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
16/06/16 13:52:41 INFO io.CombineHiveInputFormat: number of splits 1
16/06/16 13:52:41 INFO io.CombineHiveInputFormat: Number of all splits 1
16/06/16 13:52:41 INFO log.PerfLogger: </PERFLOG method=getSplits start=1466099561651 end=1466099561678 duration=27 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/16 13:52:41 INFO mapreduce.JobSubmitter: number of splits:1
16/06/16 13:52:41 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1466017117730_0006
16/06/16 13:52:41 INFO impl.YarnClientImpl: Submitted application application_1466017117730_0006
16/06/16 13:52:41 INFO mapreduce.Job: The url to track the job: http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0006/
Starting Job = job_1466017117730_0006, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0006/
16/06/16 13:52:41 INFO exec.Task: Starting Job = job_1466017117730_0006, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0006/
Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466017117730_0006
16/06/16 13:52:41 INFO exec.Task: Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466017117730_0006

Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
16/06/16 13:52:46 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
16/06/16 13:52:46 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-16 13:52:46,077 Stage-1 map = 0%,  reduce = 0%
16/06/16 13:52:46 INFO exec.Task: 2016-06-16 13:52:46,077 Stage-1 map = 0%,  reduce = 0%
16/06/16 13:52:46 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
Ended Job = job_1466017117730_0006 with errors
16/06/16 13:52:46 ERROR exec.Task: Ended Job = job_1466017117730_0006 with errors

Error during job, obtaining debugging information...
16/06/16 13:52:46 ERROR exec.Task: Error during job, obtaining debugging information...
16/06/16 13:52:46 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
Job Tracking URL: http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0006
16/06/16 13:52:46 ERROR exec.Task: Job Tracking URL: http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0006
16/06/16 13:52:46 INFO impl.YarnClientImpl: Killed application application_1466017117730_0006

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
16/06/16 13:52:46 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
16/06/16 13:52:46 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1466099561116 end=1466099566155 duration=5039 from=org.apache.hadoop.hive.ql.Driver>
MapReduce Jobs Launched:
16/06/16 13:52:46 INFO ql.Driver: MapReduce Jobs Launched:
16/06/16 13:52:46 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead

Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
16/06/16 13:52:46 INFO ql.Driver: Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL

Total MapReduce CPU Time Spent: 0 msec
16/06/16 13:52:46 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
16/06/16 13:52:46 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:46 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1466099566158 end=1466099566158 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:46 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:46 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1466099566161 end=1466099566161 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/16 13:52:46 WARN cascade.Cascade: [uppercase kv -> kv2 +l...] flow failed: select data from dual into keyvalue
cascading.CascadingException: hive error 'FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask' while running query insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual
    at cascading.flow.hive.HiveQueryRunner.run(HiveQueryRunner.java:131)
    at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:167)
    at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:41)

    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/06/16 13:52:46 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] stopping all flows
16/06/16 13:52:46 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] stopping flow: uppercase kv -> kv2
16/06/16 13:52:46 INFO flow.Flow: [uppercase kv -> kv2 ] stopping all jobs
16/06/16 13:52:46 INFO flow.Flow: [uppercase kv -> kv2 ] stopping: (1/1) .../hive/warehouse/keyvalue2
16/06/16 13:52:46 INFO flow.Flow: [uppercase kv -> kv2 ] stopped all jobs
16/06/16 13:52:46 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] stopping flow: select data from dual into keyvalue
16/06/16 13:52:46 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] stopping flow: load data into dual
16/06/16 13:52:46 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] stopped all flows
Exception in thread "main" cascading.cascade.CascadeException: flow failed: select data from dual into keyvalue
    at cascading.cascade.BaseCascade$CascadeJob.call(BaseCascade.java:963)
    at cascading.cascade.BaseCascade$CascadeJob.call(BaseCascade.java:900)

    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: cascading.CascadingException: hive error 'FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask' while running query insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual
    at cascading.flow.hive.HiveQueryRunner.run(HiveQueryRunner.java:131)
    at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:167)
    at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:41)
    ... 4 more
[jmill383@starchild demo]$
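
The "return code 2 from MapRedTask" above only says that the MapReduce job behind the insert-overwrite query failed; the actual error is in the container logs of that job's application master. A minimal way to pull them, assuming YARN log aggregation is enabled on this cluster (the application id is the one from the console output above):

    yarn logs -applicationId application_1466017117730_0006

Without log aggregation, the same stdout/stderr files can be reached through the tracking URL printed above or in the NodeManager's local container log directory.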

Below are the Hadoop (YARN ResourceManager) logs for the same cascading-hive run:

2016-06-16 12:23:53,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-06-16 13:52:41,457 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 6
2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 6 submitted by user jmill383
2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1466017117730_0006
2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	IP=127.0.0.1	OPERATION=Submit Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1466017117730_0006
2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from NEW to NEW_SAVING
2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1466017117730_0006
2016-06-16 13:52:41,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from NEW_SAVING to SUBMITTED
2016-06-16 13:52:41,931 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1466017117730_0006 user: jmill383 leaf-queue of parent: root #applications: 1
2016-06-16 13:52:41,931 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1466017117730_0006 from user: jmill383, in queue: default
2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from SUBMITTED to ACCEPTED
2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0006_000001
2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from NEW to SUBMITTED
2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0006 from user: jmill383 activated in queue: default
2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0006 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@d041557, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-06-16 13:52:41,935 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0006_000001 to scheduler from user jmill383 in queue default
2016-06-16 13:52:41,936 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from SUBMITTED to SCHEDULED
2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_01_000001 Container Transitioned from NEW to ALLOCATED
2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0006	CONTAINERID=container_1466017117730_0006_01_000001
2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0006_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0006_000001 container=Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2016-06-16 13:52:42,364 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2016-06-16 13:52:42,365 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0006_01_000001
2016-06-16 13:52:42,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0006_000001
2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0006 AttemptId: appattempt_1466017117730_0006_000001 MasterContainer: Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ]
2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from SCHEDULED to ALLOCATED_SAVING
2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from ALLOCATED_SAVING to ALLOCATED
2016-06-16 13:52:42,366 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0006_000001
2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000001
2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0006_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 
2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0006_000001
2016-06-16 13:52:42,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0006_000001
2016-06-16 13:52:42,373 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000001
2016-06-16 13:52:42,374 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from ALLOCATED to LAUNCHED
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_01_000001 Container Transitioned from ACQUIRED to COMPLETED
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0006_01_000001 in state: COMPLETED event:FINISHED
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0006	CONTAINERID=container_1466017117730_0006_01_000001
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0006_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0006_000001 with final state: FAILED, and exit status: 1
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0>
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from LAUNCHED to FINAL_SAVING
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0006_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-06-16 13:52:43,365 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0006_000001
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0006_000001 released container container_1466017117730_0006_01_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0006_000001
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000001 State change from FINAL_SAVING to FAILED
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0006_000001 is done. finalState=FAILED
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0006_000002
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0006 requests cleared
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from NEW to SUBMITTED
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0006 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0006 from user: jmill383 activated in queue: default
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0006 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@7da36a0f, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-06-16 13:52:43,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0006_000002 to scheduler from user jmill383 in queue default
2016-06-16 13:52:43,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from SUBMITTED to SCHEDULED
2016-06-16 13:52:44,365 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-06-16 13:52:44,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_02_000001 Container Transitioned from NEW to ALLOCATED
2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0006	CONTAINERID=container_1466017117730_0006_02_000001
2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0006_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0006_000002 container=Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2016-06-16 13:52:44,366 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0006_02_000001
2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0006_000002
2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0006 AttemptId: appattempt_1466017117730_0006_000002 MasterContainer: Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ]
2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from SCHEDULED to ALLOCATED_SAVING
2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from ALLOCATED_SAVING to ALLOCATED
2016-06-16 13:52:44,367 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0006_000002
2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000002
2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0006_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 
2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0006_000002
2016-06-16 13:52:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0006_000002
2016-06-16 13:52:44,375 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0006_000002
2016-06-16 13:52:44,375 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from ALLOCATED to LAUNCHED
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0006_02_000001 Container Transitioned from ACQUIRED to COMPLETED
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0006_02_000001 in state: COMPLETED event:FINISHED
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0006	CONTAINERID=container_1466017117730_0006_02_000001
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0006_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0006_000002 with final state: FAILED, and exit status: 1
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0>
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from LAUNCHED to FINAL_SAVING
2016-06-16 13:52:45,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0006_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0006_000002
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0006_000002 released container container_1466017117730_0006_02_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0006_000002
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0006_000002 State change from FINAL_SAVING to FAILED
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1466017117730_0006 with final state: FAILED
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from ACCEPTED to FINAL_SAVING
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0006_000002 is done. finalState=FAILED
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1466017117730_0006
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0006 requests cleared
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1466017117730_0006 failed 2 times due to AM Container for appattempt_1466017117730_0006_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0006/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0006 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0006 State change from FINAL_SAVING to FAILED
2016-06-16 13:52:45,367 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1466017117730_0006 user: jmill383 leaf-queue of parent: root #applications: 0
2016-06-16 13:52:45,367 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=Application Finished - Failed	TARGET=RMAppManager	RESULT=FAILURE	DESCRIPTION=App failed with state: FAILED	PERMISSIONS=Application application_1466017117730_0006 failed 2 times due to AM Container for appattempt_1466017117730_0006_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0006/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.	APPID=application_1466017117730_0006
2016-06-16 13:52:45,368 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1466017117730_0006,name=insert overwrite table keyvalue selec...dual(Stage-1),user=jmill383,queue=default,state=FAILED,trackingUrl=http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0006,appMasterHost=N/A,startTime=1466099561930,finishTime=1466099565367,finalStatus=FAILED
2016-06-16 13:52:46,136 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	IP=127.0.0.1	OPERATION=Kill Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1466017117730_0006
2016-06-16 13:52:46,366 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...


Please advise if you can assist.

John M

Andre Kelpe

unread,
Jun 17, 2016, 6:24:43 AM6/17/16
to cascading-user
This looks like a configuration problem for Hadoop/Hive. Please take a look at the cluster-side logs to see the actual error.
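For example, assuming log aggregation is enabled on your YARN cluster, you can usually pull the stderr of the failed AM container (the ResourceManager log only shows exit code 1) with something like:

    yarn logs -applicationId application_1466017117730_0006

substituting the application id of the failing run, or follow the tracking URL printed by the ResourceManager and open the container logs from there.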

- André

JOHN MILLER

unread,
Jun 17, 2016, 8:06:08 AM6/17/16
to cascading-user
Greetings

Thanks for the response.

Below is a small snippet from the hive.log file. The errors are highlighted, but they do not give much information about the culprit:

2016-06-16 12:23:48,892 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(494)) - number of splits:1
2016-06-16 12:23:48,951 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:printTokens(583)) - Submitting tokens for job: job_1466017117730_0005
2016-06-16 12:23:49,056 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(251)) - Submitted application application_1466017117730_0005
2016-06-16 12:23:49,075 INFO  [main]: mapreduce.Job (Job.java:submit(1300)) - The url to track the job: http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/
2016-06-16 12:23:49,076 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - Starting Job = job_1466017117730_0005, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/
2016-06-16 12:23:49,076 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466017117730_0005
2016-06-16 12:23:53,128 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-06-16 12:23:53,168 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-16 12:23:53,168 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - 2016-06-16 12:23:53,166 Stage-1 map = 0%,  reduce = 0%
2016-06-16 12:23:53,172 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-16 12:23:53,177 ERROR [main]: exec.Task (SessionState.java:printError(957)) - Ended Job = job_1466017117730_0005 with errors
2016-06-16 12:23:53,178 ERROR [Thread-27]: exec.Task (SessionState.java:printError(957)) - Error during job, obtaining debugging information...
2016-06-16 12:23:53,178 INFO  [Thread-27]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1049)) - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-06-16 12:23:53,179 ERROR [Thread-27]: exec.Task (SessionState.java:printError(957)) - Job Tracking URL: http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0005
2016-06-16 12:23:53,207 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(364)) - Killed application application_1466017117730_0005
2016-06-16 12:23:53,226 ERROR [main]: ql.Driver (SessionState.java:printError(957)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2016-06-16 12:23:53,226 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute start=1466094228131 end=1466094233226 duration=5095 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,226 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - MapReduce Jobs Launched:
2016-06-16 12:23:53,228 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-06-16 12:23:53,229 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
2016-06-16 12:23:53,229 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - Total MapReduce CPU Time Spent: 0 msec
2016-06-16 12:23:53,229 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,229 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1466094233229 end=1466094233229 duration=0 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,232 INFO  [main]: exec.ListSinkOperator (Operator.java:close(612)) - 7 finished. closing...
2016-06-16 12:23:53,232 INFO  [main]: exec.ListSinkOperator (Operator.java:close(634)) - 7 Close done
2016-06-16 12:23:53,252 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,253 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1466094233252 end=1466094233253 duration=1 from=org.apache.hadoop.hive.ql.Driver>
[jmill383@starchild jmill383]$

Below is the full snippet:


[jmill383@starchild jmill383]$ more hive.log
2016-06-16 12:23:14,842 WARN  [main]: common.LogUtils (LogUtils.java:logConfigLocation(145)) - hive-site.xml not found on CLASSPATH
2016-06-16 12:23:14,944 INFO  [main]: SessionState (SessionState.java:printInfo(948)) -
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-jdbc-1.2.1-standalone.jar!/hive-log4j.properties
2016-06-16 12:23:15,071 WARN  [main]: util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-06-16 12:23:15,117 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(589)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2016-06-16 12:23:15,153 INFO  [main]: metastore.ObjectStore (ObjectStore.java:initialize(289)) - ObjectStore, initialize called
2016-06-16 12:23:16,622 INFO  [main]: metastore.ObjectStore (ObjectStore.java:getPMF(370)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2016-06-16 12:23:17,906 INFO  [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(139)) - Using direct SQL, underlying DB is DERBY
2016-06-16 12:23:17,908 INFO  [main]: metastore.ObjectStore (ObjectStore.java:setConf(272)) - Initialized ObjectStore
2016-06-16 12:23:18,100 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles_core(663)) - Added admin role in metastore
2016-06-16 12:23:18,101 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles_core(672)) - Added public role in metastore
2016-06-16 12:23:18,155 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers_core(712)) - No user is added in admin role, since config is empty
2016-06-16 12:23:18,231 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_all_databases
2016-06-16 12:23:18,232 INFO  [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=jmill383    ip=unknown-ip-addr    cmd=get_all_databases   
2016-06-16 12:23:18,246 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_functions: db=default pat=*
2016-06-16 12:23:18,246 INFO  [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=jmill383    ip=unknown-ip-addr    cmd=get_functions: db=default pat=*   
2016-06-16 12:23:18,305 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_functions: db=warc_part_db pat=*
2016-06-16 12:23:18,306 INFO  [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=jmill383    ip=unknown-ip-addr    cmd=get_functions: db=warc_part_db pat=*   
2016-06-16 12:23:18,626 INFO  [main]: session.SessionState (SessionState.java:createPath(638)) - Created local directory: /tmp/a7f6e5ec-1f3e-4128-acca-939348c274e9_resources
2016-06-16 12:23:18,646 INFO  [main]: session.SessionState (SessionState.java:createPath(638)) - Created HDFS directory: /tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9
2016-06-16 12:23:18,649 INFO  [main]: session.SessionState (SessionState.java:createPath(638)) - Created local directory: /tmp/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9
2016-06-16 12:23:18,661 INFO  [main]: session.SessionState (SessionState.java:createPath(638)) - Created HDFS directory: /tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/_tmp_space.db
2016-06-16 12:23:47,095 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:47,096 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:47,096 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:47,115 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:47,117 INFO  [main]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: select distinct(warctype) from commoncrawl18
2016-06-16 12:23:47,533 INFO  [main]: parse.ParseDriver (ParseDriver.java:parse(209)) - Parse Completed
2016-06-16 12:23:47,533 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=parse start=1466094227115 end=1466094227533 duration=418 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:47,535 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:47,569 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:analyzeInternal(10042)) - Starting Semantic Analysis
2016-06-16 12:23:47,570 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(10025)) - Completed phase 1 of Semantic Analysis
2016-06-16 12:23:47,570 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1530)) - Get metadata for source tables
2016-06-16 12:23:47,570 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_table : db=default tbl=commoncrawl18
2016-06-16 12:23:47,570 INFO  [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=commoncrawl18   
2016-06-16 12:23:47,751 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1682)) - Get metadata for subqueries
2016-06-16 12:23:47,755 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1706)) - Get metadata for destination tables
2016-06-16 12:23:47,793 INFO  [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1
2016-06-16 12:23:47,795 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(10029)) - Completed getting MetaData in Semantic Analysis
2016-06-16 12:23:47,799 INFO  [main]: parse.BaseSemanticAnalyzer (CalcitePlanner.java:canCBOHandleAst(386)) - Not invoking CBO because the statement has too few joins
2016-06-16 12:23:47,924 INFO  [main]: common.FileUtils (FileUtils.java:mkdir(501)) - Creating directory if it doesn't exist: hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1/-mr-10000/.hive-staging_hive_2016-06-16_12-23-47_115_7454725954954088432-1
2016-06-16 12:23:48,009 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genFileSinkPlan(6630)) - Set stats collection dir : hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1/-mr-10000/.hive-staging_hive_2016-06-16_12-23-47_115_7454725954954088432-1/-ext-10002
2016-06-16 12:23:48,054 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for FS(6)
2016-06-16 12:23:48,055 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(5)
2016-06-16 12:23:48,055 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for GBY(4)
2016-06-16 12:23:48,055 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for RS(3)
2016-06-16 12:23:48,055 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for GBY(2)
2016-06-16 12:23:48,055 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(1)
2016-06-16 12:23:48,055 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(382)) - Processing for TS(0)
2016-06-16 12:23:48,067 INFO  [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(817)) - RS 3 oldColExprMap: {KEY._col0=Column[_col0]}
2016-06-16 12:23:48,067 INFO  [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(866)) - RS 3 newColExprMap: {KEY._col0=Column[_col0]}
2016-06-16 12:23:48,099 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
2016-06-16 12:23:48,099 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=partition-retrieving start=1466094228099 end=1466094228099 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
2016-06-16 12:23:48,108 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2016-06-16 12:23:48,109 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2016-06-16 12:23:48,109 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2016-06-16 12:23:48,109 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2016-06-16 12:23:48,110 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2016-06-16 12:23:48,110 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2016-06-16 12:23:48,111 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:analyzeInternal(10128)) - Completed plan generation
2016-06-16 12:23:48,111 INFO  [main]: ql.Driver (Driver.java:compile(436)) - Semantic Analysis Completed
2016-06-16 12:23:48,111 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=semanticAnalyze start=1466094227535 end=1466094228111 duration=576 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:48,126 INFO  [main]: exec.ListSinkOperator (Operator.java:initialize(332)) - Initializing operator OP[7]
2016-06-16 12:23:48,127 INFO  [main]: exec.ListSinkOperator (Operator.java:initialize(372)) - Initialization Done 7 OP
2016-06-16 12:23:48,127 INFO  [main]: exec.ListSinkOperator (Operator.java:initializeChildren(429)) - Operator 7 OP initialized
2016-06-16 12:23:48,131 INFO  [main]: ql.Driver (Driver.java:getSchema(240)) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:warctype, type:string, comment:null)], properties:null)
2016-06-16 12:23:48,131 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=compile start=1466094227096 end=1466094228131 duration=1035 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:48,131 INFO  [main]: ql.Driver (Driver.java:checkConcurrency(160)) - Concurrency mode is disabled, not creating a lock manager
2016-06-16 12:23:48,131 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:48,131 INFO  [main]: ql.Driver (Driver.java:execute(1325)) - Starting command(queryId=jmill383_20160616122347_d611d884-2c87-4bc2-bfe6-497590da7085): select distinct(warctype) from commoncrawl18
2016-06-16 12:23:48,132 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - Query ID = jmill383_20160616122347_d611d884-2c87-4bc2-bfe6-497590da7085
2016-06-16 12:23:48,132 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - Total jobs = 1
2016-06-16 12:23:48,133 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=TimeToSubmit start=1466094227096 end=1466094228133 duration=1037 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:48,133 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:48,133 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:48,141 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - Launching Job 1 out of 1
2016-06-16 12:23:48,143 INFO  [main]: ql.Driver (Driver.java:launchTask(1648)) - Starting task [Stage-1:MAPRED] in serial mode
2016-06-16 12:23:48,143 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=getInputSummary from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-06-16 12:23:48,157 INFO  [main]: exec.Utilities (Utilities.java:getInputSummary(2648)) - Cache Content Summary for hdfs://localhost:8025/user/hive/warehouse/commoncrawl18 length: 2785280 file count: 5 directory count: 1
2016-06-16 12:23:48,158 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=getInputSummary start=1466094228143 end=1466094228158 duration=15 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-06-16 12:23:48,158 INFO  [main]: exec.Utilities (Utilities.java:estimateNumberOfReducers(3244)) - BytesPerReducer=256000000 maxReducers=1009 totalInputFileSize=2785280
2016-06-16 12:23:48,158 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - Number of reduce tasks not specified. Estimated from input data size: 1
2016-06-16 12:23:48,158 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - In order to change the average load for a reducer (in bytes):
2016-06-16 12:23:48,158 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) -   set hive.exec.reducers.bytes.per.reducer=<number>
2016-06-16 12:23:48,158 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - In order to limit the maximum number of reducers:
2016-06-16 12:23:48,158 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) -   set hive.exec.reducers.max=<number>
2016-06-16 12:23:48,159 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - In order to set a constant number of reducers:
2016-06-16 12:23:48,159 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) -   set mapreduce.job.reduces=<number>
2016-06-16 12:23:48,159 INFO  [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1
2016-06-16 12:23:48,170 INFO  [main]: mr.ExecDriver (ExecDriver.java:execute(288)) - Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
2016-06-16 12:23:48,172 INFO  [main]: exec.Utilities (Utilities.java:getInputPaths(3390)) - Processing alias commoncrawl18
2016-06-16 12:23:48,172 INFO  [main]: exec.Utilities (Utilities.java:getInputPaths(3407)) - Adding input file hdfs://localhost:8025/user/hive/warehouse/commoncrawl18
2016-06-16 12:23:48,172 INFO  [main]: exec.Utilities (Utilities.java:isEmptyPath(2687)) - Content Summary hdfs://localhost:8025/user/hive/warehouse/commoncrawl18length: 2785280 num files: 5 num directories: 1
2016-06-16 12:23:48,172 INFO  [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1
2016-06-16 12:23:48,215 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-06-16 12:23:48,216 INFO  [main]: exec.Utilities (Utilities.java:serializePlan(937)) - Serializing MapWork via kryo
2016-06-16 12:23:48,344 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=serializePlan start=1466094228215 end=1466094228344 duration=129 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-06-16 12:23:48,345 INFO  [main]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1049)) - mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
2016-06-16 12:23:48,359 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-06-16 12:23:48,359 INFO  [main]: exec.Utilities (Utilities.java:serializePlan(937)) - Serializing ReduceWork via kryo
2016-06-16 12:23:48,384 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=serializePlan start=1466094228359 end=1466094228384 duration=25 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-06-16 12:23:48,392 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(400)) - yarn
2016-06-16 12:23:48,423 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /127.0.0.1:8032
2016-06-16 12:23:48,544 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /127.0.0.1:8032
2016-06-16 12:23:48,548 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(389)) - PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1/-mr-10004/4a54a6ad-5f9e-4a9f-924d-16abb10ab9fc/map.xml
2016-06-16 12:23:48,549 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(389)) - PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1/-mr-10004/4a54a6ad-5f9e-4a9f-924d-16abb10ab9fc/reduce.xml
2016-06-16 12:23:48,652 WARN  [main]: mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(153)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-06-16 12:23:48,782 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-06-16 12:23:48,783 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(389)) - PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/a7f6e5ec-1f3e-4128-acca-939348c274e9/hive_2016-06-16_12-23-47_115_7454725954954088432-1/-mr-10004/4a54a6ad-5f9e-4a9f-924d-16abb10ab9fc/map.xml
2016-06-16 12:23:48,783 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(517)) - Total number of paths: 1, launching 1 threads to check non-combinable ones.
2016-06-16 12:23:48,793 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(439)) - CombineHiveInputSplit creating pool for hdfs://localhost:8025/user/hive/warehouse/commoncrawl18; using filter path hdfs://localhost:8025/user/hive/warehouse/commoncrawl18
2016-06-16 12:23:48,801 INFO  [main]: input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 5
2016-06-16 12:23:48,803 INFO  [main]: input.CombineFileInputFormat (CombineFileInputFormat.java:createSplits(413)) - DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
2016-06-16 12:23:48,804 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(494)) - number of splits 1
2016-06-16 12:23:48,804 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(587)) - Number of all splits 1
2016-06-16 12:23:48,804 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=getSplits start=1466094228782 end=1466094228804 duration=22 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-06-16 12:23:48,892 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(494)) - number of splits:1
2016-06-16 12:23:48,951 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:printTokens(583)) - Submitting tokens for job: job_1466017117730_0005
2016-06-16 12:23:49,056 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(251)) - Submitted application application_1466017117730_0005
2016-06-16 12:23:49,075 INFO  [main]: mapreduce.Job (Job.java:submit(1300)) - The url to track the job: http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/
2016-06-16 12:23:49,076 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - Starting Job = job_1466017117730_0005, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0005/
2016-06-16 12:23:49,076 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466017117730_0005
2016-06-16 12:23:53,128 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-06-16 12:23:53,168 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-16 12:23:53,168 INFO  [main]: exec.Task (SessionState.java:printInfo(948)) - 2016-06-16 12:23:53,166 Stage-1 map = 0%,  reduce = 0%
2016-06-16 12:23:53,172 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-16 12:23:53,177 ERROR [main]: exec.Task (SessionState.java:printError(957)) - Ended Job = job_1466017117730_0005 with errors
2016-06-16 12:23:53,178 ERROR [Thread-27]: exec.Task (SessionState.java:printError(957)) - Error during job, obtaining debugging information...
2016-06-16 12:23:53,178 INFO  [Thread-27]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1049)) - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-06-16 12:23:53,179 ERROR [Thread-27]: exec.Task (SessionState.java:printError(957)) - Job Tracking URL: http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0005
2016-06-16 12:23:53,207 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(364)) - Killed application application_1466017117730_0005
2016-06-16 12:23:53,226 ERROR [main]: ql.Driver (SessionState.java:printError(957)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2016-06-16 12:23:53,226 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute start=1466094228131 end=1466094233226 duration=5095 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,226 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - MapReduce Jobs Launched:
2016-06-16 12:23:53,228 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-06-16 12:23:53,229 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
2016-06-16 12:23:53,229 INFO  [main]: ql.Driver (SessionState.java:printInfo(948)) - Total MapReduce CPU Time Spent: 0 msec
2016-06-16 12:23:53,229 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,229 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1466094233229 end=1466094233229 duration=0 from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,232 INFO  [main]: exec.ListSinkOperator (Operator.java:close(612)) - 7 finished. closing...
2016-06-16 12:23:53,232 INFO  [main]: exec.ListSinkOperator (Operator.java:close(634)) - 7 Close done
2016-06-16 12:23:53,252 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-06-16 12:23:53,253 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1466094233252 end=1466094233253 duration=1 from=org.apache.hadoop.hive.ql.Driver>
[jmill383@starchild jmill383]$

JOHN MILLER

unread,
Jun 17, 2016, 8:18:10 AM6/17/16
to cascading-user
Greetings

Below is a copy of the ResourceManager log:


2016-06-17 08:11:50,851 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 7
2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 7 submitted by user jmill383
2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1466017117730_0007
2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from NEW to NEW_SAVING
2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	IP=127.0.0.1	OPERATION=Submit Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1466017117730_0007
2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1466017117730_0007
2016-06-17 08:11:51,321 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from NEW_SAVING to SUBMITTED
2016-06-17 08:11:51,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1466017117730_0007 user: jmill383 leaf-queue of parent: root #applications: 1
2016-06-17 08:11:51,322 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1466017117730_0007 from user: jmill383, in queue: default
2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from SUBMITTED to ACCEPTED
2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0007_000001
2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from NEW to SUBMITTED
2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0007 from user: jmill383 activated in queue: default
2016-06-17 08:11:51,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0007 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@78ebdb9f, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-06-17 08:11:51,327 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0007_000001 to scheduler from user jmill383 in queue default
2016-06-17 08:11:51,327 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from SUBMITTED to SCHEDULED
2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_01_000001 Container Transitioned from NEW to ALLOCATED
2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0007	CONTAINERID=container_1466017117730_0007_01_000001
2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0007_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-06-17 08:11:52,256 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0007_000001 container=Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2016-06-17 08:11:52,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2016-06-17 08:11:52,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2016-06-17 08:11:52,258 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0007_01_000001
2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0007_000001
2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0007 AttemptId: appattempt_1466017117730_0007_000001 MasterContainer: Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ]
2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from SCHEDULED to ALLOCATED_SAVING
2016-06-17 08:11:52,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from ALLOCATED_SAVING to ALLOCATED
2016-06-17 08:11:52,261 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0007_000001
2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000001
2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0007_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 
2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0007_000001
2016-06-17 08:11:52,263 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0007_000001
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000001
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from ALLOCATED to LAUNCHED
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_01_000001 Container Transitioned from ACQUIRED to COMPLETED
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0007_01_000001 in state: COMPLETED event:FINISHED
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0007	CONTAINERID=container_1466017117730_0007_01_000001
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0007_01_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0007_000001 with final state: FAILED, and exit status: 1
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0>
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from LAUNCHED to FINAL_SAVING
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0007_01_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0007_000001
2016-06-17 08:11:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0007_000001 released container container_1466017117730_0007_01_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0007_000001
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000001 State change from FINAL_SAVING to FAILED
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1466017117730_0007_000002
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0007_000001 is done. finalState=FAILED
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from NEW to SUBMITTED
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0007 requests cleared
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0007 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-06-17 08:11:53,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1466017117730_0007 from user: jmill383 activated in queue: default
2016-06-17 08:11:53,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1466017117730_0007 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@625b5f17, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-06-17 08:11:53,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1466017117730_0007_000002 to scheduler from user jmill383 in queue default
2016-06-17 08:11:53,259 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from SUBMITTED to SCHEDULED
2016-06-17 08:11:54,258 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-06-17 08:11:54,258 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_02_000001 Container Transitioned from NEW to ALLOCATED
2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0007	CONTAINERID=container_1466017117730_0007_02_000001
2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1466017117730_0007_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1466017117730_0007_000002 container=Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2016-06-17 08:11:54,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2016-06-17 08:11:54,261 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : starchild.ltsnet.net:32963 for container : container_1466017117730_0007_02_000001
2016-06-17 08:11:54,262 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-06-17 08:11:54,262 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1466017117730_0007_000002
2016-06-17 08:11:54,262 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1466017117730_0007 AttemptId: appattempt_1466017117730_0007_000002 MasterContainer: Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ]
2016-06-17 08:11:54,263 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from SCHEDULED to ALLOCATED_SAVING
2016-06-17 08:11:54,263 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from ALLOCATED_SAVING to ALLOCATED
2016-06-17 08:11:54,263 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1466017117730_0007_000002
2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000002
2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1466017117730_0007_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 
2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1466017117730_0007_000002
2016-06-17 08:11:54,265 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1466017117730_0007_000002
2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] for AM appattempt_1466017117730_0007_000002
2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from ALLOCATED to LAUNCHED
2016-06-17 08:11:55,259 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1466017117730_0007_02_000001 Container Transitioned from ACQUIRED to COMPLETED
2016-06-17 08:11:55,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1466017117730_0007_02_000001 in state: COMPLETED event:FINISHED
2016-06-17 08:11:55,259 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1466017117730_0007	CONTAINERID=container_1466017117730_0007_02_000001
2016-06-17 08:11:55,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1466017117730_0007_02_000001 of capacity <memory:2048, vCores:1> on host starchild.ltsnet.net:32963, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1466017117730_0007_000002 with final state: FAILED, and exit status: 1
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=jmill383 user-resources=<memory:0, vCores:0>
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from LAUNCHED to FINAL_SAVING
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1466017117730_0007_02_000001, NodeId: starchild.ltsnet.net:32963, NodeHttpAddress: starchild.ltsnet.net:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.40.190.207:32963 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1466017117730_0007_000002
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1466017117730_0007_000002 released container container_1466017117730_0007_02_000001 on node: host: starchild.ltsnet.net:32963 #containers=0 available=8192 used=0 with event: FINISHED
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1466017117730_0007_000002
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1466017117730_0007_000002 State change from FINAL_SAVING to FAILED
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1466017117730_0007 with final state: FAILED
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from ACCEPTED to FINAL_SAVING
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1466017117730_0007_000002 is done. finalState=FAILED
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1466017117730_0007
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1466017117730_0007 requests cleared
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1466017117730_0007 user: jmill383 queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-06-17 08:11:55,260 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1466017117730_0007 failed 2 times due to AM Container for appattempt_1466017117730_0007_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0007/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0007_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-06-17 08:11:55,261 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1466017117730_0007 State change from FINAL_SAVING to FAILED
2016-06-17 08:11:55,261 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1466017117730_0007 user: jmill383 leaf-queue of parent: root #applications: 0
2016-06-17 08:11:55,261 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	OPERATION=Application Finished - Failed	TARGET=RMAppManager	RESULT=FAILURE	DESCRIPTION=App failed with state: FAILED	PERMISSIONS=Application application_1466017117730_0007 failed 2 times due to AM Container for appattempt_1466017117730_0007_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://starchild.ltsnet.net:8088/proxy/application_1466017117730_0007/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466017117730_0007_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.	APPID=application_1466017117730_0007
2016-06-17 08:11:55,261 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1466017117730_0007,name=insert overwrite table keyvalue selec...dual(Stage-1),user=jmill383,queue=default,state=FAILED,trackingUrl=http://starchild.ltsnet.net:8088/cluster/app/application_1466017117730_0007,appMasterHost=N/A,startTime=1466165511321,finishTime=1466165515260,finalStatus=FAILED
2016-06-17 08:11:55,652 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=jmill383	IP=127.0.0.1	OPERATION=Kill Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1466017117730_0007
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...

Below is a copy of the nodemanager log

2016-06-17 08:11:52,270 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1466017117730_0007_000001 (auth:SIMPLE)
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1466017117730_0007_01_000001 by user jmill383
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1466017117730_0007
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 IP=10.40.190.207 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_01_000001
2016-06-17 08:11:52,274 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from NEW to INITING
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1466017117730_0007_01_000001 to application application_1466017117730_0007
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from INITING to RUNNING
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from NEW to LOCALIZING
2016-06-17 08:11:52,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1466017117730_0007
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.jar transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.splitmetainfo transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.split transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.xml transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hive/jmill383/7aaa4ab5-bffc-4a5e-9b25-626e34603153/hive_2016-06-17_08-11-50_298_1846359449944448840-1/-mr-10004/60053f51-d5c2-4eb7-a55a-472b0cc36df8/map.xml transitioned from INIT to DOWNLOADING
2016-06-17 08:11:52,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1466017117730_0007_01_000001
2016-06-17 08:11:52,280 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-jmill383/nm-local-dir/nmPrivate/container_1466017117730_0007_01_000001.tokens. Credentials list:
2016-06-17 08:11:52,288 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user jmill383
2016-06-17 08:11:52,294 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-jmill383/nm-local-dir/nmPrivate/container_1466017117730_0007_01_000001.tokens to /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_01_000001.tokens
2016-06-17 08:11:52,295 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007 = file:/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007
2016-06-17 08:11:52,383 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.jar(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/10/job.jar) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,399 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.splitmetainfo(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/11/job.splitmetainfo) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,414 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.split(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/12/job.split) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,430 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hadoop-yarn/staging/jmill383/.staging/job_1466017117730_0007/job.xml(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/filecache/13/job.xml) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,446 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:8025/tmp/hive/jmill383/7aaa4ab5-bffc-4a5e-9b25-626e34603153/hive_2016-06-17_08-11-50_298_1846359449944448840-1/-mr-10004/60053f51-d5c2-4eb7-a55a-472b0cc36df8/map.xml(->/tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/filecache/17/map.xml) transitioned from DOWNLOADING to LOCALIZED
2016-06-17 08:11:52,446 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from LOCALIZING to LOCALIZED
2016-06-17 08:11:52,461 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from LOCALIZED to RUNNING
2016-06-17 08:11:52,476 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_01_000001/default_container_executor.sh]
2016-06-17 08:11:52,588 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1466017117730_0007_01_000001 is : 1
2016-06-17 08:11:52,588 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1466017117730_0007_01_000001 and exit code: 1
ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_1466017117730_0007_01_000001
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 1
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Stack trace: ExitCodeException exitCode=1:
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.util.Shell.run(Shell.java:455)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.util.concurrent.FutureTask.run(FutureTask.java:262)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:52,588 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2016-06-17 08:11:52,588 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1466017117730_0007_01_000001
2016-06-17 08:11:52,601 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_01_000001
2016-06-17 08:11:52,601 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_01_000001
2016-06-17 08:11:52,601 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_01_000001 transitioned from EXITED_WITH_FAILURE to DONE
2016-06-17 08:11:52,601 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1466017117730_0007_01_000001 from application application_1466017117730_0007
2016-06-17 08:11:52,602 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1466017117730_0007
2016-06-17 08:11:52,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1466017117730_0007_01_000001
2016-06-17 08:11:52,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1466017117730_0007_01_000001
2016-06-17 08:11:54,258 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1466017117730_0007_01_000001]
2016-06-17 08:11:54,270 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1466017117730_0007_000002 (auth:SIMPLE)
2016-06-17 08:11:54,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1466017117730_0007_02_000001 by user jmill383
2016-06-17 08:11:54,275 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 IP=10.40.190.207 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_02_000001
2016-06-17 08:11:54,275 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1466017117730_0007_02_000001 to application application_1466017117730_0007
2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from NEW to LOCALIZING
2016-06-17 08:11:54,276 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1466017117730_0007
2016-06-17 08:11:54,277 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from LOCALIZING to LOCALIZED
2016-06-17 08:11:54,303 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from LOCALIZED to RUNNING
2016-06-17 08:11:54,318 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_02_000001/default_container_executor.sh]
2016-06-17 08:11:54,426 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1466017117730_0007_02_000001 is : 1
2016-06-17 08:11:54,426 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1466017117730_0007_02_000001 and exit code: 1
ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_1466017117730_0007_02_000001
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 1
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Stack trace: ExitCodeException exitCode=1:
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.util.Shell.run(Shell.java:455)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.util.concurrent.FutureTask.run(FutureTask.java:262)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: at java.lang.Thread.run(Thread.java:745)
2016-06-17 08:11:54,427 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2016-06-17 08:11:54,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1466017117730_0007_02_000001
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007/container_1466017117730_0007_02_000001
2016-06-17 08:11:54,438 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=jmill383 OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE APPID=application_1466017117730_0007 CONTAINERID=container_1466017117730_0007_02_000001
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1466017117730_0007_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1466017117730_0007_02_000001 from application application_1466017117730_0007
2016-06-17 08:11:54,438 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1466017117730_0007
2016-06-17 08:11:55,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1466017117730_0007_02_000001
2016-06-17 08:11:55,611 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1466017117730_0007_02_000001
2016-06-17 08:11:56,259 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1466017117730_0007_02_000001]
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-jmill383/nm-local-dir/usercache/jmill383/appcache/application_1466017117730_0007
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1466017117730_0007
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1466017117730_0007 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2016-06-17 08:11:56,260 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1466017117730_0007, with delay of 10800 seconds

JOHN MILLER

unread,
Jun 28, 2016, 10:07:25 AM6/28/16
to cascading-user
Greetings Andre

I resolved the configuration/memory issues I had before. I am wondering if you could assist in verifying that the output listed below is indeed correct.

I have a cascading-hive issue. I am running the cascading-hive demo project from GitHub. Hadoop says my job executed successfully, but I have two issues that are preventing me from believing the job was 100% successful.

My first issue is listed below: the reduce.xml file cannot be located, for whatever reason.

16/06/28 09:35:13 INFO exec.Utilities: Open file to read in plan: hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/reduce.xml
16/06/28 09:35:13 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/reduce.xml

    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
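
From what I can tell, Hive only writes reduce.xml for stages that actually have a reduce phase, so this INFO-level "File not found" may simply mean the stage was planned map-only rather than indicate a real failure. Below is the quick check I put together to confirm that. It is only a sketch: the PlanFileCheck class name and the plan-directory argument are my own, not anything from Hive or cascading-hive.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PlanFileCheck {
    public static void main(String[] args) throws IOException {
        // pass in the scratch "-mr-10004/<uuid>" directory printed in the log above
        Path planDir = new Path(args[0]);
        FileSystem fs = planDir.getFileSystem(new Configuration());

        // a map-only stage serializes map.xml but no reduce.xml
        System.out.println("map.xml exists:    " + fs.exists(new Path(planDir, "map.xml")));
        System.out.println("reduce.xml exists: " + fs.exists(new Path(planDir, "reduce.xml")));
    }
}

If map.xml is present and reduce.xml is not, I am assuming the message can be ignored.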


My second issue, listed below, regards the results I am given in Hadoop. I don't know if these are the results I am supposed to be seeing; I am thinking that I am missing something. The directory listing and the contents of each part file are below, followed by the sanity-check flow I am planning to run.



-rw-r--r--   3 jmill383 supergroup          0 2016-06-28 09:35 /user/hive/warehouse/keyvalue2/_SUCCESS
-rw-r--r--   3 jmill383 supergroup         17 2016-06-28 09:35 /user/hive/warehouse/keyvalue2/part-00000
-rw-r--r--   3 jmill383 supergroup         14 2016-06-28 09:35 /user/hive/warehouse/keyvalue2/part-00001
-rw-r--r--   3 jmill383 supergroup         12 2016-06-28 09:35 /user/hive/warehouse/keyvalue2/part-00002


[jmill383@starchild ~]$ /opt/hadoop/bin/hadoop fs -cat /user/hive/warehouse/keyvalue2/part-00000

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/06/28 09:44:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
MULTIPLE QUERIES
[jmill383@starchild ~]$ /opt/hadoop/bin/hadoop fs -cat /user/hive/warehouse/keyvalue2/part-00001

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/06/28 09:44:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ARE SUPPORTED
[jmill383@starchild ~]$ /opt/hadoop/bin/hadoop fs -cat /user/hive/warehouse/keyvalue2/part-00002

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/06/28 09:44:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
HELLO HIVE!
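
For context, keyvalue2 is written by the "uppercase kv -> kv2" flow, so uppercase rows split across three part files look plausible to me; I just want to confirm it from the Cascading side as well. This is the sanity-check flow I am planning to run, only a rough sketch: the VerifyKeyValue2 class name and the /tmp/keyvalue2-dump output path are made up, and the imports assume cascading-hive's HiveTap/HiveTableDescriptor as used elsewhere in this thread.

import cascading.flow.Flow;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.pipe.Pipe;
import cascading.scheme.hadoop.TextDelimited;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tap.hive.HiveTableDescriptor;
import cascading.tap.hive.HiveTap;

public class VerifyKeyValue2 {
    public static void main(String[] args) {
        // describe the existing keyvalue2 table (two string columns, as created by the demo)
        HiveTableDescriptor desc = new HiveTableDescriptor("keyvalue2",
                new String[]{"key", "value"},
                new String[]{"string", "string"});

        // read the table through a HiveTap and write it to a plain text location on HDFS
        Tap source = new HiveTap(desc, desc.toScheme(), SinkMode.KEEP, false);
        Tap sink = new Hfs(new TextDelimited(), "/tmp/keyvalue2-dump", SinkMode.REPLACE);

        // straight copy, no transformation
        Pipe copy = new Pipe("copy keyvalue2");
        Flow flow = new HadoopFlowConnector().connect(source, sink, copy);
        flow.complete();
    }
}

The idea is just to land the whole table in one tab-delimited location so a single fs -cat shows every row.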

Please advise if you can assist. The full log is listed below.


[jmill383@starchild demo]$ /opt/hadoop/bin/hadoop jar build/libs/cascading-hive-demo-1.0.jar cascading.hive.HiveDemo
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/06/28 09:34:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/28 09:34:46 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:46 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:46 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/tmp/hadoop-unjar4097488226604019535/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/usr/local/hive/lib/datanucleus-api-jdo-3.2.6.jar."
16/06/28 09:34:46 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/hive/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/tmp/hadoop-unjar4097488226604019535/lib/datanucleus-rdbms-3.2.9.jar."
16/06/28 09:34:46 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/usr/local/hive/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/tmp/hadoop-unjar4097488226604019535/lib/datanucleus-core-3.2.10.jar."
16/06/28 09:34:46 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/06/28 09:34:46 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/06/28 09:34:49 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/06/28 09:34:50 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:34:50 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:34:50 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:34:50 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:34:50 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:50 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:50 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:50 INFO metastore.HiveMetaStore: Added admin role in metastore
16/06/28 09:34:50 INFO metastore.HiveMetaStore: Added public role in metastore
16/06/28 09:34:50 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/06/28 09:34:50 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dual
16/06/28 09:34:50 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dual
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:51 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:51 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:51 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:51 INFO property.AppProps: using app.id: FBDA9DC4B7244F0FA13832BCAA3931DA
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:51 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:51 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:51 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:51 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:51 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:51 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue2
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:51 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:51 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:51 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue2
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:51 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:51 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:51 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:51 INFO util.Util: resolving application jar from found main method on: cascading.hive.HiveDemo
16/06/28 09:34:51 INFO planner.HadoopPlanner: using application jar: /home/jmill383/cascading-hive/demo/build/libs/cascading-hive-demo-1.0.jar
16/06/28 09:34:51 INFO flow.Flow: [uppercase kv -> kv2 ] executed rule registry: MapReduceHadoopRuleRegistry, completed as: SUCCESS, in: 00:00.049
16/06/28 09:34:51 INFO flow.Flow: [uppercase kv -> kv2 ] rule registry: MapReduceHadoopRuleRegistry, supports assembly with steps: 1, nodes: 1
16/06/28 09:34:51 INFO flow.Flow: [uppercase kv -> kv2 ] rule registry: MapReduceHadoopRuleRegistry, result was selected using: 'default comparator: selects plan with fewest steps and fewest nodes'
16/06/28 09:34:51 INFO Configuration.deprecation: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
16/06/28 09:34:51 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
16/06/28 09:34:51 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
16/06/28 09:34:51 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
16/06/28 09:34:51 INFO Configuration.deprecation: mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
16/06/28 09:34:51 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
16/06/28 09:34:51 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
16/06/28 09:34:51 INFO util.Version: Concurrent, Inc - Cascading 3.1.0-wip-60
16/06/28 09:34:51 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] starting
16/06/28 09:34:51 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]  parallel execution of flows is enabled: false
16/06/28 09:34:51 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]  executing total flows: 3
16/06/28 09:34:51 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]  allocating management threads: 1
16/06/28 09:34:51 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] starting flow: load data into dual
16/06/28 09:34:51 INFO flow.Flow: [load data into dual] at least one sink is marked for delete
16/06/28 09:34:51 INFO flow.Flow: [load data into dual] sink oldest modified date: Wed Dec 31 18:59:59 EST 1969
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 1: get_table : db=default tbl=dual
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:51 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:51 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:51 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:51 INFO hive.HiveTap: strict mode: comparing existing hive table with table descriptor
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 1: Shutting down the object store...
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 1: Metastore shutdown complete.
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 2: get_all_databases
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_all_databases   
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 2: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:51 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:51 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:51 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:51 INFO metastore.HiveMetaStore: 2: get_functions: db=default pat=*
16/06/28 09:34:51 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_functions: db=default pat=*   
16/06/28 09:34:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:34:52 INFO session.SessionState: Created local directory: /tmp/618394da-6db9-47b6-a3d0-e7f3ca7bb682_resources
16/06/28 09:34:52 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/618394da-6db9-47b6-a3d0-e7f3ca7bb682
16/06/28 09:34:52 INFO session.SessionState: Created local directory: /tmp/jmill383/618394da-6db9-47b6-a3d0-e7f3ca7bb682
16/06/28 09:34:52 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/618394da-6db9-47b6-a3d0-e7f3ca7bb682/_tmp_space.db
16/06/28 09:34:52 INFO hive.HiveQueryRunner: running hive query: 'load data local inpath 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt' overwrite into table dual'
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO parse.ParseDriver: Parsing command: load data local inpath 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt' overwrite into table dual
16/06/28 09:34:52 INFO parse.ParseDriver: Parse Completed
16/06/28 09:34:52 INFO log.PerfLogger: </PERFLOG method=parse start=1467120892117 end=1467120892605 duration=488 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/28 09:34:52 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:52 INFO ql.Driver: Semantic Analysis Completed
16/06/28 09:34:52 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1467120892606 end=1467120892729 duration=123 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
16/06/28 09:34:52 INFO log.PerfLogger: </PERFLOG method=compile start=1467120892097 end=1467120892735 duration=638 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO ql.Driver: Starting command(queryId=jmill383_20160628093452_46558a56-8d07-4265-a725-7f4d92020877): load data local inpath 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt' overwrite into table dual
16/06/28 09:34:52 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1467120892097 end=1467120892737 duration=640 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:52 INFO ql.Driver: Starting task [Stage-0:MOVE] in serial mode

Loading data to table default.dual
16/06/28 09:34:52 INFO exec.Task: Loading data to table default.dual from file:/home/jmill383/cascading-hive/demo/src/main/resources/data.txt
16/06/28 09:34:52 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/28 09:34:52 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:52 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/28 09:34:52 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:52 INFO common.FileUtils: deleting  hdfs://localhost:8025/user/hive/warehouse/dual/data.txt
16/06/28 09:34:52 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
16/06/28 09:34:52 INFO metadata.Hive: Replacing src:file:/home/jmill383/cascading-hive/demo/src/main/resources/data.txt, dest: hdfs://localhost:8025/user/hive/warehouse/dual/data.txt, Status:true
16/06/28 09:34:52 INFO metastore.HiveMetaStore: 2: alter_table: db=default tbl=dual newtbl=dual
16/06/28 09:34:52 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=dual newtbl=dual   
16/06/28 09:34:52 INFO hive.log: Updating table stats fast for dual
16/06/28 09:34:52 INFO hive.log: Updated size of table dual to 2
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=task.STATS.Stage-1 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO ql.Driver: Starting task [Stage-1:STATS] in serial mode
16/06/28 09:34:53 INFO exec.StatsTask: Executing stats task
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 2: get_table : db=default tbl=dual
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 2: alter_table: db=default tbl=dual newtbl=dual
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=dual newtbl=dual   
16/06/28 09:34:53 INFO hive.log: Updating table stats fast for dual
16/06/28 09:34:53 INFO hive.log: Updated size of table dual to 2

Table default.dual stats: [numFiles=1, numRows=0, totalSize=2, rawDataSize=0]
16/06/28 09:34:53 INFO exec.Task: Table default.dual stats: [numFiles=1, numRows=0, totalSize=2, rawDataSize=0]
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=runTasks start=1467120892737 end=1467120893112 duration=375 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1467120892735 end=1467120893112 duration=377 from=org.apache.hadoop.hive.ql.Driver>
OK
16/06/28 09:34:53 INFO ql.Driver: OK
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120893113 end=1467120893113 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1467120892097 end=1467120893113 duration=1016 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120893114 end=1467120893114 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] completed flow: load data into dual
16/06/28 09:34:53 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] starting flow: select data from dual into keyvalue
16/06/28 09:34:53 INFO flow.Flow: [select data from dual ...] at least one sink is marked for delete
16/06/28 09:34:53 INFO flow.Flow: [select data from dual ...] sink oldest modified date: Wed Dec 31 18:59:59 EST 1969
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 1: get_table : db=default tbl=keyvalue
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:53 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:53 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:53 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:53 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:53 INFO hive.HiveTap: strict mode: comparing existing hive table with table descriptor
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 1: Shutting down the object store...
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 1: Metastore shutdown complete.
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:34:53 INFO session.SessionState: Created local directory: /tmp/e633ab4b-3b34-4389-912e-02737ef9a352_resources
16/06/28 09:34:53 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352
16/06/28 09:34:53 INFO session.SessionState: Created local directory: /tmp/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352
16/06/28 09:34:53 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/_tmp_space.db
16/06/28 09:34:53 INFO hive.HiveQueryRunner: running hive query: 'insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual'
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO parse.ParseDriver: Parsing command: insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual
16/06/28 09:34:53 INFO parse.ParseDriver: Parse Completed
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=parse start=1467120893195 end=1467120893203 duration=8 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO parse.CalcitePlanner: Starting Semantic Analysis
16/06/28 09:34:53 INFO parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
16/06/28 09:34:53 INFO parse.CalcitePlanner: Get metadata for source tables
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=dual
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 3: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:34:53 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:34:53 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:34:53 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:34:53 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:34:53 INFO parse.CalcitePlanner: Get metadata for subqueries
16/06/28 09:34:53 INFO parse.CalcitePlanner: Get metadata for destination tables
16/06/28 09:34:53 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:34:53 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:34:53 INFO parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
16/06/28 09:34:53 INFO parse.BaseSemanticAnalyzer: Not invoking CBO because the statement has too few joins
16/06/28 09:34:53 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1
16/06/28 09:34:53 INFO parse.CalcitePlanner: Set stats collection dir : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10001
16/06/28 09:34:53 INFO ppd.OpProcFactory: Processing for FS(2)
16/06/28 09:34:53 INFO ppd.OpProcFactory: Processing for SEL(1)
16/06/28 09:34:53 INFO ppd.OpProcFactory: Processing for TS(0)
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1467120893458 end=1467120893459 duration=1 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/28 09:34:53 INFO optimizer.GenMRFileSink1: using CombineHiveInputformat for the merge job
16/06/28 09:34:53 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:34:53 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:34:53 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:34:53 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:34:53 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:34:53 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:34:53 INFO parse.CalcitePlanner: Completed plan generation
16/06/28 09:34:53 INFO ql.Driver: Semantic Analysis Completed
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1467120893203 end=1467120893476 duration=273 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:key, type:string, comment:null), FieldSchema(name:value, type:string, comment:null)], properties:null)
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=compile start=1467120893194 end=1467120893476 duration=282 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO ql.Driver: Starting command(queryId=jmill383_20160628093453_f1b9a412-7a95-4fa6-bcf4-98047a845985): insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual
Query ID = jmill383_20160628093453_f1b9a412-7a95-4fa6-bcf4-98047a845985
16/06/28 09:34:53 INFO ql.Driver: Query ID = jmill383_20160628093453_f1b9a412-7a95-4fa6-bcf4-98047a845985
Total jobs = 3
16/06/28 09:34:53 INFO ql.Driver: Total jobs = 3
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1467120893194 end=1467120893476 duration=282 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>

Launching Job 1 out of 3
16/06/28 09:34:53 INFO ql.Driver: Launching Job 1 out of 3
16/06/28 09:34:53 INFO ql.Driver: Starting task [Stage-1:MAPRED] in serial mode

Number of reduce tasks is set to 0 since there's no reduce operator
16/06/28 09:34:53 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
16/06/28 09:34:53 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1
16/06/28 09:34:53 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
16/06/28 09:34:53 INFO exec.Utilities: Processing alias dual
16/06/28 09:34:53 INFO exec.Utilities: Adding input file hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:34:53 INFO exec.Utilities: Content Summary not cached for hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:34:53 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/28 09:34:53 INFO exec.Utilities: Serializing MapWork via kryo
16/06/28 09:34:53 INFO log.PerfLogger: </PERFLOG method=serializePlan start=1467120893528 end=1467120893585 duration=57 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/28 09:34:53 INFO Configuration.deprecation: mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
16/06/28 09:34:53 ERROR mr.ExecDriver: yarn
16/06/28 09:34:53 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:34:53 INFO fs.FSStatsPublisher: created : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10001
16/06/28 09:34:53 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:34:53 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1/-mr-10004/615baaaf-9612-4cf0-99d5-6c7d1c2296ba/map.xml
16/06/28 09:34:53 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1/-mr-10004/615baaaf-9612-4cf0-99d5-6c7d1c2296ba/reduce.xml
16/06/28 09:34:53 INFO exec.Utilities: ***************non-local mode***************
16/06/28 09:34:53 INFO exec.Utilities: local path = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1/-mr-10004/615baaaf-9612-4cf0-99d5-6c7d1c2296ba/reduce.xml
16/06/28 09:34:53 INFO exec.Utilities: Open file to read in plan: hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1/-mr-10004/615baaaf-9612-4cf0-99d5-6c7d1c2296ba/reduce.xml
16/06/28 09:34:53 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1/-mr-10004/615baaaf-9612-4cf0-99d5-6c7d1c2296ba/reduce.xml

    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

16/06/28 09:34:53 INFO exec.Utilities: No plan file found: hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1/-mr-10004/615baaaf-9612-4cf0-99d5-6c7d1c2296ba/reduce.xml
16/06/28 09:34:53 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/28 09:34:53 INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/28 09:34:53 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-34-53_195_4347723315783187064-1/-mr-10004/615baaaf-9612-4cf0-99d5-6c7d1c2296ba/map.xml
16/06/28 09:34:53 INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
16/06/28 09:34:54 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://localhost:8025/user/hive/warehouse/dual; using filter path hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:34:54 INFO input.FileInputFormat: Total input paths to process : 1
16/06/28 09:34:54 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
16/06/28 09:34:54 INFO io.CombineHiveInputFormat: number of splits 1
16/06/28 09:34:54 INFO io.CombineHiveInputFormat: Number of all splits 1
16/06/28 09:34:54 INFO log.PerfLogger: </PERFLOG method=getSplits start=1467120893990 end=1467120894016 duration=26 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/28 09:34:54 INFO mapreduce.JobSubmitter: number of splits:1
16/06/28 09:34:54 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1466695700491_0039
16/06/28 09:34:54 INFO impl.YarnClientImpl: Submitted application application_1466695700491_0039
16/06/28 09:34:54 INFO mapreduce.Job: The url to track the job: http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0039/
Starting Job = job_1466695700491_0039, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0039/
16/06/28 09:34:54 INFO exec.Task: Starting Job = job_1466695700491_0039, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0039/
Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466695700491_0039
16/06/28 09:34:54 INFO exec.Task: Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466695700491_0039
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
16/06/28 09:34:57 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
16/06/28 09:34:57 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-28 09:34:57,414 Stage-1 map = 0%,  reduce = 0%
16/06/28 09:34:57 INFO exec.Task: 2016-06-28 09:34:57,414 Stage-1 map = 0%,  reduce = 0%
2016-06-28 09:35:02,550 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.88 sec
16/06/28 09:35:02 INFO exec.Task: 2016-06-28 09:35:02,550 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.88 sec
MapReduce Total cumulative CPU time: 880 msec
16/06/28 09:35:03 INFO exec.Task: MapReduce Total cumulative CPU time: 880 msec
Ended Job = job_1466695700491_0039
16/06/28 09:35:03 INFO exec.Task: Ended Job = job_1466695700491_0039
16/06/28 09:35:03 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/_tmp.-ext-10002 to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10002
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=task.CONDITION.Stage-7 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO ql.Driver: Starting task [Stage-7:CONDITIONAL] in serial mode
Stage-4 is selected by condition resolver.
16/06/28 09:35:03 INFO exec.Task: Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
16/06/28 09:35:03 INFO exec.Task: Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
16/06/28 09:35:03 INFO exec.Task: Stage-5 is filtered out by condition resolver.
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-4 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO ql.Driver: Starting task [Stage-4:MOVE] in serial mode
Moving data to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10000
16/06/28 09:35:03 INFO exec.Task: Moving data to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10000 from hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10002
16/06/28 09:35:03 INFO metadata.Hive: Replacing src:hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10002, dest: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10000, Status:true
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO ql.Driver: Starting task [Stage-0:MOVE] in serial mode
Loading data to table default.keyvalue
16/06/28 09:35:03 INFO exec.Task: Loading data to table default.keyvalue from hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10000
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:03 INFO common.FileUtils: deleting  hdfs://localhost:8025/user/hive/warehouse/keyvalue/000000_0
16/06/28 09:35:03 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
16/06/28 09:35:03 INFO common.FileUtils: deleting  hdfs://localhost:8025/user/hive/warehouse/keyvalue/000000_0_copy_1
16/06/28 09:35:03 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
16/06/28 09:35:03 INFO common.FileUtils: deleting  hdfs://localhost:8025/user/hive/warehouse/keyvalue/000000_0_copy_2
16/06/28 09:35:03 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
16/06/28 09:35:03 INFO metadata.Hive: Replacing src:hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10000/000000_0, dest: hdfs://localhost:8025/user/hive/warehouse/keyvalue/000000_0, Status:true
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: alter_table: db=default tbl=keyvalue newtbl=keyvalue
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=keyvalue newtbl=keyvalue   
16/06/28 09:35:03 INFO hive.log: Updating table stats fast for keyvalue
16/06/28 09:35:03 INFO hive.log: Updated size of table keyvalue to 12
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=task.STATS.Stage-2 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO ql.Driver: Starting task [Stage-2:STATS] in serial mode
16/06/28 09:35:03 INFO exec.StatsTask: Executing stats task
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:03 INFO fs.FSStatsPublisher: created : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-34-53_195_4347723315783187064-1/-ext-10001
16/06/28 09:35:03 INFO fs.FSStatsAggregator: Read stats : {default.keyvalue/={numRows=1, rawDataSize=11}}
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:03 INFO fs.FSStatsAggregator: Read stats for : default.keyvalue/    numRows    1
16/06/28 09:35:03 INFO fs.FSStatsAggregator: Read stats for : default.keyvalue/    rawDataSize    11
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: alter_table: db=default tbl=keyvalue newtbl=keyvalue
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=keyvalue newtbl=keyvalue   
16/06/28 09:35:03 INFO hive.log: Updating table stats fast for keyvalue
16/06/28 09:35:03 INFO hive.log: Updated size of table keyvalue to 12
Table default.keyvalue stats: [numFiles=1, numRows=1, totalSize=12, rawDataSize=11]
16/06/28 09:35:03 INFO exec.Task: Table default.keyvalue stats: [numFiles=1, numRows=1, totalSize=12, rawDataSize=11]
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=runTasks start=1467120893476 end=1467120903829 duration=10353 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1467120893476 end=1467120903829 duration=10353 from=org.apache.hadoop.hive.ql.Driver>
MapReduce Jobs Launched:
16/06/28 09:35:03 INFO ql.Driver: MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 0.88 sec   HDFS Read: 3289 HDFS Write: 84 SUCCESS
16/06/28 09:35:03 INFO ql.Driver: Stage-Stage-1: Map: 1   Cumulative CPU: 0.88 sec   HDFS Read: 3289 HDFS Write: 84 SUCCESS
Total MapReduce CPU Time Spent: 880 msec
16/06/28 09:35:03 INFO ql.Driver: Total MapReduce CPU Time Spent: 880 msec
OK
16/06/28 09:35:03 INFO ql.Driver: OK
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120903830 end=1467120903830 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1467120893194 end=1467120903830 duration=10636 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO hive.HiveQueryRunner: running hive query: 'insert into table keyvalue select 'Multiple' as key, 'queries' as value from dual'
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO parse.ParseDriver: Parsing command: insert into table keyvalue select 'Multiple' as key, 'queries' as value from dual
16/06/28 09:35:03 INFO parse.ParseDriver: Parse Completed
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=parse start=1467120903846 end=1467120903847 duration=1 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO parse.CalcitePlanner: Starting Semantic Analysis
16/06/28 09:35:03 INFO parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
16/06/28 09:35:03 INFO parse.CalcitePlanner: Get metadata for source tables
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=dual
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:35:03 INFO parse.CalcitePlanner: Get metadata for subqueries
16/06/28 09:35:03 INFO parse.CalcitePlanner: Get metadata for destination tables
16/06/28 09:35:03 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:03 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:03 INFO parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
16/06/28 09:35:03 INFO parse.BaseSemanticAnalyzer: Not invoking CBO because the statement has too few joins
16/06/28 09:35:03 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1
16/06/28 09:35:03 INFO parse.CalcitePlanner: Set stats collection dir : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10001
16/06/28 09:35:03 INFO ppd.OpProcFactory: Processing for FS(7)
16/06/28 09:35:03 INFO ppd.OpProcFactory: Processing for SEL(6)
16/06/28 09:35:03 INFO ppd.OpProcFactory: Processing for TS(5)
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1467120903878 end=1467120903878 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/28 09:35:03 INFO optimizer.GenMRFileSink1: using CombineHiveInputformat for the merge job
16/06/28 09:35:03 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:35:03 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:35:03 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:35:03 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:35:03 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:35:03 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:35:03 INFO parse.CalcitePlanner: Completed plan generation
16/06/28 09:35:03 INFO ql.Driver: Semantic Analysis Completed
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1467120903847 end=1467120903879 duration=32 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:key, type:string, comment:null), FieldSchema(name:value, type:string, comment:null)], properties:null)
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=compile start=1467120903830 end=1467120903879 duration=49 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO ql.Driver: Starting command(queryId=jmill383_20160628093503_2412ed34-efaf-4b30-b4a3-f862307a3fb6): insert into table keyvalue select 'Multiple' as key, 'queries' as value from dual
Query ID = jmill383_20160628093503_2412ed34-efaf-4b30-b4a3-f862307a3fb6
16/06/28 09:35:03 INFO ql.Driver: Query ID = jmill383_20160628093503_2412ed34-efaf-4b30-b4a3-f862307a3fb6
Total jobs = 3
16/06/28 09:35:03 INFO ql.Driver: Total jobs = 3
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1467120903830 end=1467120903880 duration=50 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>

Launching Job 1 out of 3
16/06/28 09:35:03 INFO ql.Driver: Launching Job 1 out of 3
16/06/28 09:35:03 INFO ql.Driver: Starting task [Stage-1:MAPRED] in serial mode

Number of reduce tasks is set to 0 since there's no reduce operator
16/06/28 09:35:03 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
16/06/28 09:35:03 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1
16/06/28 09:35:03 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
16/06/28 09:35:03 INFO exec.Utilities: Processing alias dual
16/06/28 09:35:03 INFO exec.Utilities: Adding input file hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:35:03 INFO exec.Utilities: Content Summary not cached for hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:35:03 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1
16/06/28 09:35:03 INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/28 09:35:03 INFO exec.Utilities: Serializing MapWork via kryo
16/06/28 09:35:03 INFO log.PerfLogger: </PERFLOG method=serializePlan start=1467120903913 end=1467120903929 duration=16 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/28 09:35:03 ERROR mr.ExecDriver: yarn
16/06/28 09:35:03 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:35:03 INFO fs.FSStatsPublisher: created : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10001
16/06/28 09:35:03 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:35:03 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1/-mr-10004/0aa98e98-6df7-404c-aa36-ea154f44c9fa/map.xml
16/06/28 09:35:03 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1/-mr-10004/0aa98e98-6df7-404c-aa36-ea154f44c9fa/reduce.xml
16/06/28 09:35:03 INFO exec.Utilities: ***************non-local mode***************
16/06/28 09:35:03 INFO exec.Utilities: local path = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1/-mr-10004/0aa98e98-6df7-404c-aa36-ea154f44c9fa/reduce.xml
16/06/28 09:35:03 INFO exec.Utilities: Open file to read in plan: hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1/-mr-10004/0aa98e98-6df7-404c-aa36-ea154f44c9fa/reduce.xml
16/06/28 09:35:03 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1/-mr-10004/0aa98e98-6df7-404c-aa36-ea154f44c9fa/reduce.xml

    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

16/06/28 09:35:03 INFO exec.Utilities: No plan file found: hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1/-mr-10004/0aa98e98-6df7-404c-aa36-ea154f44c9fa/reduce.xml
16/06/28 09:35:03 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/28 09:35:04 INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/28 09:35:04 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-03_846_6893561345056796100-1/-mr-10004/0aa98e98-6df7-404c-aa36-ea154f44c9fa/map.xml
16/06/28 09:35:04 INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
16/06/28 09:35:04 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://localhost:8025/user/hive/warehouse/dual; using filter path hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:35:04 INFO input.FileInputFormat: Total input paths to process : 1
16/06/28 09:35:04 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
16/06/28 09:35:04 INFO io.CombineHiveInputFormat: number of splits 1
16/06/28 09:35:04 INFO io.CombineHiveInputFormat: Number of all splits 1
16/06/28 09:35:04 INFO log.PerfLogger: </PERFLOG method=getSplits start=1467120904100 end=1467120904105 duration=5 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/28 09:35:04 INFO mapreduce.JobSubmitter: number of splits:1
16/06/28 09:35:04 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1466695700491_0040
16/06/28 09:35:04 INFO impl.YarnClientImpl: Submitted application application_1466695700491_0040
16/06/28 09:35:04 INFO mapreduce.Job: The url to track the job: http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0040/
Starting Job = job_1466695700491_0040, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0040/
16/06/28 09:35:04 INFO exec.Task: Starting Job = job_1466695700491_0040, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0040/
Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466695700491_0040
16/06/28 09:35:04 INFO exec.Task: Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466695700491_0040
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
16/06/28 09:35:08 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
16/06/28 09:35:08 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-28 09:35:08,367 Stage-1 map = 0%,  reduce = 0%
16/06/28 09:35:08 INFO exec.Task: 2016-06-28 09:35:08,367 Stage-1 map = 0%,  reduce = 0%
2016-06-28 09:35:12,471 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.89 sec
16/06/28 09:35:12 INFO exec.Task: 2016-06-28 09:35:12,471 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.89 sec
MapReduce Total cumulative CPU time: 890 msec
16/06/28 09:35:13 INFO exec.Task: MapReduce Total cumulative CPU time: 890 msec
Ended Job = job_1466695700491_0040
16/06/28 09:35:13 INFO exec.Task: Ended Job = job_1466695700491_0040
16/06/28 09:35:13 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/_tmp.-ext-10002 to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10002
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=task.CONDITION.Stage-7 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO ql.Driver: Starting task [Stage-7:CONDITIONAL] in serial mode
Stage-4 is selected by condition resolver.
16/06/28 09:35:13 INFO exec.Task: Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
16/06/28 09:35:13 INFO exec.Task: Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
16/06/28 09:35:13 INFO exec.Task: Stage-5 is filtered out by condition resolver.
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-4 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO ql.Driver: Starting task [Stage-4:MOVE] in serial mode
Moving data to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10000
16/06/28 09:35:13 INFO exec.Task: Moving data to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10000 from hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10002
16/06/28 09:35:13 INFO metadata.Hive: Replacing src:hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10002, dest: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10000, Status:true
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO ql.Driver: Starting task [Stage-0:MOVE] in serial mode
Loading data to table default.keyvalue
16/06/28 09:35:13 INFO exec.Task: Loading data to table default.keyvalue from hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10000
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:13 INFO metadata.Hive: Renaming src: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10000/000000_0, dest: hdfs://localhost:8025/user/hive/warehouse/keyvalue/000000_0_copy_1, Status:true
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: alter_table: db=default tbl=keyvalue newtbl=keyvalue
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=keyvalue newtbl=keyvalue   
16/06/28 09:35:13 INFO hive.log: Updating table stats fast for keyvalue
16/06/28 09:35:13 INFO hive.log: Updated size of table keyvalue to 29
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=task.STATS.Stage-2 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO ql.Driver: Starting task [Stage-2:STATS] in serial mode
16/06/28 09:35:13 INFO exec.StatsTask: Executing stats task
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:13 INFO fs.FSStatsPublisher: created : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-03_846_6893561345056796100-1/-ext-10001
16/06/28 09:35:13 INFO fs.FSStatsAggregator: Read stats : {default.keyvalue/={numRows=1, rawDataSize=16}}
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:13 INFO fs.FSStatsAggregator: Read stats for : default.keyvalue/    numRows    1
16/06/28 09:35:13 INFO fs.FSStatsAggregator: Read stats for : default.keyvalue/    rawDataSize    16
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: alter_table: db=default tbl=keyvalue newtbl=keyvalue
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=keyvalue newtbl=keyvalue   
16/06/28 09:35:13 INFO hive.log: Updating table stats fast for keyvalue
16/06/28 09:35:13 INFO hive.log: Updated size of table keyvalue to 29
Table default.keyvalue stats: [numFiles=2, numRows=2, totalSize=29, rawDataSize=27]
16/06/28 09:35:13 INFO exec.Task: Table default.keyvalue stats: [numFiles=2, numRows=2, totalSize=29, rawDataSize=27]
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=runTasks start=1467120903880 end=1467120913715 duration=9835 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1467120903879 end=1467120913715 duration=9836 from=org.apache.hadoop.hive.ql.Driver>
MapReduce Jobs Launched:
16/06/28 09:35:13 INFO ql.Driver: MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 0.89 sec   HDFS Read: 3458 HDFS Write: 89 SUCCESS
16/06/28 09:35:13 INFO ql.Driver: Stage-Stage-1: Map: 1   Cumulative CPU: 0.89 sec   HDFS Read: 3458 HDFS Write: 89 SUCCESS
Total MapReduce CPU Time Spent: 890 msec
16/06/28 09:35:13 INFO ql.Driver: Total MapReduce CPU Time Spent: 890 msec
OK
16/06/28 09:35:13 INFO ql.Driver: OK
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120913715 end=1467120913715 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1467120903830 end=1467120913715 duration=9885 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO hive.HiveQueryRunner: running hive query: 'insert into table keyvalue select 'are' as key, 'supported' as value from dual'
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO parse.ParseDriver: Parsing command: insert into table keyvalue select 'are' as key, 'supported' as value from dual
16/06/28 09:35:13 INFO parse.ParseDriver: Parse Completed
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=parse start=1467120913732 end=1467120913732 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO parse.CalcitePlanner: Starting Semantic Analysis
16/06/28 09:35:13 INFO parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
16/06/28 09:35:13 INFO parse.CalcitePlanner: Get metadata for source tables
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=dual
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual   
16/06/28 09:35:13 INFO parse.CalcitePlanner: Get metadata for subqueries
16/06/28 09:35:13 INFO parse.CalcitePlanner: Get metadata for destination tables
16/06/28 09:35:13 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:13 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:13 INFO parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
16/06/28 09:35:13 INFO parse.BaseSemanticAnalyzer: Not invoking CBO because the statement has too few joins
16/06/28 09:35:13 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1
16/06/28 09:35:13 INFO parse.CalcitePlanner: Set stats collection dir : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10001
16/06/28 09:35:13 INFO ppd.OpProcFactory: Processing for FS(12)
16/06/28 09:35:13 INFO ppd.OpProcFactory: Processing for SEL(11)
16/06/28 09:35:13 INFO ppd.OpProcFactory: Processing for TS(10)
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1467120913764 end=1467120913764 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/06/28 09:35:13 INFO optimizer.GenMRFileSink1: using CombineHiveInputformat for the merge job
16/06/28 09:35:13 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:35:13 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:35:13 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:35:13 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:35:13 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/06/28 09:35:13 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/06/28 09:35:13 INFO parse.CalcitePlanner: Completed plan generation
16/06/28 09:35:13 INFO ql.Driver: Semantic Analysis Completed
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1467120913732 end=1467120913765 duration=33 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:key, type:string, comment:null), FieldSchema(name:value, type:string, comment:null)], properties:null)
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=compile start=1467120913715 end=1467120913765 duration=50 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO ql.Driver: Starting command(queryId=jmill383_20160628093513_e09d643f-f389-4ff2-b308-1aa821ca90ed): insert into table keyvalue select 'are' as key, 'supported' as value from dual
Query ID = jmill383_20160628093513_e09d643f-f389-4ff2-b308-1aa821ca90ed
16/06/28 09:35:13 INFO ql.Driver: Query ID = jmill383_20160628093513_e09d643f-f389-4ff2-b308-1aa821ca90ed
Total jobs = 3
16/06/28 09:35:13 INFO ql.Driver: Total jobs = 3
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1467120913715 end=1467120913766 duration=51 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>

Launching Job 1 out of 3
16/06/28 09:35:13 INFO ql.Driver: Launching Job 1 out of 3
16/06/28 09:35:13 INFO ql.Driver: Starting task [Stage-1:MAPRED] in serial mode

Number of reduce tasks is set to 0 since there's no reduce operator
16/06/28 09:35:13 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
16/06/28 09:35:13 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1
16/06/28 09:35:13 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
16/06/28 09:35:13 INFO exec.Utilities: Processing alias dual
16/06/28 09:35:13 INFO exec.Utilities: Adding input file hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:35:13 INFO exec.Utilities: Content Summary not cached for hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:35:13 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1
16/06/28 09:35:13 INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/28 09:35:13 INFO exec.Utilities: Serializing MapWork via kryo
16/06/28 09:35:13 INFO log.PerfLogger: </PERFLOG method=serializePlan start=1467120913798 end=1467120913814 duration=16 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/06/28 09:35:13 ERROR mr.ExecDriver: yarn
16/06/28 09:35:13 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:35:13 INFO fs.FSStatsPublisher: created : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10001
16/06/28 09:35:13 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:35:13 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/map.xml
16/06/28 09:35:13 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/reduce.xml
16/06/28 09:35:13 INFO exec.Utilities: ***************non-local mode***************
16/06/28 09:35:13 INFO exec.Utilities: local path = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/reduce.xml
16/06/28 09:35:13 INFO exec.Utilities: Open file to read in plan: hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/reduce.xml
16/06/28 09:35:13 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/reduce.xml

    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

16/06/28 09:35:13 INFO exec.Utilities: No plan file found: hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/reduce.xml
16/06/28 09:35:13 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/28 09:35:14 INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/28 09:35:14 INFO exec.Utilities: PLAN PATH = hdfs://localhost:8025/tmp/hive/jmill383/e633ab4b-3b34-4389-912e-02737ef9a352/hive_2016-06-28_09-35-13_732_8803492905504424077-1/-mr-10004/227233a8-aa2d-4d63-9b59-0185b7289ff0/map.xml
16/06/28 09:35:14 INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
16/06/28 09:35:14 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://localhost:8025/user/hive/warehouse/dual; using filter path hdfs://localhost:8025/user/hive/warehouse/dual
16/06/28 09:35:14 INFO input.FileInputFormat: Total input paths to process : 1
16/06/28 09:35:14 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
16/06/28 09:35:14 INFO io.CombineHiveInputFormat: number of splits 1
16/06/28 09:35:14 INFO io.CombineHiveInputFormat: Number of all splits 1
16/06/28 09:35:14 INFO log.PerfLogger: </PERFLOG method=getSplits start=1467120914034 end=1467120914040 duration=6 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/06/28 09:35:14 INFO mapreduce.JobSubmitter: number of splits:1
16/06/28 09:35:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1466695700491_0041
16/06/28 09:35:14 INFO impl.YarnClientImpl: Submitted application application_1466695700491_0041
16/06/28 09:35:14 INFO mapreduce.Job: The url to track the job: http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0041/
Starting Job = job_1466695700491_0041, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0041/
16/06/28 09:35:14 INFO exec.Task: Starting Job = job_1466695700491_0041, Tracking URL = http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0041/
Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466695700491_0041
16/06/28 09:35:14 INFO exec.Task: Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1466695700491_0041
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
16/06/28 09:35:17 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
16/06/28 09:35:17 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-06-28 09:35:17,297 Stage-1 map = 0%,  reduce = 0%
16/06/28 09:35:17 INFO exec.Task: 2016-06-28 09:35:17,297 Stage-1 map = 0%,  reduce = 0%
2016-06-28 09:35:21,393 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.93 sec
16/06/28 09:35:21 INFO exec.Task: 2016-06-28 09:35:21,393 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.93 sec
MapReduce Total cumulative CPU time: 930 msec
16/06/28 09:35:23 INFO exec.Task: MapReduce Total cumulative CPU time: 930 msec
Ended Job = job_1466695700491_0041
16/06/28 09:35:23 INFO exec.Task: Ended Job = job_1466695700491_0041
16/06/28 09:35:23 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/_tmp.-ext-10002 to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10002
16/06/28 09:35:23 INFO log.PerfLogger: <PERFLOG method=task.CONDITION.Stage-7 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO ql.Driver: Starting task [Stage-7:CONDITIONAL] in serial mode
Stage-4 is selected by condition resolver.
16/06/28 09:35:23 INFO exec.Task: Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
16/06/28 09:35:23 INFO exec.Task: Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
16/06/28 09:35:23 INFO exec.Task: Stage-5 is filtered out by condition resolver.
16/06/28 09:35:23 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-4 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO ql.Driver: Starting task [Stage-4:MOVE] in serial mode
Moving data to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10000
16/06/28 09:35:23 INFO exec.Task: Moving data to: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10000 from hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10002
16/06/28 09:35:23 INFO metadata.Hive: Replacing src:hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10002, dest: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10000, Status:true
16/06/28 09:35:23 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO ql.Driver: Starting task [Stage-0:MOVE] in serial mode
Loading data to table default.keyvalue
16/06/28 09:35:23 INFO exec.Task: Loading data to table default.keyvalue from hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10000
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:23 INFO metadata.Hive: Renaming src: hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10000/000000_0, dest: hdfs://localhost:8025/user/hive/warehouse/keyvalue/000000_0_copy_2, Status:true
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 3: alter_table: db=default tbl=keyvalue newtbl=keyvalue
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=keyvalue newtbl=keyvalue   
16/06/28 09:35:23 INFO hive.log: Updating table stats fast for keyvalue
16/06/28 09:35:23 INFO hive.log: Updated size of table keyvalue to 43
16/06/28 09:35:23 INFO log.PerfLogger: <PERFLOG method=task.STATS.Stage-2 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO ql.Driver: Starting task [Stage-2:STATS] in serial mode
16/06/28 09:35:23 INFO exec.StatsTask: Executing stats task
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:23 INFO fs.FSStatsPublisher: created : hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-06-28_09-35-13_732_8803492905504424077-1/-ext-10001
16/06/28 09:35:23 INFO fs.FSStatsAggregator: Read stats : {default.keyvalue/={numRows=1, rawDataSize=13}}
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 3: get_table : db=default tbl=keyvalue
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue   
16/06/28 09:35:23 INFO fs.FSStatsAggregator: Read stats for : default.keyvalue/    numRows    1
16/06/28 09:35:23 INFO fs.FSStatsAggregator: Read stats for : default.keyvalue/    rawDataSize    13
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 3: alter_table: db=default tbl=keyvalue newtbl=keyvalue
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=alter_table: db=default tbl=keyvalue newtbl=keyvalue   
16/06/28 09:35:23 INFO hive.log: Updating table stats fast for keyvalue
16/06/28 09:35:23 INFO hive.log: Updated size of table keyvalue to 43
Table default.keyvalue stats: [numFiles=3, numRows=3, totalSize=43, rawDataSize=40]
16/06/28 09:35:23 INFO exec.Task: Table default.keyvalue stats: [numFiles=3, numRows=3, totalSize=43, rawDataSize=40]
16/06/28 09:35:23 INFO log.PerfLogger: </PERFLOG method=runTasks start=1467120913766 end=1467120923767 duration=10001 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1467120913765 end=1467120923767 duration=10002 from=org.apache.hadoop.hive.ql.Driver>
MapReduce Jobs Launched:
16/06/28 09:35:23 INFO ql.Driver: MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 0.93 sec   HDFS Read: 3458 HDFS Write: 86 SUCCESS
16/06/28 09:35:23 INFO ql.Driver: Stage-Stage-1: Map: 1   Cumulative CPU: 0.93 sec   HDFS Read: 3458 HDFS Write: 86 SUCCESS
Total MapReduce CPU Time Spent: 930 msec
16/06/28 09:35:23 INFO ql.Driver: Total MapReduce CPU Time Spent: 930 msec
OK
16/06/28 09:35:23 INFO ql.Driver: OK
16/06/28 09:35:23 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120923767 end=1467120923767 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1467120913715 end=1467120923767 duration=10052 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120923767 end=1467120923767 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:23 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] completed flow: select data from dual into keyvalue
16/06/28 09:35:23 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] starting flow: uppercase kv -> kv2
16/06/28 09:35:23 INFO flow.Flow: [uppercase kv -> kv2 ] at least one sink is marked for delete
16/06/28 09:35:23 INFO flow.Flow: [uppercase kv -> kv2 ] sink oldest modified date: Wed Dec 31 18:59:59 EST 1969
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 1: get_table : db=default tbl=keyvalue2
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:35:23 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:35:23 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:35:23 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:35:23 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:35:23 INFO hive.HiveTap: strict mode: comparing existing hive table with table descriptor
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 1: Shutting down the object store...
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 1: Metastore shutdown complete.
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:35:23 INFO hive.HiveTap: dropping hive table keyvalue2 in database default
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 1: get_table : db=default tbl=keyvalue2
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:35:23 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:35:23 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:35:23 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:35:23 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:35:23 INFO metastore.HiveMetaStore: 1: drop_table : db=default tbl=keyvalue2
16/06/28 09:35:23 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=drop_table : db=default tbl=keyvalue2   
16/06/28 09:35:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:24 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:24 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:24 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:24 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/06/28 09:35:24 INFO metastore.hivemetastoressimpl: deleting  hdfs://localhost:8025/user/hive/warehouse/keyvalue2
16/06/28 09:35:24 INFO metastore.HiveMetaStore: 1: Shutting down the object store...
16/06/28 09:35:24 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:35:24 INFO metastore.HiveMetaStore: 1: Metastore shutdown complete.
16/06/28 09:35:24 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:35:24 INFO metastore.HiveMetaStore: 1: get_table : db=default tbl=keyvalue2
16/06/28 09:35:24 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/28 09:35:24 INFO metastore.HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:35:24 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:35:24 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:35:24 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:35:24 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:35:24 INFO metastore.HiveMetaStore: 1: Shutting down the object store...
16/06/28 09:35:24 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:35:24 INFO metastore.HiveMetaStore: 1: Metastore shutdown complete.
16/06/28 09:35:24 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ] starting
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ]  source: HiveTap["TextDelimited[['key', 'value' | String, String]]"]["hdfs://localhost:8025/user/hive/warehouse/keyvalue"]
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ]  sink: HiveTap["TextDelimited[['key', 'value' | String, String]]"]["hdfs://localhost:8025/user/hive/warehouse/keyvalue2"]
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ]  parallel execution of steps is enabled: true
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ]  executing total steps: 1
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ]  allocating management threads: 1
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ] starting step: (1/1) .../hive/warehouse/keyvalue2
16/06/28 09:35:24 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:35:24 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/06/28 09:35:24 INFO mapred.FileInputFormat: Total input paths to process : 3
16/06/28 09:35:24 INFO mapreduce.JobSubmitter: number of splits:3
16/06/28 09:35:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1466695700491_0042
16/06/28 09:35:24 INFO impl.YarnClientImpl: Submitted application application_1466695700491_0042
16/06/28 09:35:24 INFO mapreduce.Job: The url to track the job: http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0042/
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ] submitted hadoop job: job_1466695700491_0042
16/06/28 09:35:24 INFO flow.Flow: [uppercase kv -> kv2 ] tracking url: http://starchild.ltsnet.net:8088/proxy/application_1466695700491_0042/
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: get_table : db=default tbl=keyvalue2
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:35:40 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:35:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:35:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:35:40 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: Shutting down the object store...
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: Metastore shutdown complete.
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: get_database: default
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_database: default   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:35:40 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:35:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:35:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:35:40 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:35:40 INFO hive.HiveTap: creating table 'keyvalue2' at 'hdfs://localhost:8025/user/hive/warehouse/keyvalue2'
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: create_table: Table(tableName:keyvalue2, dbName:default, owner:null, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:string, comment:created by Cascading), FieldSchema(name:value, type:string, comment:created by Cascading)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:0, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , field.delim= }), bucketCols:null, sortCols:null, parameters:null), partitionKeys:null, parameters:null, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=create_table: Table(tableName:keyvalue2, dbName:default, owner:null, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:string, comment:created by Cascading), FieldSchema(name:value, type:string, comment:created by Cascading)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:0, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , field.delim= }), bucketCols:null, sortCols:null, parameters:null), partitionKeys:null, parameters:null, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)   
16/06/28 09:35:40 INFO hive.log: Updating table stats fast for keyvalue2
16/06/28 09:35:40 INFO hive.log: Updated size of table keyvalue2 to 43
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: Shutting down the object store...
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 4: Metastore shutdown complete.
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:35:40 INFO util.Hadoop18TapUtil: deleting temp path hdfs://localhost:8025/user/hive/warehouse/keyvalue2/_temporary
16/06/28 09:35:40 INFO flow.Flow: [uppercase kv -> kv2 ]  completed in: 00:15.724, using cpu time: 00:02.190
16/06/28 09:35:40 INFO cascade.Cascade: [uppercase kv -> kv2 +l...] completed flow: uppercase kv -> kv2
16/06/28 09:35:40 INFO session.SessionState: Created local directory: /tmp/d21a340e-0ddb-4622-b21f-fca517120f97_resources
16/06/28 09:35:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/d21a340e-0ddb-4622-b21f-fca517120f97
16/06/28 09:35:40 INFO session.SessionState: Created local directory: /tmp/jmill383/d21a340e-0ddb-4622-b21f-fca517120f97
16/06/28 09:35:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/d21a340e-0ddb-4622-b21f-fca517120f97/_tmp_space.db
16/06/28 09:35:40 INFO service.CompositeService: Operation log root directory is created: /tmp/jmill383/operation_logs
16/06/28 09:35:40 INFO service.CompositeService: HiveServer2: Background operation thread pool size: 100
16/06/28 09:35:40 INFO service.CompositeService: HiveServer2: Background operation thread wait queue size: 100
16/06/28 09:35:40 INFO service.CompositeService: HiveServer2: Background operation thread keepalive time: 10 seconds
16/06/28 09:35:40 INFO service.AbstractService: Service:OperationManager is inited.
16/06/28 09:35:40 INFO service.AbstractService: Service:SessionManager is inited.
16/06/28 09:35:40 INFO service.AbstractService: Service:CLIService is inited.
16/06/28 09:35:40 INFO service.AbstractService: Service:OperationManager is started.
16/06/28 09:35:40 INFO service.AbstractService: Service:SessionManager is started.
16/06/28 09:35:40 INFO service.AbstractService: Service:CLIService is started.
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 0: get_databases: default
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_databases: default   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:35:40 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:35:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:35:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:35:40 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Shutting down the object store...   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=Metastore shutdown complete.   
16/06/28 09:35:40 INFO service.AbstractService: Service:ThriftBinaryCLIService is inited.
16/06/28 09:35:40 INFO thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
16/06/28 09:35:40 INFO session.SessionState: Created local directory: /tmp/c57da4a2-c769-4f7d-911a-16103c8e5e8b_resources
16/06/28 09:35:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/c57da4a2-c769-4f7d-911a-16103c8e5e8b
16/06/28 09:35:40 INFO session.SessionState: Created local directory: /tmp/jmill383/c57da4a2-c769-4f7d-911a-16103c8e5e8b
16/06/28 09:35:40 INFO session.SessionState: Created HDFS directory: /tmp/hive/jmill383/c57da4a2-c769-4f7d-911a-16103c8e5e8b/_tmp_space.db
16/06/28 09:35:40 INFO session.HiveSessionImpl: Operation log session directory is created: /tmp/jmill383/operation_logs/c57da4a2-c769-4f7d-911a-16103c8e5e8b
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO parse.ParseDriver: Parsing command: select key, value from keyvalue2
16/06/28 09:35:40 INFO parse.ParseDriver: Parse Completed
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=parse start=1467120940578 end=1467120940579 duration=1 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO parse.CalcitePlanner: Starting Semantic Analysis
16/06/28 09:35:40 INFO parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
16/06/28 09:35:40 INFO parse.CalcitePlanner: Get metadata for source tables
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=keyvalue2
16/06/28 09:35:40 INFO HiveMetaStore.audit: ugi=jmill383    ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2   
16/06/28 09:35:40 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/06/28 09:35:40 INFO metastore.ObjectStore: ObjectStore, initialize called
16/06/28 09:35:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/06/28 09:35:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/06/28 09:35:40 INFO metastore.ObjectStore: Initialized ObjectStore
16/06/28 09:35:40 INFO parse.CalcitePlanner: Get metadata for subqueries
16/06/28 09:35:40 INFO parse.CalcitePlanner: Get metadata for destination tables
16/06/28 09:35:40 INFO ql.Context: New scratch dir is hdfs://localhost:8025/tmp/hive/jmill383/c57da4a2-c769-4f7d-911a-16103c8e5e8b/hive_2016-06-28_09-35-40_578_7365750655777825544-2
16/06/28 09:35:40 INFO parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
16/06/28 09:35:40 INFO parse.BaseSemanticAnalyzer: Not invoking CBO because the statement has too few joins
16/06/28 09:35:40 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://localhost:8025/tmp/hive/jmill383/c57da4a2-c769-4f7d-911a-16103c8e5e8b/hive_2016-06-28_09-35-40_578_7365750655777825544-2/-mr-10000/.hive-staging_hive_2016-06-28_09-35-40_578_7365750655777825544-2
16/06/28 09:35:40 INFO parse.CalcitePlanner: Set stats collection dir : hdfs://localhost:8025/tmp/hive/jmill383/c57da4a2-c769-4f7d-911a-16103c8e5e8b/hive_2016-06-28_09-35-40_578_7365750655777825544-2/-mr-10000/.hive-staging_hive_2016-06-28_09-35-40_578_7365750655777825544-2/-ext-10002
16/06/28 09:35:40 INFO ppd.OpProcFactory: Processing for FS(17)
16/06/28 09:35:40 INFO ppd.OpProcFactory: Processing for SEL(16)
16/06/28 09:35:40 INFO ppd.OpProcFactory: Processing for TS(15)
16/06/28 09:35:40 INFO parse.CalcitePlanner: Completed plan generation
16/06/28 09:35:40 INFO ql.Driver: Semantic Analysis Completed
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1467120940579 end=1467120940647 duration=68 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO exec.TableScanOperator: Initializing operator TS[15]
16/06/28 09:35:40 INFO exec.TableScanOperator: Initialization Done 15 TS
16/06/28 09:35:40 INFO exec.TableScanOperator: Operator 15 TS initialized
16/06/28 09:35:40 INFO exec.TableScanOperator: Initializing children of 15 TS
16/06/28 09:35:40 INFO exec.SelectOperator: Initializing child 16 SEL
16/06/28 09:35:40 INFO exec.SelectOperator: Initializing operator SEL[16]
16/06/28 09:35:40 INFO exec.SelectOperator: SELECT struct<key:string,value:string>
16/06/28 09:35:40 INFO exec.SelectOperator: Initialization Done 16 SEL
16/06/28 09:35:40 INFO exec.SelectOperator: Operator 16 SEL initialized
16/06/28 09:35:40 INFO exec.SelectOperator: Initializing children of 16 SEL
16/06/28 09:35:40 INFO exec.ListSinkOperator: Initializing child 18 OP
16/06/28 09:35:40 INFO exec.ListSinkOperator: Initializing operator OP[18]
16/06/28 09:35:40 INFO exec.ListSinkOperator: Initialization Done 18 OP
16/06/28 09:35:40 INFO exec.ListSinkOperator: Operator 18 OP initialized
16/06/28 09:35:40 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:key, type:string, comment:null), FieldSchema(name:value, type:string, comment:null)], properties:null)
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=compile start=1467120940578 end=1467120940655 duration=77 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO ql.Driver: Starting command(queryId=jmill383_20160628093540_cff7f382-4a39-489f-a72c-b89d6f59279b): select key, value from keyvalue2
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1467120940658 end=1467120940658 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=runTasks start=1467120940658 end=1467120940658 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1467120940658 end=1467120940659 duration=1 from=org.apache.hadoop.hive.ql.Driver>
OK
16/06/28 09:35:40 INFO ql.Driver: OK
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120940659 end=1467120940659 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1467120940658 end=1467120940659 duration=1 from=org.apache.hadoop.hive.ql.Driver>
----------------------Hive JDBC--------------------------
16/06/28 09:35:40 INFO mapred.FileInputFormat: Total input paths to process : 3
data from hive table copy: key=MULTIPLE,value=QUERIES
data from hive table copy: key=ARE,value=SUPPORTED
data from hive table copy: key=HELLO,value=HIVE!
---------------------------------------------------------
16/06/28 09:35:40 INFO exec.TableScanOperator: 15 finished. closing...
16/06/28 09:35:40 INFO exec.SelectOperator: 16 finished. closing...
16/06/28 09:35:40 INFO exec.ListSinkOperator: 18 finished. closing...
16/06/28 09:35:40 INFO exec.ListSinkOperator: 18 Close done
16/06/28 09:35:40 INFO exec.SelectOperator: 16 Close done
16/06/28 09:35:40 INFO exec.TableScanOperator: 15 Close done
16/06/28 09:35:40 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:40 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1467120940729 end=1467120940729 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/06/28 09:35:54 INFO util.Update: newer Cascading release available: 3.1.1-wip-61
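
(For anyone following along: the "Hive JDBC" block near the end of the log is just the demo reading the copied rows back out of keyvalue2 over a plain JDBC connection once the flows have completed. A rough sketch of that kind of read-back follows; the connection URL, empty credentials, and class name are my own assumptions for illustration, not taken from the demo source.)

// Hedged sketch only: reading rows back from a Hive table via the HiveServer2 JDBC driver.
// URL, credentials, and class name are illustrative assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadKeyValue2 {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // HiveServer2 JDBC driver

        try (Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("select key, value from keyvalue2")) {
            while (rs.next()) {
                // mirrors the "data from hive table copy" lines seen in the log above
                System.out.println("data from hive table copy: key=" + rs.getString(1) + ",value=" + rs.getString(2));
            }
        }
    }
}

The demo wires this up its own way, so treat the above purely as an illustration of the read-back step, not as the demo's actual code.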

john

> 16/06/16 13:52:46 WARN mapreduce.Counters: Group FileSystemC...