Hive Query Execution


Kunal Ghosh

Feb 16, 2018, 6:27:42 AM
to cascading-user

Hi,

I want to use a Hive database installed on a remote machine (not in my cluster) as a source tap, i.e. run a query against that Hive instance and feed the result into the source tap. Is there a standard way of doing this?
I have used the code below to try to achieve this, but no data reaches the sink on my end — the second flow reports "Total input paths to process : 0". The code and console output for my program are included below.
Please let me know if you know a workaround or any kind of trick for this.
Thanks in advance.


       Tap outTap = new Hfs( new TextDelimited( false, "," ), "/test/sample", SinkMode.REPLACE );
       String viewSelect = "select pd.description, p.stdprice from prices p, products pd where p.prodid = pd.prodid";

       // point the Hive flow at the remote metastore
       Map<String, String> hiveProps = new HashMap<>();
       hiveProps.put( HiveConf.ConfVars.METASTOREURIS.varname, "thrift://hive1:9083" );

       // run the query as a HiveFlow, writing to outTap
       HiveFlow viewFlow = new HiveFlow( "create view", new String[]{ viewSelect },
           Arrays.<Tap>asList( HiveNullTap.DEV_ZERO ), outTap, hiveProps );

       // second flow: read the Hive flow's sink, write to a new sink
       Tap outTap1 = new Hfs( new TextDelimited( false, "," ), "/test/sampleTest", SinkMode.REPLACE );
       FlowDef flowDef = FlowDef.flowDef()
           .addSource( processPipe, viewFlow.getSink() )
           .addTailSink( processPipe, outTap1 );

       Properties props = AppProps.appProps().setName( "DataProcessing" ).buildProperties();
       props.setProperty( "mapred.reduce.tasks", "4" );
       props.setProperty( "mapred.map.tasks", "4" );

       Flow flow = new Hadoop2MR1FlowConnector( props ).connect( flowDef );

       // chain both flows into one cascade
       Cascade cascade = new CascadeConnector().connect( viewFlow, flow );
       cascade.complete();
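For comparison, cascading-hive also exposes a table-backed tap that can, in principle, be read directly as a source instead of routing a query through an intermediate HiveFlow sink. The following is a minimal sketch only: it assumes the `HiveTap` and `HiveTableDescriptor` classes from cascading-hive, the column names/types shown are illustrative, and the metastore URI would still need to be supplied via `hive.metastore.uris` in the flow properties or `hive-site.xml`.

```java
// Sketch only: assumes the cascading-hive HiveTap / HiveTableDescriptor API;
// table name, column names, and column types below are illustrative assumptions.
import cascading.flow.FlowDef;
import cascading.pipe.Pipe;
import cascading.scheme.hadoop.TextDelimited;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tap.hive.HiveTableDescriptor;
import cascading.tap.hive.HiveTap;

public class HiveSourceSketch
  {
  public static FlowDef buildFlowDef()
    {
    // describe the existing Hive table (names/types are assumptions, not verified)
    HiveTableDescriptor descriptor = new HiveTableDescriptor( "prices",
        new String[]{ "prodid", "stdprice" }, new String[]{ "string", "string" } );

    // read the table in place, using the scheme derived from its descriptor
    Tap source = new HiveTap( descriptor, descriptor.toScheme() );

    // write the rows out as comma-delimited text
    Tap sink = new Hfs( new TextDelimited( false, "," ), "/test/pricesCopy", SinkMode.REPLACE );

    Pipe pipe = new Pipe( "copy" );
    return FlowDef.flowDef().addSource( pipe, source ).addTailSink( pipe, sink );
    }
  }
```

With a source like this, joins and projections would be expressed as Cascading pipe assemblies rather than HiveQL, which avoids depending on where Hive writes a bare SELECT's result.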


Logs:
18/02/16 05:48:00 INFO property.AppProps: using app.id: BE9E7865580640D389D9535586FF735C
18/02/16 05:48:01 INFO util.Util: resolving application jar from found main method on: Main_123
18/02/16 05:48:01 INFO planner.HadoopPlanner: using application jar: /home/ICEStore/bigdata_cascading/Main_123.jar
18/02/16 05:48:01 INFO flow.Flow: [] executed rule registry: MapReduceHadoopRuleRegistry, completed as: SUCCESS, in: 00:00.059
18/02/16 05:48:01 INFO flow.Flow: [] rule registry: MapReduceHadoopRuleRegistry, supports assembly with steps: 1, nodes: 1
18/02/16 05:48:01 INFO flow.Flow: [] rule registry: MapReduceHadoopRuleRegistry, result was selected using: 'default comparator: selects plan with fewest steps and fewest nodes'
18/02/16 05:48:01 INFO util.Version: Cascading 4.0.0-wip-38
18/02/16 05:48:01 INFO cascade.Cascade: [create view+] starting
18/02/16 05:48:01 INFO cascade.Cascade: [create view+]  parallel execution of flows is enabled: true
18/02/16 05:48:01 INFO cascade.Cascade: [create view+]  executing total flows: 2
18/02/16 05:48:01 INFO cascade.Cascade: [create view+]  allocating management threads: 2
18/02/16 05:48:01 INFO cascade.Cascade: [create view+] starting flow: create view
18/02/16 05:48:01 INFO flow.Flow: [create view] at least one sink is marked for delete
18/02/16 05:48:01 INFO flow.Flow: [create view] sink oldest modified date: Wed Dec 31 18:59:59 EST 1969
18/02/16 05:48:01 INFO hive.metastore: Trying to connect to metastore with URI thrift://hive1:9083
18/02/16 05:48:01 INFO hive.metastore: Connected to metastore.
18/02/16 05:48:02 INFO session.SessionState: Created local directory: /tmp/4165d6a9-65ba-42fb-bc0a-9ba505738afd_resources
18/02/16 05:48:02 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd
18/02/16 05:48:02 INFO session.SessionState: Created local directory: /tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd
18/02/16 05:48:02 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/_tmp_space.db
18/02/16 05:48:02 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
18/02/16 05:48:02 INFO hive.HiveQueryRunner: running hive query: 'select pd.description, p.stdprice from prices p, products pd where p.prodid = pd.prodid'
18/02/16 05:48:02 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:02 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:02 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:02 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:02 INFO parse.ParseDriver: Parsing command: select pd.description, p.stdprice from prices p, products pd where p.prodid = pd.prodid
18/02/16 05:48:02 INFO parse.ParseDriver: Parse Completed
18/02/16 05:48:02 INFO log.PerfLogger: </PERFLOG method=parse start=1518778082135 end=1518778082421 duration=286 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:02 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:02 INFO parse.CalcitePlanner: Starting Semantic Analysis
18/02/16 05:48:02 INFO parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
18/02/16 05:48:02 INFO parse.CalcitePlanner: Get metadata for source tables
18/02/16 05:48:02 INFO parse.CalcitePlanner: Get metadata for subqueries
18/02/16 05:48:02 INFO parse.CalcitePlanner: Get metadata for destination tables
18/02/16 05:48:02 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
18/02/16 05:48:02 INFO ql.Context: New scratch dir is hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1
18/02/16 05:48:02 INFO parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
18/02/16 05:48:02 INFO parse.BaseSemanticAnalyzer: Not invoking CBO because the statement has too few joins
18/02/16 05:48:02 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10000/.hive-staging_hive_2018-02-16_05-48-02_134_2141770392882307552-1
18/02/16 05:48:03 INFO parse.CalcitePlanner: Set stats collection dir : hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10000/.hive-staging_hive_2018-02-16_05-48-02_134_2141770392882307552-1/-ext-10002
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for FS(9)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for SEL(8)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for FIL(7)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for JOIN(6)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for RS(3)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for FIL(2)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : p
18/02/16 05:48:03 INFO ppd.OpProcFactory:       prodid is not null
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for TS(0)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Pushdown Predicates of TS For Alias : p
18/02/16 05:48:03 INFO ppd.OpProcFactory:       prodid is not null
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for RS(5)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for FIL(4)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : pd
18/02/16 05:48:03 INFO ppd.OpProcFactory:       prodid is not null
18/02/16 05:48:03 INFO ppd.OpProcFactory: Processing for TS(1)
18/02/16 05:48:03 INFO ppd.OpProcFactory: Pushdown Predicates of TS For Alias : pd
18/02/16 05:48:03 INFO ppd.OpProcFactory:       prodid is not null
18/02/16 05:48:03 INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
18/02/16 05:48:03 INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1518778083099 end=1518778083101 duration=2 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
18/02/16 05:48:03 INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
18/02/16 05:48:03 INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1518778083102 end=1518778083102 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
18/02/16 05:48:03 INFO optimizer.ColumnPrunerProcFactory: JOIN 6 oldExprs: {0=[Column[KEY.reducesinkkey0], Column[VALUE._col0], Column[VALUE._col1], Column[VALUE._col2], Column[VALUE._col3], Column[VALUE._col4], Column[VALUE._col5], Column[VALUE._col6], Column[VALUE._col7]], 1=[Column[KEY.reducesinkkey0], Column[VALUE._col0], Column[VALUE._col1], Column[VALUE._col2], Column[VALUE._col3]]}
18/02/16 05:48:03 INFO optimizer.ColumnPrunerProcFactory: JOIN 6 newExprs: {0=[Column[KEY.reducesinkkey0], Column[VALUE._col1]], 1=[Column[KEY.reducesinkkey0], Column[VALUE._col0]]}
18/02/16 05:48:03 INFO optimizer.ColumnPrunerProcFactory: RS 5 oldColExprMap: {VALUE._col2=Column[INPUT__FILE__NAME], VALUE._col3=Column[ROW__ID], VALUE._col0=Column[description], VALUE._col1=Column[BLOCK__OFFSET__INSIDE__FILE], KEY.reducesinkkey0=Column[prodid]}
18/02/16 05:48:03 INFO optimizer.ColumnPrunerProcFactory: RS 5 newColExprMap: {VALUE._col0=Column[description], KEY.reducesinkkey0=Column[prodid]}
18/02/16 05:48:03 INFO optimizer.ColumnPrunerProcFactory: RS 3 oldColExprMap: {VALUE._col2=Column[minprice], VALUE._col3=Column[startdate], VALUE._col4=Column[enddate], VALUE._col5=Column[BLOCK__OFFSET__INSIDE__FILE], VALUE._col0=Column[regionid], VALUE._col1=Column[stdprice], KEY.reducesinkkey0=Column[prodid], VALUE._col6=Column[INPUT__FILE__NAME], VALUE._col7=Column[ROW__ID]}
18/02/16 05:48:03 INFO optimizer.ColumnPrunerProcFactory: RS 3 newColExprMap: {VALUE._col1=Column[stdprice], KEY.reducesinkkey0=Column[prodid]}
18/02/16 05:48:03 INFO log.PerfLogger: <PERFLOG method=getInputSummary from=org.apache.hadoop.hive.ql.exec.Utilities>
18/02/16 05:48:03 INFO exec.Utilities: Cache Content Summary for hdfs://machine2:8020/apps/hive/warehouse/products length: 270 file count: 1 directory count: 1
18/02/16 05:48:03 INFO exec.Utilities: Cache Content Summary for hdfs://machine2:8020/apps/hive/warehouse/prices length: 539 file count: 1 directory count: 1
18/02/16 05:48:03 INFO log.PerfLogger: </PERFLOG method=getInputSummary start=1518778083149 end=1518778083164 duration=15 from=org.apache.hadoop.hive.ql.exec.Utilities>
18/02/16 05:48:03 INFO ql.Context: New scratch dir is hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1
18/02/16 05:48:03 INFO physical.LocalMapJoinProcFactory: Setting max memory usage to 0.9 for table sink not followed by group by
18/02/16 05:48:03 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
18/02/16 05:48:03 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
18/02/16 05:48:03 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
18/02/16 05:48:03 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
18/02/16 05:48:03 INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
18/02/16 05:48:03 INFO physical.NullScanTaskDispatcher: Found 0 null table scans
18/02/16 05:48:03 INFO parse.CalcitePlanner: Completed plan generation
18/02/16 05:48:03 INFO ql.Driver: Semantic Analysis Completed
18/02/16 05:48:03 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1518778082423 end=1518778083180 duration=757 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:03 INFO exec.ListSinkOperator: Initializing Self OP[15]
18/02/16 05:48:03 INFO exec.ListSinkOperator: Operator 15 OP initialized
18/02/16 05:48:03 INFO exec.ListSinkOperator: Initialization Done 15 OP
18/02/16 05:48:03 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:pd.description, type:string, comment:null), FieldSchema(name:p.stdprice, type:string, comment:null)], properties:null)
18/02/16 05:48:03 INFO log.PerfLogger: </PERFLOG method=compile start=1518778082107 end=1518778083198 duration=1091 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:03 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
18/02/16 05:48:03 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:03 INFO ql.Driver: Starting command: select pd.description, p.stdprice from prices p, products pd where p.prodid = pd.prodid
Query ID = root_20180216054848_61e4db4b-e057-4ad7-a950-7278c1608366
18/02/16 05:48:03 INFO ql.Driver: Query ID = root_20180216054848_61e4db4b-e057-4ad7-a950-7278c1608366
Total jobs = 1
18/02/16 05:48:03 INFO ql.Driver: Total jobs = 1
18/02/16 05:48:03 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1518778082107 end=1518778083201 duration=1094 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:03 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:03 INFO log.PerfLogger: <PERFLOG method=task.MAPREDLOCAL.Stage-4 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:03 INFO ql.Driver: Starting task [Stage-4:MAPREDLOCAL] in serial mode
18/02/16 05:48:03 INFO mr.MapredLocalTask: Generating plan file file:/tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-local-10005/plan.xml
18/02/16 05:48:03 INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
18/02/16 05:48:03 INFO exec.Utilities: Serializing MapredLocalWork via kryo
18/02/16 05:48:03 INFO log.PerfLogger: </PERFLOG method=serializePlan start=1518778083209 end=1518778083264 duration=55 from=org.apache.hadoop.hive.ql.exec.Utilities>
18/02/16 05:48:03 INFO mr.MapredLocalTask: Executing: /usr/hdp/2.6.4.0-91/hadoop/bin/hadoop jar /tmp/hadoop-unjar8459046179446844223/lib/hive-exec-1.1.0.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-local-10005/plan.xml   -jobconffile file:/tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-local-10006/jobconf.xml
Execution log at: /tmp/root/root_20180216054848_61e4db4b-e057-4ad7-a950-7278c1608366.log
2018-02-16 05:48:05     Starting to launch local task to process map join;      maximum memory = 954728448
2018-02-16 05:48:05     Dump the side-table for tag: 1 with group count: 11 into file: file:/tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
2018-02-16 05:48:05     Uploaded 1 File to: file:/tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (669 bytes)
2018-02-16 05:48:05     End of local task; Time Taken: 0.691 sec.
Execution completed successfully
18/02/16 05:48:06 INFO exec.Task: Execution completed successfully
MapredLocal task succeeded
18/02/16 05:48:06 INFO exec.Task: MapredLocal task succeeded
18/02/16 05:48:06 INFO mr.MapredLocalTask: Execution completed successfully
18/02/16 05:48:06 INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-3 from=org.apache.hadoop.hive.ql.Driver>
Launching Job 1 out of 1
18/02/16 05:48:06 INFO ql.Driver: Launching Job 1 out of 1
18/02/16 05:48:06 INFO ql.Driver: Starting task [Stage-3:MAPRED] in serial mode
Number of reduce tasks is set to 0 since there's no reduce operator
18/02/16 05:48:06 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
18/02/16 05:48:06 INFO ql.Context: New scratch dir is hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1
18/02/16 05:48:06 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
18/02/16 05:48:06 INFO mr.ExecDriver: Archive 1 hash table files to file:/tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-local-10003/HashTable-Stage-3/Stage-3.tar.gz
18/02/16 05:48:06 INFO mr.ExecDriver: Upload 1 archive file  fromfile:/tmp/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-local-10003/HashTable-Stage-3/Stage-3.tar.gz to: hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10004/HashTable-Stage-3/Stage-3.tar.gz
18/02/16 05:48:06 INFO mr.ExecDriver: Add 1 archive file to distributed cache. Archive file: hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10004/HashTable-Stage-3/Stage-3.tar.gz
18/02/16 05:48:06 INFO exec.Utilities: Processing alias p
18/02/16 05:48:06 INFO exec.Utilities: Adding input file hdfs://machine2:8020/apps/hive/warehouse/prices
18/02/16 05:48:06 INFO exec.Utilities: Content Summary hdfs://machine2:8020/apps/hive/warehouse/priceslength: 539 num files: 1 num directories: 1
18/02/16 05:48:06 INFO ql.Context: New scratch dir is hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1
18/02/16 05:48:06 INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
18/02/16 05:48:06 INFO exec.Utilities: Serializing MapWork via kryo
18/02/16 05:48:06 INFO log.PerfLogger: </PERFLOG method=serializePlan start=1518778086599 end=1518778086632 duration=33 from=org.apache.hadoop.hive.ql.exec.Utilities>
18/02/16 05:48:06 INFO client.RMProxy: Connecting to ResourceManager at machine4/192.168.1.69:8050
18/02/16 05:48:06 INFO client.AHSProxy: Connecting to Application History server at machine4/192.168.1.69:10200
18/02/16 05:48:06 INFO client.RMProxy: Connecting to ResourceManager at machine4/192.168.1.69:8050
18/02/16 05:48:06 INFO client.AHSProxy: Connecting to Application History server at machine4/192.168.1.69:10200
18/02/16 05:48:06 INFO exec.Utilities: PLAN PATH = hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10008/79335c25-407f-405a-bf29-48e536ba79c6/map.xml
18/02/16 05:48:06 INFO exec.Utilities: PLAN PATH = hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10008/79335c25-407f-405a-bf29-48e536ba79c6/reduce.xml
18/02/16 05:48:06 INFO exec.Utilities: ***************non-local mode***************
18/02/16 05:48:06 INFO exec.Utilities: local path = hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10008/79335c25-407f-405a-bf29-48e536ba79c6/reduce.xml
18/02/16 05:48:06 INFO exec.Utilities: Open file to read in plan: hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10008/79335c25-407f-405a-bf29-48e536ba79c6/reduce.xml
18/02/16 05:48:06 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10008/79335c25-407f-405a-bf29-48e536ba79c6/reduce.xml
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2025)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1996)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1909)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:700)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:377)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

18/02/16 05:48:06 INFO exec.Utilities: No plan file found: hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10008/79335c25-407f-405a-bf29-48e536ba79c6/reduce.xml
18/02/16 05:48:06 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/02/16 05:48:07 INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
18/02/16 05:48:07 INFO exec.Utilities: PLAN PATH = hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10008/79335c25-407f-405a-bf29-48e536ba79c6/map.xml
18/02/16 05:48:07 INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
18/02/16 05:48:07 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://machine2:8020/apps/hive/warehouse/prices; using filter path hdfs://machine2:8020/apps/hive/warehouse/prices
18/02/16 05:48:07 INFO input.FileInputFormat: Total input paths to process : 1
18/02/16 05:48:07 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 3, size left: 0
18/02/16 05:48:07 INFO io.CombineHiveInputFormat: number of splits 1
18/02/16 05:48:07 INFO io.CombineHiveInputFormat: Number of all splits 1
18/02/16 05:48:07 INFO log.PerfLogger: </PERFLOG method=getSplits start=1518778087196 end=1518778087215 duration=19 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
18/02/16 05:48:07 INFO mapreduce.JobSubmitter: number of splits:1
18/02/16 05:48:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1518681121199_0077
18/02/16 05:48:07 INFO impl.YarnClientImpl: Submitted application application_1518681121199_0077
18/02/16 05:48:07 INFO mapreduce.Job: The url to track the job: http://machine4:8088/proxy/application_1518681121199_0077/
Starting Job = job_1518681121199_0077, Tracking URL = http://machine4:8088/proxy/application_1518681121199_0077/
18/02/16 05:48:07 INFO exec.Task: Starting Job = job_1518681121199_0077, Tracking URL = http://machine4:8088/proxy/application_1518681121199_0077/
Kill Command = /usr/hdp/2.6.4.0-91/hadoop/bin/hadoop job  -kill job_1518681121199_0077
18/02/16 05:48:07 INFO exec.Task: Kill Command = /usr/hdp/2.6.4.0-91/hadoop/bin/hadoop job  -kill job_1518681121199_0077
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
18/02/16 05:48:12 INFO exec.Task: Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
18/02/16 05:48:12 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2018-02-16 05:48:12,899 Stage-3 map = 0%,  reduce = 0%
18/02/16 05:48:12 INFO exec.Task: 2018-02-16 05:48:12,899 Stage-3 map = 0%,  reduce = 0%
2018-02-16 05:48:19,119 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 3.29 sec
18/02/16 05:48:19 INFO exec.Task: 2018-02-16 05:48:19,119 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 3.29 sec
MapReduce Total cumulative CPU time: 3 seconds 290 msec
18/02/16 05:48:20 INFO exec.Task: MapReduce Total cumulative CPU time: 3 seconds 290 msec
Ended Job = job_1518681121199_0077
18/02/16 05:48:20 INFO exec.Task: Ended Job = job_1518681121199_0077
18/02/16 05:48:20 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10000/.hive-staging_hive_2018-02-16_05-48-02_134_2141770392882307552-1/_tmp.-ext-10001 to: hdfs://machine2:8020/tmp/hive/root/4165d6a9-65ba-42fb-bc0a-9ba505738afd/hive_2018-02-16_05-48-02_134_2141770392882307552-1/-mr-10000/.hive-staging_hive_2018-02-16_05-48-02_134_2141770392882307552-1/-ext-10001
18/02/16 05:48:20 INFO log.PerfLogger: </PERFLOG method=runTasks start=1518778083201 end=1518778100232 duration=17031 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:20 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1518778083198 end=1518778100232 duration=17034 from=org.apache.hadoop.hive.ql.Driver>
MapReduce Jobs Launched:
18/02/16 05:48:20 INFO ql.Driver: MapReduce Jobs Launched:
Stage-Stage-3: Map: 1   Cumulative CPU: 3.29 sec   HDFS Read: 6906 HDFS Write: 234 SUCCESS
18/02/16 05:48:20 INFO ql.Driver: Stage-Stage-3: Map: 1   Cumulative CPU: 3.29 sec   HDFS Read: 6906 HDFS Write: 234 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 290 msec
18/02/16 05:48:20 INFO ql.Driver: Total MapReduce CPU Time Spent: 3 seconds 290 msec
OK
18/02/16 05:48:20 INFO ql.Driver: OK
18/02/16 05:48:20 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:20 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1518778100232 end=1518778100232 duration=0 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:20 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1518778082107 end=1518778100233 duration=18126 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:20 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:20 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1518778100233 end=1518778100234 duration=1 from=org.apache.hadoop.hive.ql.Driver>
18/02/16 05:48:20 INFO cascade.Cascade: [create view+] completed flow: create view
18/02/16 05:48:20 INFO cascade.Cascade: [create view+] starting flow: null
18/02/16 05:48:20 INFO flow.Flow: [] at least one sink is marked for delete
18/02/16 05:48:20 INFO flow.Flow: [] sink oldest modified date: Wed Dec 31 18:59:59 EST 1969
18/02/16 05:48:20 INFO flow.Flow: [] starting
18/02/16 05:48:20 INFO flow.Flow: []  source: Hfs["TextDelimited[[UNKNOWN]->[ALL]]"]["/test/sample"]
18/02/16 05:48:20 INFO flow.Flow: []  sink: Hfs["TextDelimited[[UNKNOWN]->[ALL]]"]["/test/sampleTest"]
18/02/16 05:48:20 INFO flow.Flow: []  parallel execution of steps is enabled: true
18/02/16 05:48:20 INFO flow.Flow: []  executing total steps: 1
18/02/16 05:48:20 INFO flow.Flow: []  allocating management threads: 1
18/02/16 05:48:20 INFO flow.Flow: [] starting step: (1/1) /test/sampleTest
18/02/16 05:48:20 INFO client.RMProxy: Connecting to ResourceManager at machine4/192.168.1.69:8050
18/02/16 05:48:20 INFO client.AHSProxy: Connecting to Application History server at machine4/192.168.1.69:10200
18/02/16 05:48:20 INFO client.RMProxy: Connecting to ResourceManager at machine4/192.168.1.69:8050
18/02/16 05:48:20 INFO client.AHSProxy: Connecting to Application History server at machine4/192.168.1.69:10200
18/02/16 05:48:21 INFO mapred.FileInputFormat: Total input paths to process : 0
18/02/16 05:48:21 INFO mapreduce.JobSubmitter: number of splits:0
18/02/16 05:48:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1518681121199_0078
18/02/16 05:48:21 INFO impl.YarnClientImpl: Submitted application application_1518681121199_0078
18/02/16 05:48:21 INFO mapreduce.Job: The url to track the job: http://machine4:8088/proxy/application_1518681121199_0078/
18/02/16 05:48:21 INFO flow.Flow: [] submitted hadoop job: job_1518681121199_0078
18/02/16 05:48:21 INFO flow.Flow: [] tracking url: http://machine4:8088/proxy/application_1518681121199_0078/
18/02/16 05:48:31 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
18/02/16 05:48:31 INFO util.Hadoop18TapUtil: deleting temp path /test/sampleTest/_temporary
18/02/16 05:48:31 INFO flow.Flow: []  completed in: 00:11.494
18/02/16 05:48:31 INFO cascade.Cascade: [create view+] completed flow: null
