Power_test failure (error when running query 5)

林柏年

Apr 18, 2017, 5:05:54 AM
to Big Data Benchmark for BigBench
My first run of BigBench failed in the POWER_TEST phase:

Benchmark run terminated
Reason: An error occured while running a command in phase ENGINE_VALIDATION_POWER_TEST
===============
java.io.IOException: Error while running query 5. More information in log file: /root/big-bench/q05_hive_engine_validation_power_test_0.log



Error Messages in q05_hive_engine_validation_power_test_0.log:

MapReduce Jobs Launched:
Stage-Stage-2:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Stage-Stage-10:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Stage-Stage-9:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
An error occured while running command:
==========
runEngineCmd
-f /root/big-bench/Big-Data-Benchmark-for-Big-Bench/engines/hive/queries/q05/q05.sql
==========
Please check the log files for details
======= q05_hive_engine_validation_power_test_0 time =======
Start timestamp: 2017/02/24:22:12:00 1487992320
Stop  timestamp: 2017/02/24:22:15:54 1487992554
Duration:  0h 3m 54s
q05_hive_engine_validation_power_test_0 FAILED
exit code: 2
----- result -----
EMPTY
bytes: 0
to display: hadoop fs -cat /user/root/benchmarks/bigbench/queryResults/q05_hive_engine_validation_power_test_0_result/*
----- logs -----
time&status: /root/big-bench/Big-Data-Benchmark-for-Big-Bench/logs/times.csv
full log: /root/big-bench/Big-Data-Benchmark-for-Big-Bench/logs/q05_hive_engine_validation_power_test_0.log
=========================





Michael Frank

Apr 18, 2017, 3:15:50 PM
to Big Data Benchmark for BigBench
Hi,

First: in addition to your snippets, please always attach the full log file (in this case: /root/big-bench/Big-Data-Benchmark-for-Big-Bench/logs/q05_hive_engine_validation_power_test_0.log).
Second: you need to check your Hive log and/or the task logs of the failed job to find out the real cause of the error.

BigBench just runs Hive and Spark jobs and logs the execution times. If queries/jobs fail, you have to search for the cause in the Hive and Spark logs, or even deeper in the YARN/task logs.
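
For example, a minimal sketch of where to look (the driver log is the one named in your error message; the execution-log name and the application ID below are placeholders, substitute the values your own run printed):

# 1. The BigBench driver log named in the error message:
less /root/big-bench/Big-Data-Benchmark-for-Big-Bench/logs/q05_hive_engine_validation_power_test_0.log

# 2. The Hive execution log; the Hive client prints its path as "Execution log at: ...":
grep -i -E 'error|exception' /tmp/root/<execution_log_name>.log

# 3. For jobs submitted to YARN, the aggregated task logs
#    (jobs with job_local... IDs ran in-process and leave no YARN logs;
#    for those, the Hive execution log above is the place to look):
yarn logs -applicationId application_1234567890123_0001 | grep -i -B2 -A10 exception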

Cheers,
Michael

林柏年

Apr 21, 2017, 4:06:43 AM
to Big Data Benchmark for BigBench


Michael Frank wrote on Wednesday, April 19, 2017 at 3:15:50 AM UTC+8:
Hi Michael,

Thanks, I'll try checking the Hive and Spark logs first.



林柏年

Apr 24, 2017, 9:36:34 PM
to Big Data Benchmark for BigBench


林柏年 wrote on Tuesday, April 18, 2017 at 5:05:54 PM UTC+8:
Here is my q05_hive_engine_validation_power_test_0 log:




=========================
q05 Step 1/3: Executing hive queries
tmp output: /user/root/benchmarks/bigbench/temp/q05_hive_engine_validation_power_test_0_temp
=========================
Additional local hive settings found. Adding /root/big-bench/Big-Data-Benchmark-for-Big-Bench/engines/hive/queries/q05/engineLocalSettings.sql to hive init.

Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
hive.execution.engine=mr
hive.cbo.enable=true
hive.stats.fetch.partition.stats=true
hive.script.operator.truncate.env=false
hive.compute.query.using.stats=false
hive.vectorized.execution.enabled=false
hive.vectorized.execution.reduce.enabled=true
hive.stats.autogather=true
mapreduce.input.fileinputformat.split.minsize=1
mapreduce.input.fileinputformat.split.maxsize=256000000
hive.exec.reducers.bytes.per.reducer=256000000
hive.exec.reducers.max=1009
hive.exec.parallel=false
hive.exec.parallel.thread.number=8
hive.exec.compress.intermediate=false
hive.exec.compress.output=false
mapred.map.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec
mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec
hive.default.fileformat=TEXTFILE
hive.auto.convert.sortmerge.join=false
hive.auto.convert.sortmerge.join.noconditionaltask is undefined
hive.optimize.bucketmapjoin=false
hive.optimize.bucketmapjoin.sortedmerge=false
hive.auto.convert.join.noconditionaltask.size=10000000
hive.auto.convert.join=true
hive.optimize.mapjoin.mapreduce is undefined
hive.mapred.local.mem=0
hive.mapjoin.smalltable.filesize=25000000
hive.mapjoin.localtask.max.memory.usage=0.9
hive.optimize.skewjoin=false
hive.optimize.skewjoin.compiletime=false
hive.optimize.ppd=true
hive.optimize.ppd.storage=true
hive.ppd.recognizetransivity=true
hive.optimize.index.filter=false
hive.optimize.sampling.orderby=false
hive.optimize.sampling.orderby.number=1000
hive.optimize.sampling.orderby.percent=0.1
bigbench.hive.optimize.sampling.orderby=true
bigbench.hive.optimize.sampling.orderby.number=20000
bigbench.hive.optimize.sampling.orderby.percent=0.1
hive.groupby.skewindata=false
hive.exec.submit.local.task.via.child=true
OK
Time taken: 0.18 seconds
Warning: fs.defaultFs is not set when running "chgrp" command.
Warning: fs.defaultFs is not set when running "chmod" command.
OK
Time taken: 1.065 seconds
Warning: fs.defaultFs is not set when running "chgrp" command.
Warning: fs.defaultFs is not set when running "chmod" command.
Query ID = root_20170224230101_f0209f0a-4788-4431-93fa-26669e8d2ae6
Total jobs = 3
Execution log at: /tmp/root/root_20170224230101_f0209f0a-4788-4431-93fa-26669e8d2ae6.log
2017-02-24 11:01:56     Starting to launch local task to process map join;      maximum memory = 1013645312
2017-02-24 11:01:58     Dump the side-table for tag: 1 with group count: 17820 into file: file:/tmp/root/fd78687d-5874-47fe-a925-569af1471966/hive_2017-02-24_23-01-37_470_9122052348346478795-1/-local-10009/HashTable-Stage-2/MapJoin-mapfile21--.hashtable
2017-02-24 11:01:59     Uploaded 1 File to: file:/tmp/root/fd78687d-5874-47fe-a925-569af1471966/hive_2017-02-24_23-01-37_470_9122052348346478795-1/-local-10009/HashTable-Stage-2/MapJoin-mapfile21--.hashtable (544168 bytes)
2017-02-24 11:01:59     End of local task; Time Taken: 3.267 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2017-02-24 23:02:03,523 Stage-2 map = 0%,  reduce = 0%
2017-02-24 23:02:39,679 Stage-2 map = 100%,  reduce = 0%
2017-02-24 23:02:45,717 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_local463176514_0001
Execution log at: /tmp/root/root_20170224230101_f0209f0a-4788-4431-93fa-26669e8d2ae6.log
2017-02-24 11:02:56     Starting to launch local task to process map join;      maximum memory = 1013645312
2017-02-24 11:02:59     Dump the side-table for tag: 1 with group count: 98982 into file: file:/tmp/root/fd78687d-5874-47fe-a925-569af1471966/hive_2017-02-24_23-01-37_470_9122052348346478795-1/-local-10007/HashTable-Stage-10/MapJoin-mapfile11--.hashtable
2017-02-24 11:03:00     Uploaded 1 File to: file:/tmp/root/fd78687d-5874-47fe-a925-569af1471966/hive_2017-02-24_23-01-37_470_9122052348346478795-1/-local-10007/HashTable-Stage-10/MapJoin-mapfile11--.hashtable (2516730 bytes)
2017-02-24 11:03:00     End of local task; Time Taken: 4.008 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 2 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2017-02-24 23:03:01,988 Stage-10 map = 0%,  reduce = 0%
2017-02-24 23:03:04,994 Stage-10 map = 100%,  reduce = 0%
Ended Job = job_local547471894_0002
Execution log at: /tmp/root/root_20170224230101_f0209f0a-4788-4431-93fa-26669e8d2ae6.log
2017-02-24 11:03:14     Starting to launch local task to process map join;      maximum memory = 1013645312
2017-02-24 11:03:19     Processing rows:        200000  Hashtable size: 199999  Memory usage:   73341088        percentage:     0.072
2017-02-24 11:03:19     Processing rows:        300000  Hashtable size: 299999  Memory usage:   106512184       percentage:     0.105
2017-02-24 11:03:19     Processing rows:        400000  Hashtable size: 399999  Memory usage:   140428968       percentage:     0.139
2017-02-24 11:03:21     Processing rows:        500000  Hashtable size: 499999  Memory usage:   172044296       percentage:     0.17
2017-02-24 11:03:21     Processing rows:        600000  Hashtable size: 599999  Memory usage:   210755416       percentage:     0.208
2017-02-24 11:03:23     Processing rows:        700000  Hashtable size: 699999  Memory usage:   238890976       percentage:     0.236
2017-02-24 11:03:23     Processing rows:        800000  Hashtable size: 799999  Memory usage:   272441288       percentage:     0.269
2017-02-24 11:03:26     Processing rows:        900000  Hashtable size: 899999  Memory usage:   301332840       percentage:     0.297
2017-02-24 11:03:27     Processing rows:        1000000 Hashtable size: 999999  Memory usage:   334662576       percentage:     0.33
2017-02-24 11:03:27     Processing rows:        1100000 Hashtable size: 1099999 Memory usage:   380084240       percentage:     0.375
2017-02-24 11:03:27     Processing rows:        1200000 Hashtable size: 1199999 Memory usage:   413413968       percentage:     0.408
2017-02-24 11:03:27     Processing rows:        1300000 Hashtable size: 1299999 Memory usage:   446743704       percentage:     0.441
2017-02-24 11:03:32     Processing rows:        1400000 Hashtable size: 1399999 Memory usage:   469522624       percentage:     0.463
2017-02-24 11:03:32     Processing rows:        1500000 Hashtable size: 1499999 Memory usage:   502152456       percentage:     0.495
2017-02-24 11:03:32     Processing rows:        1600000 Hashtable size: 1599999 Memory usage:   534782288       percentage:     0.528
2017-02-24 11:03:33     Processing rows:        1700000 Hashtable size: 1699999 Memory usage:   571037672       percentage:     0.563
2017-02-24 11:03:33     Processing rows:        1800000 Hashtable size: 1799999 Memory usage:   603667504       percentage:     0.596
2017-02-24 11:03:33     Processing rows:        1900000 Hashtable size: 1899999 Memory usage:   639922864       percentage:     0.631
2017-02-24 11:03:33     Dump the side-table for tag: 1 with group count: 1920800 into file: file:/tmp/root/fd78687d-5874-47fe-a925-569af1471966/hive_2017-02-24_23-01-37_470_9122052348346478795-1/-local-10005/HashTable-Stage-9/MapJoin-mapfile01--.hashtable
2017-02-24 11:03:56     Uploaded 1 File to: file:/tmp/root/fd78687d-5874-47fe-a925-569af1471966/hive_2017-02-24_23-01-37_470_9122052348346478795-1/-local-10005/HashTable-Stage-9/MapJoin-mapfile01--.hashtable (66661611 bytes)
2017-02-24 11:03:56     End of local task; Time Taken: 42.169 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 3 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2017-02-24 23:04:06,221 Stage-9 map = 0%,  reduce = 0%
Ended Job = job_local1808407024_0003 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-2:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Stage-Stage-10:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Stage-Stage-9:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
An error occured while running command:
==========
runEngineCmd -f /root/big-bench/Big-Data-Benchmark-for-Big-Bench/engines/hive/queries/q05/q05.sql
==========
Please check the log files for details
======= q05_hive_engine_validation_power_test_0 time =======
Start timestamp: 2017/02/24:23:01:04 1487995264
Stop  timestamp: 2017/02/24:23:04:50 1487995490
Duration:  0h 3m 46s