-- set mapreduce.input.fileinputformat.split.minsize=1048576;
-- set mapreduce.input.fileinputformat.split.maxsize=134217728;
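The two split-size settings above bound how Hadoop sizes its input splits, and therefore how many mappers a stage gets. A minimal sketch, assuming the standard FileInputFormat rule `splitSize = max(minSize, min(maxSize, blockSize))` and an illustrative 128 MB HDFS block size (not taken from this cluster):

```python
# Sketch: how FileInputFormat derives the split size from the settings above.
# Block size and input size are illustrative assumptions.

def compute_split_size(min_size, max_size, block_size):
    # FileInputFormat: splitSize = max(minSize, min(maxSize, blockSize))
    return max(min_size, min(max_size, block_size))

def estimate_mappers(total_input_bytes, split_size):
    # Roughly one map task per split (ignores file boundaries / small files)
    return -(-total_input_bytes // split_size)  # ceiling division

block_size = 128 * 1024 * 1024  # assumed HDFS block size
split = compute_split_size(1048576, 134217728, block_size)
print(split)                                        # 134217728 (128 MB)
print(estimate_mappers(600 * 1024 * 1024, split))   # 5
```

With these bounds the split size collapses to 128 MB, so a ~600 MB input would get about 5 mappers.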
-- ###########################
-- reducer settings
-- ###########################
-- Number of reducers used by Hive
-- Hive's reducer estimate is mostly controlled by the following settings. Note: some query operations such as count(*) or DISTINCT force Hive to always use a single reducer.
-- default is 1 GB (1,000,000,000 bytes)
-- set hive.exec.reducers.bytes.per.reducer=1000000;
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
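The "Estimated from input data size: 1" line above follows from Hive's estimate, which is essentially min(ceil(inputBytes / bytes.per.reducer), reducers.max), floored at 1. A sketch with illustrative values (the 1 GB per-reducer default and a reducers.max of 1009 are assumptions, not read from this cluster):

```python
# Sketch of Hive's reducer estimate:
#   reducers = min(ceil(total_input_bytes / bytes_per_reducer), max_reducers)

def estimate_reducers(total_input_bytes, bytes_per_reducer, max_reducers):
    estimated = -(-total_input_bytes // bytes_per_reducer)  # ceiling division
    return max(1, min(estimated, max_reducers))

# With a 1 GB per-reducer target, any input under 1 GB yields one reducer.
print(estimate_reducers(600 * 1024 * 1024, 1_000_000_000, 1009))  # 1
print(estimate_reducers(10_000_000_000, 1_000_000_000, 1009))     # 10
```

Lowering hive.exec.reducers.bytes.per.reducer raises the estimate; setting mapreduce.job.reduces bypasses the estimate entirely.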
Kill Command = /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop/bin/hadoop job -kill job_1453316369292_0129
Hadoop job information for Stage-2: number of mappers: 5; number of reducers: 1
2016-01-20 16:18:44,055 Stage-2 map = 0%, reduce = 0%
Ended Job = job_1453316369292_0129 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-2: Map: 5 Reduce: 1 FAIL
Total MapReduce CPU Time Spent: -1 msec
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
An error occured while running command:
==========
runEngineCmd -f /root/Big-Data-Benchmark-for-Big-Bench/engines/hive/queries/q01/q01.sql
==========
Please check the log files for details
======= q01_hive_power_test_0 time =======
Start timestamp: 2016/01/20:16:18:23 1453324703
Stop timestamp: 2016/01/20:16:18:44 1453324724
Duration: 0h 0m 21s
q01_hive_power_test_0 FAILED exit code: 2
----- result -----
EMPTY bytes: 0
to display: hadoop fs -cat /user/root/benchmarks/bigbench/queryResults/q01_hive_power_test_0_result/*
----- logs -----