Running Dr. Elephant


Mohan P

Jul 10, 2018, 5:44:53 PM
to dr-elephant-users
Hello Guys,

I'm a newbie to Hadoop and Dr. Elephant.

I have a couple of questions regarding the installation of Dr. Elephant.

1. Does Dr. Elephant work with mysql-server 5.1.73? I ask because I'm using the CDH 5.13 VM, which ships with mysql-server 5.1.73, and I can't upgrade MySQL to a newer version.
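For context, these are roughly my database settings in app-conf/elephant.conf — I'm going from memory and from what my logs show (user root, database drelephant on localhost), so treat the exact keys as approximate:

```
# app-conf/elephant.conf (approximate; user/db names taken from my logs below)
db_url=localhost
db_name=drelephant
db_user=root
db_password=""
```

The "Access denied for user 'root'@'localhost' (using password: NO)" error in application.log went away once the MySQL grants matched these values, so I don't think the version itself is what's failing.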

2. Can we configure Dr. Elephant with PostgreSQL, and which files would need to be edited in that case?
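My guess is that, since Dr. Elephant is a Play application, the change would go in the Play db settings along these lines — but I haven't verified that these are the exact keys Dr. Elephant reads, and I assume the PostgreSQL JDBC driver would also need to be added as a dependency and the evolution SQL scripts checked for MySQL-specific syntax:

```
# conf/application.conf — hypothetical sketch, not verified against Dr. Elephant
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost:5432/drelephant"
db.default.user=drelephant
db.default.password="changeme"
```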

3. How do we add Scala 2.11 dependencies in Dependencies.scala in the project folder if our system runs Spark 2.3.0?
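In case it helps, this is roughly what I was planning to add. I'm not sure of the actual structure or variable names inside Dr. Elephant's Dependencies.scala, so everything below is a guess at standard sbt style:

```scala
// project/Dependencies.scala — hypothetical sketch; names are my own, not Dr. Elephant's.
import sbt._

object Dependencies {
  // Spark 2.3.0 artifacts are published for Scala 2.11, so %% should resolve
  // the _2.11 variants as long as scalaVersion is set to 2.11.x in the build.
  val sparkVersion = "2.3.0"

  val sparkDependencies: Seq[ModuleID] = Seq(
    "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
    "org.apache.spark" %% "spark-sql"  % sparkVersion % "provided"
  )
}
```

Is that the right approach, or does the build expect the Spark version to be passed some other way?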

Attached below is dr.log with the errors I got. Please help.



Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /etc/dr-elephant-2.1.7/dist/dr-elephant-2.0.13../logs/elephant/dr-gc.201807101026 due to No such file or directory

Play server process ID is 14421
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/etc/dr-elephant-2.1.7/dist/dr-elephant-2.0.13/lib/ch.qos.logback.logback-classic-1.0.13.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/etc/dr-elephant-2.1.7/dist/dr-elephant-2.0.13/lib/org.slf4j.slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/etc/dr-elephant-2.1.7/dist/dr-elephant-2.0.13/lib/org.slf4j.slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
[info] play - database [default] connected at jdbc:mysql://localhost/drelephant?characterEncoding=UTF-8
[info] application - Starting Application...
Oops, cannot start the server.
java.lang.RuntimeException: Could not invoke class com.linkedin.drelephant.tez.fetchers.TezFetcher
at com.linkedin.drelephant.ElephantContext.loadFetchers(ElephantContext.java:189)
at com.linkedin.drelephant.ElephantContext.loadConfiguration(ElephantContext.java:110)
at com.linkedin.drelephant.ElephantContext.<init>(ElephantContext.java:101)
at com.linkedin.drelephant.ElephantContext.instance(ElephantContext.java:94)
at com.linkedin.drelephant.DrElephant.<init>(DrElephant.java:42)
at Global.onStart(Global.java:43)
at play.core.j.JavaGlobalSettingsAdapter.onStart(JavaGlobalSettingsAdapter.scala:18)
at play.api.GlobalPlugin.onStart(GlobalSettings.scala:203)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.Play$.start(Play.scala:87)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:52)
at play.core.server.NettyServer$.createServer(NettyServer.scala:243)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:279)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:274)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:274)
at play.core.server.NettyServer.main(NettyServer.scala)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.linkedin.drelephant.ElephantContext.loadFetchers(ElephantContext.java:168)
... 22 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at com.linkedin.drelephant.tez.fetchers.TezFetcher$URLFactory.verifyURL(TezFetcher.java:150)
at com.linkedin.drelephant.tez.fetchers.TezFetcher$URLFactory.<init>(TezFetcher.java:144)
at com.linkedin.drelephant.tez.fetchers.TezFetcher$URLFactory.<init>(TezFetcher.java:138)
at com.linkedin.drelephant.tez.fetchers.TezFetcher.<init>(TezFetcher.java:57)
... 27 more

==============================================================================================================================
Application.log file 

==============================================================================================================================


2018-07-10 10:24:25,744 - [ERROR] - from com.jolbox.bonecp.hooks.AbstractConnectionHook in main
Failed to obtain initial connection Sleeping for 0ms and trying again. Attempts left: 0. Exception: null.Message:Access denied for user 'root'@'localhost' (using password: NO)

2018-07-10 10:26:41,773 - [INFO] - from play in main
database [default] connected at jdbc:mysql://localhost/drelephant?characterEncoding=UTF-8

2018-07-10 10:26:42,682 - [INFO] - from application in main
Starting Application...

=======================================================================================================================================

dr_elephant.log in the before folder

========================================================================================================================================

07-10-2018 01:52:32 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:35694/api/v1/applications/application_1
07-10-2018 01:52:33 INFO  [ForkJoinPool-1-worker-1] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:35694/api/v1/applications/application_1
07-10-2018 01:52:33 INFO  [ForkJoinPool-1-worker-3] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:35694/api/v1/applications/application_1/2/logs to get eventlogs
07-10-2018 01:52:34 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:42943/api/v1/applications/application_1
07-10-2018 01:52:34 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : creating SparkApplication by calling REST API at http://localhost:42943/api/v1/applications/application_1/2/logs to get eventlogs
07-10-2018 01:52:34 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:53361/api/v1/applications/application_1
07-10-2018 01:52:34 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : creating SparkApplication by calling REST API at http://localhost:53361/api/v1/applications/application_1/2/logs to get eventlogs
07-10-2018 01:52:34 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:53704/api/v1/applications/application_1
07-10-2018 01:52:34 INFO  [ForkJoinPool-1-worker-1] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:53704/api/v1/applications/application_1
07-10-2018 01:52:34 INFO  [ForkJoinPool-1-worker-3] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:53704/api/v1/applications/application_1/logs to get eventlogs
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:40365/api/v1/applications/application_1
07-10-2018 01:52:35 WARN  [pool-1-thread-1-ScalaTest-running-SparkMetricsAggregatorTest] com.linkedin.drelephant.spark.SparkMetricsAggregator : applicationDurationMillis is negative. Skipping Metrics Aggregation:-8000000
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Succeeded fetching data for application_1
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:52:35 WARN  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Failed fetching data for application_1. I will retry after some time! Exception Message is: null
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:52:35 WARN  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Failed fetching data for application_1. I will retry after some time! Exception Message is: null
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Succeeded fetching data for application_1
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:52:35 WARN  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Failed fetching data for application_1. I will retry after some time! Exception Message is: null
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 50.0 MB
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 100.0 MB
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 100.0 MB
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 100.0 MB
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : Replaying Spark logs for application: application_1 withlogPath: webhdfs://nn1.grid.example.com:50070/logs/spark/application_1_1.snappy with codec:Some(org.apache.spark.io.SnappyCompressionCodec@688cec11)
07-10-2018 01:52:35 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : Replay completed for application: application_1
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file AggregatorConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: AggregatorConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Aggregator : com.linkedin.drelephant.mapreduce.MapReduceMetricsAggregator
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Aggregator : com.linkedin.drelephant.spark.SparkMetricsAggregator
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file FetcherConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: FetcherConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : The history log limit of MapReduce application is set to 500.0 MB
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Using timezone: PST
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Intermediate history dir: /tmp/hadoop-yarn/staging/history/done_intermediate
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : History done dir: /tmp/hadoop-yarn/staging/history/done
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Fetcher : com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Fetcher : com.linkedin.drelephant.spark.fetchers.FSFetcher
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file HeuristicConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: HeuristicConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Mapper Skew will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Mapper Skew will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Mapper Skew will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperSkewHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperSkew
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Mapper GC will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Mapper GC will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperGCHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpGC
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : Mapper Time will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : Mapper Time will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : Mapper Time will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperTime
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : Mapper Speed will use disk_speed_severity with the following threshold settings: [0.5, 0.25, 0.125, 0.03125]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : Mapper Speed will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 15.0, 30.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperSpeed
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : Mapper Spill will use num_tasks_severity with the following threshold settings: [50.0, 100.0, 500.0, 1000.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : Mapper Spill will use spill_severity with the following threshold settings: [2.01, 2.2, 2.5, 3.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperSpill
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Mapper Memory will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Mapper Memory will use container_memory_default_mb with the following threshold setting: 2147483648
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Mapper Memory will use container_memory_severity with the following threshold settings: [1.1, 1.5, 2.0, 2.5]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperMemoryHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperMemory
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Reducer Skew will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Reducer Skew will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Reducer Skew will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerSkewHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpReducerSkew
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Reducer GC will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Reducer GC will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerGCHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpGC
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : Reducer Time will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : Reducer Time will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : Reducer Time will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpReducerTime
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Reducer Memory will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Reducer Memory will use container_memory_default_mb with the following threshold setting: 2147483648
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Reducer Memory will use container_memory_severity with the following threshold settings: [1.1, 1.5, 2.0, 2.5]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerMemoryHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpReducerMemory
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : Shuffle & Sort will use runtime_ratio_severity with the following threshold settings: [1.0, 2.0, 4.0, 8.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : Shuffle & Sort will use runtime_severity_in_min with the following threshold settings: [1.0, 5.0, 10.0, 30.0]
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpShuffleSort
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ExceptionHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpException
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpDistributedCacheLimit
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.ConfigurationHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpConfigurationHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.ExecutorsHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpExecutorsHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.JobsHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpJobsHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.StagesHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpStagesHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.ExecutorGcHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpExecutorGcHeuristic
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file JobTypeConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: JobTypeConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:false, confName:pig.script, confValue:.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Hive, for application type:mapreduce, isDefault:false, confName:hive.mapred.mode, confValue:.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:OozieLauncher, for application type:mapreduce, isDefault:false, confName:oozie.launcher.action.main.class, confValue:.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Cascading, for application type:mapreduce, isDefault:false, confName:cascading.app.frameworks, confValue:.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Voldemort, for application type:mapreduce, isDefault:false, confName:mapred.reducer.class, confValue:voldemort.store.readonly.mr.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Kafka, for application type:mapreduce, isDefault:false, confName:kafka.url, confValue:.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:HadoopJava, for application type:mapreduce, isDefault:true, confName:mapred.child.java.opts, confValue:.*.
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded total 8 job types for 2 app types
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Loading configuration file GeneralConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Loading configuration file AutoTuningConf.xml
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Configuring ElephantContext...
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Supports SPARK application type, using com.linkedin.drelephant.spark.fetchers.FSFetcher@68c16c1a fetcher class with Heuristics [com.linkedin.drelephant.spark.heuristics.ConfigurationHeuristic, com.linkedin.drelephant.spark.heuristics.ExecutorsHeuristic, com.linkedin.drelephant.spark.heuristics.JobsHeuristic, com.linkedin.drelephant.spark.heuristics.StagesHeuristic, com.linkedin.drelephant.spark.heuristics.ExecutorGcHeuristic] and following JobTypes [Spark].
07-10-2018 01:52:38 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Supports MAPREDUCE application type, using com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2@6e0da99d fetcher class with Heuristics [com.linkedin.drelephant.mapreduce.heuristics.MapperSkewHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperGCHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperMemoryHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerSkewHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerGCHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerMemoryHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ExceptionHeuristic, com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic] and following JobTypes [Pig, Hive, OozieLauncher, Cascading, Voldemort, Kafka, HadoopJava].
07-10-2018 01:52:38 INFO  [Thread-12] com.linkedin.drelephant.ElephantRunner : Dr.elephant has started
07-10-2018 01:52:38 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : No login user. Creating login user
07-10-2018 01:52:38 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : Logging with null and null
07-10-2018 01:52:38 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : Logged in with user root (auth:SIMPLE)
07-10-2018 01:52:38 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : Login is not keytab based
07-10-2018 01:52:38 INFO  [Thread-12] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:39 ERROR [Thread-12] com.linkedin.drelephant.ElephantRunner : Unsupported Hadoop major version detected. It is not 2.x.
07-10-2018 01:52:39 ERROR [Thread-12] com.linkedin.drelephant.ElephantRunner : java.lang.RuntimeException: Unsupported Hadoop major version detected. It is not 2.x.
    at com.linkedin.drelephant.ElephantRunner.loadAnalyticJobGenerator(ElephantRunner.java:80)
    at com.linkedin.drelephant.ElephantRunner.access$100(ElephantRunner.java:48)
    at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:101)
    at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:96)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:360)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1608)
    at com.linkedin.drelephant.security.HadoopSecurity.doAs(HadoopSecurity.java:109)
    at com.linkedin.drelephant.ElephantRunner.run(ElephantRunner.java:96)
    at com.linkedin.drelephant.DrElephant.run(DrElephant.java:58)

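[Editor's note] The RuntimeException above is thrown from ElephantRunner.loadAnalyticJobGenerator when Dr. Elephant cannot detect a Hadoop 2.x major version. A minimal sketch of that kind of check, assuming a `hadoop version` banner line like the one CDH 5.13 prints (the parsing below is illustrative only, not Dr. Elephant's actual code):

```python
# Illustrative sketch (not Dr. Elephant's actual implementation): the
# runner aborts unless the detected Hadoop major version is 2. On CDH
# 5.13 the first line of `hadoop version` looks like
# "Hadoop 2.6.0-cdh5.13.0", so this check should pass; the error in the
# log therefore points at failed version detection (e.g. HADOOP_HOME /
# hadoop not on the PATH of the Dr. Elephant process) rather than a
# genuinely unsupported Hadoop.

def hadoop_major_version(banner_line):
    """Parse the major version out of a 'Hadoop X.Y.Z' banner line."""
    version_token = banner_line.split()[1]      # e.g. "2.6.0-cdh5.13.0"
    return int(version_token.split(".")[0])

banner = "Hadoop 2.6.0-cdh5.13.0"               # sample CDH 5.13 banner
if hadoop_major_version(banner) != 2:
    raise RuntimeError(
        "Unsupported Hadoop major version detected. It is not 2.x.")
```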
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:false, confName:pig.script, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Hive, for application type:mapreduce, isDefault:false, confName:hive.mapred.mode, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Cascading, for application type:mapreduce, isDefault:false, confName:cascading.app.frameworks, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:HadoopJava, for application type:mapreduce, isDefault:true, confName:mapred.child.java.opts, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded total 5 job types for 2 app types
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:false, confName:pig.script, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:true, confName:pig.script, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Hive, for application type:mapreduce, isDefault:true, confName:hive.mapred.mode, confValue:.*.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : The history log limit of MapReduce application is set to 200.0 MB
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Using timezone: PST
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Intermediate history dir: /tmp/hadoop-yarn/staging/history/done_intermediate
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : History done dir: /tmp/hadoop-yarn/staging/history/done
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFetcher : appId needs sampling.
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : The history log limit of MapReduce application is set to 500.0 MB
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Using timezone: America/Los_Angeles
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Intermediate history dir: /tmp/hadoop-yarn/staging/history/done_intermediate
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : History done dir: /tmp/hadoop-yarn/staging/history/done
07-10-2018 01:52:39 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Could not find 4 threshold levels in 2, 4, 8
07-10-2018 01:52:39 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Could not evaluate 2& in 2&
07-10-2018 01:52:39 WARN  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuration foo2 is negative. Resetting it to 0
07-10-2018 01:52:39 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo4. Value is 0.5. Resetting it to default value: 50
07-10-2018 01:52:39 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo5. Value is 9999999999999999. Resetting it to default value: 50
07-10-2018 01:52:39 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo6. Value is bar. Resetting it to default value: 50
07-10-2018 01:52:39 WARN  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuration foo2 is negative. Resetting it to 0
07-10-2018 01:52:39 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo4. Value is 0.5. Resetting it to default value: 50
07-10-2018 01:52:39 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo6. Value is bar. Resetting it to default value: 50
07-10-2018 01:52:39 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Truncating foo-bar to 6 characters for id
07-10-2018 01:52:41 INFO  [play-akka.actor.default-dispatcher-5] org.hibernate.validator.internal.util.Version : HV000001: Hibernate Validator 5.0.1.Final
07-10-2018 01:52:42 INFO  [play-akka.actor.default-dispatcher-5] com.linkedin.drelephant.util.Utils : Loading configuration file SchedulerConf.xml
07-10-2018 01:52:42 INFO  [play-akka.actor.default-dispatcher-5] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: SchedulerConf.xml
07-10-2018 01:52:42 INFO  [play-akka.actor.default-dispatcher-5] com.linkedin.drelephant.util.InfoExtractor : Load Scheduler airflow with class : com.linkedin.drelephant.schedulers.AirflowScheduler
07-10-2018 01:52:42 INFO  [play-akka.actor.default-dispatcher-5] com.linkedin.drelephant.util.InfoExtractor : Load Scheduler azkaban with class : com.linkedin.drelephant.schedulers.AzkabanScheduler
07-10-2018 01:52:42 INFO  [play-akka.actor.default-dispatcher-5] com.linkedin.drelephant.util.InfoExtractor : Load Scheduler oozie with class : com.linkedin.drelephant.schedulers.OozieScheduler
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.exceptions.EventExceptionTest : correct messagePath is not a file: /data/sample/Sample/Sample/1466675602538-PT-472724050
07-10-2018 01:52:45 WARN  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic : Mismatch in the number of files and their corresponding sizes for mapreduce.job.cache.archives
07-10-2018 01:52:45 WARN  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic : Mismatch in the number of files and their corresponding sizes for mapreduce.job.cache.files
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : test_heuristic will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : test_heuristic will use container_memory_default_mb with the following threshold setting: 2147483648
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : test_heuristic will use container_memory_severity with the following threshold settings: [1.1, 1.5, 2.0, 2.5]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 100.0, 500.0, 1000.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : test_heuristic will use spill_severity with the following threshold settings: [2.01, 2.2, 2.5, 3.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpillHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 100.0, 500.0, 1000.0]
07-10-2018 01:52:45 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpillHeuristic : test_heuristic will use spill_severity with the following threshold settings: [2.01, 2.2, 2.5, 3.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : test_heuristic will use runtime_ratio_severity with the following threshold settings: [1.0, 2.0, 4.0, 8.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [1.0, 5.0, 10.0, 30.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericMemoryHeuristic : test_heuristic will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : test_heuristic will use disk_speed_severity with the following threshold settings: [0.5, 0.25, 0.125, 0.03125]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 15.0, 30.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.ReducerTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.ReducerTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.ReducerTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericMemoryHeuristic : test_heuristic will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpeedHeuristic : test_heuristic will use disk_speed_severity with the following threshold settings: [0.5, 0.25, 0.125, 0.03125]
07-10-2018 01:52:46 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpeedHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 15.0, 30.0]
07-10-2018 01:52:47 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:48 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:48 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:48 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:49 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.InfoExtractor : Unable to retrieve the scheduler info for application [application_5678]. It does not contain [spark.driver.extraJavaOptions] property in its spark properties.
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.InfoExtractor : No Scheduler found for appid: application_5678
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:49 INFO  [Thread-266] com.linkedin.drelephant.ElephantRunner : Dr.elephant has started
07-10-2018 01:52:49 INFO  [Thread-266] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:49 ERROR [Thread-266] com.linkedin.drelephant.ElephantRunner : Unsupported Hadoop major version detected. It is not 2.x.
07-10-2018 01:52:49 ERROR [Thread-266] com.linkedin.drelephant.ElephantRunner : java.lang.RuntimeException: Unsupported Hadoop major version detected. It is not 2.x.
at com.linkedin.drelephant.ElephantRunner.loadAnalyticJobGenerator(ElephantRunner.java:80)
at com.linkedin.drelephant.ElephantRunner.access$100(ElephantRunner.java:48)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:101)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:96)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1608)
at com.linkedin.drelephant.security.HadoopSecurity.doAs(HadoopSecurity.java:109)
at com.linkedin.drelephant.ElephantRunner.run(ElephantRunner.java:96)
at com.linkedin.drelephant.DrElephant.run(DrElephant.java:58)

07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:49 INFO  [Thread-273] com.linkedin.drelephant.ElephantRunner : Dr.elephant has started
07-10-2018 01:52:49 INFO  [Thread-273] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: org.apache.oozie.client.$Impl_WorkflowJob@57e15318
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0004166-160629080632562-oozie-oozi-W
07-10-2018 01:52:49 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted
07-10-2018 01:52:49 ERROR [Thread-273] com.linkedin.drelephant.ElephantRunner : Unsupported Hadoop major version detected. It is not 2.x.
07-10-2018 01:52:49 ERROR [Thread-273] com.linkedin.drelephant.ElephantRunner : java.lang.RuntimeException: Unsupported Hadoop major version detected. It is not 2.x.
at com.linkedin.drelephant.ElephantRunner.loadAnalyticJobGenerator(ElephantRunner.java:80)
at com.linkedin.drelephant.ElephantRunner.access$100(ElephantRunner.java:48)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:101)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:96)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1608)
at com.linkedin.drelephant.security.HadoopSecurity.doAs(HadoopSecurity.java:109)
at com.linkedin.drelephant.ElephantRunner.run(ElephantRunner.java:96)
at com.linkedin.drelephant.DrElephant.run(DrElephant.java:58)

07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: workflowJob
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: manualChildJob
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0143705-160828184536493-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_exec_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_job_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_job_exec_url_template param for Oozie Scheduler
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: scheduledChildJob
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0163255-160828184536493-oozie-oozie-C@1537
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action is scheduled with coordinator
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_exec_url_template param for Oozie Scheduler
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: manualChildJob
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0143705-160828184536493-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_exec_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_job_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_job_exec_url_template param for Oozie Scheduler
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: scheduledChildJob
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0163255-160828184536493-oozie-oozie-C@1537
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action is scheduled with coordinator
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_url_template param for Oozie Scheduler
07-10-2018 01:52:50 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_exec_url_template param for Oozie Scheduler
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: manualChildJob
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0143705-160828184536493-oozie-oozi-W
07-10-2018 01:52:50 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted
07-10-2018 01:58:47 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:38892/api/v1/applications/application_1
07-10-2018 01:58:48 INFO  [ForkJoinPool-1-worker-1] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:38892/api/v1/applications/application_1
07-10-2018 01:58:48 INFO  [ForkJoinPool-1-worker-1] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:38892/api/v1/applications/application_1/2/logs to get eventlogs
07-10-2018 01:58:48 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:37141/api/v1/applications/application_1
07-10-2018 01:58:48 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : creating SparkApplication by calling REST API at http://localhost:37141/api/v1/applications/application_1/2/logs to get eventlogs
07-10-2018 01:58:48 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:38449/api/v1/applications/application_1
07-10-2018 01:58:48 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : creating SparkApplication by calling REST API at http://localhost:38449/api/v1/applications/application_1/2/logs to get eventlogs
07-10-2018 01:58:48 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:46438/api/v1/applications/application_1
07-10-2018 01:58:49 INFO  [ForkJoinPool-1-worker-1] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:46438/api/v1/applications/application_1
07-10-2018 01:58:49 INFO  [ForkJoinPool-1-worker-3] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:46438/api/v1/applications/application_1/logs to get eventlogs
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkRestClientTest] com.linkedin.drelephant.spark.fetchers.SparkRestClient : calling REST API at http://localhost:47082/api/v1/applications/application_1
07-10-2018 01:58:49 WARN  [pool-1-thread-1-ScalaTest-running-SparkMetricsAggregatorTest] com.linkedin.drelephant.spark.SparkMetricsAggregator : applicationDurationMillis is negative. Skipping Metrics Aggregation:-8000000
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Succeeded fetching data for application_1
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:58:49 WARN  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Failed fetching data for application_1. I will retry after some time! Exception Message is: null
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:58:49 WARN  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Failed fetching data for application_1. I will retry after some time! Exception Message is: null
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Succeeded fetching data for application_1
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Fetching data for application_1
07-10-2018 01:58:49 WARN  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : Failed fetching data for application_1. I will retry after some time! Exception Message is: null
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFetcherTest] com.linkedin.drelephant.spark.fetchers.SparkFetcher : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 50.0 MB
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 100.0 MB
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 100.0 MB
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log limit of Spark application is set to 100.0 MB
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : The event log location of Spark application is set to None
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : Replaying Spark logs for application: application_1 withlogPath: webhdfs://nn1.grid.example.com:50070/logs/spark/application_1_1.snappy with codec:Some(org.apache.spark.io.SnappyCompressionCodec@69d51728)
07-10-2018 01:58:49 INFO  [pool-1-thread-1-ScalaTest-running-SparkFsFetcherTest] org.apache.spark.deploy.history.SparkFSFetcher$ : Replay completed for application: application_1
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file AggregatorConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: AggregatorConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Aggregator : com.linkedin.drelephant.mapreduce.MapReduceMetricsAggregator
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Aggregator : com.linkedin.drelephant.spark.SparkMetricsAggregator
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file FetcherConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: FetcherConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : The history log limit of MapReduce application is set to 500.0 MB
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Using timezone: PST
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Intermediate history dir: /tmp/hadoop-yarn/staging/history/done_intermediate
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : History done dir: /tmp/hadoop-yarn/staging/history/done
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Fetcher : com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Fetcher : com.linkedin.drelephant.spark.fetchers.FSFetcher
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file HeuristicConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: HeuristicConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Mapper Skew will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Mapper Skew will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Mapper Skew will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperSkewHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperSkew
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Mapper GC will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Mapper GC will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperGCHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpGC
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : Mapper Time will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : Mapper Time will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : Mapper Time will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperTime
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : Mapper Speed will use disk_speed_severity with the following threshold settings: [0.5, 0.25, 0.125, 0.03125]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : Mapper Speed will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 15.0, 30.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperSpeed
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : Mapper Spill will use num_tasks_severity with the following threshold settings: [50.0, 100.0, 500.0, 1000.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : Mapper Spill will use spill_severity with the following threshold settings: [2.01, 2.2, 2.5, 3.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperSpill
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Mapper Memory will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Mapper Memory will use container_memory_default_mb with the following threshold setting: 2147483648
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Mapper Memory will use container_memory_severity with the following threshold settings: [1.1, 1.5, 2.0, 2.5]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.MapperMemoryHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpMapperMemory
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Reducer Skew will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Reducer Skew will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : Reducer Skew will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerSkewHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpReducerSkew
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Reducer GC will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : Reducer GC will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerGCHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpGC
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : Reducer Time will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : Reducer Time will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : Reducer Time will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpReducerTime
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Reducer Memory will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Reducer Memory will use container_memory_default_mb with the following threshold setting: 2147483648
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : Reducer Memory will use container_memory_severity with the following threshold settings: [1.1, 1.5, 2.0, 2.5]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ReducerMemoryHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpReducerMemory
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : Shuffle & Sort will use runtime_ratio_severity with the following threshold settings: [1.0, 2.0, 4.0, 8.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : Shuffle & Sort will use runtime_severity_in_min with the following threshold settings: [1.0, 5.0, 10.0, 30.0]
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpShuffleSort
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.ExceptionHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpException
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.mapreduce.helpDistributedCacheLimit
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.ConfigurationHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpConfigurationHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.ExecutorsHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpExecutorsHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.JobsHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpJobsHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.StagesHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpStagesHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load Heuristic : com.linkedin.drelephant.spark.heuristics.ExecutorGcHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Load View : views.html.help.spark.helpExecutorGcHeuristic
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Loading configuration file JobTypeConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: JobTypeConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:false, confName:pig.script, confValue:.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Hive, for application type:mapreduce, isDefault:false, confName:hive.mapred.mode, confValue:.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:OozieLauncher, for application type:mapreduce, isDefault:false, confName:oozie.launcher.action.main.class, confValue:.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Cascading, for application type:mapreduce, isDefault:false, confName:cascading.app.frameworks, confValue:.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Voldemort, for application type:mapreduce, isDefault:false, confName:mapred.reducer.class, confValue:voldemort.store.readonly.mr.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Kafka, for application type:mapreduce, isDefault:false, confName:kafka.url, confValue:.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:HadoopJava, for application type:mapreduce, isDefault:true, confName:mapred.child.java.opts, confValue:.*.
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded total 8 job types for 2 app types
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Loading configuration file GeneralConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Loading configuration file AutoTuningConf.xml
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Configuring ElephantContext...
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Supports SPARK application type, using com.linkedin.drelephant.spark.fetchers.FSFetcher@1b286bcf fetcher class with Heuristics [com.linkedin.drelephant.spark.heuristics.ConfigurationHeuristic, com.linkedin.drelephant.spark.heuristics.ExecutorsHeuristic, com.linkedin.drelephant.spark.heuristics.JobsHeuristic, com.linkedin.drelephant.spark.heuristics.StagesHeuristic, com.linkedin.drelephant.spark.heuristics.ExecutorGcHeuristic] and following JobTypes [Spark].
07-10-2018 01:58:51 INFO  [pool-1-thread-1] com.linkedin.drelephant.ElephantContext : Supports MAPREDUCE application type, using com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2@4d8eb4d4 fetcher class with Heuristics [com.linkedin.drelephant.mapreduce.heuristics.MapperSkewHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperGCHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic, com.linkedin.drelephant.mapreduce.heuristics.MapperMemoryHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerSkewHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerGCHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ReducerMemoryHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic, com.linkedin.drelephant.mapreduce.heuristics.ExceptionHeuristic, com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic] and following JobTypes [Pig, Hive, OozieLauncher, Cascading, Voldemort, Kafka, HadoopJava].
07-10-2018 01:58:51 INFO  [Thread-12] com.linkedin.drelephant.ElephantRunner : Dr.elephant has started
07-10-2018 01:58:51 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : No login user. Creating login user
07-10-2018 01:58:51 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : Logging with null and null
07-10-2018 01:58:51 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : Logged in with user root (auth:SIMPLE)
07-10-2018 01:58:51 INFO  [Thread-12] com.linkedin.drelephant.security.HadoopSecurity : Login is not keytab based
07-10-2018 01:58:52 INFO  [Thread-12] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:58:52 ERROR [Thread-12] com.linkedin.drelephant.ElephantRunner : Unsupported Hadoop major version detected. It is not 2.x.
07-10-2018 01:58:52 ERROR [Thread-12] com.linkedin.drelephant.ElephantRunner : java.lang.RuntimeException: Unsupported Hadoop major version detected. It is not 2.x.
at com.linkedin.drelephant.ElephantRunner.loadAnalyticJobGenerator(ElephantRunner.java:80)
at com.linkedin.drelephant.ElephantRunner.access$100(ElephantRunner.java:48)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:101)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:96)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1608)
at com.linkedin.drelephant.security.HadoopSecurity.doAs(HadoopSecurity.java:109)
at com.linkedin.drelephant.ElephantRunner.run(ElephantRunner.java:96)
at com.linkedin.drelephant.DrElephant.run(DrElephant.java:58)
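The "Unsupported Hadoop major version" failure above is thrown from ElephantRunner.loadAnalyticJobGenerator when the Hadoop client on Dr. Elephant's classpath does not report a 2.x version; on a CDH 5.13 VM this often means the `hadoop` command/classpath was not visible to the start script, so version detection fails. A minimal sketch of the kind of check involved (hypothetical helper name, not Dr. Elephant's actual code):

```python
def hadoop_major_version(version: str) -> int:
    # Parse the major component out of a Hadoop version string,
    # e.g. "2.6.0-cdh5.13.0" -> 2; raises ValueError on an empty
    # or garbled string (which would also abort the runner).
    return int(version.split(".", 1)[0])

# Dr. Elephant's runner only proceeds when the major version is 2:
assert hadoop_major_version("2.6.0-cdh5.13.0") == 2
assert hadoop_major_version("3.0.0") != 2
```

If `hadoop version` on the VM prints a 2.x release, the usual suspect is the environment (HADOOP_HOME / HADOOP_CONF_DIR) seen by the process that launched Dr. Elephant, not the cluster itself.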

07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:false, confName:pig.script, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Hive, for application type:mapreduce, isDefault:false, confName:hive.mapred.mode, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Cascading, for application type:mapreduce, isDefault:false, confName:cascading.app.frameworks, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:HadoopJava, for application type:mapreduce, isDefault:true, confName:mapred.child.java.opts, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded total 5 job types for 2 app types
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:false, confName:pig.script, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Spark, for application type:spark, isDefault:true, confName:spark.app.id, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Pig, for application type:mapreduce, isDefault:true, confName:pig.script, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.configurations.jobtype.JobTypeConfiguration : Loaded jobType:Hive, for application type:mapreduce, isDefault:true, confName:hive.mapred.mode, confValue:.*.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : The history log limit of MapReduce application is set to 200.0 MB
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Using timezone: PST
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Intermediate history dir: /tmp/hadoop-yarn/staging/history/done_intermediate
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : History done dir: /tmp/hadoop-yarn/staging/history/done
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFetcher : appId needs sampling.
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : The history log limit of MapReduce application is set to 500.0 MB
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Using timezone: America/Los_Angeles
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : Intermediate history dir: /tmp/hadoop-yarn/staging/history/done_intermediate
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.fetchers.MapReduceFSFetcherHadoop2 : History done dir: /tmp/hadoop-yarn/staging/history/done
07-10-2018 01:58:52 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Could not find 4 threshold levels in 2, 4, 8
07-10-2018 01:58:52 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Could not evaluate 2& in 2&
07-10-2018 01:58:52 WARN  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuration foo2 is negative. Resetting it to 0
07-10-2018 01:58:52 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo4. Value is 0.5. Resetting it to default value: 50
07-10-2018 01:58:52 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo5. Value is 9999999999999999. Resetting it to default value: 50
07-10-2018 01:58:52 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo6. Value is bar. Resetting it to default value: 50
07-10-2018 01:58:52 WARN  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Configuration foo2 is negative. Resetting it to 0
07-10-2018 01:58:52 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo4. Value is 0.5. Resetting it to default value: 50
07-10-2018 01:58:52 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Invalid configuration foo6. Value is bar. Resetting it to default value: 50
07-10-2018 01:58:52 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.Utils : Truncating foo-bar to 6 characters for id
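The "Could not find 4 threshold levels in 2, 4, 8" and "Invalid configuration foo4 ... Resetting it to default value" messages above come from Dr. Elephant's Utils validating heuristic threshold strings, which must contain exactly four comma-separated severity cut-offs (one per severity level) and otherwise fall back to defaults. A minimal sketch of that validation pattern (hypothetical helper, assumed behavior inferred from the log lines only):

```python
def parse_thresholds(raw: str, defaults: list) -> list:
    # Split e.g. "1.0, 2.0, 4.0, 8.0" into four severity cut-offs;
    # fall back to the defaults when the count is wrong or a value
    # does not parse, mirroring the log's "Could not find 4
    # threshold levels" / "Invalid configuration" fallbacks.
    parts = [p.strip() for p in raw.split(",")]
    if len(parts) != 4:
        return list(defaults)
    try:
        return [float(p) for p in parts]
    except ValueError:
        return list(defaults)

assert parse_thresholds("2, 4, 8", [1.0, 2.0, 4.0, 8.0]) == [1.0, 2.0, 4.0, 8.0]
assert parse_thresholds("1.0, 2.0, 4.0, 8.0", []) == [1.0, 2.0, 4.0, 8.0]
```

These messages (along with the `foo2`/`foo-bar` names) look like output from Dr. Elephant's own test suite rather than a problem with the deployment, so they can usually be ignored when diagnosing the startup failure.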
07-10-2018 01:58:54 INFO  [play-akka.actor.default-dispatcher-4] org.hibernate.validator.internal.util.Version : HV000001: Hibernate Validator 5.0.1.Final
07-10-2018 01:58:55 INFO  [play-akka.actor.default-dispatcher-4] com.linkedin.drelephant.util.Utils : Loading configuration file SchedulerConf.xml
07-10-2018 01:58:55 INFO  [play-akka.actor.default-dispatcher-4] com.linkedin.drelephant.util.Utils : Configuation file loaded. File: SchedulerConf.xml
07-10-2018 01:58:55 INFO  [play-akka.actor.default-dispatcher-4] com.linkedin.drelephant.util.InfoExtractor : Load Scheduler airflow with class : com.linkedin.drelephant.schedulers.AirflowScheduler
07-10-2018 01:58:55 INFO  [play-akka.actor.default-dispatcher-4] com.linkedin.drelephant.util.InfoExtractor : Load Scheduler azkaban with class : com.linkedin.drelephant.schedulers.AzkabanScheduler
07-10-2018 01:58:55 INFO  [play-akka.actor.default-dispatcher-4] com.linkedin.drelephant.util.InfoExtractor : Load Scheduler oozie with class : com.linkedin.drelephant.schedulers.OozieScheduler
07-10-2018 01:58:58 INFO  [pool-1-thread-1] com.linkedin.drelephant.exceptions.EventExceptionTest : correct messagePath is not a file: /data/sample/Sample/Sample/1466675602538-PT-472724050
07-10-2018 01:58:58 WARN  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic : Mismatch in the number of files and their corresponding sizes for mapreduce.job.cache.archives
07-10-2018 01:58:58 WARN  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.DistributedCacheLimitHeuristic : Mismatch in the number of files and their corresponding sizes for mapreduce.job.cache.files
07-10-2018 01:58:58 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : test_heuristic will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:58:58 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : test_heuristic will use container_memory_default_mb with the following threshold setting: 2147483648
07-10-2018 01:58:58 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericMemoryHeuristic : test_heuristic will use container_memory_severity with the following threshold settings: [1.1, 1.5, 2.0, 2.5]
07-10-2018 01:58:58 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:58:58 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:58:58 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ReducerTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 100.0, 500.0, 1000.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpillHeuristic : test_heuristic will use spill_severity with the following threshold settings: [2.01, 2.2, 2.5, 3.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpillHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 100.0, 500.0, 1000.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpillHeuristic : test_heuristic will use spill_severity with the following threshold settings: [2.01, 2.2, 2.5, 3.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : test_heuristic will use runtime_ratio_severity with the following threshold settings: [1.0, 2.0, 4.0, 8.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.ShuffleSortHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [1.0, 5.0, 10.0, 30.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericMemoryHeuristic : test_heuristic will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : test_heuristic will use disk_speed_severity with the following threshold settings: [0.5, 0.25, 0.125, 0.03125]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperSpeedHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 15.0, 30.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.ReducerTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.ReducerTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.ReducerTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericMemoryHeuristic : test_heuristic will use memory_ratio_severity with the following threshold settings: [0.6, 0.5, 0.4, 0.3]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [10.0, 50.0, 100.0, 200.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use deviation_severity with the following threshold settings: [2.0, 4.0, 8.0, 16.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericDataSkewHeuristic : test_heuristic will use files_severity with the following threshold settings: [0.125, 0.25, 0.5, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : test_heuristic will use short_runtime_severity_in_min with the following threshold settings: [10.0, 4.0, 2.0, 1.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : test_heuristic will use long_runtime_severity_in_min with the following threshold settings: [15.0, 30.0, 60.0, 120.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.mapreduce.heuristics.MapperTimeHeuristic : test_heuristic will use num_tasks_severity with the following threshold settings: [50.0, 101.0, 500.0, 1000.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use gc_ratio_severity with the following threshold settings: [0.01, 0.02, 0.03, 0.04]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.GenericGCHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 12.0, 15.0]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpeedHeuristic : test_heuristic will use disk_speed_severity with the following threshold settings: [0.5, 0.25, 0.125, 0.03125]
07-10-2018 01:58:59 INFO  [pool-1-thread-1] com.linkedin.drelephant.tez.heuristics.MapperSpeedHeuristic : test_heuristic will use runtime_severity_in_min with the following threshold settings: [5.0, 10.0, 15.0, 30.0]
07-10-2018 01:59:01 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:59:01 INFO  [Thread-224] com.linkedin.drelephant.ElephantRunner : Dr.elephant has started
07-10-2018 01:59:01 INFO  [Thread-224] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:59:01 ERROR [Thread-224] com.linkedin.drelephant.ElephantRunner : Unsupported Hadoop major version detected. It is not 2.x.
07-10-2018 01:59:01 ERROR [Thread-224] com.linkedin.drelephant.ElephantRunner : java.lang.RuntimeException: Unsupported Hadoop major version detected. It is not 2.x.
at com.linkedin.drelephant.ElephantRunner.loadAnalyticJobGenerator(ElephantRunner.java:80)
at com.linkedin.drelephant.ElephantRunner.access$100(ElephantRunner.java:48)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:101)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:96)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1608)
at com.linkedin.drelephant.security.HadoopSecurity.doAs(HadoopSecurity.java:109)
at com.linkedin.drelephant.ElephantRunner.run(ElephantRunner.java:96)
at com.linkedin.drelephant.DrElephant.run(DrElephant.java:58)

07-10-2018 01:59:01 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:59:01 INFO  [Thread-231] com.linkedin.drelephant.ElephantRunner : Dr.elephant has started
07-10-2018 01:59:01 INFO  [Thread-231] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:59:01 ERROR [Thread-231] com.linkedin.drelephant.ElephantRunner : Unsupported Hadoop major version detected. It is not 2.x.
07-10-2018 01:59:01 ERROR [Thread-231] com.linkedin.drelephant.ElephantRunner : java.lang.RuntimeException: Unsupported Hadoop major version detected. It is not 2.x.
at com.linkedin.drelephant.ElephantRunner.loadAnalyticJobGenerator(ElephantRunner.java:80)
at com.linkedin.drelephant.ElephantRunner.access$100(ElephantRunner.java:48)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:101)
at com.linkedin.drelephant.ElephantRunner$1.run(ElephantRunner.java:96)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1608)
at com.linkedin.drelephant.security.HadoopSecurity.doAs(HadoopSecurity.java:109)
at com.linkedin.drelephant.ElephantRunner.run(ElephantRunner.java:96)
at com.linkedin.drelephant.DrElephant.run(DrElephant.java:58)

07-10-2018 01:59:01 INFO  [pool-1-thread-1] com.linkedin.drelephant.analysis.HDFSContext : HDFS BLock size: 33554432
07-10-2018 01:59:01 INFO  [Thread-238] com.linkedin.drelephant.ElephantRunner : Dr.elephant has started

[the "Unsupported Hadoop major version detected. It is not 2.x." ERROR and the identical stack trace above then repeat for Thread-238, Thread-245, Thread-252, Thread-259, Thread-266 and Thread-273, interleaved with the same "Dr.elephant has started" / "HDFS BLock size" INFO lines]

07-10-2018 01:59:02 ERROR [pool-1-thread-1] com.linkedin.drelephant.util.InfoExtractor : Unable to retrieve the scheduler info for application [application_5678]. It does not contain [spark.driver.extraJavaOptions] property in its spark properties.
07-10-2018 01:59:02 INFO  [pool-1-thread-1] com.linkedin.drelephant.util.InfoExtractor : No Scheduler found for appid: application_5678
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: org.apache.oozie.client.$Impl_WorkflowJob@1756e1a7
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0004166-160629080632562-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted

07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: workflowJob
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: manualChildJob
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0143705-160828184536493-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action was manually submitted
07-10-2018 01:59:03 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_url_template param for Oozie Scheduler
07-10-2018 01:59:03 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_exec_url_template param for Oozie Scheduler
07-10-2018 01:59:03 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_job_url_template param for Oozie Scheduler
07-10-2018 01:59:03 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_job_exec_url_template param for Oozie Scheduler
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Fetching Oozie workflow info for 0004167-160629080632562-oozie-oozi-W
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow for 0004167-160629080632562-oozie-oozi-W: scheduledChildJob
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie super parent for: 0004167-160629080632562-oozie-oozi-W: 0163255-160828184536493-oozie-oozie-C@1537
07-10-2018 01:59:03 INFO  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Oozie workflow 0004167-160629080632562-oozie-oozi-W@some-action is scheduled with coordinator
07-10-2018 01:59:03 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_url_template param for Oozie Scheduler
07-10-2018 01:59:03 WARN  [pool-1-thread-1] com.linkedin.drelephant.schedulers.OozieScheduler : Missing oozie_workflow_exec_url_template param for Oozie Scheduler
[the manualChildJob and scheduledChildJob fetch cycles above, together with their "Missing oozie_*_url_template param for Oozie Scheduler" warnings, then repeat with identical output]
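The recurring error above ("Unsupported Hadoop major version detected. It is not 2.x.") means Dr. Elephant failed to resolve a Hadoop 2.x version at startup. Since CDH 5.13 ships Hadoop 2.6, this often indicates that the `hadoop` binary (or `HADOOP_HOME`) is not visible to the user running Dr. Elephant, rather than a MySQL or Scala problem. Below is a minimal sketch of the kind of major-version check involved; the version string is hardcoded for illustration, and on a real host you would take it from `hadoop version` instead:

```shell
#!/usr/bin/env bash
# Sketch of a Hadoop major-version sanity check (illustrative, not Dr. Elephant's exact code).
# "2.6.0-cdh5.13.0" is a placeholder; on a real CDH 5.13 host obtain it with:
#   version=$(hadoop version | head -1 | awk '{print $2}')
version="2.6.0-cdh5.13.0"

major="${version%%.*}"   # text before the first dot, e.g. "2"
if [ "$major" = "2" ]; then
  echo "Hadoop major version OK: $version"
else
  echo "Unsupported Hadoop major version: $version" >&2
  exit 1
fi
```

If `hadoop version` prints nothing (or a 1.x/3.x version) in the shell that launches Dr. Elephant, that would explain the runtime exception regardless of the MySQL or Spark setup.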



