Installed CDAP 3.4.1 on a Hadoop cluster, cannot start the GUI.


pop...@hotmail.com

Jun 9, 2016, 11:54:48 AM
to CDAP User
Hi,

I installed CDAP 3.4.1 on a Hadoop cluster, but I cannot start the GUI. I found the following errors in the router log.

co.cask.cdap.common.HandlerException: No endpoint strategy found for request : /ping

2016-06-08 19:39:30,235 - ERROR [New I/O worker #12:c.c.c.g.r.h.HttpRequestHandler@184] - Exception raised in Request Handler [id: 0x724da3ad, /10.20.0.12:49974 => /10.20.0.12:10000]
co.cask.cdap.common.HandlerException: No endpoint strategy found for request : /v3/version
        at co.cask.cdap.gateway.router.handlers.HttpRequestHandler.getDiscoverable(HttpRequestHandler.java:221) ~[co.cask.cdap.cdap-gateway-3.4.2.jar:na]
        at co.cask.cdap.gateway.router.handlers.HttpRequestHandler.messageReceived(HttpRequestHandler.java:108) ~[co.cask.cdap.cdap-gateway-3.4.2.jar:na]
        at co.cask.cdap.gateway.router.handlers.HttpStatusRequestHandler.messageReceived(HttpStatusRequestHandler.java:65) ~[co.cask.cdap.cdap-gateway-3.4.2.jar:na]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) ~[io.netty.netty-3.6.6.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[io.netty.netty-3.6.6.Final.jar:na]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_80]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]


Best Regards

Pope Xie

Ali Anwar

Jun 9, 2016, 5:45:16 PM
to pop...@hotmail.com, CDAP User
Hi Pope.

CDAP Master does take a few minutes to start up, and you will see these errors in the logs during the first few minutes of startup (usually 2-5 minutes).
If the errors continue after several minutes, then it is a problem; otherwise, if it starts working after a few minutes, it shouldn't be an issue.
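
A quick way to check whether the router has discovered its endpoints is to hit the ping endpoint directly (a sketch; substitute your router host and port, which from your log appear to be 10.20.0.12:10000):

$ curl -v http://10.20.0.12:10000/ping

Once CDAP Master has finished initializing, this should return a 200 response instead of the "No endpoint strategy found" error.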

If it's still problematic, can you attach the entire master and router logs? Otherwise, there's not much for us to look into.

Regards,

Ali Anwar



Pope Xie

Jun 9, 2016, 9:39:58 PM
to CDAP User, pop...@hotmail.com
I waited for an hour, and I can still see that error in the router log.
master-cdap-cdapnode3.att.com.zip

Poorna Chandra

Jun 9, 2016, 10:34:03 PM
to Pope Xie, CDAP User
Hi Pope,

From the logs attached, I don't see CDAP Master complete its initialization. If there are no more log lines after 2016-06-09 21:12:40,108 in master-cdap-cdapnode3.att.com.log, then the process may have hung. Can you attach a thread dump of the CDAP Master process?

To find the pid of the master process, you can run:

$ ps auxww | grep cdap.service=master
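
Once you have the pid, you can capture the thread dump with jstack (assuming a JDK is available on the host; run it as the same user that owns the process):

$ jstack -l <pid> > /tmp/cdap-master-threaddump.txt

Alternatively, kill -3 <pid> will make the JVM print the thread dump to its stdout log.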


Thanks,
Poorna.

Pope Xie

Jun 10, 2016, 8:57:32 AM
to CDAP User
Please check the log file.

Thanks

Pope Xie
master-cdap-cdapnode3.att.com.zip

Pope Xie

Jun 10, 2016, 9:21:01 PM
to CDAP User
It seems the tx (transaction) service could not be started.


master-cdap-cdapnode3.att.com.zip

Sreevatsan Raman

Jun 10, 2016, 9:27:12 PM
to Pope Xie, CDAP User
Hi Pope, 

Looking through the thread dump, we see that the call is stuck on an HBase RPC operation; please see below.
Is your HBase in a healthy state? Does it show any inconsistencies? You can run the hbase hbck command to check whether there are any underlying HBase issues. Let us know how that test goes.
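
For example (the exact output varies by HBase version, but on a healthy cluster the summary at the end should look something like this):

$ hbase hbck
...
0 inconsistencies detected.
Status: OK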


Thanks,
Sree

 

"main" prio=10 tid=0x00007ff39001f000 nid=0xe10 waiting on condition [0x00007ff396cd5000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1298)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1090)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1047)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:888)
        at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
        - locked <0x00000000b07b9f38> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
        at org.apache.hadoop.hbase.client.HTable.delete(HTable.java:875)
        at co.cask.cdap.data2.util.hbase.ConfigurationTable.write(ConfigurationTable.java:98)
        at co.cask.cdap.data.runtime.main.MasterServiceMain.updateConfigurationTable(MasterServiceMain.java:539)
        at co.cask.cdap.data.runtime.main.MasterServiceMain.start(MasterServiceMain.java:217)
        at co.cask.cdap.common.runtime.DaemonMain.doMain(DaemonMain.java:58)
        at co.cask.cdap.data.runtime.main.MasterServiceMain.main(MasterServiceMain.java:157)


Poorna Chandra

Jun 10, 2016, 10:04:56 PM
to Pope Xie, CDAP User
Hi Pope,

In the latest logs you attached, I see that CDAP started up for a short time and then shut down. There are a couple of issues in the log:
  1. There seems to be a mismatch between the Hive version that was detected and the Hive jars present in the classpath.
  2. There is a similar version mismatch for an internal library we use (Apache Twill).
It looks like there are some extraneous jars in the /opt/cdap/master/lib directory that are causing the version mismatch. Can you send us the list of jar files in that directory?
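
Something like the following should capture it (the path here is assumed from a standard package install; adjust if your CDAP lib directory is elsewhere):

$ ls -l /opt/cdap/master/lib > jar.lst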

Thanks,
Poorna.



Pope Xie

Jun 13, 2016, 8:47:57 AM
to CDAP User
Please check the list.

Best Regards

Pope Xie


jar.lst

Rohit Sinha

Jun 13, 2016, 10:00:22 PM
to CDAP User
Hello Pope,

Thanks for sending over the list of jars.

In the logs, we see that the Hive conf files are not being set:


2016-06-10 21:11:56,128 - DEBUG [leader-election-election-master.services:c.c.c.d.r.m.MasterServiceMain@771] - Hive conf files =

Can you please share the details of your environment? What distro and version are you running CDAP on? Also, specifically, what version of Hive are you using?

You can find the Hive version with the following command:
hive --version
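
It may also be worth checking that the Hive configuration directory is visible to the user running CDAP (the variable names below are the usual Hive conventions and may not be set in your environment):

$ echo "$HIVE_HOME" "$HIVE_CONF_DIR"
$ ls "$HIVE_HOME/conf"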

Thanks,
Rohit Sinha

Pope Xie

Jun 16, 2016, 2:05:08 PM
to CDAP User
Hive 1.1.1
Subversion git://glacier.local/Users/chao/Documents/hive -r 3e8d832a1a8e2b12029adcb55862cf040098ef0f
Compiled by chao on Thu May 14 15:23:15 PDT 2015
From source with checksum 5820e7473159988fd33a0afcb10be30a

hbase(main):001:0> version
1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016

hadoop version
Hadoop 2.6.4
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 5082c73637530b0b7e115f9625ed7fac69f937e6
Compiled by jenkins on 2016-02-12T09:45Z
Compiled with protoc 2.5.0
From source with checksum 8dee2286ecdbbbc930a6c87b65cbc010
This command was run using /opt/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4.jar

ZooKeeper is 3.4.8.

Rohit Sinha

Jun 16, 2016, 5:01:57 PM
to CDAP User
Hello Pope, 

Thanks for sending us the distribution versions.

To help us debug this issue, please send us the following information:

1. Can you please run the following command on your cluster and send us the output: 

hive -e 'set -v'


2. You can also try starting CDAP without Explore, which will eliminate CDAP's dependency on Hive. To do this, edit your cdap-site.xml and set the following property to false:

explore.enabled
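
In cdap-site.xml this is standard Hadoop-style configuration XML, so the entry would look like:

<property>
  <name>explore.enabled</name>
  <value>false</value>
</property>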

After this, you can restart CDAP and update us with the outcome.

Thanks.

Pope Xie

Jun 17, 2016, 6:30:03 PM
to CDAP User
$ hive -e 'set -v'

Logging initialized using configuration in jar:file:/opt/apache-hive-1.1.1-bin/lib/hive-common-1.1.1.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-1.1.1-bin/lib/hive-jdbc-1.1.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
_hive.hdfs.session.path=/tmp/hive/cdap/6960a884-e807-480a-8e68-817966321210
_hive.local.session.path=/tmp/hive/6960a884-e807-480a-8e68-817966321210
_hive.tmp_table_space=/tmp/hive/cdap/6960a884-e807-480a-8e68-817966321210/_tmp_space.db
datanucleus.autoCreateSchema=true
datanucleus.autoStartMechanismMode=checked
datanucleus.cache.level2=false
datanucleus.cache.level2.type=none
datanucleus.connectionPoolingType=BONECP
datanucleus.fixedDatastore=false
datanucleus.identifierFactory=datanucleus1
datanucleus.plugin.pluginRegistryBundleCheck=LOG
datanucleus.rdbms.useLegacyNativeValueStrategy=true
datanucleus.storeManagerType=rdbms
datanucleus.transactionIsolation=read-committed
datanucleus.validateColumns=false
datanucleus.validateConstraints=false
datanucleus.validateTables=false
dfs.block.access.key.update.interval=600
dfs.block.access.token.enable=false
dfs.block.access.token.lifetime=600
dfs.blockreport.initialDelay=0
dfs.blockreport.intervalMsec=21600000
dfs.blockreport.split.threshold=1000000
dfs.blocksize=134217728
dfs.bytes-per-checksum=512
dfs.cachereport.intervalMsec=10000
dfs.client-write-packet-size=65536
dfs.client.block.write.replace-datanode-on-failure.best-effort=false
dfs.client.block.write.replace-datanode-on-failure.enable=true
dfs.client.block.write.replace-datanode-on-failure.policy=DEFAULT
dfs.client.block.write.retries=3
dfs.client.cached.conn.retry=3
dfs.client.context=default
dfs.client.datanode-restart.timeout=30
dfs.client.domain.socket.data.traffic=false
dfs.client.failover.connection.retries=0
dfs.client.failover.connection.retries.on.timeouts=0
dfs.client.failover.max.attempts=15
dfs.client.failover.sleep.base.millis=500
dfs.client.failover.sleep.max.millis=15000
dfs.client.file-block-storage-locations.num-threads=10
dfs.client.file-block-storage-locations.timeout.millis=1000
dfs.client.https.keystore.resource=ssl-client.xml
dfs.client.https.need-auth=false
dfs.client.mmap.cache.size=256
dfs.client.mmap.enabled=true
dfs.client.read.shortcircuit=false
dfs.client.read.shortcircuit.skip.checksum=false
dfs.client.read.shortcircuit.streams.cache.size=256
dfs.client.use.datanode.hostname=false
dfs.client.use.legacy.blockreader.local=false
dfs.client.write.exclude.nodes.cache.expiry.interval.millis=600000
dfs.datanode.address=0.0.0.0:50010
dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction=0.75f
dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold=10737418240
dfs.datanode.balance.bandwidthPerSec=1048576
dfs.datanode.block.id.layout.upgrade.threads=12
dfs.datanode.bp-ready.timeout=20
dfs.datanode.data.dir=file:///home/cdap/hadoopdata/hdfs/datanode
dfs.datanode.data.dir.perm=700
dfs.datanode.directoryscan.interval=21600
dfs.datanode.directoryscan.threads=1
dfs.datanode.dns.interface=default
dfs.datanode.dns.nameserver=default
dfs.datanode.drop.cache.behind.reads=false
dfs.datanode.drop.cache.behind.writes=false
dfs.datanode.du.reserved=0
dfs.datanode.failed.volumes.tolerated=0
dfs.datanode.fsdatasetcache.max.threads.per.volume=4
dfs.datanode.handler.count=10
dfs.datanode.hdfs-blocks-metadata.enabled=false
dfs.datanode.http.address=0.0.0.0:50075
dfs.datanode.https.address=0.0.0.0:50475
dfs.datanode.ipc.address=0.0.0.0:50020
dfs.datanode.max.locked.memory=0
dfs.datanode.max.transfer.threads=4096
dfs.datanode.readahead.bytes=4193404
dfs.datanode.shared.file.descriptor.paths=/dev/shm,/tmp
dfs.datanode.sync.behind.writes=false
dfs.datanode.use.datanode.hostname=false
dfs.default.chunk.view.size=32768
dfs.encrypt.data.transfer=false
dfs.encrypt.data.transfer.cipher.key.bitlength=128
dfs.ha.automatic-failover.enabled=false
dfs.ha.fencing.ssh.connect-timeout=30000
dfs.ha.log-roll.period=120
dfs.ha.tail-edits.period=60
dfs.heartbeat.interval=3
dfs.http.policy=HTTP_ONLY
dfs.https.enable=false
dfs.https.server.keystore.resource=ssl-server.xml
dfs.image.compress=false
dfs.image.compression.codec=org.apache.hadoop.io.compress.DefaultCodec
dfs.image.transfer.bandwidthPerSec=0
dfs.image.transfer.chunksize=65536
dfs.image.transfer.timeout=60000
dfs.journalnode.http-address=0.0.0.0:8480
dfs.journalnode.https-address=0.0.0.0:8481
dfs.journalnode.rpc-address=0.0.0.0:8485
dfs.namenode.accesstime.precision=3600000
dfs.namenode.acls.enabled=false
dfs.namenode.audit.loggers=default
dfs.namenode.avoid.read.stale.datanode=false
dfs.namenode.avoid.write.stale.datanode=false
dfs.namenode.backup.address=0.0.0.0:50100
dfs.namenode.backup.http-address=0.0.0.0:50105
dfs.namenode.checkpoint.check.period=60
dfs.namenode.checkpoint.dir=file://${hadoop.tmp.dir}/dfs/namesecondary
dfs.namenode.checkpoint.edits.dir=${dfs.namenode.checkpoint.dir}
dfs.namenode.checkpoint.max-retries=3
dfs.namenode.checkpoint.period=3600
dfs.namenode.checkpoint.txns=1000000
dfs.namenode.datanode.registration.ip-hostname-check=true
dfs.namenode.decommission.interval=30
dfs.namenode.decommission.nodes.per.interval=5
dfs.namenode.delegation.key.update-interval=86400000
dfs.namenode.delegation.token.max-lifetime=604800000
dfs.namenode.delegation.token.renew-interval=86400000
dfs.namenode.edit.log.autoroll.multiplier.threshold=2.0
dfs.namenode.edits.dir=${dfs.namenode.name.dir}
dfs.namenode.edits.journal-plugin.qjournal=org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager
dfs.namenode.edits.noeditlogchannelflush=false
dfs.namenode.enable.retrycache=true
dfs.namenode.fs-limits.max-blocks-per-file=1048576
dfs.namenode.fs-limits.max-component-length=255
dfs.namenode.fs-limits.max-directory-items=1048576
dfs.namenode.fs-limits.max-xattr-size=16384
dfs.namenode.fs-limits.max-xattrs-per-inode=32
dfs.namenode.fs-limits.min-block-size=1048576
dfs.namenode.handler.count=10
dfs.namenode.http-address=0.0.0.0:50070
dfs.namenode.https-address=0.0.0.0:50470
dfs.namenode.inotify.max.events.per.rpc=1000
dfs.namenode.invalidate.work.pct.per.iteration=0.32f
dfs.namenode.kerberos.internal.spnego.principal=${dfs.web.authentication.kerberos.principal}
dfs.namenode.lazypersist.file.scrub.interval.sec=300
dfs.namenode.list.cache.directives.num.responses=100
dfs.namenode.list.cache.pools.num.responses=100
dfs.namenode.list.encryption.zones.num.responses=100
dfs.namenode.logging.level=info
dfs.namenode.max.extra.edits.segments.retained=10000
dfs.namenode.max.objects=0
dfs.namenode.name.dir=file:///home/cdap/hadoopdata/hdfs/namenode
dfs.namenode.name.dir.restore=false
dfs.namenode.num.checkpoints.retained=2
dfs.namenode.num.extra.edits.retained=1000000
dfs.namenode.path.based.cache.block.map.allocation.percent=0.25
dfs.namenode.reject-unresolved-dn-topology-mapping=false
dfs.namenode.replication.considerLoad=true
dfs.namenode.replication.interval=3
dfs.namenode.replication.min=1
dfs.namenode.replication.work.multiplier.per.iteration=2
dfs.namenode.resource.check.interval=5000
dfs.namenode.resource.checked.volumes.minimum=1
dfs.namenode.resource.du.reserved=104857600
dfs.namenode.retrycache.expirytime.millis=600000
dfs.namenode.retrycache.heap.percent=0.03f
dfs.namenode.safemode.extension=30000
dfs.namenode.safemode.min.datanodes=0
dfs.namenode.safemode.threshold-pct=0.999f
dfs.namenode.secondary.http-address=cdapnode4.att.com:9001
dfs.namenode.secondary.https-address=0.0.0.0:50091
dfs.namenode.stale.datanode.interval=30000
dfs.namenode.startup.delay.block.deletion.sec=0
dfs.namenode.support.allow.format=true
dfs.namenode.write.stale.datanode.ratio=0.5f
dfs.namenode.xattrs.enabled=true
dfs.permissions.enabled=true
dfs.permissions.superusergroup=supergroup
dfs.replication=1
dfs.replication.max=512
dfs.secondary.namenode.kerberos.internal.spnego.principal=${dfs.web.authentication.kerberos.principal}
dfs.storage.policy.enabled=true
dfs.stream-buffer-size=4096
dfs.support.append=true
dfs.user.home.dir.prefix=/user
dfs.webhdfs.enabled=true
dfs.webhdfs.user.provider.user.pattern=^[A-Za-z_][A-Za-z0-9._-]*[$]?$
file.blocksize=67108864
file.bytes-per-checksum=512
file.client-write-packet-size=65536
file.replication=1
file.stream-buffer-size=4096
fs.AbstractFileSystem.file.impl=org.apache.hadoop.fs.local.LocalFs
fs.AbstractFileSystem.har.impl=org.apache.hadoop.fs.HarFs
fs.AbstractFileSystem.hdfs.impl=org.apache.hadoop.fs.Hdfs
fs.AbstractFileSystem.viewfs.impl=org.apache.hadoop.fs.viewfs.ViewFs
fs.automatic.close=true
fs.client.resolve.remote.symlinks=true
fs.defaultFS=hdfs://cdapnode3.att.com:9000
fs.df.interval=60000
fs.du.interval=600000
fs.ftp.host=0.0.0.0
fs.ftp.host.port=21
fs.har.impl=org.apache.hadoop.hive.shims.HiveHarFileSystem
fs.har.impl.disable.cache=true
fs.permissions.umask-mode=022
fs.s3.block.size=67108864
fs.s3.buffer.dir=${hadoop.tmp.dir}/s3
fs.s3.maxRetries=4
fs.s3.sleepTimeSeconds=10
fs.s3a.attempts.maximum=10
fs.s3a.buffer.dir=${hadoop.tmp.dir}/s3a
fs.s3a.connection.maximum=15
fs.s3a.connection.ssl.enabled=true
fs.s3a.connection.timeout=5000
fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
fs.s3a.multipart.purge=false
fs.s3a.multipart.purge.age=86400
fs.s3a.multipart.size=104857600
fs.s3a.multipart.threshold=2147483647
fs.s3a.paging.maximum=5000
fs.s3n.block.size=67108864
fs.s3n.multipart.copy.block.size=5368709120
fs.s3n.multipart.uploads.block.size=67108864
fs.s3n.multipart.uploads.enabled=false
fs.swift.impl=org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
fs.trash.checkpoint.interval=0
fs.trash.interval=0
ftp.blocksize=67108864
ftp.bytes-per-checksum=512
ftp.client-write-packet-size=65536
ftp.replication=3
ftp.stream-buffer-size=4096
ha.failover-controller.graceful-fence.connection.retries=1
ha.zookeeper.acl=world:anyone:rwcda
ha.zookeeper.parent-znode=/hadoop-ha
hadoop.bin.path=/opt/hadoop-2.6.4/bin/hadoop
hadoop.common.configuration.version=0.23.0
hadoop.fuse.connection.timeout=300
hadoop.fuse.timer.period=5
hadoop.hdfs.configuration.version=1
hadoop.http.authentication.kerberos.keytab=${user.home}/hadoop.keytab
hadoop.http.authentication.kerberos.principal=HTTP/_HOST@LOCALHOST
hadoop.http.authentication.signature.secret.file=${user.home}/hadoop-http-auth-signature-secret
hadoop.http.authentication.simple.anonymous.allowed=true
hadoop.http.authentication.token.validity=36000
hadoop.http.authentication.type=simple
hadoop.http.filter.initializers=org.apache.hadoop.http.lib.StaticUserWebFilter
hadoop.http.staticuser.user=dr.who
hadoop.jetty.logs.serve.aliases=true
hadoop.kerberos.kinit.command=kinit
hadoop.registry.jaas.context=Client
hadoop.registry.rm.enabled=false
hadoop.registry.secure=false
hadoop.registry.system.acls=sasl:yarn@, sasl:mapred@, sasl:mapred@hdfs@
hadoop.registry.zk.quorum=localhost:2181
hadoop.registry.zk.retry.times=5
hadoop.registry.zk.root=/registry
hadoop.rpc.protection=authentication
hadoop.rpc.socket.factory.class.default=org.apache.hadoop.net.StandardSocketFactory
hadoop.security.authentication=simple
hadoop.security.authorization=false
hadoop.security.crypto.buffer.size=8192
hadoop.security.crypto.cipher.suite=AES/CTR/NoPadding
hadoop.security.crypto.codec.classes.aes.ctr.nopadding=org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,org.apache.hadoop.crypto.JceAesCtrCryptoCodec
hadoop.security.group.mapping=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
hadoop.security.group.mapping.ldap.directory.search.timeout=10000
hadoop.security.group.mapping.ldap.search.attr.member=member
hadoop.security.group.mapping.ldap.search.filter.group=(objectClass=group)
hadoop.security.group.mapping.ldap.search.filter.user=(&(objectClass=user)(sAMAccountName={0}))
hadoop.security.group.mapping.ldap.ssl=false
hadoop.security.groups.cache.secs=300
hadoop.security.groups.negative-cache.secs=30
hadoop.security.instrumentation.requires.admin=false
hadoop.security.java.secure.random.algorithm=SHA1PRNG
hadoop.security.kms.client.authentication.retry-count=1
hadoop.security.kms.client.encrypted.key.cache.expiry=43200000
hadoop.security.kms.client.encrypted.key.cache.low-watermark=0.3f
hadoop.security.kms.client.encrypted.key.cache.num.refill.threads=2
hadoop.security.kms.client.encrypted.key.cache.size=500
hadoop.security.random.device.file.path=/dev/urandom
hadoop.security.uid.cache.secs=14400
hadoop.ssl.client.conf=ssl-client.xml
hadoop.ssl.enabled=false
hadoop.ssl.enabled.protocols=TLSv1
hadoop.ssl.hostname.verifier=DEFAULT
hadoop.ssl.keystores.factory.class=org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.ssl.require.client.cert=false
hadoop.ssl.server.conf=ssl-server.xml
hadoop.tmp.dir=/tmp/hadoop-${user.name}
hadoop.user.group.static.mapping.overrides=dr.who=;
hadoop.util.hash.type=murmur
hadoop.work.around.non.threadsafe.getpwuid=false
hive.analyze.stmt.collect.partlevel.stats=true
hive.archive.enabled=false
hive.auto.convert.join=true
hive.auto.convert.join.noconditionaltask=true
hive.auto.convert.join.noconditionaltask.size=10000000
hive.auto.convert.join.use.nonstaged=false
hive.auto.convert.sortmerge.join=false
hive.auto.convert.sortmerge.join.bigtable.selection.policy=org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ
hive.auto.convert.sortmerge.join.to.mapjoin=false
hive.auto.progress.timeout=0s
hive.autogen.columnalias.prefix.includefuncname=false
hive.autogen.columnalias.prefix.label=_c
hive.aux.jars.path=file:///opt/db-derby-10.12.1.1-bin/lib/derbyclient.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derby.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_cs.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_de_DE.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_es.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_fr.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_hu.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_it.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ja_JP.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ko_KR.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pl.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pt_BR.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ru.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_CN.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_TW.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbynet.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyoptionaltools.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyrun.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbytools.jar
hive.binary.record.max.length=1000
hive.cache.expr.evaluation=true
hive.cbo.enable=true
hive.cli.errors.ignore=false
hive.cli.pretty.output.num.cols=-1
hive.cli.print.current.db=false
hive.cli.print.header=false
hive.cli.prompt=hive
hive.cluster.delegation.token.store.class=org.apache.hadoop.hive.thrift.MemoryTokenStore
hive.cluster.delegation.token.store.zookeeper.znode=/hivedelegation
hive.compactor.abortedtxn.threshold=1000
hive.compactor.check.interval=300s
hive.compactor.cleaner.run.interval=5000ms
hive.compactor.delta.num.threshold=10
hive.compactor.delta.pct.threshold=0.1
hive.compactor.initiator.on=false
hive.compactor.worker.threads=0
hive.compactor.worker.timeout=86400s
hive.compat=0.12
hive.compute.query.using.stats=false
hive.conf.restricted.list=hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role
hive.conf.validation=true
hive.convert.join.bucket.mapjoin.tez=false
hive.debug.localtask=false
hive.default.fileformat=TextFile
hive.default.rcfile.serde=org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe
hive.default.serde=org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
hive.display.partition.cols.separately=true
hive.downloaded.resources.dir=/tmp/hive
hive.enforce.bucketing=false
hive.enforce.bucketmapjoin=false
hive.enforce.sorting=false
hive.enforce.sortmergebucketmapjoin=false
hive.entity.capture.transform=false
hive.entity.separator=@
hive.error.on.empty.partition=false
hive.exec.check.crossproducts=true
hive.exec.compress.intermediate=false
hive.exec.compress.output=false
hive.exec.concatenate.check.index=true
hive.exec.copyfile.maxsize=33554432
hive.exec.counters.pull.interval=1000
hive.exec.default.partition.name=__HIVE_DEFAULT_PARTITION__
hive.exec.drop.ignorenonexistent=true
hive.exec.dynamic.partition=true
hive.exec.dynamic.partition.mode=strict
hive.exec.infer.bucket.sort=false
hive.exec.infer.bucket.sort.num.buckets.power.two=false
hive.exec.job.debug.capture.stacktraces=true
hive.exec.job.debug.timeout=30000
hive.exec.local.scratchdir=/tmp/hive
hive.exec.max.created.files=100000
hive.exec.max.dynamic.partitions=1000
hive.exec.max.dynamic.partitions.pernode=100
hive.exec.mode.local.auto=false
hive.exec.mode.local.auto.input.files.max=4
hive.exec.mode.local.auto.inputbytes.max=134217728
hive.exec.orc.block.padding.tolerance=0.05
hive.exec.orc.compression.strategy=SPEED
hive.exec.orc.default.block.padding=true
hive.exec.orc.default.block.size=268435456
hive.exec.orc.default.buffer.size=262144
hive.exec.orc.default.compress=ZLIB
hive.exec.orc.default.row.index.stride=10000
hive.exec.orc.default.stripe.size=67108864
hive.exec.orc.dictionary.key.size.threshold=0.8
hive.exec.orc.encoding.strategy=SPEED
hive.exec.orc.memory.pool=0.5
hive.exec.orc.skip.corrupt.data=false
hive.exec.orc.zerocopy=false
hive.exec.parallel=false
hive.exec.parallel.thread.number=8
hive.exec.perf.logger=org.apache.hadoop.hive.ql.log.PerfLogger
hive.exec.rcfile.use.explicit.header=true
hive.exec.rcfile.use.sync.cache=true
hive.exec.reducers.bytes.per.reducer=256000000
hive.exec.reducers.max=1009
hive.exec.rowoffset=false
hive.exec.scratchdir=/tmp/hive
hive.exec.script.allow.partial.consumption=false
hive.exec.script.maxerrsize=100000
hive.exec.script.trust=false
hive.exec.stagingdir=.hive-staging
hive.exec.submit.local.task.via.child=true
hive.exec.submitviachild=false
hive.exec.tasklog.debug.timeout=20000
hive.exec.temporary.table.storage=default
hive.execution.engine=mr
hive.exim.uri.scheme.whitelist=hdfs,pfile
hive.explain.dependency.append.tasktype=false
hive.fetch.output.serde=org.apache.hadoop.hive.serde2.DelimitedJSONSerDe
hive.fetch.task.aggr=false
hive.fetch.task.conversion=more
hive.fetch.task.conversion.threshold=1073741824
hive.file.max.footer=100
hive.fileformat.check=true
hive.groupby.mapaggr.checkinterval=100000
hive.groupby.orderby.position.alias=false
hive.groupby.skewindata=false
hive.hadoop.supports.splittable.combineinputformat=false
hive.hashtable.initialCapacity=100000
hive.hashtable.key.count.adjustment=1.0
hive.hashtable.loadfactor=0.75
hive.hbase.generatehfiles=false
hive.hbase.snapshot.restoredir=/tmp
hive.hbase.wal.enabled=true
hive.heartbeat.interval=1000
hive.hmshandler.force.reload.conf=false
hive.hmshandler.retry.attempts=10
hive.hmshandler.retry.interval=2000ms
hive.hwi.listen.host=0.0.0.0
hive.hwi.listen.port=9999
hive.hwi.war.file=${env:HWI_WAR_FILE}
hive.ignore.mapjoin.hint=true
hive.in.test=false
hive.in.tez.test=false
hive.index.compact.binary.search=true
hive.index.compact.file.ignore.hdfs=false
hive.index.compact.query.max.entries=10000000
hive.index.compact.query.max.size=10737418240
hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
hive.insert.into.external.tables=true
hive.insert.into.multilevel.dirs=false
hive.io.rcfile.column.number.conf=0
hive.io.rcfile.record.buffer.size=4194304
hive.io.rcfile.record.interval=2147483647
hive.io.rcfile.tolerate.corruptions=false
hive.jobname.length=50
hive.join.cache.size=25000
hive.join.emit.interval=1000
hive.lazysimple.extended_boolean_literal=false
hive.limit.optimize.enable=false
hive.limit.optimize.fetch.max=50000
hive.limit.optimize.limit.file=10
hive.limit.pushdown.memory.usage=-1.0
hive.limit.query.max.table.partition=-1
hive.limit.row.max.size=100000
hive.localize.resource.num.wait.attempts=5
hive.localize.resource.wait.interval=5000ms
hive.lock.manager=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
hive.lock.mapred.only.operation=false
hive.lock.numretries=100
hive.lock.sleep.between.retries=60s
hive.lockmgr.zookeeper.default.partition.name=__HIVE_DEFAULT_ZOOKEEPER_PARTITION__
hive.log.explain.output=false
hive.map.aggr=true
hive.map.aggr.hash.force.flush.memory.threshold=0.9
hive.map.aggr.hash.min.reduction=0.5
hive.map.aggr.hash.percentmemory=0.5
hive.map.groupby.sorted=false
hive.map.groupby.sorted.testmode=false
hive.mapjoin.bucket.cache.size=100
hive.mapjoin.check.memory.rows=100000
hive.mapjoin.followby.gby.localtask.max.memory.usage=0.55
hive.mapjoin.followby.map.aggr.hash.percentmemory=0.3
hive.mapjoin.localtask.max.memory.usage=0.9
hive.mapjoin.optimized.hashtable=true
hive.mapjoin.optimized.hashtable.wbsize=10485760
hive.mapjoin.smalltable.filesize=25000000
hive.mapper.cannot.span.multiple.partitions=false
hive.mapred.local.mem=0
hive.mapred.mode=nonstrict
hive.mapred.partitioner=org.apache.hadoop.hive.ql.io.DefaultHivePartitioner
hive.mapred.reduce.tasks.speculative.execution=true
hive.mapred.supports.subdirectories=false
hive.merge.mapfiles=true
hive.merge.mapredfiles=false
hive.merge.orcfile.stripe.level=true
hive.merge.rcfile.block.level=true
hive.merge.size.per.task=256000000
hive.merge.smallfiles.avgsize=16000000
hive.merge.sparkfiles=false
hive.merge.tezfiles=false
hive.metadata.move.exported.metadata.to.trash=true
hive.metastore.archive.intermediate.archived=_INTERMEDIATE_ARCHIVED
hive.metastore.archive.intermediate.extracted=_INTERMEDIATE_EXTRACTED
hive.metastore.archive.intermediate.original=_INTERMEDIATE_ORIGINAL
hive.metastore.authorization.storage.checks=false
hive.metastore.batch.retrieve.max=300
hive.metastore.batch.retrieve.table.partition.max=1000
hive.metastore.cache.pinobjtypes=Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order
hive.metastore.client.connect.retry.delay=1s
hive.metastore.client.socket.timeout=600s
hive.metastore.connect.retries=3
hive.metastore.direct.sql.batch.size=0
hive.metastore.disallow.incompatible.col.type.changes=false
hive.metastore.event.clean.freq=0s
hive.metastore.event.db.listener.timetolive=86400s
hive.metastore.event.expiry.duration=0s
hive.metastore.execute.setugi=true
hive.metastore.expression.proxy=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore
hive.metastore.failure.retries=1
hive.metastore.filter.hook=org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl
hive.metastore.fs.handler.class=org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl
hive.metastore.integral.jdo.pushdown=false
hive.metastore.kerberos.principal=hive-metastore/_HO...@EXAMPLE.COM
hive.metastore.orm.retrieveMapNullsAsEmptyStrings=false
hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
hive.metastore.sasl.enabled=false
hive.metastore.schema.verification=false
hive.metastore.server.max.message.size=104857600
hive.metastore.server.max.threads=1000
hive.metastore.server.min.threads=200
hive.metastore.server.tcp.keepalive=true
hive.metastore.thrift.compact.protocol.enabled=false
hive.metastore.thrift.framed.transport.enabled=false
hive.metastore.try.direct.sql=true
hive.metastore.try.direct.sql.ddl=true
hive.metastore.warehouse.dir=/user/hive/warehouse
hive.multi.insert.move.tasks.share.dependencies=false
hive.multigroupby.singlereducer=true
hive.new.job.grouping.set.cardinality=30
hive.optimize.bucketingsorting=true
hive.optimize.bucketmapjoin=false
hive.optimize.bucketmapjoin.sortedmerge=false
hive.optimize.constant.propagation=true
hive.optimize.correlation=false
hive.optimize.groupby=true
hive.optimize.index.autoupdate=false
hive.optimize.index.filter=false
hive.optimize.index.filter.compact.maxsize=-1
hive.optimize.index.filter.compact.minsize=5368709120
hive.optimize.index.groupby=false
hive.optimize.listbucketing=false
hive.optimize.metadataonly=true
hive.optimize.multigroupby.common.distincts=true
hive.optimize.null.scan=true
hive.optimize.ppd=true
hive.optimize.ppd.storage=true
hive.optimize.reducededuplication=true
hive.optimize.reducededuplication.min.reducer=4
hive.optimize.remove.identity.project=true
hive.optimize.sampling.orderby=false
hive.optimize.sampling.orderby.number=1000
hive.optimize.sampling.orderby.percent=0.1
hive.optimize.skewjoin=false
hive.optimize.skewjoin.compiletime=false
hive.optimize.sort.dynamic.partition=false
hive.optimize.union.remove=false
hive.orc.cache.stripe.details.size=10000
hive.orc.compute.splits.num.threads=10
hive.orc.row.index.stride.dictionary.check=true
hive.orc.splits.include.file.footer=false
hive.outerjoin.supports.filters=true
hive.plan.serialization.format=kryo
hive.ppd.recognizetransivity=true
hive.ppd.remove.duplicatefilters=true
hive.prewarm.enabled=false
hive.prewarm.numcontainers=10
hive.query.result.fileformat=TextFile
hive.querylog.enable.plan.progress=true
hive.querylog.location=${system:java.io.tmpdir}/${system:user.name}
hive.querylog.plan.progress.interval=60000ms
hive.resultset.use.unique.column.names=true
hive.rework.mapredwork=false
hive.rpc.query.plan=false
hive.sample.seednumber=0
hive.scratch.dir.permission=700
hive.script.auto.progress=false
hive.script.operator.env.blacklist=hive.txn.valid.txns,hive.script.operator.env.blacklist
hive.script.operator.id.env.var=HIVE_SCRIPT_OPERATOR_ID
hive.script.operator.truncate.env=false
hive.script.recordreader=org.apache.hadoop.hive.ql.exec.TextRecordReader
hive.script.recordwriter=org.apache.hadoop.hive.ql.exec.TextRecordWriter
hive.script.serde=org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
hive.security.authenticator.manager=org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator
hive.security.authorization.enabled=false
hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider
hive.security.authorization.sqlstd.confwhitelist=hive\.auto\..*|hive\.cbo\..*|hive\.convert\..*|hive\.exec\.dynamic\.partition.*|hive\.exec\..*\.dynamic\.partitions\..*|hive\.exec\.compress\..*|hive\.exec\.infer\..*|hive\.exec\.mode.local\..*|hive\.exec\.orc\..*|hive\.fetch.task\..*|hive\.hbase\..*|hive\.index\..*|hive\.index\..*|hive\.intermediate\..*|hive\.join\..*|hive\.limit\..*|hive\.mapjoin\..*|hive\.merge\..*|hive\.optimize\..*|hive\.orc\..*|hive\.outerjoin\..*|hive\.ppd\..*|hive\.prewarm\..*|hive\.skewjoin\..*|hive\.smbjoin\..*|hive\.stats\..*|hive\.tez\..*|hive\.vectorized\..*|mapred\.map\..*|mapred\.reduce\..*|mapred\.output\.compression\.codec|mapreduce\.job\.reduce\.slowstart\.completedmaps|mapreduce\.job\.queuename|mapreduce\.input\.fileinputformat\.split\.minsize|mapreduce\.map\..*|mapreduce\.reduce\..*|tez\.am\..*|tez\.task\..*|tez\.runtime\..*|hive\.exec\.reducers\.bytes\.per\.reducer|hive\.client\.stats\.counters|hive\.exec\.default\.partition\.name|hive\.exec\.drop\.ignorenonexistent|hive\.counters\.group\.name|hive\.enforce\.bucketing|hive\.enforce\.bucketmapjoin|hive\.enforce\.sorting|hive\.enforce\.sortmergebucketmapjoin|hive\.cache\.expr\.evaluation|hive\.groupby\.skewindata|hive\.hashtable\.loadfactor|hive\.hashtable\.initialCapacity|hive\.ignore\.mapjoin\.hint|hive\.limit\.row\.max\.size|hive\.mapred\.mode|hive\.map\.aggr|hive\.compute\.query\.using\.stats|hive\.exec\.rowoffset|hive\.variable\.substitute|hive\.variable\.substitute\.depth|hive\.autogen\.columnalias\.prefix\.includefuncname|hive\.autogen\.columnalias\.prefix\.label|hive\.exec\.check\.crossproducts|hive\.compat|hive\.exec\.concatenate\.check\.index|hive\.display\.partition\.cols\.separately|hive\.error\.on\.empty\.partition|hive\.execution\.engine|hive\.exim\.uri\.scheme\.whitelist|hive\.file\.max\.footer|hive\.mapred\.supports\.subdirectories|hive\.insert\.into\.multilevel\.dirs|hive\.localize\.resource\.num\.wait\.attempts|hive\.multi\.insert\.move\.tasks\.share\.dependencies|hive\.support\.quoted\.identifiers|hive\.resultset\.use\.unique\.column\.names|hive\.analyze\.stmt\.collect\.partlevel\.stats|hive\.exec\.job\.debug\.capture\.stacktraces|hive\.exec\.job\.debug\.timeout|hive\.exec\.max\.created\.files|hive\.exec\.reducers\.max|hive\.output\.file\.extension|hive\.exec\.show\.job\.failure\.debug\.info|hive\.exec\.tasklog\.debug\.timeout
hive.security.authorization.task.factory=org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactoryImpl
hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile
hive.security.metastore.authenticator.manager=org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator
hive.security.metastore.authorization.auth.reads=true
hive.security.metastore.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider
hive.serdes.using.metastore.for.schema=org.apache.hadoop.hive.ql.io.orc.OrcSerde,org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe,org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe,org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe,org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe,org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe,org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe,org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
hive.server.read.socket.timeout=10s
hive.server.tcp.keepalive=true
hive.server2.allow.user.substitution=true
hive.server2.async.exec.keepalive.time=10s
hive.server2.async.exec.shutdown.timeout=10s
hive.server2.async.exec.threads=100
hive.server2.async.exec.wait.queue.size=100
hive.server2.authentication=NONE
hive.server2.enable.doAs=true
hive.server2.global.init.file.location=${env:HIVE_CONF_DIR}
hive.server2.idle.operation.timeout=0ms
hive.server2.idle.session.timeout=0ms
hive.server2.logging.operation.enabled=true
hive.server2.logging.operation.log.location=${system:java.io.tmpdir}/${system:user.name}/operation_logs
hive.server2.logging.operation.verbose=false
hive.server2.long.polling.timeout=5000ms
hive.server2.map.fair.scheduler.queue=true
hive.server2.max.start.attempts=30
hive.server2.session.check.interval=0ms
hive.server2.support.dynamic.service.discovery=false
hive.server2.table.type.mapping=CLASSIC
hive.server2.tez.initialize.default.sessions=false
hive.server2.tez.sessions.per.default.queue=1
hive.server2.thrift.exponential.backoff.slot.length=100ms
hive.server2.thrift.http.max.idle.time=1800s
hive.server2.thrift.http.max.worker.threads=500
hive.server2.thrift.http.min.worker.threads=5
hive.server2.thrift.http.path=cliservice
hive.server2.thrift.http.port=10001
hive.server2.thrift.http.worker.keepalive.time=60s
hive.server2.thrift.login.timeout=20s
hive.server2.thrift.max.message.size=104857600
hive.server2.thrift.max.worker.threads=500
hive.server2.thrift.min.worker.threads=5
hive.server2.thrift.port=10000
hive.server2.thrift.sasl.qop=auth
hive.server2.thrift.worker.keepalive.time=60s
hive.server2.transport.mode=binary
hive.server2.use.SSL=false
hive.server2.zookeeper.namespace=hiveserver2
hive.session.history.enabled=false
hive.session.id=6960a884-e807-480a-8e68-817966321210
hive.session.silent=false
hive.skewjoin.key=100000
hive.skewjoin.mapjoin.map.tasks=10000
hive.skewjoin.mapjoin.min.split=33554432
hive.smbjoin.cache.rows=10000
hive.spark.client.connect.timeout=1000ms
hive.spark.client.future.timeout=60s
hive.spark.client.rpc.max.size=52428800
hive.spark.client.rpc.sasl.mechanisms=DIGEST-MD5
hive.spark.client.rpc.threads=8
hive.spark.client.secret.bits=256
hive.spark.client.server.connect.timeout=90000ms
hive.spark.job.monitor.timeout=60s
hive.ssl.protocol.blacklist=SSLv2,SSLv3
hive.stageid.rearrange=none
hive.start.cleanup.scratchdir=false
hive.stats.atomic=false
hive.stats.autogather=true
hive.stats.collect.rawdatasize=true
hive.stats.collect.scancols=false
hive.stats.collect.tablekeys=false
hive.stats.dbclass=fs
hive.stats.dbconnectionstring=jdbc:derby:;databaseName=TempStatsStore;create=true
hive.stats.deserialization.factor=1.0
hive.stats.fetch.column.stats=false
hive.stats.fetch.partition.stats=true
hive.stats.gather.num.threads=10
hive.stats.jdbc.timeout=30s
hive.stats.jdbcdriver=org.apache.derby.jdbc.EmbeddedDriver
hive.stats.join.factor=1.1
hive.stats.key.prefix.max.length=150
hive.stats.key.prefix.reserve.length=24
hive.stats.list.num.entries=10
hive.stats.map.num.entries=10
hive.stats.max.variable.length=100
hive.stats.ndv.error=20.0
hive.stats.reliable=false
hive.stats.retries.max=0
hive.stats.retries.wait=3000ms
hive.support.concurrency=false
hive.support.quoted.identifiers=column
hive.test.authz.sstd.hs2.mode=false
hive.test.mode=false
hive.test.mode.prefix=test_
hive.test.mode.samplefreq=32
hive.tez.auto.reducer.parallelism=false
hive.tez.container.size=-1
hive.tez.cpu.vcores=-1
hive.tez.dynamic.partition.pruning=true
hive.tez.dynamic.partition.pruning.max.data.size=104857600
hive.tez.dynamic.partition.pruning.max.event.size=1048576
hive.tez.exec.inplace.progress=true
hive.tez.exec.print.summary=false
hive.tez.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat
hive.tez.log.level=INFO
hive.tez.max.partition.factor=2.0
hive.tez.min.partition.factor=0.25
hive.tez.smb.number.waves=0.5
hive.transform.escape.input=false
hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
hive.txn.max.open.batch=1000
hive.txn.timeout=300s
hive.typecheck.on.insert=true
hive.udtf.auto.progress=false
hive.unlock.numretries=10
hive.user.install.directory=hdfs:///user/
hive.variable.substitute=true
hive.variable.substitute.depth=40
hive.vectorized.execution.enabled=false
hive.vectorized.execution.reduce.enabled=true
hive.vectorized.execution.reduce.groupby.enabled=true
hive.vectorized.groupby.checkinterval=100000
hive.vectorized.groupby.flush.percent=0.1
hive.vectorized.groupby.maxentries=1000000
hive.warehouse.subdir.inherit.perms=true
hive.zookeeper.clean.extra.nodes=false
hive.zookeeper.client.port=2181
hive.zookeeper.connection.basesleeptime=1000ms
hive.zookeeper.connection.max.retries=3
hive.zookeeper.namespace=hive_zookeeper_namespace
hive.zookeeper.session.timeout=600000ms
io.compression.codec.bzip2.library=system-native
io.file.buffer.size=4096
io.map.index.interval=128
io.map.index.skip=0
io.mapfile.bloom.error.rate=0.005
io.mapfile.bloom.size=1048576
io.native.lib.available=true
io.seqfile.compress.blocksize=1000000
io.seqfile.lazydecompress=true
io.seqfile.local.dir=${hadoop.tmp.dir}/io/local
io.seqfile.sorter.recordlimit=1000000
io.serializations=org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
io.skip.checksum.errors=false
ipc.client.connect.max.retries=10
ipc.client.connect.max.retries.on.timeouts=45
ipc.client.connect.retry.interval=1000
ipc.client.connect.timeout=20000
ipc.client.connection.maxidletime=10000
ipc.client.fallback-to-simple-auth-allowed=false
ipc.client.idlethreshold=4000
ipc.client.kill.max=10
ipc.client.ping=true
ipc.ping.interval=60000
ipc.server.listen.queue.size=128
javax.jdo.PersistenceManagerFactoryClass=org.datanucleus.api.jdo.JDOPersistenceManagerFactory
javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.EmbeddedDriver
javax.jdo.option.ConnectionPassword=mine
javax.jdo.option.ConnectionURL=jdbc:derby://localhost:1527/metastore_db;create=true
javax.jdo.option.ConnectionUserName=APP
javax.jdo.option.DetachAllOnCommit=true
javax.jdo.option.Multithreaded=true
javax.jdo.option.NonTransactionalRead=true
map.sort.class=org.apache.hadoop.util.QuickSort
mapred.child.java.opts=-Xmx200m
mapreduce.am.max-attempts=2
mapreduce.app-submission.cross-platform=false
mapreduce.client.completion.pollinterval=5000
mapreduce.client.output.filter=FAILED
mapreduce.client.progressmonitor.pollinterval=1000
mapreduce.client.submit.file.replication=10
mapreduce.cluster.acls.enabled=false
mapreduce.cluster.local.dir=${hadoop.tmp.dir}/mapred/local
mapreduce.cluster.temp.dir=${hadoop.tmp.dir}/mapred/temp
mapreduce.ifile.readahead=true
mapreduce.ifile.readahead.bytes=4194304
mapreduce.input.fileinputformat.input.dir.recursive=false
mapreduce.input.fileinputformat.list-status.num-threads=1
mapreduce.input.fileinputformat.split.maxsize=256000000
mapreduce.input.fileinputformat.split.minsize=1
mapreduce.input.fileinputformat.split.minsize.per.node=1
mapreduce.input.fileinputformat.split.minsize.per.rack=1
mapreduce.input.lineinputformat.linespermap=1
mapreduce.job.acl-modify-job=
mapreduce.job.acl-view-job=
mapreduce.job.classloader=false
mapreduce.job.committer.setup.cleanup.needed=false
mapreduce.job.committer.task.cleanup.needed=false
mapreduce.job.complete.cancel.delegation.tokens=true
mapreduce.job.counters.max=120
mapreduce.job.emit-timeline-data=false
mapreduce.job.end-notification.max.attempts=5
mapreduce.job.end-notification.max.retry.interval=5000
mapreduce.job.end-notification.retry.attempts=0
mapreduce.job.end-notification.retry.interval=1000
mapreduce.job.hdfs-servers=${fs.defaultFS}
mapreduce.job.jvm.numtasks=1
mapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.MapTask$MapOutputBuffer
mapreduce.job.maps=2
mapreduce.job.max.split.locations=10
mapreduce.job.maxtaskfailures.per.tracker=3
mapreduce.job.queuename=default
mapreduce.job.reduce.shuffle.consumer.plugin.class=org.apache.hadoop.mapreduce.task.reduce.Shuffle
mapreduce.job.reduce.slowstart.completedmaps=0.05
mapreduce.job.reducer.preempt.delay.sec=0
mapreduce.job.reduces=-1
mapreduce.job.speculative.slownodethreshold=1.0
mapreduce.job.speculative.slowtaskthreshold=1.0
mapreduce.job.speculative.speculativecap=0.1
mapreduce.job.split.metainfo.maxsize=10000000
mapreduce.job.token.tracking.ids.enabled=false
mapreduce.job.ubertask.enable=false
mapreduce.job.ubertask.maxmaps=9
mapreduce.job.ubertask.maxreduces=1
mapreduce.job.userlog.retain.hours=24
mapreduce.jobhistory.address=0.0.0.0:10020
mapreduce.jobhistory.admin.acl=*
mapreduce.jobhistory.admin.address=0.0.0.0:10033
mapreduce.jobhistory.cleaner.enable=true
mapreduce.jobhistory.cleaner.interval-ms=86400000
mapreduce.jobhistory.client.thread-count=10
mapreduce.jobhistory.datestring.cache.size=200000
mapreduce.jobhistory.done-dir=${yarn.app.mapreduce.am.staging-dir}/history/done
mapreduce.jobhistory.http.policy=HTTP_ONLY
mapreduce.jobhistory.intermediate-done-dir=${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate
mapreduce.jobhistory.joblist.cache.size=20000
mapreduce.jobhistory.keytab=/etc/security/keytab/jhs.service.keytab
mapreduce.jobhistory.loadedjobs.cache.size=5
mapreduce.jobhistory.max-age-ms=604800000
mapreduce.jobhistory.minicluster.fixed.ports=false
mapreduce.jobhistory.move.interval-ms=180000
mapreduce.jobhistory.move.thread-count=3
mapreduce.jobhistory.principal=jhs/_H...@REALM.TLD
mapreduce.jobhistory.recovery.enable=false
mapreduce.jobhistory.recovery.store.class=org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService
mapreduce.jobhistory.recovery.store.fs.uri=${hadoop.tmp.dir}/mapred/history/recoverystore
mapreduce.jobhistory.webapp.address=0.0.0.0:19888
mapreduce.jobtracker.address=local
mapreduce.jobtracker.expire.trackers.interval=600000
mapreduce.jobtracker.handler.count=10
mapreduce.jobtracker.heartbeats.in.second=100
mapreduce.jobtracker.http.address=0.0.0.0:50030
mapreduce.jobtracker.instrumentation=org.apache.hadoop.mapred.JobTrackerMetricsInst
mapreduce.jobtracker.jobhistory.block.size=3145728
mapreduce.jobtracker.jobhistory.lru.cache.size=5
mapreduce.jobtracker.jobhistory.task.numberprogresssplits=12
mapreduce.jobtracker.maxtasks.perjob=-1
mapreduce.jobtracker.persist.jobstatus.active=true
mapreduce.jobtracker.persist.jobstatus.dir=/jobtracker/jobsInfo
mapreduce.jobtracker.persist.jobstatus.hours=1
mapreduce.jobtracker.restart.recover=false
mapreduce.jobtracker.retiredjobs.cache.size=1000
mapreduce.jobtracker.staging.root.dir=${hadoop.tmp.dir}/mapred/staging
mapreduce.jobtracker.system.dir=${hadoop.tmp.dir}/mapred/system
mapreduce.jobtracker.taskcache.levels=2
mapreduce.jobtracker.taskscheduler=org.apache.hadoop.mapred.JobQueueTaskScheduler
mapreduce.jobtracker.tasktracker.maxblacklists=4
mapreduce.local.clientfactory.class.name=org.apache.hadoop.mapred.LocalClientFactory
mapreduce.map.cpu.vcores=1
mapreduce.map.log.level=INFO
mapreduce.map.maxattempts=4
mapreduce.map.memory.mb=1024
mapreduce.map.output.compress=false
mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
mapreduce.map.skip.maxrecords=0
mapreduce.map.skip.proc.count.autoincr=true
mapreduce.map.sort.spill.percent=0.80
mapreduce.map.speculative=true
mapreduce.output.fileoutputformat.compress=false
mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
mapreduce.output.fileoutputformat.compress.type=RECORD
mapreduce.reduce.cpu.vcores=1
mapreduce.reduce.input.buffer.percent=0.0
mapreduce.reduce.log.level=INFO
mapreduce.reduce.markreset.buffer.percent=0.0
mapreduce.reduce.maxattempts=4
mapreduce.reduce.memory.mb=1024
mapreduce.reduce.merge.inmem.threshold=1000
mapreduce.reduce.shuffle.connect.timeout=180000
mapreduce.reduce.shuffle.fetch.retry.enabled=${yarn.nodemanager.recovery.enabled}
mapreduce.reduce.shuffle.fetch.retry.interval-ms=1000
mapreduce.reduce.shuffle.fetch.retry.timeout-ms=30000
mapreduce.reduce.shuffle.input.buffer.percent=0.70
mapreduce.reduce.shuffle.memory.limit.percent=0.25
mapreduce.reduce.shuffle.merge.percent=0.66
mapreduce.reduce.shuffle.parallelcopies=5
mapreduce.reduce.shuffle.read.timeout=180000
mapreduce.reduce.skip.maxgroups=0
mapreduce.reduce.skip.proc.count.autoincr=true
mapreduce.reduce.speculative=true
mapreduce.shuffle.connection-keep-alive.enable=false
mapreduce.shuffle.connection-keep-alive.timeout=5
mapreduce.shuffle.max.connections=0
mapreduce.shuffle.max.threads=0
mapreduce.shuffle.port=13562
mapreduce.shuffle.ssl.enabled=false
mapreduce.shuffle.ssl.file.buffer.size=65536
mapreduce.shuffle.transfer.buffer.size=131072
mapreduce.task.combine.progress.records=10000
mapreduce.task.files.preserve.failedtasks=false
mapreduce.task.io.sort.factor=10
mapreduce.task.io.sort.mb=100
mapreduce.task.merge.progress.records=10000
mapreduce.task.profile=false
mapreduce.task.profile.map.params=${mapreduce.task.profile.params}
mapreduce.task.profile.maps=0-2
mapreduce.task.profile.params=-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s
mapreduce.task.profile.reduce.params=${mapreduce.task.profile.params}
mapreduce.task.profile.reduces=0-2
mapreduce.task.skip.start.attempts=2
mapreduce.task.timeout=600000
mapreduce.task.tmp.dir=./tmp
mapreduce.task.userlog.limit.kb=0
mapreduce.tasktracker.dns.interface=default
mapreduce.tasktracker.dns.nameserver=default
mapreduce.tasktracker.healthchecker.interval=60000
mapreduce.tasktracker.healthchecker.script.timeout=600000
mapreduce.tasktracker.http.address=0.0.0.0:50060
mapreduce.tasktracker.http.threads=40
mapreduce.tasktracker.indexcache.mb=10
mapreduce.tasktracker.instrumentation=org.apache.hadoop.mapred.TaskTrackerMetricsInst
mapreduce.tasktracker.local.dir.minspacekill=0
mapreduce.tasktracker.local.dir.minspacestart=0
mapreduce.tasktracker.map.tasks.maximum=2
mapreduce.tasktracker.outofband.heartbeat=false
mapreduce.tasktracker.reduce.tasks.maximum=2
mapreduce.tasktracker.report.address=127.0.0.1:0
mapreduce.tasktracker.taskcontroller=org.apache.hadoop.mapred.DefaultTaskController
mapreduce.tasktracker.taskmemorymanager.monitoringinterval=5000
mapreduce.tasktracker.tasks.sleeptimebeforesigkill=5000
net.topology.impl=org.apache.hadoop.net.NetworkTopology
net.topology.node.switch.mapping.impl=org.apache.hadoop.net.ScriptBasedMapping
net.topology.script.number.args=100
nfs.allow.insecure.ports=true
nfs.dump.dir=/tmp/.hdfs-nfs
nfs.exports.allowed.hosts=* rw
nfs.mountd.port=4242
nfs.rtmax=1048576
nfs.server.port=2049
nfs.wtmax=1048576
parquet.memory.pool.ratio=0.5
rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB=org.apache.hadoop.ipc.ProtobufRpcEngine
rpc.metrics.quantile.enable=false
s3.blocksize=67108864
s3.bytes-per-checksum=512
s3.client-write-packet-size=65536
s3.replication=3
s3.stream-buffer-size=4096
s3native.blocksize=67108864
s3native.bytes-per-checksum=512
s3native.client-write-packet-size=65536
s3native.replication=3
s3native.stream-buffer-size=4096
silent=off
stream.stderr.reporter.enabled=true
stream.stderr.reporter.prefix=reporter:
tfile.fs.input.buffer.size=262144
tfile.fs.output.buffer.size=262144
tfile.io.chunk.size=1048576
yarn.acl.enable=false
yarn.admin.acl=*
yarn.am.liveness-monitor.expiry-interval-ms=600000
yarn.app.mapreduce.am.command-opts=-Xmx1024m
yarn.app.mapreduce.am.container.log.backups=0
yarn.app.mapreduce.am.container.log.limit.kb=0
yarn.app.mapreduce.am.job.committer.cancel-timeout=60000
yarn.app.mapreduce.am.job.committer.commit-window=10000
yarn.app.mapreduce.am.job.task.listener.thread-count=30
yarn.app.mapreduce.am.resource.cpu-vcores=1
yarn.app.mapreduce.am.resource.mb=1536
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms=1000
yarn.app.mapreduce.am.staging-dir=/tmp/hadoop-yarn/staging
yarn.app.mapreduce.client-am.ipc.max-retries=3
yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts=3
yarn.app.mapreduce.client.max-retries=3
yarn.app.mapreduce.task.container.log.backups=0
yarn.client.application-client-protocol.poll-interval-ms=200
yarn.client.failover-proxy-provider=org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider
yarn.client.failover-retries=0
yarn.client.failover-retries-on-socket-timeouts=0
yarn.client.max-cached-nodemanagers-proxies=0
yarn.client.nodemanager-client-async.thread-pool-max-size=500
yarn.client.nodemanager-connect.max-wait-ms=180000
yarn.client.nodemanager-connect.retry-interval-ms=10000
yarn.dispatcher.drain-events.timeout=300000
yarn.fail-fast=false
yarn.http.policy=HTTP_ONLY
yarn.ipc.rpc.class=org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
yarn.log-aggregation-enable=false
yarn.log-aggregation.retain-check-interval-seconds=-1
yarn.log-aggregation.retain-seconds=-1
yarn.nm.liveness-monitor.expiry-interval-ms=600000
yarn.nodemanager.address=${yarn.nodemanager.hostname}:0
yarn.nodemanager.admin-env=MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
yarn.nodemanager.aux-services=mapreduce_shuffle
yarn.nodemanager.aux-services.mapreduce.shuffle.class=org.apache.hadoop.mapred.ShuffleHandler
yarn.nodemanager.aux-services.mapreduce_shuffle.class=org.apache.hadoop.mapred.ShuffleHandler
yarn.nodemanager.container-executor.class=org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
yarn.nodemanager.container-manager.thread-count=20
yarn.nodemanager.container-monitor.interval-ms=3000
yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled=false
yarn.nodemanager.delete.debug-delay-sec=43200
yarn.nodemanager.delete.thread-count=4
yarn.nodemanager.disk-health-checker.interval-ms=120000
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=90.0
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb=0
yarn.nodemanager.disk-health-checker.min-healthy-disks=0.25
yarn.nodemanager.docker-container-executor.exec-name=/usr/bin/docker
yarn.nodemanager.env-whitelist=JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME
yarn.nodemanager.health-checker.interval-ms=600000
yarn.nodemanager.health-checker.script.timeout-ms=1200000
yarn.nodemanager.hostname=0.0.0.0
yarn.nodemanager.keytab=/etc/krb5.keytab
yarn.nodemanager.linux-container-executor.cgroups.hierarchy=/hadoop-yarn
yarn.nodemanager.linux-container-executor.cgroups.mount=false
yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage=false
yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users=true
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user=nobody
yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern=^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$
yarn.nodemanager.linux-container-executor.resources-handler.class=org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
yarn.nodemanager.local-cache.max-files-per-directory=8192
yarn.nodemanager.local-dirs=${hadoop.tmp.dir}/nm-local-dir
yarn.nodemanager.localizer.address=${yarn.nodemanager.hostname}:8040
yarn.nodemanager.localizer.cache.cleanup.interval-ms=600000
yarn.nodemanager.localizer.cache.target-size-mb=10240
yarn.nodemanager.localizer.client.thread-count=5
yarn.nodemanager.localizer.fetch.thread-count=4
yarn.nodemanager.log-aggregation.compression-type=none
yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds=-1
yarn.nodemanager.log-dirs=${yarn.log.dir}/userlogs
yarn.nodemanager.log.retain-seconds=10800
yarn.nodemanager.pmem-check-enabled=true
yarn.nodemanager.recovery.dir=${hadoop.tmp.dir}/yarn-nm-recovery
yarn.nodemanager.recovery.enabled=false
yarn.nodemanager.remote-app-log-dir=/tmp/logs
yarn.nodemanager.remote-app-log-dir-suffix=logs
yarn.nodemanager.resource.cpu-vcores=8
yarn.nodemanager.resource.memory-mb=8192
yarn.nodemanager.resource.percentage-physical-cpu-limit=100
yarn.nodemanager.resourcemanager.minimum.version=NONE
yarn.nodemanager.vmem-check-enabled=true
yarn.nodemanager.vmem-pmem-ratio=2.1
yarn.nodemanager.webapp.address=${yarn.nodemanager.hostname}:8042
yarn.resourcemanager.address=cdapnode3.att.com:8040
yarn.resourcemanager.admin.address=${yarn.resourcemanager.hostname}:8033
yarn.resourcemanager.admin.client.thread-count=1
yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs=86400
yarn.resourcemanager.am.max-attempts=2
yarn.resourcemanager.client.thread-count=50
yarn.resourcemanager.configuration.provider-class=org.apache.hadoop.yarn.LocalConfigurationProvider
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs=86400
yarn.resourcemanager.container.liveness-monitor.interval-ms=600000
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms=30000
yarn.resourcemanager.fail-fast=${yarn.fail-fast}
yarn.resourcemanager.fs.state-store.retry-policy-spec=2000, 500
yarn.resourcemanager.fs.state-store.uri=${hadoop.tmp.dir}/yarn/system/rmstore
yarn.resourcemanager.ha.automatic-failover.embedded=true
yarn.resourcemanager.ha.automatic-failover.enabled=true
yarn.resourcemanager.ha.automatic-failover.zk-base-path=/yarn-leader-election
yarn.resourcemanager.ha.enabled=false
yarn.resourcemanager.hostname=0.0.0.0
yarn.resourcemanager.keytab=/etc/krb5.keytab
yarn.resourcemanager.max-completed-applications=10000
yarn.resourcemanager.nodemanager.minimum.version=NONE
yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000
yarn.resourcemanager.proxy-user-privileges.enabled=false
yarn.resourcemanager.recovery.enabled=false
yarn.resourcemanager.resource-tracker.address=cdapnode3.att.com:8025
yarn.resourcemanager.resource-tracker.client.thread-count=50
yarn.resourcemanager.scheduler.address=cdapnode3.att.com:8030
yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
yarn.resourcemanager.scheduler.client.thread-count=50
yarn.resourcemanager.scheduler.monitor.enable=false
yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
yarn.resourcemanager.state-store.max-completed-applications=${yarn.resourcemanager.max-completed-applications}
yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size=10
yarn.resourcemanager.system-metrics-publisher.enabled=false
yarn.resourcemanager.webapp.address=${yarn.resourcemanager.hostname}:8088
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled=true
yarn.resourcemanager.webapp.https.address=${yarn.resourcemanager.hostname}:8090
yarn.resourcemanager.work-preserving-recovery.enabled=false
yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms=10000
yarn.resourcemanager.zk-acl=world:anyone:rwcda
yarn.resourcemanager.zk-num-retries=1000
yarn.resourcemanager.zk-retry-interval-ms=1000
yarn.resourcemanager.zk-state-store.parent-path=/rmstore
yarn.resourcemanager.zk-timeout-ms=10000
yarn.scheduler.maximum-allocation-mb=8192
yarn.scheduler.maximum-allocation-vcores=32
yarn.scheduler.minimum-allocation-mb=512
yarn.scheduler.minimum-allocation-vcores=1
yarn.timeline-service.address=${yarn.timeline-service.hostname}:10200
yarn.timeline-service.client.max-retries=30
yarn.timeline-service.client.retry-interval-ms=1000
yarn.timeline-service.enabled=false
yarn.timeline-service.generic-application-history.max-applications=10000
yarn.timeline-service.handler-thread-count=10
yarn.timeline-service.hostname=0.0.0.0
yarn.timeline-service.http-authentication.simple.anonymous.allowed=true
yarn.timeline-service.http-authentication.type=simple
yarn.timeline-service.keytab=/etc/krb5.keytab
yarn.timeline-service.leveldb-timeline-store.path=${hadoop.tmp.dir}/yarn/timeline
yarn.timeline-service.leveldb-timeline-store.read-cache-size=104857600
yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size=10000
yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size=10000
yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms=300000
yarn.timeline-service.store-class=org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
yarn.timeline-service.ttl-enable=true
yarn.timeline-service.ttl-ms=604800000
yarn.timeline-service.webapp.address=${yarn.timeline-service.hostname}:8188
yarn.timeline-service.webapp.https.address=${yarn.timeline-service.hostname}:8190
env:CDAP_HOME=/opt/cdap
env:CLASSPATH=/opt/apache-hive-1.1.1-bin/conf:/opt/apache-hive-1.1.1-bin/lib/accumulo-core-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-fate-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-start-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-trace-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/activation-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/ant-1.9.1.jar:/opt/apache-hive-1.1.1-bin/lib/ant-launcher-1.9.1.jar:/opt/apache-hive-1.1.1-bin/lib/antlr-2.7.7.jar:/opt/apache-hive-1.1.1-bin/lib/antlr-runtime-3.4.jar:/opt/apache-hive-1.1.1-bin/lib/apache-log4j-extras-1.2.17.jar:/opt/apache-hive-1.1.1-bin/lib/asm-commons-3.1.jar:/opt/apache-hive-1.1.1-bin/lib/asm-tree-3.1.jar:/opt/apache-hive-1.1.1-bin/lib/avro-1.7.5.jar:/opt/apache-hive-1.1.1-bin/lib/bonecp-0.8.0.RELEASE.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-avatica-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-core-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-linq4j-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/commons-beanutils-1.7.0.jar:/opt/apache-hive-1.1.1-bin/lib/commons-beanutils-core-1.8.0.jar:/opt/apache-hive-1.1.1-bin/lib/commons-cli-1.2.jar:/opt/apache-hive-1.1.1-bin/lib/commons-codec-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-collections-3.2.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-compiler-2.7.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-compress-1.4.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-configuration-1.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-dbcp-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-digester-1.8.jar:/opt/apache-hive-1.1.1-bin/lib/commons-httpclient-3.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-io-2.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-lang-2.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-logging-1.1.3.jar:/opt/apache-hive-1.1.1-bin/lib/commons-math-2.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-pool-1.5.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-vfs2-2.0.jar:/opt/apache-hive-1.1.1-bin/lib/curator-client-2.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/curator-framework-2.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-api-jdo-3.2.6.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-core-3.2.10.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-rdbms-3.2.9.jar:/opt/apache-hive-1.1.1-bin/lib/derby-10.11.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/eigenbase-properties-1.1.4.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-annotation_1.0_spec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-jaspic_1.0_spec-1.0.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-jta_1.1_spec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/groovy-all-2.1.6.jar:/opt/apache-hive-1.1.1-bin/lib/guava-14.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/hamcrest-core-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-accumulo-handler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-ant-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-beeline-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-cli-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-common-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-contrib-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-exec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-hbase-handler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-hwi-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-jdbc-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-jdbc-1.1.1-standalone.jar:/opt/apache-hive-1.1.1-bin/lib/hive-metastore-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-serde-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-service-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-0.20S-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-0.23-1.1
.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-common-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-scheduler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-testutils-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/httpclient-4.2.5.jar:/opt/apache-hive-1.1.1-bin/lib/httpcore-4.2.5.jar:/opt/apache-hive-1.1.1-bin/lib/janino-2.7.6.jar:/opt/apache-hive-1.1.1-bin/lib/jcommander-1.32.jar:/opt/apache-hive-1.1.1-bin/lib/jdo-api-3.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/jetty-all-7.6.0.v20120127.jar:/opt/apache-hive-1.1.1-bin/lib/jetty-all-server-7.6.0.v20120127.jar:/opt/apache-hive-1.1.1-bin/lib/jline-2.12.jar:/opt/apache-hive-1.1.1-bin/lib/jpam-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/jsr305-3.0.0.jar:/opt/apache-hive-1.1.1-bin/lib/jta-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/junit-4.11.jar:/opt/apache-hive-1.1.1-bin/lib/libfb303-0.9.2.jar:/opt/apache-hive-1.1.1-bin/lib/libthrift-0.9.2.jar:/opt/apache-hive-1.1.1-bin/lib/log4j-1.2.16.jar:/opt/apache-hive-1.1.1-bin/lib/mail-1.4.1.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-api-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-provider-svn-commons-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-provider-svnexe-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/netty-3.7.0.Final.jar:/opt/apache-hive-1.1.1-bin/lib/opencsv-2.3.jar:/opt/apache-hive-1.1.1-bin/lib/oro-2.0.8.jar:/opt/apache-hive-1.1.1-bin/lib/paranamer-2.3.jar:/opt/apache-hive-1.1.1-bin/lib/parquet-hadoop-bundle-1.6.0rc3.jar:/opt/apache-hive-1.1.1-bin/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/opt/apache-hive-1.1.1-bin/lib/plexus-utils-1.5.6.jar:/opt/apache-hive-1.1.1-bin/lib/regexp-1.3.jar:/opt/apache-hive-1.1.1-bin/lib/servlet-api-2.5.jar:/opt/apache-hive-1.1.1-bin/lib/snappy-java-1.0.5.jar:/opt/apache-hive-1.1.1-bin/lib/ST4-4.0.4.jar:/opt/apache-hive-1.1.1-bin/lib/stax-api-1.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/stringtemplate-3.2.1.jar:/opt/apache-hive-1.1.1-bin/lib/super-csv-2.2.0.jar:/opt/apache-hive-1.1.1-bin/lib/tempus-fugit-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/velocity-1.5.jar:/opt/apache-hive-1.1.1-bin/lib/xz-1.0.jar:/opt/apache-hive-1.1.1-bin/lib/zookeeper-3.4.6.jar::/opt/db-derby-10.12.1.1-bin/lib/derbyclient.jar:/opt/db-derby-10.12.1.1-bin/lib/derby.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_cs.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_de_DE.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_es.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_fr.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_hu.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_it.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ja_JP.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ko_KR.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pl.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pt_BR.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ru.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_CN.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_TW.jar:/opt/db-derby-10.12.1.1-bin/lib/derbynet.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyoptionaltools.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyrun.jar:/opt/db-derby-10.12.1.1-bin/lib/derbytools.jar:/opt/hbase-1.0.3/conf:/opt/hbase-1.0.3/lib/metrics-core-2.2.0.jar:/opt/hbase-1.0.3/lib/htrace-core-3.1.0-incubating.jar:/opt/hbase-1.0.3/lib/hbase-server-1.0.3.jar:/opt/hbase-1.0.3/lib/netty-all-4.0.23.Final.jar:/opt/hbase-1.0.3/lib/hbase-protocol-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-client-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-hadoop-compat-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-common-1.0.3.jar:/opt/hadoop-2.6.4/contrib/capacit
y-scheduler/*.jar:/opt/hadoop-2.6.4/etc/hadoop:/opt/hadoop-2.6.4/share/hadoop/common/lib/*:/opt/hadoop-2.6.4/share/hadoop/common/*:/opt/hadoop-2.6.4/share/hadoop/hdfs:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/*:/opt/hadoop-2.6.4/share/hadoop/hdfs/*:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/*:/opt/hadoop-2.6.4/share/hadoop/yarn/*:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.6.4/share/hadoop/mapreduce/*
env:DERBY_HOME=/opt/db-derby-10.12.1.1-bin
env:DISPLAY=localhost:11.0
env:HADOOP_CLASSPATH=/opt/apache-hive-1.1.1-bin/conf:/opt/apache-hive-1.1.1-bin/lib/accumulo-core-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-fate-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-start-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-trace-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/activation-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/ant-1.9.1.jar:/opt/apache-hive-1.1.1-bin/lib/ant-launcher-1.9.1.jar:/opt/apache-hive-1.1.1-bin/lib/antlr-2.7.7.jar:/opt/apache-hive-1.1.1-bin/lib/antlr-runtime-3.4.jar:/opt/apache-hive-1.1.1-bin/lib/apache-log4j-extras-1.2.17.jar:/opt/apache-hive-1.1.1-bin/lib/asm-commons-3.1.jar:/opt/apache-hive-1.1.1-bin/lib/asm-tree-3.1.jar:/opt/apache-hive-1.1.1-bin/lib/avro-1.7.5.jar:/opt/apache-hive-1.1.1-bin/lib/bonecp-0.8.0.RELEASE.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-avatica-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-core-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-linq4j-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/commons-beanutils-1.7.0.jar:/opt/apache-hive-1.1.1-bin/lib/commons-beanutils-core-1.8.0.jar:/opt/apache-hive-1.1.1-bin/lib/commons-cli-1.2.jar:/opt/apache-hive-1.1.1-bin/lib/commons-codec-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-collections-3.2.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-compiler-2.7.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-compress-1.4.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-configuration-1.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-dbcp-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-digester-1.8.jar:/opt/apache-hive-1.1.1-bin/lib/commons-httpclient-3.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-io-2.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-lang-2.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-logging-1.1.3.jar:/opt/apache-hive-1.1.1-bin/lib/commons-math-2.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-pool-1.5.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-vfs2-2.0.jar:/opt/apache-hive-1.1.1-bin/lib/curator-client-2.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/curator-framework-2.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-api-jdo-3.2.6.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-core-3.2.10.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-rdbms-3.2.9.jar:/opt/apache-hive-1.1.1-bin/lib/derby-10.11.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/eigenbase-properties-1.1.4.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-annotation_1.0_spec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-jaspic_1.0_spec-1.0.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-jta_1.1_spec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/groovy-all-2.1.6.jar:/opt/apache-hive-1.1.1-bin/lib/guava-14.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/hamcrest-core-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-accumulo-handler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-ant-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-beeline-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-cli-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-common-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-contrib-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-exec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-hbase-handler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-hwi-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-jdbc-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-jdbc-1.1.1-standalone.jar:/opt/apache-hive-1.1.1-bin/lib/hive-metastore-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-serde-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-service-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-0.20S-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-0
.23-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-common-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-scheduler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-testutils-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/httpclient-4.2.5.jar:/opt/apache-hive-1.1.1-bin/lib/httpcore-4.2.5.jar:/opt/apache-hive-1.1.1-bin/lib/janino-2.7.6.jar:/opt/apache-hive-1.1.1-bin/lib/jcommander-1.32.jar:/opt/apache-hive-1.1.1-bin/lib/jdo-api-3.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/jetty-all-7.6.0.v20120127.jar:/opt/apache-hive-1.1.1-bin/lib/jetty-all-server-7.6.0.v20120127.jar:/opt/apache-hive-1.1.1-bin/lib/jline-2.12.jar:/opt/apache-hive-1.1.1-bin/lib/jpam-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/jsr305-3.0.0.jar:/opt/apache-hive-1.1.1-bin/lib/jta-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/junit-4.11.jar:/opt/apache-hive-1.1.1-bin/lib/libfb303-0.9.2.jar:/opt/apache-hive-1.1.1-bin/lib/libthrift-0.9.2.jar:/opt/apache-hive-1.1.1-bin/lib/log4j-1.2.16.jar:/opt/apache-hive-1.1.1-bin/lib/mail-1.4.1.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-api-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-provider-svn-commons-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-provider-svnexe-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/netty-3.7.0.Final.jar:/opt/apache-hive-1.1.1-bin/lib/opencsv-2.3.jar:/opt/apache-hive-1.1.1-bin/lib/oro-2.0.8.jar:/opt/apache-hive-1.1.1-bin/lib/paranamer-2.3.jar:/opt/apache-hive-1.1.1-bin/lib/parquet-hadoop-bundle-1.6.0rc3.jar:/opt/apache-hive-1.1.1-bin/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/opt/apache-hive-1.1.1-bin/lib/plexus-utils-1.5.6.jar:/opt/apache-hive-1.1.1-bin/lib/regexp-1.3.jar:/opt/apache-hive-1.1.1-bin/lib/servlet-api-2.5.jar:/opt/apache-hive-1.1.1-bin/lib/snappy-java-1.0.5.jar:/opt/apache-hive-1.1.1-bin/lib/ST4-4.0.4.jar:/opt/apache-hive-1.1.1-bin/lib/stax-api-1.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/stringtemplate-3.2.1.jar:/opt/apache-hive-1.1.1-bin/lib/super-csv-2.2.0.jar:/opt/apache-hive-1.1.1-bin/lib/tempus-fugit-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/velocity-1.5.jar:/opt/apache-hive-1.1.1-bin/lib/xz-1.0.jar:/opt/apache-hive-1.1.1-bin/lib/zookeeper-3.4.6.jar::/opt/db-derby-10.12.1.1-bin/lib/derbyclient.jar:/opt/db-derby-10.12.1.1-bin/lib/derby.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_cs.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_de_DE.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_es.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_fr.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_hu.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_it.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ja_JP.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ko_KR.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pl.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pt_BR.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ru.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_CN.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_TW.jar:/opt/db-derby-10.12.1.1-bin/lib/derbynet.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyoptionaltools.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyrun.jar:/opt/db-derby-10.12.1.1-bin/lib/derbytools.jar:/opt/hbase-1.0.3/conf:/opt/hbase-1.0.3/lib/metrics-core-2.2.0.jar:/opt/hbase-1.0.3/lib/htrace-core-3.1.0-incubating.jar:/opt/hbase-1.0.3/lib/hbase-server-1.0.3.jar:/opt/hbase-1.0.3/lib/netty-all-4.0.23.Final.jar:/opt/hbase-1.0.3/lib/hbase-protocol-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-client-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-hadoop-compat-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-common-1.0.3.jar:/opt/hadoop-2.6.4/contrib/
capacity-scheduler/*.jar
env:HADOOP_CLIENT_OPTS=-Xmx512m
env:HADOOP_COMMON_HOME=/opt/hadoop-2.6.4
env:HADOOP_COMMON_LIB_NATIVE_DIR=/opt/hadoop-2.6.4/lib/native
env:HADOOP_CONF_DIR=/opt/hadoop-2.6.4/etc/hadoop
env:HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS
env:HADOOP_HDFS_HOME=/opt/hadoop-2.6.4
env:HADOOP_HEAPSIZE=256
env:HADOOP_HOME=/opt/hadoop-2.6.4
env:HADOOP_HOME_WARN_SUPPRESS=true
env:HADOOP_IDENT_STRING=cdap
env:HADOOP_INSTALL=/opt/hadoop-2.6.4
env:HADOOP_MAPRED_HOME=/opt/hadoop-2.6.4
env:HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
env:HADOOP_NFS3_OPTS=
env:HADOOP_OPTS=-Djava.library.path=/opt/hadoop-2.6.4/lib/native -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/hadoop-2.6.4/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/hadoop-2.6.4 -Dhadoop.id.str=cdap -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m  -Dhadoop.security.logger=INFO,NullAppender
env:HADOOP_PID_DIR=
env:HADOOP_PORTMAP_OPTS=-Xmx512m
env:HADOOP_PREFIX=/opt/hadoop-2.6.4
env:HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
env:HADOOP_SECURE_DN_LOG_DIR=/
env:HADOOP_SECURE_DN_PID_DIR=
env:HADOOP_SECURE_DN_USER=
env:HADOOP_USER_CLASSPATH_FIRST=true
env:HADOOP_YARN_HOME=/opt/hadoop-2.6.4
env:HBASE_HOME=/opt/hbase-1.0.3
env:HBASE_LIBRARY_PATH=/opt/hadoop-2.6.4/lib/native/:/usr/local/lib:
env:HBASE_MANAGES_ZK=false
env:HBASE_VERSION=1.0
env:HIVE_AUX_JARS_PATH=/opt/db-derby-10.12.1.1-bin/lib
env:HIVE_CONF_DIR=/opt/apache-hive-1.1.1-bin/conf
env:HIVE_HOME=/opt/apache-hive-1.1.1-bin
env:HOME=/home/cdap
env:JAVA_HOME=/opt/jdk1.7.0_80
env:LANG=en_US.UTF-8
env:LD_LIBRARY_PATH=/opt/hadoop-2.6.4/lib/native/:/usr/local/lib:
env:LESSCLOSE=/usr/bin/lesspipe %s %s
env:LESSOPEN=| /usr/bin/lesspipe %s
env:LOGNAME=cdap
env:LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:
env:MAIL=/var/mail/cdap
env:MALLOC_ARENA_MAX=4
env:MAVEN_HOME=/opt/apache-maven-3.3.9
env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
env:NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript
env:PATH=/opt/apache-hive-1.1.1-bin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/opt/jdk1.7.0_80/bin:/opt/hadoop-2.6.4/sbin:/opt/hadoop-2.6.4/bin:/opt/db-derby-10.12.1.1-bin/bin:/opt/pig-0.15.0/bin:/opt/apache-maven-3.3.9/bin:/opt/cdap/cli/bin:/opt/hbase-1.0.3/bin:/opt/zookeeper-3.4.8/bin:/opt/spark-1.4.1-bin-hadoop2.6/bin
env:PIG_HOME=/opt/pig-0.15.0
env:PWD=/home/cdap
env:SERVICE_LIST=beeline cli help hiveburninclient hiveserver2 hiveserver hwi jar lineage metastore metatool orcfiledump rcfilecat schemaTool version
env:SHELL=/bin/bash
env:SHLVL=1
env:SPARK_HOME=/opt/spark-1.4.1-bin-hadoop2.6
env:SSH_CLIENT=10.20.0.1 55337 22
env:SSH_CONNECTION=10.20.0.1 55337 10.20.0.12 22
env:SSH_TTY=/dev/pts/2
env:TERM=xterm
env:USER=cdap
env:XDG_RUNTIME_DIR=/run/user/1000
env:XDG_SESSION_ID=2
env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
env:YARN_HOME=/opt/hadoop-2.6.4
env:ZOOKEEPER_HOME=/opt/zookeeper-3.4.8
system:awt.toolkit=sun.awt.X11.XToolkit
system:file.encoding=UTF-8
system:file.encoding.pkg=sun.io
system:file.separator=/
system:hadoop.home.dir=/opt/hadoop-2.6.4
system:hadoop.id.str=cdap
system:hadoop.log.dir=/opt/hadoop-2.6.4/logs
system:hadoop.log.file=hadoop.log
system:hadoop.policy.file=hadoop-policy.xml
system:hadoop.root.logger=INFO,console
system:hadoop.security.logger=INFO,NullAppender
system:hive.aux.jars.path=file:///opt/db-derby-10.12.1.1-bin/lib/derbyclient.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derby.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_cs.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_de_DE.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_es.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_fr.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_hu.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_it.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ja_JP.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ko_KR.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pl.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pt_BR.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ru.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_CN.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_TW.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbynet.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyoptionaltools.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyrun.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbytools.jar
system:java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment
system:java.awt.printerjob=sun.print.PSPrinterJob
system:java.class.path=/opt/apache-hive-1.1.1-bin/conf:/opt/apache-hive-1.1.1-bin/lib/accumulo-core-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-fate-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-start-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/accumulo-trace-1.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/activation-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/ant-1.9.1.jar:/opt/apache-hive-1.1.1-bin/lib/ant-launcher-1.9.1.jar:/opt/apache-hive-1.1.1-bin/lib/antlr-2.7.7.jar:/opt/apache-hive-1.1.1-bin/lib/antlr-runtime-3.4.jar:/opt/apache-hive-1.1.1-bin/lib/apache-log4j-extras-1.2.17.jar:/opt/apache-hive-1.1.1-bin/lib/asm-commons-3.1.jar:/opt/apache-hive-1.1.1-bin/lib/asm-tree-3.1.jar:/opt/apache-hive-1.1.1-bin/lib/avro-1.7.5.jar:/opt/apache-hive-1.1.1-bin/lib/bonecp-0.8.0.RELEASE.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-avatica-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-core-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/calcite-linq4j-1.0.0-incubating.jar:/opt/apache-hive-1.1.1-bin/lib/commons-beanutils-1.7.0.jar:/opt/apache-hive-1.1.1-bin/lib/commons-beanutils-core-1.8.0.jar:/opt/apache-hive-1.1.1-bin/lib/commons-cli-1.2.jar:/opt/apache-hive-1.1.1-bin/lib/commons-codec-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-collections-3.2.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-compiler-2.7.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-compress-1.4.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-configuration-1.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-dbcp-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-digester-1.8.jar:/opt/apache-hive-1.1.1-bin/lib/commons-httpclient-3.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-io-2.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-lang-2.6.jar:/opt/apache-hive-1.1.1-bin/lib/commons-logging-1.1.3.jar:/opt/apache-hive-1.1.1-bin/lib/commons-math-2.1.jar:/opt/apache-hive-1.1.1-bin/lib/commons-pool-1.5.4.jar:/opt/apache-hive-1.1.1-bin/lib/commons-vfs2-2.0.jar:/opt/apache-hive-1.1.1-bin/lib/curator-client-2.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/curator-framework-2.6.0.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-api-jdo-3.2.6.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-core-3.2.10.jar:/opt/apache-hive-1.1.1-bin/lib/datanucleus-rdbms-3.2.9.jar:/opt/apache-hive-1.1.1-bin/lib/derby-10.11.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/eigenbase-properties-1.1.4.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-annotation_1.0_spec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-jaspic_1.0_spec-1.0.jar:/opt/apache-hive-1.1.1-bin/lib/geronimo-jta_1.1_spec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/groovy-all-2.1.6.jar:/opt/apache-hive-1.1.1-bin/lib/guava-14.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/hamcrest-core-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-accumulo-handler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-ant-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-beeline-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-cli-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-common-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-contrib-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-exec-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-hbase-handler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-hwi-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-jdbc-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-jdbc-1.1.1-standalone.jar:/opt/apache-hive-1.1.1-bin/lib/hive-metastore-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-serde-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-service-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-0.20S-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims
-0.23-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-common-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-shims-scheduler-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/hive-testutils-1.1.1.jar:/opt/apache-hive-1.1.1-bin/lib/httpclient-4.2.5.jar:/opt/apache-hive-1.1.1-bin/lib/httpcore-4.2.5.jar:/opt/apache-hive-1.1.1-bin/lib/janino-2.7.6.jar:/opt/apache-hive-1.1.1-bin/lib/jcommander-1.32.jar:/opt/apache-hive-1.1.1-bin/lib/jdo-api-3.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/jetty-all-7.6.0.v20120127.jar:/opt/apache-hive-1.1.1-bin/lib/jetty-all-server-7.6.0.v20120127.jar:/opt/apache-hive-1.1.1-bin/lib/jline-2.12.jar:/opt/apache-hive-1.1.1-bin/lib/jpam-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/jsr305-3.0.0.jar:/opt/apache-hive-1.1.1-bin/lib/jta-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/junit-4.11.jar:/opt/apache-hive-1.1.1-bin/lib/libfb303-0.9.2.jar:/opt/apache-hive-1.1.1-bin/lib/libthrift-0.9.2.jar:/opt/apache-hive-1.1.1-bin/lib/log4j-1.2.16.jar:/opt/apache-hive-1.1.1-bin/lib/mail-1.4.1.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-api-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-provider-svn-commons-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/maven-scm-provider-svnexe-1.4.jar:/opt/apache-hive-1.1.1-bin/lib/netty-3.7.0.Final.jar:/opt/apache-hive-1.1.1-bin/lib/opencsv-2.3.jar:/opt/apache-hive-1.1.1-bin/lib/oro-2.0.8.jar:/opt/apache-hive-1.1.1-bin/lib/paranamer-2.3.jar:/opt/apache-hive-1.1.1-bin/lib/parquet-hadoop-bundle-1.6.0rc3.jar:/opt/apache-hive-1.1.1-bin/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/opt/apache-hive-1.1.1-bin/lib/plexus-utils-1.5.6.jar:/opt/apache-hive-1.1.1-bin/lib/regexp-1.3.jar:/opt/apache-hive-1.1.1-bin/lib/servlet-api-2.5.jar:/opt/apache-hive-1.1.1-bin/lib/snappy-java-1.0.5.jar:/opt/apache-hive-1.1.1-bin/lib/ST4-4.0.4.jar:/opt/apache-hive-1.1.1-bin/lib/stax-api-1.0.1.jar:/opt/apache-hive-1.1.1-bin/lib/stringtemplate-3.2.1.jar:/opt/apache-hive-1.1.1-bin/lib/super-csv-2.2.0.jar:/opt/apache-hive-1.1.1-bin/lib/tempus-fugit-1.1.jar:/opt/apache-hive-1.1.1-bin/lib/velocity-1.5.jar:/opt/apache-hive-1.1.1-bin/lib/xz-1.0.jar:/opt/apache-hive-1.1.1-bin/lib/zookeeper-3.4.6.jar::/opt/db-derby-10.12.1.1-bin/lib/derbyclient.jar:/opt/db-derby-10.12.1.1-bin/lib/derby.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_cs.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_de_DE.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_es.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_fr.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_hu.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_it.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ja_JP.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ko_KR.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pl.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pt_BR.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ru.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_CN.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_TW.jar:/opt/db-derby-10.12.1.1-bin/lib/derbynet.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyoptionaltools.jar:/opt/db-derby-10.12.1.1-bin/lib/derbyrun.jar:/opt/db-derby-10.12.1.1-bin/lib/derbytools.jar:/opt/hbase-1.0.3/conf:/opt/hbase-1.0.3/lib/metrics-core-2.2.0.jar:/opt/hbase-1.0.3/lib/htrace-core-3.1.0-incubating.jar:/opt/hbase-1.0.3/lib/hbase-server-1.0.3.jar:/opt/hbase-1.0.3/lib/netty-all-4.0.23.Final.jar:/opt/hbase-1.0.3/lib/hbase-protocol-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-client-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-hadoop-compat-1.0.3.jar:/opt/hbase-1.0.3/lib/hbase-common-1.0.3.jar:/opt/hadoop-2.6.4/contri
b/capacity-scheduler/*.jar:/opt/hadoop-2.6.4/etc/hadoop:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/hadoop-auth-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/curator-framework-2.6.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/curator-client-2.6.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop-2.6.4/share/
hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/hadoop-annotations-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/common/hadoop-nfs-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4-tests.jar:/opt/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-2.6.4-tests.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/li
b/guava-11.0.2.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-api-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-common-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-common-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-registry-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-client-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.6
.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.4-tests.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.4.jar:/opt/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.4.jar
system:java.class.version=51.0
system:java.endorsed.dirs=/opt/jdk1.7.0_80/jre/lib/endorsed
system:java.ext.dirs=/opt/jdk1.7.0_80/jre/lib/ext:/usr/java/packages/lib/ext
system:java.home=/opt/jdk1.7.0_80/jre
system:java.io.tmpdir=/tmp
system:java.library.path=/opt/hadoop-2.6.4/lib/native
system:java.net.preferIPv4Stack=true
system:java.runtime.name=Java(TM) SE Runtime Environment
system:java.runtime.version=1.7.0_80-b15
system:java.specification.name=Java Platform API Specification
system:java.specification.vendor=Oracle Corporation
system:java.specification.version=1.7
system:java.vendor=Oracle Corporation
system:java.vendor.url=http://java.oracle.com/
system:java.vendor.url.bug=http://bugreport.sun.com/bugreport/
system:java.version=1.7.0_80
system:java.vm.info=mixed mode
system:java.vm.name=Java HotSpot(TM) 64-Bit Server VM
system:java.vm.specification.name=Java Virtual Machine Specification
system:java.vm.specification.vendor=Oracle Corporation
system:java.vm.specification.version=1.7
system:java.vm.vendor=Oracle Corporation
system:java.vm.version=24.80-b11
system:line.separator=\n
system:os.arch=amd64
system:os.name=Linux
system:os.version=4.2.0-27-generic
system:path.separator=:
system:sun.arch.data.model=64
system:sun.boot.class.path=/opt/jdk1.7.0_80/jre/lib/resources.jar:/opt/jdk1.7.0_80/jre/lib/rt.jar:/opt/jdk1.7.0_80/jre/lib/sunrsasign.jar:/opt/jdk1.7.0_80/jre/lib/jsse.jar:/opt/jdk1.7.0_80/jre/lib/jce.jar:/opt/jdk1.7.0_80/jre/lib/charsets.jar:/opt/jdk1.7.0_80/jre/lib/jfr.jar:/opt/jdk1.7.0_80/jre/classes
system:sun.boot.library.path=/opt/jdk1.7.0_80/jre/lib/amd64
system:sun.cpu.endian=little
system:sun.cpu.isalist=
system:sun.io.unicode.encoding=UnicodeLittle
system:sun.java.command=org.apache.hadoop.util.RunJar /opt/apache-hive-1.1.1-bin/lib/hive-cli-1.1.1.jar org.apache.hadoop.hive.cli.CliDriver --hiveconf hive.aux.jars.path=file:///opt/db-derby-10.12.1.1-bin/lib/derbyclient.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derby.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_cs.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_de_DE.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_es.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_fr.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_hu.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_it.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ja_JP.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ko_KR.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pl.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_pt_BR.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_ru.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_CN.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyLocale_zh_TW.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbynet.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyoptionaltools.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbyrun.jar,file:///opt/db-derby-10.12.1.1-bin/lib/derbytools.jar -e set -v
system:sun.java.launcher=SUN_STANDARD
system:sun.jnu.encoding=UTF-8
system:sun.management.compiler=HotSpot 64-Bit Tiered Compilers
system:sun.os.patch.level=unknown
system:user.country=US
system:user.dir=/home/cdap
system:user.home=/home/cdap
system:user.language=en
system:user.name=cdap
system:user.timezone=America/New_York
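
For reference, the listing above is the output of Hive's "set -v" command; the exact invocation is visible in the system:sun.java.command line near the end. A minimal sketch for regenerating it on the node, assuming the hive client is on the PATH (the output file name here is arbitrary):

        # dump all Hive, Hadoop, env: and system: settings to a file
        hive -e 'set -v' > hive-settings.txt

"set -v" prints the Hadoop and system properties in addition to Hive's own, which is what produced the env: and system: entries above.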

Pope Xie

unread,
Jun 17, 2016, 7:23:58 PM6/17/16
to CDAP User
The new master log file is attached.

Best Regards

Pope Xie

hive.metastore.kerberos.principal=hive-metastore/_HOST@EXAMPLE.COM
...
master-cdap-cdapnode3.att.com.zip
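
The hive.metastore.kerberos.principal value quoted above still carries the _HOST@EXAMPLE.COM placeholder realm that ships in Hive's defaults, which suggests it was never customized. A quick sketch for confirming the effective value, assuming the same hive client is used (the property name is the only non-placeholder here):

        # print the effective value of a single property from the Hive CLI
        hive -e 'set hive.metastore.kerberos.principal'

On a non-kerberized cluster this default is typically ignored, so it only matters if the cluster has security enabled.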

Rohit Sinha

unread,
Jun 17, 2016, 9:08:39 PM6/17/16
to CDAP User
Hello Pope,
Thanks for sending us the output and the logs. We are looking into this and will get back to you with our findings.

Thanks.
...

Pope Xie

unread,
Jun 21, 2016, 9:23:27 AM6/21/16
to CDAP User
Can we meet on 6/23?

Thanks

Pope Xie