I'm trying to bring up the Lumify dev environment, but on startup nearly every service (ZooKeeper, Hadoop, Accumulo, Elasticsearch) fails with "Permission denied" errors. The full console output is below. Please help, and thanks for your support!
starting
Generating SSH1 RSA host key: [ OK ]
Starting sshd: [ OK ]
Starting ZooKeeper
---------------------------------------------------------------
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... /opt/zookeeper/bin/zkServer.sh: line 113: /tmp/zookeeper/zookeeper_server.pid: Permission denied
FAILED TO WRITE PID
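
The ZooKeeper failure is the first permission problem: zkServer.sh cannot write its PID file under /tmp/zookeeper. My understanding (a guess, not something I've verified) is that this path comes from dataDir in conf/zoo.cfg, so checking who owns that directory and who actually runs the script seems like the first step. A rough sketch of what I mean, with "zookeeper" as a placeholder account name:

    # Who runs the startup script, and what do the target dirs look like?
    id
    ls -ld /tmp /tmp/zookeeper

    # If /tmp/zookeeper is owned by the wrong account, hand it back:
    mkdir -p /tmp/zookeeper
    chown -R zookeeper:zookeeper /tmp/zookeeper

    # Or point dataDir in /opt/zookeeper/conf/zoo.cfg at a directory the
    # service account can write, e.g. dataDir=/var/lib/zookeeper
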
Starting Hadoop
---------------------------------------------------------------
**************** FORMATING NAMENODE ****************
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
15/01/25 22:38:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.3.0
STARTUP_MSG: classpath = /opt/hadoop/etc/hadoop/:/opt/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-annotations-2.3.0.jar:/opt/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/opt/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-2.3.0.jar:/opt/hadoop/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/opt/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.3.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.3.0-tests.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-2.3.0.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/
jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.3.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.3.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/opt/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/opt/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.
3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.3.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.3.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.3.0.jar:/opt/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG: java = 1.7.0_71
************************************************************/
15/01/25 22:38:43 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-cd8c7ad5-062f-4b17-975c-cba881919254
15/01/25 22:38:44 INFO namenode.FSNamesystem: fsLock is fair:true
15/01/25 22:38:44 INFO namenode.HostFileManager: read includes:
HostSet(
)
15/01/25 22:38:44 INFO namenode.HostFileManager: read excludes:
HostSet(
)
15/01/25 22:38:44 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/01/25 22:38:44 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/01/25 22:38:45 INFO util.GSet: Computing capacity for map BlocksMap
15/01/25 22:38:45 INFO util.GSet: VM type = 64-bit
15/01/25 22:38:45 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/01/25 22:38:45 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/01/25 22:38:45 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/01/25 22:38:45 INFO blockmanagement.BlockManager: defaultReplication = 3
15/01/25 22:38:45 INFO blockmanagement.BlockManager: maxReplication = 512
15/01/25 22:38:45 INFO blockmanagement.BlockManager: minReplication = 1
15/01/25 22:38:45 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/01/25 22:38:45 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/01/25 22:38:45 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/01/25 22:38:45 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/01/25 22:38:45 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/01/25 22:38:45 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
15/01/25 22:38:45 INFO namenode.FSNamesystem: supergroup = supergroup
15/01/25 22:38:45 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/01/25 22:38:45 INFO namenode.FSNamesystem: HA Enabled: false
15/01/25 22:38:45 INFO namenode.FSNamesystem: Append Enabled: true
15/01/25 22:38:45 INFO util.GSet: Computing capacity for map INodeMap
15/01/25 22:38:45 INFO util.GSet: VM type = 64-bit
15/01/25 22:38:45 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/01/25 22:38:45 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/01/25 22:38:45 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/01/25 22:38:45 INFO util.GSet: Computing capacity for map cachedBlocks
15/01/25 22:38:45 INFO util.GSet: VM type = 64-bit
15/01/25 22:38:45 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/01/25 22:38:45 INFO util.GSet: capacity = 2^18 = 262144 entries
15/01/25 22:38:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/01/25 22:38:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/01/25 22:38:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/01/25 22:38:45 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/01/25 22:38:45 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/01/25 22:38:45 INFO util.GSet: Computing capacity for map Namenode Retry Cache
15/01/25 22:38:45 INFO util.GSet: VM type = 64-bit
15/01/25 22:38:45 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/01/25 22:38:45 INFO util.GSet: capacity = 2^15 = 32768 entries
15/01/25 22:38:45 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /tmp/hadoop-root/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:311)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:523)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:544)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:147)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:829)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1218)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1325)
15/01/25 22:38:45 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot create directory /tmp/hadoop-root/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:311)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:523)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:544)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:147)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:829)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1218)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1325)
15/01/25 22:38:45 INFO util.ExitUtil: Exiting with status 1
15/01/25 22:38:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lumify-dev/172.17.0.8
************************************************************/
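
The format step dies because it cannot create /tmp/hadoop-root/dfs/name/current. As far as I know that path is just the Hadoop default (dfs.namenode.name.dir falls back to ${hadoop.tmp.dir}/dfs/name, and hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}), so either /tmp isn't writable for whoever runs the format, or the name dir needs to point somewhere else. Something along these lines is what I'd try; the /var/lib path and the "hadoop" owner are my own placeholders:

    # Check whether the default location is even creatable:
    ls -ld /tmp /tmp/hadoop-root

    # Or move the name dir off /tmp in etc/hadoop/hdfs-site.xml:
    #   <property>
    #     <name>dfs.namenode.name.dir</name>
    #     <value>file:///var/lib/hadoop/dfs/name</value>
    #   </property>
    mkdir -p /var/lib/hadoop/dfs/name
    chown -R hadoop:hadoop /var/lib/hadoop   # "hadoop" is a placeholder account
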
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Starting namenodes on [lumify-dev]
lumify-dev: chown: changing ownership of `/opt/hadoop/logs': Permission denied
lumify-dev: starting namenode, logging to /opt/hadoop/logs/hadoop-root-namenode-lumify-dev.out
lumify-dev: /opt/hadoop/sbin/hadoop-daemon.sh: line 157: /tmp/hadoop-root-namenode.pid: Permission denied
lumify-dev: /opt/hadoop/sbin/hadoop-daemon.sh: line 151: /opt/hadoop/logs/hadoop-root-namenode-lumify-dev.out: Permission denied
lumify-dev: head: cannot open `/opt/hadoop/logs/hadoop-root-namenode-lumify-dev.out' for reading: No such file or directory
lumify-dev: /opt/hadoop/sbin/hadoop-daemon.sh: line 166: /opt/hadoop/logs/hadoop-root-namenode-lumify-dev.out: Permission denied
lumify-dev: /opt/hadoop/sbin/hadoop-daemon.sh: line 167: /opt/hadoop/logs/hadoop-root-namenode-lumify-dev.out: Permission denied
localhost: chown: changing ownership of `/opt/hadoop/logs': Permission denied
localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-lumify-dev.out
localhost: /opt/hadoop/sbin/hadoop-daemon.sh: line 157: /tmp/hadoop-root-datanode.pid: Permission denied
localhost: /opt/hadoop/sbin/hadoop-daemon.sh: line 151: /opt/hadoop/logs/hadoop-root-datanode-lumify-dev.out: Permission denied
localhost: head: cannot open `/opt/hadoop/logs/hadoop-root-datanode-lumify-dev.out' for reading: No such file or directory
localhost: /opt/hadoop/sbin/hadoop-daemon.sh: line 166: /opt/hadoop/logs/hadoop-root-datanode-lumify-dev.out: Permission denied
localhost: /opt/hadoop/sbin/hadoop-daemon.sh: line 167: /opt/hadoop/logs/hadoop-root-datanode-lumify-dev.out: Permission denied
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-root-secondarynamenode-lumify-dev.out
0.0.0.0: chown: changing ownership of `/opt/hadoop/logs': Permission denied
0.0.0.0: /opt/hadoop/sbin/hadoop-daemon.sh: line 157: /tmp/hadoop-root-secondarynamenode.pid: Permission denied
0.0.0.0: /opt/hadoop/sbin/hadoop-daemon.sh: line 151: /opt/hadoop/logs/hadoop-root-secondarynamenode-lumify-dev.out: Permission denied
0.0.0.0: head: cannot open `/opt/hadoop/logs/hadoop-root-secondarynamenode-lumify-dev.out' for reading: No such file or directory
0.0.0.0: /opt/hadoop/sbin/hadoop-daemon.sh: line 166: /opt/hadoop/logs/hadoop-root-secondarynamenode-lumify-dev.out: Permission denied
0.0.0.0: /opt/hadoop/sbin/hadoop-daemon.sh: line 167: /opt/hadoop/logs/hadoop-root-secondarynamenode-lumify-dev.out: Permission denied
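
Every hadoop-daemon.sh invocation then hits the same two problems: it cannot chown or write /opt/hadoop/logs, and it cannot write its PID file in /tmp. In Hadoop 2.x those locations are controlled by HADOOP_LOG_DIR and HADOOP_PID_DIR in etc/hadoop/hadoop-env.sh, so I assume either the ownership of /opt/hadoop/logs has to be fixed or those variables pointed at writable directories. Roughly (the /var/... paths and the "hadoop" owner are my own choices, not anything from the Lumify setup):

    # Inspect the directories the scripts are complaining about:
    ls -ld /opt/hadoop/logs /tmp

    # Option 1: hand the existing log dir to whoever runs the daemons:
    chown -R hadoop:hadoop /opt/hadoop/logs

    # Option 2: redirect logs and PID files in etc/hadoop/hadoop-env.sh:
    export HADOOP_LOG_DIR=/var/log/hadoop
    export HADOOP_PID_DIR=/var/run/hadoop
    mkdir -p "$HADOOP_LOG_DIR" "$HADOOP_PID_DIR"
    chown -R hadoop:hadoop "$HADOOP_LOG_DIR" "$HADOOP_PID_DIR"
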
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
starting yarn daemons
chown: missing operand after `/opt/hadoop/logs'
Try `chown --help' for more information.
starting resourcemanager, logging to /opt/hadoop/logs/yarn--resourcemanager-lumify-dev.out
/opt/hadoop/sbin/yarn-daemon.sh: line 125: /tmp/yarn--resourcemanager.pid: Permission denied
/opt/hadoop/sbin/yarn-daemon.sh: line 124: /opt/hadoop/logs/yarn--resourcemanager-lumify-dev.out: Permission denied
head: cannot open `/opt/hadoop/logs/yarn--resourcemanager-lumify-dev.out' for reading: No such file or directory
/opt/hadoop/sbin/yarn-daemon.sh: line 129: /opt/hadoop/logs/yarn--resourcemanager-lumify-dev.out: Permission denied
/opt/hadoop/sbin/yarn-daemon.sh: line 130: /opt/hadoop/logs/yarn--resourcemanager-lumify-dev.out: Permission denied
localhost: chown: changing ownership of `/opt/hadoop/logs': Permission denied
localhost: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-lumify-dev.out
localhost: /opt/hadoop/sbin/yarn-daemon.sh: line 125: /tmp/yarn-root-nodemanager.pid: Permission denied
localhost: /opt/hadoop/sbin/yarn-daemon.sh: line 124: /opt/hadoop/logs/yarn-root-nodemanager-lumify-dev.out: Permission denied
localhost: head: cannot open `/opt/hadoop/logs/yarn-root-nodemanager-lumify-dev.out' for reading: No such file or directory
localhost: /opt/hadoop/sbin/yarn-daemon.sh: line 129: /opt/hadoop/logs/yarn-root-nodemanager-lumify-dev.out: Permission denied
localhost: /opt/hadoop/sbin/yarn-daemon.sh: line 130: /opt/hadoop/logs/yarn-root-nodemanager-lumify-dev.out: Permission denied
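
The YARN side shows the same pattern, plus one oddity: the log files are named yarn--resourcemanager-... (with an empty field where the user name normally goes) and chown is called with a missing operand, which makes me suspect $USER is not set in the environment that runs these scripts. I could be wrong about that, but the equivalent knobs for YARN are YARN_LOG_DIR and YARN_PID_DIR in etc/hadoop/yarn-env.sh, so a similar sketch would be:

    # Verify whether $USER is actually set where the init script runs:
    echo "USER=${USER:-<unset>}"

    # Same treatment as the HDFS daemons, via etc/hadoop/yarn-env.sh
    # (paths and the "hadoop" owner are placeholders):
    export YARN_LOG_DIR=/var/log/hadoop-yarn
    export YARN_PID_DIR=/var/run/hadoop-yarn
    mkdir -p "$YARN_LOG_DIR" "$YARN_PID_DIR"
    chown -R hadoop:hadoop "$YARN_LOG_DIR" "$YARN_PID_DIR"
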
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Starting Accumulo
---------------------------------------------------------------
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Creating accumulo user in hdfs
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Starting monitor on lumify-dev
/opt/accumulo-1.6.1/bin/start-server.sh: line 78: /opt/accumulo-1.6.1/logs/monitor_lumify-dev.out: Permission denied
Starting tablet servers .... done
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Starting tablet server on lumify-dev
/opt/accumulo-1.6.1/bin/start-server.sh: line 78: /opt/accumulo-1.6.1/logs/tserver_lumify-dev.out: Permission denied
2015-01-25 22:39:22,722 [vfs.UniqueFileReplicator] WARN : Unexpected error creating directory /tmp/accumulo-vfs-cache-716@lumify-dev-root
2015-01-25 22:39:22,760 [vfs.UniqueFileReplicator] WARN : Unexpected error creating directory /tmp/accumulo-vfs-cache-716@lumify-dev-root
2015-01-25 22:39:24,621 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
2015-01-25 22:39:24,628 [server.Accumulo] INFO : Attempting to talk to zookeeper
Thread "org.apache.accumulo.master.state.SetGoalState" died Failed to connect to zookeeper (localhost:2181) within 2x zookeeper timeout period 30000
java.lang.RuntimeException: Failed to connect to zookeeper (localhost:2181) within 2x zookeeper timeout period 30000
at org.apache.accumulo.fate.zookeeper.ZooSession.connect(ZooSession.java:117)
at org.apache.accumulo.fate.zookeeper.ZooSession.getSession(ZooSession.java:161)
at org.apache.accumulo.fate.zookeeper.ZooReader.getSession(ZooReader.java:35)
at org.apache.accumulo.fate.zookeeper.ZooReaderWriter.getZooKeeper(ZooReaderWriter.java:50)
at org.apache.accumulo.fate.zookeeper.ZooReader.getChildren(ZooReader.java:59)
at org.apache.accumulo.server.Accumulo.waitForZookeeperAndHdfs(Accumulo.java:246)
at org.apache.accumulo.master.state.SetGoalState.main(SetGoalState.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.accumulo.start.Main$1.run(Main.java:141)
at java.lang.Thread.run(Thread.java:745)
Starting master on lumify-dev
/opt/accumulo-1.6.1/bin/start-server.sh: line 78: /opt/accumulo-1.6.1/logs/master_lumify-dev.out: Permission denied
Starting garbage collector on lumify-dev
/opt/accumulo-1.6.1/bin/start-server.sh: line 78: /opt/accumulo-1.6.1/logs/gc_lumify-dev.out: Permission denied
Starting tracer on lumify-dev
/opt/accumulo-1.6.1/bin/start-server.sh: line 78: /opt/accumulo-1.6.1/logs/tracer_lumify-dev.out: Permission denied
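
Accumulo looks like a knock-on effect: each start-server.sh call fails to write its .out file under /opt/accumulo-1.6.1/logs, and SetGoalState then times out against localhost:2181, which is expected since ZooKeeper never actually started above. I believe the log location is ACCUMULO_LOG_DIR in conf/accumulo-env.sh, so once ZooKeeper is fixed, something like this should cover the rest (again just a sketch, with "accumulo" as a placeholder account):

    # Make the Accumulo log directory writable, or move it:
    ls -ld /opt/accumulo-1.6.1/logs
    chown -R accumulo:accumulo /opt/accumulo-1.6.1/logs

    # Or, in /opt/accumulo-1.6.1/conf/accumulo-env.sh:
    export ACCUMULO_LOG_DIR=/var/log/accumulo
    mkdir -p "$ACCUMULO_LOG_DIR" && chown -R accumulo:accumulo "$ACCUMULO_LOG_DIR"
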
Starting Elasticsearch
---------------------------------------------------------------
Starting RabbitMQ
---------------------------------------------------------------
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Offline change; changes will take effect at broker restart.
Starting Lumify Config
---------------------------------------------------------------
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/elasticsearch/logs/elasticsearch.log (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:440)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:105)
at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:94)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:178)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
log4j:ERROR Either File or DatePattern options are not set for appender [file].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/elasticsearch/logs/elasticsearch_index_indexing_slowlog.log (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:440)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:105)
at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:94)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:178)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
log4j:ERROR Either File or DatePattern options are not set for appender [index_indexing_slow_log_file].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/elasticsearch/logs/elasticsearch_index_search_slowlog.log (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:440)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:105)
at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:94)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:178)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
log4j:ERROR Either File or DatePattern options are not set for appender [index_search_slow_log_file].
{1.4.1}: Initialization Failed ...
- ElasticsearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/opt/elasticsearch/data/elasticsearch]]
IOException[failed to obtain lock on /opt/elasticsearch/data/elasticsearch/nodes/49]
IOException[Cannot create directory: /opt/elasticsearch/data/elasticsearch/nodes/49]
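
Elasticsearch fails the same way twice over: log4j cannot open anything under /opt/elasticsearch/logs, and the node cannot take its lock because /opt/elasticsearch/data isn't creatable either. Both default to locations under the install directory, and in 1.x they can be redirected with path.logs and path.data in config/elasticsearch.yml. A sketch of what I'd expect to fix it; the "elasticsearch" owner and the /var/... paths are my assumptions:

    # Fix ownership of the in-place defaults...
    chown -R elasticsearch:elasticsearch /opt/elasticsearch/logs /opt/elasticsearch/data

    # ...or relocate them in /opt/elasticsearch/config/elasticsearch.yml:
    #   path.logs: /var/log/elasticsearch
    #   path.data: /var/lib/elasticsearch
    mkdir -p /var/log/elasticsearch /var/lib/elasticsearch
    chown -R elasticsearch:elasticsearch /var/log/elasticsearch /var/lib/elasticsearch
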
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true