Kafka setup problem.


akshay naidu

Jan 3, 2017, 9:16:56 AM
to Confluent Platform
Hi,
I am trying to set up Kafka on my Ubuntu laptop.
I downloaded Kafka and extracted it.
When I ran the command
./sbt update
it said "no such file or directory", so I moved on to the next step of starting the servers, beginning with the ZooKeeper server:
sudo bin/zookeeper-server-start.sh config/zookeeper.properties
[2017-01-03 19:19:45,484] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-01-03 19:19:45,509] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-03 19:19:45,510] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-03 19:19:45,510] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-03 19:19:45,510] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2017-01-03 19:19:45,583] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-01-03 19:19:45,584] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2017-01-03 19:19:45,620] INFO Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:host.name=akshay-300E4Z-300E5Z-300E7Z (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:java.version=1.8.0_101 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:java.home=/usr/lib/jvm/java-8-oracle/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:java.class.path=:/usr/local/kafka/bin/../libs/connect-json-0.9.0.0.jar:/usr/local/kafka/bin/../libs/javassist-3.18.1-GA.jar:/usr/local/kafka/bin/../libs/jetty-security-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/scala-library-2.11.7.jar:/usr/local/kafka/bin/../libs/hk2-locator-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/jersey-common-2.22.1.jar:/usr/local/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-test.jar:/usr/local/kafka/bin/../libs/connect-api-0.9.0.0.jar:/usr/local/kafka/bin/../libs/jersey-guava-2.22.1.jar:/usr/local/kafka/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-scaladoc.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-core-2.22.1.jar:/usr/local/kafka/bin/../libs/kafka-tools-0.9.0.0.jar:/usr/local/kafka/bin/../libs/jersey-media-jaxb-2.22.1.jar:/usr/local/kafka/bin/../libs/zookeeper-3.4.6.jar:/usr/local/kafka/bin/../libs/hk2-utils-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-base-2.5.4.jar:/usr/local/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-2.22.1.jar:/usr/local/kafka/bin/../libs/jetty-io-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-javadoc.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0.jar:/usr/local/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/usr/local/kafka/bin/../libs/scala-xml_2.11-1.0.4.jar:/usr/local/kafka/bin/../libs/kafka-clients-0.9.0.0.jar:/usr/local/kafka/bin/../libs/javax.inject-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/zkclient-0.7.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-json-provider-2.5.4.jar:/usr/local/kafka/bin/../libs/connect-file-0.9.0.0.jar:/usr/local/kafka/bin/../libs/connect-runtime-0.9.0.0.jar:/usr/local/kafka/bin/../libs/jetty-http-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/kafka-log4j-appender-0.9.0.0.jar:/usr/local/kafka/bin/../libs/argparse4j-0.5.0.jar:/usr/local/kafka/bin/../libs/jetty-server-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/jackson-annotations-2.5.0.jar:/usr/local/kafka/bin/../libs/snappy-java-1.1.1.7.jar:/usr/local/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka/bin/../libs/jersey-server-2.22.1.jar:/usr/local/kafka/bin/../libs/javax.inject-1.jar:/usr/local/kafka/bin/../libs/jetty-servlet-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/hk2-api-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/jackson-core-2.5.4.jar:/usr/local/kafka/bin/../libs/jackson-module-jaxb-annotations-2.5.4.jar:/usr/local/kafka/bin/../libs/lz4-1.2.0.jar:/usr/local/kafka/bin/../libs/jersey-client-2.22.1.jar:/usr/local/kafka/bin/../libs/jopt-simple-3.2.jar:/usr/local/kafka/bin/../libs/jetty-util-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/slf4j-api-1.7.6.jar:/usr/local/kafka/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka/bin/../libs/jackson-databind-2.5.4.jar:/usr/local/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/kafka/bin/../libs/slf4j-log4j12-1.7.6.jar:/usr/local/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-sources.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,620] INFO Server environment:os.version=3.19.0-64-generic (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,621] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,621] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,621] INFO Server environment:user.dir=/usr/local/kafka (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,644] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,645] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,645] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:45,727] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-03 19:19:46,825] INFO Accepted socket connection from /127.0.0.1:42954 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-03 19:19:47,023] INFO Client attempting to renew session 0x159647b7a440001 at /127.0.0.1:42954 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:19:47,027] INFO Established session 0x159647b7a440001 with negotiated timeout 6000 for client /127.0.0.1:42954 (org.apache.zookeeper.server.ZooKeeperServer)

[2017-01-03 19:31:06,113] INFO Accepted socket connection from /127.0.0.1:43061 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-03 19:31:06,128] INFO Client attempting to establish new session at /127.0.0.1:43061 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:31:06,129] INFO Creating new log file: log.20 (org.apache.zookeeper.server.persistence.FileTxnLog)
[2017-01-03 19:31:06,210] INFO Established session 0x1596497c8270000 with negotiated timeout 6000 for client /127.0.0.1:43061 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:31:06,513] INFO Processed session termination for sessionid: 0x1596497c8270000 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-03 19:31:06,537] INFO Closed socket connection for client /127.0.0.1:43061 which had sessionid 0x1596497c8270000 (org.apache.zookeeper.server.NIOServerCnxn)
[2017-01-03 19:31:15,089] INFO Accepted socket connection from /127.0.0.1:43064 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-01-03 19:31:15,092] INFO Client attempting to establish new session at /127.0.0.1:43064 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:31:15,100] INFO Established session 0x1596497c8270001 with negotiated timeout 6000 for client /127.0.0.1:43064 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-03 19:31:15,165] INFO Processed session termination for sessionid: 0x1596497c8270001 (org.apache.zookeeper.server.PrepRequestProcessor)
[2017-01-03 19:31:15,179] INFO Closed socket connection for client /127.0.0.1:43064 which had sessionid 0x1596497c8270001 (org.apache.zookeeper.server.NIOServerCnxn)


and this did not give me the prompt back.

I moved on to the next step in a new terminal:
$ sudo bin/kafka-server-start.sh config/server.properties
[2017-01-03 19:31:14,880] INFO KafkaConfig values: 
metric.reporters = []
quota.producer.default = 9223372036854775807
offsets.topic.num.partitions = 50
log.flush.interval.messages = 9223372036854775807
auto.create.topics.enable = true
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
replica.socket.receive.buffer.bytes = 65536
min.insync.replicas = 1
num.recovery.threads.per.data.dir = 1
ssl.keystore.type = JKS
default.replication.factor = 1
ssl.truststore.password = null
log.preallocate = false
sasl.kerberos.principal.to.local.rules = [DEFAULT]
fetch.purgatory.purge.interval.requests = 1000
ssl.endpoint.identification.algorithm = null
message.max.bytes = 1000012
num.io.threads = 8
offsets.commit.required.acks = -1
delete.topic.enable = false
quota.window.size.seconds = 1
ssl.truststore.type = JKS
quota.window.num = 11
zookeeper.connect = localhost:2181
num.replica.fetchers = 1
log.roll.jitter.hours = 0
log.cleaner.enable = false
offsets.load.buffer.size = 5242880
ssl.client.auth = none
controlled.shutdown.max.retries = 3
queued.max.requests = 500
offsets.topic.replication.factor = 3
log.cleaner.threads = 1
sasl.kerberos.ticket.renew.jitter = 0.05
socket.request.max.bytes = 104857600
ssl.trustmanager.algorithm = PKIX
log.retention.bytes = -1
sasl.kerberos.min.time.before.relogin = 60000
zookeeper.set.acl = false
offsets.retention.minutes = 1440
inter.broker.protocol.version = 0.9.0.X
log.retention.hours = 168
num.partitions = 1
listeners = PLAINTEXT://:9092
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
log.roll.ms = null
log.flush.scheduler.interval.ms = 9223372036854775807
ssl.cipher.suites = null
log.index.size.max.bytes = 10485760
ssl.keymanager.algorithm = SunX509
security.inter.broker.protocol = PLAINTEXT
replica.fetch.max.bytes = 1048576
advertised.port = null
log.cleaner.dedupe.buffer.size = 524288000
log.cleaner.io.buffer.size = 524288
sasl.kerberos.ticket.renew.window.factor = 0.8
log.roll.hours = 168
log.cleanup.policy = delete
max.connections.per.ip = 2147483647
offsets.topic.segment.bytes = 104857600
background.threads = 10
quota.consumer.default = 9223372036854775807
log.index.interval.bytes = 4096
log.dir = /tmp/kafka-logs
log.segment.bytes = 1073741824
offset.metadata.max.bytes = 4096
ssl.truststore.location = null
ssl.keystore.password = null
port = 9092
log.retention.minutes = null
log.dirs = /tmp/kafka-logs
controlled.shutdown.enable = true
compression.type = producer
max.connections.per.ip.overrides = 
sasl.kerberos.kinit.cmd = /usr/bin/kinit
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
auto.leader.rebalance.enable = true
leader.imbalance.check.interval.seconds = 300
log.cleaner.min.cleanable.ratio = 0.5
num.network.threads = 3
ssl.key.password = null
metrics.num.samples = 2
socket.send.buffer.bytes = 102400
ssl.protocol = TLS
socket.receive.buffer.bytes = 102400
ssl.keystore.location = null
replica.fetch.min.bytes = 1
unclean.leader.election.enable = true
log.cleaner.io.buffer.load.factor = 0.9
producer.purgatory.purge.interval.requests = 1000
offsets.topic.compression.codec = 0
advertised.listeners = null
leader.imbalance.per.broker.percentage = 10
 (kafka.server.KafkaConfig)
[2017-01-03 19:31:14,971] INFO starting (kafka.server.KafkaServer)
[2017-01-03 19:31:14,975] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2017-01-03 19:31:14,989] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-01-03 19:31:14,995] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:14,995] INFO Client environment:host.name=akshay-300E4Z-300E5Z-300E7Z (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:14,995] INFO Client environment:java.version=1.8.0_101 (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:14,995] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,003] INFO Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,003] INFO Client environment:java.class.path=:/usr/local/kafka/bin/../libs/connect-json-0.9.0.0.jar:/usr/local/kafka/bin/../libs/javassist-3.18.1-GA.jar:/usr/local/kafka/bin/../libs/jetty-security-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/scala-library-2.11.7.jar:/usr/local/kafka/bin/../libs/hk2-locator-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/jersey-common-2.22.1.jar:/usr/local/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-test.jar:/usr/local/kafka/bin/../libs/connect-api-0.9.0.0.jar:/usr/local/kafka/bin/../libs/jersey-guava-2.22.1.jar:/usr/local/kafka/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-scaladoc.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-core-2.22.1.jar:/usr/local/kafka/bin/../libs/kafka-tools-0.9.0.0.jar:/usr/local/kafka/bin/../libs/jersey-media-jaxb-2.22.1.jar:/usr/local/kafka/bin/../libs/zookeeper-3.4.6.jar:/usr/local/kafka/bin/../libs/hk2-utils-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-base-2.5.4.jar:/usr/local/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-2.22.1.jar:/usr/local/kafka/bin/../libs/jetty-io-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-javadoc.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0.jar:/usr/local/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/usr/local/kafka/bin/../libs/scala-xml_2.11-1.0.4.jar:/usr/local/kafka/bin/../libs/kafka-clients-0.9.0.0.jar:/usr/local/kafka/bin/../libs/javax.inject-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/zkclient-0.7.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-json-provider-2.5.4.jar:/usr/local/kafka/bin/../libs/connect-file-0.9.0.0.jar:/usr/local/kafka/bin/../libs/connect-runtime-0.9.0.0.jar:/usr/local/kafka/bin/../libs/jetty-http-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/kafka-log4j-appender-0.9.0.0.jar:/usr/local/kafka/bin/../libs/argparse4j-0.5.0.jar:/usr/local/kafka/bin/../libs/jetty-server-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/jackson-annotations-2.5.0.jar:/usr/local/kafka/bin/../libs/snappy-java-1.1.1.7.jar:/usr/local/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka/bin/../libs/jersey-server-2.22.1.jar:/usr/local/kafka/bin/../libs/javax.inject-1.jar:/usr/local/kafka/bin/../libs/jetty-servlet-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/hk2-api-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/jackson-core-2.5.4.jar:/usr/local/kafka/bin/../libs/jackson-module-jaxb-annotations-2.5.4.jar:/usr/local/kafka/bin/../libs/lz4-1.2.0.jar:/usr/local/kafka/bin/../libs/jersey-client-2.22.1.jar:/usr/local/kafka/bin/../libs/jopt-simple-3.2.jar:/usr/local/kafka/bin/../libs/jetty-util-9.2.12.v20150709.jar:/usr/local/kafka/bin/../libs/slf4j-api-1.7.6.jar:/usr/local/kafka/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka/bin/../libs/jackson-databind-2.5.4.jar:/usr/local/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/kafka/bin/../libs/slf4j-log4j12-1.7.6.jar:/usr/local/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b31.jar:/usr/local/kafka/bin/../libs/kafka_2.11-0.9.0.0-sources.jar (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,004] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,004] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,004] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,004] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,004] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,004] INFO Client environment:os.version=3.19.0-64-generic (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,004] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,005] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,005] INFO Client environment:user.dir=/usr/local/kafka (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,007] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@23f7d05d (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,022] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2017-01-03 19:31:15,027] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-01-03 19:31:15,090] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-01-03 19:31:15,102] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1596497c8270001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2017-01-03 19:31:15,104] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2017-01-03 19:31:15,157] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:98)
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:95)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:95)
at kafka.log.LogManager.<init>(LogManager.scala:57)
at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:589)
at kafka.server.KafkaServer.startup(KafkaServer.scala:171)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-01-03 19:31:15,160] INFO shutting down (kafka.server.KafkaServer)
[2017-01-03 19:31:15,164] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-01-03 19:31:15,178] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2017-01-03 19:31:15,178] INFO Session: 0x1596497c8270001 closed (org.apache.zookeeper.ZooKeeper)
[2017-01-03 19:31:15,180] INFO shut down completed (kafka.server.KafkaServer)
[2017-01-03 19:31:15,181] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:98)
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:95)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:95)
at kafka.log.LogManager.<init>(LogManager.scala:57)
at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:589)
at kafka.server.KafkaServer.startup(KafkaServer.scala:171)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-01-03 19:31:15,182] INFO shutting down (kafka.server.KafkaServer)
$

What am I doing wrong? Please assist me.
Thank you.

Dustin Cote

Jan 3, 2017, 11:03:58 AM
to confluent...@googlegroups.com
It looks like the user you are starting Kafka as does not have permission to write to /tmp based on this error:

FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.

You can try changing the `log.dirs` parameter in your config to a directory that your current user can write to.
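For example, something along these lines (the path is only an illustration; use any directory your user owns):

mkdir -p /home/youruser/kafka-logs

# then edit config/server.properties and change
#   log.dirs=/tmp/kafka-logs
# to
#   log.dirs=/home/youruser/kafka-logs

bin/kafka-server-start.sh config/server.properties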

--
Dustin Cote
Customer Operations Engineer | Confluent
Follow us: Twitter | blog

akshay naidu

Jan 3, 2017, 11:25:53 AM
to Confluent Platform
Hi Dustin,
Thanks for replying.
I just gave 777 permissions to the entire Kafka folder, but the same problem is still appearing.


akshay naidu

Jan 3, 2017, 11:36:01 AM
to Confluent Platform
Also, the kafka-create-topic.sh command is not found:


hdpusr@akshay-300E4Z-300E5Z-300E7Z:/usr/local/kafka$ sudo bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic test
sudo: bin/kafka-create-topic.sh: command not found
hdpusr@akshay-300E4Z-300E5Z-300E7Z:/usr/local/kafka$ ls bin
connect-distributed.sh     kafka-console-producer.sh         kafka-preferred-replica-election.sh  kafka-run-class.sh              kafka-verifiable-consumer.sh     zookeeper-server-stop.sh
connect-standalone.sh      kafka-consumer-groups.sh          kafka-producer-perf-test.sh          kafka-server-start.sh           kafka-verifiable-producer.sh     zookeeper-shell.sh
kafka-acls.sh              kafka-consumer-offset-checker.sh  kafka-reassign-partitions.sh         kafka-server-stop.sh            windows
kafka-configs.sh           kafka-consumer-perf-test.sh       kafka-replay-log-producer.sh         kafka-simple-consumer-shell.sh  zookeeper-security-migration.sh
kafka-console-consumer.sh  kafka-mirror-maker.sh             kafka-replica-verification.sh        kafka-topics.sh                 zookeeper-server-start.sh
hdpusr@akshay-300E4Z-300E5Z-300E7Z:/usr/local/kafka$







Dustin Cote

Jan 3, 2017, 2:15:55 PM
to confluent...@googlegroups.com
I think you are looking for the kafka-topics.sh script and using the --create option.
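For example, the rough equivalent of the command you tried would be something like:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test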

The error you've mentioned is basically because a lock can't be obtained on /tmp/kafka-logs, so you have two possibilities if permissions on /tmp are not the problem:
1) You have another Kafka process already using /tmp/kafka-logs 
2) You have a stale lock hanging around in /tmp/kafka-logs

You can delete /tmp/kafka-logs entirely and start from scratch. That should get you past the error if your permissions on /tmp are indeed correct.
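A minimal sketch of that, assuming no other broker is still running against the directory:

# stop any running broker first, then clear the stale log dir
rm -rf /tmp/kafka-logs

# restart the broker; it recreates /tmp/kafka-logs on startup
bin/kafka-server-start.sh config/server.properties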




--
Dustin Cote
Customer Operations Engineer | Confluent
Follow us: Twitter | blog


akshay naidu

Jan 3, 2017, 2:26:09 PM
to Confluent Platform
hdpusr@akshay-300E4Z-300E5Z-300E7Z:/usr/local/kafka$ ls -l /tmp/kafka-logs
total 4
-rwxrwxrwx 1 hdpusr hadoop 54 Jan  3 18:50 meta.properties
-rwxrwxrwx 1 hdpusr hadoop  0 Jan  3 18:50 recovery-point-offset-checkpoint
-rwxrwxrwx 1 hdpusr hadoop  0 Jan  3 18:50 replication-offset-checkpoint
hdpusr@akshay-300E4Z-300E5Z-300E7Z:/usr/local/kafka$ 


Should I delete /tmp/kafka-logs completely? Will it be regenerated automatically?

akshay naidu

Jan 3, 2017, 3:37:48 PM
to Confluent Platform
Hey Dustin,
I deleted /tmp/kafka-logs and tried to start the Kafka server, but again got some errors. What does the "already in use" mentioned in the error below refer to?

[2017-01-04 02:00:50,004] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:os.version=3.19.0-64-generic (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,004] INFO Client environment:user.dir=/usr/local/kafka (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,005] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@23f7d05d (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:00:50,173] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2017-01-04 02:00:50,233] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:00:50,507] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:00:50,614] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1596497c827000b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:00:50,616] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2017-01-04 02:00:50,893] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
[2017-01-04 02:00:51,001] INFO Loading logs. (kafka.log.LogManager)
[2017-01-04 02:00:51,047] INFO Logs loading complete. (kafka.log.LogManager)
[2017-01-04 02:00:51,048] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2017-01-04 02:00:51,065] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2017-01-04 02:00:51,094] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2017-01-04 02:00:51,295] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092: Address already in use.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:309)
at kafka.network.Acceptor.<init>(SocketServer.scala:237)
at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:108)
at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:91)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
at kafka.network.SocketServer.startup(SocketServer.scala:91)
at kafka.server.KafkaServer.startup(KafkaServer.scala:179)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:305)
... 11 more
[2017-01-04 02:00:51,330] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
[2017-01-04 02:00:51,333] INFO [Socket Server on Broker 0], Shutting down (kafka.network.SocketServer)



akshay naidu

Jan 3, 2017, 4:28:43 PM
to Confluent Platform
This appeared some time later, below the kafka-server start output above:

[2017-01-04 02:51:47,933] INFO Client session timed out, have not heard from server in 4002ms for sessionid 0x1596497c827000b, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:48,944] INFO zookeeper state changed (Disconnected) (org.I0Itec.zkclient.ZkClient)
[2017-01-04 02:51:50,779] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:50,781] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:54,863] INFO zookeeper state changed (Expired) (org.I0Itec.zkclient.ZkClient)
[2017-01-04 02:51:54,863] INFO Unable to reconnect to ZooKeeper service, session 0x1596497c827000b has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:54,863] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@23f7d05d (org.apache.zookeeper.ZooKeeper)
[2017-01-04 02:51:54,866] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:54,871] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:54,872] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:55,523] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1596497c827000c, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2017-01-04 02:51:55,523] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)

Dustin Cote

Jan 4, 2017, 8:54:03 AM
to confluent...@googlegroups.com
The following indicates you have a process already listening on port 9092:
kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092: Address already in use

You should do something like `ps -ef |grep kafka` to see if you have a Kafka process already running. If so, I recommend killing it and starting from scratch by deleting /tmp/kafka-logs after killing the process. Then start a broker up and it will automatically create /tmp/kafka-logs for you if the user has permission to write into /tmp.
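For example (the PID reported by ps will differ on your machine):

ps -ef | grep -i kafka                        # note the PID of any running broker
kill <pid>                                    # stop it; use kill -9 <pid> only if it will not exit
rm -rf /tmp/kafka-logs                        # clear the old log directory
bin/kafka-server-start.sh config/server.properties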


akshay naidu

Jan 4, 2017, 9:11:55 AM
to confluent...@googlegroups.com

Thanks for the response, Dustin. I deleted /tmp/kafka-logs and followed the procedure from the start, and now it's working absolutely fine. I am able to create topics out of the sample.csv file. Thank you for your support.



M.Aruna Devi

Nov 16, 2018, 1:32:18 AM
to Confluent Platform
Hi,
I am trying to start the Kafka server and am getting the error below while doing this.

I am running the command kafka-server-start.sh config/server.properties to start Kafka.


OpenJDK 64-Bit Server VM warning: Cannot open file /opt/Kafka/kafka_2.12-1.0.0/bin/../logs/kafkaServer-gc.log due to Permission denied

log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/Kafka/kafka_2.12-1.0.0/bin/../logs/server.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:75)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:44)
at kafka.Kafka$.main(Kafka.scala:81)
at kafka.Kafka.main(Kafka.scala)
log4j:ERROR Either File or DatePattern options are not set for appender [kafkaAppender].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/Kafka/kafka_2.12-1.0.0/bin/../logs/kafka-request.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:75)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:44)
at kafka.Kafka$.main(Kafka.scala:81)
at kafka.Kafka.main(Kafka.scala)
log4j:ERROR Either File or DatePattern options are not set for appender [requestAppender].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/Kafka/kafka_2.12-1.0.0/bin/../logs/kafka-authorizer.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:75)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:44)
at kafka.Kafka$.main(Kafka.scala:81)
at kafka.Kafka.main(Kafka.scala)
log4j:ERROR Either File or DatePattern options are not set for appender [authorizerAppender].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/Kafka/kafka_2.12-1.0.0/bin/../logs/controller.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:75)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:44)
at kafka.Kafka$.main(Kafka.scala:81)
at kafka.Kafka.main(Kafka.scala)
log4j:ERROR Either File or DatePattern options are not set for appender [controllerAppender].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/Kafka/kafka_2.12-1.0.0/bin/../logs/log-cleaner.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:75)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:44)
at kafka.Kafka$.main(Kafka.scala:81)
at kafka.Kafka.main(Kafka.scala)
log4j:ERROR Either File or DatePattern options are not set for appender [cleanerAppender].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/Kafka/kafka_2.12-1.0.0/bin/../logs/state-change.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:75)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:44)
at kafka.Kafka$.main(Kafka.scala:81)
at kafka.Kafka.main(Kafka.scala)
log4j:ERROR Either File or DatePattern options are not set for appender [stateChangeAppender].
[2018-11-16 11:59:27,546] FATAL  (kafka.Kafka$)
java.io.FileNotFoundException: config/server.properties (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:495)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:44)
at kafka.Kafka$.main(Kafka.scala:81)
at kafka.Kafka.main(Kafka.scala)

chinchu chinchu

Nov 16, 2018, 12:42:19 PM
to confluent...@googlegroups.com
Are you in the right directory, and does the user have permission to write to the /opt/Kafka/kafka_2.12-1.0.0/bin/../logs directory? Try giving the absolute path to server.properties to be sure.
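For example, something like this (adjust the user and group to whoever actually runs the broker; the paths just follow the ones in your error):

sudo mkdir -p /opt/Kafka/kafka_2.12-1.0.0/logs
sudo chown -R youruser:yourgroup /opt/Kafka/kafka_2.12-1.0.0/logs
cd /opt/Kafka/kafka_2.12-1.0.0
bin/kafka-server-start.sh /opt/Kafka/kafka_2.12-1.0.0/config/server.properties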


