Docker error on OS X (with new Docker)


İnanç Gümüş

May 8, 2017, 5:12:10 PM
to Confluent Platform
Hi,

I ran these commands:

git clone https://github.com/confluentinc/cp-docker-images
cd cp-docker-images/examples/kafka-single-node
docker-compose up
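
For reference, the example's docker-compose.yml looked roughly like this at the time. This is a sketch reconstructed from the broker configuration visible in the logs (broker id 1, ZooKeeper on localhost:32181, advertised listener localhost:29092); the exact file in the repo may differ:

```yaml
# Hypothetical reconstruction of examples/kafka-single-node/docker-compose.yml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    network_mode: host
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:32181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092
```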

However, the log shows errors and keeps retrying:

PS: I tried to run Kafka without Docker a few days ago and failed then, too. It's very hard to set up. So I decided to use Docker, and failed again. What could be the problem?

kafka_1      | [2017-05-08 21:00:37,382] WARN [Controller-1-to-broker-1-send-thread], Controller 1 epoch 1 fails to send request (type: UpdateMetadataRequest=, controllerId=1, controllerEpoch=1, partitionStates={}, liveBrokers=(id=1, endPoints=(host=localhost, port=29092, listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT), rack=null)) to broker localhost:29092 (id: 1 rack: null). Reconnecting to broker. (kafka.controller.RequestSendThread)
kafka_1      | java.io.IOException: Connection to 1 was disconnected before the response was read
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:114)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
kafka_1      |     at scala.Option.foreach(Option.scala:257)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:112)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:108)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:136)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:142)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
kafka_1      |     at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:196)
kafka_1      |     at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:188)
kafka_1      |     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
kafka_1      | [2017-05-08 21:00:37,484] INFO [Controller-1-to-broker-1-send-thread], Controller 1 connected to localhost:29092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka_1      | [2017-05-08 21:00:37,486] ERROR Processor got uncaught exception. (kafka.network.Processor)
kafka_1      | java.lang.NoClassDefFoundError: Could not initialize class kafka.network.RequestChannel$
kafka_1      |     at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:111)
kafka_1      |     at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:513)
kafka_1      |     at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:505)
kafka_1      |     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
kafka_1      |     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
kafka_1      |     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
kafka_1      |     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
kafka_1      |     at kafka.network.Processor.processCompletedReceives(SocketServer.scala:505)
kafka_1      |     at kafka.network.Processor.run(SocketServer.scala:433)
kafka_1      |     at java.lang.Thread.run(Thread.java:745)
kafka_1      | [2017-05-08 21:01:07,516] WARN [Controller-1-to-broker-1-send-thread], Controller 1 epoch 1 fails to send request (type: UpdateMetadataRequest=, controllerId=1, controllerEpoch=1, partitionStates={}, liveBrokers=(id=1, endPoints=(host=localhost, port=29092, listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT), rack=null)) to broker localhost:29092 (id: 1 rack: null). Reconnecting to broker. (kafka.controller.RequestSendThread)
kafka_1      | java.io.IOException: Connection to 1 was disconnected before the response was read
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:114)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
kafka_1      |     at scala.Option.foreach(Option.scala:257)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:112)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:108)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:136)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:142)
kafka_1      |     at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
kafka_1      |     at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:196)
kafka_1      |     at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:188)
kafka_1      |     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
kafka_1      | [2017-05-08 21:01:07,618] INFO [Controller-1-to-broker-1-send-thread], Controller 1 connected to localhost:29092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka_1      | [2017-05-08 21:01:07,619] ERROR Processor got uncaught exception. (kafka.network.Processor)
kafka_1      | java.lang.NoClassDefFoundError: Could not initialize class kafka.network.RequestChannel$
kafka_1      |     at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:111)
kafka_1      |     at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:513)
kafka_1      |     at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:505)
kafka_1      |     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
kafka_1      |     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
kafka_1      |     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
kafka_1      |     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
kafka_1      |     at kafka.network.Processor.processCompletedReceives(SocketServer.scala:505)
kafka_1      |     at kafka.network.Processor.run(SocketServer.scala:433)
kafka_1      |     at java.lang.Thread.run(Thread.java:745)


PS: I'm using the new Docker release, so most of the issues discussed in the Confluent Docker documentation should be easier to address, I believe.

İnanç Gümüş

May 8, 2017, 5:14:23 PM
to Confluent Platform
Btw, these are my versions:

- Docker version: 17.03.1-ce
- Docker compose version: 1.11.2, build dfed245


The new Docker release uses a different architecture than the previous ones.

On Tuesday, May 9, 2017 at 00:12:10 UTC+3, İnanç Gümüş wrote:

Roger Hoover

May 8, 2017, 8:07:59 PM
to confluent...@googlegroups.com
This looks like an issue with the classpath:  `java.lang.NoClassDefFoundError: Could not initialize class kafka.network.RequestChannel$`

--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To unsubscribe from this group and stop receiving emails from it, send an email to confluent-platform+unsub...@googlegroups.com.
To post to this group, send email to confluent-platform@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/confluent-platform/fc89f208-acb1-4c6a-8252-253c2ba0d935%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

dan

May 8, 2017, 8:52:17 PM
to confluent...@googlegroups.com
hi İnanç,

i tried this locally and saw the following:
kafka_1      | java.lang.ExceptionInInitializerError
kafka_1      |     at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:111)
kafka_1      |     at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:513)
kafka_1      |     at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:505)
kafka_1      |     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
kafka_1      |     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
kafka_1      |     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
kafka_1      |     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
kafka_1      |     at kafka.network.Processor.processCompletedReceives(SocketServer.scala:505)
kafka_1      |     at kafka.network.Processor.run(SocketServer.scala:433)
kafka_1      |     at java.lang.Thread.run(Thread.java:745)
kafka_1      | Caused by: java.net.UnknownHostException: moby: moby: Name or service not known
kafka_1      |     at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
kafka_1      |     at kafka.network.RequestChannel$.<init>(RequestChannel.scala:41)
kafka_1      |     at kafka.network.RequestChannel$.<clinit>(RequestChannel.scala)
kafka_1      |     ... 10 more
kafka_1      | Caused by: java.net.UnknownHostException: moby: Name or service not known
kafka_1      |     at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
kafka_1      |     at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
kafka_1      |     at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
kafka_1      |     at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
kafka_1      |     ... 12 more

based on http://blog.yohanliyanage.com/2016/09/docker-machine-moby-name-or-service-not-known/, i have made the following pr https://github.com/confluentinc/cp-docker-images/pull/258 which solves this issue for me locally. can you try it out and let me know if it works for you too?
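
for anyone following along, the kind of change that fix involves (per the blog post above) is giving the container a way to resolve the `moby` hostname that `InetAddress.getLocalHost()` fails on. a rough sketch of such a docker-compose change follows; the actual diff in the pr may differ:

```yaml
# Hypothetical sketch: map the Docker for Mac VM hostname "moby" to loopback
# so JVM hostname lookups inside the container succeed.
services:
  kafka:
    # ...existing image/environment settings unchanged...
    extra_hosts:
      - "moby:127.0.0.1"
```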

thanks
dan


İnanç Gümüş

May 9, 2017, 4:31:15 AM
to confluent...@googlegroups.com
Yeah, what do you suggest?

Roger Hoover <roger....@gmail.com> wrote (9 May 2017, 03:07):


Roger Hoover

May 9, 2017, 1:29:00 PM
to confluent...@googlegroups.com
Can you try Dan's suggestion?


İnanç Gümüş

May 10, 2017, 6:45:45 AM
to Confluent Platform
Hi Dan, thank you very much, that worked! I saw that you've updated all the other images' docker-compose configs too, so they'll probably work as well. I'll try them.

Btw, there are some different errors in the log. Do you think these are fine?

va/kafka/connect-transforms-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/jackson-core-2.8.5.jar:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,382] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,382] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,382] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,382] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,383] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,383] INFO Server environment:os.version=4.9.13-moby (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,383] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,383] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,383] INFO Server environment:user.dir=/ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,388] INFO tickTime set to 2000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,388] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,388] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,396] INFO binding to port 0.0.0.0/0.0.0.0:32181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
kafka_1      |
kafka_1      | if [[ -z "${KAFKA_LOG_DIRS-}" ]]
kafka_1      | then
kafka_1      |   export KAFKA_LOG_DIRS
kafka_1      |   KAFKA_LOG_DIRS="/var/lib/kafka/data"
kafka_1      | fi
kafka_1      | + [[ -z '' ]]
kafka_1      | + export KAFKA_LOG_DIRS
kafka_1      | + KAFKA_LOG_DIRS=/var/lib/kafka/data
kafka_1      |
kafka_1      | # advertised.host, advertised.port, host and port are deprecated. Exit if these properties are set.
kafka_1      | if [[ -n "${KAFKA_ADVERTISED_PORT-}" ]]
kafka_1      | then
kafka_1      |   echo "advertised.port is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
kafka_1      |   exit 1
kafka_1      | fi
kafka_1      | + [[ -n '' ]]
kafka_1      |
kafka_1      | if [[ -n "${KAFKA_ADVERTISED_HOST-}" ]]
kafka_1      | then
kafka_1      |   echo "advertised.host is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
kafka_1      |   exit 1
kafka_1      | fi
kafka_1      | + [[ -n '' ]]
kafka_1      |
kafka_1      | if [[ -n "${KAFKA_HOST-}" ]]
kafka_1      | then
kafka_1      |   echo "host is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
kafka_1      |   exit 1
kafka_1      | fi
kafka_1      | + [[ -n '' ]]
kafka_1      |
kafka_1      | if [[ -n "${KAFKA_PORT-}" ]]
kafka_1      | then
kafka_1      |   echo "port is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
kafka_1      |   exit 1
kafka_1      | fi
kafka_1      | + [[ -n '' ]]
kafka_1      |
kafka_1      | # Set if ADVERTISED_LISTENERS has SSL:// or SASL_SSL:// endpoints.
kafka_1      | if [[ $KAFKA_ADVERTISED_LISTENERS == *"SSL://"* ]]
kafka_1      | then
kafka_1      |   echo "SSL is enabled."
kafka_1      |
kafka_1      |   dub ensure KAFKA_SSL_KEYSTORE_FILENAME
kafka_1      |   export KAFKA_SSL_KEYSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_FILENAME"
kafka_1      |   dub path "$KAFKA_SSL_KEYSTORE_LOCATION" exists
kafka_1      |
kafka_1      |   dub ensure KAFKA_SSL_KEY_CREDENTIALS
kafka_1      |   KAFKA_SSL_KEY_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEY_CREDENTIALS"
kafka_1      |   dub path "$KAFKA_SSL_KEY_CREDENTIALS_LOCATION" exists
kafka_1      |   export KAFKA_SSL_KEY_PASSWORD
kafka_1      |   KAFKA_SSL_KEY_PASSWORD=$(cat "$KAFKA_SSL_KEY_CREDENTIALS_LOCATION")
kafka_1      |
kafka_1      |   dub ensure KAFKA_SSL_KEYSTORE_CREDENTIALS
kafka_1      |   KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_CREDENTIALS"
kafka_1      |   dub path "$KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION" exists
kafka_1      |   export KAFKA_SSL_KEYSTORE_PASSWORD
kafka_1      |   KAFKA_SSL_KEYSTORE_PASSWORD=$(cat "$KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION")
kafka_1      |
kafka_1      |   dub ensure KAFKA_SSL_TRUSTSTORE_FILENAME
kafka_1      |   export KAFKA_SSL_TRUSTSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_FILENAME"
kafka_1      |   dub path "$KAFKA_SSL_TRUSTSTORE_LOCATION" exists
kafka_1      |
kafka_1      |   dub ensure KAFKA_SSL_TRUSTSTORE_CREDENTIALS
kafka_1      |   KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_CREDENTIALS"
kafka_1      |   dub path "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION" exists
kafka_1      |   export KAFKA_SSL_TRUSTSTORE_PASSWORD
kafka_1      |   KAFKA_SSL_TRUSTSTORE_PASSWORD=$(cat "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION")
kafka_1      |
kafka_1      | fi
kafka_1      | + [[ PLAINTEXT://localhost:29092 == *\S\S\L\:\/\/* ]]
kafka_1      |
kafka_1      | # Set if KAFKA_ADVERTISED_LISTENERS has SASL_PLAINTEXT:// or SASL_SSL:// endpoints.
kafka_1      | if [[ $KAFKA_ADVERTISED_LISTENERS =~ .*SASL_.*://.* ]]
kafka_1      | then
kafka_1      |   echo "SASL" is enabled.
kafka_1      |
kafka_1      |   dub ensure KAFKA_OPTS
kafka_1      |
kafka_1      |   if [[ ! $KAFKA_OPTS == *"java.security.auth.login.config"*  ]]
kafka_1      |   then
kafka_1      |     echo "KAFKA_OPTS should contain 'java.security.auth.login.config' property."
kafka_1      |   fi
kafka_1      | fi
kafka_1      | + [[ PLAINTEXT://localhost:29092 =~ .*SASL_.*://.* ]]
kafka_1      |
kafka_1      | if [[ -n "${KAFKA_JMX_OPTS-}" ]]
kafka_1      | then
kafka_1      |   if [[ ! $KAFKA_JMX_OPTS == *"com.sun.management.jmxremote.rmi.port"*  ]]
kafka_1      |   then
kafka_1      |     echo "KAFKA_OPTS should contain 'com.sun.management.jmxremote.rmi.port' property. It is required for accessing the JMX metrics externally."
kafka_1      |   fi
kafka_1      | fi
kafka_1      | + [[ -n '' ]]
kafka_1      |
kafka_1      | dub template "/etc/confluent/docker/${COMPONENT}.properties.template" "/etc/${COMPONENT}/${COMPONENT}.properties"
kafka_1      | + dub template /etc/confluent/docker/kafka.properties.template /etc/kafka/kafka.properties
kafka_1      | dub template "/etc/confluent/docker/log4j.properties.template" "/etc/${COMPONENT}/log4j.properties"
kafka_1      | + dub template /etc/confluent/docker/log4j.properties.template /etc/kafka/log4j.properties
kafka_1      | dub template "/etc/confluent/docker/tools-log4j.properties.template" "/etc/${COMPONENT}/tools-log4j.properties"
kafka_1      | + dub template /etc/confluent/docker/tools-log4j.properties.template /etc/kafka/tools-log4j.properties
kafka_1      |
kafka_1      | echo "===> Running preflight checks ... "
kafka_1      | + echo '===> Running preflight checks ... '
kafka_1      | /etc/confluent/docker/ensure
kafka_1      | + /etc/confluent/docker/ensure
kafka_1      | ===> Running preflight checks ...
kafka_1      |
kafka_1      | export KAFKA_DATA_DIRS=${KAFKA_DATA_DIRS:-"/var/lib/kafka/data"}
kafka_1      | + export KAFKA_DATA_DIRS=/var/lib/kafka/data
kafka_1      | + KAFKA_DATA_DIRS=/var/lib/kafka/data
kafka_1      | echo "===> Check if $KAFKA_DATA_DIRS is writable ..."
kafka_1      | + echo '===> Check if /var/lib/kafka/data is writable ...'
kafka_1      | dub path "$KAFKA_DATA_DIRS" writable
kafka_1      | ===> Check if /var/lib/kafka/data is writable ...
kafka_1      | + dub path /var/lib/kafka/data writable
kafka_1      |
kafka_1      | echo "===> Check if Zookeeper is healthy ..."
kafka_1      | + echo '===> Check if Zookeeper is healthy ...'
kafka_1      | ===> Check if Zookeeper is healthy ...
kafka_1      | cub zk-ready "$KAFKA_ZOOKEEPER_CONNECT" "${KAFKA_CUB_ZK_TIMEOUT:-40}"
kafka_1      | + cub zk-ready localhost:32181 40
kafka_1      | Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
kafka_1      | Client environment:host.name=localhost
kafka_1      | Client environment:java.version=1.8.0_102
kafka_1      | Client environment:java.vendor=Azul Systems, Inc.
kafka_1      | Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka_1      | Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka_1      | Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1      | Client environment:java.io.tmpdir=/tmp
kafka_1      | Client environment:java.compiler=<NA>
kafka_1      | Client environment:os.name=Linux
kafka_1      | Client environment:os.arch=amd64
kafka_1      | Client environment:os.version=4.9.13-moby
kafka_1      | Client environment:user.name=root
kafka_1      | Client environment:user.home=/root
kafka_1      | Client environment:user.dir=/
kafka_1      | Initiating client connection, connectString=localhost:32181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@14514713
kafka_1      | Opening socket connection to server localhost/0:0:0:0:0:0:0:1:32181. Will not attempt to authenticate using SASL (unknown error)
kafka_1      | Socket connection established to localhost/0:0:0:0:0:0:0:1:32181, initiating session
zookeeper_1  | [2017-05-10 10:41:59,894] INFO Accepted socket connection from /0:0:0:0:0:0:0:1:34640 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1  | [2017-05-10 10:41:59,930] INFO Client attempting to establish new session at /0:0:0:0:0:0:0:1:34640 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:41:59,932] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
zookeeper_1  | [2017-05-10 10:41:59,952] INFO Established session 0x15bf1f382780000 with negotiated timeout 40000 for client /0:0:0:0:0:0:0:1:34640 (org.apache.zookeeper.server.ZooKeeperServer)
kafka_1      | Session establishment complete on server localhost/0:0:0:0:0:0:0:1:32181, sessionid = 0x15bf1f382780000, negotiated timeout = 40000
zookeeper_1  | [2017-05-10 10:41:59,959] INFO Processed session termination for sessionid: 0x15bf1f382780000 (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1  | [2017-05-10 10:41:59,965] INFO Closed socket connection for client /0:0:0:0:0:0:0:1:34640 which had sessionid 0x15bf1f382780000 (org.apache.zookeeper.server.NIOServerCnxn)
kafka_1      | Session: 0x15bf1f382780000 closed
kafka_1      |
kafka_1      | echo "===> Launching ... "
kafka_1      | + echo '===> Launching ... '
kafka_1      | exec /etc/confluent/docker/launch
kafka_1      | + exec /etc/confluent/docker/launch
kafka_1      | ===> Launching ...
kafka_1      | ===> Launching kafka ...
kafka_1      | [2017-05-10 10:42:00,354] INFO KafkaConfig values:
kafka_1      | advertised.host.name = null
kafka_1      | advertised.listeners = PLAINTEXT://localhost:29092
kafka_1      | advertised.port = null
kafka_1      | authorizer.class.name =
kafka_1      | auto.create.topics.enable = true
kafka_1      | auto.leader.rebalance.enable = true
kafka_1      | background.threads = 10
kafka_1      | broker.id = 1
kafka_1      | broker.id.generation.enable = true
kafka_1      | broker.rack = null
kafka_1      | compression.type = producer
kafka_1      | connections.max.idle.ms = 600000
kafka_1      | controlled.shutdown.enable = true
kafka_1      | controlled.shutdown.max.retries = 3
kafka_1      | controlled.shutdown.retry.backoff.ms = 5000
kafka_1      | controller.socket.timeout.ms = 30000
kafka_1      | create.topic.policy.class.name = null
kafka_1      | default.replication.factor = 1
kafka_1      | delete.topic.enable = false
kafka_1      | fetch.purgatory.purge.interval.requests = 1000
kafka_1      | group.max.session.timeout.ms = 300000
kafka_1      | group.min.session.timeout.ms = 6000
kafka_1      | host.name =
kafka_1      | inter.broker.listener.name = null
kafka_1      | inter.broker.protocol.version = 0.10.2-IV0
kafka_1      | leader.imbalance.check.interval.seconds = 300
kafka_1      | leader.imbalance.per.broker.percentage = 10
kafka_1      | listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
kafka_1      | listeners = PLAINTEXT://0.0.0.0:29092
kafka_1      | log.cleaner.backoff.ms = 15000
kafka_1      | log.cleaner.dedupe.buffer.size = 134217728
kafka_1      | log.cleaner.delete.retention.ms = 86400000
kafka_1      | log.cleaner.enable = true
kafka_1      | log.cleaner.io.buffer.load.factor = 0.9
kafka_1      | log.cleaner.io.buffer.size = 524288
kafka_1      | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1      | log.cleaner.min.cleanable.ratio = 0.5
kafka_1      | log.cleaner.min.compaction.lag.ms = 0
kafka_1      | log.cleaner.threads = 1
kafka_1      | log.cleanup.policy = [delete]
kafka_1      | log.dir = /tmp/kafka-logs
kafka_1      | log.dirs = /var/lib/kafka/data
kafka_1      | log.flush.interval.messages = 9223372036854775807
kafka_1      | log.flush.interval.ms = null
kafka_1      | log.flush.offset.checkpoint.interval.ms = 60000
kafka_1      | log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1      | log.index.interval.bytes = 4096
kafka_1      | log.index.size.max.bytes = 10485760
kafka_1      | log.message.format.version = 0.10.2-IV0
kafka_1      | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1      | log.message.timestamp.type = CreateTime
kafka_1      | log.preallocate = false
kafka_1      | log.retention.bytes = -1
kafka_1      | log.retention.check.interval.ms = 300000
kafka_1      | log.retention.hours = 168
kafka_1      | log.retention.minutes = null
kafka_1      | log.retention.ms = null
kafka_1      | log.roll.hours = 168
kafka_1      | log.roll.jitter.hours = 0
kafka_1      | log.roll.jitter.ms = null
kafka_1      | log.roll.ms = null
kafka_1      | log.segment.bytes = 1073741824
kafka_1      | log.segment.delete.delay.ms = 60000
kafka_1      | max.connections.per.ip = 2147483647
kafka_1      | max.connections.per.ip.overrides =
kafka_1      | message.max.bytes = 1000012
kafka_1      | metric.reporters = []
kafka_1      | metrics.num.samples = 2
kafka_1      | metrics.recording.level = INFO
kafka_1      | metrics.sample.window.ms = 30000
kafka_1      | min.insync.replicas = 1
kafka_1      | num.io.threads = 8
kafka_1      | num.network.threads = 3
kafka_1      | num.partitions = 1
kafka_1      | num.recovery.threads.per.data.dir = 1
kafka_1      | num.replica.fetchers = 1
kafka_1      | offset.metadata.max.bytes = 4096
kafka_1      | offsets.commit.required.acks = -1
kafka_1      | offsets.commit.timeout.ms = 5000
kafka_1      | offsets.load.buffer.size = 5242880
kafka_1      | offsets.retention.check.interval.ms = 600000
kafka_1      | offsets.retention.minutes = 1440
kafka_1      | offsets.topic.compression.codec = 0
kafka_1      | offsets.topic.num.partitions = 50
kafka_1      | offsets.topic.replication.factor = 3
kafka_1      | offsets.topic.segment.bytes = 104857600
kafka_1      | port = 9092
kafka_1      | principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
kafka_1      | producer.purgatory.purge.interval.requests = 1000
kafka_1      | queued.max.requests = 500
kafka_1      | quota.consumer.default = 9223372036854775807
kafka_1      | quota.producer.default = 9223372036854775807
kafka_1      | quota.window.num = 11
kafka_1      | quota.window.size.seconds = 1
kafka_1      | replica.fetch.backoff.ms = 1000
kafka_1      | replica.fetch.max.bytes = 1048576
kafka_1      | replica.fetch.min.bytes = 1
kafka_1      | replica.fetch.response.max.bytes = 10485760
kafka_1      | replica.fetch.wait.max.ms = 500
kafka_1      | replica.lag.time.max.ms = 10000
kafka_1      | replica.socket.receive.buffer.bytes = 65536
kafka_1      | replica.socket.timeout.ms = 30000
kafka_1      | replication.quota.window.num = 11
kafka_1      | replication.quota.window.size.seconds = 1
kafka_1      | request.timeout.ms = 30000
kafka_1      | reserved.broker.max.id = 1000
kafka_1      | sasl.enabled.mechanisms = [GSSAPI]
kafka_1      | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1      | sasl.kerberos.min.time.before.relogin = 60000
kafka_1      | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1      | sasl.kerberos.service.name = null
kafka_1      | sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1      | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1      | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1      | security.inter.broker.protocol = PLAINTEXT
kafka_1      | socket.receive.buffer.bytes = 102400
kafka_1      | socket.request.max.bytes = 104857600
kafka_1      | socket.send.buffer.bytes = 102400
kafka_1      | ssl.cipher.suites = null
kafka_1      | ssl.client.auth = none
kafka_1      | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_1      | ssl.endpoint.identification.algorithm = null
kafka_1      | ssl.key.password = null
kafka_1      | ssl.keymanager.algorithm = SunX509
kafka_1      | ssl.keystore.location = null
kafka_1      | ssl.keystore.password = null
kafka_1      | ssl.keystore.type = JKS
kafka_1      | ssl.protocol = TLS
kafka_1      | ssl.provider = null
kafka_1      | ssl.secure.random.implementation = null
kafka_1      | ssl.trustmanager.algorithm = PKIX
kafka_1      | ssl.truststore.location = null
kafka_1      | ssl.truststore.password = null
kafka_1      | ssl.truststore.type = JKS
kafka_1      | unclean.leader.election.enable = true
kafka_1      | zookeeper.connect = localhost:32181
kafka_1      | zookeeper.connection.timeout.ms = null
kafka_1      | zookeeper.session.timeout.ms = 6000
kafka_1      | zookeeper.set.acl = false
kafka_1      | zookeeper.sync.time.ms = 2000
kafka_1      |  (kafka.server.KafkaConfig)
kafka_1      | [2017-05-10 10:42:00,383] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
kafka_1      | [2017-05-10 10:42:00,384] INFO starting (kafka.server.KafkaServer)
kafka_1      | [2017-05-10 10:42:00,385] INFO Connecting to zookeeper on localhost:32181 (kafka.server.KafkaServer)
kafka_1      | [2017-05-10 10:42:00,393] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka_1      | [2017-05-10 10:42:00,396] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,396] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,396] INFO Client environment:java.version=1.8.0_102 (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,397] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,397] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,397] INFO Client environment:java.class.path=:/usr/bin/../share/java/kafka/commons-validator-1.4.1.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.21.jar:/usr/bin/../share/java/kafka/connect-json-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/lz4-1.3.0.jar:/usr/bin/../share/java/kafka/guava-18.0.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.24.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.24.jar:/usr/bin/../share/java/kafka/reflections-0.9.10.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/httpmime-4.5.2.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.2.6.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.8.5.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.3.jar:/usr/bin/../share/java/kafka/xz-1.0.jar:/usr/bin/../share/java/kafka/jersey-common-2.24.jar:/usr/bin/../share/java/kafka/jetty-server-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.2.1-cp1-javadoc.jar:/usr/bin/../share/java/kafka/commons-codec-1.9.jar:/usr/bin/../share/java/kafka/rocksdbjni-5.0.1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.8.5.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.5.0-b05.jar:/usr/bin/../share/java/kafka/jersey-client-2.24.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.2.1-cp1-sources.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/kafka/jetty-util-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/javax.inject-2.5.0-b05.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.8.3.jar:/usr/bin/../share/
java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.8.0.jar:/usr/bin/../share/java/kafka/httpcore-4.4.4.jar:/usr/bin/../share/java/kafka/commons-digester-1.8.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-clients-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/kafka/jersey-guava-2.24.jar:/usr/bin/../share/java/kafka/support-metrics-common-3.2.1.jar:/usr/bin/../share/java/kafka/avro-1.7.7.jar:/usr/bin/../share/java/kafka/connect-file-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.8.5.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.9.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.24.jar:/usr/bin/../share/java/kafka/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/kafka-tools-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/commons-compress-1.4.1.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.1.jar:/usr/bin/../share/java/kafka/connect-api-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/httpclient-4.5.2.jar:/usr/bin/../share/java/kafka/hk2-utils-2.5.0-b05.jar:/usr/bin/../share/java/kafka/hk2-locator-2.5.0-b05.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.2.1-cp1-scaladoc.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jersey-server-2.24.jar:/usr/bin/../share/java/kafka/connect-runtime-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.8.5.jar:/usr/bin/../share/java/kafka/support-metrics-client-3.2.1.jar:/usr/bin/../share/java/kafka/javax.inject-1.jar:/usr/bin/../share/java/kafka/hk2-api-2.5.0-b05.jar:/usr/bin/../share/java/kafka/scala-parser-combinators_2.11-1.0.4.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.2.1-cp1-test-sources.jar:/usr/bin/../shar
e/java/kafka/commons-lang3-3.1.jar:/usr/bin/../share/java/kafka/jetty-security-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.8.5.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.0.1.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.2.1-cp1-test.jar:/usr/bin/../share/java/kafka/kafka-streams-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.21.jar:/usr/bin/../share/java/kafka/jetty-http-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/paranamer-2.3.jar:/usr/bin/../share/java/kafka/scala-library-2.11.8.jar:/usr/bin/../share/java/kafka/jetty-io-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/javassist-3.20.0-GA.jar:/usr/bin/../share/java/kafka/zkclient-0.10.jar:/usr/bin/../share/java/kafka/connect-transforms-0.10.2.1-cp1.jar:/usr/bin/../share/java/kafka/jackson-core-2.8.5.jar:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,397] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,397] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,397] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,397] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,398] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,398] INFO Client environment:os.version=4.9.13-moby (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,398] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,398] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,398] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,399] INFO Initiating client connection, connectString=localhost:32181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@635eaaf1 (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-05-10 10:42:00,411] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:32181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1      | [2017-05-10 10:42:00,412] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
zookeeper_1  | [2017-05-10 10:42:00,450] INFO Accepted socket connection from /0:0:0:0:0:0:0:1:34642 (org.apache.zookeeper.server.NIOServerCnxnFactory)
kafka_1      | [2017-05-10 10:42:00,450] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:32181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1  | [2017-05-10 10:42:00,455] INFO Client attempting to establish new session at /0:0:0:0:0:0:0:1:34642 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1  | [2017-05-10 10:42:00,461] INFO Established session 0x15bf1f382780001 with negotiated timeout 6000 for client /0:0:0:0:0:0:0:1:34642 (org.apache.zookeeper.server.ZooKeeperServer)
kafka_1      | [2017-05-10 10:42:00,463] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:32181, sessionid = 0x15bf1f382780001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1      | [2017-05-10 10:42:00,464] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
zookeeper_1  | [2017-05-10 10:42:00,494] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:create cxid:0x5 zxid:0x5 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1  | [2017-05-10 10:42:00,525] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:create cxid:0xb zxid:0x9 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1  | [2017-05-10 10:42:00,552] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:create cxid:0x13 zxid:0xe txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1  | [2017-05-10 10:42:00,609] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:create cxid:0x1b zxid:0x13 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster (org.apache.zookeeper.server.PrepRequestProcessor)
kafka_1      | [2017-05-10 10:42:00,628] INFO Cluster ID = PrMjnXLeQ9mz0FrNSYVRyQ (kafka.server.KafkaServer)
kafka_1      | [2017-05-10 10:42:00,633] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1      | [2017-05-10 10:42:00,644] INFO [ThrottledRequestReaper-Fetch], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1      | [2017-05-10 10:42:00,646] INFO [ThrottledRequestReaper-Produce], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1      | [2017-05-10 10:42:00,672] INFO Loading logs. (kafka.log.LogManager)
kafka_1      | [2017-05-10 10:42:00,676] INFO Logs loading complete in 4 ms. (kafka.log.LogManager)
kafka_1      | [2017-05-10 10:42:00,721] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1      | [2017-05-10 10:42:00,737] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1      | [2017-05-10 10:42:00,739] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka_1      | [2017-05-10 10:42:00,741] INFO [kafka-log-cleaner-thread-0], Starting  (kafka.log.LogCleaner)
kafka_1      | [2017-05-10 10:42:00,763] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.Acceptor)
kafka_1      | [2017-05-10 10:42:00,766] INFO [Socket Server on Broker 1], Started 1 acceptor threads (kafka.network.SocketServer)
kafka_1      | [2017-05-10 10:42:00,776] INFO [ExpirationReaper-1], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-05-10 10:42:00,776] INFO [ExpirationReaper-1], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-05-10 10:42:00,791] INFO [Controller 1]: Controller starting up (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,797] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka_1      | [2017-05-10 10:42:00,808] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1      | [2017-05-10 10:42:00,809] INFO 1 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
kafka_1      | [2017-05-10 10:42:00,810] INFO [Controller 1]: Broker 1 starting become controller state transition (kafka.controller.KafkaController)
zookeeper_1  | [2017-05-10 10:42:00,812] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:setData cxid:0x25 zxid:0x17 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch (org.apache.zookeeper.server.PrepRequestProcessor)
kafka_1      | [2017-05-10 10:42:00,828] INFO [Controller 1]: Controller 1 incremented epoch to 1 (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,830] DEBUG [Controller 1]: Registering IsrChangeNotificationListener (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,841] INFO [Controller 1]: Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,842] INFO [Controller 1]: Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,842] INFO [Controller 1]: Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,845] INFO [Controller 1]: Partitions being reassigned: Map() (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,845] INFO [Controller 1]: Partitions already reassigned: Set() (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,847] INFO [Controller 1]: Resuming reassignment of partitions: Map() (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,850] INFO [Controller 1]: List of topics to be deleted:  (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,850] INFO [Controller 1]: List of topics ineligible for deletion:  (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,852] INFO [Controller 1]: Currently active brokers in the cluster: Set() (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,853] INFO [Controller 1]: Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,853] INFO [Controller 1]: Current list of topics in the cluster: Set() (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,865] INFO [Replica state machine on controller 1]: Started replica state machine with initial state -> Map() (kafka.controller.ReplicaStateMachine)
kafka_1      | [2017-05-10 10:42:00,867] INFO [Partition state machine on Controller 1]: Started partition state machine with initial state -> Map() (kafka.controller.PartitionStateMachine)
kafka_1      | [2017-05-10 10:42:00,867] INFO [Controller 1]: Broker 1 is ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,868] INFO [Controller 1]: Starting preferred replica leader election for partitions  (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,871] INFO [Partition state machine on Controller 1]: Invoking state change to OnlinePartition for partitions  (kafka.controller.PartitionStateMachine)
zookeeper_1  | [2017-05-10 10:42:00,874] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:delete cxid:0x36 zxid:0x19 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor)
kafka_1      | [2017-05-10 10:42:00,880] INFO [Controller 1]: starting the partition rebalance scheduler (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,881] INFO [Controller 1]: Controller startup complete (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:00,885] INFO [ExpirationReaper-1], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-05-10 10:42:00,887] INFO [ExpirationReaper-1], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-05-10 10:42:00,889] INFO [ExpirationReaper-1], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-05-10 10:42:00,899] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.GroupCoordinator)
kafka_1      | [2017-05-10 10:42:00,900] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.GroupCoordinator)
kafka_1      | [2017-05-10 10:42:00,901] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
kafka_1      | [2017-05-10 10:42:00,924] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
kafka_1      | [2017-05-10 10:42:00,952] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
zookeeper_1  | [2017-05-10 10:42:00,957] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:create cxid:0x41 zxid:0x1a txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1  | [2017-05-10 10:42:00,957] INFO Got user-level KeeperException when processing sessionid:0x15bf1f382780001 type:create cxid:0x42 zxid:0x1b txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
kafka_1      | [2017-05-10 10:42:00,968] INFO New leader is 1 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
kafka_1      | [2017-05-10 10:42:00,969] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1      | [2017-05-10 10:42:00,971] INFO Registered broker 1 at path /brokers/ids/1 with addresses: EndPoint(localhost,29092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka_1      | [2017-05-10 10:42:00,971] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1      | [2017-05-10 10:42:00,974] INFO [BrokerChangeListener on Controller 1]: Broker change listener fired for path /brokers/ids with children 1 (kafka.controller.ReplicaStateMachine$BrokerChangeListener)
kafka_1      | [2017-05-10 10:42:01,003] INFO Kafka version : 0.10.2.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1      | [2017-05-10 10:42:01,003] INFO Kafka commitId : 4332ed2a2fbf1bcd (org.apache.kafka.common.utils.AppInfoParser)
kafka_1      | [2017-05-10 10:42:01,005] INFO [Kafka Server 1], started (kafka.server.KafkaServer)
kafka_1      | [2017-05-10 10:42:01,041] INFO [BrokerChangeListener on Controller 1]: Newly added brokers: 1, deleted brokers: , all live brokers: 1 (kafka.controller.ReplicaStateMachine$BrokerChangeListener)
kafka_1      | [2017-05-10 10:42:01,043] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka_1      | [2017-05-10 10:42:01,052] INFO [Controller-1-to-broker-1-send-thread], Starting  (kafka.controller.RequestSendThread)
kafka_1      | [2017-05-10 10:42:01,054] INFO [Controller 1]: New broker startup callback for 1 (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:01,065] INFO [Controller-1-to-broker-1-send-thread], Controller 1 connected to localhost:29092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka_1      | [2017-05-10 10:42:01,096] TRACE Controller 1 epoch 1 received response {error_code=0} for a request sent to broker localhost:29092 (id: 1 rack: null) (state.change.logger)
kafka_1      | [2017-05-10 10:42:05,883] TRACE [Controller 1]: checking need to trigger partition rebalance (kafka.controller.KafkaController)
kafka_1      | [2017-05-10 10:42:05,885] DEBUG [Controller 1]: preferred replicas by broker Map() (kafka.controller.KafkaController)

On Tuesday, May 9, 2017 at 03:52:17 UTC+3, dan norwood wrote:

--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To unsubscribe from this group and stop receiving emails from it, send an email to confluent-platform+unsub...@googlegroups.com.
To post to this group, send email to confluent...@googlegroups.com.

İnanç Gümüş

unread,
May 10, 2017, 6:47:48 AM
to Confluent Platform
Sorry, resending the error log output: https://pastebin.com/kfebx1NV


On Wednesday, May 10, 2017 at 13:45:45 UTC+3, İnanç Gümüş wrote:

dan

unread,
May 11, 2017, 3:59:09 AM
to confluent...@googlegroups.com
I don't see any errors that should stop Kafka from working. Are you able to connect to the broker?


İnanç Gümüş

unread,
May 11, 2017, 6:19:46 AM
to confluent...@googlegroups.com
No, I can't connect, due to network_mode: host, I believe. I wrote more about this here: https://groups.google.com/forum/#!topic/confluent-platform/E95NYUqhdgY

I also posted a new issue about this on GitHub: https://github.com/confluentinc/cp-docker-images/issues/265

I believe that without "network_mode: host" and "moby" in extra_hosts, it can work if I set KAFKA_ADVERTISED_LISTENERS to 0.0.0.0 instead of binding to localhost in the container.
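Something like this is what I have in mind — an untested sketch of the kafka-single-node compose file with network_mode: host removed and the ports published instead (image names and the 32181/29092 ports follow this thread; the exact environment variables are my assumption):

```yaml
# Hypothetical docker-compose.yml sketch (untested).
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    ports:
      - "32181:32181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      # Reach ZooKeeper by service name on the bridge network,
      # not via localhost.
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      # Listen on all interfaces inside the container, but advertise
      # an address that clients on the Mac host can resolve.
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092
```

With the port published, "nc localhost 29092" from the host should then connect, if my assumption is right.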

$ docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kafkasinglenode_kafka_1
172.18.0.3
$ nc 172.18.0.3 29092
# hangs here indefinitely, then times out


