Confluent Control Center does not report broker metrics


Eugene Dvorkin

Apr 20, 2017, 2:50:34 PM
to Confluent Platform
I am running a single-broker Kafka cluster and checking out Confluent Control Center. I created the cluster using the official Docker containers (https://hub.docker.com/u/confluent/?page=1). Control Center starts, and I can see data in Data Streams.
Everything is version 3.2.
What's missing is System Health: there is no information about the broker or ZooKeeper.

What am I missing here?

My kafka.properties file has the following entries:

advertised.listeners=PLAINTEXT://localhost:9092
confluent.metrics.reporter.topic.replicas=1
zookeeper.connect=localhost:2181
log.dirs=/var/lib/kafka/data
listeners=PLAINTEXT://0.0.0.0:9092
confluent.metrics.reporter.bootstrap.servers=localhost:9092
confluent.metrics.reporter.zookeeper.connect=localhost:2181
metrics.reporter=io.confluent.metrics.reporter.ConfluentMetricsReporter

The Kafka server is started with:
java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/var/log/kafka/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/log/kafka -Dlog4j.configuration=file:/etc/kafka/log4j.properties -cp :/usr/bin/../share/java/kafka/*:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* io.confluent.support.metrics.SupportedKafka /etc/kafka/kafka.properties

I run everything in host mode.
I do not see any errors in the Control Center logs.




dan

Apr 20, 2017, 5:01:52 PM
to confluent...@googlegroups.com
can you send the logs from your kafka broker? specifically, the first couple hundred lines should have something about
INFO KafkaConfig values: 
and 
INFO ConfluentMetricsReporterConfig values

thanks
dan
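For anyone digging through a long broker startup log, a small helper to pull out just those two config dumps; the log path in the usage comment is only an example, not necessarily where your container writes:

```shell
# find_config_dumps: print the KafkaConfig and ConfluentMetricsReporterConfig
# dumps (plus a few following lines) from a broker startup log.
find_config_dumps() {
  grep -A 3 'KafkaConfig values' "$1"
  grep -A 3 'ConfluentMetricsReporterConfig values' "$1"
}

# Example (path is an assumption -- use wherever your broker logs go):
# find_config_dumps /var/log/kafka/server.log
```

If the second grep prints nothing, the metrics reporter never loaded, which is exactly the symptom being diagnosed in this thread.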
 

--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To unsubscribe from this group and stop receiving emails from it, send an email to confluent-platform+unsub...@googlegroups.com.
To post to this group, send email to confluent-platform@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/confluent-platform/17d1b9d0-a2dc-468f-80ae-611726ba2919%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Eugene Dvorkin

Apr 21, 2017, 11:03:50 AM
to Confluent Platform
Below is my log file. What's interesting is that I see this WARN message:
[2017-04-21 14:52:18,590] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)

The ConfluentMetricsReporterConfig values are not in the logs.

Everything else is here:

===> ENV Variables ...
APT_ALLOW_UNAUTHENTICATED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=3
CONFLUENT_MINOR_VERSION=2
CONFLUENT_PATCH_VERSION=0
CONFLUENT_VERSION=3.2.0
HOME=/root
HOSTNAME=localhost
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
KAFKA_BROKER_ID=1
KAFKA_CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS=localhost:9092
KAFKA_CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS=1
KAFKA_CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT=localhost:2181
KAFKA_METRICS_REPORTER=io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_VERSION=0.10.2.0
KAFKA_ZOOKEEPER_CONNECT=localhost:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.17.0.3
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
===> Check if Zookeeper is healthy ...
===> Launching ...
===> Launching kafka ...
[2017-04-21 14:52:18,541] INFO KafkaConfig values:
        advertised.host.name = null
        advertised.listeners = PLAINTEXT://localhost:9092
        advertised.port = null
        authorizer.class.name =
        auto.create.topics.enable = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = 1
        broker.id.generation.enable = true
        broker.rack = null
        compression.type = producer
        connections.max.idle.ms = 600000
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controller.socket.timeout.ms = 30000
        default.replication.factor = 1
        delete.topic.enable = false
        fetch.purgatory.purge.interval.requests = 1000
        group.max.session.timeout.ms = 300000
        group.min.session.timeout.ms = 6000
        host.name =
        inter.broker.listener.name = null
        inter.broker.protocol.version = 0.10.2-IV0
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
        listeners = PLAINTEXT://0.0.0.0:9092
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /tmp/kafka-logs
        log.dirs = /var/lib/kafka/data
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.format.version = 0.10.2-IV0
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        message.max.bytes = 1000012
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.minutes = 1440
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 3
        offsets.topic.segment.bytes = 104857600
        port = 9092
        principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
        producer.purgatory.purge.interval.requests = 1000
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.lag.time.max.ms = 10000
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.enabled.mechanisms = [GSSAPI]
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.mechanism.inter.broker.protocol = GSSAPI
        security.inter.broker.protocol = PLAINTEXT
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = null
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        unclean.leader.election.enable = true
        zookeeper.connect = localhost:2181
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2017-04-21 14:52:18,590] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
[2017-04-21 14:52:18,591] INFO starting (kafka.server.KafkaServer)
[2017-04-21 14:52:18,593] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)

dan

Apr 21, 2017, 1:57:10 PM
to confluent...@googlegroups.com
        metric.reporters = []

is the issue. it looks like your config has `metrics.reporter=io.confluent.metrics.reporter.ConfluentMetricsReporter`, when it should actually be `metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter`

dan
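In other words, with everything else in the original kafka.properties unchanged, the metrics-reporter lines would read as below (note the correct key is `metric.reporters`, plural, while the `confluent.metrics.reporter.*` keys keep the singular form; `topic.replicas=1` matches the single-broker setup described at the top of the thread):

```properties
# plural "reporters" -- this is the key the broker actually reads
metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=localhost:9092
confluent.metrics.reporter.zookeeper.connect=localhost:2181
# replication factor 1 because this is a one-broker cluster
confluent.metrics.reporter.topic.replicas=1
```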


Eugene Dvorkin

Apr 21, 2017, 2:19:09 PM
to Confluent Platform
Thanks, that was it.