Issue following QuickStart


Claudio Gonzalez

Apr 15, 2015, 1:55:38 PM
to confluent...@googlegroups.com
I'm sure it is something obvious and I'm just not seeing it.  Any assistance would be appreciated.

java -version

java version "1.7.0_60"
Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
Java HotSpot(TM) Client VM (build 24.60-b09, mixed mode)

On starting Zookeeper:

root@C2C-Node0:~/confluent-1.0# ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties
[2015-04-14 18:23:46,274] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)

 

On starting Kafka:

[2015-04-14 18:25:46,221] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

 

On starting Schema Registry:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/confluent-1.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/confluent-1.0/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2015-04-14 18:29:46,788] INFO SchemaRegistryConfig values:
master.eligibility = true
port = 8081
kafkastore.timeout.ms = 500
kafkastore.init.timeout.ms = 60000
debug = false
kafkastore.zk.session.timeout.ms = 30000
request.logger.name = io.confluent.rest-utils.requests
metrics.sample.window.ms = 30000
schema.registry.zk.namespace = schema_registry
kafkastore.topic = _schemas
avro.compatibility.level = backward
shutdown.graceful.ms = 1000
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
metrics.jmx.prefix = kafka.schema.registry
host.name = C2C-Node0
metric.reporters = []
kafkastore.commit.interval.ms = -1
kafkastore.connection.url = localhost:2181
metrics.num.samples = 2
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.topic.replication.factor = 3
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2015-04-14 18:29:48,517] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-04-14 18:29:50,519] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2015-04-14 18:29:50,749] INFO [kafka-store-reader-thread-_schemas], Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-04-14 18:30:51,089] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:104)
at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:363)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:220)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
... 5 more

Ewen Cheslack-Postava

unread,
Apr 15, 2015, 2:33:33 PM4/15/15
to confluent...@googlegroups.com
Let's start with Kafka before getting to the schema registry, since it looks like Kafka can't even connect to Zookeeper. If it can't connect, it won't be able to properly start up and handle requests from the schema registry (which stores its data in Kafka).

I'm assuming you're working with a stock installation of the Confluent Platform, and it looks like it's from one of the zip/tgz downloads? Since it's saying the connection is refused (rather than, e.g., timing out), I'm wondering if something is wrong with how it's resolving the hostname. Where are you running this (e.g. on your local box, in a VM, on a cloud VM)? And have you changed any of the configs?

I'd start by looking at two possible issues. First, check what the hostnames used in the configs are resolving to. If you're using the defaults, they should just be localhost and should work, although I seem to recall having issues one time where Ubuntu was overriding localhost to a non-standard value. Second, try using netstat to make sure the services are listening on the interfaces and ports expected. I think you can sometimes end up listening only via IPv6.
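If it helps, here's roughly what those checks could look like on a typical Linux box, assuming the default ports (2181 for Zookeeper, 9092 for Kafka, 8081 for the schema registry):

# What does localhost actually resolve to? (should be 127.0.0.1)
getent hosts localhost

# Which addresses/ports are the services actually listening on?
netstat -tlnp | grep -E '2181|9092|8081'

# Zookeeper should answer "imok" if it's reachable on that address
echo ruok | nc localhost 2181

If netstat only shows those ports bound on IPv6 (e.g. :::2181), that would line up with the IPv6-only listening issue I mentioned.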

-Ewen


Claudio Gonzalez

Apr 15, 2015, 6:44:20 PM
to confluent...@googlegroups.com
Hi Ewen, thanks for getting back to me.

I am using the stock install via zip download on a local box. I was trying to get everything to work out of the box without changing the config. I fiddled with the configs to replace localhost with my internal IP and seem to be getting somewhere, I think. Though now when I try to send Avro data I get no output. Please take a look below and point me in the right direction if you have a chance.
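In case it's useful, the changes I made were roughly along these lines (paths relative to the confluent-1.0 directory; I'm not certain every line is strictly required, e.g. advertised.host.name may not be needed on every setup):

# etc/kafka/server.properties
zookeeper.connect=192.168.6.124:2181
advertised.host.name=192.168.6.124

# etc/schema-registry/schema-registry.properties
kafkastore.connection.url=192.168.6.124:2181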

Thanks

Zookeeper:
root@C2C-Node0:~/confluent-1.0# ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties
[2015-04-15 16:47:09,171] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2015-04-15 17:32:38,687] WARN Connection request from old client /192.168.6.124:52994; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)
[2015-04-15 17:32:45,149] WARN Connection request from old client /192.168.6.124:52995; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)
[2015-04-15 17:32:57,109] WARN Connection request from old client /192.168.6.124:53001; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)
[2015-04-15 17:33:02,132] WARN Connection request from old client /192.168.6.124:53002; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)

Kafka:
root@C2C-Node0:~/confluent-1.0# ./bin/kafka-server-start ./etc/kafka/server.properties
[2015-04-15 17:32:46,862] WARN Partition [_schemas,0] on broker 0: No checkpointed highwatermark is found for partition [_schemas,0] (kafka.cluster.Partition)

Schema:
root@C2C-Node0:~/confluent-1.0# ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/confluent-1.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/confluent-1.0/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2015-04-15 17:32:32,571] INFO SchemaRegistryConfig values:
        master.eligibility = true
        port = 8081
        kafkastore.timeout.ms = 500
        kafkastore.init.timeout.ms = 60000
        debug = false
        request.logger.name = io.confluent.rest-utils.requests
        metrics.sample.window.ms = 30000
        schema.registry.zk.namespace = schema_registry
        kafkastore.topic = _schemas
        avro.compatibility.level = backward
        shutdown.graceful.ms = 1000
        response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
        metrics.jmx.prefix = kafka.schema.registry
        host.name = C2C-Node0
        metric.reporters = []
        kafkastore.connection.url = 192.168.6.124:2181
        metrics.num.samples = 2
        response.mediatype.default = application/vnd.schemaregistry.v1+json
        kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2015-04-15 17:32:39,301] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-04-15 17:32:46,318] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2015-04-15 17:32:51,546] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-04-15 17:32:51,973] INFO Wait to catch up until the offset of the last message at 0 (io.confluent.kafka.schemaregistry.storage.KafkaStore:221)
[2015-04-15 17:32:57,119] INFO Created schema registry namespace 192.168.6.124:2181/schema_registry (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:194)
[2015-04-15 17:33:02,164] INFO Successfully elected the new master: {"host":"C2C-Node0","port":8081,"master_eligibility":true,"version":1} (io.confluent.kafka.schemaregistry.zookeeper.ZookeeperMasterElector:80)
[2015-04-15 17:33:02,175] INFO Successfully elected the new master: {"host":"C2C-Node0","port":8081,"master_eligibility":true,"version":1} (io.confluent.kafka.schemaregistry.zookeeper.ZookeeperMasterElector:80)
[2015-04-15 17:33:02,424] INFO jetty-8.1.16.v20140903 (org.eclipse.jetty.server.Server:272)
Apr 15, 2015 5:33:03 PM org.glassfish.jersey.server.ApplicationHandler initialize
INFO: Initiating Jersey application, version Jersey: 2.6 2014-02-18 21:52:53...
[2015-04-15 17:33:04,633] INFO Started MetricsSelectChannelConnector@0.0.0.0:8081 (org.eclipse.jetty.server.AbstractConnector:338)
[2015-04-15 17:33:04,634] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.Main:44)


root@C2C-Node0:~/confluent-1.0# ./bin/kafka-avro-console-producer \
>              --broker-list 192.168.6.124:9092 --topic test \
>              --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'




Ewen Cheslack-Postava

Apr 15, 2015, 6:57:15 PM
to confluent...@googlegroups.com
It looks like everything is starting up ok now. kafka-avro-console-producer lets you enter messages manually on the command line, one per line. So to produce data you should enter something like this (which matches the schema specified on the command line):

{"f1": "value1"}

and hit enter. You shouldn't see any output from that. To verify that the data was produced, you'll want to use the kafka-avro-console-consumer program (the next step in the quickstart guide) to show the messages that have been produced to that topic. If you keep them both running at the same time, you can enter values into the producer and should see the same values show up in the consumer output.
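For example, in another terminal you can run something along these lines (flags as in the quickstart; swap in your internal IP since you've moved off localhost):

./bin/kafka-avro-console-consumer --topic test \
    --zookeeper 192.168.6.124:2181 \
    --from-beginning

With both running, each record you type into the producer (e.g. {"f1": "value1"}) should show up in the consumer's output shortly afterwards.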

-Ewen





Claudio Gonzalez

Apr 15, 2015, 7:19:56 PM
to confluent...@googlegroups.com
Looks like everything is working as intended now.  I'm sure I'll be back with more questions soon.

Thanks for your help!


