schema-registry won't start


Andrew Otto

Apr 1, 2015, 3:15:24 PM
to confluent...@googlegroups.com
Hi all,

I’m surely doing something wrong here, but I can’t seem to get the schema-registry to start with our non-Confluent packaged Kafka.  I’m using a 3 node Kafka cluster running Kafka 0.8.1.1, installed via our custom .deb packaging[1].  This Kafka cluster uses a chrooted zookeeper path.

My schema-registry.properties file has this:

  port=8081
  kafkastore.connection.url=localhost:2181/kafka/analytics-kafka
  kafkastore.topic=_schemas
  debug=false

ZooKeeper is running on localhost:2181 as well as on 2 other nodes, and the Kafka cluster is configured to use all 3 ZooKeepers.  I have also tried including all the ZooKeepers in kafkastore.connection.url, with the same effect.
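A sketch of the multi-host form of that setting (hostnames here are hypothetical; in a ZooKeeper connect string the chroot path appears once, after the final host:port pair, not after each host):

```properties
# Hypothetical hostnames; the chroot suffix goes once, at the end of the host list.
kafkastore.connection.url=zk1.example.org:2181,zk2.example.org:2181,zk3.example.org:2181/kafka/analytics-kafka
```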

When I run schema-registry-start, I can see the schema registry create the _schemas topic, as well as produce a null (empty?) message to it.  It then says:

[2015-04-01 19:11:32,228] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-04-01 19:11:37,138] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-04-01 19:11:37,389] INFO Wait to catch up until the offset of the last message at 2 (io.confluent.kafka.schemaregistry.storage.KafkaStore:221)


It then waits for 60 seconds, and then prints out:

[2015-04-01 19:12:37,391] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:104)
at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 2, offsetReached: 1, timeout(ms): 60000
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 2, offsetReached: 1, timeout(ms): 60000
at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.waitUntilOffset(KafkaStoreReaderThread.java:229)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:222)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
... 5 more 
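As a sanity check, the target offset the store is waiting on (2 in the trace above) can be compared against the topic's actual log-end offset with Kafka's GetOffsetShell tool. A sketch, assuming a broker on localhost:9092 and the Kafka scripts on the PATH:

```shell
# Sketch: prints the latest offset per partition of the _schemas topic.
# --time -1 means "latest offset"; broker address is an assumption for this setup.
kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 \
  --topic _schemas \
  --time -1
```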


The Kafka broker that the schema-registry connected to has this in the logs:

[2015-04-01 19:12:37,719] 925129 [kafka-processor-9092-0] ERROR kafka.network.Processor  - Closing socket for /10.68.16.118 because of error
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at kafka.utils.Utils$.read(Utils.scala:375)
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
at kafka.network.Processor.read(SocketServer.scala:347)
at kafka.network.Processor.run(SocketServer.scala:245)
at java.lang.Thread.run(Thread.java:745)



Does anyone have any obvious tips for me?  Will the schema-registry work with Kafka 0.8.1.1?  The kafka-rest-proxy works just fine with this setup.

Thanks!
-Ao


Geoffrey Anderson

Apr 1, 2015, 3:48:29 PM
to confluent...@googlegroups.com
Hi Andrew,

Sorry about the inconvenience, but the short answer is that your Kafka brokers need to be running 0.8.2 or later.

See this discussion for a bit more detail
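One quick way to confirm the broker version on a Debian-style install is to look at the Kafka jar filename, which embeds the Scala and Kafka versions. A sketch; the path is an assumption for this packaging:

```shell
# Sketch: the jar name encodes the version, e.g. kafka_2.10-0.8.1.1.jar.
# /usr/share/java/kafka is an assumed install location; adjust for your .deb layout.
ls /usr/share/java/kafka/kafka_*.jar
```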

Cheers,
Geoff


Andrew Otto

Apr 1, 2015, 4:09:23 PM
to confluent...@googlegroups.com
Aww man, ok.  Thanks!  Good to know.

:)


Kotesh Banoth

May 10, 2016, 9:18:37 AM
to Confluent Platform, ramesh m, koteshbanoth
Hi, I am also facing the same issue, but my Kafka version is 0.9.0.1-cp1.

I am losing the connection some time after starting schema-registry-start:


 ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-05-10 19:48:05,313] INFO SchemaRegistryConfig values:
    master.eligibility = true
    port = 8081
    kafkastore.timeout.ms = 500
    kafkastore.init.timeout.ms = 60000
    debug = false
    kafkastore.zk.session.timeout.ms = 30000
    schema.registry.zk.namespace = schema_registry
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    kafkastore.topic = _schemas
    avro.compatibility.level = backward
    shutdown.graceful.ms = 1000
    access.control.allow.origin =
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    metrics.jmx.prefix = kafka.schema.registry
    host.name = 198.105.244.11
    metric.reporters = []
    kafkastore.commit.interval.ms = -1
    kafkastore.connection.url = localhost:2181
    metrics.num.samples = 2
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2016-05-10 19:48:06,673] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:86)
[2016-05-10 19:48:08,072] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:205)
[2016-05-10 19:48:08,410] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
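The replication-factor warning in the log above matches the `kafkastore.topic.replication.factor = 3` line in the config dump. On a single-broker development setup it can be silenced by lowering the expected factor to match the cluster (a sketch; a factor of 1 is only sensible outside production):

```properties
# Dev-only sketch: match the registry's expectation to a single-broker cluster.
kafkastore.topic.replication.factor=1
```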




Michael Dolgonos

Sep 23, 2016, 11:49:52 AM
to Confluent Platform
I'm having the same issue on a single node with Confluent 3.0.

The only difference I have is that I renamed the topic from "_schemas" to "_schemas1" in the schema-registry.properties file. I created this new topic and can see it listed in kafka-topics --list --zookeeper localhost:2181. My Kafka server as well as ZooKeeper are running, and I can exchange messages in other topics, except _schemas1. When I try to send a regular message to it, I see the following error in the server console:

Topic and partition to exceptions: _schemas1-0 -> org.apache.kafka.common.errors.CorruptRecordException (kafka.server.KafkaApis)
[2016-09-23 09:47:31,595] INFO [KafkaApi-0] Closing connection due to error during produce request with correlation id 5 from client id console-producer with ack=0

Any advice would be greatly appreciated.

Michael D.