StoreTimeoutException when starting schema registry


Sigurd Knippenberg

Mar 19, 2015, 10:22:02 AM
to confluent...@googlegroups.com
I'm trying to start the Schema Registry against a Kafka broker that was installed as part of HortonWorks HDP (it's Kafka 0.8.1.2.2), but I get the following exception:

[2015-03-19 13:39:48,858] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:104)
at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 7, offsetReached: 6, timeout(ms): 60000
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 7, offsetReached: 6, timeout(ms): 60000
at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.waitUntilOffset(KafkaStoreReaderThread.java:229)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:222)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
... 5 more

I turned on debug logging, and right before the error I see this:

[2015-03-19 13:38:48,449] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-03-19 13:38:48,474] INFO [ConsumerFetcherThread-schema-registry-vm-01-8081_vm-01-1426772327308-5bf32081-0-2], Starting  (kafka.consumer.ConsumerFetcherThread:68)
[2015-03-19 13:38:48,484] DEBUG Disconnecting from vm-04:6667 (kafka.consumer.SimpleConsumer:52)
[2015-03-19 13:38:48,493] DEBUG Created socket with SO_TIMEOUT = 30000 (requested 30000), SO_RCVBUF = 65536 (requested 65536), SO_SNDBUF = 43520 (requested -1), connectTimeoutMs = 30000. (kafka.network.BlockingChannel:52)
[2015-03-19 13:38:48,563] DEBUG reset fetch offset of ( _schemas:0: fetched offset = 0: consumed offset = -1 ) to 0 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,570] DEBUG reset consume offset of _schemas:0: fetched offset = 0: consumed offset = 0 to 0 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,606] INFO [ConsumerFetcherManager-1426772327334] Added fetcher for partitions ArrayBuffer([[_schemas,0], initOffset -1 to broker id:2,host:vm-04,port:6667] ) (kafka.consumer.ConsumerFetcherManager:68)
[2015-03-19 13:38:48,704] DEBUG reset consume offset of _schemas:0: fetched offset = 0: consumed offset = 1 to 1 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,713] DEBUG updated fetch offset of (_schemas:0: fetched offset = 6: consumed offset = 1) to 6 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,765] DEBUG Trying to send metadata request to node -1 (org.apache.kafka.clients.NetworkClient:387)
[2015-03-19 13:38:48,766] DEBUG Init connection to node -1 for sending metadata request in the next iteration (org.apache.kafka.clients.NetworkClient:397)
[2015-03-19 13:38:48,767] DEBUG Initiating connection to node -1 at vm-01:6667. (org.apache.kafka.clients.NetworkClient:415)
[2015-03-19 13:38:48,770] DEBUG Completed connection to node -1 (org.apache.kafka.clients.NetworkClient:348)
[2015-03-19 13:38:48,771] DEBUG Trying to send metadata request to node -1 (org.apache.kafka.clients.NetworkClient:387)
[2015-03-19 13:38:48,786] DEBUG Sending metadata request ClientRequest(expectResponse=true, payload=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[_schemas]})) to node -1 (org.apache.kafka.clients.NetworkClient:392)
[2015-03-19 13:38:48,803] DEBUG reset consume offset of _schemas:0: fetched offset = 6: consumed offset = 2 to 2 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,810] DEBUG Updated cluster metadata version 2 to Cluster(nodes = [Node(1, vm-03, 6667), Node(2, vm-04, 6667), Node(0, vm-01, 6667)], partitions = [Partition(topic = _schemas, partition = 0, leader = 2, replicas = [2,0,1,], isr = [2,]]) (org.apache.kafka.clients.producer.internals.Metadata:141)
[2015-03-19 13:38:48,812] DEBUG reset consume offset of _schemas:0: fetched offset = 6: consumed offset = 3 to 3 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,822] DEBUG Initiating connection to node 2 at vm-04:6667. (org.apache.kafka.clients.NetworkClient:415)
[2015-03-19 13:38:48,825] DEBUG Completed connection to node 2 (org.apache.kafka.clients.NetworkClient:348)
[2015-03-19 13:38:48,831] DEBUG reset consume offset of _schemas:0: fetched offset = 6: consumed offset = 4 to 4 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,836] DEBUG updated fetch offset of (_schemas:0: fetched offset = 7: consumed offset = 4) to 7 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,842] DEBUG reset consume offset of _schemas:0: fetched offset = 7: consumed offset = 5 to 5 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,856] INFO Wait to catch up until the offset of the last message at 7 (io.confluent.kafka.schemaregistry.storage.KafkaStore:221)
[2015-03-19 13:38:48,859] DEBUG reset consume offset of _schemas:0: fetched offset = 7: consumed offset = 6 to 6 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,867] DEBUG reset consume offset of _schemas:0: fetched offset = 7: consumed offset = 7 to 7 (kafka.consumer.PartitionTopicInfo:52)

Followed by a bunch of these before it times out:
[2015-03-19 13:38:50,207] DEBUG Got ping response for sessionid: 0x14c2defeec60010 after 1ms (org.apache.zookeeper.ClientCnxn:721)

Does the Schema Registry depend on a particular version of Kafka (and is that the issue?) or is something else wrong?

Thanks,

Sigurd

Geoffrey Anderson

Mar 19, 2015, 7:07:01 PM
to confluent...@googlegroups.com
Hi Sigurd,

Thanks for trying out the Confluent Platform!

I was able to reproduce your issue using 0.8.1.1 brokers. The short answer is that you need to be running Kafka 0.8.2 or higher. The Kafka provided as part of the Confluent Platform download suffices, and I'm guessing you already have it given that you're trying out the Schema Registry (http://confluent.io/downloads/). Alternatively, you can download 0.8.2.1 directly from Apache: http://kafka.apache.org/downloads.html

The longer explanation is that Kafka 0.8.1.x has a bug where the offset returned by a produce request is incorrect when acks=-1 is used (see https://issues.apache.org/jira/browse/KAFKA-1255). This bug was fixed in https://issues.apache.org/jira/browse/KAFKA-1260, and the fix appears in the 0.8.2 release and later. Under the hood, the schema registry writes to Kafka with acks=-1 to ensure high durability of registered schemas, and it therefore triggers this bug when 0.8.1.x brokers are used.
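If it helps to picture the failure mode, here's a rough sketch in Python (not the registry's actual code) of why an off-by-one produce offset stalls startup: the buggy produce response reports an offset one past the record actually written, so the reader thread waits for an offset that never appears in the log and times out.

```python
# Sketch of the stall caused by KAFKA-1255: the acks=-1 produce response can
# claim the record landed one offset past where it really did, so the reader
# can never "catch up" to the reported target offset.

def wait_until_offset(target_offset, log_end_offset, max_polls=5):
    """Simulated KafkaStoreReaderThread.waitUntilOffset: poll until caught up."""
    offset_reached = -1
    for _ in range(max_polls):
        # The highest offset the reader can ever consume is log_end_offset - 1.
        offset_reached = log_end_offset - 1
        if offset_reached >= target_offset:
            return offset_reached
    raise TimeoutError(
        f"targetOffset: {target_offset}, offsetReached: {offset_reached}")

# The broker actually wrote offsets 0..6 (log end offset 7), but the buggy
# produce response told the registry its record landed at offset 7.
try:
    wait_until_offset(target_offset=7, log_end_offset=7)
except TimeoutError as err:
    print(err)  # targetOffset: 7, offsetReached: 6
```

This matches the numbers in your stack trace: targetOffset 7 with offsetReached stuck at 6.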

Thanks very much,
Geoff






--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To unsubscribe from this group and stop receiving emails from it, send an email to confluent-platf...@googlegroups.com.
To post to this group, send email to confluent...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/confluent-platform/CAK8UGOBHcCbR78u67wYRjHOeHMBwaB-Yktu26Zdn%3DbUgjpeKTw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Sigurd Knippenberg

Mar 20, 2015, 10:38:03 AM
to confluent...@googlegroups.com
Thank you for the detailed response. I'll give 0.8.2 a try.

Sigurd

ABDELOUHAB Mohammed

Feb 16, 2016, 8:38:53 AM
to Confluent Platform, sigurd.kn...@gmail.com
Hello

I'm using Kafka 0.9.0, and I have the same problem:

[2016-02-16 12:59:41,881] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
        at io.confluent.rest.Application.createServer(Application.java:109)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 4, offsetReached: -1, timeout(ms): 60000
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:155)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
        ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 4, offsetReached: -1, timeout(ms): 60000
        at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.waitUntilOffset(KafkaStoreReaderThread.java:228)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:226)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:153)
        ... 5 more
[2016-02-16 13:00:00,342] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
        at io.confluent.rest.Application.createServer(Application.java:109)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 5, offsetReached: -1, timeout(ms): 60000
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:155)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
        ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 5, offsetReached: -1, timeout(ms): 60000
        at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.waitUntilOffset(KafkaStoreReaderThread.java:228)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:226)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:153)
        ... 5 more

Sri Shanmugam

Mar 2, 2016, 7:06:12 AM
to Confluent Platform, sigurd.kn...@gmail.com
Hi,

I am running into the same problem. I installed Confluent through RPM, and the version is 0.9.0.1-cp1.


I tried increasing the timeouts as specified in the GitHub issue (https://github.com/confluentinc/schema-registry/issues/97), but that did not help:

        
kafkastore.timeout.ms = 2000
kafkastore.init.timeout.ms = 2400000


And here is the exception stack from the schema registry start:


 ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
        at io.confluent.rest.Application.createServer(Application.java:109)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:155)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
        ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:367)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:224)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:153)
        ... 5 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
        at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:686)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:449)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:362)
        ... 7 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.


I don't see any exceptions in the ZooKeeper and Kafka logs.

Are there any suggestions on this issue?

Thanks for the help,

Sri

Sri Shanmugam

Mar 2, 2016, 9:24:43 AM
to Confluent Platform, sigurd.kn...@gmail.com
Please ignore this issue. It was a problem in my /etc/hosts file with how I had mapped the IP to the hostname. Once I fixed the hosts file, I was able to produce and consume messages, along with schemas, in Kafka topics.
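In case it helps anyone else, the shape of the fix was just making sure each broker hostname maps to its real, routable IP on every machine involved (the names and addresses below are placeholders, not my actual values):

```
# /etc/hosts -- placeholders; use your brokers' actual routable IPs and names.
# Mapping a broker's advertised hostname to 127.0.0.1 (or a stale IP) on the
# schema registry host is a common cause of this metadata timeout.
192.168.1.11  vm-01
192.168.1.13  vm-03
192.168.1.14  vm-04
```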

Kotesh Banoth

May 10, 2016, 9:26:06 AM
to Confluent Platform, sigurd.kn...@gmail.com
Hi Sigurd, I am also getting the same error. Can you please help me out?


[hpvertica1@hpvertica3 confluent-2.0.1]$  ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-05-10 19:48:05,313] INFO SchemaRegistryConfig values:
    master.eligibility = true
    port = 8081
    kafkastore.timeout.ms = 500
    kafkastore.init.timeout.ms = 60000
    debug = false
    kafkastore.zk.session.timeout.ms = 30000
    schema.registry.zk.namespace = schema_registry
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    kafkastore.topic = _schemas
    avro.compatibility.level = backward
    shutdown.graceful.ms = 1000
    access.control.allow.origin =
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    metrics.jmx.prefix = kafka.schema.registry
    host.name = 198.105.244.11
    metric.reporters = []
    kafkastore.commit.interval.ms = -1
    kafkastore.connection.url = localhost:2181
    metrics.num.samples = 2
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2016-05-10 19:48:06,673] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:86)
[2016-05-10 19:48:08,072] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:205)
[2016-05-10 19:48:08,410] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)


[2016-05-10 19:49:08,556] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)

io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:109)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)

Ewen Cheslack-Postava

May 19, 2016, 4:27:19 AM
to Confluent Platform, sigurd.kn...@gmail.com
Kotesh,

That TimeoutException for metadata updates almost always means you cannot contact the Kafka brokers. This usually means you have a misconfiguration, e.g. hostnames used in configs don't resolve on all the hosts you're using.
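A quick sanity check along these lines (hostnames below are placeholders) is to confirm, from the machine running the schema registry, that every hostname your brokers advertise actually resolves:

```python
import socket

def resolves(host):
    """True if `host` resolves to an IP address from this machine."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Replace these with the hostnames your brokers advertise (placeholders here);
# every one of them must resolve from the schema registry host.
for host in ["localhost", "broker-1.invalid"]:
    print(f"{host}: {'ok' if resolves(host) else 'DOES NOT RESOLVE'}")
```

Any host that prints DOES NOT RESOLVE needs an /etc/hosts or DNS entry before the registry can reach the cluster.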

-Ewen




--
Thanks,
Ewen

Michael Dolgonos

Sep 23, 2016, 11:47:45 AM
to Confluent Platform, sig...@knippenberg.com
I'm having the same issue on a single node with Confluent 3.0.

The only difference is that I renamed the topic from "_schemas" to "_schemas1" in the schema-registry.properties file. I created this new topic and see it listed in kafka-topics --list --zookeeper localhost:2181. My Kafka server as well as ZooKeeper are running, and I can exchange messages in other topics except _schemas1. When I try to send a regular message to it, I see the following error in the server console:

Topic and partition to exceptions: _schemas1-0 -> org.apache.kafka.common.errors.CorruptRecordException (kafka.server.KafkaApis)
[2016-09-23 09:47:31,595] INFO [KafkaApi-0] Closing connection due to error during produce request with correlation id 5 from client id console-producer with ack=0

Any advice would be greatly appreciated.

Michael D 