I'm trying to start the Confluent Schema Registry against a Kafka broker that was installed as part of Hortonworks HDP (it's Kafka
Startup fails with:
[2015-03-19 13:39:48,858] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:104)
at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 7, offsetReached: 6, timeout(ms): 60000
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreTimeoutException: KafkaStoreReaderThread failed to reach target offset within the timeout interval. targetOffset: 7, offsetReached: 6, timeout(ms): 60000
at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.waitUntilOffset(KafkaStoreReaderThread.java:229)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:222)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
... 5 more
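The timeout that fires here is the registry's Kafka store initialization timeout (60000 ms in the trace above). As a possible workaround while diagnosing the underlying cause, the init timeout can be raised in `schema-registry.properties`; the property name below comes from the Schema Registry configuration, but the value is only illustrative and this is not a confirmed fix for this failure:

```properties
# schema-registry.properties -- illustrative value, not a confirmed fix.
# Raise the Kafka store init timeout (default 60000 ms, matching the
# "timeout(ms): 60000" in the StoreTimeoutException above).
kafkastore.init.timeout.ms=120000
```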
Here is the DEBUG log from the same startup attempt, covering the minute leading up to the timeout:
[2015-03-19 13:38:48,449] INFO [kafka-store-reader-thread-_schemas], Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-03-19 13:38:48,474] INFO [ConsumerFetcherThread-schema-registry-vm-01-8081_vm-01-1426772327308-5bf32081-0-2], Starting (kafka.consumer.ConsumerFetcherThread:68)
[2015-03-19 13:38:48,484] DEBUG Disconnecting from vm-04:6667 (kafka.consumer.SimpleConsumer:52)
[2015-03-19 13:38:48,493] DEBUG Created socket with SO_TIMEOUT = 30000 (requested 30000), SO_RCVBUF = 65536 (requested 65536), SO_SNDBUF = 43520 (requested -1), connectTimeoutMs = 30000. (kafka.network.BlockingChannel:52)
[2015-03-19 13:38:48,563] DEBUG reset fetch offset of ( _schemas:0: fetched offset = 0: consumed offset = -1 ) to 0 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,570] DEBUG reset consume offset of _schemas:0: fetched offset = 0: consumed offset = 0 to 0 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,606] INFO [ConsumerFetcherManager-1426772327334] Added fetcher for partitions ArrayBuffer([[_schemas,0], initOffset -1 to broker id:2,host:vm-04,port:6667] ) (kafka.consumer.ConsumerFetcherManager:68)
[2015-03-19 13:38:48,704] DEBUG reset consume offset of _schemas:0: fetched offset = 0: consumed offset = 1 to 1 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,713] DEBUG updated fetch offset of (_schemas:0: fetched offset = 6: consumed offset = 1) to 6 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,765] DEBUG Trying to send metadata request to node -1 (org.apache.kafka.clients.NetworkClient:387)
[2015-03-19 13:38:48,766] DEBUG Init connection to node -1 for sending metadata request in the next iteration (org.apache.kafka.clients.NetworkClient:397)
[2015-03-19 13:38:48,767] DEBUG Initiating connection to node -1 at vm-01:6667. (org.apache.kafka.clients.NetworkClient:415)
[2015-03-19 13:38:48,770] DEBUG Completed connection to node -1 (org.apache.kafka.clients.NetworkClient:348)
[2015-03-19 13:38:48,771] DEBUG Trying to send metadata request to node -1 (org.apache.kafka.clients.NetworkClient:387)
[2015-03-19 13:38:48,786] DEBUG Sending metadata request ClientRequest(expectResponse=true, payload=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[_schemas]})) to node -1 (org.apache.kafka.clients.NetworkClient:392)
[2015-03-19 13:38:48,803] DEBUG reset consume offset of _schemas:0: fetched offset = 6: consumed offset = 2 to 2 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,810] DEBUG Updated cluster metadata version 2 to Cluster(nodes = [Node(1, vm-03, 6667), Node(2, vm-04, 6667), Node(0, vm-01, 6667)], partitions = [Partition(topic = _schemas, partition = 0, leader = 2, replicas = [2,0,1,], isr = [2,]]) (org.apache.kafka.clients.producer.internals.Metadata:141)
[2015-03-19 13:38:48,812] DEBUG reset consume offset of _schemas:0: fetched offset = 6: consumed offset = 3 to 3 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,822] DEBUG Initiating connection to node 2 at vm-04:6667. (org.apache.kafka.clients.NetworkClient:415)
[2015-03-19 13:38:48,825] DEBUG Completed connection to node 2 (org.apache.kafka.clients.NetworkClient:348)
[2015-03-19 13:38:48,831] DEBUG reset consume offset of _schemas:0: fetched offset = 6: consumed offset = 4 to 4 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,836] DEBUG updated fetch offset of (_schemas:0: fetched offset = 7: consumed offset = 4) to 7 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,842] DEBUG reset consume offset of _schemas:0: fetched offset = 7: consumed offset = 5 to 5 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,856] INFO Wait to catch up until the offset of the last message at 7 (io.confluent.kafka.schemaregistry.storage.KafkaStore:221)
[2015-03-19 13:38:48,859] DEBUG reset consume offset of _schemas:0: fetched offset = 7: consumed offset = 6 to 6 (kafka.consumer.PartitionTopicInfo:52)
[2015-03-19 13:38:48,867] DEBUG reset consume offset of _schemas:0: fetched offset = 7: consumed offset = 7 to 7 (kafka.consumer.PartitionTopicInfo:52)
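For context, the `KafkaStoreReaderThread.waitUntilOffset` call in the stack trace follows a common wait-until-offset pattern: the main thread blocks until a background reader reports that it has caught up to a target offset, and raises a timeout error otherwise. A minimal sketch of that pattern (my own illustration, not Confluent's actual implementation) looks like this:

```python
import threading
import time


class OffsetTracker:
    """Sketch of the wait-until-offset pattern behind
    KafkaStoreReaderThread.waitUntilOffset (illustrative, not the real code)."""

    def __init__(self):
        self._offset = -1
        self._cond = threading.Condition()

    def record(self, offset):
        # Called by the reader thread as each message is consumed.
        with self._cond:
            self._offset = offset
            self._cond.notify_all()

    def wait_until(self, target, timeout_ms):
        # Block until the reader reaches `target`, or raise on timeout --
        # the analogue of the StoreTimeoutException in the trace above.
        deadline = time.monotonic() + timeout_ms / 1000.0
        with self._cond:
            while self._offset < target:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    raise TimeoutError(
                        f"failed to reach target offset within the timeout "
                        f"interval. targetOffset: {target}, "
                        f"offsetReached: {self._offset}")
                self._cond.wait(remaining)
            return self._offset
```

Under this reading, the registry's reader got stuck one message short (offsetReached: 6 vs. targetOffset: 7) for the full 60 s window, even though the later DEBUG lines show the consume offset eventually reaching 7.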