error starting schema registry


Andrey Plaksin

May 10, 2016, 10:15:05 AM
to Confluent Platform
Hi,

I installed a single-server Confluent Platform on a Google Cloud VM.

I can't start the schema registry; it fails with the following error:

ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:109)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:155)
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
    ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:367)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:224)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:153)
    ... 5 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 81 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:686)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:449)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:362)
    ... 7 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 81 ms.


Roger Hoover

May 10, 2016, 12:27:18 PM
to confluent...@googlegroups.com
Hi Andrey,

Is Kafka running? Are there any errors in its logs?

Roger
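A quick sanity check from the VM, assuming the default ports of a single-node install (the broker log path below is an assumption and varies by install method):

```shell
# Is the broker listening on its default port? (assumption: localhost:9092)
nc -vz localhost 9092

# Is ZooKeeper up? It replies "imok" if so. (assumption: localhost:2181)
echo ruok | nc localhost 2181

# Scan the broker log for recent errors (log path is an assumption)
grep -i error /var/log/kafka/server.log | tail -n 20
```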

--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To unsubscribe from this group and stop receiving emails from it, send an email to confluent-platf...@googlegroups.com.
To post to this group, send email to confluent...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/confluent-platform/6d93f920-8841-4102-9216-419cebe2768d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Andrey Plaksin

May 10, 2016, 1:41:52 PM
to Confluent Platform
ZooKeeper and Kafka started successfully. I was able to connect to ZooKeeper from an external machine and see the broker up and running.

Alex Loddengaard

May 10, 2016, 7:00:04 PM
to confluent...@googlegroups.com
Looks like the error is that the Noop key couldn't be produced to Kafka. Can you make sure the _schemas topic exists? Can you also confirm that you can produce to a test Kafka topic with the console producer?

Alex
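Both of those checks can be run with the stock CLI tools that ship with the platform; the host and port values below are assumptions for a default single-node install, and sr-test is just a throwaway topic name:

```shell
# 1. Does the schema registry's backing topic exist?
kafka-topics --list --zookeeper localhost:2181 | grep _schemas

# 2. Can you produce to a test topic at all?
kafka-topics --create --zookeeper localhost:2181 --topic sr-test \
  --partitions 1 --replication-factor 1
echo "hello" | kafka-console-producer --broker-list localhost:9092 --topic sr-test

# 3. And read it back?
kafka-console-consumer --zookeeper localhost:2181 --topic sr-test \
  --from-beginning --max-messages 1
```

If step 2 times out with the same "Failed to update metadata" error, the problem is broker reachability rather than anything specific to the schema registry.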

On Tue, May 10, 2016 at 10:41 AM, Andrey Plaksin <andrey....@gmail.com> wrote:
ZooKeeper and Kafka started successfully. I was able to connect to ZooKeeper from an external machine and see the broker up and running.

Michael Dolgonos

Sep 23, 2016, 9:50:41 AM
to Confluent Platform
I'm having the same problem. The only difference is that I renamed the topic from "_schemas" to "_schemas1" in the schema-registry.properties file. I created this new topic and can see it listed by kafka-topics --list --zookeeper localhost:2181. My Kafka server and ZooKeeper are both running, and I can exchange messages on other topics, but not on _schemas1. When I try to send a regular message to it, I see the following error in the server console:

Topic and partition to exceptions: _schemas1-0 -> org.apache.kafka.common.errors.CorruptRecordException (kafka.server.KafkaApis)
[2016-09-23 09:47:31,595] INFO [KafkaApi-0] Closing connection due to error during produce request with correlation id 5 from client id console-producer with ack=0

Any advice would be greatly appreciated.

Michael D.
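One thing worth comparing here is the hand-created topic's configuration against what Schema Registry sets up for itself: it expects a single-partition, log-compacted topic. A sketch of that check (host and ports are assumptions for a default install):

```shell
# Inspect the renamed topic's current settings
kafka-topics --describe --zookeeper localhost:2181 --topic _schemas1

# Recreate it the way Schema Registry would: one partition, compaction on
kafka-topics --create --zookeeper localhost:2181 --topic _schemas1 \
  --partitions 1 --replication-factor 1 --config cleanup.policy=compact

# And confirm schema-registry.properties points at it:
# kafkastore.topic=_schemas1
```

Note also that a console-producer test against this topic isn't quite representative: the registry writes keyed records, while the plain console producer sends null keys.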

On Tuesday, May 10, 2016 at 7:00:04 PM UTC-4, Alex Loddengaard wrote:
Looks like the error is that the Noop key couldn't be produced to Kafka. Can you make sure the _schemas topic exists? Can you also confirm that you can produce to a test Kafka topic with the console producer?

Alex
On Tue, May 10, 2016 at 10:41 AM, Andrey Plaksin <andrey....@gmail.com> wrote:
ZooKeeper and Kafka started successfully. I was able to connect to ZooKeeper from an external machine and see the broker up and running.
