Should kafka-connect 0.10 work with broker 0.9.0.1 ?

Barry Kaplan

Jun 14, 2016, 6:47:26 AM6/14/16
to Confluent Platform
Early at startup I get

org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'brokers': Error reading field 'host': Error reading string of length 12592, only 7741 bytes available
    at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.parseResponse(NetworkClient.java:380) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:449) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:269) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:178) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:205) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1272) ~[kafka-clients-0.10.0.0.jar:na]
    at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:131) ~[connect-runtime-0.10.0.0.jar:na]
    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:86) ~[connect-runtime-0.10.0.0.jar:na]
    at org.apache.kafka.connect.runtime.Worker.start(Worker.java:121) ~[connect-runtime-0.10.0.0.jar:na]
    at org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:105) ~[connect-runtime-0.10.0.0.jar:na]
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:171) ~[connect-runtime-0.10.0.0.jar:na]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]

I do not see this error when using the 0.9.0.1 Kafka Connect library against the 0.9.0.1 broker.

Also, debugging through Schema#read, fields[i].name takes this sequence of values:
- correlation_id
- brokers
- node_id
- host
- port
- rack
- node_id
- host        <- exception
- brokers   <- exception

Does it mean anything that it is trying to read the same key multiple times from the buffer?
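(For anyone hitting this later: the field sequence above is consistent with the 0.10 client parsing the 0.9 broker's metadata response with a newer schema that expects an extra field. The following is a minimal, self-contained sketch with an invented buffer layout, not Kafka's actual wire protocol, showing how one extra expected string shifts every later offset so that a length prefix gets read from arbitrary bytes:)

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a reader built for a newer schema that expects an
// extra 'rack' string parses a buffer written in the older layout
// (node_id, host, port only). The misaligned read eventually interprets
// mid-string bytes as a length prefix and fails, much like the
// SchemaException in the log above.
public class SchemaMismatchDemo {

    // Kafka-style length-prefixed string: INT16 length, then UTF-8 bytes.
    static void putString(ByteBuffer buf, String s) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        buf.putShort((short) bytes.length);
        buf.put(bytes);
    }

    static String getString(ByteBuffer buf) {
        int len = buf.getShort();
        if (len < 0 || len > buf.remaining()) {
            throw new RuntimeException("Error reading string of length " + len
                    + ", only " + buf.remaining() + " bytes available");
        }
        byte[] bytes = new byte[len];
        buf.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "Old" writer: two broker entries of (node_id, host, port) -- no rack.
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.putInt(1);
        putString(buf, "broker1");
        buf.putInt(9092);
        buf.putInt(2);
        putString(buf, "broker2");
        buf.putInt(9093);
        buf.flip();

        // "New" reader: expects (node_id, host, port, rack) per broker.
        try {
            for (int i = 0; i < 2; i++) {
                System.out.println("node_id = " + buf.getInt());
                System.out.println("host    = " + getString(buf));
                System.out.println("port    = " + buf.getInt());
                System.out.println("rack    = " + getString(buf)); // not in the data!
            }
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The first broker entry happens to parse (the bogus rack read lands on two zero bytes and yields an empty string), and it is the second entry's host read that blows up with a huge garbage length, matching the node_id/host/port/rack/node_id/host sequence observed above.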


Dustin Cote

Jun 14, 2016, 7:09:31 AM6/14/16
to confluent...@googlegroups.com

Hi Barry,

I believe the issue here is that you are using a client version newer than the broker version. In general, brokers must be upgraded before clients, and in this case the message format changed between 0.9.0.1 and 0.10.
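(Until the brokers are upgraded, one workaround is to keep the Connect worker and its clients on the broker's version. A sketch assuming a Maven build, using the stock Apache Kafka artifact coordinates; adjust for your build tool:)

```xml
<!-- Pin Connect and the clients to the broker's version (0.9.0.1)
     until the brokers themselves are upgraded to 0.10. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>connect-runtime</artifactId>
  <version>0.9.0.1</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.9.0.1</version>
</dependency>
```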


Barry Kaplan

Jun 14, 2016, 7:18:38 AM6/14/16
to Confluent Platform
Thanks Dustin, I just read the upgrade section for 0.10 and saw that. Sorry for the noise.