Backward compatibility problem with SpecificData.get().deepCopy

George @paytm.com

Mar 22, 2016, 3:25:36 PM
to Confluent Platform
Hi,

Here is my scenario:

1. Create a record with schema version 1.
2. Add a new field with a default value to the schema, evolving it into version 2.
3. In Java, try to read the record produced with schema version 1 using schema version 2 via deepCopy:

SpecificData.get().deepCopy(version2.SCHEMA$, genericRecordAtV1)

It throws an ArrayIndexOutOfBoundsException, right at the index of the new field added in step 2.

I thought that, since the change is backward compatible, my new schema should be able to deep-copy records written with the older schema. Did I miss anything?
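For what it's worth, deepCopy walks the fields of whatever schema you pass it and performs no schema resolution, so handing it the v2 schema with a record built against v1 reads past the end of the record's field array. Schema resolution is what fills in defaults, and it lives in the datum reader. Below is a minimal sketch of that path (the User schemas and field names are hypothetical, made up for illustration), using GenericDatumReader with separate writer and reader schemas:

```java
import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class SchemaResolutionSketch {

    public static GenericRecord resolve() throws Exception {
        // Hypothetical v1 schema: one required field.
        Schema v1 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
          + "{\"name\":\"name\",\"type\":\"string\"}]}");
        // Hypothetical v2 schema: adds a field with a default (backward compatible).
        Schema v2 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
          + "{\"name\":\"name\",\"type\":\"string\"},"
          + "{\"name\":\"age\",\"type\":\"int\",\"default\":0}]}");

        // Serialize a record with the v1 (writer) schema.
        GenericRecord v1Record = new GenericData.Record(v1);
        v1Record.put("name", "alice");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(v1).write(v1Record, encoder);
        encoder.flush();

        // Deserialize with writer schema v1 and reader schema v2:
        // Avro's schema resolution fills in the default for "age".
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(v1, v2);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        return reader.read(null, decoder);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve()); // the v2 record, with "age" defaulted
    }
}
```

Once the record has been resolved into the v2 shape, a deepCopy against the v2 schema no longer falls off the end.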

Thanks,

George

George @paytm.com

Mar 22, 2016, 4:13:08 PM
to Confluent Platform
Stacktrace:

java.lang.ArrayIndexOutOfBoundsException: 23
at org.apache.avro.generic.GenericData$Record.get(GenericData.java:135)
at org.apache.avro.generic.GenericData.getField(GenericData.java:580)
at org.apache.avro.generic.GenericData.getField(GenericData.java:595)
at org.apache.avro.generic.GenericData.deepCopy(GenericData.java:970)

gerard...@dizzit.com

Mar 24, 2016, 3:42:55 AM
to Confluent Platform
If you let Avro handle the conversion, by setting specific.avro.reader=true in the client, does it work as expected?
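For anyone landing here later, a minimal sketch of the consumer configuration being suggested, assuming Confluent's KafkaAvroDeserializer (broker address, group id, and registry URL are placeholders):

```java
import java.util.Properties;

public class ConsumerConfigSketch {

    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");           // placeholder
        props.put("group.id", "my-consumer-group");                 // placeholder
        props.put("schema.registry.url", "http://localhost:8081");  // placeholder
        props.put("key.deserializer",
                  "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("value.deserializer",
                  "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        // Return generated SpecificRecord classes instead of GenericData.Record:
        props.put("specific.avro.reader", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("specific.avro.reader"));
    }
}
```

With specific.avro.reader set, the deserializer hands back instances of the generated Avro classes, so no manual deepCopy between schema versions is needed.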

Roger Hoover

Mar 24, 2016, 10:15:24 PM
to confluent...@googlegroups.com
Yes, I use specific.avro.reader=true and it doesn't require any copying. You can just cast the result of IncomingMessageEnvelope.getMessage().

--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To unsubscribe from this group and stop receiving emails from it, send an email to confluent-platf...@googlegroups.com.
To post to this group, send email to confluent...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/confluent-platform/c6993d85-3822-461a-967a-34d81b16b1c4%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

George @paytm.com

Mar 24, 2016, 11:28:52 PM
to Confluent Platform
Hi Gerard and Roger,

Thanks for the reply. In my case, upgrading the deserializer jar from 1.0.1 to 2.0.1 fixed it (we just migrated to Kafka 0.9 and Confluent 2.0.1).

In 1.0.1, the underlying GenericRecord class type was org.apache.avro.generic.GenericData$Record instead of my generated Avro Java class.
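If the project uses Maven, the upgrade George describes might look like the following dependency bump (assuming Confluent's kafka-avro-serializer artifact, which carries the deserializer):

```xml
<dependency>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-avro-serializer</artifactId>
  <version>2.0.1</version>
</dependency>
```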

George

