Recently we had an outage on our MySQL main DB, so we force-recreated a new RDS instance and a new read replica. Our Debezium connector (version 1.9) connects to the read replica and sends data to Kafka. Since creating the new read replica we have been getting the following deserialization error frequently (once or more per day):
```json
{
"state": "FAILED",
"trace": "org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.\n\tat io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)\n\tat io.debezium.connector.mysql.MySqlStreamingChangeEventSource$ReaderThreadLifecycleListener.onCommunicationFailure(MySqlStreamingChangeEventSource.java:1239)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:1079)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:631)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:932)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: io.debezium.DebeziumException: Failed to deserialize data of EventHeaderV4{timestamp=1760515309000, eventType=ROWS_QUERY, serverId=1568727633, headerLength=19, dataLength=2161, nextPosition=41218112, flags=128}\n\tat io.debezium.connector.mysql.MySqlStreamingChangeEventSource.wrap(MySqlStreamingChangeEventSource.java:1194)\n\t... 5 more\nCaused by: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1760515309000, eventType=ROWS_QUERY, serverId=1568727633, headerLength=19, dataLength=2161, nextPosition=41218112, flags=128}\n\tat com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:341)\n\tat com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.nextEvent(EventDeserializer.java:244)\n\tat io.debezium.connector.mysql.MySqlStreamingChangeEventSource$1.nextEvent(MySqlStreamingChangeEventSource.java:230)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:1051)\n\t... 3 more\nCaused by: java.io.EOFException: Failed to read remaining 1027 of 2156 bytes from position 1052290618. Block length: 1027. 
Initial block length: 2157.\n\tat com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.fill(ByteArrayInputStream.java:115)\n\tat com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.read(ByteArrayInputStream.java:105)\n\tat com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.readString(ByteArrayInputStream.java:78)\n\tat com.github.shyiko.mysql.binlog.event.deserialization.RowsQueryEventDataDeserializer.deserialize(RowsQueryEventDataDeserializer.java:31)\n\tat com.github.shyiko.mysql.binlog.event.deserialization.RowsQueryEventDataDeserializer.deserialize(RowsQueryEventDataDeserializer.java:25)\n\tat com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:335)\n\t... 6 more\n",
"worker_id": "kafka-connect-ccloud:8083",
"generation": 8
}
```
We are only seeing this on one environment; the remaining environments are stable. The errors occur intermittently and we have been unable to find the root cause. We have already checked the FAQ suggestion to increase the timeout. Apart from increasing timeouts, what other options or fixes can we apply to reduce these errors?
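For context, the `EOFException` here occurs while deserializing a `ROWS_QUERY` binlog event, which is only written when `binlog_rows_query_log_events` is enabled in the RDS parameter group; if the connector does not need the original SQL statements (`include.query` is left at its default of `false`), disabling that parameter removes these events from the binlog entirely. On the connector side, a hedged sketch of options worth evaluating (property names are from the Debezium 1.9 MySQL connector; the values shown are illustrative assumptions, not recommendations for every setup):

```properties
# Assumption: log and skip events that cannot be deserialized instead of
# failing the whole task. ROWS_QUERY events carry no row data, so skipping
# them is usually safe, but verify for your pipeline before relying on this.
event.processing.failure.handling.mode=warn

# Assumption: keep the binlog client connection alive across quiet periods
# and transient network blips; timeout value here is illustrative.
connect.keep.alive=true
connect.timeout.ms=30000
```

Note that `warn`/`skip` modes trade task stability for the risk of silently dropping events, so they are best treated as a mitigation while the underlying network or binlog issue is investigated.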