Hello,
We are receiving the following error from Debezium:
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:282)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:336)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1740572 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
We have the following server (broker) parameters set:
auto.create.topics.enable=true
default.replication.factor=3
num.partitions=6
message.max.bytes=104857600
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.replica.fetchers=2
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
replica.fetch.max.bytes=10485760
and the following Debezium connector configuration, including the producer properties:
{
  "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
  "snapshot.locking.mode": "none",
  "errors.log.include.messages": "true",
  "database.dbname": "pim",
  "database.user": "manomano",
  "producer.max.request.size": "104857600",
  "database.history.kafka.topic": "app_common.pim.dbhistory",
  "database.history.producer.max.request.size": "104857600",
  "tombstones.on.delete": "false",
  "decimal.handling.mode": "double",
  "database.serverTimezone": "Europe/Paris",
  "database.hostname": "prd-app-common-pg.manomano.tech",
  "producer.compression_type": "lz4",
  "name": "PIM4",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "errors.log.enable": "true",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "snapshot.mode": "exported"
}
The producer configuration "producer.max.request.size" does not seem to be taking effect: the producer still rejects the record at the default 1048576-byte (1 MiB) max.request.size, even though the broker-side message.max.bytes is already 100 MB. Any ideas?
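For what it is worth, here is our current understanding of where this limit lives; this is an assumption on our side, not something we have verified. The 1048576 in the exception is the client-side producer max.request.size, so the broker settings above should not be the problem. If we read the Kafka Connect docs correctly (KIP-458, Kafka 2.3+), a bare "producer." prefix in the connector config is ignored; the override either goes into the worker properties, or uses the "producer.override." prefix with the override policy enabled on the worker. A minimal sketch of both, assuming connect-distributed.properties is the worker config:

# Worker properties (connect-distributed.properties): applies to every producer the worker creates
producer.max.request.size=104857600
# Needed for per-connector "producer.override.*" keys to be accepted (default policy is None)
connector.client.config.override.policy=All

# Per-connector alternative, in the connector JSON, instead of the bare "producer." prefix:
# "producer.override.max.request.size": "104857600"

Does that match how it is supposed to work, or is there a Debezium-specific knob we are missing?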