Re: producer max.request.size


Ayan Mukhuty

Mar 25, 2021, 3:26:03 AM, to debezium
Hi,

I have a similar issue, and I am not sure how to increase max.request.size.
I have made the change in connect-distributed.properties and restarted, but the producer still uses the default value of 1 MB.

Regards,
Ayan

On Thursday, March 25, 2021 at 12:10:51 AM UTC+5:30 joel.salm...@manomano.com wrote:
Found the issue: since we are using Kafka Connect in distributed mode, the correct config to override the producer here is
        "producer.override.max.request.size" : "104857600",

having applied connector.client.config.override.policy=ALL for the worker instances to allow the override (https://docs.confluent.io/platform/current/connect/references/allconfigs.html#override-the-worker-configuration)
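
A minimal sketch of the worker-side change, in case it helps anyone else (the properties file path is illustrative and varies per install):

# Append to the worker config on EVERY Connect worker, then restart each one.
echo "connector.client.config.override.policy=ALL" >> /etc/kafka/connect-distributed.properties

Once the workers are restarted, the per-connector producer.override.* keys are accepted.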
On Wednesday, March 24, 2021 at 5:41:47 PM UTC+1, Joel SALMERON VIVER wrote:
Hello,

We are receiving the following error from Debezium:

org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:282)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:336)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1740572 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.

This happens despite having set the following broker parameters:
auto.create.topics.enable= true
default.replication.factor=3
num.partitions=6
message.max.bytes=104857600
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.replica.fetchers=2
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
replica.fetch.max.bytes=10485760

and the following Debezium connector configuration:
{
  "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
  "snapshot.locking.mode": "none",
  "errors.log.include.messages": "true",
  "database.dbname": "pim",
  "database.user": "manomano",
  "producer.max.request.size": "104857600",
  "database.history.kafka.topic": "app_common.pim.dbhistory",
  "database.server.name": "app_common",
  "database.history.producer.max.request.size": "104857600",
  "plugin.name": "pgoutput",
  "tombstones.on.delete": "false",
  "value.converter.schema.registry.url": "http://infra-kafka-framework.prd.manomano.com:8081",
  "decimal.handling.mode": "double",
  "database.serverTimezone": "Europe/Paris",
  "database.hostname": "prd-app-common-pg.manomano.tech",
  "producer.compression_type": "lz4",
  "name": "PIM4",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "errors.log.enable": "true",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "key.converter.schema.registry.url": "http://infra-kafka-framework.prd.manomano.com:8081",
  "snapshot.mode": "exported"
}


The producer configuration "producer.max.request.size" does not seem to be taking effect... any ideas?

Joel SALMERON VIVER

Mar 25, 2021, 12:48:35 PM, to debezium
Hello Ayan; if you are using Kafka Connect in distributed mode, make sure all workers have the property mentioned above:
connector.client.config.override.policy=ALL

Then, in each connector's configuration, set the producer override, e.g. "producer.override.max.request.size": "104857600" (or whatever size you need).
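
As a concrete sketch, the override can also be applied to an already-running connector through the Connect REST API (host, port, and the use of jq are illustrative; note that PUT replaces the whole config, hence the fetch-and-merge):

# Fetch the current connector config, merge in the override, and resubmit it.
curl -s http://localhost:8083/connectors/PIM4/config |
  jq '. + {"producer.override.max.request.size": "104857600"}' > /tmp/pim4.json
curl -s -X PUT -H "Content-Type: application/json" \
  --data @/tmp/pim4.json http://localhost:8083/connectors/PIM4/config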

You still need the brokers configured with:
message.max.bytes=104857600
socket.request.max.bytes=104857600
replica.fetch.max.bytes=10485760
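
To double-check what the brokers actually picked up, something like the following should work (bootstrap address and broker id are illustrative; --all requires Kafka 2.5 or newer):

# List the effective broker configs and filter for the size-related limits.
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers \
  --entity-name 0 --describe --all | grep -E 'message.max.bytes|replica.fetch.max.bytes'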
On Thursday, March 25, 2021 at 8:26:03 AM UTC+1, ayan.m...@gmail.com wrote:

jiri.p...@gmail.com

Mar 26, 2021, 5:40:08 AM, to debezium
Hi Joel,

I've prepared a new FAQ entry about this topic: https://github.com/debezium/debezium.github.io/pull/655. Could you please take a look and double-check that it is correct?

J.
