Hi all,
I've already seen a couple of questions about similar issues, but most of them are quite old, and I wanted to better understand whether Debezium can handle this load or not.
So, first of all, I'm currently using the latest Debezium version, 2.1.2, and the latest Kafka, 3.3.2. The Oracle version is 19.
Here are relevant parts of configuration:
key.converter: io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url: "..."
value.converter: io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url: "..."
table.include.list: "<61 tables>"
schema.history.internal.kafka.topic: "..."
schema.history.internal.kafka.bootstrap.servers: "..."
log.mining.username.exclude.list: SYS,SYSTEM
log.mining.batch.size.max: experimented with 100_000 to 1_000_000
log.mining.batch.size.min: experimented with 1000, 10000 and 100000
log.mining.batch.size.default: experimented with 100_000 to 500_000
snapshot.mode: schema_only
log.mining.strategy: online_catalog
transforms: unwrap
transforms.unwrap.type: io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones: true
transforms.unwrap.delete.handling.mode: rewrite
transforms.unwrap.add.fields: op,source.ts_ms
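For readability, here is the same configuration collected into a Kafka Connect JSON payload (a sketch only: the connector name is a placeholder I made up, the database connection properties are omitted, and the registry URLs and table list are elided exactly as above):

```json
{
  "name": "oracle-source",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "...",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "...",
    "table.include.list": "<61 tables>",
    "schema.history.internal.kafka.topic": "...",
    "schema.history.internal.kafka.bootstrap.servers": "...",
    "log.mining.username.exclude.list": "SYS,SYSTEM",
    "log.mining.batch.size.min": "10000",
    "log.mining.batch.size.default": "200000",
    "log.mining.batch.size.max": "500000",
    "snapshot.mode": "schema_only",
    "log.mining.strategy": "online_catalog",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.drop.tombstones": "true",
    "transforms.unwrap.delete.handling.mode": "rewrite",
    "transforms.unwrap.add.fields": "op,source.ts_ms"
  }
}
```

The batch-size values shown are just one point from the ranges I experimented with, not a recommendation.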
Here are some screenshots of the relevant metrics:
As you can see, the lag increases very quickly: the connector manages to consume only about one third of the events. The number of transactions on our server is about 800 per second, and sometimes it even reaches 1K.
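To put those numbers into perspective, a quick back-of-envelope calculation (the 800 tx/s rate and the one-third figure are from my observations above; everything else is just arithmetic):

```python
def backlog_growth_per_hour(source_tps: float, consumed_fraction: float) -> int:
    """Transactions of lag accumulated per hour when the connector
    only keeps up with a fraction of the source transaction rate."""
    return int(source_tps * (1 - consumed_fraction) * 3600)

# ~800 tx/s at the source, connector consuming roughly one third of them
print(backlog_growth_per_hour(800, 1 / 3))  # -> 1920000
```

So at this rate the connector falls roughly 1.9 million transactions further behind every hour, which matches how quickly the lag graphs climb.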
At this point I've exhausted my knowledge and configuration options, so I'm reaching out to you for suggestions or advice.
Thanks