Oracle Connector stalling with statspack process


Neven MacEwan

Feb 17, 2026, 4:51:50 PM
to debezium
Hi - we have an Oracle connector configured with about 80 included tables, capturing about 60k events per hour. We've noticed that when the statspack process runs, the CDC events on the tables we are capturing stall for about 2 hours. Are there any settings we should look at changing? Thanks.

Chris Cranford

Feb 18, 2026, 3:01:44 AM
to debe...@googlegroups.com
Hi, 

There are a lot of interactions between statspack and the SGA, depending on the level the process is run with. Depending on the level, these interactions can cause IO contention, particularly if your hardware is already saturated. Statspack also has the potential to flush the SGA when taking snapshots, which directly forces Oracle to hard parse a lot of data that would otherwise be looked up from cached buffers. This has a direct impact on LogMiner's performance, particularly when it's interacting with the data dictionary.

What level is statspack executed with?

-cc


Neven MacEwan

Feb 22, 2026, 9:26:30 PM
to debezium
Hi Chris

From the Oracle DBA

"To answer your questions: the statspack snapshot level used in FSPRDTM is 7 (slightly higher than the default level 5), and the hardware is definitely not saturated - we don't have any issues with memory/CPU utilisation on the server.

We don't think this comment on the Debezium support forum is relevant to the issue with the CDC delays. The statspack snapshots are collected every 30 minutes, and that collection is not causing any issues for CDC. The issue occurs when the cronjob that purges the old statspack snapshots runs (it used to be at 8am, now at 1am). As discussed before, that cronjob runs for just 2-3 minutes, but it generates about 6GB of archive logs, which apparently slows down the LogMiner processing, although it shouldn't even look for changes in the tables in the PERFSTAT schema that are used by statspack. We have to run this purging cronjob, otherwise that tablespace would grow very fast. I mentioned before that we could change this cronjob to run weekly (e.g. 1am on Sundays) instead of daily, but in that case it would generate even more changes when it runs, so I'm not sure that would help."

Question - if we explicitly exclude the PERFSTAT schema, would that make a difference?

Neven

Chris Cranford

Feb 23, 2026, 5:33:30 AM
to debe...@googlegroups.com
Hi Neven -

Yes, if you are not currently setting `schema.include.list`, you can set `schema.exclude.list` and add `PERFSTAT`; however, depending on your current configuration, it may not make much difference. For example, `table.include.list` already provides schema filtering out of the box if you're specifying `log.mining.query.filter.mode`. What is your configuration?
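For illustration, the exclusion approach would be a one-line addition to the connector config (a sketch, assuming no `schema.include.list` is already set, since the two options are mutually exclusive):

```json
{
  "schema.exclude.list": "PERFSTAT"
}
```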

-cc

Neven MacEwan

Feb 24, 2026, 7:16:23 PM
to debezium
Hi Chris

We don't have either `schema.include.list` or `log.mining.query.filter.mode` set; we've been relying on the schema filtering implied by the `table.include.list` names. We will try setting `log.mining.query.filter.mode` to `in` and `schema.include.list` to the one schema we are tracking. Would you expect that to improve performance?

Config is: (let me know if anything else is out of order - thanks)
{
  "connector.class": "io.debezium.connector.oracle.OracleConnector",
  "errors.log.include.messages": "true",
  "incremental.snapshot.chunk.size": "102400",
  "tasks.max": "1",
  "schema.history.internal.consumer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
  "transforms.Reroute.key.enforce.uniqueness": "false",
  "transforms": "Reroute",
  "log.mining.strategy": "online_catalog",
  "include.schema.changes": "false",
  "schema.history.internal.store.only.captured.tables.ddl": "true",
  "tombstones.on.delete": "false",
  "topic.prefix": "oracle",
  "transforms.Reroute.topic.replacement": "system.fsni.tms.$1-$2.prod",
  "decimal.handling.mode": "string",
  "schema.history.internal.kafka.topic": "schemahistory.tms",
  "schema.history.internal.producer.security.protocol": "SASL_SSL",
  "signal.data.collection": "\"F..Z\".DBZUSER.DEBEZIUM_SIGNAL",
  "lob.enabled": "true",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "errors.log.enable": "true",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.ignore.default.for.nullables": "true",
  "schema.history.internal.producer.sasl.mechanism": "AWS_MSK_IAM",
  "database.encrypt": "false",
  "schema.history.internal.consumer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
  "database.dbname": "F....Z",
  "database.user": "dbzuser",
  "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
  "schema.history.internal.producer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
  "transforms.Reroute.topic.regex": "\\w+\\.(\\w+).(\\w+).*",
  "database.port": "1521",
  "key.converter.schemas.enable": "false",
  "database.hostname": "TMS2020SCAN.Foodstuffs.co.nz",
  "database.password": "${secretManager:mskconnector:/source-FSPRDTM/database.password}",
  "value.converter.schemas.enable": "false",
  "schema.history.internal.consumer.sasl.mechanism": "AWS_MSK_IAM",
  "schema.history.internal.producer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
  "table.include.list": "TRANS_MANAGER.ADDR_T,TRANS_MANAGER.AP_TRNS_T, ...",
  "snapshot.mode": "no_data",
  "schema.history.internal.consumer.security.protocol": "SASL_SSL"
}

Chris Cranford

Feb 25, 2026, 6:48:14 AM
to debe...@googlegroups.com
The biggest helper in this context would simply be setting `log.mining.query.filter.mode` to `in`, but you can use both and measure.

To give some context: without pushing the filters into the LogMiner query, which the above option does, LogMiner returns all operations of interest regardless of which object they occurred on. So if you have a lot of DML changes on non-captured tables, all those changes are sent over the network to the connector, where they are discarded. By pushing the filters into the LogMiner query, only the necessary events reach the connector layer, and Oracle does what it does best: filtering large data sets.
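Applied to the configuration above, that change is a sketch like the following (the `table.include.list` value is elided as in the original; with `in`, the connector builds the LogMiner query with an IN-list over the captured schemas and tables so the filtering happens inside Oracle):

```json
{
  "log.mining.query.filter.mode": "in",
  "table.include.list": "TRANS_MANAGER.ADDR_T,TRANS_MANAGER.AP_TRNS_T, ..."
}
```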

-cc