Oracle source connector fails with a "DML parse" error.


Vladislav P

Feb 10, 2026, 9:27:38 AM
to debezium
Hello everyone.
Please help me figure out a problem.
The connector had been running in production for several months, and now it has failed.

The root cause is: "DebeziumException: Failed to find column COL 29 in table column position cache."

It seems this error could happen again. In the latest release, two columns were added to the table. Is there any way to configure the connector so that it does not crash when new columns are added, or some other way to work around this error?

```
{
  "name": "{{sourceConnectorName}}",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.hostname": "{{sourceDatabaseHost}}",
    "database.port": "{{sourceDatabasePort}}",
    "database.user": "{{sourceDatabaseUser}}",
    "database.password": "{{sourceDatabasePassword}}",
    "database.dbname": "{{sourceDatabaseName}}",
    "table.include.list": "WPMS.DEBEZIUM_SIGNAL,WMS.TBL_SH_ROUTES_POINTS",
    "column.include.list": "WPMS\\.DEBEZIUM_SIGNAL\\.(ID|TYPE|DATA),WMS\\.TBL_SH_ROUTES_POINTS\\.(ID_POINT|ID_ROUTE|LASTDATE|BEGINTIME|ENDTIME)",
    "topic.prefix": "{{topicPrefix}}",
    "database.server.name": "{{topicPrefix}}",
    "schema.history.internal.kafka.topic": "dbz_oracle_wpms_history",
    "schema.history.internal.kafka.bootstrap.servers": "{{kafkaBootstrapServers}}",
    "schema.include.list": "WPMS,WMS",
    "key.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "key.converter.apicurio.registry.url": "{{apicurioRegistryUrl}}",
    "key.converter.apicurio.registry.auto-register": "true",
    "key.converter.apicurio.registry.find-latest": "true",
    "key.converter.schemas.enable": "false",
    "key.converter.apicurio.registry.headers.enabled": "false",
    "key.converter.apicurio.registry.as-confluent": "true",
    "key.converter.apicurio.use-id": "contentId",
    "value.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "value.converter.apicurio.registry.url": "{{apicurioRegistryUrl}}",
    "value.converter.apicurio.registry.auto-register": "true",
    "value.converter.apicurio.registry.find-latest": "true",
    "value.converter.schemas.enable": "false",
    "value.converter.apicurio.registry.headers.enabled": "false",
    "value.converter.apicurio.registry.as-confluent": "true",
    "value.converter.apicurio.use-id": "contentId",
    "schema.name.adjustment.mode": "avro",
    "header.converter": "org.apache.kafka.connect.json.JsonConverter",
    "header.converter.schemas.enable": "true",
    "signal.enable.channels": "source",
    "signal.data.collection": "RCDB.WPMS.DEBEZIUM_SIGNAL",
    "topic.creation.enable": "true",
    "topic.creation.default.replication.factor": 1,
    "topic.creation.default.partitions": 1,
    "topic.creation.default.retention.ms": 345600000,
    "topic.creation.default.cleanup.policy": "delete",
    "tombstones.on.delete": "false",
    "log.mining.strategy": "hybrid",
    "log.mining.query.filter.mode": "in",
    "log.mining.transaction.retention.ms": 900000,
    "log.mining.batch.size.max": 20000000,
    "log.mining.batch.size.default": 5000000,
    "log.mining.batch.size.increment": 1000000,
    "log.cleanup.policy": "delete",
    "log.retention.ms": 345600000,
    "poll.interval.ms": 5,
    "incremental.snapshot.chunk.size": 50000,
    "incremental.snapshot.allow.schema.changes": "true",
    "snapshot.fetch.size": 50000,
    "snapshot.mode": "no_data",
    "schema.history.internal.store.only.captured.tables.ddl": "true",
    "schema.history.internal.skip.unparseable.ddl": "true",
    "snapshot.database.errors.max.retries": 2,
    "internal.log.mining.log.query.max.retries": 15,
    "heartbeat.interval.ms": "10000",
    "heartbeat.action.query": "MERGE INTO WPMS.DEBEZIUM_HEARTBEAT t USING (SELECT 1 id, CURRENT_TIMESTAMP ts FROM dual) s ON (t.id = s.id) WHEN MATCHED THEN UPDATE SET t.ts = s.ts WHEN NOT MATCHED THEN INSERT (id, ts) VALUES (s.id, s.ts)",
    "notification.enabled.channels": "sink,jmx,log",
    "notification.sink.topic.name": "debezium_notifications"
  }
}
```

Vladislav P

Feb 10, 2026, 9:28:48 AM
to debezium
Error log

Attachment: LogiOshibki10-02-2026.txt

Vladislav P

Feb 10, 2026, 10:00:45 AM
to debezium
The new columns are not in "column.include.list", and we do not plan to include them.


Chris Cranford

Feb 10, 2026, 10:49:39 AM
to debe...@googlegroups.com
Hi -

I'm afraid this is a combination of features that the connector cannot currently handle. The issue is that when using `hybrid`, we explicitly expect the relational table model to include all columns and the fact you've expressly omitted the column means that we have no way to decode "COL 29" in this context. As a workaround, are you able to include all columns and then drop them using a transformation?
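A sketch of that workaround, under stated assumptions: the two new columns are called NEW_COL_1 and NEW_COL_2 here only because the thread never names them, and the topic pattern depends on your actual topic.prefix. The idea is to drop the table's entry from "column.include.list" so the relational model sees every column, then strip the unwanted ones with the stock Kafka Connect ReplaceField SMT, scoped to that one table's topic via a TopicNameMatches predicate. Because ReplaceField only operates on top-level fields, the Debezium change-event envelope is unwrapped first with ExtractNewRecordState (also predicated, so signal and heartbeat topics are left untouched); if you need to keep the full before/after envelope, this exact combination will not work as-is.

```json
{
  "transforms": "unwrap,dropNewCols",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.predicate": "isRoutePoints",
  "transforms.dropNewCols.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
  "transforms.dropNewCols.exclude": "NEW_COL_1,NEW_COL_2",
  "transforms.dropNewCols.predicate": "isRoutePoints",

  "predicates": "isRoutePoints",
  "predicates.isRoutePoints.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
  "predicates.isRoutePoints.pattern": ".*\\.WMS\\.TBL_SH_ROUTES_POINTS"
}
```

These keys would be merged into the connector's "config" block. For several tables with identically named columns, define one predicate per table (predicates are comma-separated, like transforms) and attach a separate ReplaceField transform to each, so columns are removed only from the intended table's topic.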

Thanks,
-cc

Vladislav P

Feb 10, 2026, 12:12:53 PM
to debezium
So my only option is to use a transformation? Or can this be handled through the connector configuration somehow?
I recall that when I tested the connector earlier and added columns to a table, it did not crash on the test environment.

I will try the transformation approach. Do you happen to have an example of such filtering for multiple tables that share column names, so that columns are removed only from the intended table?


Chris Cranford

Feb 10, 2026, 3:58:49 PM
to debe...@googlegroups.com
Hi, this can only be influenced through configuration by moving the column filtering to the transformation level.

If this is an isolated incident, you could always try snapshot.mode recovery to get the connector back in an operational position, but the next time the same situation happens, you could face the same problem.  This is really only a short-term solution.  I've already logged a GH Issue regarding the problem [1].
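For reference, the short-term recovery mentioned above is a single setting change. Per the Debezium documentation, "recovery" rebuilds the internal schema history topic from the current table structure rather than taking a data snapshot, so it is only safe when no further schema changes occurred after the connector stopped:

```json
{
  "snapshot.mode": "recovery"
}
```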

Thanks,
-cc

[1]: https://github.com/debezium/dbz/issues/1599