Oracle / XStream Setup


Ajay Bhatnagar

Jan 8, 2026, 3:05:33 PM
to debezium
Hi Folks,

We are running into some issues trying to set up Debezium with Oracle 19c via XStream.

We don't see any errors in the Debezium logs, but the XStream outbound server remains in the DETACHED state, as shown below:

SELECT SERVER_NAME, STATUS FROM DBA_XSTREAM_OUTBOUND;

DBZXOUT, DETACHED

Also sharing the results of this query:

SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE
WHERE CAPTURE_NAME = 'CAP$_DBZXOUT_1'; 

CAP$_DBZXOUT_1, ENABLED
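For completeness, we also checked the connect user and capture linkage for the outbound server (query sketch against the Oracle data dictionary; DBZXOUT is our outbound server name):

    SELECT server_name, connect_user, capture_name, source_database
      FROM DBA_XSTREAM_OUTBOUND
     WHERE server_name = 'DBZXOUT';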

TIA,

Ajay

Chris Cranford

Jan 8, 2026, 7:02:58 PM
to debe...@googlegroups.com
Hi Ajay -

Please enable TRACE logging for io.debezium, restart the connector, and share the full connector log if it does not enter attached mode.
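With the stock Apache Kafka Connect distribution this is a one-line change in the worker's log4j configuration (the file name varies by distribution; connect-log4j.properties is the Apache default):

    log4j.logger.io.debezium=TRACE

Restart the worker afterwards so the change takes effect.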

Thanks,
-cc
--
You received this message because you are subscribed to the Google Groups "debezium" group.
To unsubscribe from this group and stop receiving emails from it, send an email to debezium+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/debezium/5659192a-57f3-4cef-a91f-0c2c377396b2n%40googlegroups.com.

Ajay Bhatnagar

Jan 12, 2026, 5:36:40 PM
to debe...@googlegroups.com
Hi Chris

Attached are the logs and the connector configuration.

Thanks

oracle_outbox_connector-qa2.txt
connector-config.yaml

Ajay Bhatnagar

Jan 12, 2026, 10:07:43 PM
to debe...@googlegroups.com
The connector was restarted right before the log capture, so you may see a lot of noise because of that. Let me know if I can provide any further context.

Thanks!

Chris Cranford

Jan 13, 2026, 8:59:43 AM
to debe...@googlegroups.com
Hi Ajay -

I reviewed the logs, and there are no `io.debezium` entries anywhere in the file, so I am afraid there isn't much we can learn from them. If the connector was restarted, is there a reason there are no log entries?

-cc

Ajay Bhatnagar

Feb 5, 2026, 11:18:03 PM
to debe...@googlegroups.com
Hi Chris,

Sorry this took longer than expected. Attached are the logs with io.debezium set to TRACE, and the config for the connector.

Question: In the logs I see the following for a large number of tables, even though the `table.include.list` entry in the config specifies just one table.

Adding table foo to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]

Thanks for your help,

Ajay

share.zip

Chris Cranford

Feb 6, 2026, 4:44:55 PM
to debe...@googlegroups.com
Hi Ajay -

Regarding your question, please check out `schema.history.internal.store.only.captured.tables.ddl` [1]. You likely want to set this to `true` so that the connector does not track the schema history associated with non-captured tables.
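That is a single extra line in your connector configuration, e.g.:

    schema.history.internal.store.only.captured.tables.ddl=true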

However, looking at your logs I believe your issue might be with your schema.history.internal.* setup, because Kafka Connect is having trouble talking with the broker:

    Node -1 disconnected.   [org.apache.kafka.clients.NetworkClient]
    Cancelled in-flight API_VERSIONS request with correlation id 180 due to node -1 being disconnected (elapsed time since creation: 5105ms, elapsed time since send: 5105ms, throttle time: 0ms, request timeout: 30000ms)
    Bootstrap broker pkc-41973.westus2.azure.confluent.cloud:9092 (id: -1 rack: null isFenced: false) disconnected
    Rebootstrapping with [pkc-41973.westus2.azure.confluent.cloud/20.57.154.26:9092

I am seeing the following settings set:

    schema.history.internal.consumer.sasl.jaas.config
    schema.history.internal.kafka.topic
    schema.history.internal.kafka.bootstrap.servers
    schema.history.internal.consumer.sasl.mechanism
    schema.history.internal.consumer.security.protocol

First, I do not see any `schema.history.internal.producer.*` settings, which I would have expected. Second, I'd ask you to double-check the configuration values to make sure they're correct. Typically you will have a series of identical properties and values: one set prefixed with `schema.history.internal.producer.*` and the other with `schema.history.internal.consumer.*`. This is required because the schema history setup is both a producer and a consumer.
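As a sketch of the shape I'd expect for a Confluent Cloud broker (SASL_SSL/PLAIN assumed; substitute your own broker, topic, and credentials):

    schema.history.internal.kafka.bootstrap.servers=<broker>:9092
    schema.history.internal.kafka.topic=<schema-history-topic>

    schema.history.internal.producer.security.protocol=SASL_SSL
    schema.history.internal.producer.sasl.mechanism=PLAIN
    schema.history.internal.producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<api-key>" password="<api-secret>";

    schema.history.internal.consumer.security.protocol=SASL_SSL
    schema.history.internal.consumer.sasl.mechanism=PLAIN
    schema.history.internal.consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<api-key>" password="<api-secret>";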

Thanks,
-cc

[1]: https://debezium.io/documentation/reference/stable/connectors/oracle.html#oracle-property-database-history-store-only-captured-tables-ddl

Ajay Bhatnagar

Feb 9, 2026, 3:14:08 PM
to debe...@googlegroups.com
Hi Chris,

Your suggestions helped. I see the snapshot complete successfully and the read events for the pre-existing data being published to the topic. However, subsequent data changes are still not being streamed by the Debezium connector to the topic, and the log does not show any errors either. Please review the attached log and advise.

Thanks,

Ajay

log.txt.zip

Ajay Bhatnagar

Feb 9, 2026, 3:23:26 PM
to debe...@googlegroups.com
Also, why do I see the following two messages multiple times in the logs?

kafka-connect-1  | 2026-02-09T20:18:47,812 INFO   ||  WorkerSourceTask{id=oracle-xstream-connector2-0} Committing offsets for 5 acknowledged messages   [org.apache.kafka.connect.runtime.WorkerSourceTask]

kafka-connect-1  | 2026-02-09T20:19:47,920 INFO   ||  WorkerSourceTask{id=oracle-xstream-connector2-0} Committing offsets for 6 acknowledged messages   [org.apache.kafka.connect.runtime.WorkerSourceTask]

Chris Cranford

Feb 10, 2026, 9:13:56 AM
to debe...@googlegroups.com
Hi Ajay -

If you see snapshot data but then you are not seeing streaming changes, this likely means one of a few things:

    1. Check that the XStream Outbound Server filters are correctly setup [1].
    2. Enable TRACE logging and see if the logs share any more insight.

If you've already done (2) and are not seeing any events, e.g. log entries like:

    Received LCR ....

Then it's most likely a misconfiguration on the XStream side in (1).
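As a sketch for (1), the rules attached to the outbound server can be listed from the data dictionary (DBZXOUT assumed from your earlier query; column names from the DBA_XSTREAM_RULES view):

    SELECT rule_name, streams_type, schema_name, object_name
      FROM DBA_XSTREAM_RULES
     WHERE streams_name = 'DBZXOUT';

The schema and table you expect to stream should be covered by the rules returned here; if they aren't, the outbound server will silently filter those changes out.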

Lastly, the log entries you shared in your most recent response are from Kafka Connect. These are logged when the offset flush interval is reached. So if you have a very small offset.flush.interval.ms configured, you will see these messages more frequently.
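For reference, this is a Connect worker setting, e.g.:

    # Kafka Connect default; roughly matches the once-a-minute
    # cadence between the two log entries you quoted
    offset.flush.interval.ms=60000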

Hope that helps.
-cc

[1]: https://debezium.io/documentation/reference/stable/connectors/oracle.html#_create_an_xstream_outbound_server