ORA-01555 Debezium Snapshot Error During Production Deployment


Ramesh K

Dec 15, 2025, 10:29:59 PM
to Chris Cranford, debe...@googlegroups.com

Hi Chris,

While deploying a connector for 160 tables (~200 GB) in production using the initial snapshot mode on Debezium 2.5, we encountered the following error:

ORA-01555: snapshot too old: rollback segment too small
java.sql.SQLRecoverableException: Closed Connection
Connector restarted due to RetriableException

Based on the Debezium FAQ, this is due to Oracle undo tablespace limitations during long-running queries.

Proposed Approach (without increasing UNDO retention):

  • Deploy the connector with snapshot.mode=schema_only to skip the full initial snapshot.
  • Trigger incremental snapshots via the signaling table (sketched below).
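
For concreteness, triggering the incremental snapshot is an INSERT into the signaling table, roughly like this (the signal id and object names are placeholders, not our real values):

-- Ad-hoc incremental snapshot signal; the signaling table uses the
-- standard id / type / data columns.
-- "DBNAME.DBUSER.ORDERS" is a placeholder; use the same fully-qualified
-- form as in signal.data.collection.
INSERT INTO DBUSER.DEBEZIUM_SIGNAL (id, type, data)
VALUES (
  'adhoc-snapshot-1',
  'execute-snapshot',
  '{"data-collections": ["DBNAME.DBUSER.ORDERS"], "type": "incremental"}'
);
COMMIT;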

Request for Guidance

Could you please confirm the exact steps we should follow for this approach in production?

  • Should we first deploy with schema_only mode and then trigger incremental snapshots via the signaling table?
  • Any additional best practices you recommend for large datasets?



Regards,
Ramesh

Chris Cranford

Dec 16, 2025, 8:31:55 PM
to debe...@googlegroups.com
Hi Ramesh -

Assuming you have correctly configured the connector for signaling and incremental snapshots, then yes, using `schema_only` snapshot mode paired with incremental snapshots is the one and only way to avoid ORA-01555 for large tables with limited undo retention.
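
For reference, the only connector properties specific to this approach are (placeholder values):

    "snapshot.mode": "schema_only",
    "signal.data.collection": "DBNAME.DBUSER.DEBEZIUM_SIGNAL",

and note the signaling table must be writable by the connector user and included in table.include.list.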

-cc

Ramesh K

Dec 29, 2025, 1:56:46 AM
to debe...@googlegroups.com
Hi Chris,

Thank you for the update.

We’ve switched to schema_only mode and successfully performed incremental snapshots via the signaling table. The ORA-01555 issue is resolved. However, we are now encountering the following challenges:

  1. Archive Log Error

ORA-00308: cannot open archived log  
'/oracle/******db/oraarch/DB_1_848887_754639603.arc'  
ORA-27037: unable to obtain file status  
Linux-x86_64 Error: 2: No such file or directory 

  2. Debezium Behavior
    • Multiple restarts observed
    • Repeated snapshots for the same tables

Could you advise on the best approach to resolve these?

  • Is it recommended to configure a secondary archive log destination with an increased Oracle archive log retention period?
  • Can Debezium be pointed to use this secondary archive log destination?
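
Roughly, what we have in mind on the Oracle side is the following (the destination number and path are illustrative only):

-- Add a second local archive destination, to be retained longer by the
-- DBA's cleanup jobs (illustrative path and destination number).
ALTER SYSTEM SET log_archive_dest_5='LOCATION=/u01/oraarch_secondary' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_5='ENABLE' SCOPE=BOTH;
-- Debezium would then be pointed at it via:
--   "log.mining.archive.destination.name": "LOG_ARCHIVE_DEST_5"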
Your guidance on these points would be greatly appreciated.

Thanks,
Ramesh


Ramesh K

Dec 29, 2025, 5:07:46 AM
to debe...@googlegroups.com

Additional Query:
If we stop the Debezium connector after snapshotting a large volume of data, is there a way to resume the snapshot from where it left off once the connector is restarted?

We’re considering this because, due to the issues mentioned earlier, we plan to temporarily stop Debezium and restart it after resolution. Ideally, we’d like to avoid reprocessing the entire snapshot.

Could you confirm if this is supported and, if so, outline the recommended approach?

Thanks,

Ramesh.

Chris Cranford

Jan 5, 2026, 2:10:36 AM
to debe...@googlegroups.com
Hi Ramesh -

With incremental snapshots, the snapshot does not start over; it resumes from where it left off. However, in your case, due to ORA-00308, you're likely clearing the connector offsets, and that also clears the incremental snapshot state. So upon a restart, incremental snapshots must begin again from the start.
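
As an aside, if you need to take the connector down cleanly mid-snapshot, the signaling table also accepts pause and resume signals, along these lines (a sketch, using the signaling table from your configuration):

-- Pause the running incremental snapshot before stopping the connector.
INSERT INTO DBUSER.DEBEZIUM_SIGNAL (id, type, data)
VALUES ('pause-1', 'pause-snapshot', '{}');
COMMIT;
-- ...and later, to resume from where it left off:
INSERT INTO DBUSER.DEBEZIUM_SIGNAL (id, type, data)
VALUES ('resume-1', 'resume-snapshot', '{}');
COMMIT;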

What is your archive log retention policy?

-cc

Ramesh K

Jan 5, 2026, 3:08:41 AM
to debe...@googlegroups.com, Chris Cranford

Hi Chris,

The DBA has configured a second archive log destination with a 5-day retention period to address the ORA-00308 error. I updated the Debezium configuration to use "log.mining.archive.destination.name": "LOG_ARCHIVE_DEST_5", but I am still encountering the same error. The logs also show that Debezium is not using the new archive destination specified in the configuration.
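
To double-check on the database side, the destination can be confirmed with a query along these lines (sketch):

SELECT dest_name, status, destination
  FROM V$ARCHIVE_DEST
 WHERE dest_name = 'LOG_ARCHIVE_DEST_5';
-- For Debezium to mine from it, the destination should be local and VALID.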

I’ve included the error details and the full connector configuration below. Please advise on the next steps.

Error:

Caused by: java.sql.SQLException: ORA-00308: cannot open archived log '/oracle/***/oraarch/****_1_850244_754639603.arc'  

ORA-27037: unable to obtain file status  
Linux-x86_64 Error: 2: No such file or directory 


Full connector config:

{
  "name": "****",
  "config": {
    "topic.creation.default.partitions": "3",
    "incremental.snapshot.chunk.size": "1000000",
    "schema.history.internal.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"*****\" password=\"*****\";",
    "value.converter.schema.registry.basic.auth.user.info": "******9",
    "schema.history.internal.kafka.topic": "*****",
    "schema.history.internal.producer.security.protocol": "SASL_SSL",
    "topic.creation.default.replication.factor": "-1",
    "schema.history.internal.producer.sasl.mechanism": "PLAIN",
    "schema.history.internal.consumer.ssl.endpoint.identification.algorithm": "https",
    "value.converter.schema.registry.basic.auth.credentials.source": "USER_INFO",
    "schema.history.internal.kafka.bootstrap.servers": "*****",
    "schema.history.internal.producer.ssl.endpoint.identification.algorithm": "https",
    "value.converter.schema.registry.url": "ht*****",
    "schema.history.internal.consumer.sasl.mechanism": "PLAIN",
    "schema.history.internal.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"*****\" password=\"***\";",
    "key.converter.schema.registry.basic.auth.user.info": "**:*****",
    "key.converter.schema.registry.basic.auth.credentials.source": "USER_INFO",
    "key.converter.schema.registry.url": "https*******",
    "schema.history.internal.consumer.security.protocol": "SASL_SSL",
    "name": "****",
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.drop.tombstones": "false",
    "topic.prefix": "****",
    "database.hostname": "****",
    "database.port": "**",
    "database.user": "***",
    "database.password": "***",
    "database.dbname": "***",
    "snapshot.mode": "schema_only",
    "log.mining.strategy": "online_catalog",
    "decimal.handling.mode": "double",
    "log.mining.archive.destination.name": "LOG_ARCHIVE_DEST_5",
    "log.mining.query.filter.mode": "none",
    "schema.history.internal.skip.unparseable.ddl": "true",
    "schema.history.internal.store.only.captured.tables.ddl": "true",
    "schema.history.internal.store.only.captured.databases.ddl": "true",
    "heartbeat.interval.ms": "60000",
    "signal.data.collection": "**.DBUSER.DEBEZIUM_SIGNAL",
    "column.exclude.list": "****",
    "table.include.list": "***"
