Debezium Oracle: reconfigured parameter does not take effect


FairyTail1279

May 7, 2022, 8:12:05 AM5/7/22
to debezium

Hi all,

I have created a new Debezium Oracle connector and then reconfigured it through http://localhost:8083/connectors/cdc1_source_cdc/config to change the value of "table.include.list", but the result is the startup log shown after the sketch below.
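For reference, the reconfiguration call looks roughly like this (a minimal sketch using Python's requests library; the connector name and configuration values are the ones that appear in the startup log, anything beyond that is an assumption about my setup):

import requests

# Sketch only: PUT /connectors/{name}/config replaces the whole configuration,
# so every option has to be sent again, not only the changed "table.include.list".
# The values below are copied from the startup log further down.
config = {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.server.name": "cdc1",
    "database.dbname": "cdc",
    "database.url": "jdbc:oracle:thin:@xxxxxxx:1521:cdcdb",
    "database.user": "xxx",
    "database.password": "********",
    "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
    "database.history.kafka.topic": "cdc1.DBZUSER.DUMMY.schema",
    "table.include.list": "DBZUSER.DUMMY, BDE.TEST_TABLE",
    # ... plus the remaining options (lob.enabled, decimal.handling.mode, etc.)
    #     exactly as they appear in the startup log.
}

response = requests.put(
    "http://localhost:8083/connectors/cdc1_source_cdc/config",
    json=config,
)
response.raise_for_status()
print(response.json())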

[2022-05-07 18:47:24,989] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1406)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0] Starting OracleConnectorTask with configuration: (io.debezium.connector.common.BaseSourceTask:124)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    connector.class = io.debezium.connector.oracle.OracleConnector (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.user = xxx (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.dbname = cdc (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.history.kafka.bootstrap.servers = 127.0.0.1:9092 (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.history.kafka.topic = cdc1.DBZUSER.DUMMY.schema (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.url = jdbc:oracle:thin:@xxxxxxx:1521:cdcdb (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    time.precision.mode = connect (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.server.name = cdc1 (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.tablename.case.insensitive = true (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    offset.flush.timeout.ms = 60000 (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    decimal.handling.mode = string (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    binary.handling.mode = base64 (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    task.class = io.debezium.connector.oracle.OracleConnectorTask (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    database.password = ******** (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    name = cdc1_source_cdc (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    lob.enabled = true (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    interval.handling.mode = string (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,990] INFO [cdc1_source_cdc|task-0]    table.include.list = DBZUSER.DUMMY, BDE.TEST_TABLE (io.debezium.connector.common.BaseSourceTask:126)

[2022-05-07 18:47:24,991] INFO [cdc1_source_cdc|task-0] [Producer clientId=connector-producer-cdc1_source_cdc-0] Cluster ID: vCi0HSZrQne3_ECNkIHVLg (org.apache.kafka.clients.Metadata:287)

[2022-05-07 18:47:25,444] INFO [cdc1_source_cdc|task-0] Database Version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production (io.debezium.connector.oracle.OracleConnection:74)

[2022-05-07 18:47:25,762] INFO [cdc1_source_cdc|task-0] KafkaDatabaseHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=cdc1-dbhistory, bootstrap.servers=127.0.0.1:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=cdc1-dbhistory} (io.debezium.relational.history.KafkaDatabaseHistory:243)

[2022-05-07 18:47:25,762] INFO [cdc1_source_cdc|task-0] KafkaDatabaseHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, max.block.ms=10000, bootstrap.servers=127.0.0.1:9092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=cdc1-dbhistory, linger.ms=0} (io.debezium.relational.history.KafkaDatabaseHistory:244)

[2022-05-07 18:47:25,763] INFO [cdc1_source_cdc|task-0] Requested thread factory for connector OracleConnector, id = cdc1 named = db-history-config-check (io.debezium.util.Threads:270)

[2022-05-07 18:47:25,764] ERROR [cdc1_source_cdc|task-0] WorkerSourceTask{id=cdc1_source_cdc-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:195)

java.lang.RuntimeException: Unable to register the MBean 'debezium.oracle:type=connector-metrics,context=schema-history,server=cdc1'

    at io.debezium.metrics.Metrics.register(Metrics.java:77)

    at io.debezium.relational.history.DatabaseHistoryMetrics.started(DatabaseHistoryMetrics.java:95)

    at io.debezium.relational.history.AbstractDatabaseHistory.start(AbstractDatabaseHistory.java:82)

    at io.debezium.relational.history.KafkaDatabaseHistory.start(KafkaDatabaseHistory.java:261)

    at io.debezium.relational.HistorizedRelationalDatabaseSchema.<init>(HistorizedRelationalDatabaseSchema.java:42)

    at io.debezium.connector.oracle.OracleDatabaseSchema.<init>(OracleDatabaseSchema.java:38)

    at io.debezium.connector.oracle.OracleConnectorTask.start(OracleConnectorTask.java:63)

    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)

    at org.apache.kafka.connect.runtime.WorkerSourceTask.initializeAndStart(WorkerSourceTask.java:225)

    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)

    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)

    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)

    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)

    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)

    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)

    at java.base/java.lang.Thread.run(Thread.java:833)

Caused by: javax.management.InstanceAlreadyExistsException: debezium.oracle:type=connector-metrics,context=schema-history,server=cdc1

    at java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)

    at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1862)

    at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:957)

    at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:892)

    at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:317)

    at java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:523)

    at io.debezium.metrics.Metrics.register(Metrics.java:73)

    ... 15 more

[2022-05-07 18:47:25,766] INFO [cdc1_source_cdc|task-0] Stopping down connector (io.debezium.connector.common.BaseSourceTask:238)

[2022-05-07 18:47:25,803] INFO [cdc1_source_cdc|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:956)

[2022-05-07 18:47:25,803] INFO [cdc1_source_cdc|task-0] [Producer clientId=connector-producer-cdc1_source_cdc-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1228)

[2022-05-07 18:47:25,805] INFO [cdc1_source_cdc|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)

[2022-05-07 18:47:25,805] INFO [cdc1_source_cdc|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)

[2022-05-07 18:47:25,805] INFO [cdc1_source_cdc|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)

[2022-05-07 18:47:25,806] INFO [cdc1_source_cdc|task-0] App info kafka.producer for connector-producer-cdc1_source_cdc-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)

[2022-05-07 18:47:34,992] INFO [cdc1_source_cdc|task-0|offsets] WorkerSourceTask{id=cdc1_source_cdc-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:484)

[2022-05-07 18:47:42,083] INFO [cdc1_source_cdc|task-0] startScn=35049609, endScn=35051052 (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:202)

[2022-05-07 18:47:42,085] INFO [cdc1_source_cdc|task-0] Streaming metrics dump: OracleStreamingChangeEventSourceMetrics{currentScn=35051052, oldestScn=null, committedScn=null, offsetScn=null, logMinerQueryCount=1, totalProcessedRows=0, totalCapturedDmlCount=0, totalDurationOfFetchingQuery=PT9.110399S, lastCapturedDmlCount=0, lastDurationOfFetchingQuery=PT9.110399S, maxCapturedDmlCount=0, maxDurationOfFetchingQuery=PT9.110399S, totalBatchProcessingDuration=PT45.322361S, lastBatchProcessingDuration=PT45.322361S, maxBatchProcessingThroughput=0, currentLogFileName=[D:\HOME\ORACLE\ORADATA\CDCDB_LOCATION\CDCDB\REDO02.LOG], minLogFilesMined=1, maxLogFilesMined=1, redoLogStatus=[D:\HOME\ORACLE\ORADATA\CDCDB_LOCATION\CDCDB\REDO03.LOG | ACTIVE, D:\HOME\ORACLE\ORADATA\CDCDB_LOCATION\CDCDB\REDO01.LOG | ACTIVE, D:\HOME\ORACLE\ORADATA\CDCDB_LOCATION\CDCDB\REDO02.LOG | CURRENT], switchCounter=0, batchSize=20000, millisecondToSleepBetweenMiningQuery=1200, hoursToKeepTransaction=0, networkConnectionProblemsCounter0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0, sleepTimeMax=3000, sleepTimeIncrement=200, totalParseTime=PT0S, totalStartLogMiningSessionDuration=PT35.530858S, lastStartLogMiningSessionDuration=PT35.530858S, maxStartLogMiningSessionDuration=PT35.530858S, totalProcessTime=PT45.322361S, minBatchProcessTime=PT45.322361S, maxBatchProcessTime=PT45.322361S, totalResultSetNextTime=PT0S, lagFromTheSource=DurationPT0S, maxLagFromTheSourceDuration=PT0S, minLagFromTheSourceDuration=PT0S, lastCommitDuration=PT0S, maxCommitDuration=PT0S, activeTransactions=0, rolledBackTransactions=0, committedTransactions=0, abandonedTransactionIds=[], rolledbackTransactionIds=[], registeredDmlCount=0, committedDmlCount=0, errorCount=0, warningCount=0, scnFreezeCount=0, unparsableDdlCount=0, miningSessionUserGlobalAreaMemory=10222736, miningSessionUserGlobalAreaMaxMemory=39818824, miningSessionProcessGlobalAreaMemory=36885080, miningSessionProcessGlobalAreaMaxMemory=36885080} (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:203)

[2022-05-07 18:47:42,085] INFO [cdc1_source_cdc|task-0] Offsets: OracleOffsetContext [scn=35049609, commit_scn=null] (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:204)

[2022-05-07 18:47:42,085] INFO [cdc1_source_cdc|task-0] Finished streaming (io.debezium.pipeline.ChangeEventSourceCoordinator:175)

[2022-05-07 18:47:42,085] INFO [cdc1_source_cdc|task-0] Connected metrics set to 'false' (io.debezium.pipeline.ChangeEventSourceCoordinator:234)

[2022-05-07 18:47:42,139] INFO [cdc1_source_cdc|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:956)

[2022-05-07 18:47:42,152] INFO [cdc1_source_cdc|task-0] [Producer clientId=cdc1-dbhistory] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1228)

[2022-05-07 18:47:42,154] INFO [cdc1_source_cdc|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)

[2022-05-07 18:47:42,154] INFO [cdc1_source_cdc|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)

[2022-05-07 18:47:42,155] INFO [cdc1_source_cdc|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)

[2022-05-07 18:47:42,155] INFO [cdc1_source_cdc|task-0] App info kafka.producer for cdc1-dbhistory unregistered (org.apache.kafka.common.utils.AppInfoParser:83)

[2022-05-07 18:47:42,155] INFO [cdc1_source_cdc|task-0] [Producer clientId=connector-producer-cdc1_source_cdc-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1228)

[2022-05-07 18:47:42,155] INFO [cdc1_source_cdc|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)

[2022-05-07 18:47:42,155] INFO [cdc1_source_cdc|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)

[2022-05-07 18:47:42,155] INFO [cdc1_source_cdc|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)

[2022-05-07 18:47:42,155] INFO [cdc1_source_cdc|task-0] App info kafka.producer for connector-producer-cdc1_source_cdc-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)

[2022-05-07 18:47:44,997] INFO [cdc1_source_cdc|task-0|offsets] WorkerSourceTask{id=cdc1_source_cdc-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:484)

[2022-05-07 18:47:55,001] INFO [cdc1_source_cdc|task-0|offsets] WorkerSourceTask{id=cdc1_source_cdc-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:484)

As a result of the above, when I update rows in the TEST_TABLE table, no topic is created for TEST_TABLE. Please help me figure out the cause and a solution.

Thanks.

FairyTail.



Chris Cranford

May 9, 2022, 9:00:14 AM5/9/22
to debe...@googlegroups.com, FairyTail1279
Hi Fairy,

It would seem that you have another connector already running with `database.server.name` set to `cdc1`.  You cannot have multiple tasks running simultaneously with the same `database.server.name`; otherwise the metrics MBeans cannot be registered because of the naming conflict, and the additional connectors are terminated.  Please make sure the values for this setting are unique if you are running multiple connectors.  If you are running only one, please make sure the prior connector has stopped before trying to start a new one with the same name.
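As a quick check, you can ask the worker which connectors are registered and what state their tasks are in (a rough sketch against the standard Kafka Connect REST API, using Python's requests library and assuming the same localhost:8083 worker as in your example):

import requests

BASE = "http://localhost:8083"

# List every connector registered on the Connect cluster, then print the
# connector state and the state of each of its tasks. Two running entries
# that share the same `database.server.name` would explain the MBean conflict.
for name in requests.get(f"{BASE}/connectors").json():
    status = requests.get(f"{BASE}/connectors/{name}/status").json()
    print(name,
          status["connector"]["state"],
          [task["state"] for task in status["tasks"]])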

Thanks,
Chris

FairyTail1279

May 9, 2022, 10:29:02 AM5/9/22
to debezium
Hi Chris,
      I only updated the configuration of the old connector.

Thanks.
FairyTail

On Monday, May 9, 2022 at 20:00:14 UTC+7, Chris Cranford wrote:

Chris Cranford

May 9, 2022, 11:05:04 AM5/9/22
to debe...@googlegroups.com, FairyTail1279
Hi Fairy -

Please take a look at the logs just before the new connector started.  Do you see Kafka Connect report that the prior instance failed to stop gracefully within the timeout period?  My guess is that the old connector was still running a JDBC operation that was taking longer than expected, and Kafka Connect decided to start the new instance without waiting for the old instance to conclude safely.
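Once the older instance has actually stopped and released its MBean, the killed task also has to be restarted manually, as the worker log states. Something along these lines would do it (again just a sketch against the Kafka Connect REST API, same localhost:8083 assumption):

import requests

BASE = "http://localhost:8083"
NAME = "cdc1_source_cdc"

# Inspect the task state first; the log shows task 0 was killed.
status = requests.get(f"{BASE}/connectors/{NAME}/status").json()
print(status["tasks"])

# Restart the failed task once the previous instance has fully shut down.
requests.post(f"{BASE}/connectors/{NAME}/tasks/0/restart").raise_for_status()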

Chris