2024-02-27 03:52:32,624 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Subscribed to topic(s): idcstate [org.apache.kafka.clients.consumer.KafkaConsumer]
2024-02-27 03:52:32,640 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,641 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,641 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || tasks.max = 1 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || topics = idcstate [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || connection.username = DW_STAGE [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || quote.identifiers = false [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || name = sink-connector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || primary.key.fields = id [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || connection.url = jdbc:oracle:thin:@*****:1521/*** [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:32,642 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
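For reference, the settings logged line-by-line above correspond to a sink-connector registration payload along the following lines. This is only a reconstruction from this log; the masked connection URL and password are left masked exactly as logged:

```json
{
  "name": "sink-connector",
  "config": {
    "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "idcstate",
    "connection.url": "jdbc:oracle:thin:@*****:1521/***",
    "connection.username": "DW_STAGE",
    "connection.password": "********",
    "insert.mode": "upsert",
    "primary.key.mode": "record_key",
    "primary.key.fields": "id",
    "schema.evolution": "basic",
    "quote.identifiers": "false"
  }
}
```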
2024-02-27 03:52:32,772 INFO || HHH000412: Hibernate ORM core version 6.1.7.Final [org.hibernate.Version]
2024-02-27 03:52:33,217 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator]
2024-02-27 03:52:33,221 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:oracle:thin:@****:1521/** [org.hibernate.orm.connections.pooling.c3p0]
2024-02-27 03:52:33,222 INFO || HHH10001001: Connection properties: {password=****, user=DW_STAGE} [org.hibernate.orm.connections.pooling.c3p0]
2024-02-27 03:52:33,222 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0]
2024-02-27 03:52:33,222 WARN || HHH10001006: No JDBC Driver class was specified by property hibernate.connection.driver_class [org.hibernate.orm.connections.pooling.c3p0]
2024-02-27 03:52:33,233 INFO || MLog clients using slf4j logging. [com.mchange.v2.log.MLog]
2024-02-27 03:52:33,277 INFO || Initializing c3p0-0.9.5.5 [built 11-December-2019 22:18:33 -0800; debug? true; trace: 10] [com.mchange.v2.c3p0.C3P0Registry]
2024-02-27 03:52:33,305 INFO || HHH10001007: JDBC isolation level: <unknown> [org.hibernate.orm.connections.pooling.c3p0]
2024-02-27 03:52:33,320 INFO || Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@94b1be16 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@5772b6e2 [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 1bqvnr9b11skawvcsz1zer|2724045f, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@4a905b63 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 1bqvnr9b11skawvcsz1zer|7759bfd7, jdbcUrl -> jdbc:oracle:thin:@*****:1521/***, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 1bqvnr9b11skawvcsz1zer|58cc6ca9, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource]
2024-02-27 03:52:33,733 INFO || HHH000400: Using dialect: org.hibernate.dialect.OracleDialect [SQL dialect]
2024-02-27 03:52:34,479 INFO || HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator]
2024-02-27 03:52:34,501 INFO || Using dialect io.debezium.connector.jdbc.dialect.oracle.OracleDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver]
2024-02-27 03:52:34,568 INFO || Database version 11.2.0 [io.debezium.connector.jdbc.JdbcChangeEventSink]
2024-02-27 03:52:34,568 INFO || WorkerSinkTask{id=sink-connector-0} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask]
2024-02-27 03:52:34,574 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Cluster ID: 1sJWvb3HTVaL9hF5TdTmfQ [org.apache.kafka.clients.Metadata]
2024-02-27 03:52:34,575 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Discovered group coordinator kafka:29092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:34,576 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:34,587 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Request joining group due to: need to re-join with the given member-id [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:34,587 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:37,597 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Successfully joined group with generation Generation{generationId=1, memberId='connector-consumer-sink-connector-0-2c411905-a6c7-4ace-863b-efcefabda43d', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:37,603 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Finished assignment for group at generation 1: {connector-consumer-sink-connector-0-2c411905-a6c7-4ace-863b-efcefabda43d=Assignment(partitions=[idcstate-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:37,612 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Successfully synced group in generation Generation{generationId=1, memberId='connector-consumer-sink-connector-0-2c411905-a6c7-4ace-863b-efcefabda43d', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:37,612 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Notifying assignor about the new Assignment(partitions=[idcstate-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:37,612 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Adding newly assigned partitions: idcstate-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:37,630 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Found no committed offset for partition idcstate-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:52:37,636 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Resetting offset for partition idcstate-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-02-27 03:52:38,766 WARN || SQL Error: 903, SQLState: 42000 [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2024-02-27 03:52:38,766 ERROR || ORA-00903: invalid table name [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2024-02-27 03:52:38,780 ERROR || Failed to process record: Failed to process a sink record [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2024-02-27 03:52:43,466 INFO || WorkerSourceTask{id=source-connector-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2024-02-27 03:53:38,786 ERROR || WorkerSinkTask{id=sink-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: JDBC sink connector failure [org.apache.kafka.connect.runtime.WorkerSinkTask]
org.apache.kafka.connect.errors.ConnectException: JDBC sink connector failure
at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:78)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:241)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:71)
at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:87)
... 11 more
Caused by: jakarta.persistence.PersistenceException: Converting `org.hibernate.exception.SQLGrammarException` to JPA `PersistenceException` : JDBC exception executing SQL [MERGE INTO PUBLIC.IDCSTATE USING (SELECT ? id, ? shortname, ? longname, ? readingsname, ? type, ? version, ? description, ? namespaceid, ? status, ? statuslastmodified, ? displaykey, ? guid, ? clsfid, ? createddate, ? modifieddate, ? workflowstatemapid, ? lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id)]
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:165)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:175)
at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:654)
at io.debezium.connector.jdbc.JdbcChangeEventSink.writeUpsert(JdbcChangeEventSink.java:257)
at io.debezium.connector.jdbc.JdbcChangeEventSink.write(JdbcChangeEventSink.java:216)
at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:68)
... 12 more
Caused by: org.hibernate.exception.SQLGrammarException: JDBC exception executing SQL [MERGE INTO PUBLIC.IDCSTATE USING (SELECT ? id, ? shortname, ? longname, ? readingsname, ? type, ? version, ? description, ? namespaceid, ? status, ? statuslastmodified, ? displaykey, ? guid, ? clsfid, ? createddate, ? modifieddate, ? workflowstatemapid, ? lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id)]
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:64)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:56)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:97)
at org.hibernate.query.sql.internal.NativeNonSelectQueryPlanImpl.executeUpdate(NativeNonSelectQueryPlanImpl.java:78)
at org.hibernate.query.sql.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:820)
at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:643)
... 15 more
Caused by: java.sql.SQLSyntaxErrorException: ORA-00903: invalid table name
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:509)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:461)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1104)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:553)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:269)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:655)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:270)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:91)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:970)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1205)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3666)
at oracle.jdbc.driver.T4CPreparedStatement.executeInternal(T4CPreparedStatement.java:1426)
at oracle.jdbc.driver.OraclePreparedStatement.executeLargeUpdate(OraclePreparedStatement.java:3756)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3736)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1063)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502)
at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:84)
... 18 more
Caused by: Error : 903, Position : 11,
Sql = MERGE INTO PUBLIC.IDCSTATE USING (SELECT :1 id, :2 shortname, :3 longname, :4 readingsname, :5 type, :6 version, :7 description, :8 namespaceid, :9 status, :10 statuslastmodified, :11 displaykey, :12 guid, :13 clsfid, :14 createddate, :15 modifieddate, :16 workflowstatemapid, :17 lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id),
OriginalSql = MERGE INTO PUBLIC.IDCSTATE USING (SELECT ? id, ? shortname, ? longname, ? readingsname, ? type, ? version, ? description, ? namespaceid, ? status, ? statuslastmodified, ? displaykey, ? guid, ? clsfid, ? createddate, ? modifieddate, ? workflowstatemapid, ? lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id), Error
Msg = ORA-00903: invalid table name
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:513)
... 34 more
2024-02-27 03:53:38,809 ERROR || WorkerSinkTask{id=sink-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:609)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:241)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: JDBC sink connector failure
at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:78)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
... 10 more
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:71)
at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:87)
... 11 more
Caused by: jakarta.persistence.PersistenceException: Converting `org.hibernate.exception.SQLGrammarException` to JPA `PersistenceException` : JDBC exception executing SQL [MERGE INTO PUBLIC.IDCSTATE USING (SELECT ? id, ? shortname, ? longname, ? readingsname, ? type, ? version, ? description, ? namespaceid, ? status, ? statuslastmodified, ? displaykey, ? guid, ? clsfid, ? createddate, ? modifieddate, ? workflowstatemapid, ? lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id)]
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:165)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:175)
at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:654)
at io.debezium.connector.jdbc.JdbcChangeEventSink.writeUpsert(JdbcChangeEventSink.java:257)
at io.debezium.connector.jdbc.JdbcChangeEventSink.write(JdbcChangeEventSink.java:216)
at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:68)
... 12 more
Caused by: org.hibernate.exception.SQLGrammarException: JDBC exception executing SQL [MERGE INTO PUBLIC.IDCSTATE USING (SELECT ? id, ? shortname, ? longname, ? readingsname, ? type, ? version, ? description, ? namespaceid, ? status, ? statuslastmodified, ? displaykey, ? guid, ? clsfid, ? createddate, ? modifieddate, ? workflowstatemapid, ? lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id)]
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:64)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:56)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:97)
at org.hibernate.query.sql.internal.NativeNonSelectQueryPlanImpl.executeUpdate(NativeNonSelectQueryPlanImpl.java:78)
at org.hibernate.query.sql.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:820)
at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:643)
... 15 more
Caused by: java.sql.SQLSyntaxErrorException: ORA-00903: invalid table name
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:509)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:461)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1104)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:553)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:269)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:655)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:270)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:91)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:970)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1205)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3666)
at oracle.jdbc.driver.T4CPreparedStatement.executeInternal(T4CPreparedStatement.java:1426)
at oracle.jdbc.driver.OraclePreparedStatement.executeLargeUpdate(OraclePreparedStatement.java:3756)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3736)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1063)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502)
at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:84)
... 18 more
Caused by: Error : 903, Position : 11,
Sql = MERGE INTO PUBLIC.IDCSTATE USING (SELECT :1 id, :2 shortname, :3 longname, :4 readingsname, :5 type, :6 version, :7 description, :8 namespaceid, :9 status, :10 statuslastmodified, :11 displaykey, :12 guid, :13 clsfid, :14 createddate, :15 modifieddate, :16 workflowstatemapid, :17 lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id),
OriginalSql = MERGE INTO PUBLIC.IDCSTATE USING (SELECT ? id, ? shortname, ? longname, ? readingsname, ? type, ? version, ? description, ? namespaceid, ? status, ? statuslastmodified, ? displaykey, ? guid, ? clsfid, ? createddate, ? modifieddate, ? workflowstatemapid, ? lastmodifiedbyid FROM dual) INCOMING ON (
PUBLIC.IDCSTATE.id=INCOMING.id) WHEN MATCHED THEN UPDATE SET
PUBLIC.IDCSTATE.shortname=INCOMING.shortname,
PUBLIC.IDCSTATE.longname=INCOMING.longname,
PUBLIC.IDCSTATE.readingsname=INCOMING.readingsname,
PUBLIC.IDCSTATE.type=INCOMING.type,
PUBLIC.IDCSTATE.version=INCOMING.version,
PUBLIC.IDCSTATE.description=INCOMING.description,
PUBLIC.IDCSTATE.namespaceid=INCOMING.namespaceid,
PUBLIC.IDCSTATE.status=INCOMING.status,
PUBLIC.IDCSTATE.statuslastmodified=INCOMING.statuslastmodified,
PUBLIC.IDCSTATE.displaykey=INCOMING.displaykey,
PUBLIC.IDCSTATE.guid=INCOMING.guid,
PUBLIC.IDCSTATE.clsfid=INCOMING.clsfid,
PUBLIC.IDCSTATE.createddate=INCOMING.createddate,
PUBLIC.IDCSTATE.modifieddate=INCOMING.modifieddate,
PUBLIC.IDCSTATE.workflowstatemapid=INCOMING.workflowstatemapid,
PUBLIC.IDCSTATE.lastmodifiedbyid=INCOMING.lastmodifiedbyid WHEN NOT MATCHED THEN INSERT (shortname,longname,readingsname,type,version,description,namespaceid,status,statuslastmodified,displaykey,guid,clsfid,createddate,modifieddate,workflowstatemapid,lastmodifiedbyid,id) VALUES (INCOMING.shortname,INCOMING.longname,INCOMING.readingsname,INCOMING.type,INCOMING.version,INCOMING.description,INCOMING.namespaceid,INCOMING.status,INCOMING.statuslastmodified,INCOMING.displaykey,INCOMING.guid,INCOMING.clsfid,INCOMING.createddate,INCOMING.modifieddate,INCOMING.workflowstatemapid,INCOMING.lastmodifiedbyid,INCOMING.id), Error
Msg = ORA-00903: invalid table name
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:513)
... 34 more
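The repeated failure above reduces to the MERGE target. Oracle reports error 903 at position 11, i.e. immediately after `MERGE INTO `, because the generated statement addresses `PUBLIC.IDCSTATE`: in Oracle, `PUBLIC` is a reserved name (the public synonym namespace), not a usable schema qualifier, so the table reference is rejected as an invalid table name. The qualifier most likely carried over from the source topic/table naming; remapping the sink to a valid schema (the Debezium JDBC sink's `table.name.format` property is the usual knob) would be the typical fix. A small, hypothetical sketch that pulls the offending target out of a logged statement (the regex and function name are illustrative, not part of Debezium):

```python
import re

# Excerpt of the generated statement from the log; Oracle flags the
# table reference right after "MERGE INTO " (error 903, position 11).
sql = "MERGE INTO PUBLIC.IDCSTATE USING (SELECT ? id FROM dual) INCOMING ON (PUBLIC.IDCSTATE.id=INCOMING.id)"

def merge_target(statement: str) -> str:
    """Return the table reference named directly after MERGE INTO."""
    m = re.match(r'\s*MERGE\s+INTO\s+([A-Za-z0-9_$#."]+)', statement, re.IGNORECASE)
    if not m:
        raise ValueError("not a MERGE statement")
    return m.group(1)

print(merge_target(sql))  # PUBLIC.IDCSTATE
```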
2024-02-27 03:53:38,814 INFO || Closing session. [io.debezium.connector.jdbc.JdbcChangeEventSink]
2024-02-27 03:53:38,816 INFO || Closing the session factory [io.debezium.connector.jdbc.JdbcChangeEventSink]
2024-02-27 03:53:38,828 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Revoke previously assigned partitions idcstate-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:53:38,829 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Member connector-consumer-sink-connector-0-2c411905-a6c7-4ace-863b-efcefabda43d sending LeaveGroup request to coordinator kafka:29092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:53:38,831 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Resetting generation due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:53:38,832 INFO || [Consumer clientId=connector-consumer-sink-connector-0, groupId=connect-sink-connector] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-27 03:53:38,839 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-02-27 03:53:38,839 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-02-27 03:53:38,840 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-02-27 03:53:38,844 INFO || App info kafka.consumer for connector-consumer-sink-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-02-27 03:53:43,482 INFO || WorkerSourceTask{id=source-connector-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2024-02-27 03:54:43,483 INFO || WorkerSourceTask{id=source-connector-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]