Hi,
does anyone know why Debezium shows this error? The source is a PostgreSQL database and it is running. In the first step Debezium connected to the database and loaded the user's information:
[2021-11-29 12:24:11,639] INFO snapshot.mode = never (io.debezium.connector.common.BaseSourceTask:129)
[2021-11-29 12:24:11,832] INFO No previous offsets found (io.debezium.connector.common.BaseSourceTask:321)
[2021-11-29 12:24:11,841] INFO user '<user>' connected to database '<db_name>' on PostgreSQL 13.5 on powerpc64le-unknown-linux-gnu, compiled by gcc (GCC) 6.4.1 20180131 (Advance-Toolchain-at10.0) [revision 257243], 64-bit with roles:
role 'user' [superuser: true, replication: true, inherit: true, create role: false, create db: false, can log in: true]
[2021-11-29 12:24:11,860] INFO Obtained valid replication slot ReplicationSlot [active=false, latestFlushedLsn=LSN{FE/AEE2DC80}, catalogXmin=3770407] (io.debezium.connector.postgresql.connection.PostgresConnection:244)
[2021-11-29 12:24:11,861] INFO No previous offset found (io.debezium.connector.postgresql.PostgresConnectorTask:117)
[2021-11-29 12:24:11,861] INFO Snapshots are not allowed as per configuration, starting streaming logical changes only (io.debezium.connector.postgresql.snapshot.NeverSnapshotter:34)
[2021-11-29 12:24:11,907] INFO Requested thread factory for connector PostgresConnector, id = <server_name> named = change-event-source-coordinator (io.debezium.util.Threads:270)
[2021-11-29 12:24:11,920] INFO Creating thread debezium-postgresconnector-<server_name>-change-event-source-coordinator (io.debezium.util.Threads:287)
[2021-11-29 12:24:11,921] INFO WorkerSourceTask{id=connect-debezium-postgres-source-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:225)
[2021-11-29 12:24:11,928] INFO Metrics registered (io.debezium.pipeline.ChangeEventSourceCoordinator:104)
[2021-11-29 12:24:11,929] INFO Context created (io.debezium.pipeline.ChangeEventSourceCoordinator:107)
[2021-11-29 12:24:11,941] INFO According to the connector configuration no snapshot will be executed (io.debezium.connector.postgresql.PostgresSnapshotChangeEventSource:65)
[2021-11-29 12:24:11,942] INFO Snapshot ended with SnapshotResult [status=SKIPPED, offset=null] (io.debezium.pipeline.ChangeEventSourceCoordinator:119)
[2021-11-29 12:24:11,950] INFO Connected metrics set to 'true' (io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics:70)
[2021-11-29 12:24:11,950] INFO Starting streaming (io.debezium.pipeline.ChangeEventSourceCoordinator:163)
[2021-11-29 12:24:12,002] ERROR Producer failure (io.debezium.pipeline.ErrorHandler:31)
io.debezium.DebeziumException: Error whil executing initial schema load
at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.init(PostgresStreamingChangeEventSource.java:99)
at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:164)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:127)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
[2021-11-29 12:24:12,005] INFO Connected metrics set to 'false' (io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics:70)
[2021-11-29 12:24:12,422] INFO WorkerSourceTask{id=connect-debezium-postgres-source-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)
[2021-11-29 12:24:12,423] ERROR WorkerSourceTask{id=connect-debezium-postgres-source-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:184)
org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:135)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.debezium.DebeziumException: Error whil executing initial schema load
at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.init(PostgresStreamingChangeEventSource.java:99)
at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:164)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:127)
... 5 more
[2021-11-29 12:24:12,424] INFO Stopping down connector (io.debezium.connector.common.BaseSourceTask:241)
[2021-11-29 12:24:12,432] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection:965)
[2021-11-29 12:24:12,434] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection:965)
[2021-11-29 12:24:12,435] INFO [Producer clientId=connector-producer-connect-debezium-postgres-source-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1204)
[2021-11-29 12:24:12,462] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)
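For context, the connector was registered with a configuration along these lines. This is only a sketch: `snapshot.mode = never` and the connector name match the log above, while the host, port, password, and slot name are placeholders standing in for my actual values:

```json
{
  "name": "connect-debezium-postgres-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "<db_host>",
    "database.port": "5432",
    "database.user": "<user>",
    "database.password": "<password>",
    "database.dbname": "<db_name>",
    "database.server.name": "<server_name>",
    "slot.name": "<slot_name>",
    "snapshot.mode": "never"
  }
}
```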
Thanks in advance for any suggestions,
Jan