Sink connector 'sink-postgres' is configured with 'delete.enabled=false' and 'pk.mode=none'

Natarajan Sundaramoorthy

May 6, 2021, 6:04:01 PM
to Confluent Platform
Trying to load data from Kafka to Postgres and running into the error below.

Can you please help? Does the JSON we move from Kafka to Postgres need to have a schema in it?

org.apache.kafka.connect.errors.ConnectException: Sink connector 'sink-postgres' is configured with 'delete.enabled=false' and 'pk.mode=none' and therefore requires records with a non-null Struct value and non-null Struct schema, but found record at (topic='test',partition=0,offset=0,timestamp=1620338169130) with a null value and null value schema.
        at io.confluent.connect.jdbc.sink.RecordValidator.lambda$requiresValue$2(RecordValidator.java:86)
        at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:82)
        at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:74)
        at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:84)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

samprati sharma

May 6, 2021, 11:08:34 PM
to confluent...@googlegroups.com
Hi Natarajan, 

From the error, it is clear that you are publishing records with a null value. Such records are called tombstone records.

To keep the connector from failing on these records, you can either drop tombstones with a transformation before they reach the sink, or set delete.enabled=true (which also requires pk.mode=record_key) so that tombstones are treated as deletes.
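One way to drop tombstones, assuming Kafka 2.6+ (which added predicate support), is the built-in Filter SMT combined with the RecordIsTombstone predicate. A sketch of the connector config (the connection URL is illustrative; the connector name and topic are taken from your error message):

```json
{
  "name": "sink-postgres",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "test",
    "connection.url": "jdbc:postgresql://localhost:5432/postgres",
    "delete.enabled": "false",
    "pk.mode": "none",
    "transforms": "dropTombstones",
    "transforms.dropTombstones.type": "org.apache.kafka.connect.transforms.Filter",
    "transforms.dropTombstones.predicate": "isTombstone",
    "predicates": "isTombstone",
    "predicates.isTombstone.type": "org.apache.kafka.connect.transforms.predicates.RecordIsTombstone"
  }
}
```

With this in place, any record whose value is null is filtered out before the JDBC sink sees it, so the connector no longer fails on tombstones.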


Regards
Samprati Sharma 
