debezium delete event, tombstone message does not include key


John Psoroulas

Sep 10, 2018, 8:41:53 AM9/10/18
to debezium
Hi,

Using the setup for PostgreSQL and Debezium described at https://groups.google.com/forum/#!topic/debezium/Q9HCJesNUlY
the Kafka messages stored for a 'delete' event are the following:

offset: 2 position: 813 CreateTime: 1536580522209 isvalid: true keysize: 19 valuesize: 296 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] key: {"id":"id1       "} payload: {"before":{"id":"id1       ","code":null},"after":null,"source":{"version":"0.8.2","name":"DB_TEST_SERVER","db":"test","ts_usec":1536580521883194000,"txId":934145,"lsn":3322629136,"schema":"public","table":"test_table","snapshot":false,"last_snapshot_record":null},"op":"d","ts_ms":1536580521895}


offset: 3 position: 1198 CreateTime: 1536580522210 isvalid: true keysize: 19 valuesize: -1 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: []

According to the official Debezium documentation, https://debezium.io/docs/connectors/postgresql/#tombstone-events:

When a row is deleted, the delete event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key. But only if the message value is null will Kafka know that it can remove all messages with that same key. To make this possible, Debezium’s PostgreSQL connector always follows the delete event with a special tombstone event that has the same key but null value.
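The compaction behavior the documentation describes can be sketched in a few lines. This is a simplified model, not Kafka's actual implementation (real compaction also retains tombstones for `delete.retention.ms` before dropping them), but it shows why the delete event's non-null value is not enough on its own and a null-valued tombstone is needed:

```python
def compact(records):
    """Simplified model of Kafka log compaction.

    records: list of (key, value) pairs in log order; value None is a tombstone.
    Keeps only the latest record per key, and drops keys whose latest record is
    a tombstone (real Kafka keeps tombstones around for delete.retention.ms
    before removing them).
    """
    latest = {}
    for key, value in records:
        latest[key] = value
    return [(k, v) for k, v in latest.items() if v is not None]


log = [
    ("id1", "create event"),
    ("id1", "delete event"),  # op "d": value is still non-null
    ("id1", None),            # tombstone: lets compaction forget the key
    ("id2", "create event"),
]
print(compact(log))  # [('id2', 'create event')]
```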

The above tombstone message does not include the key.

What is the expected behavior?

John

Jiri Pechanec

Sep 11, 2018, 3:32:17 AM9/11/18
to debezium
Hi,

Can you try our tutorial? I just ran it now and I saw the key in the tombstone messages.

J.

John Psoroulas

Sep 11, 2018, 5:25:48 AM9/11/18
to debezium
Jiri, do you mean the following tutorial? https://github.com/debezium/debezium-examples/tree/master/tutorial#using-postgres

I don't have Docker installed; all my tests are performed directly on the OS.
If I can repeat your test without using the Docker image, please give me the details.

I repeated the test with Kafka only (outside of Confluent: I downloaded the latest Kafka archive, installed it, and started the respective services as
described at https://kafka.apache.org/quickstart), and the result was the same: the tombstone message does not include the table key.

Also note that I use Debezium v0.8.2, which has the same behavior as v0.8.1.

offset: 4 position: 1197 CreateTime: 1536656016185 isvalid: true keysize: 19 valuesize: 296 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] key: {"id":"id1       "} payload: {"before":{"id":"id1       ","code":null},"after":null,"source":{"version":"0.8.2","name":"DB_TEST_SERVER","db":"test","ts_usec":1536656015996952000,"txId":934157,"lsn":3322648864,"schema":"public","table":"test_table","snapshot":false,"last_snapshot_record":null},"op":"d","ts_ms":1536656016014}

offset: 5 position: 1197 CreateTime: 1536656016186 isvalid: true keysize: 19 valuesize: -1 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: []


The respective output of the wal2json output plugin was:

{
  "change": [
    {
      "kind": "delete",
      "schema": "public",
      "table": "test_table",
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["id1       "]
      }
    }
  ]
}
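For a delete, the values Debezium uses for the Kafka message key come from wal2json's "oldkeys" block. As a rough illustration only (this is not Debezium's actual code; it assumes the JSON converter with schemas disabled), the mapping from "oldkeys" to the key seen in the log dump looks like:

```python
import json

# wal2json "oldkeys" block from the delete change above; character(10)
# values are space-padded to 10 characters.
oldkeys = {
    "keynames": ["id"],
    "keytypes": ["character(10)"],
    "keyvalues": ["id1       "],
}

# Hypothetical sketch of building the message key: zip key names with values
# and serialize to compact JSON.
message_key = json.dumps(
    dict(zip(oldkeys["keynames"], oldkeys["keyvalues"])),
    separators=(",", ":"),
)

print(message_key)       # {"id":"id1       "}
print(len(message_key))  # 19, matching "keysize: 19" in the log dump
```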

Hope this helps,

John

Jiri Pechanec

Sep 11, 2018, 5:55:26 AM9/11/18
to debezium
I seriously believe there is a problem on your side and that the key is present. Please check the headers "keysize: 19 valuesize: -1": they definitely imply that there is a key, and its length is the same as for the delete record.
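This reading of the headers can be checked directly: the JSON key printed for the delete event serializes to exactly 19 bytes, which is the `keysize: 19` reported for the tombstone record as well:

```python
# The key printed for the delete event in the dump above; "id" is
# character(10), so the value is padded with spaces to 10 characters.
key_json = '{"id":"id1       "}'

# "keysize" in the log dump is the byte length of the serialized key.
print(len(key_json.encode("utf-8")))  # 19
```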

J.

John Psoroulas

Sep 11, 2018, 6:02:11 AM9/11/18
to debezium
Do you have any suggestions for finding the problem?
Does the problem come from the database side or the Kafka side?

Thanks in advance,

John

Jiri Pechanec

Sep 11, 2018, 6:07:45 AM9/11/18
to debezium
Could you please post the Kafka data file you are dumping, and at the same time look at the topic using kafka-console-consumer.sh or kafkacat?

J.

John Psoroulas

Sep 11, 2018, 6:21:44 AM9/11/18
to debezium
Jiri, here is the output of the Kafka consumer upon the insertion and the deletion of the database record:

./bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic DB_TEST_SERVER.public.test_table

Message received for the record insertion:

{"before":null,"after":{"id":"id1       ","code":"code1     "},"source":{"version":"0.8.1.Final","name":"DB_TEST_SERVER","ts_usec":1536660870391831000,"txId":934165,"lsn":3322655293,"snapshot":false,"last_snapshot_record":null},"op":"c","ts_ms":1536660870430}

Messages received for the record deletion:

{"before":{"id":"id1       ","code":null},"after":null,"source":{"version":"0.8.1.Final","name":"DB_TEST_SERVER","ts_usec":1536660879723011000,"txId":934166,"lsn":3322656312,"snapshot":false,"last_snapshot_record":null},"op":"d","ts_ms":1536660879738}
null

Please find attached the related kafka data files.

John
DB_TEST_SERVER.public.test_table-0.zip

Jiri Pechanec

Sep 11, 2018, 6:28:25 AM9/11/18
to debezium
OK, if you look at the file, the record is stored with the key; see near the end of the file.

Add `--property print.key=true` to the command, it should print key + value for each record.

J.

John Psoroulas

Sep 11, 2018, 6:33:04 AM9/11/18
to debezium
Thanks very much for your time Jiri,
I really appreciate it!

John