Postgres 9.6, Debezium 7.4, wal2json: how to deal with a big update in one transaction?


Adrien Quentin

Mar 30, 2018, 10:41:55 AM
to debezium
Hello,

I encountered an error when Debezium tried to read events from many updates done in one transaction:

2018-03-27 11:37:46,307 ERROR  Postgres|dbz|records-stream-producer  unexpected exception while streaming logical changes   [io.debezium.connector.postgresql.RecordsStreamProducer]

org.postgresql.util.PSQLException: ERROR: out of memory

  Detail: Cannot enlarge string buffer containing 1073741746 bytes by 1013 more bytes.

  Where: slot "debezium", output plugin "wal2json", in the change callback, associated LSN 8/5A721CE8


We suspect this is because of the big transaction.


How can we prevent this kind of error, other than decreasing the commit count value?


Thank you.

Jiri Pechanec

Mar 30, 2018, 10:49:09 AM
to debezium
Hi,

the current snapshot/nightly build contains a fix for https://issues.jboss.org/browse/DBZ-638. You can give it a try.

J.

Pranav Agrawal

Dec 5, 2018, 2:07:24 PM
to debezium
Still hitting this error with 0.8.3.Final; please assist.
(attachment: Screen Shot 2018-12-06 at 12.11.25 AM.png)

Jiri Pechanec

Dec 6, 2018, 1:09:07 AM
to debezium
Hi, please use `wal2json_streaming` or `wal2json_rds_streaming` as the plugin name in the registration request.
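For illustration, a minimal sketch of such a registration request body (POSTed to the Kafka Connect REST API); the connector name, database connection details, and server name here are placeholders, not taken from the thread — only `plugin.name` is the setting being discussed:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "wal2json_streaming",
    "database.hostname": "localhost",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "inventory",
    "database.server.name": "dbserver1"
  }
}
```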

J.

Pranav Agrawal

Dec 6, 2018, 8:54:19 AM
to debe...@googlegroups.com
Thanks, that solved the problem. Really appreciate the quick response, as always!


Jiri Pechanec

Dec 6, 2018, 9:11:12 AM
to debezium
No problem, I am glad you are unblocked!

J.

Amit Goldstein

Nov 5, 2019, 3:25:21 AM
to debezium
Facing the same issue with 0.10.0.CR1. Tried using wal2json_streaming, but it seems like it is being ignored.

To test this, I inserted 1000 records in a single transaction, ran pg_logical_slot_peek_changes, and reviewed the results. I expected that with wal2json_streaming I would get 1000 records, but I only get one large record, exactly the same as with the wal2json plugin.

I also checked the kafka-connect log and see this:
kafka-connect> [2019-11-05 07:41:02,371] INFO Creating replication slot with command CREATE_REPLICATION_SLOT customers_slot  LOGICAL wal2json (io.debezium.connector.postgresql.connection.PostgresReplicationConnection)

What in this statement is different from wal2json? Shouldn't we see the write-in-chunks option being passed? 

Jiri Pechanec

Nov 5, 2019, 4:17:27 AM
to debezium
Hi,

that's exactly the difference between the streaming and non-streaming variants: the streaming one passes the write-in-chunks option.

Could you please try it on a completely fresh connector to make sure that this really does not work for you?

J.

Amit Goldstein

Nov 5, 2019, 6:15:50 AM
to debezium
Sorry, my bad... I thought write-in-chunks was something you pass once when creating the logical slot; I did not realize it is passed every time you query the slot.
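That distinction can be sketched at the SQL level (slot name is illustrative; this assumes a slot already created with the wal2json output plugin). Because wal2json options are decoding-time options, they go on each read of the slot, not on slot creation:

```sql
-- Without write-in-chunks: a whole transaction comes back
-- as one large JSON document (one row per transaction).
SELECT * FROM pg_logical_slot_peek_changes('debezium', NULL, NULL);

-- With write-in-chunks: wal2json writes each change as its own chunk,
-- so a 1000-row transaction yields many small rows instead of one huge one.
SELECT * FROM pg_logical_slot_peek_changes('debezium', NULL, NULL,
                                           'write-in-chunks', '1');
```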

I debugged and verified the StreamingWal2JsonMessageDecoder is used, as expected.