very large ddl history topic


Rich OConnell

Sep 27, 2017, 11:59:22 PM
to debezium
Hi all,

We've been using Debezium on a new project for the last few months and have had great success.  It's a great project, and thanks for all the hard work.

Within the last month we deployed our new data processing system into production in a dark-launched state and started to see some real-world results.  The most notable thing so far has been the DDL history topic that Debezium uses to store the state of the database schema.  In our case, we are replacing an ETL system that uses a ton of create temp table statements, and it runs hundreds of times a day.  Within a month the history topic has grown to over 40K events.  Additionally, we have three source databases and all three share the same DDL history topic; this was an oversight on our part, as we failed to look into the importance of this topic.  We also over-partitioned it, not realizing Debezium would only use one partition.

Anyhow, the issue started to appear during overnight server restarts.  A connector service would cycle, causing one rebalance when it went down and another when it came back shortly thereafter.  The Debezium connectors would take so long reading the history topic that they wouldn't have finished starting by the time they were asked to stop.  So we'd end up with a timing issue where multiple instances of the same connector were running at overlapping times, causing a variety of issues, e.g. two shyiko slaves registering with the same slave id; two history consumers reading the same history topic in the same consumer group and thus each getting only part of the data; and, lastly, a connector failing to read to the end of the history topic but concluding it had, based on the 4 retries at a 100ms poll timeout.  The last one is the most confusing and we still aren't sure exactly why it happens.

We've alleviated the problem by adjusting a few configurations.

Worker configs:
task.shutdown.graceful.timeout.ms=300000
rebalance.timeout.ms=120000


We jacked up the shutdown timeout to give each connector a chance to read through the full history topic and complete a shutdown before KC would start it up again.  We found we needed to then increase the rebalance timeout so that KC would not prematurely kick members out of the group while they were waiting for connectors to cycle.

Connector configs:
database.history.kafka.recovery.poll.interval.ms=500

We increased this one to prevent the premature timeouts that the history consumer was experiencing.

These changes resolve our issue for now, but I don't think this will scale given the sheer volume of DDL statements we have to deal with.  We have a few ideas going forward:
  1. Filtering DDL events that we don't care about
  2. Implement some sort of periodic snapshot/rebuild ability for the ddl topic so we can periodically prune it and start over
  3. Adjust the KafkaDatabaseHistory class to use offsets to determine when it's read through the entire topic, i.e. fetch the last offset value prior to reading from the beginning and don't stop until it's met the end
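Idea 3 can be sketched without a live broker.  Below is a minimal Python mock of the recovery loop; `FakeHistoryConsumer` and the other names are hypothetical stand-ins for the real single-partition Kafka consumer, just to illustrate the offset-based termination: the end offset is captured before reading, and recovery only stops once the consumer's position reaches it, so a run of empty polls can't cause a premature stop the way a fixed retry count can.

```python
class FakeHistoryConsumer:
    """In-memory stand-in for a single-partition Kafka consumer,
    used only to illustrate offset-based end detection."""
    def __init__(self, records):
        self.records = records      # DDL events in offset order
        self.position = 0

    def end_offset(self):
        # Like Kafka's endOffsets(): the offset of the *next* record
        # to be written, captured once before recovery starts.
        return len(self.records)

    def poll(self, max_records=2):
        # A real poll may legitimately return nothing while the broker
        # catches up; with a fixed retry count that is where recovery
        # gives up too early.
        batch = self.records[self.position:self.position + max_records]
        self.position += len(batch)
        return batch

def recover_history(consumer, apply_ddl):
    """Read until the pre-captured end offset is reached,
    instead of stopping after N empty polls."""
    end = consumer.end_offset()
    while consumer.position < end:
        for event in consumer.poll():
            apply_ddl(event)

schema = []
recover_history(FakeHistoryConsumer(["CREATE TABLE a (id INT)",
                                     "ALTER TABLE a ADD COLUMN x INT",
                                     "DROP TEMPORARY TABLE IF EXISTS tmp"]),
                schema.append)
print(len(schema))  # 3 -- every stored event is consumed
```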

Does anyone have any experience with issues like this or any guidance to offer on the patches we are proposing to contribute?

Thanks,

Rich



Rich OConnell

Sep 28, 2017, 12:15:09 AM
to debezium
I should have mentioned, this is with the MySQL connector v0.5.1.

Jiri Pechanec

Oct 5, 2017, 12:35:09 AM
to debezium
Hi,

first of all, thanks for the report; it seems this is a use case we were not thinking about when designing the connector architecture.

As a start, do you think you could re-execute the snapshot and split the history topic so you have one topic per database?
Next, how do you know which events are no longer important?  By the names of the temporary tables that get created and dropped?  Would it be possible, as a stop-gap solution, to write a Kafka Streams app that reads events from the history topic and copies them to another one while filtering out the obsolete ones?  It might be possible that upon reconfiguration Debezium could be redirected to the new topic.
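The filtering part of that stop-gap could look roughly like the sketch below.  Plain lists stand in for the source and target topics, and `is_temp_table_ddl` is a hypothetical filter, since the pruning logic is the interesting piece; a real version would be a small Kafka Streams (or plain consume/produce) app.

```python
def compact_history(source_events, is_obsolete):
    """Copy history events to a new topic, dropping obsolete ones.
    Lists stand in for the source and target topics here."""
    return [event for event in source_events if not is_obsolete(event)]

# Hypothetical obsolescence test: drop statements on temporary tables.
def is_temp_table_ddl(ddl):
    return "TEMPORARY TABLE" in ddl.upper()

history = [
    "CREATE TABLE orders (id INT)",
    "DROP TEMPORARY TABLE IF EXISTS `tmp_data_tbl`",
    "ALTER TABLE orders ADD COLUMN total INT",
]
print(compact_history(history, is_temp_table_ddl))
# ['CREATE TABLE orders (id INT)', 'ALTER TABLE orders ADD COLUMN total INT']
```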

And last but not least, could you please add your use case to https://issues.jboss.org/projects/DBZ/issues/DBZ-239 so we can discuss it there?

Thanks a lot

J.

Rich OConnell

Oct 6, 2017, 11:22:45 AM
to debezium
Thanks for the suggestions, Jiri.  Using a Kafka Streams app to filter out the DDL and create a new, cleaner topic would probably work for us.  However, we are going to be re-snapshotting the source databases in the near future, so we will hold off on any attempts to fix the current state of things and just do it correctly next time, i.e. one DDL topic per source, single partition, infinite retention, etc.

We're going to investigate a way to filter out these temp table events before they are stored in the DDL topic, since we know we'll never need them.  We're currently doing a little more research into the code to figure out exactly what we will do, and we'll update the Jira ticket with our findings.

Thanks,

Rich

Randall Hauch

Oct 6, 2017, 11:58:50 AM
to debezium
I'd actually consider this a bug. The DatabaseHistory should not really be recording operations on temp tables. We're parsing the DDL statements, and it should be possible to identify whether a table is considered temporary and therefore which DDL statements can be filtered out. Rich, would you mind logging an issue with as much detail as you can include?
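As a rough illustration of the check Randall describes: Debezium parses the DDL with a real parser, but a simplified regex stand-in (all names below are hypothetical, for illustration only) is enough to show how statements on temporary tables could be skipped before they reach the history topic.

```python
import re

# Simplified stand-in for the parser-level check: a case-insensitive
# pattern matching CREATE/DROP TEMPORARY TABLE statements.
TEMP_TABLE_DDL = re.compile(
    r"^\s*(CREATE|DROP)\s+TEMPORARY\s+TABLE\b", re.IGNORECASE)

def should_record(ddl: str) -> bool:
    """Return False for statements on TEMPORARY tables, which never
    affect the schema of the captured (non-temporary) tables."""
    return TEMP_TABLE_DDL.match(ddl) is None

assert not should_record("DROP TEMPORARY TABLE IF EXISTS `tmp_data_tbl`")
assert should_record("ALTER TABLE orders ADD COLUMN total DECIMAL(10,2)")
```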

--
You received this message because you are subscribed to the Google Groups "debezium" group.
To unsubscribe from this group and stop receiving emails from it, send an email to debezium+unsubscribe@googlegroups.com.
To post to this group, send email to debe...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/debezium/e20648b0-f5bd-43e8-a61c-b163652382db%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Jiri Pechanec

Oct 6, 2017, 12:09:58 PM
to debezium
Rich, just to clarify: are we talking about temp tables in the sense of CREATE TEMPORARY TABLE, or regular tables that are created during the ETL process and dropped afterwards, and thus considered temporary from a system point of view?

J.

Rich OConnell

Oct 6, 2017, 12:23:29 PM
to debezium
Actually, I just went back and checked the topic and it's a bunch of drop temp table statements:

DROP TEMPORARY TABLE IF EXISTS `tmp_data_tbl`

I think we saw those and assumed there were matching create statements in there, but looking at it now I can only find the drop statements.  We will dig a little more and include it in the Jira ticket.

Rich OConnell

Oct 13, 2017, 11:07:25 AM
to debezium
We've confirmed it is just the drop temp table statements, so I've added this ticket for it: https://issues.jboss.org/projects/DBZ/issues/DBZ-395

We're still looking at some of the options around rebuilding the topic, and also at detecting when the internal Debezium schema model has been rebuilt into a bad state, i.e. is either partially complete or missing entirely due to the issues mentioned above.

Thanks,
Rich