Hi Gali -
When "database.history.store.only.captured.tables.ddl" is false,
we capture all table DDL changes. This allows you to safely add
tables to the "table.include.list" configuration that may have
existed before prior to the connector being created as well as
over the connector's lifetime safely because we have continuously
tracked the table's schema evolution.
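For reference, here is a minimal sketch of how those two settings
sit together in a connector configuration. The connector class,
table names, and values are placeholders for illustration, not
taken from your setup:

    import java.util.Properties;

    public class ConnectorConfigSketch {
        public static void main(String[] args) {
            Properties config = new Properties();
            // Illustrative settings; adjust for your actual connector and database.
            config.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
            config.setProperty("table.include.list", "inventory.orders,inventory.customers");
            // false = store DDL for all tables, so a table added to
            // table.include.list later already has its schema history available.
            config.setProperty("database.history.store.only.captured.tables.ddl", "false");
            System.out.println(config);
        }
    }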
However, when this is set to true, you lose that safety: if you
add a table to the "table.include.list" that existed before the
connector was created, or one that was created during the
connector's lifetime, you may run into problems with specific
operations. For example, incremental snapshots expect the table's
schema to be registered with the in-memory relational model, but
when the table's DDL isn't captured, no schema exists and the
incremental snapshot will fail.
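For context, an incremental snapshot is typically requested by
inserting a row into the connector's signaling table; a rough
sketch follows, where the JDBC URL, credentials, signaling table,
and collection names are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.UUID;

    public class RequestIncrementalSnapshot {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/inventory", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO inventory.debezium_signal (id, type, data) VALUES (?, ?, ?)")) {
                ps.setString(1, UUID.randomUUID().toString());
                ps.setString(2, "execute-snapshot");
                // This is the step that fails when inventory.orders has no schema
                // in the connector's in-memory relational model.
                ps.setString(3, "{\"data-collections\": [\"inventory.orders\"], \"type\": \"incremental\"}");
                ps.executeUpdate();
            }
        }
    }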
There are two ways forward in this use case:
The first is to send a 'schema-changes' signal containing all of
the JSON-based schema change metadata the connector would normally
generate, including column mappings, data types, etc. This is
quite advanced, so you may want to use a temporary second
connector to generate this data in a separate history topic for
the table in question and then use that data as the basis for your
signal.
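A very rough sketch of that first option, assuming a signaling
table with the usual id/type/data columns; the payload string is
purely illustrative and in practice you would paste the
schema-change JSON copied from the temporary connector's history
topic:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.UUID;

    public class SendSchemaChangesSignal {
        public static void main(String[] args) throws Exception {
            // Illustrative placeholder only: copy the real schema-change entry for
            // the table from the temporary connector's history topic verbatim.
            String data = "{ \"database\": \"inventory\", \"changes\": [ ... ] }";

            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/inventory", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO inventory.debezium_signal (id, type, data) VALUES (?, ?, ?)")) {
                ps.setString(1, UUID.randomUUID().toString());
                ps.setString(2, "schema-changes");
                ps.setString(3, data);
                ps.executeUpdate();
            }
        }
    }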
The other way (and I'm not entirely sure every connector supports
this) relies on a check some connectors perform during streaming:
if a change event arrives for a table that is included by the
configuration but does not yet have an in-memory relational model,
the connector builds the model before emitting the event. You can
trigger this safely with a small update to an existing row or an
insert of a new row. Once you've done either, you should be able
to perform the incremental snapshot without issue.
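A sketch of that second option, again with placeholder table and
column names; any small real update (or an insert) that the
connector streams for the table will do, after which you can send
the incremental snapshot signal from the earlier sketch:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TouchRowToRegisterSchema {
        public static void main(String[] args) throws Exception {
            // Placeholder column: any change that produces a streamed event for
            // this table lets the connector build its in-memory schema model.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/inventory", "user", "password");
                 Statement st = conn.createStatement()) {
                st.executeUpdate(
                    "UPDATE inventory.orders SET last_touched = NOW() WHERE id = 1");
            }
        }
    }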
Hope that clarifies.
Chris