Hi,
Unfortunately that's somewhat expected.
The connector relies on Oracle LogMiner which itself has a
specific set of requirements that we must adhere to. First and
foremost, if you want schema changes to be tracked, we have to
tell LogMiner about this. LogMiner then expects that we either
supply it with a data dictionary file reference (which isn't an
option we support) or we specify that the dictionary is part of
the redo logs. LogMiner must read this dictionary, prepare all
the schema tracking metadata in the LogMiner tablespace and only
once all that is complete can the redo entries in the logs be
mined.
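For context, the dictionary-in-redo flow corresponds roughly to these LogMiner calls. This is only a sketch of what the connector issues on your behalf (the SCN bind variables are placeholders); you don't run these yourself:

```sql
-- Write the data dictionary into the redo logs
-- (this is what generates the burst of archive logs)
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

-- Start a mining session that reads the dictionary back out of the
-- redo logs and tracks DDL so schema changes can be followed
EXECUTE DBMS_LOGMNR.START_LOGMNR(
  STARTSCN => :start_scn,
  ENDSCN   => :end_scn,
  OPTIONS  => DBMS_LOGMNR.DICT_FROM_REDO_LOGS
            + DBMS_LOGMNR.DDL_DICT_TRACKING);
```

Every connector session repeats this build/read cycle independently, which is where the N-fold duplication below comes from.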
When you deploy multiple Oracle connectors, you are effectively
starting multiple Oracle LogMiner sessions. This means with the
default connector settings, you will write the data dictionary to
the redo logs N times (where N is the number of connectors). This
will cause a burst of archive logs to be generated which isn't
ideal. This also means you will be reading the same redo logs N
times, therefore duplicating the mining work across each
connector.
You can deploy multiple connectors on Oracle safely, but it
requires using log.mining.strategy=online_catalog to keep the load
on the database minimal. This setting comes at a cost, however:
schema changes aren't tracked. In other words, if you want to
change the schema of a table you are capturing, you must closely
coordinate data changes with the schema change by following a
rigid process:
1. Disallow data changes on the table
2. Wait for all data changes for the table to be emitted
3. Stop the connector
4. Perform the schema change
5. Restart the connector
6. Resume data changes on the table
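As a concrete sketch, the relevant piece of each connector's configuration would look like the following (the connector class and server name properties shown are just illustrative context; `log.mining.strategy` is the one that matters here):

```properties
# Illustrative Debezium Oracle connector properties
connector.class=io.debezium.connector.oracle.OracleConnector
database.server.name=server1

# Read the schema from the online catalog instead of writing the
# data dictionary to the redo logs; keeps database load minimal,
# but schema changes are NOT tracked
log.mining.strategy=online_catalog
```

With this strategy, no dictionary is written to the redo logs, so multiple connectors no longer multiply that overhead.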
In general, we don't recommend deploying multiple connectors per
database. There are reasons users may want to do this, but it's
critical to understand the impact on the database. For Oracle,
multiple connectors are most common when you have multiple
pluggable databases that need to be captured; supporting that with
a single connector, so the load on the database can be managed
effectively, is on our radar to address in the near future.
If you have any specific questions, feel free to ask, but
hopefully that provides some clarity.
Chris
On 2/10/22 06:20, FairyTail1279 wrote:
When I tested debezium with oracle I found that I can't use more than 1 connector because using more than 1 connector it will have very high overhead and the event sent to kafka is very slow.