Hello Chris and Andrey,
I'm currently using an external Postgres database for caching the transactions and am now thinking of also using that Postgres for intermediate storage of the LogMiner results. The only thing is that this is effectively the same as using an additional Oracle engine purely for reading the archive log files...
Adding the log files in my case often takes minimal time compared to the actual querying; I checked this with a backlog of 1000 archive log files versus just adding the redo log. Reading the view seems to consume one full CPU, regardless of how much data has to be returned. Optimizing the consumption of the data so that each archive log only has to be read once could help lower the impact on the database, and in my case, where I have to mine hundreds of tables, I have to do something.
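Roughly what I mean by reading each archive log only once, as a JDBC sketch (the MYAPP schema, table list, SCN window handling and error handling are all placeholders, and on some Oracle versions the first ADD_LOGFILE call may need DBMS_LOGMNR.NEW instead of ADDFILE):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.List;
import java.util.stream.Collectors;

public class SinglePassMiner {

    // Registering the logs is the cheap part compared to scanning the view.
    static void addLogFiles(Connection oracle, List<String> logFiles) throws Exception {
        try (CallableStatement cs = oracle.prepareCall(
                "BEGIN DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => ?, OPTIONS => DBMS_LOGMNR.ADDFILE); END;")) {
            for (String file : logFiles) {
                cs.setString(1, file);
                cs.execute();
            }
        }
    }

    // One START_LOGMNR and one scan of V$LOGMNR_CONTENTS for the whole SCN window
    // and for all captured tables, instead of going over the same logs per table.
    static void mineOnce(Connection oracle, long startScn, long endScn, List<String> tables) throws Exception {
        try (CallableStatement cs = oracle.prepareCall(
                "BEGIN DBMS_LOGMNR.START_LOGMNR(STARTSCN => ?, ENDSCN => ?, "
                + "OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG); END;")) {
            cs.setLong(1, startScn);
            cs.setLong(2, endScn);
            cs.execute();
        }

        // Single filter covering every captured table ('MYAPP' is a placeholder schema).
        String inList = tables.stream().map(t -> "'" + t + "'").collect(Collectors.joining(","));
        String query = "SELECT SCN, OPERATION, SEG_OWNER, TABLE_NAME, SQL_REDO "
                + "FROM V$LOGMNR_CONTENTS "
                + "WHERE SEG_OWNER = 'MYAPP' AND TABLE_NAME IN (" + inList + ")";

        try (PreparedStatement ps = oracle.prepareStatement(query);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // Dispatch each change to the handler for its table (omitted).
            }
        }

        try (CallableStatement cs = oracle.prepareCall("BEGIN DBMS_LOGMNR.END_LOGMNR; END;")) {
            cs.execute();
        }
    }
}
```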
Using a staging table in the same database with the same engine does not seem like a solution to me; in that case I would rather just use an external database.
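If I go the external route, the staging part would look roughly like this (only a sketch: logminer_staging and its columns are my own naming, the batch size is arbitrary, and the column type mapping would need more care):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StagingCopier {

    // Stream the LogMiner rows once and batch-insert them into the external Postgres
    // staging table; transaction assembly and debugging then run against Postgres.
    static void copyToStaging(Connection oracle, Connection postgres) throws Exception {
        String select = "SELECT SCN, START_SCN, COMMIT_SCN, OPERATION, SEG_OWNER, TABLE_NAME, "
                + "XID, SQL_REDO, TIMESTAMP FROM V$LOGMNR_CONTENTS";
        String insert = "INSERT INTO logminer_staging "
                + "(scn, start_scn, commit_scn, operation, seg_owner, table_name, xid, sql_redo, ts) "
                + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)";

        try (PreparedStatement src = oracle.prepareStatement(select);
             PreparedStatement dst = postgres.prepareStatement(insert);
             ResultSet rs = src.executeQuery()) {
            int pending = 0;
            while (rs.next()) {
                for (int col = 1; col <= 9; col++) {
                    dst.setObject(col, rs.getObject(col));
                }
                dst.addBatch();
                if (++pending == 1_000) {   // keep the Postgres round trips low
                    dst.executeBatch();
                    pending = 0;
                }
            }
            if (pending > 0) {
                dst.executeBatch();
            }
        }
    }
}
```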
As Andrey indicates, reading the view with a cursor seems to be the fastest way, but I'm not impressed by the speed. It does not seem to be limited by network bandwidth but rather by the CPU of the database, and it appears to be single-threaded.
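On the client side the only real knob I have found so far is the JDBC fetch size, which reduces round trips but of course does nothing about the CPU usage on the database side (the value here is just something to experiment with, not a recommendation):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ContentsReader {

    static void readContents(Connection oracle) throws Exception {
        try (PreparedStatement ps = oracle.prepareStatement(
                "SELECT SCN, OPERATION, SEG_OWNER, TABLE_NAME, SQL_REDO FROM V$LOGMNR_CONTENTS")) {
            ps.setFetchSize(10_000);   // pull rows in large batches per round trip
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Process or stage the row; the scan itself stays single-threaded in the DB.
                }
            }
        }
    }
}
```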
Offloading the archive logs and using multiple DB engines to read the files might make it more scalable, but it is probably also very expensive. I'm no expert on Oracle licensing, but most things are not free. Maybe reading archive logs is? I don't know.
Instead of offloading, we could also just accept that the mining takes resources. Running an external DB engine also takes CPUs, so why not add those CPUs to the existing DB and let the mining process use them? From the graphs I saw, I/O does not seem to be impacted.
Having an external transaction/staging table also helps with debugging: I keep the staged data for a couple of hours and can see exactly what is happening.
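The retention/debugging side is trivial once the data sits in Postgres (again only a sketch against the hypothetical logminer_staging table from above; the table/column names and sample values are placeholders):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StagingDebug {

    // Prune staged rows older than the retention window (a couple of hours in my case).
    static void pruneStaging(Connection postgres) throws Exception {
        try (PreparedStatement cleanup = postgres.prepareStatement(
                "DELETE FROM logminer_staging WHERE ts < now() - interval '2 hours'")) {
            cleanup.executeUpdate();
        }
    }

    // Typical debugging query: everything LogMiner returned for one table in an SCN range.
    static void dumpChanges(Connection postgres, String table, long fromScn, long toScn) throws Exception {
        try (PreparedStatement debug = postgres.prepareStatement(
                "SELECT scn, operation, sql_redo FROM logminer_staging "
                + "WHERE table_name = ? AND scn BETWEEN ? AND ? ORDER BY scn")) {
            debug.setString(1, table);
            debug.setLong(2, fromScn);
            debug.setLong(3, toScn);
            try (ResultSet rs = debug.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%d %s %s%n", rs.getLong(1), rs.getString(2), rs.getString(3));
                }
            }
        }
    }
}
```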
To summarize:
- I'm trying to minimize reading the archive and redo logs multiple times
- I might try to use the external database not only for the transactions but also as a staging area
- Storing the transactions in an external DB helps reduce the memory footprint of the connector
- The external DB also helps during debugging and monitoring
I'll keep you informed about my findings. Oracle disabling continuous mining in v19+ has made things much more difficult... to say the least...