Debezium Server


Lechosław Pracz

Mar 17, 2023, 10:49:30 AM
to debezium
Hello, I'm new here and I am testing Debezium with an Oracle database.
I am trying to connect to the Oracle database with Debezium Server, but I am getting the error below:
14:22:18.041 [pool-7-thread-1] ERROR io.debezium.server.ConnectorLifecycle - Connector completed: success = 'false', message = 'Connector configuration is not valid. Unable to connect: Failed to resolve Oracle database version', error = 'null'

I'm running both the Oracle database and Debezium Server from Docker images.

docker run -it --name server  -p 8080:8080  -v /home//debezium/conf:/debezium/conf -v /home/debezium/data:/debezium/data --link dbz_oracle21 --link redisdebez debezium/server

Here is my application.properties:

#sink
debezium.sink.type=redis
debezium.sink.redis.address=172.17.0.7:6379

#connector
debezium.source.connector.class=io.debezium.connector.oracle.OracleConnector
debezium.source.offset.storage=io.debezium.server.redis.RedisOffsetBackingStore
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.offset.flush.interval.ms=0
debezium.source.database.hostname=172.17.0.4
debezium.source.database.port=1521
debezium.source.database.user=c##dbzuser
debezium.source.database.password=dbz
debezium.source.database.dbname=XE
debezium.source.database.pdb.name=XEPDB1
debezium.source.database.server.name=tutorial
debezium.source.topic.prefix=tutorial
debezium.source.table.include.list=C##DBZUSER.CUSTOMERS


When I try it with Kafka Connect, following the instructions at https://debezium.io/blog/2022/09/30/debezium-oracle-series-part-1/ and https://debezium.io/blog/2022/10/06/debezium-oracle-series-part-2/, everything works fine; I can see the results of inserts, deletes, and so on.

The full stack trace:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/debezium/lib/logback-classic-1.2.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/debezium/lib/slf4j-jboss-logmanager-1.2.0.Final.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
14:22:16.084 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
14:22:16.088 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
14:22:16.089 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
14:22:16.747 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
14:22:16.748 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11
14:22:16.750 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
14:22:16.752 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
14:22:16.755 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.storeFence: available
14:22:16.756 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
14:22:16.757 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
14:22:16.759 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
14:22:16.761 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$7 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @779895e2
14:22:16.766 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
14:22:16.767 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
14:22:16.803 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 2084569088 bytes (maybe)
14:22:16.804 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
14:22:16.804 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
14:22:16.807 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
14:22:16.808 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
14:22:16.810 [Thread-0] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
14:22:16.810 [Thread-0] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
14:22:16.813 [Thread-0] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 1 (auto-detected)
14:22:16.818 [Thread-0] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 57:5c:a0:88:09:cd:5d:9e (user-set)
14:22:16.904 [main] DEBUG io.vertx.core.logging.LoggerFactory - Using io.vertx.core.logging.SLF4JLogDelegateFactory
14:22:16.987 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
14:22:16.987 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
14:22:17.022 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 6
14:22:17.058 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
14:22:17.058 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
14:22:17.068 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
14:22:17.205 [main] DEBUG io.debezium.server.DebeziumServer - Found 1 candidate consumer(s)
14:22:17.207 [main] INFO io.debezium.server.BaseChangeConsumer - Using 'io.debezium.server.BaseChangeConsumer$$Lambda$199/0x0000000840285840@26a4551a' stream name mapper
14:22:17.234 [main] INFO io.debezium.storage.redis.RedisCommonConfig - Configuration for 'RedisStreamChangeConsumerConfig' with prefix 'debezium.sink.': {type=redis, redis.address=172.17.0.7:6379}
14:22:17.371 [main] INFO io.debezium.storage.redis.RedisConnection - Using Redis client 'JedisClient [jedis=Jedis{Connection{DefaultJedisSocketFactory{172.17.0.7:6379}}}]'
14:22:17.379 [main] DEBUG io.debezium.server.redis.RedisMemoryThreshold - Redis 'info memory' field 'maxmemory' is 0. Consider configuring it.
14:22:17.380 [main] INFO io.debezium.server.DebeziumServer - Consumer 'io.debezium.server.redis.RedisStreamChangeConsumer' instantiated
14:22:17.408 [main] DEBUG io.debezium.server.DebeziumServer - Configuration for DebeziumEngine: {connector.class=io.debezium.connector.oracle.OracleConnector, database.user=c##dbzuser, database.dbname=XE, offset.storage=io.debezium.server.redis.RedisOffsetBackingStore, database.pdb.name=XEPDB1, database.server.name=tutorial, database.port=1521, offset.flush.interval.ms=0, topic.prefix=tutorial, offset.storage.redis.address=172.17.0.7:6379, offset.storage.file.filename=data/offsets.dat, database.hostname=172.17.0.4, database.password=dbz, name=redis, table.include.list=C##DBZUSER.CUSTOMERS, schema.history.internal.redis.address=172.17.0.7:6379}
14:22:17.493 [main] INFO org.apache.kafka.connect.json.JsonConverterConfig - JsonConverterConfig values:
        converter.type = key
        decimal.format = BASE64
        schemas.cache.size = 1000
        schemas.enable = false

14:22:17.495 [main] INFO org.apache.kafka.connect.json.JsonConverterConfig - JsonConverterConfig values:
        converter.type = value
        decimal.format = BASE64
        schemas.cache.size = 1000
        schemas.enable = false

14:22:17.511 [main] INFO io.debezium.embedded.EmbeddedEngine$EmbeddedConfig - EmbeddedConfig values:
        access.control.allow.methods =
        access.control.allow.origin =
        admin.listeners = null
        auto.include.jmx.reporter = true
        bootstrap.servers = [localhost:9092]
        client.dns.lookup = use_all_dns_ips
        config.providers = []
        connector.client.config.override.policy = All
        header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
        key.converter = class org.apache.kafka.connect.json.JsonConverter
        listeners = [http://:8083]
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        offset.flush.interval.ms = 0
        offset.flush.timeout.ms = 5000
        offset.storage.file.filename = data/offsets.dat
        offset.storage.partitions = null
        offset.storage.replication.factor = null
        offset.storage.topic =
        plugin.path = null
        response.http.headers.config =
        rest.advertised.host.name = null
        rest.advertised.listener = null
        rest.advertised.port = null
        rest.extension.classes = []
        ssl.cipher.suites = null
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        task.shutdown.graceful.timeout.ms = 5000
        topic.creation.enable = true
        topic.tracking.allow.reset = true
        topic.tracking.enable = true
        value.converter = class org.apache.kafka.connect.json.JsonConverter

14:22:17.513 [main] WARN org.apache.kafka.connect.runtime.WorkerConfig - Variables cannot be used in the 'plugin.path' property, since the property is used by plugin scanning before the config providers that replace the variables are initialized. The raw value 'null' was used for plugin scanning, as opposed to the transformed value 'null', and this may cause unexpected results.
14:22:17.517 [main] INFO org.apache.kafka.connect.json.JsonConverterConfig - JsonConverterConfig values:
        converter.type = key
        decimal.format = BASE64
        schemas.cache.size = 1000
        schemas.enable = true

14:22:17.517 [main] INFO org.apache.kafka.connect.json.JsonConverterConfig - JsonConverterConfig values:
        converter.type = value
        decimal.format = BASE64
        schemas.cache.size = 1000
        schemas.enable = true

14:22:17.518 [main] INFO org.apache.kafka.connect.json.JsonConverterConfig - JsonConverterConfig values:
        converter.type = header
        decimal.format = BASE64
        schemas.cache.size = 1000
        schemas.enable = true

14:22:17.520 [main] INFO io.debezium.server.DebeziumServer - Engine executor started
14:22:17.567 [vert.x-eventloop-thread-1] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
14:22:17.567 [vert.x-eventloop-thread-1] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
14:22:17.570 [vert.x-eventloop-thread-1] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 127.0.0.1)
14:22:17.571 [vert.x-eventloop-thread-1] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
14:22:17.690 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 6
14:22:17.692 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 6
14:22:17.692 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
14:22:17.696 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 3
14:22:17.697 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 65536
14:22:17.697 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
14:22:17.698 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
14:22:17.702 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
14:22:17.704 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
14:22:17.704 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
14:22:17.704 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: false
14:22:17.705 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
14:22:17.777 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
14:22:17.778 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
14:22:17.778 [vert.x-eventloop-thread-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
{"timestamp":"2023-03-17T14:22:17.844Z","sequence":103,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"debezium-server-dist 2.2.0.Alpha3 on JVM (powered by Quarkus 2.16.3.Final) started in 2.668s. Listening on: http://0.0.0.0:8080","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"5b1351394b5e","processName":"io.debezium.server.Main","processId":1}
{"timestamp":"2023-03-17T14:22:17.866Z","sequence":104,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Profile prod activated. ","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"5b1351394b5e","processName":"io.debezium.server.Main","processId":1}
{"timestamp":"2023-03-17T14:22:17.867Z","sequence":105,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Installed features: [cdi, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, vertx]","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"5b1351394b5e","processName":"io.debezium.server.Main","processId":1}
14:22:18.021 [pool-7-thread-1] ERROR io.debezium.connector.oracle.OracleConnector - Failed testing connection for {connector.class=io.debezium.connector.oracle.OracleConnector, database.dbname=XE, database.user=c##dbzuser, offset.storage=io.debezium.server.redis.RedisOffsetBackingStore, database.pdb.name=XEPDB1, database.server.name=tutorial, offset.flush.timeout.ms=5000, errors.retry.delay.max.ms=10000, database.port=1521, offset.flush.interval.ms=0, topic.prefix=tutorial, offset.storage.redis.address=172.17.0.7:6379, offset.storage.file.filename=data/offsets.dat, errors.max.retries=-1, database.hostname=172.17.0.4, database.password=********, name=redis, errors.retry.delay.initial.ms=300, table.include.list=C##DBZUSER.CUSTOMERS, value.converter=org.apache.kafka.connect.json.JsonConverter, key.converter=org.apache.kafka.connect.json.JsonConverter, schema.history.internal.redis.address=172.17.0.7:6379} with user '[database.user,null,[],[],true]'
java.lang.RuntimeException: Failed to resolve Oracle database version
        at io.debezium.connector.oracle.OracleConnection.resolveOracleDatabaseVersion(OracleConnection.java:171)
        at io.debezium.connector.oracle.OracleConnection.<init>(OracleConnection.java:81)
        at io.debezium.connector.oracle.OracleConnection.<init>(OracleConnection.java:76)
        at io.debezium.connector.oracle.OracleConnector.validateConnection(OracleConnector.java:74)
        at io.debezium.connector.common.RelationalBaseSourceConnector.validate(RelationalBaseSourceConnector.java:55)
        at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:716)
        at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:229)
        at io.debezium.server.DebeziumServer.lambda$start$1(DebeziumServer.java:170)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.sql.SQLException: No suitable driver found for jdbc:oracle:thin:@172.17.0.4:1521/XE
        at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:702)
        at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:189)
        at io.debezium.jdbc.JdbcConnection.lambda$patternBasedFactory$0(JdbcConnection.java:189)
        at io.debezium.jdbc.JdbcConnection$ConnectionFactoryDecorator.connect(JdbcConnection.java:127)
        at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:873)
        at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:868)
        at io.debezium.jdbc.JdbcConnection.queryAndMap(JdbcConnection.java:621)
        at io.debezium.jdbc.JdbcConnection.queryAndMap(JdbcConnection.java:495)
        at io.debezium.connector.oracle.OracleConnection.resolveOracleDatabaseVersion(OracleConnection.java:141)
        ... 10 common frames omitted
14:22:18.041 [pool-7-thread-1] ERROR io.debezium.server.ConnectorLifecycle - Connector completed: success = 'false', message = 'Connector configuration is not valid. Unable to connect: Failed to resolve Oracle database version', error = 'null'
14:22:18.063 [main] INFO io.debezium.server.DebeziumServer - Received request to stop the engine
14:22:18.063 [main] INFO io.debezium.embedded.EmbeddedEngine - Stopping the embedded engine
{"timestamp":"2023-03-17T14:22:18.096Z","sequence":106,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"debezium-server-dist stopped in 0.054s","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"5b1351394b5e","processName":"io.debezium.server.Main","processId":1}

Thanks


Robin Tang

Mar 17, 2023, 12:50:27 PM
to debe...@googlegroups.com
So if I'm understanding correctly, DBZ through Kafka Connect is working but DBZ server is not?

Have you confirmed that you are loading the `application.properties` file correctly? Either by exec-ing into the running container or looking at the DBZ logs?
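A quick way to do that check, using the container name and conf mount from the docker run command earlier in the thread (paths are assumed to match that command):

```shell
# Print the properties file Debezium Server actually sees inside the container.
docker exec -it server cat /debezium/conf/application.properties
```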




Chris Cranford

Mar 18, 2023, 2:23:11 PM
to debe...@googlegroups.com
Hi -

The issue appears to be that the Docker image running Debezium Server does not include the Oracle JDBC driver.  Keep in mind that Debezium does not redistribute the Oracle driver, so you'll need to either mount a manually downloaded driver into the container, or bake your own image based on the official Debezium Server image, adding the Oracle driver jars as part of that build step.  That should solve the "No suitable driver" problem.
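For the "bake your own image" option, a minimal Dockerfile could look like this (the driver filename is whatever version you downloaded; 21.6.0.0 is used here only as an example):

```dockerfile
# Build a custom Debezium Server image with the Oracle JDBC driver baked in.
FROM debezium/server
# Assumes ojdbc8-21.6.0.0.jar sits next to this Dockerfile.
COPY ojdbc8-21.6.0.0.jar /debezium/lib/
```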

One additional point: I noticed the use of "XE" in the configuration, and I'm assuming you're using Oracle 21 XE.  Unfortunately, Oracle XE editions are not supported after Oracle 11, so you need to use either Standard Edition (SE) or Enterprise Edition (EE) in order to have an Oracle database with all the necessary change data capture features available.

Thanks,
Chris

Lechosław Pracz

Mar 20, 2023, 4:36:19 AM
to debezium
OK, thanks for the response, Chris.
So where in the Debezium Server container image should I mount the driver (ojdbc8.jar) that I downloaded manually and have locally on the host?

Chris Cranford

Mar 20, 2023, 8:19:13 AM
to debe...@googlegroups.com
Hi, you should be able to mount it inside the Debezium for Oracle connector directory where its other dependencies reside.  Make sure that you're using the 21.6 version of the ojdbc8.jar.
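On the official image that directory is /debezium/lib (the same path the compose file later in this thread mounts into), so a bind mount along these lines should work; the host path is illustrative:

```shell
# Mount a manually downloaded ojdbc8.jar into Debezium Server's lib directory.
# Host path /home/debezium/ojdbc8-21.6.0.0.jar is an assumption; adjust to yours.
docker run -it --name server -p 8080:8080 \
  -v /home/debezium/ojdbc8-21.6.0.0.jar:/debezium/lib/ojdbc8-21.6.0.0.jar \
  -v /home/debezium/conf:/debezium/conf \
  -v /home/debezium/data:/debezium/data \
  debezium/server
```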

Thanks,
Chris

Lechosław Pracz

Mar 22, 2023, 10:15:44 AM
to debezium
Thanks, Chris, it's working.

I have another question. In my organisation we have Oracle 19 Enterprise Edition and a working Redis. We want to use Debezium Server in Docker to capture change data, with Redis as the sink. We have a lot of schemas in the database and a lot of tables in them, but we want to monitor only five tables from one schema and one table from another schema.
If I understand the documentation correctly, I don't have to enable supplemental log data for the entire database, only for the tables whose changes I want to capture?
Do I need a separate Debezium Server container for each table, or a separate container per schema?
Which connector properties should I pay attention to in order to avoid database performance problems?

Leszek

Chris Cranford

Mar 23, 2023, 12:46:52 AM
to debe...@googlegroups.com
Hi Leszek -

The database itself requires minimal supplemental logging; however, you only need to apply full supplemental logging on all columns for the tables you wish to capture.  As for the number of connectors/server instances, I would suggest that for Oracle you use a single connector.  As long as all the data you wish to capture is within a single database, you can use the "table.include.list" property to capture changes across multiple schemas/tables, such as:

  "table.include.list": "DEBEZIUM1.TABLE1,DEBEZIUM1.TABLE2,DEBEZIUM2.TABLE3"
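Concretely, the logging setup described above is usually done with DDL like the following (schema and table names are placeholders matching the example include list):

```sql
-- Minimal supplemental logging at the database level.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Full (all-columns) supplemental logging only on the captured tables.
ALTER TABLE DEBEZIUM1.TABLE1 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
ALTER TABLE DEBEZIUM1.TABLE2 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```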

In terms of performance, the property with the biggest impact is "log.mining.strategy".  The default writes the data dictionary to the redo logs, which allows the connector to seamlessly capture schema changes.  But this has the trade-off that more information must be written to the archive logs on each log switch, and that it takes longer for the Oracle LogMiner process to start.  When using the default strategy, we generally recommend that your redo logs be sized at a minimum of 500MB, but 1GB-3GB is generally better.
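To see how your redo logs are currently sized before tuning, you can query the `V$LOG` view:

```sql
-- Current redo log groups and their sizes in MB.
SELECT GROUP#, BYTES / 1024 / 1024 AS SIZE_MB, STATUS
FROM V$LOG
ORDER BY GROUP#;
```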

If the tables you're capturing won't have schema changes, or if you can perform any schema changes in a lock-step fashion, you could try "log.mining.strategy=online_catalog".  Schema changes are not seamlessly captured in this mode and require a lock-step approach: refrain from making data changes to the table and wait for all pending changes to be read by the connector; once that's complete, make the schema change and verify it has been picked up; then resume changes on the table.  In effect, you want to make sure that when the connector reads the DDL change no data changes are mixed with it, or LogMiner may be unable to decode the changes properly and changes could be missed, as this mode does not enable LogMiner's automated schema tracking feature, for performance reasons.

Hope that helps.
Chris

Chris Cranford

Mar 29, 2023, 8:20:18 AM
to debe...@googlegroups.com
Hi Lechoslaw -

The Debezium Server logging is controlled by the Quarkus logging settings in the application.properties file where you also configure the source/sink connectors.

quarkus.log.level=WARN
quarkus.log.category."io.debezium".level=WARN

Hope that helps.
Chris

On 3/28/23 12:59, Lechosław Pracz wrote:
Hi, and thanks again.

Debezium Server works fine, but now I have a problem with the Debezium Server log file: it grows very fast. I tried to set the environment variable LOG_LEVEL=ERROR, but it doesn't work.
Here is my docker compose file:
version: '3'
services:
  redis:
    image: redis
    container_name: redis
    ports:
      - "6379:6379"  
    volumes:
      - /home/pl1468/debezkoll/redis:/data
    command: redis-server --appendonly yes --save "300 1"
    restart: always
  debezserv:
    image: debezium/server
    container_name: debserkoll
    ports:
      - "8080:8080"
    volumes:
      - /home/pl1468/debezkoll/oracle/ojdbc8-21.6.0.0.jar:/debezium/lib/ojdbc8-21.6.0.0.jar
      - /home/pl1468/debezkoll/conf:/debezium/conf
      - /home/pl1468/debezkoll/data:/debezium/data
    environment:
      - LOG_LEVEL=ERROR

    restart: always

Leszek

Chris Cranford

Apr 1, 2023, 10:38:33 AM
to debe...@googlegroups.com
Hi Lechoslaw -

So the issue appears to be that Redis reported that 85% of its memory had been consumed, and as a result the connector stopped.  You can raise this limit from its 85% default by setting debezium.sink.redis.memory.threshold.percentage to a value higher than 85; however, I believe this is ultimately only a band-aid, as it seems the data you're trying to store in Redis may exceed the memory available, or at least come quite close to that limit.
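In Debezium Server's application.properties that would be, for example (95 is only an illustrative value):

```properties
# Raise the Redis sink back-pressure threshold from its 85% default.
debezium.sink.redis.memory.threshold.percentage=95
```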

As for the snapshot, Debezium for Oracle (like all our connectors) will perform a schema snapshot of all tables.  This does not capture the data in all tables, only the structure of the tables themselves.  You can choose to set debezium.source.schema.history.internal.store.only.captured.tables.ddl=true, which will restrict the schema snapshot to only the captured tables; however, understand that this will require additional steps later if you decide to incrementally snapshot, or to add an existing table to your include list, because the newly added table's schema will not have been captured.  And since you're using the file-based schema history implementation, the schema history isn't even being published to Redis, so even capturing the structure of all tables isn't consuming space in the key/value store and can't be what triggers this particular threshold.

I would suggest checking Redis and see what topic is consuming the most space and then determine whether or not you can retain all records in Redis from that table with your present Redis setup.
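One way to do that check from the command line (host and port are taken from the sink config above; the stream name shown is an assumption based on the topic prefix and one of the tables in table.include.list):

```shell
# Scan Redis for the largest keys of each type.
redis-cli -h redis -p 6379 --bigkeys

# Inspect a specific Debezium stream's length and approximate memory use.
redis-cli -h redis -p 6379 XLEN debeztest1.koll.account
redis-cli -h redis -p 6379 MEMORY USAGE debeztest1.koll.account
```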

Hope that helps.
Chris

On 3/30/23 03:44, Lechosław Pracz wrote:
Thanks for the response.
Now I've got a problem with Debezium Server.  About half an hour after starting it, I get the following message during the initial snapshot:
debserkoll  | {"timestamp":"2023-03-30T07:22:04.496Z","sequence":15276,"loggerClassName":"org.slf4j.impl.Slf4jLogger","loggerName":"io.debezium.server.redis.RedisMemoryThreshold","level":"WARN","message":"Memory threshold percentage was reached (current: 85%, configured: 85%, used_memory: 3653442368, maxmemory: 4294967296).","threadName":"pool-7-thread-1","threadId":16,"mdc":{},"ndc":"","hostName":"ec6795e067ba","processName":"io.debezium.server.Main","processId":1}
debserkoll  | {"timestamp":"2023-03-30T07:22:04.496Z","sequence":15277,"loggerClassName":"org.slf4j.impl.Slf4jLogger","loggerName":"io.debezium.server.redis.RedisStreamChangeConsumer","level":"WARN","message":"Stopped consuming records!","threadName":"pool-7-thread-1","threadId":16,"mdc":{},"ndc":"","hostName":"ec6795e067ba","processName":"io.debezium.server.Main","processId":1}
After this, the whole process stops.

Here is my application.properties:
#sink
debezium.sink.type=redis
debezium.sink.redis.address=redis:6379

#connector
debezium.source.connector.class=io.debezium.connector.oracle.OracleConnector
debezium.source.offset.storage=io.debezium.server.redis.RedisOffsetBackingStore
debezium.source.schema.history.internal=io.debezium.storage.file.history.FileSchemaHistory
debezium.source.schema.history.internal.file.filename=schema.dat
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.offset.flush.interval.ms=60000
debezium.source.offset.flush.timeout.ms=5000
debezium.source.database.hostname=10.1.0.1
debezium.source.database.port=1521
debezium.source.database.user=dbzuser
debezium.source.database.password=password
debezium.source.database.dbname=koll
debezium.source.database.server.name=debeztest1
debezium.source.topic.prefix=debeztest1
debezium.source.table.include.list=koll.account,koll.comments,koll.subscriber,koll.documents,ksi.ugd,koll.docu
debezium.source.heartbeat.interval.ms=0
debezium.source.provide.transaction.metadata=true
debezium.source.log.mining.strategy=online_catalog
debezium.source.log.mining.batch.size.default=50000
debezium.source.decimal.handling.mode=double

What do I need for the server to run properly?

And I have a second question: is it necessary to do an initial snapshot of the whole database, or is it possible to snapshot only the tables that I have in table.include.list?

Leszek

Oren Elias

Apr 3, 2023, 2:04:49 AM
to debe...@googlegroups.com
The Redis sink has a default setting to apply backpressure when it senses that the target Redis has reached 85% of total memory. You can increase this via a property setting.
The message should actually be an INFO since it will be retried once available memory frees up in the target Redis.
-Oren
