Using Debezium 1.8 || getting an issue with Cassandra 3.10


Ishant Bhatia

Jun 9, 2025, 4:42:50 PM
to debezium
Can we use Debezium 1.8 with Cassandra 3.10?

Issue --> Failed to find any class that implements Connector and which name matches io.debezium.connector.cassandra.CassandraConnector

Can anyone help me?

Regards,
Ishant

jiri.p...@gmail.com

Jun 12, 2025, 3:19:48 AM
to debezium
Hi,

How do you try to start the connector? Also, why do you want to use Debezium 1.8?

Jiri

Ishant Bhatia

Jun 12, 2025, 3:37:58 AM
to debe...@googlegroups.com
I have tried with all the Debezium jars, but I get the same exception; I cannot see the Connector interface or class.
Below is my Cassandra CDC setup ---->

Cassandra 3.10, Debezium 1.8.1, Java 8, Kafka 3.7.2


I am trying to set up CDC so that Debezium pushes messages from the Cassandra CDC commit log files to Kafka:

1) enabled CDC at the table and node level in Cassandra 3.10
2) installed Kafka on the Cassandra machine
3) added plugin.path ({Debezium path}) in the Kafka Connect worker config (see the check sketched after this list)
4) added a debezium-cassandra-connector.properties file
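As a quick sanity check of step 3, listing what Kafka Connect actually scans under plugin.path shows whether the connector jars are visible to the worker at all (the directory below is the one from the worker config further down; a sketch, not a fix):

# List the jars Kafka Connect would scan under plugin.path (path taken from connect-standalone.properties)
ls -l /Users/isbhatia/opt/kafka_2.12-3.7.2/plugins/debezium-connector-cassandra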


Getting this exception:
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.debezium.connector.cassandra.CassandraConnector





  • connect-standalone.properties

    bootstrap.servers=localhost:9092
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=true
    value.converter.schemas.enable=true
    offset.storage.file.filename=/tmp/connect.offsets
    offset.flush.interval.ms=10000
    plugin.path=/Users/isbhatia/opt/kafka_2.12-3.7.2/plugins/debezium-connector-cassandra


    debezium-cassandra-connector.properties
    # Connector configuration
    name=test_connector
    #connector.class=io.debezium.connector.cassandra.CassandraConnector
    commit.log.relocation.dir=/opt/cassandra/data/relocation/
    http.port=8000

    # Cassandra connection details
    cassandra.config=/Users/isbhatia/opt/cassandra/apache-cassandra-3.10/conf/cassandra.yaml
    cassandra.hosts=127.0.0.1
    cassandra.port=9042

    # Kafka producer settings (for messages sent by the connector)
    kafka.producer.bootstrap.servers=127.0.0.1:9092
    kafka.producer.retries=1
    kafka.producer.retry.backoff.ms=1000
    topic.prefix=test_prefix

    # Converters for key and value serialization (using Avro with Schema Registry)
    key.converter=io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url=http://localhost:8081
    value.converter=io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url=http://localhost:8081

    # Offset management (for standalone mode)
    offset.backing.store.dir=/Users/isbhatia240/opt/kafka_2.12-3.7.2/test_dir/

    # Debezium Cassandra specific snapshot settings
    snapshot.consistency=ONE
    snapshot.mode=ALWAYS
    latest.commit.log.only=true


    -- CREATE TABLE na.cdc_demo_events (
    -- id UUID PRIMARY KEY,
    -- name TEXT,
    -- value INT
    -- ) WITH cdc=true;


    cassandra.yaml


    cdc_enabled: true
    cdc_raw_directory: /opt/cassandra/data/cdc_raw
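For a table that already exists, the same cdc flag can be set with ALTER TABLE instead of CREATE TABLE; a minimal sketch via cqlsh, reusing the table from the commented CREATE above (add -u/-p if authentication is enabled):

# Enable CDC on an existing table and verify the flag (table name taken from the CREATE above)
cqlsh 127.0.0.1 9042 -e "ALTER TABLE na.cdc_demo_events WITH cdc=true;"
cqlsh 127.0.0.1 9042 -e "DESCRIBE TABLE na.cdc_demo_events;"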





Ishant Bhatia

Jun 12, 2025, 3:49:16 AM
to debe...@googlegroups.com

Start the connector:

bin/connect-standalone.sh config/connect-standalone.properties config/debezium-cassandra-connector.properties

jiri.p...@gmail.com

Jun 12, 2025, 5:34:29 AM
to debezium
Hi,

Cassandra connectors are not Kafka Connect connectors but standalone Java apps. Please follow the tutorial https://github.com/debezium/debezium-examples/tree/1.x/tutorial#using-cassandra to see how it works.
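In other words, the connector is launched directly as its own Java process with a properties file, rather than deployed into a Connect worker's plugin.path; a minimal sketch, mirroring the jar and config file names used later in this thread (the exact jar name depends on the distribution you download or build):

# Run the Debezium Cassandra connector as a standalone process, not via connect-standalone.sh
java -jar debezium-connector-cassandra.jar config.properties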

Have a nice day

Jiri

Ishant Bhatia

Jun 12, 2025, 6:59:19 AM
to debe...@googlegroups.com
Can we do it without making a Dockerfile?

Regards,
Ishant

jiri.p...@gmail.com

Jun 12, 2025, 7:01:10 AM
to debezium
Yes, this is just an example. You can do the same without containers.

Jiri

Ishant Bhatia

Jun 12, 2025, 3:23:45 PM
to debe...@googlegroups.com
Run command ->

bin/connect-standalone.sh config/connect-standalone.properties config/debezium-cassandra-connector.properties




property file -->

# Connector configuration
name=cassandra-source-connector
connector.class=io.debezium.connector.cassandra.CassandraConnector
#connector.class=io.lenses.streamreactor.connect.cassandra.source.CassandraSourceConnector
commit.log.relocation.dir=/opt/cassandra/data/relocation/
http.port=8000

# Cassandra connection details
cassandra.config=/Users/isbhatia2401/opt/cassandra/apache-cassandra-3.10/conf/cassandra.yaml
cassandra.hosts=127.0.0.1
cassandra.port=9042

# Kafka producer settings (for messages sent by the connector)
kafka.producer.bootstrap.servers=127.0.0.1:9092
kafka.producer.retries=1
kafka.producer.retry.backoff.ms=1000
topic.prefix=test_prefix

# Converters for key and value serialization (using Avro with Schema Registry)
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

# Offset management (for standalone mode)
offset.backing.store.dir=/Users/isbhatia2401/opt/kafka_2.12-3.7.2/test_dir/

# Debezium Cassandra specific snapshot settings
snapshot.consistency=ONE
snapshot.mode=ALWAYS
latest.commit.log.only=true


Getting the issue below -->

[2025-06-13 00:51:05,016] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:299)

[2025-06-13 00:51:05,016] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:57)

[2025-06-13 00:51:05,024] ERROR Failed to create connector for config/debezium-cassandra-connector.properties (org.apache.kafka.connect.cli.ConnectStandalone:85)

[2025-06-13 00:51:05,024] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:96)

java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.debezium.connector.cassandra.CassandraConnector, available connectors are: PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorCheckpointConnector, name='org.apache.kafka.connect.mirror.MirrorCheckpointConnector', version='3.7.2', encodedVersion=3.7.2, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorHeartbeatConnector, name='org.apache.kafka.connect.mirror.MirrorHeartbeatConnector', version='3.7.2', encodedVersion=3.7.2, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorSourceConnector, name='org.apache.kafka.connect.mirror.MirrorSourceConnector', version='3.7.2', encodedVersion=3.7.2, type=source, typeName='source', location='classpath'}

at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:135)

at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:108)

at org.apache.kafka.connect.cli.ConnectStandalone.processExtraArgs(ConnectStandalone.java:93)

at org.apache.kafka.connect.cli.AbstractConnectCli.startConnect(AbstractConnectCli.java:150)

at org.apache.kafka.connect.cli.AbstractConnectCli.run(AbstractConnectCli.java:94)

at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:182)

Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.debezium.connector.cassandra.CassandraConnector, available connectors are: PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorCheckpointConnector, name='org.apache.kafka.connect.mirror.MirrorCheckpointConnector', version='3.7.2', encodedVersion=3.7.2, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorHeartbeatConnector, name='org.apache.kafka.connect.mirror.MirrorHeartbeatConnector', version='3.7.2', encodedVersion=3.7.2, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorSourceConnector, name='org.apache.kafka.connect.mirror.MirrorSourceConnector', version='3.7.2', encodedVersion=3.7.2, type=source, typeName='source', location='classpath'}

at org.apache.kafka.connect.runtime.isolation.Plugins.connectorClass(Plugins.java:320)

at org.apache.kafka.connect.runtime.isolation.Plugins.newConnector(Plugins.java:291)

at org.apache.kafka.connect.runtime.AbstractHerder.lambda$getConnector$7(AbstractHerder.java:756)

at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1713)

at org.apache.kafka.connect.runtime.AbstractHerder.getConnector(AbstractHerder.java:756)

at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:501)

at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$3(AbstractHerder.java:413)

at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)

at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)

at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)

at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)

at java.base/java.lang.Thread.run(Thread.java:1575)

[2025-06-13 00:51:05,024] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:67)

[2025-06-13 00:51:05,024] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:354)

[2025-06-13 00:51:05,027] INFO Stopped o.e.j.s.ServletContextHandler@16f9d001{/,null,STOPPED} (org.eclipse.jetty.server.handler.ContextHandler:1159)

[2025-06-13 00:51:05,029] INFO Stopped http_8083@be57341{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:383)

[2025-06-13 00:51:05,029] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:149)

[2025-06-13 00:51:05,030] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:383)

[2025-06-13 00:51:05,030] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:115)

[2025-06-13 00:51:05,030] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:248)

[2025-06-13 00:51:05,030] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:71)

[2025-06-13 00:51:05,030] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)

[2025-06-13 00:51:05,030] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)

[2025-06-13 00:51:05,031] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)

[2025-06-13 00:51:05,031] INFO App info kafka.connect for 192.168.1.6:8083 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)

[2025-06-13 00:51:05,031] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:269)

[2025-06-13 00:51:05,031] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:126)

[2025-06-13 00:51:05,031] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:72)


jiri.p...@gmail.com

Jun 13, 2025, 12:28:49 AM
to debezium

Ishant Bhatia

Jun 13, 2025, 5:10:23 PM
to debe...@googlegroups.com

Thanks for the guidance.
Now I am getting the same issue again and again; this time I am running it directly on the machine.

It reports an authentication issue while Debezium is trying to connect to Cassandra, even though I have provided the username and password in the Debezium property file (see the quick check after the stack trace below).



21:07:01.297 [s0-admin-1] WARN com.datastax.oss.driver.internal.core.control.ControlConnection - [s0] Authentication errors encountered on all contact points. Please check your authentication configuration.

21:07:01.298 [s0-admin-1] DEBUG com.datastax.oss.driver.internal.core.session.DefaultSession - [s0] Initialization failed, force closing

java.util.concurrent.CompletionException: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=2068dd13): [com.datastax.oss.driver.api.core.auth.AuthenticationException: Authentication error on node /127.0.0.1:9042: Node /127.0.0.1:9042 requires authentication (org.apache.cassandra.auth.PasswordAuthenticator), but no authenticator configured]

at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)

at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)

at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)

at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)

at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)

at com.datastax.oss.driver.internal.core.control.ControlConnection$SingleThreaded.lambda$init$3(ControlConnection.java:327)

at com.datastax.oss.driver.internal.core.control.ControlConnection$SingleThreaded.connect(ControlConnection.java:358)

at com.datastax.oss.driver.internal.core.control.ControlConnection$SingleThreaded.lambda$connect$8(ControlConnection.java:398)

at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)

at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)

at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)

at com.datastax.oss.driver.shaded.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54)

at com.datastax.oss.driver.shaded.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)

at com.datastax.oss.driver.shaded.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

at com.datastax.oss.driver.shaded.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

at java.lang.Thread.run(Thread.java:748)
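A quick way to confirm, outside the connector, that the node enforces PasswordAuthenticator and that the credentials themselves work (a sketch, assuming cqlsh is on the PATH and the default cassandra/cassandra account):

# Without credentials this should fail with an authentication error, mirroring the driver message above
cqlsh 127.0.0.1 9042 -e "SELECT release_version FROM system.local;"
# With valid credentials the same query should succeed
cqlsh -u cassandra -p cassandra 127.0.0.1 9042 -e "SELECT release_version FROM system.local;"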



Regards,

Ishant


Ishant Bhatia

Jun 13, 2025, 5:14:04 PM
to debe...@googlegroups.com
I run the Debezium connector with -->

java -jar debezium-connector-cassandra.jar config.properties

Ishant Bhatia

Jun 14, 2025, 1:30:38 PM
to debe...@googlegroups.com

My config.properties file (see the verification sketch after it):


# Connector configuration
connector.name=cassandra-source-connector
commit.log.relocation.dir=/data/cassandra/commitlog/relocation/
http.port=8000

# Cassandra connection details
cassandra.config=/usr/local/cassandra/conf/cassandra.yaml
cassandra.hosts=127.0.0.1
cassandra.port=9042
cassandra.username=cassandra
cassandra.password=cassandra

# Kafka producer settings (for messages sent by the connector)
kafka.producer.bootstrap.servers=127.0.0.1:9092
kafka.producer.retries=1
kafka.producer.retry.backoff.ms=1000
topic.prefix=cassandra_cdc

# Converters for key and value serialization (using Avro with Schema Registry)
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter.schemas.enable=false
value.converter.schemas.enable=false

# Offset management (for standalone mode)
offset.backing.store.dir=/opt/kafka_2.12-3.7.2/test_dir/

# Debezium Cassandra specific snapshot settings
snapshot.consistency=ONE
snapshot.mode=ALWAYS
latest.commit.log.only=true
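Once the jar runs cleanly, a simple way to verify that records reach Kafka is the console consumer; a sketch assuming the topic.prefix above and the usual prefix.keyspace.table topic naming (confirm the actual name with --list):

# List topics to see what the connector created for the CDC-enabled table
bin/kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --list
# Consume from the assumed topic (topic.prefix=cassandra_cdc plus the keyspace.table used later in this thread)
bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 \
  --topic cassandra_cdc.ncl.cdc_events --from-beginning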

Ishant Bhatia

Jun 16, 2025, 4:40:03 AM
to debe...@googlegroups.com

One question --> please help me with the last step, running locally.

Locally I started Cassandra, ZooKeeper, the broker, a consumer, a producer, and the Debezium jar (it stopped after processing messages from Cassandra).

Getting the error below -> the Debezium jar stopped:

13:56:10.893 [pool-4-thread-1] ERROR io.debezium.connector.cassandra.QueueProcessor - Processing of event Record{source={cluster=cassandra_cdc, keyspace=ncl, file=, connector=cassandra, pos=-1, ts_micro=1750062369950000, version=${project.version}, snapshot=true, table=cdc_events}, after={event_id={name=event_id, value=ishant2, deletionTs=null, type=PARTITION}, event_source={name=event_source, value=ishant, deletionTs=null, type=REGULAR}}, keySchema=Schema{io.debezium.connector.cassandra.cassandra_cdc.ncl.cdc_events.Key:STRUCT}, valueSchema=Schema{io.debezium.connector.cassandra.cassandra_cdc.ncl.cdc_events.Envelope:STRUCT}, op=i, ts=1750062369994} was errorneous: {}

io.debezium.DebeziumException: Failed to send record Record{source={cluster=cassandra_cdc, keyspace=ncl, file=, connector=cassandra, pos=-1, ts_micro=1750062369950000, version=${project.version}, snapshot=true, table=cdc_events}, after={event_id={name=event_id, value=ishant2, deletionTs=null, type=PARTITION}, event_source={name=event_source, value=ishant, deletionTs=null, type=REGULAR}}, keySchema=Schema{io.debezium.connector.cassandra.cassandra_cdc.ncl.cdc_events.Key:STRUCT}, valueSchema=Schema{io.debezium.connector.cassandra.cassandra_cdc.ncl.cdc_events.Envelope:STRUCT}, op=i, ts=1750062369994}

at io.debezium.connector.cassandra.KafkaRecordEmitter.emit(KafkaRecordEmitter.java:72)

at io.debezium.connector.cassandra.QueueProcessor.processEvent(QueueProcessor.java:114)

at io.debezium.connector.cassandra.QueueProcessor.process(QueueProcessor.java:72)

at io.debezium.connector.cassandra.AbstractProcessor.start(AbstractProcessor.java:63)

at io.debezium.connector.cassandra.CassandraConnectorTaskTemplate$ProcessorGroup.lambda$start$0(CassandraConnectorTaskTemplate.java:231)

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:750)

Caused by: java.lang.IllegalStateException: Cannot perform operation after producer has been closed

at org.apache.kafka.clients.producer.KafkaProducer.throwIfProducerClosed(KafkaProducer.java:919)

at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:928)

at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:912)

at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:797)

at io.debezium.connector.cassandra.KafkaRecordEmitter.emit(KafkaRecordEmitter.java:64)

... 9 common frames omitted

jiri.p...@gmail.com

Jun 16, 2025, 4:42:10 AM
to debezium
Hi,

could you please send the full log?

Jiri

Ishant Bhatia

Jun 16, 2025, 5:15:17 AM
to debe...@googlegroups.com
The log file is attached below.



(attachment: logs_cdc_kafka)

jiri.p...@gmail.com

Jun 19, 2025, 6:27:11 AM
to debezium
Hi,

I don't understand what could be the reason for the InterruptedException. Is there a chance you could try running a newer version of the connector?

Jiri

Ishant Bhatia

Jun 19, 2025, 6:37:14 AM
to debe...@googlegroups.com
Will try, but I am using Java 8 and Cassandra 3.10.

I have limited options.

Could you please help me find all the mappings or properties files that must be present to run Debezium?


Regards,
Ishant


Ishant Bhatia

Jun 19, 2025, 7:23:41 AM
to debe...@googlegroups.com
https://repo1.maven.org/maven2/io/debezium/

Here I can see:
debezium-storage-kafka/
debezium-server-kafka/

What are these jars used for? Do I need them as well?

I can see a lot of jars; any suggestions on which jar would be good to pick?

Regards,
Ishant

jiri.p...@gmail.com

Jun 20, 2025, 6:18:12 AM
to debezium
Hi,


Cassandra connectors for Debezium 3 run inside Debezium Server, so that would be a slightly different approach.
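For context, Debezium Server is a standalone runtime: it reads from a Debezium source connector and writes to a sink such as Kafka, configured through conf/application.properties instead of a Kafka Connect worker. A rough sketch of the shape of that setup (property values are illustrative; check the Debezium Server docs for the exact Cassandra connector class and required properties, and note that Debezium 3 needs a newer Java than 8):

# Assumes an unpacked debezium-server distribution; rough shape only
cd debezium-server
cat > conf/application.properties <<'EOF'
debezium.sink.type=kafka
debezium.sink.kafka.producer.bootstrap.servers=127.0.0.1:9092
debezium.sink.kafka.producer.key.serializer=org.apache.kafka.common.serialization.StringSerializer
debezium.sink.kafka.producer.value.serializer=org.apache.kafka.common.serialization.StringSerializer
# connector class below is a placeholder; use the class for your Cassandra/Debezium version
debezium.source.connector.class=io.debezium.connector.cassandra.CassandraConnector
debezium.source.topic.prefix=cassandra_cdc
EOF
./run.sh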

Jiri

Ishant Bhatia

Jul 1, 2025, 5:14:01 AM
to debe...@googlegroups.com
What do you mean by Debezium Server?

Can you explain, for Java 8, Cassandra 3.10, Kafka, and Debezium, the steps to make it workable?

I register the connector with:

  1. curl -X POST -H "Content-Type: application/json" --data @your-connector-config.json http://localhost:8083/connectors


The biggest issue is that with the following config:

{
  "name": "cassandra-source-connector",
  "config": {
    "connector.class": "io.debezium.connector.cassandra.CassandraConnector",
    "commit.log.relocation.dir": "/opt/cassandra/data/relocation/",
    "http.port": "8000",
    "tasks.max": "1",
    "cassandra.config": "/Users/isbhatia2401/Downloads/open__/cdc/apache-cassandra-3.10/conf/cassandra.yaml",
    "cassandra.hosts": "localhost",
    "cassandra.port": "9042",
    "cassandra.keyspace": "ncl",
    "cassandra.table.include.list": "cdc_events",
    "cassandra.cdc.dir": "/var/lib/cassandra/cdc_raw",
    "cassandra.commit.log.relocation.dir": "/opt/debezium/relocation",
    "kafka.producer.bootstrap.servers": "localhost:9092",
    "kafka.producer.retries": "1",
    "kafka.producer.retry.backoff.ms": "1000",
    "kafka.topic.prefix": "cassandra_cdc",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false",
    "offset.backing.store.dir": "/Users/isbhatia2401/Downloads/open__/cdc/kafka_2.12-3.7.2/offset/",
    "snapshot.consistency": "ONE",
    "snapshot.mode": "ALWAYS",
    "latest.commit.log.only": "true",
    "event.processing.failure.handling.mode": "warn"
  }
}

I always get the error below:

error_code":500,"message":"Failed to find any class that implements Connector and which name matches io.debezium.connector.cassandra.CassandraConnector,





Regards,

Ishant





Ishant Bhatia

Jul 4, 2025, 3:32:02 AM
to debe...@googlegroups.com
I am able to publish messages from Cassandra to Kafka through the Debezium jar, but only at the start of Debezium; after that it is not live. Am I missing anything?

java -Dcassandra.storagedir=/var/lib/cassandra -Ddebezium.log.level=DEBUG -jar debezium-connector-cassandra.jar debezium.properties 2>&1 | tee debug_log.txt


Attached logs below 

Regards,
Ishant
(attachment: debezium_logs_)

jiri.p...@gmail.com

Jul 4, 2025, 7:24:14 AM
to debezium
Hi,

what do you mean by not live?

Jiri

Ishant Bhatia

Jul 4, 2025, 7:46:40 AM
to debe...@googlegroups.com
When I initially start Debezium, I observe all expected messages in my Kafka consumer. However, after Debezium has been running for a while, subsequent insertions and deletions of rows are not being reflected in the Kafka consumer.
 
Debezium now starts properly. Later I inserted and deleted a few rows and did not get any messages in the Kafka consumer, but when I stopped the jar and ran it again, I could see all the insertion and deletion messages that I had not seen earlier.



Regards,
Ishant

Ishant Bhatia

Jul 4, 2025, 5:53:08 PM
to debe...@googlegroups.com
When I stop Debezium, a commit log file can be seen here -> /var/lib/cassandra/cdc_raw

When I start Debezium, the same commit log file gets processed once.

Am I missing something?
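One thing worth checking here: on Cassandra 3.x, a commit log segment generally only becomes visible under cdc_raw once that segment is discarded (for example after it fills up, or on a drain/clean shutdown), which would match events showing up only after a restart. A small sketch to observe this, with the keyspace, table, and columns taken from the earlier snapshot log:

# Write a row, flush memtables, then watch whether a new segment lands in cdc_raw;
# on an idle node the active segment may still need to fill up (commitlog_segment_size_in_mb,
# 32 MB by default) or a nodetool drain before it is handed over.
cqlsh -u cassandra -p cassandra 127.0.0.1 9042 -e "INSERT INTO ncl.cdc_events (event_id, event_source) VALUES ('flush-test', 'manual');"
nodetool flush
ls -l /var/lib/cassandra/cdc_raw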

Regards,
Ishant

Chris Cranford

Jul 7, 2025, 9:46:08 AM
to debe...@googlegroups.com
Hi -

Would it be possible to enable DEBUG logging during the period where you perform insert/update operations and nothing is sent to Kafka?

Thanks,
-cc

Ishant Bhatia

Jul 7, 2025, 9:55:46 AM
to debe...@googlegroups.com
Yes, I enabled DEBUG logs.

I can see:


19:22:32.748 [pool-4-thread-3] INFO io.debezium.connector.cassandra.AbstractDirectoryWatcher - No commitLogFile is detected in /var/lib/cassandra/cdc_raw.
19:22:32.748 [pool-4-thread-3] DEBUG io.debezium.connector.cassandra.Cassandra3CommitLogProcessor - Processing commitLogFiles while initial is false
19:22:32.748 [pool-4-thread-3] INFO io.debezium.connector.cassandra.AbstractDirectoryWatcher - Polling commitLog files from /var/lib/cassandra/cdc_raw ...
19:22:32.846 [pool-4-thread-1] DEBUG io.debezium.connector.base.ChangeEventQueue - checking for more records...
19:22:32.847 [pool-4-thread-1] DEBUG io.debezium.connector.base.ChangeEventQueue - polling records...
19:22:32.847 [pool-4-thread-1] DEBUG io.debezium.connector.base.ChangeEventQueue - no records available yet, sleeping a bit...
19:22:33.852 [pool-4-thread-1] DEBUG io.debezium.connector.base.ChangeEventQueue - checking for more records...

The log file is attached below.

Regards & Thanks
Ishant



(attachment: sleep_logs)

Chris Cranford

Jul 7, 2025, 10:26:29 AM
to debe...@googlegroups.com
Hi -

You only allowed Debezium to run for about 30 seconds, so I wouldn't necessarily say that's sufficient.
Furthermore, you are running Debezium on each Cassandra node, correct?

Thanks,
-cc

Ishant Bhatia

Jul 7, 2025, 10:35:04 AM
to debe...@googlegroups.com
I am doing it on a local setup with a single node.

Regards,
Ishant
