Hello everyone!
I'm stuck and I would really appreciate some help with this, as it is taking much longer than I originally thought it would...
First, I know it is not recommended to run multiple connectors against the same DB, but we have to do that under the current circumstances, at least for a while.
So in the current scenario, we have two separate Azure Kubernetes Service clusters (dev and test) and two separate Azure Event Hubs namespaces, so each AKS cluster points to its own Event Hubs namespace. In both cases we point to the same Oracle DB: on dev we have all the tables we want to capture, and on test we have just a simple connector configuration with one small table. I can even delete the connectors from the dev environment, but I still can't create a new connector on the test environment as it always times out...
The issue is that when I try to POST the connector configuration on the test environment, I get the error below:
```
{"error_code":500,"message":"Request timed out. The worker is currently performing multi-property validation for the connector, which began at 2025-08-28T15:35:05.893Z"}%
```
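For reference, this is roughly how I create the connector and how the same validation step can be hit directly, assuming the standard Kafka Connect REST API on port 8083 (the JSON file names are just placeholders):
```
# Create the connector -- this is the call that times out on test
curl -s -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d @test-connector.json

# Trigger only the config validation the error message refers to;
# the body here is just the "config" map (it must include connector.class)
curl -s -X PUT \
  http://localhost:8083/connector-plugins/io.debezium.connector.oracle.OracleConnector/config/validate \
  -H "Content-Type: application/json" \
  -d @test-connector-config.json
```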
I can't see a single error in the pod logs, I can connect to both the Event Hub and the Oracle DB from the pod, and since the other environment is working fine, I can't figure out what is causing this.
The configuration is basically duplicated, so the only things that differ are the Event Hub name and its credentials, but the credentials are not the issue because the logs would show if there were a problem with them.
My understanding is that we can have all topics named the same, since they are hosted in different Azure Event Hubs namespaces, so this should not be a problem.
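The connectivity checks from the test pod were nothing fancy, roughly along these lines (a sketch, assuming nc is available in the image; the Oracle host is a placeholder for the value from the credentials file):
```
# Kafka endpoint of the Event Hubs namespace (Kafka protocol listens on 9093)
nc -vz ehn-nucleus-debezium-test-01.servicebus.windows.net 9093

# Oracle listener used in database.url
nc -vz <oracle-host> 1527
```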
Example configuration:
```
{
  "name": "dev-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=${file:/kafka/config/credentials.properties:DATABASE_HOSTNAME})(PORT=1527))(CONNECT_DATA=(SERVICE_NAME=${file:/kafka/config/credentials.properties:DATABASE_NAME})))",
    "database.dbname": "someName",
    "database.pdb.name": "someName01",
    "database.user": "${file:/kafka/config/credentials.properties:DATABASE_USER}",
    "database.password": "${file:/kafka/config/credentials.properties:DATABASE_PASSWORD}",
    "topic.prefix": "test",
    "table.include.list": "TEST_PRODUCTION.customers",
    "query.timeout.ms": 60000,
    "topic.creation.default.partitions": "1",
    "topic.creation.default.replication.factor": "1",
    "database.connection.adapter": "logminer",
    "log.mining.strategy": "online_catalog",
    "log.mining.archive.log.only.mode": "false",
    "log.mining.buffer.drop.on.stop": "true",
    "log.mining.restart.connection": "true",
    "snapshot.mode": "no_data",
    "schema.history.internal.kafka.bootstrap.servers": "${file:/kafka/config/credentials.properties:BOOTSTRAP_SERVERS}",
    "schema.history.internal.kafka.topic": "test-schema-history",
    "schema.history.internal.consumer.security.protocol": "SASL_SSL",
    "schema.history.internal.consumer.sasl.mechanism": "PLAIN",
    "schema.history.internal.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='${file:/kafka/config/credentials.properties:SASL_USERNAME}' password='${file:/kafka/config/credentials.properties:SASL_PASSWORD}';",
    "schema.history.internal.producer.security.protocol": "SASL_SSL",
    "schema.history.internal.producer.sasl.mechanism": "PLAIN",
    "schema.history.internal.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='${file:/kafka/config/credentials.properties:SASL_USERNAME}' password='${file:/kafka/config/credentials.properties:SASL_PASSWORD}';",
    "schema.history.internal.recovery.clean": "true",
    "schema.history.internal.recovery.attempts": "3",
    "schema.history.internal.store.only.captured.tables.ddl": true,
    "poll.interval.ms": "5000",
    "heartbeat.interval.ms": "5000",
    "transforms": "routeToSingleTopic",
    "transforms.routeToSingleTopic.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.routeToSingleTopic.regex": "(?i).*TEST_PRODUCTION.*",
    "transforms.routeToSingleTopic.replacement": "test-all-tables"
  }
}
```
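The `${file:...}` placeholders resolve through Kafka's FileConfigProvider, so on the worker side there is (assumed here to be the standard setup) something like this, with the mounted credentials file holding the keys referenced above (values redacted):
```
# connect-distributed.properties (relevant part)
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# /kafka/config/credentials.properties
DATABASE_HOSTNAME=...
DATABASE_NAME=...
DATABASE_USER=...
DATABASE_PASSWORD=...
BOOTSTRAP_SERVERS=...
SASL_USERNAME=...
SASL_PASSWORD=...
```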
Some logs from the pod:
```
[2025-08-28 15:34:29,489] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Starting connectors and tasks using config offset 1016 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1921)
[2025-08-28 15:34:29,489] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1950)
[2025-08-28 15:34:50,726] INFO 10.231.35.103 - - [28/Aug/2025:15:34:50 +0000] "GET / HTTP/1.1" 200 122 "-" "kube-probe/1.30" 111 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-08-28 15:34:53,483] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:242)
[2025-08-28 15:34:53,483] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:604)
[2025-08-28 15:34:53,488] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Successfully joined group with generation Generation{generationId=54870228, memberId='ehn-nucleus-debezium-test-01.servicebus.windows.net:c:connect-cluster-group:I:connect-100.112.2.62:8083-02d5c67964df40b9b07125acd4e48031', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:665)
[2025-08-28 15:34:53,502] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Successfully synced group in generation Generation{generationId=54870228, memberId='ehn-nucleus-debezium-test-01.servicebus.windows.net:c:connect-cluster-group:I:connect-100.112.2.62:8083-02d5c67964df40b9b07125acd4e48031', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:842)
[2025-08-28 15:34:53,502] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Joined group at generation 54870228 with protocol version 2 and got assignment: Assignment{error=0, leader='ehn-nucleus-debezium-test-01.servicebus.windows.net:c:connect-cluster-group:I:connect-100.112.2.62:8083-02d5c67964df40b9b07125acd4e48031', leaderUrl='http://100.112.2.62:8083/', offset=1016, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2580)
[2025-08-28 15:34:53,502] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Starting connectors and tasks using config offset 1016 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1921)
[2025-08-28 15:34:53,502] INFO [Worker clientId=connect-100.112.2.62:8083, groupId=connect-cluster-group] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1950)
[2025-08-28 15:35:00,588] INFO 10.231.35.103 - - [28/Aug/2025:15:35:00 +0000] "GET / HTTP/1.1" 200 122 "-" "kube-probe/1.30" 4 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-08-28 15:35:05,906] INFO Loading the custom source info struct maker plugin: io.debezium.connector.oracle.OracleSourceInfoStructMaker (io.debezium.config.CommonConnectorConfig:1684)
[2025-08-28 15:35:10,586] INFO 10.231.35.103 - - [28/Aug/2025:15:35:10 +0000] "GET / HTTP/1.1" 200 122 "-" "kube-probe/1.30" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
```
On the dev environment I can delete and create connectors without any problem, but on any other environment I just can't... I've checked the config and properties files and everything points to the right Event Hub and Oracle host.