Exception when running the connector in standalone mode


Cherupally Bhargav

Jun 7, 2016, 11:53:33 AM
to Confluent Platform
Hi,

I've configured Kafka Connect to use SQLite3 and it worked great. When I tried changing the properties file to use an Oracle database, I'm getting the following error:

Failed trying to validate that columns used for offsets are NOT NULL

Can someone help me fix this issue? We are working on a POC and are stuck here.

I can see that the ZooKeeper, Kafka server, and Schema Registry services are running fine.
Here is the complete log for the following command:
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlite.properties
Log details:
[oracle@vbgeneric confluent-2.0.0]$ ./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlite.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-06-07 11:27:00,921] INFO StandaloneConfig values: 
cluster = connect
rest.advertised.port = null
bootstrap.servers = [localhost:9092]
rest.port = 8083
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
value.converter = class io.confluent.connect.avro.AvroConverter
key.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-06-07 11:27:01,939] INFO Logging initialized @8416ms (org.eclipse.jetty.util.log:186)
[2016-06-07 11:27:02,090] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-06-07 11:27:02,091] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-06-07 11:27:02,157] INFO ProducerConfig values: 
compression.type = none
metric.reporters = []
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 9223372036854775807
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 2147483647
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
 (org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-06-07 11:27:02,309] INFO Kafka version : 0.9.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-06-07 11:27:02,310] INFO Kafka commitId : d1555e3a21980fa9 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-06-07 11:27:02,314] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-06-07 11:27:02,352] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-06-07 11:27:02,352] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-06-07 11:27:02,355] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-06-07 11:27:02,356] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-06-07 11:27:02,849] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
Jun 07, 2016 11:27:05 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2016-06-07 11:27:05,090] INFO Started o.e.j.s.ServletContextHandler@2a2da905{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-06-07 11:27:05,152] INFO Started ServerConnector@379ab47b{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-06-07 11:27:05,153] INFO Started @11630ms (org.eclipse.jetty.server.Server:379)
[2016-06-07 11:27:05,157] INFO REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-06-07 11:27:05,158] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-06-07 11:27:05,207] INFO ConnectorConfig values: 
connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max = 1
topics = []
name = test-sqlite-jdbc-autoincrement
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-06-07 11:27:05,208] INFO Creating connector test-sqlite-jdbc-autoincrement of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-06-07 11:27:05,212] INFO Instantiated connector test-sqlite-jdbc-autoincrement with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-06-07 11:27:05,246] INFO JdbcSourceConnectorConfig values: 
mode = timestamp
timestamp.column.name = name_timestamp
topic.prefix = test-sqlite-jdbc-
query = 
batch.max.rows = 100
connection.url = jdbc:oracle:thin:<user>/<pwd>@<host>:<port>:<sid>
table.blacklist = []
table.whitelist = []
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-06-07 11:27:08,861] INFO Finished creating connector test-sqlite-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:193)
[2016-06-07 11:27:16,498] INFO TaskConfig values: 
task.class = class io.confluent.connect.jdbc.JdbcSourceTask
 (org.apache.kafka.connect.runtime.TaskConfig:165)
[2016-06-07 11:27:16,499] INFO Creating task test-sqlite-jdbc-autoincrement-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-06-07 11:27:16,506] INFO Instantiated task test-sqlite-jdbc-autoincrement-0 with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-06-07 11:27:16,603] INFO JdbcSourceTaskConfig values: 
mode = timestamp
timestamp.column.name = name_timestamp
topic.prefix = test-sqlite-jdbc-
tables = [DR$DICTIONARY, DR$NUMBER_SEQUENCE, DR$OBJECT_ATTRIBUTE, DR$POLICY_TAB, DR$THS, DR$THS_BT, DR$THS_FPHRASE, DR$THS_PHRASE, NTV2_XML_DATA, OGIS_GEOMETRY_COLUMNS, OGIS_SPATIAL_REFERENCE_SYSTEMS, SDO_COORD_AXES, SDO_COORD_AXIS_NAMES, SDO_COORD_OPS, SDO_COORD_OP_METHODS, SDO_COORD_OP_PARAMS, SDO_COORD_OP_PARAM_USE, SDO_COORD_OP_PARAM_VALS, SDO_COORD_OP_PATHS, SDO_COORD_REF_SYS, SDO_COORD_SYS, SDO_CRS_GEOGRAPHIC_PLUS_HEIGHT, SDO_CS_CONTEXT_INFORMATION, SDO_CS_SRS, SDO_DATUMS, SDO_DATUMS_OLD_SNAPSHOT, SDO_ELLIPSOIDS, SDO_ELLIPSOIDS_OLD_SNAPSHOT, SDO_FEATURE_USAGE, SDO_GEOR_PLUGIN_REGISTRY, SDO_GEOR_XMLSCHEMA_TABLE, SDO_GR_MOSAIC_0, SDO_GR_MOSAIC_1, SDO_GR_MOSAIC_2, SDO_GR_MOSAIC_3, SDO_GR_PARALLEL, SDO_GR_RDT_1, SDO_GR_RDT_2, SDO_PC_BLK_TABLE, SDO_PREFERRED_OPS_SYSTEM, SDO_PREFERRED_OPS_USER, SDO_PRIME_MERIDIANS, SDO_PROJECTIONS_OLD_SNAPSHOT, SDO_ST_TOLERANCE, SDO_TIN_BLK_TABLE, SDO_TIN_PC_SEQ, SDO_TIN_PC_SYSDATA_TABLE, SDO_TOPO_DATA$, SDO_TOPO_RELATION_DATA, SDO_TOPO_TRANSACT_DATA, SDO_TXN_IDX_DELETES, SDO_TXN_IDX_EXP_UPD_RGN, SDO_TXN_IDX_INSERTS, SDO_UNITS_OF_MEASURE, SDO_WFS_LOCAL_TXNS, SDO_WS_CONFERENCE, SDO_WS_CONFERENCE_PARTICIPANTS, SDO_WS_CONFERENCE_RESULTS, SDO_XML_SCHEMAS, SRSNAMESPACE_TABLE, OL$, OL$HINTS, OL$NODES, BONUS, DEPT, EMP, SALGRADE, SAMPLE_DATASET_EVOLVE, SAMPLE_DATASET_FULLTEXT, SAMPLE_DATASET_INTRO, SAMPLE_DATASET_PARTN, SAMPLE_DATASET_XMLDB_HOL, SAMPLE_DATASET_XQUERY, AUDIT_ACTIONS, AUDTAB$TBS$FOR_EXPORT_TBL, AW$AWCREATE, AW$AWCREATE10G, AW$AWMD, AW$AWREPORT, AW$AWXML, AW$EXPRESS, DATA_PUMP_XPL_TABLE$, DBA_SENSITIVE_DATA_TBL, DBA_TSDP_POLICY_PROTECTION_TBL, DUAL, FGA_LOG$FOR_EXPORT_TBL, HS$_PARALLEL_METADATA, HS_BULKLOAD_VIEW_OBJ, HS_PARTITION_COL_NAME, HS_PARTITION_COL_TYPE, IMPCALLOUTREG$, IMPDP_STATS, KU$NOEXP_TAB, KU$XKTFBUE, KU$_DATAPUMP_MASTER_10_1, KU$_DATAPUMP_MASTER_11_1, KU$_DATAPUMP_MASTER_11_1_0_7, KU$_DATAPUMP_MASTER_11_2, KU$_DATAPUMP_MASTER_12_0, KU$_LIST_FILTER_TEMP, KU$_LIST_FILTER_TEMP_2, KU$_USER_MAPPING_VIEW_TBL, MAP_OBJECT, NACL$_ACE_EXP_TBL, NACL$_HOST_EXP_TBL, NACL$_WALLET_EXP_TBL, ODCI_PMO_ROWIDS$, ODCI_SECOBJ$, ODCI_WARNINGS$, PLAN_TABLE$, PSTUBTBL, SAM_SPARSITY_ADVICE, SPD_SCRATCH_TAB, STMT_AUDIT_OPTION_MAP, SYSTEM_PRIVILEGE_MAP, TABLE_PRIVILEGE_MAP, TSDP_ASSOCIATION$, TSDP_CONDITION$, TSDP_ERROR$, TSDP_FEATURE_POLICY$, TSDP_PARAMETER$, TSDP_POLICY$, TSDP_PROTECTION$, TSDP_SENSITIVE_DATA$, TSDP_SENSITIVE_TYPE$, TSDP_SOURCE$, TSDP_SUBPOL$, TTS_ERROR$, USER_PRIVILEGE_MAP, WRI$_ADV_ASA_RECO_DATA, WRI$_HEATMAP_TOPN_DEP1, WRI$_HEATMAP_TOPN_DEP2, WRR$_REPLAY_CALL_FILTER, XS$VALIDATION_TABLE, HELP, OL$, OL$HINTS, OL$NODES, SCHEDULER_JOB_ARGS_TBL, SCHEDULER_PROGRAM_ARGS_TBL, APP_ROLE_MEMBERSHIP, APP_USERS_AND_ROLES, Folder7_TAB, MIGR9202STATUS, SYS_NT/ZrP7MGASKngQ7ap6Ar7bw==, X$NM7UJB7VPFVE92KV0GUML7K0LVSF, X$PT7UJB7VPFVE92KV0GUML7K0LVSF, X$QN7UJB7VPFVE92KV0GUML7K0LVSF, XDB$ACL, XDB$ALL_MODEL, XDB$ANY, XDB$ANYATTR, XDB$ATTRGROUP_DEF, XDB$ATTRGROUP_REF, XDB$ATTRIBUTE, XDB$CDBPORTS, XDB$CHECKOUTS, XDB$CHOICE_MODEL, XDB$COLUMN_INFO, XDB$COMPLEX_TYPE, XDB$CONFIG, XDB$DBFS_VIRTUAL_FOLDER, XDB$DXPTAB, XDB$D_LINK, XDB$ELEMENT, XDB$GROUP_DEF, XDB$GROUP_REF, XDB$H_INDEX, XDB$H_LINK, XDB$IMPORT_TT_INFO, XDB$MOUNTS, XDB$NLOCKS, XDB$NONCEKEY, XDB$PATH_INDEX_PARAMS, XDB$REPOS, XDB$RESCONFIG, XDB$RESOURCE, XDB$ROOT_INFO, XDB$SCHEMA, XDB$SEQUENCE_MODEL, XDB$SIMPLE_TYPE, XDB$STATS, XDB$TTSET, XDB$XDB_READY, XDB$XIDX_IMP_T, XDB$XIDX_PARAM_T, XDB$XIDX_PART_TAB, XDB$XTAB, XDB$XTABCOLS, XDB$XTABNMSP, XDB_INDEX_DDL_CACHE, DICOM_METADATA_TABLE, EXIF_METADATA_TABLE, 
IMAGE_METADATA_TABLE, IPTC_METADATA_TABLE, ORDIMAGE_METADATA_TABLE, SYS_NTMIiiteK0RkPgUwEAAH+Rlw==, SYS_NTMIiiteK1RkPgUwEAAH+Rlw==, SYS_NTMIiiteK2RkPgUwEAAH+Rlw==, SYS_NTMIiiteK3RkPgUwEAAH+Rlw==, SYS_NTMIiiteK4RkPgUwEAAH+Rlw==, SYS_NTMIiiteK5RkPgUwEAAH+Rlw==, SYS_NTMIiiteK6RkPgUwEAAH+Rlw==, SYS_NTMIiiteK7RkPgUwEAAH+Rlw==, SYS_NTMIiiteK8RkPgUwEAAH+Rlw==, SYS_NTMIiiteK9RkPgUwEAAH+Rlw==, SYS_NTMIiiteKZRkPgUwEAAH+Rlw==, SYS_NTMIiiteKaRkPgUwEAAH+Rlw==, SYS_NTMIiiteKbRkPgUwEAAH+Rlw==, SYS_NTMIiiteKcRkPgUwEAAH+Rlw==, SYS_NTMIiiteKdRkPgUwEAAH+Rlw==, SYS_NTMIiiteKeRkPgUwEAAH+Rlw==, SYS_NTMIiiteKfRkPgUwEAAH+Rlw==, SYS_NTMIiiteKgRkPgUwEAAH+Rlw==, SYS_NTMIiiteKqRkPgUwEAAH+Rlw==, SYS_NTMIiiteKrRkPgUwEAAH+Rlw==, SYS_NTMIiiteKsRkPgUwEAAH+Rlw==, SYS_NTMIiiteKtRkPgUwEAAH+Rlw==, SYS_NTMIiiteKuRkPgUwEAAH+Rlw==, SYS_NTMIiiteKvRkPgUwEAAH+Rlw==, XMP_METADATA_TABLE, REVISED_COLL_TYPES, REVISED_TYPES, REVISED_TYPE_ATTRS, REVISED_TYPE_SUMMARY, STORAGE_MODEL_CACHE, TYPE_SUMMARY, XDBPM_INDEX_DDL_CACHE, DOCUMENT_UPLOAD_TABLE, XFILES_DOCUMENT_STAGING, XFILES_WIKI_TABLE]
query = 
batch.max.rows = 100
connection.url = jdbc:oracle:thin:<user>/<pwd>@<host>:<port>:<sid>
table.blacklist = []
table.whitelist = []
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-06-07 11:27:16,606] INFO Created connector test-sqlite-jdbc-autoincrement (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-06-07 11:27:20,491] ERROR Task test-sqlite-jdbc-autoincrement-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-06-07 11:27:20,494] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
org.apache.kafka.connect.errors.ConnectException: Failed trying to validate that columns used for offsets are NOT NULL
at io.confluent.connect.jdbc.JdbcSourceTask.validateNonNullable(JdbcSourceTask.java:262)
at io.confluent.connect.jdbc.JdbcSourceTask.start(JdbcSourceTask.java:131)
at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:341)
at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
Caused by: java.sql.SQLDataException: ORA-01424: missing or illegal character following the escape character

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:774)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:926)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1112)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:4846)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1501)
at oracle.jdbc.driver.OracleDatabaseMetaData.getColumnsWithWildcards(OracleDatabaseMetaData.java:348)
at oracle.jdbc.driver.OracleDatabaseMetaData.getColumns(OracleDatabaseMetaData.java:128)
at io.confluent.connect.jdbc.JdbcUtils.isColumnNullable(JdbcUtils.java:153)
at io.confluent.connect.jdbc.JdbcSourceTask.validateNonNullable(JdbcSourceTask.java:254)
... 3 more

Ewen Cheslack-Postava

Jun 7, 2016, 7:13:21 PM
to Confluent Platform
It looks like the underlying issue has to do with an escape character:

> ORA-01424: missing or illegal character following the escape character

I'm not sure of the exact source of the issue -- we're just using database metadata from JDBC to check whether columns are nullable. However, looking at your table list, you've got table names with characters like $, +, and =. The Oracle JDBC driver might need these escaped in a special way when passed to these methods.
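
For illustration, here is a minimal sketch (not the connector's actual code) of how nullability is typically checked through JDBC metadata, which is the call path shown in your stack trace. Note that getColumns() takes the table name as a search pattern, so names containing characters like $ can interact badly with the driver's LIKE/ESCAPE handling and produce errors such as ORA-01424:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

public class NullabilityCheck {
    // Returns false only if metadata positively reports the column as NOT NULL.
    static boolean isColumnNullable(Connection conn, String table, String column)
            throws SQLException {
        DatabaseMetaData md = conn.getMetaData();
        // getColumns() treats '_' and '%' as wildcards in the table pattern.
        // Escaping them with the driver's escape string is one (hypothetical)
        // way to force a literal match; drivers differ in what they accept.
        String esc = md.getSearchStringEscape();
        String pattern = table.replace("_", esc + "_").replace("%", esc + "%");
        try (ResultSet rs = md.getColumns(null, null, pattern, column)) {
            if (rs.next()) {
                return !"NO".equals(rs.getString("IS_NULLABLE"));
            }
        }
        return true; // column not found in metadata: assume nullable
    }
}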

-Ewen


Cherupally Bhargav

Jun 8, 2016, 2:58:37 AM
to Confluent Platform
Thanks Ewen. By default the JDBC driver reads all those tables. Is there any way to escape these, or is there an alternate approach?
If possible, could I have the table structure required in Oracle for timestamp or incrementing mode?

Thanks,
Bhargav Cherupally

Ewen Cheslack-Postava

Jun 8, 2016, 12:41:36 PM
to Confluent Platform
You can use the configuration option table.whitelist to restrict copying to a subset of the tables.
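
For example, in the connector properties file (a sketch; ACCOUNTS and TRANSACTIONS are placeholder names, which must match how the database reports them):

table.whitelist=ACCOUNTS,TRANSACTIONS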

-Ewen


Cherupally Bhargav

Jun 8, 2016, 9:38:56 PM
to Confluent Platform
Thanks Ewen. Now I'm able to move forward, but I see another error: java.lang.IllegalArgumentException: Number of groups must be positive.
Can you please help me fix this issue?

Here is the log:

 ./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlite.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-06-08 21:33:02,098] INFO StandaloneConfig values: 
cluster = connect
rest.advertised.port = null
bootstrap.servers = [localhost:9092]
rest.port = 8083
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
value.converter = class io.confluent.connect.avro.AvroConverter
key.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-06-08 21:33:02,968] INFO Logging initialized @10131ms (org.eclipse.jetty.util.log:186)
[2016-06-08 21:33:03,133] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-06-08 21:33:03,134] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-06-08 21:33:03,209] INFO ProducerConfig values: 
[2016-06-08 21:33:03,384] INFO Kafka version : 0.9.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-06-08 21:33:03,385] INFO Kafka commitId : d1555e3a21980fa9 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-06-08 21:33:03,389] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-06-08 21:33:03,534] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-06-08 21:33:03,534] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-06-08 21:33:03,534] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-06-08 21:33:03,534] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-06-08 21:33:03,922] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
Jun 08, 2016 9:33:06 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2016-06-08 21:33:06,225] INFO Started o.e.j.s.ServletContextHandler@2a2da905{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-06-08 21:33:06,269] INFO Started ServerConnector@379ab47b{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-06-08 21:33:06,273] INFO Started @13439ms (org.eclipse.jetty.server.Server:379)
[2016-06-08 21:33:06,283] INFO REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-06-08 21:33:06,283] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-06-08 21:33:06,482] INFO ConnectorConfig values: 
connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max = 1
topics = []
name = test-sqlite-jdbc-autoincrement
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-06-08 21:33:06,484] INFO Creating connector test-sqlite-jdbc-autoincrement of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-06-08 21:33:06,487] INFO Instantiated connector test-sqlite-jdbc-autoincrement with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-06-08 21:33:06,542] INFO JdbcSourceConnectorConfig values: 
mode = timestamp
topic.prefix = test-sqlite-jdbc-
query = 
batch.max.rows = 100
connection.url = jdbc:oracle:thin:accounts/ora...@0.0.0.0:1521/orcl
table.blacklist = []
table.whitelist = [accounts]
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-06-08 21:33:10,115] INFO Finished creating connector test-sqlite-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:193)
[2016-06-08 21:34:04,818] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:91)
java.lang.IllegalArgumentException: Number of groups must be positive.
at org.apache.kafka.connect.util.ConnectorUtils.groupPartitions(ConnectorUtils.java:45)
at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:120)
at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:215)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.recomputeTaskConfigs(StandaloneHerder.java:210)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:249)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:146)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:85)
[2016-06-08 21:34:04,822] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:68)
[2016-06-08 21:34:04,857] INFO Stopped ServerConnector@379ab47b{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2016-06-08 21:34:04,882] INFO Stopped o.e.j.s.ServletContextHandler@2a2da905{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
[2016-06-08 21:34:04,938] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:62)
[2016-06-08 21:34:04,938] INFO Stopping connector test-sqlite-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:226)
[2016-06-08 21:34:04,938] INFO Stopping table monitoring thread (io.confluent.connect.jdbc.JdbcSourceConnector:134)
[2016-06-08 21:34:04,942] INFO Stopped connector test-sqlite-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:240)
[2016-06-08 21:34:04,942] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:77)
[2016-06-08 21:34:04,942] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:115)
[2016-06-08 21:34:04,942] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:61)
[2016-06-08 21:34:04,942] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:155)
[2016-06-08 21:34:04,942] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:74)

Ewen Cheslack-Postava

Jun 9, 2016, 9:52:31 PM
to Confluent Platform
This is a confusing message, but it basically means that with your settings, no tables were actually included for the connector to split up among the tasks. This might mean that "accounts" isn't a table name that is found in the database you've connected to.
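
To make the failure mode concrete, here is a simplified stand-in (not the actual Kafka source) showing how an empty table list produces this exception:

import java.util.Collections;
import java.util.List;

public class GroupsDemo {
    // Simplified stand-in for org.apache.kafka.connect.util.ConnectorUtils.groupPartitions
    static <T> void groupPartitions(List<T> elements, int numGroups) {
        if (numGroups <= 0) {
            throw new IllegalArgumentException("Number of groups must be positive.");
        }
        // ...round-robin assignment of elements to groups elided...
    }

    public static void main(String[] args) {
        List<String> matchedTables = Collections.emptyList(); // whitelist matched nothing
        int maxTasks = 1;
        // The connector cannot create more groups than it has tables,
        // so an empty table list yields zero groups.
        int numGroups = Math.min(matchedTables.size(), maxTasks); // == 0
        groupPartitions(matchedTables, numGroups); // reproduces the reported exception
    }
}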

-Ewen


MR

Mar 11, 2017, 9:54:51 PM
to Confluent Platform
Hello All,

I am getting the same error. ZooKeeper, Kafka, and Schema Registry came up fine. The source database is Oracle 11.2. I have copied ojdbc6.jar to the share/java/kafka-connect-jdbc directory. I have confirmed that the table exists in the source database. Not sure what I am missing here. I'd appreciate it if someone can help me fix this error.

Here are the contents of source-quickstart-oracle.properties file:

name=test-source-oracle-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.password=int_stg_dd1
connection.user=int_stg
connection.url=jdbc:oracle:thin:int_stg/int_stg_dd1@<<ServerName>>:1521:DD1
mode=incrementing
incrementing.column.name=id
topic.prefix=test-oracle-jdbc-
table.whitelist=source_kafka

Here is the command I ran on Linux:

./bin/connect-standalone ./etc/schema-registry/connect-avro-standalone.properties ./etc/kafka-connect-jdbc/source-quickstart-oracle.properties

Here is the error I am getting:

[2017-03-11 21:44:51,883] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:99)

java.lang.IllegalArgumentException: Number of groups must be positive.
    at org.apache.kafka.connect.util.ConnectorUtils.groupPartitions(ConnectorUtils.java:42)
    at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:127)
    at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:230)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.recomputeTaskConfigs(StandaloneHerder.java:265)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:295)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:182)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:93)

Here is the confirmation that the table exists in the source database:


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from source_kafka
  2  ;

        ID USER_TEST
---------- ------------------------------
         1 test1

Thanks and Regards

MR

MR

Mar 14, 2017, 9:03:12 AM
to Confluent Platform
Hello All,

Any insights on this issue? Has anyone used Kafka Connect successfully with an Oracle, SQL Server, or DB2 database?

Regards

susana...@gmail.com

Mar 22, 2017, 4:57:22 PM
to Confluent Platform
I ran into the "Number of groups must be positive." exception when the underlying object was actually a view. By default, the connector only considers tables. Set table.types to include views; see https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/source/JdbcSourceConnectorConfig.java#L178.
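
For example (a sketch; TABLE is the default per the config linked above):

table.types=TABLE,VIEW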

mayank rathi

Mar 22, 2017, 5:19:38 PM
to confluent...@googlegroups.com
Thanks Bruce for replying.

I was able to resolve this issue this morning. Oracle folds unquoted table/column names to uppercase, so the connector must be given uppercase names. Once I changed the table/column names to uppercase in the properties file, the issue went away.


Koen Dejonghe

May 16, 2017, 5:31:58 AM
to Confluent Platform
Note that the incrementing column name must also be in uppercase:
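
For example, based on the earlier properties file in this thread (a sketch; the names are simply the uppercase forms of MR's table and column):

mode=incrementing
incrementing.column.name=ID
table.whitelist=SOURCE_KAFKA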

san...@parallelstack.com

Jan 8, 2018, 4:32:49 PM
to Confluent Platform
Try this:

/usr/bin/connect-standalone  /etc/schema-registry/connect-avro-standalone.properties /etc/kafka-connect-elasticsearch/config/elasticsearch-connect.properties 


It may be due to a clash between the absolute path /etc and the relative path etc. I was facing the same issue, and it got resolved this way.
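
In other words (a sketch; the paths assume the Confluent 2.0.0 tarball layout used earlier in this thread), relative etc paths only resolve when run from the installation directory, whereas the /etc paths above are absolute:

cd /u01/userhome/oracle/confluent-2.0.0
./bin/connect-standalone \
  etc/schema-registry/connect-avro-standalone.properties \
  etc/kafka-connect-jdbc/source-quickstart-oracle.properties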
