I've configured Kafka Connect to use SQLite and it worked great. When I changed the properties file to point at an Oracle database, I'm facing the error below.
Can someone help me fix this issue? We are working on a POC and are stuck here.
I can see that the ZooKeeper, Kafka broker, and Schema Registry services are all running fine.
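For reference, here is roughly what my modified connector properties file looks like (a sketch reconstructed from my setup; the values match the JdbcSourceConnectorConfig dump in the log below, with credentials and host details redacted):

```properties
name=test-sqlite-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# Oracle thin-driver URL; user/pwd/host/port/sid redacted
connection.url=jdbc:oracle:thin:<user>/<pwd>@<host>:<port>:<sid>
mode=timestamp
topic.prefix=test-sqlite-jdbc-
batch.max.rows=100
```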
[oracle@vbgeneric confluent-2.0.0]$ ./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlite.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-06-07 11:27:00,921] INFO StandaloneConfig values:
cluster = connect
rest.advertised.port = null
bootstrap.servers = [localhost:9092]
rest.port = 8083
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
value.converter = class io.confluent.connect.avro.AvroConverter
key.converter = class io.confluent.connect.avro.AvroConverter
(org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-06-07 11:27:01,939] INFO Logging initialized @8416ms (org.eclipse.jetty.util.log:186)
[2016-06-07 11:27:02,090] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-06-07 11:27:02,091] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-06-07 11:27:02,157] INFO ProducerConfig values:
compression.type = none
metric.reporters = []
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
(org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-06-07 11:27:02,309] INFO Kafka version : 0.9.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-06-07 11:27:02,310] INFO Kafka commitId : d1555e3a21980fa9 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-06-07 11:27:02,314] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-06-07 11:27:02,352] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-06-07 11:27:02,352] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-06-07 11:27:02,355] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-06-07 11:27:02,356] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-06-07 11:27:02,849] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
Jun 07, 2016 11:27:05 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2016-06-07 11:27:05,090] INFO Started o.e.j.s.ServletContextHandler@2a2da905{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-06-07 11:27:05,152] INFO Started ServerConnector@379ab47b{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-06-07 11:27:05,153] INFO Started @11630ms (org.eclipse.jetty.server.Server:379)
[2016-06-07 11:27:05,158] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-06-07 11:27:05,207] INFO ConnectorConfig values:
connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max = 1
topics = []
name = test-sqlite-jdbc-autoincrement
(org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-06-07 11:27:05,208] INFO Creating connector test-sqlite-jdbc-autoincrement of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-06-07 11:27:05,212] INFO Instantiated connector test-sqlite-jdbc-autoincrement with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-06-07 11:27:05,246] INFO JdbcSourceConnectorConfig values:
mode = timestamp
topic.prefix = test-sqlite-jdbc-
query =
batch.max.rows = 100
connection.url = jdbc:oracle:thin:<user>/<pwd>@<host>:<port>:<sid>
table.blacklist = []
table.whitelist = []
(io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-06-07 11:27:08,861] INFO Finished creating connector test-sqlite-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:193)
[2016-06-07 11:27:16,498] INFO TaskConfig values:
task.class = class io.confluent.connect.jdbc.JdbcSourceTask
(org.apache.kafka.connect.runtime.TaskConfig:165)
[2016-06-07 11:27:16,499] INFO Creating task test-sqlite-jdbc-autoincrement-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-06-07 11:27:16,506] INFO Instantiated task test-sqlite-jdbc-autoincrement-0 with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-06-07 11:27:16,603] INFO JdbcSourceTaskConfig values:
mode = timestamp
topic.prefix = test-sqlite-jdbc-
tables = [DR$DICTIONARY, DR$NUMBER_SEQUENCE, DR$OBJECT_ATTRIBUTE, DR$POLICY_TAB, DR$THS, DR$THS_BT, DR$THS_FPHRASE, DR$THS_PHRASE, NTV2_XML_DATA, OGIS_GEOMETRY_COLUMNS, OGIS_SPATIAL_REFERENCE_SYSTEMS, SDO_COORD_AXES, SDO_COORD_AXIS_NAMES, SDO_COORD_OPS, SDO_COORD_OP_METHODS, SDO_COORD_OP_PARAMS, SDO_COORD_OP_PARAM_USE, SDO_COORD_OP_PARAM_VALS, SDO_COORD_OP_PATHS, SDO_COORD_REF_SYS, SDO_COORD_SYS, SDO_CRS_GEOGRAPHIC_PLUS_HEIGHT, SDO_CS_CONTEXT_INFORMATION, SDO_CS_SRS, SDO_DATUMS, SDO_DATUMS_OLD_SNAPSHOT, SDO_ELLIPSOIDS, SDO_ELLIPSOIDS_OLD_SNAPSHOT, SDO_FEATURE_USAGE, SDO_GEOR_PLUGIN_REGISTRY, SDO_GEOR_XMLSCHEMA_TABLE, SDO_GR_MOSAIC_0, SDO_GR_MOSAIC_1, SDO_GR_MOSAIC_2, SDO_GR_MOSAIC_3, SDO_GR_PARALLEL, SDO_GR_RDT_1, SDO_GR_RDT_2, SDO_PC_BLK_TABLE, SDO_PREFERRED_OPS_SYSTEM, SDO_PREFERRED_OPS_USER, SDO_PRIME_MERIDIANS, SDO_PROJECTIONS_OLD_SNAPSHOT, SDO_ST_TOLERANCE, SDO_TIN_BLK_TABLE, SDO_TIN_PC_SEQ, SDO_TIN_PC_SYSDATA_TABLE, SDO_TOPO_DATA$, SDO_TOPO_RELATION_DATA, SDO_TOPO_TRANSACT_DATA, SDO_TXN_IDX_DELETES, SDO_TXN_IDX_EXP_UPD_RGN, SDO_TXN_IDX_INSERTS, SDO_UNITS_OF_MEASURE, SDO_WFS_LOCAL_TXNS, SDO_WS_CONFERENCE, SDO_WS_CONFERENCE_PARTICIPANTS, SDO_WS_CONFERENCE_RESULTS, SDO_XML_SCHEMAS, SRSNAMESPACE_TABLE, OL$, OL$HINTS, OL$NODES, BONUS, DEPT, EMP, SALGRADE, SAMPLE_DATASET_EVOLVE, SAMPLE_DATASET_FULLTEXT, SAMPLE_DATASET_INTRO, SAMPLE_DATASET_PARTN, SAMPLE_DATASET_XMLDB_HOL, SAMPLE_DATASET_XQUERY, AUDIT_ACTIONS, AUDTAB$TBS$FOR_EXPORT_TBL, AW$AWCREATE, AW$AWCREATE10G, AW$AWMD, AW$AWREPORT, AW$AWXML, AW$EXPRESS, DATA_PUMP_XPL_TABLE$, DBA_SENSITIVE_DATA_TBL, DBA_TSDP_POLICY_PROTECTION_TBL, DUAL, FGA_LOG$FOR_EXPORT_TBL, HS$_PARALLEL_METADATA, HS_BULKLOAD_VIEW_OBJ, HS_PARTITION_COL_NAME, HS_PARTITION_COL_TYPE, IMPCALLOUTREG$, IMPDP_STATS, KU$NOEXP_TAB, KU$XKTFBUE, KU$_DATAPUMP_MASTER_10_1, KU$_DATAPUMP_MASTER_11_1, KU$_DATAPUMP_MASTER_11_1_0_7, KU$_DATAPUMP_MASTER_11_2, KU$_DATAPUMP_MASTER_12_0, KU$_LIST_FILTER_TEMP, KU$_LIST_FILTER_TEMP_2, KU$_USER_MAPPING_VIEW_TBL, 
MAP_OBJECT, NACL$_ACE_EXP_TBL, NACL$_HOST_EXP_TBL, NACL$_WALLET_EXP_TBL, ODCI_PMO_ROWIDS$, ODCI_SECOBJ$, ODCI_WARNINGS$, PLAN_TABLE$, PSTUBTBL, SAM_SPARSITY_ADVICE, SPD_SCRATCH_TAB, STMT_AUDIT_OPTION_MAP, SYSTEM_PRIVILEGE_MAP, TABLE_PRIVILEGE_MAP, TSDP_ASSOCIATION$, TSDP_CONDITION$, TSDP_ERROR$, TSDP_FEATURE_POLICY$, TSDP_PARAMETER$, TSDP_POLICY$, TSDP_PROTECTION$, TSDP_SENSITIVE_DATA$, TSDP_SENSITIVE_TYPE$, TSDP_SOURCE$, TSDP_SUBPOL$, TTS_ERROR$, USER_PRIVILEGE_MAP, WRI$_ADV_ASA_RECO_DATA, WRI$_HEATMAP_TOPN_DEP1, WRI$_HEATMAP_TOPN_DEP2, WRR$_REPLAY_CALL_FILTER, XS$VALIDATION_TABLE, HELP, OL$, OL$HINTS, OL$NODES, SCHEDULER_JOB_ARGS_TBL, SCHEDULER_PROGRAM_ARGS_TBL, APP_ROLE_MEMBERSHIP, APP_USERS_AND_ROLES, Folder7_TAB, MIGR9202STATUS, SYS_NT/ZrP7MGASKngQ7ap6Ar7bw==, X$NM7UJB7VPFVE92KV0GUML7K0LVSF, X$PT7UJB7VPFVE92KV0GUML7K0LVSF, X$QN7UJB7VPFVE92KV0GUML7K0LVSF, XDB$ACL, XDB$ALL_MODEL, XDB$ANY, XDB$ANYATTR, XDB$ATTRGROUP_DEF, XDB$ATTRGROUP_REF, XDB$ATTRIBUTE, XDB$CDBPORTS, XDB$CHECKOUTS, XDB$CHOICE_MODEL, XDB$COLUMN_INFO, XDB$COMPLEX_TYPE, XDB$CONFIG, XDB$DBFS_VIRTUAL_FOLDER, XDB$DXPTAB, XDB$D_LINK, XDB$ELEMENT, XDB$GROUP_DEF, XDB$GROUP_REF, XDB$H_INDEX, XDB$H_LINK, XDB$IMPORT_TT_INFO, XDB$MOUNTS, XDB$NLOCKS, XDB$NONCEKEY, XDB$PATH_INDEX_PARAMS, XDB$REPOS, XDB$RESCONFIG, XDB$RESOURCE, XDB$ROOT_INFO, XDB$SCHEMA, XDB$SEQUENCE_MODEL, XDB$SIMPLE_TYPE, XDB$STATS, XDB$TTSET, XDB$XDB_READY, XDB$XIDX_IMP_T, XDB$XIDX_PARAM_T, XDB$XIDX_PART_TAB, XDB$XTAB, XDB$XTABCOLS, XDB$XTABNMSP, XDB_INDEX_DDL_CACHE, DICOM_METADATA_TABLE, EXIF_METADATA_TABLE, IMAGE_METADATA_TABLE, IPTC_METADATA_TABLE, ORDIMAGE_METADATA_TABLE, SYS_NTMIiiteK0RkPgUwEAAH+Rlw==, SYS_NTMIiiteK1RkPgUwEAAH+Rlw==, SYS_NTMIiiteK2RkPgUwEAAH+Rlw==, SYS_NTMIiiteK3RkPgUwEAAH+Rlw==, SYS_NTMIiiteK4RkPgUwEAAH+Rlw==, SYS_NTMIiiteK5RkPgUwEAAH+Rlw==, SYS_NTMIiiteK6RkPgUwEAAH+Rlw==, SYS_NTMIiiteK7RkPgUwEAAH+Rlw==, SYS_NTMIiiteK8RkPgUwEAAH+Rlw==, SYS_NTMIiiteK9RkPgUwEAAH+Rlw==, SYS_NTMIiiteKZRkPgUwEAAH+Rlw==, 
SYS_NTMIiiteKaRkPgUwEAAH+Rlw==, SYS_NTMIiiteKbRkPgUwEAAH+Rlw==, SYS_NTMIiiteKcRkPgUwEAAH+Rlw==, SYS_NTMIiiteKdRkPgUwEAAH+Rlw==, SYS_NTMIiiteKeRkPgUwEAAH+Rlw==, SYS_NTMIiiteKfRkPgUwEAAH+Rlw==, SYS_NTMIiiteKgRkPgUwEAAH+Rlw==, SYS_NTMIiiteKqRkPgUwEAAH+Rlw==, SYS_NTMIiiteKrRkPgUwEAAH+Rlw==, SYS_NTMIiiteKsRkPgUwEAAH+Rlw==, SYS_NTMIiiteKtRkPgUwEAAH+Rlw==, SYS_NTMIiiteKuRkPgUwEAAH+Rlw==, SYS_NTMIiiteKvRkPgUwEAAH+Rlw==, XMP_METADATA_TABLE, REVISED_COLL_TYPES, REVISED_TYPES, REVISED_TYPE_ATTRS, REVISED_TYPE_SUMMARY, STORAGE_MODEL_CACHE, TYPE_SUMMARY, XDBPM_INDEX_DDL_CACHE, DOCUMENT_UPLOAD_TABLE, XFILES_DOCUMENT_STAGING, XFILES_WIKI_TABLE]
query =
batch.max.rows = 100
connection.url = jdbc:oracle:thin:<user>/<pwd>@<host>:<port>:<sid>
table.blacklist = []
table.whitelist = []
(io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-06-07 11:27:16,606] INFO Created connector test-sqlite-jdbc-autoincrement (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-06-07 11:27:20,491] ERROR Task test-sqlite-jdbc-autoincrement-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-06-07 11:27:20,494] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
org.apache.kafka.connect.errors.ConnectException: Failed trying to validate that columns used for offsets are NOT NULL
at io.confluent.connect.jdbc.JdbcSourceTask.validateNonNullable(JdbcSourceTask.java:262)
at io.confluent.connect.jdbc.JdbcSourceTask.start(JdbcSourceTask.java:131)
at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:341)
at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
Caused by: java.sql.SQLDataException: ORA-01424: missing or illegal character following the escape character
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:774)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:926)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1112)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:4846)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1501)
at oracle.jdbc.driver.OracleDatabaseMetaData.getColumnsWithWildcards(OracleDatabaseMetaData.java:348)
at oracle.jdbc.driver.OracleDatabaseMetaData.getColumns(OracleDatabaseMetaData.java:128)
at io.confluent.connect.jdbc.JdbcUtils.isColumnNullable(JdbcUtils.java:153)
at io.confluent.connect.jdbc.JdbcSourceTask.validateNonNullable(JdbcSourceTask.java:254)
... 3 more
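One thing I notice in the task config above is that the connector is picking up every table visible to the user, including Oracle system tables whose names contain $ and other special characters (e.g. the SYS_NT... nested-table segments). I haven't tried restricting the scan yet; if it's relevant, I could add something like the following (a hypothetical whitelist, assuming table.whitelist accepts a comma-separated list of table names, and using a few of the application tables that appear in the log):

```properties
# restrict the connector to application tables only (hypothetical whitelist)
table.whitelist=EMP,DEPT,BONUS,SALGRADE
```

Would narrowing the table list like this be the right way to avoid the ORA-01424 during the NOT NULL offset-column validation, or is something else going wrong?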