Kafka Connect with Oracle database as source

Cherupally Bhargav

Apr 1, 2016, 2:09:37 PM
to Confluent Platform
Hi,

I've tried Kafka Connect with a MySQL database, and it worked great.
I've installed the Oracle 12c client and put ojdbc6.jar and ojdbc7.jar on the classpath.
But when I changed the properties file to point at the Oracle database connection URL, it shows the following error:

[2016-04-01 17:11:24,882] INFO JdbcSourceConnectorConfig values: 
connection.url = jdbc:oracle:thin:<user>/<password>@<HOST>:<PORT>:<SID>
query = 
topic.prefix = test_jdbc_
batch.max.rows = 100
table.whitelist = []
mode = incrementing
table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-04-01 17:11:25,735] 
ERROR Couldn't open connection to jdbc:oracle:thin:<user>/<password>@<HOST>:<PORT>:<SID>: 
java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found
 (io.confluent.connect.jdbc.JdbcSourceConnector:76)

Can you please help me resolve this issue ?

Thanks,
Bhargav

Gwen Shapira

Apr 4, 2016, 11:52:33 AM
to confluent...@googlegroups.com
It looks like your client is so new that it is not compatible with the server :)
I'd remove ojdbc7.jar from the classpath and see if that works. If it doesn't, you may need an ojdbc6.jar from a client that matches your server version.
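
For example (jar locations are assumptions; adjust to your install):

export CLASSPATH=/opt/oracle/product/12.1.0/client_1/jdbc/lib/ojdbc6.jar

For the ORA-01882 error specifically, pinning the JVM timezone is another commonly cited workaround, e.g.:

export KAFKA_OPTS="-Duser.timezone=UTC"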

Gwen


Cherupally Bhargav

Apr 6, 2016, 11:36:37 AM
to Confluent Platform
Thanks, Gwen. As suggested, I downloaded an ojdbc jar compatible with the database I'm trying to connect to, and now it works fine.
But I don't see services getting started after this:

[2016-04-06 15:28:08,309] INFO Started o.e.j.s.ServletContextHandler@5bcef63e{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-04-06 15:28:08,322] INFO Started ServerConnector@1f2dc289{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-04-06 15:28:08,322] INFO Started @3422ms (org.eclipse.jetty.server.Server:379)
[2016-04-06 15:28:08,325] INFO REST server listening at http://10.0.2.15:8083/, advertising URL http://10.0.2.15:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-04-06 15:28:08,326] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-04-06 15:28:08,339] INFO ConnectorConfig values: 
topics = []
name = test-mysql-jdbc
tasks.max = 1
connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-04-06 15:28:08,342] INFO Creating connector test-mysql-jdbc of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-04-06 15:28:08,348] INFO Instantiated connector test-mysql-jdbc with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-04-06 15:28:08,361] INFO JdbcSourceConnectorConfig values: 
connection.url = jdbc:oracle:thin:<user>/<password>@<HOST>:<PORT>:<SID>
query = 
topic.prefix = test_jdbc_
batch.max.rows = 100
table.whitelist = []
mode = incrementing
table.blacklist = []
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-04-06 15:28:14,612] INFO Finished creating connector test-mysql-jdbc (org.apache.kafka.connect.runtime.Worker:193)

Gwen Shapira

Apr 6, 2016, 12:50:49 PM
to confluent...@googlegroups.com
It looks like you didn't configure any tables for the connector to read... both table.whitelist and table.blacklist are empty.

Also, I would recommend not calling your Oracle source connector "test-mysql-jdbc"; it can get confusing :)
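
For example (the connector name and table are placeholders):

name=test-oracle-jdbc
table.whitelist=USERS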

Gwen


Ewen Cheslack-Postava

Apr 7, 2016, 3:31:16 AM
to Confluent Platform
Just to clarify, neither table.whitelist nor table.blacklist is required -- if you omit both, it should simply copy all tables by default.

What exactly do you mean by "I don't see services getting started after this"? I'd expect some log messages over time, but the connector is not particularly verbose -- for normal copying operations it will not log any messages (else it would be quite verbose). It should log some info periodically about committing offsets, but other than that you should simply observe output topics for data. If the connector is running properly (and changes are being made in the database) you should see data being delivered into Kafka.

In the worst case, if nothing else, I would expect to see some sort of exception or error in the log if no data is being copied at all. If you aren't seeing any data downstream, is anything being emitted to the log file?
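
For example, with the quickstart's Avro converters you could watch an output topic with something like this (the topic name assumes a table called users, given the test_jdbc_ prefix above):

./bin/kafka-avro-console-consumer --zookeeper localhost:2181 \
    --topic test_jdbc_users --from-beginning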

-Ewen

Cherupally Bhargav

Apr 8, 2016, 12:26:26 AM
to Confluent Platform
Hi Ewen,

I've been looking for a platform where I can build a scalable ETL pipeline, and I found Kafka Connect. I downloaded the pre-built virtual machine from here:

After setting up the environment, I started the necessary services for Kafka Connect (Hadoop, Hive metastore).
Here is the configuration I have tried initially:
Source Database: MySQL
Hive Metastore: MySQL

MySQL → Kafka → HDFS → Hive

The above pipeline worked great, and I was able to see database change capture working.

And I tried to configure the same for Oracle:
Source Database: Oracle
Hive Metastore: MySQL

Oracle → Kafka → HDFS → Hive

I've installed an Oracle client compatible with the database I'm trying to connect to and placed the ojdbc*.jar on the classpath.
I changed the mysql.properties file to use the Oracle connection:

connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:oracle:thin:<user>/<password>@<HOST>:<PORT>:<SID>
mode=timestamp+incrementing
topic.prefix=test_jdbc_
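
For timestamp+incrementing mode the connector will also need the tracking columns spelled out, along these lines (the column names here are placeholders):

timestamp.column.name=MODIFIED
incrementing.column.name=ID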

I've started the services (Hadoop, Hive metastore), created a table users, and inserted records into the Oracle database.
When the following command is run, I see only "INFO Finished creating connector test-mysql-jdbc":
connect-standalone /mnt/etc/connect-avro-standalone.properties \
/mnt/etc/mysql.properties /mnt/etc/hdfs.properties &

[2016-04-06 15:28:14,612] INFO Finished creating connector test-mysql-jdbc (org.apache.kafka.connect.runtime.Worker:193)
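
One way to check whether any output topics were created at all is, for example:

./bin/kafka-topics --zookeeper localhost:2181 --list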

I have also tried connecting to Hive, and I don't see any external table.
I'm not sure the process I followed to configure the platform for Oracle is correct.
Can you please let me know the required steps to configure the platform for an Oracle database?

Thanks,
Bhargav Cherupally


byl...@gmail.com

Apr 11, 2016, 2:14:03 AM
to Confluent Platform
Hi, I have been stuck here for two days. I got the same log, but when I list all topics in Kafka, the expected topic (<topic.prefix> + table name) does not exist. If you have any idea, please tell me. Thank you very much!

On Wednesday, April 6, 2016 at 11:36:37 PM UTC+8, Cherupally Bhargav wrote:

byl...@gmail.com

Apr 11, 2016, 3:04:00 AM
to Confluent Platform
Hi Ewen, 
       I think the "services" are the topics that map to the tables in Oracle. However, I got the following log:
       
       [2016-04-11 02:52:35,514] INFO Created connector oracle-connect-test (org.apache.kafka.connect.cli.ConnectStandalone:82)
       [2016-04-11 02:52:36,501] INFO Source task Thread[WorkerSourceTask-oracle-connect-test-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
       
       Then I listed all topics in Kafka, but I didn't find any of the expected topics (<topic.prefix> + table name).
Thanks.

On Thursday, April 7, 2016 at 3:31:16 PM UTC+8, Ewen Cheslack-Postava wrote:

Liquan Pei

Apr 11, 2016, 3:09:16 AM
to confluent...@googlegroups.com
Hi 

Can you check whether you can query your data in the Oracle database with the JDBC driver?
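
For example, a minimal standalone check along these lines (host, credentials and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleCheck {
    public static void main(String[] args) throws Exception {
        // Same thin-driver URL format as in the connector config.
        String url = "jdbc:oracle:thin:@<HOST>:<PORT>:<SID>";
        try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>");
             Statement stmt = conn.createStatement();
             // Fetch a handful of rows to prove connectivity and permissions.
             ResultSet rs = stmt.executeQuery("SELECT * FROM <TABLE> WHERE ROWNUM <= 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // print the first column of each row
            }
        }
    }
}

Run it with the same ojdbc jar on the classpath that Kafka Connect uses.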

Thanks,
Liquan 


--
Liquan Pei | Software Engineer | Confluent | +1 413.230.6855
Download Apache Kafka and Confluent Platform: www.confluent.io/download

byl...@gmail.com

Apr 11, 2016, 3:27:32 AM
to Confluent Platform
Yes, of course. Oracle version: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production. JDBC driver: ojdbc6-1.0.0.jar -> oracle.jdbc.driver.OracleDriver.
I am doing a PoC of the Confluent Platform and am stuck here.

Thanks,
Brave.
On Monday, April 11, 2016 at 3:09:16 PM UTC+8, Liquan Pei wrote:

byl...@gmail.com

Apr 11, 2016, 4:44:55 AM
to Confluent Platform
Hi Liquan,
I can query data from the Oracle database with the ojdbc6 jar.

Thanks.

On Monday, April 11, 2016 at 3:09:16 PM UTC+8, Liquan Pei wrote:

Liquan Pei

Apr 11, 2016, 4:59:51 AM
to confluent...@googlegroups.com
Hi 

Can you give me more information on how you run the pipeline with the Oracle database? Did you use another topic prefix for the Oracle database? Did you clean Kafka, HDFS and the schema registry before switching to the Oracle database?
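
Note that in standalone mode the source offsets live in a local file (offset.storage.file.filename, /tmp/connect.offsets in the quickstart configs), so fully resetting the connector state would also mean something like:

rm /tmp/connect.offsets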

Thanks,
Liquan 

--
Liquan Pei | Software Engineer | Confluent | +1 413.230.6855
Download Apache Kafka and Confluent Platform: www.confluent.io/download


byl...@gmail.com

Apr 11, 2016, 6:48:00 AM
to Confluent Platform
Hi,
   I tried it again.
   1. Clean Kafka (remove all topics: the ZooKeeper paths /config/topics/<topicname> and /brokers/topics/<topicname>, plus the local logs) and restart.
   2. Restart the Schema Registry.
   
   oracle.properties:
   name=oracle-connect-test
   connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
   tasks.max=1
   topic.prefix=test_oracle_jdbc_
   connection.url=jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>
   table.whitelist=USER

   mode=timestamp+incrementing
   timestamp.column.name=MODIFIED

   Table USER in Oracle:
   Name     Null     Type         
   -------- -------- ------------ 
   ID       NOT NULL NUMBER(38)   
   MODIFIED NOT NULL TIMESTAMP(6) 
   USERNAME NOT NULL VARCHAR2(20) 
   PASSWORD NOT NULL VARCHAR2(20) 

   3. ./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/oracle.properties
  
   4. List all topics:
    __consumer_offsets
    _schemas 
    connect-configs  
    connect-offsets

    ...

   
On Monday, April 11, 2016 at 4:59:51 PM UTC+8, Liquan Pei wrote:

Gwen Shapira

Apr 11, 2016, 1:33:42 PM
to confluent...@googlegroups.com
Maybe try running at the debug log level, so we can see if there is anything else going on?
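
For example, in the Connect log4j configuration (connect-log4j.properties; the exact path varies by install):

log4j.rootLogger=DEBUG, stdout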


Yongjian Meng

Apr 11, 2016, 10:51:54 PM
to Confluent Platform
No more information. Just no topics created! I have reviewed the code of kafka-connect-jdbc, but it didn't help me...
So our team decided to give up on Confluent...

On Tuesday, April 12, 2016 at 1:33:42 AM UTC+8, Gwen Shapira wrote:

Liquan Pei

Apr 12, 2016, 12:43:00 AM
to confluent...@googlegroups.com
Hi Yongjian,

It is a bit odd to see connect-configs in the topic list, as that topic is not used by standalone mode. Do you want to give it another try?

1. Stop all the services including Kafka, Zookeeper, Schema Registry and Kafka Connect.
2. Clean zookeeper by running rm -rf /tmp/zookeeper
3. Clean Kafka logs rm -rf /tmp/kafka-logs
4. The above steps ensure that the old data is cleaned

5. Start Zookeeper, Kafka and Schema Registry with the default configurations
6. Start the JDBC connector with the following config: 
   
   name=oracle-connect-test
   connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
   tasks.max=1
   topic.prefix=test_oracle_jdbc_
   connection.url=jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>
   mode=timestamp+incrementing
   timestamp.column.name=MODIFIED
   
  ./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/oracle.properties

7. If the connector starts and there is some data in the database, you should probably see data being ingested into Kafka, or you will see the exception Invalid type for incrementing column: BYTES, as there are some issues working with Oracle's NUMBER type. If you see this, repeat steps 1-4 to ensure everything is cleaned, but before you start the connector, change oracle.properties to use bulk mode:
  
   name=oracle-connect-test
   connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
   tasks.max=1
   topic.prefix=test_oracle_jdbc_
   connection.url=jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>
   mode=bulk

   Then start Kafka Connect again. 

8. If you see any errors, it would be nice if you could send me the whole Kafka Connect log. That will help me dig into the issue.
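
For example, capturing everything to a file:

./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties \
    etc/kafka-connect-jdbc/oracle.properties 2>&1 | tee connect.log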


Thanks,
Liquan


Yongjian Meng

Apr 12, 2016, 3:34:03 AM
to Confluent Platform
Hi Liquan,
Thanks for your reply.
I tried it again; here is a summary:

1. I did steps 1-6, but with one difference: in the connector configuration I added a table.whitelist property, because there are too many tables in this schema.
    After I started the JDBC connector, the expected topic was not created...
    I used the INTEGER type instead of NUMBER for ID, so I didn't see the "Invalid type for incrementing column: BYTES" exception caused by Oracle's NUMBER type.
2. If I change the mode to bulk, surprise!!! The topic is created with the prefix test_oracle_jdbc_ and a consumer can receive messages from it. However, we need timestamp+incrementing mode...
    When we change bulk back to timestamp+incrementing and restart the JDBC connector, the consumer cannot receive messages from the topic...

Connect start log for timestamp+incrementing:

>  ./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/oracle.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/bigdatagfts/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/bigdatagfts/confluent-2.0.1/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/bigdatagfts/confluent-2.0.1/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/bigdatagfts/confluent-2.0.1/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-04-12 03:02:08,785] INFO StandaloneConfig values: 
        cluster = connect
        rest.advertised.host.name = null
        rest.host.name = null
        rest.advertised.port = null
        bootstrap.servers = [host1:9092, host2:9092, host3:9092]
        offset.flush.timeout.ms = 5000
        offset.flush.interval.ms = 60000
        rest.port = 8083
        internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
        internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
        value.converter = class io.confluent.connect.avro.AvroConverter
        key.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-04-12 03:02:09,102] INFO Logging initialized @774ms (org.eclipse.jetty.util.log:186)
[2016-04-12 03:02:09,131] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-04-12 03:02:09,132] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-04-12 03:02:09,144] INFO ProducerConfig values: 
        compression.type = none
        metric.reporters = []
        metadata.max.age.ms = 300000
        metadata.fetch.timeout.ms = 60000
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        bootstrap.servers = [host1:9092, host2:9092, host3:9092]
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        buffer.memory = 33554432
        timeout.ms = 30000
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.keystore.type = JKS
        ssl.trustmanager.algorithm = PKIX
        block.on.buffer.full = false
        ssl.key.password = null
        max.block.ms = 9223372036854775807
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        ssl.truststore.password = null
        max.in.flight.requests.per.connection = 1
        metrics.num.samples = 2
        client.id
        ssl.endpoint.identification.algorithm = null
        ssl.protocol = TLS
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        acks = all
        batch.size = 16384
        ssl.keystore.location = null
        receive.buffer.bytes = 32768
        ssl.cipher.suites = null
        ssl.truststore.type = JKS
        security.protocol = PLAINTEXT
        retries = 2147483647
        max.request.size = 1048576
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        ssl.truststore.location = null
        ssl.keystore.password = null
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        send.buffer.bytes = 131072
        linger.ms = 0
 (org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-04-12 03:02:09,180] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-04-12 03:02:09,180] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-04-12 03:02:09,181] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-04-12 03:02:09,185] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-04-12 03:02:09,185] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-04-12 03:02:09,185] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-04-12 03:02:09,185] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-04-12 03:02:09,316] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
Apr 12, 2016 3:02:09 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2016-04-12 03:02:09,966] INFO Started o.e.j.s.ServletContextHandler@54107f42{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-04-12 03:02:09,975] INFO Started ServerConnector@7dbf463c{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-04-12 03:02:09,975] INFO Started @1649ms (org.eclipse.jetty.server.Server:379)
[2016-04-12 03:02:09,978] INFO REST server listening at http://host3:8083/, advertising URL http://host3:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-04-12 03:02:09,978] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-04-12 03:02:09,981] INFO ConnectorConfig values: 
        connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
        tasks.max = 1
        topics = []
        name = oracle-connect-test
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-04-12 03:02:09,981] INFO Creating connector oracle-connect-test of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-04-12 03:02:09,982] INFO Instantiated connector oracle-connect-test with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-04-12 03:02:09,987] INFO JdbcSourceConnectorConfig values: 
        mode = timestamp+incrementing
        timestamp.column.name = MODIFIED
        incrementing.column.name = ID
        topic.prefix = test_oracle_jdbc_
        poll.interval.ms = 5000
        query = 
        batch.max.rows = 100
        connection.url = jdbc:oracle:thin:<username>/<password>@host:port:sid
        table.blacklist = []
        table.poll.interval.ms = 60000
        table.whitelist = [USERYM]
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-04-12 03:02:21,770] INFO Finished creating connector oracle-connect-test (org.apache.kafka.connect.runtime.Worker:193)
[2016-04-12 03:02:57,826] INFO TaskConfig values: 
        task.class = class io.confluent.connect.jdbc.JdbcSourceTask
 (org.apache.kafka.connect.runtime.TaskConfig:165)
[2016-04-12 03:02:57,826] INFO Creating task oracle-connect-test-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-04-12 03:02:57,827] INFO Instantiated task oracle-connect-test-0 with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-04-12 03:02:57,832] INFO JdbcSourceTaskConfig values: 
        mode = timestamp+incrementing
        timestamp.column.name = MODIFIED
        incrementing.column.name = ID
        topic.prefix = test_oracle_jdbc_
        tables = [USERYM]
        poll.interval.ms = 5000
        query = 
        batch.max.rows = 100
        connection.url = jdbc:oracle:thin:<username>/<password>@host:port:sid
        table.blacklist = []
        table.poll.interval.ms = 60000
        table.whitelist = [USERYM]
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-04-12 03:02:57,833] INFO Created connector oracle-connect-test (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-04-12 03:02:58,413] INFO Source task Thread[WorkerSourceTask-oracle-connect-test-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)

Connect start log for bulk:
...<same as the log above>...
[2016-04-12 02:58:35,735] INFO Wait to catch up until the offset of the last message at 3 (io.confluent.kafka.schemaregistry.storage.KafkaStore:225)
[2016-04-12 02:58:35,787] INFO 127.0.0.1 - - [12/Apr/2016:02:58:35 -0400] "POST /subjects/test_oracle_jdbc_USERYM-value/versions HTTP/1.1" 200 8  352 (io.confluent.rest-utils.requests:77)
[2016-04-12 02:58:35,916] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2016-04-12 02:58:35,922] INFO [KafkaApi-0] Auto creation of topic test_oracle_jdbc_USERYM with 1 partitions and replication factor 1 is successful! (kafka.server.KafkaApis)
[2016-04-12 02:58:35,936] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [test_oracle_jdbc_USERYM,0] (kafka.server.ReplicaFetcherManager)
[2016-04-12 02:58:35,937] WARN Error while fetching metadata with correlation id 0 : {test_oracle_jdbc_USERYM=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient:582)
[2016-04-12 02:58:35,939] INFO Completed load of log test_oracle_jdbc_USERYM-0 with log end offset 0 (kafka.log.Log)
[2016-04-12 02:58:35,940] INFO Created log for partition [test_oracle_jdbc_USERYM,0] in /home/bigdatagfts/confluent-2.0.1/logs with properties {compression.type -> producer, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2016-04-12 02:58:35,940] INFO Partition [test_oracle_jdbc_USERYM,0] on broker 0: No checkpointed highwatermark is found for partition [test_oracle_jdbc_USERYM,0] (kafka.cluster.Partition)

On Tuesday, April 12, 2016 at 12:43:00 PM UTC+8, Liquan Pei wrote:

Liquan Pei

Apr 12, 2016, 8:25:13 PM
to confluent...@googlegroups.com
Hi Yongjian,

Thanks for getting back to me with the logs. The only explanation I can think of right now is that the connector is not getting data from the database. If no data is returned, the poll() method in JdbcSourceTask will block, and in that case Kafka Connect will not write data to Kafka.
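
For reference, in timestamp+incrementing mode the table querier issues a query along these lines (paraphrased; not the exact SQL the connector generates):

SELECT * FROM "USERYM"
WHERE "MODIFIED" < ?                                     -- current time, bound by the connector
  AND (("MODIFIED" = ? AND "ID" > ?) OR "MODIFIED" > ?)  -- strictly after the last saved offset
ORDER BY "MODIFIED", "ID" ASC

So if the MODIFIED values in the table are ahead of the current time the connector binds (for example, because of a clock or timezone mismatch between the database and the Connect worker), the query can legitimately return no rows even though the table has records.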

Do you mind trying mode=timestamp and mode=incrementing as well? That will help me dig into the root cause of the problem. Also, please post the connector log.

Thanks,
Liquan

--
Liquan Pei | Software Engineer | Confluent | +1 413.230.6855
Download Apache Kafka and Confluent Platform: www.confluent.io/download


Yongjian Meng

Apr 12, 2016, 10:16:37 PM
to Confluent Platform
Hi Liquan,

If I use mode=incrementing only, I see an exception, Invalid type for incrementing column: BYTES, right after the "finished initialization and start" log line. With mode=timestamp+incrementing I do not; it just logs "finished initialization and start" and nothing afterwards. So you are right, mode=timestamp+incrementing is blocked. (I used INTEGER for ID, but in Oracle INTEGER is just an alias for NUMBER(38,0), which may be why the BYTES exception still appears.)
Hmm, the table in the whitelist does have records...
mode=incrementing log:
[2016-04-12 21:31:57,722] INFO Created connector oracle-connect-test (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-04-12 21:31:58,228] INFO Source task Thread[WorkerSourceTask-oracle-connect-test-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
...
[2016-04-12 21:31:58,276] ERROR Task oracle-connect-test-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-04-12 21:31:58,277] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
org.apache.kafka.connect.errors.ConnectException: Invalid type for incrementing column: BYTES
        at io.confluent.connect.jdbc.TimestampIncrementingTableQuerier.extractRecord(TimestampIncrementingTableQuerier.java:177)
        at io.confluent.connect.jdbc.JdbcSourceTask.poll(JdbcSourceTask.java:211)
        at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:353)
        at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)

If I use mode=timestamp only, the log output is the same as for mode=timestamp+incrementing, so I think it is blocked too... could the timestamp column be causing it?

Column name  Data type     Nullable  Data default
ID           NUMBER(38,0)  No
MODIFIED     TIMESTAMP(6)  No        CURRENT_TIMESTAMP

Thanks,
Yongjian.


On Wednesday, April 13, 2016 at 8:25:13 AM UTC+8, Liquan Pei wrote:
...

Liquan Pei

Apr 12, 2016, 10:53:49 PM
to confluent...@googlegroups.com
Hi Yongjian,

We actually log at trace level when we don't get any data. I am still trying to figure out what happened; can you try running the connector with logging at the trace level?
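
For example, adding this to connect-log4j.properties turns on trace logging for just the connector:

log4j.logger.io.confluent.connect.jdbc=TRACE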

Thanks,
Liquan


Yongjian Meng

Apr 12, 2016, 11:45:58 PM
to Confluent Platform
Hi Liquan,

mode=timestamp+incrementing log:
[2016-04-12 23:22:13,841] INFO DistributedConfig values: 
cluster = connect
metric.reporters = []
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
ssl.keystore.type = JKS
ssl.truststore.password = null
key.converter = class io.confluent.connect.avro.AvroConverter
ssl.endpoint.identification.algorithm = null
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
rest.port = 8083
ssl.truststore.location = null
ssl.keystore.password = null
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
send.buffer.bytes = 131072
group.id = connect-cluster
rest.advertised.port = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
value.converter = class io.confluent.connect.avro.AvroConverter
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
ssl.keymanager.algorithm = SunX509
 (org.apache.kafka.connect.runtime.distributed.DistributedConfig:165)
[2016-04-12 23:22:14,191] DEBUG Logging to org.slf4j.impl.Log4jLoggerAdapter(org.eclipse.jetty.util.log) via org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:176)
[2016-04-12 23:22:14,193] INFO Logging initialized @825ms (org.eclipse.jetty.util.log:186)
[2016-04-12 23:22:14,201] DEBUG org.eclipse.jetty.server.Server@561b6512 added {qtp1886491834{STOPPED,8<=0<=200,i=0,q=0},AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,218] DEBUG HttpConnectionFactory@68567e20{HTTP/1.1} added {HttpConfiguration@76ed1b7c{32768/8192,8192/8192,https://:0,[]},POJO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,220] DEBUG ServerConnector@fd07cbb{null}{0.0.0.0:0} added {org.eclipse.jetty.server.Server@561b6512,UNMANAGED} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,221] DEBUG ServerConnector@fd07cbb{null}{0.0.0.0:0} added {qtp1886491834{STOPPED,8<=0<=200,i=0,q=0},AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,221] DEBUG ServerConnector@fd07cbb{null}{0.0.0.0:0} added {org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@fa36558,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,221] DEBUG ServerConnector@fd07cbb{null}{0.0.0.0:0} added {org.eclipse.jetty.io.ArrayByteBufferPool@3571b748,POJO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,222] DEBUG ServerConnector@fd07cbb{null}{0.0.0.0:0} added {HttpConnectionFactory@68567e20{HTTP/1.1},AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,224] DEBUG ServerConnector@fd07cbb{HTTP/1.1}{0.0.0.0:0} added {org.eclipse.jetty.server.ServerConnector$ServerConnectorManager@7748410a,MANAGED} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,224] DEBUG org.eclipse.jetty.server.Server@561b6512 added {ServerConnector@fd07cbb{HTTP/1.1}{0.0.0.0:8083},AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:14,430] DEBUG Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = []) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,445] DEBUG Added sensor with name connections-closed:client-id-connect-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,448] DEBUG Added sensor with name connections-created:client-id-connect-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,448] DEBUG Added sensor with name bytes-sent-received:client-id-connect-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,449] DEBUG Added sensor with name bytes-sent:client-id-connect-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,450] DEBUG Added sensor with name bytes-received:client-id-connect-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,451] DEBUG Added sensor with name select-time:client-id-connect-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,451] DEBUG Added sensor with name io-time:client-id-connect-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,465] DEBUG Added sensor with name heartbeat-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,465] DEBUG Added sensor with name join-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,466] DEBUG Added sensor with name sync-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,470] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-04-12 23:22:14,470] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-04-12 23:22:14,471] DEBUG Connect group member created (org.apache.kafka.connect.runtime.distributed.WorkerGroupMember:113)
[2016-04-12 23:22:14,472] DEBUG Kafka Connect instance created (org.apache.kafka.connect.runtime.Connect:45)
[2016-04-12 23:22:14,473] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-04-12 23:22:14,473] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-04-12 23:22:14,479] INFO ProducerConfig values: 
compression.type = none
metric.reporters = []
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
[2016-04-12 23:22:14,479] TRACE Starting the Kafka producer (org.apache.kafka.clients.producer.KafkaProducer:201)
[2016-04-12 23:22:14,482] DEBUG Added sensor with name bufferpool-wait-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,484] DEBUG Added sensor with name buffer-exhausted-records (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,484] DEBUG Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = []) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,484] DEBUG Added sensor with name connections-closed:client-id-producer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,485] DEBUG Added sensor with name connections-created:client-id-producer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,485] DEBUG Added sensor with name bytes-sent-received:client-id-producer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,485] DEBUG Added sensor with name bytes-sent:client-id-producer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,485] DEBUG Added sensor with name bytes-received:client-id-producer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,486] DEBUG Added sensor with name select-time:client-id-producer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,486] DEBUG Added sensor with name io-time:client-id-producer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,488] DEBUG Added sensor with name batch-size (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,488] DEBUG Added sensor with name compression-rate (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,488] DEBUG Added sensor with name queue-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,489] DEBUG Added sensor with name request-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,489] DEBUG Added sensor with name produce-throttle-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,489] DEBUG Added sensor with name records-per-request (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,490] DEBUG Added sensor with name record-retries (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,490] DEBUG Added sensor with name errors (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,490] DEBUG Added sensor with name record-size-max (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,492] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-04-12 23:22:14,492] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-04-12 23:22:14,492] DEBUG Kafka producer started (org.apache.kafka.clients.producer.KafkaProducer:315)
[2016-04-12 23:22:14,492] DEBUG Starting Kafka producer I/O thread. (org.apache.kafka.clients.producer.internals.Sender:123)
[2016-04-12 23:22:14,492] INFO Starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:84)
[2016-04-12 23:22:14,492] INFO Starting KafkaBasedLog with topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:114)
[2016-04-12 23:22:14,493] INFO ProducerConfig values: 
compression.type = none
metric.reporters = []
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
 (org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-04-12 23:22:14,493] TRACE Starting the Kafka producer (org.apache.kafka.clients.producer.KafkaProducer:201)
[2016-04-12 23:22:14,493] DEBUG Added sensor with name bufferpool-wait-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,493] DEBUG Added sensor with name buffer-exhausted-records (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,493] DEBUG Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = []) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,494] DEBUG Added sensor with name connections-closed:client-id-producer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,494] DEBUG Added sensor with name connections-created:client-id-producer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,494] DEBUG Added sensor with name bytes-sent-received:client-id-producer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,494] DEBUG Added sensor with name bytes-sent:client-id-producer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,494] DEBUG Added sensor with name bytes-received:client-id-producer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,495] DEBUG Added sensor with name select-time:client-id-producer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,495] DEBUG Added sensor with name io-time:client-id-producer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,495] DEBUG Added sensor with name batch-size (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,496] DEBUG Added sensor with name compression-rate (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,496] DEBUG Added sensor with name queue-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,496] DEBUG Added sensor with name request-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,496] DEBUG Added sensor with name produce-throttle-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,497] DEBUG Added sensor with name records-per-request (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,497] DEBUG Added sensor with name record-retries (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,497] DEBUG Added sensor with name errors (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,497] DEBUG Added sensor with name record-size-max (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,498] WARN The configuration config.storage.topic = connect-configs was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] DEBUG Starting Kafka producer I/O thread. (org.apache.kafka.clients.producer.internals.Sender:123)
[2016-04-12 23:22:14,498] WARN The configuration group.id = connect-cluster was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] WARN The configuration internal.key.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] WARN The configuration value.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] WARN The configuration internal.key.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] WARN The configuration internal.value.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] WARN The configuration internal.value.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] WARN The configuration offset.storage.topic = connect-offsets was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,498] WARN The configuration value.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,499] WARN The configuration key.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,499] WARN The configuration key.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,499] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-04-12 23:22:14,499] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-04-12 23:22:14,499] DEBUG Kafka producer started (org.apache.kafka.clients.producer.KafkaProducer:315)
[2016-04-12 23:22:14,504] INFO ConsumerConfig values: 
metric.reporters = []
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = connect-cluster
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:9092]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
metrics.num.samples = 2
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
check.crcs = true
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
fetch.min.bytes = 1
send.buffer.bytes = 131072
auto.offset.reset = earliest
 (org.apache.kafka.clients.consumer.ConsumerConfig:165)
[2016-04-12 23:22:14,504] DEBUG Starting the Kafka consumer (org.apache.kafka.clients.consumer.KafkaConsumer:552)
[2016-04-12 23:22:14,505] DEBUG Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = []) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,505] DEBUG Added sensor with name connections-closed:client-id-consumer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,505] DEBUG Added sensor with name connections-created:client-id-consumer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,505] DEBUG Added sensor with name bytes-sent-received:client-id-consumer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,505] DEBUG Added sensor with name bytes-sent:client-id-consumer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,506] DEBUG Added sensor with name bytes-received:client-id-consumer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,506] DEBUG Added sensor with name select-time:client-id-consumer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,506] DEBUG Added sensor with name io-time:client-id-consumer-1 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,512] DEBUG Added sensor with name heartbeat-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,512] DEBUG Added sensor with name join-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,512] DEBUG Added sensor with name sync-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,514] DEBUG Added sensor with name commit-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,520] DEBUG Added sensor with name bytes-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,521] DEBUG Added sensor with name records-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,521] DEBUG Added sensor with name fetch-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,521] DEBUG Added sensor with name records-lag (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,521] DEBUG Added sensor with name fetch-throttle-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,522] WARN The configuration config.storage.topic = connect-configs was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration internal.key.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration value.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration internal.key.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration internal.value.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration internal.value.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration offset.storage.topic = connect-offsets was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration value.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration key.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] WARN The configuration key.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,522] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-04-12 23:22:14,522] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-04-12 23:22:14,523] DEBUG Kafka consumer created (org.apache.kafka.clients.consumer.KafkaConsumer:642)
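A note on the block of WARN lines above: they are harmless. The distributed worker passes its entire worker configuration through to the embedded consumer (and, further down, the producer), so Connect-level keys such as config.storage.topic and the converter settings get flagged as "unknown" by the plain Kafka client. Read off the flagged keys and the config dump, the worker properties behind this run would look roughly like the sketch below; this is reconstructed from the log, not necessarily the exact file used:

bootstrap.servers=localhost:9092
group.id=connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter.schemas.enable=false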
[2016-04-12 23:22:14,539] DEBUG Initiating connection to node -1 at localhost:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:14,594] DEBUG Added sensor with name node--1.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,595] DEBUG Added sensor with name node--1.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,595] DEBUG Added sensor with name node--1.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,596] DEBUG Completed connection to node -1 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:14,697] DEBUG Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=2,client_id=consumer-1}, body={topics=[]}), isInitiatedByNetworkClient, createdTimeMs=1460517734696, sendTimeMs=0) to node -1 (org.apache.kafka.clients.NetworkClient:619)
[2016-04-12 23:22:14,715] DEBUG Updated cluster metadata version 2 to Cluster(nodes = [Node(0, sd-4261-f1e1, 9092), Node(1, sd-6286-1278, 9092), Node(2, sd-023d-317b, 9092)], partitions = [Partition(topic = __consumer_offsets, partition = 13, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = __consumer_offsets, partition = 46, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = __consumer_offsets, partition = 9, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = __consumer_offsets, partition = 42, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = connect-offsets, partition = 0, leader = 1, replicas = [1,], isr = [1,], Partition(topic = __consumer_offsets, partition = 21, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = __consumer_offsets, partition = 17, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = __consumer_offsets, partition = 30, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 26, leader = 0, replicas = [0,2,1,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 5, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = __consumer_offsets, partition = 38, leader = 0, replicas = [0,2,1,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 1, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = __consumer_offsets, partition = 34, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = _schemas, partition = 0, leader = 0, replicas = [0,1,2,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 16, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = __consumer_offsets, partition = 45, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = __consumer_offsets, partition = 12, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 41, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = __consumer_offsets, partition = 24, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 20, leader = 0, replicas = [0,2,1,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 49, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = __consumer_offsets, partition = 0, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 29, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = __consumer_offsets, partition = 25, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = __consumer_offsets, partition = 8, leader = 0, replicas = [0,2,1,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 37, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = __consumer_offsets, partition = 4, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = test_oracle_jdbc_USERYM, partition = 0, leader = 0, replicas = [0,], isr = [0,], Partition(topic = __consumer_offsets, partition = 33, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = connect-configs, partition = 0, leader = 1, replicas = [1,], isr = [1,], Partition(topic = __consumer_offsets, partition = 15, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = __consumer_offsets, partition = 48, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 11, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], 
Partition(topic = __consumer_offsets, partition = 44, leader = 0, replicas = [0,2,1,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 23, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = __consumer_offsets, partition = 19, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = __consumer_offsets, partition = 32, leader = 0, replicas = [0,2,1,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 28, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = __consumer_offsets, partition = 7, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = message-topic-test1, partition = 0, leader = 1, replicas = [1,], isr = [1,], Partition(topic = __consumer_offsets, partition = 40, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = __consumer_offsets, partition = 3, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = __consumer_offsets, partition = 36, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 47, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = __consumer_offsets, partition = 14, leader = 0, replicas = [0,2,1,], isr = [0,2,1,], Partition(topic = __consumer_offsets, partition = 43, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = test, partition = 0, leader = 0, replicas = [0,], isr = [0,], Partition(topic = __consumer_offsets, partition = 10, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = __consumer_offsets, partition = 22, leader = 2, replicas = [2,0,1,], isr = [2,0,1,], Partition(topic = __consumer_offsets, partition = 18, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 31, leader = 2, replicas = [2,1,0,], isr = [2,1,0,], Partition(topic = __consumer_offsets, partition = 27, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = __consumer_offsets, partition = 39, leader = 1, replicas = [1,2,0,], isr = [1,2,0,], Partition(topic = __consumer_offsets, partition = 6, leader = 1, replicas = [1,0,2,], isr = [1,0,2,], Partition(topic = __consumer_offsets, partition = 35, leader = 0, replicas = [0,1,2,], isr = [0,1,2,], Partition(topic = __consumer_offsets, partition = 2, leader = 0, replicas = [0,2,1,], isr = [0,2,1,]]) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,909] DEBUG Subscribed to partition(s): connect-offsets-0 (org.apache.kafka.clients.consumer.KafkaConsumer:809)
[2016-04-12 23:22:14,910] TRACE Reading to end of offset log (org.apache.kafka.connect.util.KafkaBasedLog:244)
[2016-04-12 23:22:14,911] DEBUG Issuing group metadata request to broker 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:465)
[2016-04-12 23:22:14,912] DEBUG Initiating connection to node 1 at sd-6286-1278:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:14,912] DEBUG Initialize connection to node 0 for sending metadata request (org.apache.kafka.clients.NetworkClient:623)
[2016-04-12 23:22:14,912] DEBUG Initiating connection to node 0 at sd-4261-f1e1:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:14,913] DEBUG Added sensor with name node-1.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,913] DEBUG Added sensor with name node-1.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,913] DEBUG Added sensor with name node-1.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,914] DEBUG Added sensor with name node-0.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,914] DEBUG Added sensor with name node-0.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,914] DEBUG Added sensor with name node-0.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,914] DEBUG Completed connection to node 1 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:14,914] DEBUG Completed connection to node 0 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:14,915] DEBUG Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=4,client_id=consumer-1}, body={topics=[connect-offsets]}), isInitiatedByNetworkClient, createdTimeMs=1460517734915, sendTimeMs=0) to node 0 (org.apache.kafka.clients.NetworkClient:619)
[2016-04-12 23:22:14,916] DEBUG Updated cluster metadata version 3 to Cluster(nodes = [Node(0, sd-4261-f1e1, 9092), Node(1, sd-6286-1278, 9092), Node(2, sd-023d-317b, 9092)], partitions = [Partition(topic = connect-offsets, partition = 0, leader = 1, replicas = [1,], isr = [1,]]) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,916] DEBUG Issuing group metadata request to broker 0 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:465)
[2016-04-12 23:22:14,917] DEBUG Group metadata response ClientResponse(receivedTimeMs=1460517734916, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@8e50104, request=RequestSend(header={api_key=10,api_version=0,correlation_id=5,client_id=consumer-1}, body={group_id=connect-cluster}), createdTimeMs=1460517734916, sendTimeMs=1460517734916), responseBody={error_code=0,coordinator={node_id=2,host=sd-023d-317b,port=9092}}) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:478)
[2016-04-12 23:22:14,917] DEBUG Initiating connection to node 2147483645 at sd-023d-317b:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:14,917] DEBUG Fetching committed offsets for partitions: [connect-offsets-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:581)
[2016-04-12 23:22:14,919] DEBUG Added sensor with name node-2147483645.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,919] DEBUG Added sensor with name node-2147483645.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,919] DEBUG Added sensor with name node-2147483645.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,920] DEBUG Completed connection to node 2147483645 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:14,921] DEBUG No committed offset for partition connect-offsets-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:628)
[2016-04-12 23:22:14,922] DEBUG Resetting offset for partition connect-offsets-0 to earliest offset. (org.apache.kafka.clients.consumer.internals.Fetcher:290)
[2016-04-12 23:22:14,924] DEBUG Fetched offset 0 for partition connect-offsets-0 (org.apache.kafka.clients.consumer.internals.Fetcher:483)
[2016-04-12 23:22:14,925] DEBUG Seeking to end of partition connect-offsets-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1078)
[2016-04-12 23:22:14,925] DEBUG Resetting offset for partition connect-offsets-0 to latest offset. (org.apache.kafka.clients.consumer.internals.Fetcher:290)
[2016-04-12 23:22:14,925] DEBUG Fetched offset 0 for partition connect-offsets-0 (org.apache.kafka.clients.consumer.internals.Fetcher:483)
[2016-04-12 23:22:14,926] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:14,930] TRACE Reading to end of log for connect-offsets-0: starting offset 0 to ending offset 0 (org.apache.kafka.connect.util.KafkaBasedLog:269)
[2016-04-12 23:22:14,931] INFO Finished reading KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:143)
[2016-04-12 23:22:14,931] INFO Started KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:145)
[2016-04-12 23:22:14,931] INFO Finished reading offsets topic and starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:86)
[2016-04-12 23:22:14,932] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
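Note what just happened: the worker replayed the connect-offsets topic before declaring itself started, and the read ran from starting offset 0 to ending offset 0, i.e. the offset log is empty, which is normal for a fresh Connect cluster. If you ever need to inspect what Connect has persisted there, a console consumer works; a sketch assuming ZooKeeper on localhost:2181 (adjust to your environment):

./bin/kafka-console-consumer --zookeeper localhost:2181 --topic connect-offsets --from-beginning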
[2016-04-12 23:22:14,932] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-04-12 23:22:14,932] INFO Herder starting (org.apache.kafka.connect.runtime.distributed.DistributedHerder:152)
[2016-04-12 23:22:14,932] INFO Starting KafkaConfigStorage (org.apache.kafka.connect.storage.KafkaConfigStorage:236)
[2016-04-12 23:22:14,932] INFO Starting KafkaBasedLog with topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:114)
[2016-04-12 23:22:14,933] INFO ProducerConfig values: 
compression.type = none
metric.reporters = []
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
 (org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-04-12 23:22:14,933] TRACE Starting the Kafka producer (org.apache.kafka.clients.producer.KafkaProducer:201)
[2016-04-12 23:22:14,933] DEBUG Added sensor with name bufferpool-wait-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,934] DEBUG Added sensor with name buffer-exhausted-records (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,934] DEBUG Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = []) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,934] DEBUG Added sensor with name connections-closed:client-id-producer-3 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,934] DEBUG Added sensor with name connections-created:client-id-producer-3 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,934] DEBUG Added sensor with name bytes-sent-received:client-id-producer-3 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,935] DEBUG Added sensor with name bytes-sent:client-id-producer-3 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,935] DEBUG Added sensor with name bytes-received:client-id-producer-3 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,935] DEBUG Added sensor with name select-time:client-id-producer-3 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,935] DEBUG Added sensor with name io-time:client-id-producer-3 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,936] DEBUG Added sensor with name batch-size (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,936] DEBUG Added sensor with name compression-rate (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,936] DEBUG Added sensor with name queue-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,936] DEBUG Added sensor with name request-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,936] DEBUG Added sensor with name produce-throttle-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,937] DEBUG Added sensor with name records-per-request (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,937] DEBUG Added sensor with name record-retries (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,937] DEBUG Added sensor with name errors (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,937] DEBUG Added sensor with name record-size-max (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,937] DEBUG Starting Kafka producer I/O thread. (org.apache.kafka.clients.producer.internals.Sender:123)
[2016-04-12 23:22:14,938] WARN The configuration config.storage.topic = connect-configs was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration group.id = connect-cluster was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration internal.key.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration value.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration internal.key.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration internal.value.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration internal.value.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration offset.storage.topic = connect-offsets was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration value.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration key.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] WARN The configuration key.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:173)
[2016-04-12 23:22:14,938] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-04-12 23:22:14,938] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-04-12 23:22:14,939] DEBUG Kafka producer started (org.apache.kafka.clients.producer.KafkaProducer:315)
[2016-04-12 23:22:14,940] INFO ConsumerConfig values: 
metric.reporters = []
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = connect-cluster
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:9092]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
metrics.num.samples = 2
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
check.crcs = true
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
fetch.min.bytes = 1
send.buffer.bytes = 131072
auto.offset.reset = earliest
 (org.apache.kafka.clients.consumer.ConsumerConfig:165)
[2016-04-12 23:22:14,940] DEBUG Starting the Kafka consumer (org.apache.kafka.clients.consumer.KafkaConsumer:552)
[2016-04-12 23:22:14,940] DEBUG Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = []) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:14,940] DEBUG Added sensor with name connections-closed:client-id-consumer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,940] DEBUG Added sensor with name connections-created:client-id-consumer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,940] DEBUG Added sensor with name bytes-sent-received:client-id-consumer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,941] DEBUG Added sensor with name bytes-sent:client-id-consumer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,941] DEBUG Added sensor with name bytes-received:client-id-consumer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,941] DEBUG Added sensor with name select-time:client-id-consumer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,941] DEBUG Added sensor with name io-time:client-id-consumer-2 (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,942] DEBUG Added sensor with name heartbeat-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,942] DEBUG Added sensor with name join-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,942] DEBUG Added sensor with name sync-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,942] DEBUG Added sensor with name commit-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,943] DEBUG Added sensor with name bytes-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,943] DEBUG Added sensor with name records-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,943] DEBUG Added sensor with name fetch-latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,943] DEBUG Added sensor with name records-lag (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,943] DEBUG Added sensor with name fetch-throttle-time (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,943] WARN The configuration config.storage.topic = connect-configs was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration internal.key.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration value.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration internal.key.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration internal.value.converter.schemas.enable = false was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration internal.value.converter = org.apache.kafka.connect.json.JsonConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration offset.storage.topic = connect-offsets was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration value.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration key.converter = io.confluent.connect.avro.AvroConverter was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] WARN The configuration key.converter.schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:173)
[2016-04-12 23:22:14,944] INFO Kafka version : 0.9.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-04-12 23:22:14,944] INFO Kafka commitId : 7113452b3e7d5638 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-04-12 23:22:14,944] DEBUG Kafka consumer created (org.apache.kafka.clients.consumer.KafkaConsumer:642)
[2016-04-12 23:22:14,944] DEBUG Initiating connection to node -1 at localhost:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:14,945] DEBUG Added sensor with name node--1.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,945] DEBUG Added sensor with name node--1.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,945] DEBUG Added sensor with name node--1.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:14,946] DEBUG Completed connection to node -1 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,024] DEBUG org.eclipse.jetty.server.session.SessionHandler@dbd8e44 added {org.eclipse.jetty.server.session.HashSessionManager@51acdf2e,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,024] DEBUG o.e.j.s.ServletContextHandler@6a55299e{/,null,null} added {org.eclipse.jetty.server.session.SessionHandler@dbd8e44,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,028] DEBUG org.eclipse.jetty.server.session.SessionHandler@dbd8e44 added {org.eclipse.jetty.servlet.ServletHandler@4c51cf28,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,029] DEBUG org.eclipse.jetty.servlet.ServletHandler@4c51cf28 added {org.glassfish.jersey.servlet.ServletContainer-29a0cdb@2e04b19d==org.glassfish.jersey.servlet.ServletContainer,-1,true,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,029] DEBUG org.eclipse.jetty.servlet.ServletHandler@4c51cf28 added {[/*]=>org.glassfish.jersey.servlet.ServletContainer-29a0cdb,POJO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,032] DEBUG org.eclipse.jetty.server.handler.RequestLogHandler@525575 added {org.eclipse.jetty.server.Slf4jRequestLog@46dffdc3,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,042] DEBUG org.eclipse.jetty.server.handler.HandlerCollection@e19bb76[o.e.j.s.ServletContextHandler@6a55299e{/,null,null}, org.eclipse.jetty.server.handler.DefaultHandler@512535ff, org.eclipse.jetty.server.handler.RequestLogHandler@525575] added {o.e.j.s.ServletContextHandler@6a55299e{/,null,null},AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,042] DEBUG org.eclipse.jetty.server.handler.HandlerCollection@e19bb76[o.e.j.s.ServletContextHandler@6a55299e{/,null,null}, org.eclipse.jetty.server.handler.DefaultHandler@512535ff, org.eclipse.jetty.server.handler.RequestLogHandler@525575] added {org.eclipse.jetty.server.handler.DefaultHandler@512535ff,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,042] DEBUG org.eclipse.jetty.server.handler.HandlerCollection@e19bb76[o.e.j.s.ServletContextHandler@6a55299e{/,null,null}, org.eclipse.jetty.server.handler.DefaultHandler@512535ff, org.eclipse.jetty.server.handler.RequestLogHandler@525575] added {org.eclipse.jetty.server.handler.RequestLogHandler@525575,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,044] DEBUG org.eclipse.jetty.server.handler.StatisticsHandler@3f270e0a added {org.eclipse.jetty.server.handler.HandlerCollection@e19bb76[o.e.j.s.ServletContextHandler@6a55299e{/,null,null}, org.eclipse.jetty.server.handler.DefaultHandler@512535ff, org.eclipse.jetty.server.handler.RequestLogHandler@525575],AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,044] DEBUG org.eclipse.jetty.server.Server@561b6512 added {org.eclipse.jetty.server.handler.StatisticsHandler@3f270e0a,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,044] DEBUG starting org.eclipse.jetty.server.Server@561b6512 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,046] DEBUG Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=2,client_id=consumer-2}, body={topics=[]}), isInitiatedByNetworkClient, createdTimeMs=1460517735046, sendTimeMs=0) to node -1 (org.apache.kafka.clients.NetworkClient:619)
[2016-04-12 23:22:15,048] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
[2016-04-12 23:22:15,051] DEBUG Updated cluster metadata version 2 to Cluster(nodes = [Node(0, sd-4261-f1e1, 9092), Node(2, sd-023d-317b, 9092), Node(1, sd-6286-1278, 9092)], partitions = [ ...same partition metadata as the version-2 dump above... ]) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:15,060] DEBUG starting org.eclipse.jetty.server.Server@561b6512 (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,060] DEBUG starting qtp1886491834{STOPPED,8<=0<=200,i=0,q=0} (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,062] DEBUG STARTED @1694ms qtp1886491834{STARTED,8<=8<=200,i=7,q=0} (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,062] DEBUG starting org.eclipse.jetty.server.handler.StatisticsHandler@3f270e0a (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,062] DEBUG starting org.eclipse.jetty.server.handler.StatisticsHandler@3f270e0a (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,062] DEBUG starting org.eclipse.jetty.server.handler.HandlerCollection@e19bb76[o.e.j.s.ServletContextHandler@6a55299e{/,null,null}, org.eclipse.jetty.server.handler.DefaultHandler@512535ff, org.eclipse.jetty.server.handler.RequestLogHandler@525575] (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,062] DEBUG starting org.eclipse.jetty.server.handler.HandlerCollection@e19bb76[o.e.j.s.ServletContextHandler@6a55299e{/,null,null}, org.eclipse.jetty.server.handler.DefaultHandler@512535ff, org.eclipse.jetty.server.handler.RequestLogHandler@525575] (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,062] DEBUG starting o.e.j.s.ServletContextHandler@6a55299e{/,null,null} (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,070] DEBUG starting o.e.j.s.ServletContextHandler@6a55299e{/,null,STARTING} (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,070] DEBUG starting org.eclipse.jetty.server.session.SessionHandler@dbd8e44 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,070] DEBUG starting org.eclipse.jetty.server.session.SessionHandler@dbd8e44 (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,070] DEBUG starting org.eclipse.jetty.server.session.HashSessionManager@51acdf2e (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,070] DEBUG org.eclipse.jetty.server.session.HashSessionManager@51acdf2e added {org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@146587a2,MANAGED} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,072] DEBUG starting org.eclipse.jetty.server.session.HashSessionIdManager@16c63f5 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,072] DEBUG STARTED @1705ms org.eclipse.jetty.server.session.HashSessionIdManager@16c63f5 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,072] DEBUG org.eclipse.jetty.server.Server@561b6512 added {org.eclipse.jetty.server.session.HashSessionIdManager@16c63f5,MANAGED} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,072] DEBUG org.eclipse.jetty.server.session.HashSessionManager@51acdf2e added {org.eclipse.jetty.server.session.HashSessionIdManager@16c63f5,UNMANAGED} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,073] DEBUG starting org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@146587a2 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,073] DEBUG STARTED @1706ms org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@146587a2 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,075] DEBUG STARTED @1707ms org.eclipse.jetty.server.session.HashSessionManager@51acdf2e (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,075] DEBUG starting org.eclipse.jetty.servlet.ServletHandler@4c51cf28 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,076] DEBUG Chose path=/* mapped to servlet=org.glassfish.jersey.servlet.ServletContainer-29a0cdb from default=false (org.eclipse.jetty.servlet.ServletHandler:1495)
[2016-04-12 23:22:15,077] DEBUG filterNameMap={} (org.eclipse.jetty.servlet.ServletHandler:1516)
[2016-04-12 23:22:15,077] DEBUG pathFilters=null (org.eclipse.jetty.servlet.ServletHandler:1517)
[2016-04-12 23:22:15,077] DEBUG servletFilterMap=null (org.eclipse.jetty.servlet.ServletHandler:1518)
[2016-04-12 23:22:15,077] DEBUG servletPathMap={/*=org.glassfish.jersey.servlet.ServletContainer-29a0cdb@2e04b19d==org.glassfish.jersey.servlet.ServletContainer,-1,true} (org.eclipse.jetty.servlet.ServletHandler:1519)
[2016-04-12 23:22:15,077] DEBUG servletNameMap={org.glassfish.jersey.servlet.ServletContainer-29a0cdb=org.glassfish.jersey.servlet.ServletContainer-29a0cdb@2e04b19d==org.glassfish.jersey.servlet.ServletContainer,-1,true} (org.eclipse.jetty.servlet.ServletHandler:1520)
[2016-04-12 23:22:15,077] DEBUG Adding Default404Servlet to org.eclipse.jetty.servlet.ServletHandler@4c51cf28 (org.eclipse.jetty.servlet.ServletHandler:165)
[2016-04-12 23:22:15,078] DEBUG org.eclipse.jetty.servlet.ServletHandler@4c51cf28 added {org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61@ebef5243==org.eclipse.jetty.servlet.ServletHandler$Default404Servlet,-1,false,AUTO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,078] DEBUG org.eclipse.jetty.servlet.ServletHandler@4c51cf28 added {[/]=>org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61,POJO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,078] DEBUG Chose path=/* mapped to servlet=org.glassfish.jersey.servlet.ServletContainer-29a0cdb from default=false (org.eclipse.jetty.servlet.ServletHandler:1495)
[2016-04-12 23:22:15,078] DEBUG Chose path=/ mapped to servlet=org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61 from default=false (org.eclipse.jetty.servlet.ServletHandler:1495)
[2016-04-12 23:22:15,078] DEBUG filterNameMap={} (org.eclipse.jetty.servlet.ServletHandler:1516)
[2016-04-12 23:22:15,079] DEBUG pathFilters=null (org.eclipse.jetty.servlet.ServletHandler:1517)
[2016-04-12 23:22:15,079] DEBUG servletFilterMap=null (org.eclipse.jetty.servlet.ServletHandler:1518)
[2016-04-12 23:22:15,079] DEBUG servletPathMap={/*=org.glassfish.jersey.servlet.ServletContainer-29a0cdb@2e04b19d==org.glassfish.jersey.servlet.ServletContainer,-1,true, /=org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61@ebef5243==org.eclipse.jetty.servlet.ServletHandler$Default404Servlet,-1,false} (org.eclipse.jetty.servlet.ServletHandler:1519)
[2016-04-12 23:22:15,079] DEBUG servletNameMap={org.glassfish.jersey.servlet.ServletContainer-29a0cdb=org.glassfish.jersey.servlet.ServletContainer-29a0cdb@2e04b19d==org.glassfish.jersey.servlet.ServletContainer,-1,true, org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61=org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61@ebef5243==org.eclipse.jetty.servlet.ServletHandler$Default404Servlet,-1,false} (org.eclipse.jetty.servlet.ServletHandler:1520)
[2016-04-12 23:22:15,079] DEBUG starting org.eclipse.jetty.servlet.ServletHandler@4c51cf28 (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,079] DEBUG STARTED @1712ms org.eclipse.jetty.servlet.ServletHandler@4c51cf28 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,080] DEBUG STARTED @1712ms org.eclipse.jetty.server.session.SessionHandler@dbd8e44 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,080] DEBUG starting org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61@ebef5243==org.eclipse.jetty.servlet.ServletHandler$Default404Servlet,-1,false (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,081] DEBUG STARTED @1714ms org.eclipse.jetty.servlet.ServletHandler$Default404Servlet-2826f61@ebef5243==org.eclipse.jetty.servlet.ServletHandler$Default404Servlet,-1,false (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,081] DEBUG starting org.glassfish.jersey.servlet.ServletContainer-29a0cdb@2e04b19d==org.glassfish.jersey.servlet.ServletContainer,-1,true (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,082] DEBUG STARTED @1714ms org.glassfish.jersey.servlet.ServletContainer-29a0cdb@2e04b19d==org.glassfish.jersey.servlet.ServletContainer,-1,true (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,083] DEBUG Servlet.init org.glassfish.jersey.servlet.ServletContainer@4ce7fffa for org.glassfish.jersey.servlet.ServletContainer-29a0cdb (org.eclipse.jetty.servlet.ServletHolder:611)
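Once Jetty finishes initializing the Jersey servlet, the REST API is the way to talk to the worker. A quick sanity check, assuming the default listener on port 8083 and a local worker:

curl http://localhost:8083/connectors

An empty array, [], in the response means the worker is up but has no connectors registered.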
[2016-04-12 23:22:15,158] DEBUG Subscribed to partition(s): connect-configs-0 (org.apache.kafka.clients.consumer.KafkaConsumer:809)
[2016-04-12 23:22:15,158] TRACE Reading to end of offset log (org.apache.kafka.connect.util.KafkaBasedLog:244)
[2016-04-12 23:22:15,159] DEBUG Issuing group metadata request to broker 0 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:465)
[2016-04-12 23:22:15,159] DEBUG Initiating connection to node 0 at sd-4261-f1e1:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:15,159] DEBUG Initialize connection to node 2 for sending metadata request (org.apache.kafka.clients.NetworkClient:623)
[2016-04-12 23:22:15,159] DEBUG Initiating connection to node 2 at sd-023d-317b:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:15,159] DEBUG Added sensor with name node-0.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,160] DEBUG Added sensor with name node-0.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,160] DEBUG Added sensor with name node-0.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,160] DEBUG Completed connection to node 0 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,160] DEBUG Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=4,client_id=consumer-2}, body={topics=[connect-configs]}), isInitiatedByNetworkClient, createdTimeMs=1460517735160, sendTimeMs=0) to node 0 (org.apache.kafka.clients.NetworkClient:619)
[2016-04-12 23:22:15,161] DEBUG Added sensor with name node-2.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,161] DEBUG Added sensor with name node-2.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,161] DEBUG Added sensor with name node-2.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,161] DEBUG Completed connection to node 2 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,162] DEBUG Updated cluster metadata version 3 to Cluster(nodes = [Node(0, sd-4261-f1e1, 9092), Node(1, sd-6286-1278, 9092), Node(2, sd-023d-317b, 9092)], partitions = [Partition(topic = connect-configs, partition = 0, leader = 1, replicas = [1,], isr = [1,]]) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:15,162] DEBUG Issuing group metadata request to broker 2 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:465)
[2016-04-12 23:22:15,164] DEBUG Group metadata response ClientResponse(receivedTimeMs=1460517735164, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@27b6526e, request=RequestSend(header={api_key=10,api_version=0,correlation_id=5,client_id=consumer-2}, body={group_id=connect-cluster}), createdTimeMs=1460517735162, sendTimeMs=1460517735162), responseBody={error_code=0,coordinator={node_id=2,host=sd-023d-317b,port=9092}}) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:478)
[2016-04-12 23:22:15,164] DEBUG Initiating connection to node 2147483645 at sd-023d-317b:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:15,165] DEBUG Fetching committed offsets for partitions: [connect-configs-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:581)
[2016-04-12 23:22:15,165] DEBUG Added sensor with name node-2147483645.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,165] DEBUG Added sensor with name node-2147483645.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,165] DEBUG Added sensor with name node-2147483645.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,166] DEBUG Completed connection to node 2147483645 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,166] DEBUG No committed offset for partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:628)
[2016-04-12 23:22:15,167] DEBUG Resetting offset for partition connect-configs-0 to earliest offset. (org.apache.kafka.clients.consumer.internals.Fetcher:290)
[2016-04-12 23:22:15,167] DEBUG Initiating connection to node 1 at sd-6286-1278:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:15,167] DEBUG Added sensor with name node-1.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,168] DEBUG Added sensor with name node-1.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,168] DEBUG Added sensor with name node-1.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,168] DEBUG Completed connection to node 1 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,169] DEBUG Fetched offset 0 for partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.Fetcher:483)
[2016-04-12 23:22:15,169] DEBUG Seeking to end of partition connect-configs-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1078)
[2016-04-12 23:22:15,169] DEBUG Resetting offset for partition connect-configs-0 to latest offset. (org.apache.kafka.clients.consumer.internals.Fetcher:290)
[2016-04-12 23:22:15,170] DEBUG Fetched offset 0 for partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.Fetcher:483)
[2016-04-12 23:22:15,170] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:15,170] TRACE Reading to end of log for connect-configs-0: starting offset 0 to ending offset 0 (org.apache.kafka.connect.util.KafkaBasedLog:269)
[2016-04-12 23:22:15,171] INFO Finished reading KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:143)
[2016-04-12 23:22:15,171] INFO Started KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:145)
[2016-04-12 23:22:15,171] INFO Started KafkaConfigStorage (org.apache.kafka.connect.storage.KafkaConfigStorage:242)
[2016-04-12 23:22:15,171] INFO Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:156)
[2016-04-12 23:22:15,171] DEBUG Issuing group metadata request to broker -1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:465)
[2016-04-12 23:22:15,171] DEBUG Initiating connection to node -1 at localhost:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:15,171] DEBUG Added sensor with name node--1.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,172] DEBUG Added sensor with name node--1.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,172] DEBUG Added sensor with name node--1.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,172] DEBUG Completed connection to node -1 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,272] DEBUG Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=1,client_id=connect-1}, body={topics=[]}), isInitiatedByNetworkClient, createdTimeMs=1460517735272, sendTimeMs=0) to node -1 (org.apache.kafka.clients.NetworkClient:619)
[2016-04-12 23:22:15,275] DEBUG Updated cluster metadata version 2 to Cluster(nodes = [Node(1, sd-6286-1278, 9092), Node(0, sd-4261-f1e1, 9092), Node(2, sd-023d-317b, 9092)], partitions = [ ...same partition metadata as the version-2 dump above... ]) (org.apache.kafka.clients.Metadata:172)
[2016-04-12 23:22:15,275] DEBUG Issuing group metadata request to broker 2 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:465)
[2016-04-12 23:22:15,275] DEBUG Initiating connection to node 2 at sd-023d-317b:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:15,276] DEBUG Added sensor with name node-2.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,276] DEBUG Added sensor with name node-2.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,277] DEBUG Added sensor with name node-2.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,277] DEBUG Completed connection to node 2 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,279] DEBUG Group metadata response ClientResponse(receivedTimeMs=1460517735279, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@7ca97b34, request=RequestSend(header={api_key=10,api_version=0,correlation_id=2,client_id=connect-1}, body={group_id=connect-cluster}), createdTimeMs=1460517735275, sendTimeMs=1460517735277), responseBody={error_code=0,coordinator={node_id=2,host=sd-023d-317b,port=9092}}) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:478)
[2016-04-12 23:22:15,279] DEBUG Initiating connection to node 2147483645 at sd-023d-317b:9092. (org.apache.kafka.clients.NetworkClient:487)
[2016-04-12 23:22:15,280] DEBUG Revoking previous assignment null (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:234)
[2016-04-12 23:22:15,280] DEBUG (Re-)joining group connect-cluster (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:309)
[2016-04-12 23:22:15,282] DEBUG Issuing request (JOIN_GROUP: {group_id=connect-cluster,session_timeout=30000,member_id=,protocol_type=connect,group_protocols=[{protocol_name=default,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=40 cap=40]}]}) to coordinator 2147483645 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:318)
[2016-04-12 23:22:15,283] DEBUG Added sensor with name node-2147483645.bytes-sent (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,284] DEBUG Added sensor with name node-2147483645.bytes-received (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,284] DEBUG Added sensor with name node-2147483645.latency (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,284] DEBUG Completed connection to node 2147483645 (org.apache.kafka.clients.NetworkClient:467)
[2016-04-12 23:22:15,286] DEBUG Joined group: {error_code=0,generation_id=1,group_protocol=default,leader_id=connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e,member_id=connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e,members=[{member_id=connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=40 cap=40]}]} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:336)
[2016-04-12 23:22:15,286] DEBUG Performing task assignment (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:123)
[2016-04-12 23:22:15,286] DEBUG Max config offset root: -1, local snapshot config offsets root: -1 (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:151)
[2016-04-12 23:22:15,287] DEBUG Assignment: connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e -> Assignment{error=0, leader='connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e', leaderUrl='http://169.172.134.169:8083/', offset=-1, connectorIds=[], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:225)
[2016-04-12 23:22:15,287] DEBUG Finished assignment (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:228)
[2016-04-12 23:22:15,288] DEBUG Issuing leader SyncGroup (SYNC_GROUP: {group_id=connect-cluster,generation_id=1,member_id=connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e,group_assignment=[{member_id=connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=94 cap=94]}]}) to coordinator 2147483645 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:397)
[2016-04-12 23:22:15,293] DEBUG Received successful sync group response for group connect-cluster: {error_code=0,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=94 cap=94]} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:423)
[2016-04-12 23:22:15,294] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-1b945979-ff36-4b4e-85ff-bbc00a9f462e', leaderUrl='http://169.172.134.169:8083/', offset=-1, connectorIds=[], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:868)
[2016-04-12 23:22:15,294] INFO Starting connectors and tasks using config offset -1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:639)
[2016-04-12 23:22:15,295] INFO Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:659)
[2016-04-12 23:22:15,436] DEBUG Added sensor with name topic.connect-offsets.bytes-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,436] DEBUG Added sensor with name topic.connect-offsets.records-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,436] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:15,563] INFO Started o.e.j.s.ServletContextHandler@6a55299e{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-04-12 23:22:15,563] DEBUG STARTED @2196ms o.e.j.s.ServletContextHandler@6a55299e{/,null,AVAILABLE} (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,563] DEBUG starting org.eclipse.jetty.server.handler.DefaultHandler@512535ff (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,564] DEBUG starting org.eclipse.jetty.server.handler.DefaultHandler@512535ff (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,564] DEBUG STARTED @2196ms org.eclipse.jetty.server.handler.DefaultHandler@512535ff (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,564] DEBUG starting org.eclipse.jetty.server.handler.RequestLogHandler@525575 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,564] DEBUG starting org.eclipse.jetty.server.handler.RequestLogHandler@525575 (org.eclipse.jetty.server.handler.AbstractHandler:58)
[2016-04-12 23:22:15,564] DEBUG starting org.eclipse.jetty.server.Slf4jRequestLog@46dffdc3 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,565] DEBUG STARTED @2198ms org.eclipse.jetty.server.Slf4jRequestLog@46dffdc3 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,565] DEBUG STARTED @2198ms org.eclipse.jetty.server.handler.RequestLogHandler@525575 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,565] DEBUG STARTED @2198ms org.eclipse.jetty.server.handler.HandlerCollection@e19bb76[o.e.j.s.ServletContextHandler@6a55299e{/,null,AVAILABLE}, org.eclipse.jetty.server.handler.DefaultHandler@512535ff, org.eclipse.jetty.server.handler.RequestLogHandler@525575] (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,565] DEBUG STARTED @2198ms org.eclipse.jetty.server.handler.StatisticsHandler@3f270e0a (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,565] DEBUG starting ServerConnector@fd07cbb{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,567] DEBUG ServerConnector@fd07cbb{HTTP/1.1}{0.0.0.0:8083} added {sun.nio.ch.ServerSocketChannelImpl[/0.0.0.0:8083],POJO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,567] DEBUG starting org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@fa36558 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,568] DEBUG STARTED @2200ms org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@fa36558 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,568] DEBUG starting HttpConnectionFactory@68567e20{HTTP/1.1} (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,568] DEBUG STARTED @2201ms HttpConnectionFactory@68567e20{HTTP/1.1} (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,568] DEBUG starting org.eclipse.jetty.server.ServerConnector$ServerConnectorManager@7748410a (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,572] DEBUG starting org.eclipse.jetty.io.SelectorManager$ManagedSelector@42deb43a keys=-1 selected=-1 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,572] DEBUG STARTED @2205ms org.eclipse.jetty.io.SelectorManager$ManagedSelector@42deb43a keys=0 selected=0 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,573] DEBUG starting org.eclipse.jetty.io.SelectorManager$ManagedSelector@1cefc4b3 keys=-1 selected=-1 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,573] DEBUG STARTED @2206ms org.eclipse.jetty.io.SelectorManager$ManagedSelector@1cefc4b3 keys=0 selected=0 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,573] DEBUG starting org.eclipse.jetty.io.SelectorManager$ManagedSelector@2b27cc70 keys=-1 selected=-1 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,573] DEBUG STARTED @2206ms org.eclipse.jetty.io.SelectorManager$ManagedSelector@2b27cc70 keys=0 selected=0 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,574] DEBUG Starting Thread[qtp1886491834-21-selector-ServerConnectorManager@7748410a/0,5,main] on org.eclipse.jetty.io.SelectorManager$ManagedSelector@42deb43a keys=0 selected=0 (org.eclipse.jetty.io.SelectorManager:547)
[2016-04-12 23:22:15,574] DEBUG starting org.eclipse.jetty.io.SelectorManager$ManagedSelector@6f6a7463 keys=-1 selected=-1 (org.eclipse.jetty.util.component.AbstractLifeCycle:185)
[2016-04-12 23:22:15,574] DEBUG Starting Thread[qtp1886491834-23-selector-ServerConnectorManager@7748410a/2,5,main] on org.eclipse.jetty.io.SelectorManager$ManagedSelector@2b27cc70 keys=0 selected=0 (org.eclipse.jetty.io.SelectorManager:547)
[2016-04-12 23:22:15,574] DEBUG STARTED @2206ms org.eclipse.jetty.io.SelectorManager$ManagedSelector@6f6a7463 keys=0 selected=0 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,574] DEBUG Starting Thread[qtp1886491834-22-selector-ServerConnectorManager@7748410a/1,5,main] on org.eclipse.jetty.io.SelectorManager$ManagedSelector@1cefc4b3 keys=0 selected=0 (org.eclipse.jetty.io.SelectorManager:547)
[2016-04-12 23:22:15,574] DEBUG STARTED @2207ms org.eclipse.jetty.server.ServerConnector$ServerConnectorManager@7748410a (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,574] DEBUG Selector loop waiting on select (org.eclipse.jetty.io.SelectorManager:599)
[2016-04-12 23:22:15,574] DEBUG Starting Thread[qtp1886491834-24-selector-ServerConnectorManager@7748410a/3,5,main] on org.eclipse.jetty.io.SelectorManager$ManagedSelector@6f6a7463 keys=0 selected=0 (org.eclipse.jetty.io.SelectorManager:547)
[2016-04-12 23:22:15,574] DEBUG Selector loop waiting on select (org.eclipse.jetty.io.SelectorManager:599)
[2016-04-12 23:22:15,574] DEBUG Selector loop waiting on select (org.eclipse.jetty.io.SelectorManager:599)
[2016-04-12 23:22:15,574] DEBUG Selector loop waiting on select (org.eclipse.jetty.io.SelectorManager:599)
[2016-04-12 23:22:15,575] DEBUG ServerConnector@fd07cbb{HTTP/1.1}{0.0.0.0:8083} added {acceptor-0@6ca320ab,POJO} (org.eclipse.jetty.util.component.ContainerLifeCycle:324)
[2016-04-12 23:22:15,575] INFO Started ServerConnector@fd07cbb{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-04-12 23:22:15,575] DEBUG STARTED @2208ms ServerConnector@fd07cbb{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,575] INFO Started @2208ms (org.eclipse.jetty.server.Server:379)
[2016-04-12 23:22:15,575] DEBUG STARTED @2208ms org.eclipse.jetty.server.Server@561b6512 (org.eclipse.jetty.util.component.AbstractLifeCycle:177)
[2016-04-12 23:22:15,576] INFO REST server listening at http://169.172.134.169:8083/, advertising URL http://169.172.134.169:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-04-12 23:22:15,576] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-04-12 23:22:15,671] DEBUG Added sensor with name topic.connect-configs.bytes-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,671] DEBUG Added sensor with name topic.connect-configs.records-fetched (org.apache.kafka.common.metrics.Metrics:201)
[2016-04-12 23:22:15,671] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:15,937] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:16,173] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:16,439] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:16,675] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:16,939] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:17,176] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:17,441] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:17,677] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:17,943] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:18,178] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:18,300] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:18,444] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:18,680] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:18,946] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:19,182] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:19,447] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:19,683] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:19,949] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:20,185] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:20,450] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:20,687] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:20,952] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:21,188] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:21,300] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:21,454] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:21,689] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:21,955] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:22,191] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:22,457] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:22,693] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:22,957] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:23,194] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:23,458] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:23,696] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:23,960] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:24,198] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:24,303] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:24,462] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:24,699] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:24,964] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:25,200] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:25,465] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:25,702] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:25,967] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:26,203] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:26,468] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:26,705] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:26,969] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:27,206] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:27,306] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:27,471] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:27,708] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:27,972] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:28,209] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:28,473] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:28,710] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:28,973] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:29,212] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:29,475] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:29,716] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:29,976] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:30,218] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:30,309] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:30,478] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:30,719] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:30,980] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:31,221] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:31,481] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:31,722] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:31,983] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:32,225] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:32,484] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:32,726] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:32,985] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:33,228] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:33,312] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:33,487] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:33,729] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:33,988] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:34,230] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:34,490] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:34,732] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:34,991] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:35,233] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:35,493] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:35,735] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:35,995] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:36,236] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:36,315] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:36,497] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:36,738] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:36,998] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:37,239] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:37,499] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:37,740] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:38,001] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:38,242] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:38,502] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:38,743] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:39,003] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:39,244] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:39,318] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:39,505] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:39,746] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:40,006] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:40,248] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:40,508] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:40,750] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:41,010] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:41,252] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:41,511] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:41,754] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:42,016] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:42,255] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:42,321] DEBUG Received successful heartbeat response. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:615)
[2016-04-12 23:22:42,518] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:42,757] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:43,019] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:43,258] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:43,520] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:43,759] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:44,023] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:44,260] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:44,525] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:44,762] TRACE Added fetch request for partition connect-configs-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:45,026] TRACE Added fetch request for partition connect-offsets-0 at offset 0 (org.apache.kafka.clients.consumer.internals.Fetcher:519)
[2016-04-12 23:22:45,074] DEBUG Scavenging sessions at 1460517765074 (org.eclipse.jetty.server.session:347)
... (the DEBUG heartbeat and TRACE fetch-request messages above repeat indefinitely)

The log output for mode=timestamp is the same as for mode=timestamp+incrementing.



On Wednesday, April 13, 2016 at 10:53:49 AM UTC+8, Liquan Pei wrote:
...

Liquan Pei

unread,
Apr 13, 2016, 12:22:45 AM4/13/16
to confluent...@googlegroups.com
Hi Yongjian,

Are you running connect-standalone or connect-distributed?
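
(For reference, the two entry points are launched differently. A sketch, assuming the Confluent Platform layout used elsewhere in this thread; my-connector.properties is a placeholder name:

./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties my-connector.properties
./bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties

Standalone takes one or more connector properties files on the command line; distributed takes only the worker config, and connectors are submitted to it afterwards over REST.)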

Best,
Liquan


Liquan Pei

unread,
Apr 13, 2016, 12:28:57 AM4/13/16
to confluent...@googlegroups.com
Hi Yongjian,

Please make sure that you start Kafka Connect in standalone mode; the log you sent shows that it is running in distributed mode.
When Kafka Connect is running in distributed mode, you need to submit connectors to the cluster via the REST API, as shown below.
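
For example, a JDBC source connector can be submitted to a distributed worker with a POST to its REST endpoint. A minimal sketch; the connector name and config values here are illustrative, and the URL assumes a worker listening on the default port 8083:

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "oracle-connect-test",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "MODIFIED",
    "incrementing.column.name": "ID",
    "table.whitelist": "USERYM",
    "topic.prefix": "test_oracle_jdbc_2_"
  }
}'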

Thanks,
Liquan

Yongjian Meng

unread,
Apr 13, 2016, 2:01:05 AM4/13/16
to Confluent Platform
Hi Liquan,
I am sorry, I made a mistake. I will retry... Thank you!!!

Thanks,
Yongjian.

On Wednesday, April 13, 2016 at 12:28:57 PM UTC+8, Liquan Pei wrote:
...

Yongjian Meng

unread,
Apr 13, 2016, 2:12:49 AM4/13/16
to Confluent Platform
Hi Liquan,

mode=timestamp+incrementing
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/oracle-timestamp_incrementing.properties 
...
[2016-04-13 01:53:18,114] INFO Creating task oracle-connect-test-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-04-13 01:53:18,114] INFO Instantiated task oracle-connect-test-0 with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-04-13 01:53:18,120] INFO Created connector oracle-connect-test (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-04-13 01:53:18,121] INFO JdbcSourceTaskConfig values: 
        mode = timestamp+incrementing
        timestamp.column.name = MODIFIED
        incrementing.column.name = ID
        topic.prefix = test_oracle_jdbc_2_
        tables = [USERYM]
        poll.interval.ms = 5000
        query = 
        batch.max.rows = 100
        connection.url = jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>
        table.blacklist = []
        table.poll.interval.ms = 60000
        table.whitelist = [USERYM]
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-04-13 01:53:18,155] DEBUG Trying to connect to jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid> (io.confluent.connect.jdbc.JdbcSourceTask:114)
[2016-04-13 01:53:18,983] INFO Source task Thread[WorkerSourceTask-oracle-connect-test-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2016-04-13 01:53:18,984] TRACE {} Polling for new data (io.confluent.connect.jdbc.JdbcSourceTask:186)
[2016-04-13 01:53:18,984] TRACE Waiting -1460526793984 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:18,984] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 01:53:18,985] DEBUG TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} prepared SQL query: SELECT * FROM "USERYM" WHERE "MODIFIED" < CURRENT_TIMESTAMP AND (("MODIFIED" = ? AND "ID" > ?) OR "MODIFIED" > ?) ORDER BY "MODIFIED","ID" ASC (io.confluent.connect.jdbc.TimestampIncrementingTableQuerier:143)
[2016-04-13 01:53:19,030] TRACE Closing this query for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:216)
[2016-04-13 01:53:19,030] TRACE No updates for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:225)
[2016-04-13 01:53:19,030] TRACE Waiting 5000 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:24,030] TRACE Waiting 0 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:24,030] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 01:53:24,070] TRACE Closing this query for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:216)
[2016-04-13 01:53:24,070] TRACE No updates for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:225)
[2016-04-13 01:53:24,071] TRACE Waiting 5000 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:29,071] TRACE Waiting -1 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:29,071] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 01:53:29,111] TRACE Closing this query for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:216)
[2016-04-13 01:53:29,111] TRACE No updates for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:225)
[2016-04-13 01:53:29,111] TRACE Waiting 5000 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:34,111] TRACE Waiting 0 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:34,111] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 01:53:34,151] TRACE Closing this query for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:216)
[2016-04-13 01:53:34,151] TRACE No updates for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:225)
[2016-04-13 01:53:34,151] TRACE Waiting 5000 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:39,151] TRACE Waiting 0 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:39,152] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 01:53:39,192] TRACE Closing this query for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:216)
[2016-04-13 01:53:39,192] TRACE No updates for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:225)
[2016-04-13 01:53:39,192] TRACE Waiting 5000 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:43,509] DEBUG Scavenging sessions at 1460526823509 (org.eclipse.jetty.server.session:347)
[2016-04-13 01:53:44,192] TRACE Waiting 0 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 01:53:44,192] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 01:53:44,232] TRACE Closing this query for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:216)
[2016-04-13 01:53:44,232] TRACE No updates for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:225)
[2016-04-13 01:53:44,232] TRACE Waiting 5000 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
...
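
(For reference, a connector properties file matching the values logged above would look roughly like the following sketch; the connection URL placeholders stand in for real credentials:

name=oracle-connect-test
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>
mode=timestamp+incrementing
timestamp.column.name=MODIFIED
incrementing.column.name=ID
table.whitelist=USERYM
topic.prefix=test_oracle_jdbc_2_
poll.interval.ms=5000
)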

mode=incrementing
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/oracle-incrementing.properties 
...
[2016-04-13 02:03:12,585] INFO Creating task oracle-connect-test-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-04-13 02:03:12,586] INFO Instantiated task oracle-connect-test-0 with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-04-13 02:03:12,592] INFO JdbcSourceTaskConfig values: 
        mode = incrementing
        timestamp.column.name = 
        incrementing.column.name = ID
        topic.prefix = test_oracle_jdbc_2_
        tables = [USERYM]
        poll.interval.ms = 5000
        query = 
        batch.max.rows = 100
        connection.url = jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>
        table.blacklist = []
        table.poll.interval.ms = 60000
        table.whitelist = [USERYM]
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-04-13 02:03:12,592] INFO Created connector oracle-connect-test (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-04-13 02:03:12,624] DEBUG Trying to connect to jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid> (io.confluent.connect.jdbc.JdbcSourceTask:114)
[2016-04-13 02:03:13,138] INFO Source task Thread[WorkerSourceTask-oracle-connect-test-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2016-04-13 02:03:13,138] TRACE {} Polling for new data (io.confluent.connect.jdbc.JdbcSourceTask:186)
[2016-04-13 02:03:13,139] TRACE Waiting -1460527388138 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='null', incrementingColumn='ID'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 02:03:13,139] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='null', incrementingColumn='ID'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 02:03:13,139] DEBUG TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='null', incrementingColumn='ID'} prepared SQL query: SELECT * FROM "USERYM" WHERE "ID" > ? ORDER BY "ID" ASC (io.confluent.connect.jdbc.TimestampIncrementingTableQuerier:143)
[2016-04-13 02:03:13,189] ERROR Task oracle-connect-test-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-04-13 02:03:13,189] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
org.apache.kafka.connect.errors.ConnectException: Invalid type for incrementing column: BYTES
        at io.confluent.connect.jdbc.TimestampIncrementingTableQuerier.extractRecord(TimestampIncrementingTableQuerier.java:177)
        at io.confluent.connect.jdbc.JdbcSourceTask.poll(JdbcSourceTask.java:211)
        at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:353)
        at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
[2016-04-13 02:03:13,191] DEBUG WorkerSourceTask{id=oracle-connect-test-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:232)
[2016-04-13 02:03:13,191] DEBUG Finished WorkerSourceTask{id=oracle-connect-test-0} offset commitOffsets successfully in 0 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:260)
[2016-04-13 02:03:30,815] DEBUG Scavenging sessions at 1460527410815 (org.eclipse.jetty.server.session:347)
[2016-04-13 02:04:00,816] DEBUG Scavenging sessions at 1460527440816 (org.eclipse.jetty.server.session:347)
...

mode=timestamp
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/oracle-timestamp.properties 
...
[2016-04-13 02:06:46,013] INFO Creating task oracle-connect-test-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-04-13 02:06:46,014] INFO Instantiated task oracle-connect-test-0 with version 2.0.1 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-04-13 02:06:46,020] INFO JdbcSourceTaskConfig values: 
        mode = timestamp
        timestamp.column.name = MODIFIED
        incrementing.column.name = 
        topic.prefix = test_oracle_jdbc_2_
        tables = [USERYM]
        poll.interval.ms = 5000
        query = 
        batch.max.rows = 100
        connection.url = jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid>
        table.blacklist = []
        table.poll.interval.ms = 60000
        table.whitelist = [USERYM]
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-04-13 02:06:46,020] INFO Created connector oracle-connect-test (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-04-13 02:06:46,051] DEBUG Trying to connect to jdbc:oracle:thin:<username>/<password>@<host>:<port>:<sid> (io.confluent.connect.jdbc.JdbcSourceTask:114)
[2016-04-13 02:06:46,560] INFO Source task Thread[WorkerSourceTask-oracle-connect-test-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2016-04-13 02:06:46,560] TRACE {} Polling for new data (io.confluent.connect.jdbc.JdbcSourceTask:186)
[2016-04-13 02:06:46,560] TRACE Waiting -1460527601560 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='null'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
[2016-04-13 02:06:46,560] TRACE Checking for next block of results from TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='null'} (io.confluent.connect.jdbc.JdbcSourceTask:205)
[2016-04-13 02:06:46,561] DEBUG TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='null'} prepared SQL query: SELECT * FROM "USERYM" WHERE "MODIFIED" > ? AND "MODIFIED" < CURRENT_TIMESTAMP ORDER BY "MODIFIED" ASC (io.confluent.connect.jdbc.TimestampIncrementingTableQuerier:143)
[2016-04-13 02:06:46,609] TRACE Closing this query for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='null'} (io.confluent.connect.jdbc.JdbcSourceTask:216)
[2016-04-13 02:06:46,609] TRACE No updates for TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='null'} (io.confluent.connect.jdbc.JdbcSourceTask:225)
[2016-04-13 02:06:46,609] TRACE Waiting 5000 ms to poll TimestampIncrementingTableQuerier{name='USERYM', query='null', topicPrefix='test_oracle_jdbc_2_', timestampColumn='MODIFIED', incrementingColumn='null'} next (io.confluent.connect.jdbc.JdbcSourceTask:194)
... (the same 5-second poll cycle repeats, finding no updates each time) ...

Thanks,
Yongjian.

Liquan Pei

unread,
Apr 13, 2016, 2:23:13 AM4/13/16
to confluent...@googlegroups.com
Hi Yongjian,

I think the query SELECT * FROM "USERYM" WHERE "MODIFIED" < CURRENT_TIMESTAMP AND (("MODIFIED" = ? AND "ID" > ?) OR "MODIFIED" > ?) ORDER BY "MODIFIED","ID" ASC doesn't return any data. Can you try setting the timestamps manually to values corresponding to some earlier time? 
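(A hedged sketch of one way to do that from SQL*Plus; the USERYM table and MODIFIED column are taken from the logs above, and the 7-day offset is arbitrary:)

-- Hedged illustration: backdate MODIFIED so the timestamp window
-- "MODIFIED" > ? AND "MODIFIED" < CURRENT_TIMESTAMP has rows to return.
UPDATE USERYM SET MODIFIED = SYSTIMESTAMP - INTERVAL '7' DAY;
COMMIT;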

Thanks,
Liquan


Yongjian Meng

unread,
Apr 13, 2016, 3:59:56 AM4/13/16
to Confluent Platform
Hi Liquan,

I changed the timestamps of the records in the table to an earlier time:

when I used mode=timestamp+incrementing and mode=incrementing, the exception appeared!!! => org.apache.kafka.connect.errors.ConnectException: Invalid type for incrementing column: BYTES

when I used mode=timestamp, the topic was created!!! 

BUT:
In table:
ID      MODIFIED                                       USERNAME
1 13-APR-10 10.04.38.223072000 AM aaa
2 13-APR-10 10.04.44.596697000 AM bbb

And ./bin/kafka-avro-console-consumer --topic test_oracle_jdbc_2_USERYM --zookeeper ...  --from-beginning
{"ID":"\u0001","USERNAME":{"string":"aaa"},"MODIFIED":1271153078223}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
{"ID":"\u0002","USERNAME":{"string":"bbb"},"MODIFIED":1271153084596}
...repetition...



Yongjian Meng

unread,
Apr 13, 2016, 4:20:31 AM4/13/16
to Confluent Platform
I think I may have understood this "issue".

I am in China, so I use Oracle SQL Developer to add data into the table; CURRENT_TIMESTAMP in that session is local (CN) time.

BUT my company's hosts are in America, so when the connector runs a query in its session, CURRENT_TIMESTAMP is local (US) time. So I couldn't receive any data from Oracle, because CURRENT_TIMESTAMP in the US is earlier than "MODIFIED" (created using CN time) in the table, i.e. CURRENT_TIMESTAMP < "MODIFIED"... is that right???
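(A hedged way to confirm this: run the standard one-liner below in both sessions, the SQL Developer one and a SQL*Plus session opened from the Connect host, and compare the results:)

-- Standard Oracle; shows the session time zone and both timestamps.
SELECT SESSIONTIMEZONE, CURRENT_TIMESTAMP, SYSTIMESTAMP FROM DUAL;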

hmm... could you tell me how to solve the exception org.apache.kafka.connect.errors.ConnectException: Invalid type for incrementing column: BYTES so I can use mode=incrementing+timestamp?

THANK YOU VERY MUCH, I couldn't have solved it without you.

Best regards,
Yongjian.


Cherupally Bhargav

unread,
May 21, 2016, 1:47:51 PM5/21/16
to Confluent Platform
Hi Yongjian,

Are you now able to build the pipeline for Oracle using Kafka Connect?
I mean the following ETL flow:

Oracle → Kafka → HDFS → Hive

Thanks,
Bhargav

liu Brian

unread,
May 31, 2016, 5:03:34 PM5/31/16
to Confluent Platform
Where can I find documentation on using Kafka Connect with Oracle?

I am getting the error below and have tried everything mentioned in this thread, but nothing seems to help:

 (org.apache.kafka.connect.runtime.SourceConnectorConfig:178)
[2016-05-31 20:34:44,874] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:100)
java.lang.IllegalArgumentException: Number of groups must be positive.

Thanks,


Cherupally Bhargav

unread,
Jun 9, 2016, 2:03:55 AM6/9/16
to Confluent Platform
Hi,

I'm facing the same problem as well: java.lang.IllegalArgumentException: Number of groups must be positive.

Can someone help fix this issue?

Thanks,
Bhargav Cherupally

Cherupally Bhargav

unread,
Jun 9, 2016, 12:30:47 PM6/9/16
to Confluent Platform
Here is the log:

./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlite.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-06-09 12:24:44,509] INFO StandaloneConfig values: 
cluster = connect
rest.advertised.port = null
bootstrap.servers = [localhost:9092]
rest.port = 8083
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
value.converter = class io.confluent.connect.avro.AvroConverter
key.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-06-09 12:24:45,433] INFO Logging initialized @1928ms (org.eclipse.jetty.util.log:186)
[2016-06-09 12:24:45,575] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-06-09 12:24:45,575] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-06-09 12:24:45,611] INFO ProducerConfig values: 
compression.type = none
metric.reporters = []
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 9223372036854775807
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 2147483647
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
 (org.apache.kafka.clients.producer.ProducerConfig:165)
[2016-06-09 12:24:45,761] INFO Kafka version : 0.9.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-06-09 12:24:45,761] INFO Kafka commitId : d1555e3a21980fa9 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-06-09 12:24:45,763] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-06-09 12:24:45,786] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-06-09 12:24:45,786] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-06-09 12:24:45,787] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-06-09 12:24:45,787] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-06-09 12:24:46,245] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
Jun 09, 2016 12:24:48 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2016-06-09 12:24:48,248] INFO Started o.e.j.s.ServletContextHandler@2a2da905{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-06-09 12:24:48,296] INFO Started ServerConnector@379ab47b{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-06-09 12:24:48,298] INFO Started @4794ms (org.eclipse.jetty.server.Server:379)
[2016-06-09 12:24:48,311] INFO REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-06-09 12:24:48,312] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-06-09 12:24:48,325] INFO ConnectorConfig values: 
connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max = 1
topics = []
name = test-oracle-jdbc-autoincrement
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-06-09 12:24:48,325] INFO Creating connector test-oracle-jdbc-autoincrement of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-06-09 12:24:48,339] INFO Instantiated connector test-oracle-jdbc-autoincrement with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-06-09 12:24:48,361] INFO JdbcSourceConnectorConfig values: 
mode = incrementing
topic.prefix = test-oracle-jdbc-
query = 
batch.max.rows = 100
connection.url = jdbc:oracle:thin:user/p...@0.0.0.0:1521/orcl
table.blacklist = []
table.whitelist = [users]
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-06-09 12:24:56,195] INFO Finished creating connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:193)
[2016-06-09 12:24:59,000] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:91)
java.lang.IllegalArgumentException: Number of groups must be positive.
at org.apache.kafka.connect.util.ConnectorUtils.groupPartitions(ConnectorUtils.java:45)
at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:120)
at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:215)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.recomputeTaskConfigs(StandaloneHerder.java:210)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:249)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:146)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:85)
[2016-06-09 12:24:59,003] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:68)
[2016-06-09 12:24:59,029] INFO Stopped ServerConnector@379ab47b{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2016-06-09 12:24:59,062] INFO Stopped o.e.j.s.ServletContextHandler@2a2da905{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
[2016-06-09 12:24:59,116] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:62)
[2016-06-09 12:24:59,117] INFO Stopping connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:226)
[2016-06-09 12:24:59,117] INFO Stopping table monitoring thread (io.confluent.connect.jdbc.JdbcSourceConnector:134)
[2016-06-09 12:24:59,118] INFO Stopped connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:240)
[2016-06-09 12:24:59,119] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:77)
[2016-06-09 12:24:59,119] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:115)
[2016-06-09 12:24:59,123] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:61)
[2016-06-09 12:24:59,123] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:155)
[2016-06-09 12:24:59,123] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:74)

We are currently working on a POC and this issue is blocking us, as we are running out of time. Can you please help fix this issue?

Thanks,
Bhargav Cherupally

Dustin Cote

unread,
Jun 9, 2016, 12:47:49 PM6/9/16
to confluent...@googlegroups.com
Hi Bhargav,

The message you are seeing comes because the number of groups is <= 0.  This value is populated in the code like this:

int numGroups = Math.min(currentTables.size(), maxTasks);

In your case, tasks.max is set to 1, so you must not have anything in currentTables, which suggests there are no tables visible in your Oracle database. Are you sure the 'users' table exists in jdbc:oracle:thin:user/p...@0.0.0.0:1521/orcl? Can you independently verify that?




--
Dustin Cote

Gwen Shapira

unread,
Jun 9, 2016, 12:53:12 PM6/9/16
to confluent...@googlegroups.com
Also, Oracle is notorious for being case sensitive for table names.
Perhaps try upper case :)
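(A hedged check that covers both points: run this as the user the connector connects with. Unquoted identifiers are stored upper-case, so the table.whitelist entry must match exactly what this returns:)

-- Lists the tables visible to the connecting user, in stored case.
SELECT table_name FROM user_tables;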

Cherupally Bhargav

unread,
Jun 9, 2016, 1:56:43 PM6/9/16
to Confluent Platform
Thanks Gwen. After changing the table name to upper case I got another exception. 
Here is the log:
[2016-06-09 13:54:31,303] INFO JdbcSourceConnectorConfig values: 
mode = timestamp+incrementing
topic.prefix = test-oracle-jdbc-
query = 
batch.max.rows = 100
connection.url = jdbc:oracle:thin:system/ora...@0.0.0.0:1521/orcl
table.blacklist = []
table.whitelist = [USERS]
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-06-09 13:54:32,830] INFO Finished creating connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:193)
[2016-06-09 13:54:36,239] INFO TaskConfig values: 
task.class = class io.confluent.connect.jdbc.JdbcSourceTask
 (org.apache.kafka.connect.runtime.TaskConfig:165)
[2016-06-09 13:54:36,239] INFO Creating task test-oracle-jdbc-autoincrement-0 (org.apache.kafka.connect.runtime.Worker:256)
[2016-06-09 13:54:36,240] INFO Instantiated task test-oracle-jdbc-autoincrement-0 with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2016-06-09 13:54:36,331] INFO JdbcSourceTaskConfig values: 
mode = timestamp+incrementing
topic.prefix = test-oracle-jdbc-
tables = [USERS, USERS]
query = 
batch.max.rows = 100
connection.url = jdbc:oracle:thin:system/ora...@0.0.0.0:1521/orcl
table.blacklist = []
table.whitelist = [USERS]
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2016-06-09 13:54:36,336] INFO Created connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2016-06-09 13:54:39,030] INFO Source task Thread[WorkerSourceTask-test-oracle-jdbc-autoincrement-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2016-06-09 13:54:39,071] ERROR Task test-oracle-jdbc-autoincrement-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2016-06-09 13:54:39,071] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
org.apache.kafka.connect.errors.ConnectException: Invalid type for incrementing column: BYTES
at io.confluent.connect.jdbc.TimestampIncrementingTableQuerier.extractRecord(TimestampIncrementingTableQuerier.java:177)
at io.confluent.connect.jdbc.JdbcSourceTask.poll(JdbcSourceTask.java:211)
at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:353)
at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)


Thanks,
Bhargav Cherupally 

Cherupally Bhargav

unread,
Jun 9, 2016, 2:22:06 PM6/9/16
to Confluent Platform
Thanks Gwen and Dustin. Now here are my observations:
1. When tried running with mode=incrementing+timestamp -----> Invalid type for incrementing column: BYTES
2. When tried running with mode=timestamp -----> I see the topics getting created. GREAT !!
3. When tried running with mode=bulk -----> I see the topics getting created.

Thanks a ton for all the help, which got it running in timestamp and bulk mode. If possible, could you please tell me how to solve the exception org.apache.kafka.connect.errors.ConnectException: Invalid type for incrementing column: BYTES so I can use mode=incrementing+timestamp?

Thanks,
Bhargav Cherupally

Dustin Cote

unread,
Jun 9, 2016, 2:42:10 PM6/9/16
to confluent...@googlegroups.com
You'd want to use incrementing mode when you have an auto-incrementing column. I think you'll find this doc useful:





--
Dustin Cote

Cherupally Bhargav

unread,
Jun 9, 2016, 2:53:02 PM6/9/16
to confluent...@googlegroups.com
Thank you, Dustin. I'll try running in incrementing mode and let you know if I run into any issues.

Thanks,
Bhargav Cherupally


Ewen Cheslack-Postava

unread,
Jun 9, 2016, 10:11:26 PM6/9/16
to Confluent Platform
This is a known issue due to the NUMBER type in Oracle: https://github.com/confluentinc/kafka-connect-jdbc/issues/31 The only way to *guarantee* we preserve the value accurately is to use this encoded format written as bytes, but the incrementing-column code doesn't know how to handle this encoding of the variable-precision type.

-Ewen
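(A hedged diagnostic from the database side; user_tab_columns is standard Oracle. A NUMBER column with NULL data_precision is the variable-precision case described above, and even a declared precision above 18 digits can exceed a signed 64-bit value:)

-- Shows how the incrementing column is actually declared.
SELECT column_name, data_type, data_precision, data_scale
  FROM user_tab_columns
 WHERE table_name = 'USERS';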





--
Thanks,
Ewen

Cherupally Bhargav

unread,
Jun 10, 2016, 3:12:32 PM6/10/16
to confluent...@googlegroups.com
Thanks Gwen and Dustin. I've tried it one more time and the table exists locally; I have Confluent and the Oracle DB on the same Linux machine.
Please find the following info:

Connector Info:
name=test-oracle-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:oracle:thin:system/ora...@0.0.0.0:1521/orcl
table.whitelist=users
mode=timestamp+incrementing
topic.prefix=test-oracle-jdbc-

Table Structure:

CREATE TABLE users (
  id number(19) NOT NULL PRIMARY KEY,
  username varchar2(100),
  password varchar2(200),
  modified timestamp(0) default SYSTIMESTAMP NOT NULL
);

CREATE SEQUENCE users_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER users_seq_tr
 BEFORE INSERT ON users FOR EACH ROW
 WHEN (NEW.id IS NULL)
BEGIN
 SELECT users_seq.NEXTVAL INTO :NEW.id FROM DUAL;
END;
/

CREATE INDEX modified_index ON users (modified);

INSERT INTO users (username, password) VALUES ('alice', '123');
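(A hedged aside on the DDL above: id is declared as number(19), and 19 decimal digits can overflow a signed 64-bit long, so the connector reports the column as the variable-precision type that triggers the BYTES error Ewen describes. Assuming the ids stay within 18 digits, a narrower declaration is one workaround; newer connector releases also added a numeric.mapping option for this case. A sketch of the narrowed table, noting that decreasing precision on an already-populated column requires a rebuild:)

-- Hedged workaround sketch: keep the incrementing column within
-- a signed 64-bit range by giving it an explicit, narrower precision.
CREATE TABLE users (
  id       NUMBER(18,0) NOT NULL PRIMARY KEY,
  username VARCHAR2(100),
  password VARCHAR2(200),
  modified TIMESTAMP(0) DEFAULT SYSTIMESTAMP NOT NULL
);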

Kafka Log:

./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-oracle.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/userhome/oracle/confluent-2.0.0/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-06-09 13:17:30,255] INFO StandaloneConfig values: 
cluster = connect
rest.advertised.port = null
bootstrap.servers = [localhost:9092]
rest.port = 8083
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
value.converter = class io.confluent.connect.avro.AvroConverter
key.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2016-06-09 13:17:31,297] INFO Logging initialized @8935ms (org.eclipse.jetty.util.log:186)
[2016-06-09 13:17:31,426] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2016-06-09 13:17:31,429] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2016-06-09 13:17:31,484] INFO ProducerConfig values: 
[2016-06-09 13:17:31,626] INFO Kafka version : 0.9.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2016-06-09 13:17:31,626] INFO Kafka commitId : d1555e3a21980fa9 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-06-09 13:17:31,629] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2016-06-09 13:17:31,677] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2016-06-09 13:17:31,677] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2016-06-09 13:17:31,677] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2016-06-09 13:17:31,677] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2016-06-09 13:17:32,098] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
Jun 09, 2016 1:17:34 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2016-06-09 13:17:34,842] INFO Started o.e.j.s.ServletContextHandler@2a2da905{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-06-09 13:17:34,888] INFO Started ServerConnector@18b315fe{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-06-09 13:17:34,888] INFO Started @12531ms (org.eclipse.jetty.server.Server:379)
[2016-06-09 13:17:34,904] INFO REST server listening at http://127.0.0.1:8083/, advertising URL http://127.0.0.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2016-06-09 13:17:34,905] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2016-06-09 13:17:34,938] INFO ConnectorConfig values: 
connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max = 1
topics = []
name = test-oracle-jdbc-autoincrement
 (org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2016-06-09 13:17:34,939] INFO Creating connector test-oracle-jdbc-autoincrement of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2016-06-09 13:17:34,941] INFO Instantiated connector test-oracle-jdbc-autoincrement with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2016-06-09 13:17:34,961] INFO JdbcSourceConnectorConfig values: 
mode = timestamp+incrementing
topic.prefix = test-oracle-jdbc-
query = 
batch.max.rows = 100
connection.url = jdbc:oracle:thin:system/ora...@0.0.0.0:1521/orcl
table.blacklist = []
table.whitelist = [users]
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2016-06-09 13:17:37,623] INFO Finished creating connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:193)
[2016-06-09 13:17:43,310] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:91)
java.lang.IllegalArgumentException: Number of groups must be positive.
at org.apache.kafka.connect.util.ConnectorUtils.groupPartitions(ConnectorUtils.java:45)
at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:120)
at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:215)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.recomputeTaskConfigs(StandaloneHerder.java:210)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.updateConnectorTasks(StandaloneHerder.java:249)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:146)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:85)
[2016-06-09 13:17:43,312] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:68)
[2016-06-09 13:17:43,339] INFO Stopped ServerConnector@18b315fe{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2016-06-09 13:17:43,368] INFO Stopped o.e.j.s.ServletContextHandler@2a2da905{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
[2016-06-09 13:17:43,415] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:62)
[2016-06-09 13:17:43,416] INFO Stopping connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:226)
[2016-06-09 13:17:43,416] INFO Stopping table monitoring thread (io.confluent.connect.jdbc.JdbcSourceConnector:134)
[2016-06-09 13:17:43,418] INFO Stopped connector test-oracle-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:240)
[2016-06-09 13:17:43,422] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:77)
[2016-06-09 13:17:43,422] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:115)
[2016-06-09 13:17:43,422] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:61)
[2016-06-09 13:17:43,422] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:155)
[2016-06-09 13:17:43,422] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:74)

Please let me know if you need any other info. 

Thanks,
Bhargav Cherupally

Sujiesh Nair

unread,
Jul 21, 2017, 12:01:30 AM7/21/17
to Confluent Platform
Hi guys,

I am trying the Kafka JDBC connector to source data from an Oracle database in Kafka Connect distributed mode, and I am getting the following error:

{"error_code":500,"message":"Request timed out"}

I am using curl to create the Oracle connector:

curl -X POST -H "Content-Type: application/json" --data '{"name": "oracle-connect", "config": {"connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector", "tasks.max":"1",
"connection.url":"jdbc:oracle:thin:username/password@host:1521:SID", "query":"select * from table_name", "mode":"incrementing", "incrementing.column.name":"table_column", "topic.prefix":"abc" }}' http://localhost:8083/connectors

I tried enabling verbose logging, but it was no use.

I can successfully run queries from the Kafka server through SQL*Plus, so there is no firewall issue.
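(One hedged thing to try from that same SQL*Plus session: time a lookup similar to the table-metadata scan the JDBC driver runs when the connector is created. If this crawls, connector creation can exceed the REST request timeout:)

SET TIMING ON
-- Hedged approximation of the driver's getTables() metadata lookup.
SELECT COUNT(*) FROM all_objects WHERE object_type IN ('TABLE', 'VIEW');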

Any help would be greatly appreciated.

Thanks,
Sujiesh 


Robin Moffatt

unread,
Jul 21, 2017, 5:11:59 AM7/21/17
to confluent...@googlegroups.com
Can you check your Kafka Connect log? You should see the inbound POST and then any subsequent errors that are causing it to throw the HTTP 500.

Rao

unread,
Sep 1, 2017, 12:47:38 PM9/1/17
to Confluent Platform
Hi All,

I am also facing the same issue while using the Kafka JDBC connector to source data from an Oracle database in distributed mode.


[2017-08-31 22:08:08,508] DEBUG SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,in,out,-,-,30000/30000,HttpConnection}{io=0,kio=0,kro=1} idle timeout check, elapsed: 30000 ms, remaining: 0 ms (org.eclipse.jetty.io.IdleTimeout)
[2017-08-31 22:08:08,509] DEBUG SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,in,out,-,-,30001/30000,HttpConnection}{io=0,kio=0,kro=1} idle timeout expired (org.eclipse.jetty.io.IdleTimeout)
[2017-08-31 22:08:08,509] DEBUG ignored: WriteFlusher@3c0712a{IDLE} {} (org.eclipse.jetty.io.WriteFlusher)
[2017-08-31 22:08:08,509] DEBUG Ignored idle endpoint SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,in,out,-,-,30001/30000,HttpConnection}{io=0,kio=0,kro=1} (org.eclipse.jetty.io.AbstractEndPoint)
[2017-08-31 22:08:08,512] DEBUG Uncaught exception in REST call:  (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: Request timed out
at org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.completeOrForwardRequest(ConnectorsResource.java:268)
at org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
[2017-08-31 22:08:08,514] DEBUG org.eclipse.jetty.server.HttpConnection$SendCallback@103e4deb[PROCESSING][i=ResponseInfo{HTTP/1.1 500 Internal Server Error,48,false},cb=org.eclipse.jetty.server.HttpChannel$CommitCallback@573dce4c] generate: NEED_HEADER (null,[p=0,l=48,c=8192,r=48],true)@START (org.eclipse.jetty.server.HttpConnection)
[2017-08-31 22:08:08,514] DEBUG org.eclipse.jetty.server.HttpConnection$SendCallback@103e4deb[PROCESSING][i=ResponseInfo{HTTP/1.1 500 Internal Server Error,48,false},cb=org.eclipse.jetty.server.HttpChannel$CommitCallback@573dce4c] generate: FLUSH ([p=0,l=160,c=8192,r=160],[p=0,l=48,c=8192,r=48],true)@COMPLETING (org.eclipse.jetty.server.HttpConnection)
[2017-08-31 22:08:08,514] DEBUG write: WriteFlusher@3c0712a{IDLE} [HeapByteBuffer@37693b3b[p=0,l=160,c=8192,r=160]={<<<HTTP/1.1 500 Inte....v20160210)\r\n\r\n>>>\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00},HeapByteBuffer@94eccab[p=0,l=48,c=8192,r=48]={<<<{"error_code":500...est timed out"}>>>\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}] (org.eclipse.jetty.io.WriteFlusher)
[2017-08-31 22:08:08,514] DEBUG update WriteFlusher@3c0712a{WRITING}:IDLE-->WRITING (org.eclipse.jetty.io.WriteFlusher)
[2017-08-31 22:08:08,516] DEBUG flushed 208 SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,in,out,-,W,7/30000,HttpConnection}{io=0,kio=0,kro=1} (org.eclipse.jetty.io.ChannelEndPoint)
[2017-08-31 22:08:08,516] DEBUG update WriteFlusher@3c0712a{IDLE}:WRITING-->IDLE (org.eclipse.jetty.io.WriteFlusher)
[2017-08-31 22:08:08,517] DEBUG org.eclipse.jetty.server.HttpConnection$SendCallback@103e4deb[PROCESSING][i=ResponseInfo{HTTP/1.1 500 Internal Server Error,48,false},cb=org.eclipse.jetty.server.HttpChannel$CommitCallback@573dce4c] generate: DONE ([p=160,l=160,c=8192,r=0],[p=48,l=48,c=8192,r=0],true)@END (org.eclipse.jetty.server.HttpConnection)
[2017-08-31 22:08:08,517] INFO 172.17.0.1 - - [31/Aug/2017:22:06:38 +0000] "POST /connectors HTTP/1.1" 500 48  90012 (org.apache.kafka.connect.runtime.rest.RestServer)
[2017-08-31 22:08:08,517] DEBUG RESPONSE /connectors  500 handled=true (org.eclipse.jetty.server.Server)
[2017-08-31 22:08:08,518] DEBUG HttpChannelState@7b295020{s=DISPATCHED i=true a=null} unhandle DISPATCHED (org.eclipse.jetty.server.HttpChannelState)
[2017-08-31 22:08:08,518] DEBUG unconsumed input HttpConnection@2f03d750[FILLING,SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,in,out,-,-,2/30000,HttpConnection}{io=0,kio=0,kro=1}][p=HttpParser{s=CONTENT,593 of 593},g=HttpGenerator{s=END},c=HttpChannelOverHttp@97df553{r=1,c=true,a=COMPLETED,uri=/connectors}] (org.eclipse.jetty.server.HttpConnection)
[2017-08-31 22:08:08,518] DEBUG parseNext s=CONTENT HeapByteBuffer@12883419[p=736,l=736,c=16384,r=0]={POST /connectors ...":"LOCALTIME"}}<<<>>>\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00} (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,518] DEBUG CONTENT --> END (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,518] DEBUG HttpChannelOverHttp@97df553{r=1,c=true,a=COMPLETED,uri=/connectors} messageComplete (org.eclipse.jetty.server.HttpChannel)
[2017-08-31 22:08:08,518] DEBUG HttpInputOverHTTP@22e79137 EOF (org.eclipse.jetty.server.HttpInput)
[2017-08-31 22:08:08,519] DEBUG filled -1 SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,in,out,-,-,3/30000,HttpConnection}{io=0,kio=0,kro=1} (org.eclipse.jetty.io.ChannelEndPoint)
[2017-08-31 22:08:08,519] DEBUG ishut SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,in,out,-,-,3/30000,HttpConnection}{io=0,kio=0,kro=1} (org.eclipse.jetty.io.ChannelEndPoint)
[2017-08-31 22:08:08,519] DEBUG HttpConnection@2f03d750[FILLING,SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,Open,ISHUT,out,-,-,3/30000,HttpConnection}{io=0,kio=0,kro=1}][p=HttpParser{s=END,593 of 593},g=HttpGenerator{s=END},c=HttpChannelOverHttp@97df553{r=1,c=true,a=COMPLETED,uri=/connectors}] filled -1 (org.eclipse.jetty.server.HttpConnection)
[2017-08-31 22:08:08,519] DEBUG atEOF HttpParser{s=END,593 of 593} (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,519] DEBUG HttpInputOverHTTP@22e79137 eof EOF (org.eclipse.jetty.server.HttpInput)
[2017-08-31 22:08:08,519] DEBUG reset HttpParser{s=END,593 of 593} (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,519] DEBUG END --> START (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,519] DEBUG HttpChannelOverHttp@97df553{r=1,c=false,a=IDLE,uri=} handle exit, result COMPLETE (org.eclipse.jetty.server.HttpChannel)
[2017-08-31 22:08:08,520] DEBUG atEOF HttpParser{s=START,0 of -1} (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,520] DEBUG parseNext s=START HeapByteBuffer@7e440d25[p=0,l=0,c=0,r=0]={<<<>>>} (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,520] DEBUG START --> CLOSED (org.eclipse.jetty.http.HttpParser)
[2017-08-31 22:08:08,520] DEBUG onClose SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,CLOSED,ISHUT,out,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1} (org.eclipse.jetty.io.AbstractEndPoint)
[2017-08-31 22:08:08,520] DEBUG close SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,CLOSED,ISHUT,out,-,-,4/30000,HttpConnection}{io=0,kio=0,kro=1} (org.eclipse.jetty.io.ChannelEndPoint)
[2017-08-31 22:08:08,520] DEBUG Destroyed SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=-1,kro=-1} (org.eclipse.jetty.io.SelectorManager)
[2017-08-31 22:08:08,520] DEBUG onClose HttpConnection@2f03d750[FILLING,SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=-1,kro=-1}][p=HttpParser{s=CLOSED,0 of -1},g=HttpGenerator{s=START},c=HttpChannelOverHttp@97df553{r=1,c=false,a=IDLE,uri=}] (org.eclipse.jetty.io.AbstractConnection)
[2017-08-31 22:08:08,520] DEBUG onClose SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,CLOSED,ISHUT,OSHUT,-,-,4/30000,HttpConnection}{io=0,kio=-1,kro=-1} (org.eclipse.jetty.io.AbstractEndPoint)
[2017-08-31 22:08:08,521] DEBUG FILLING-->IDLE HttpConnection@2f03d750[IDLE,SelectChannelEndPoint@7ccd03d7{/172.17.0.1:54992<->8083,CLOSED,ISHUT,OSHUT,-,-,5/30000,HttpConnection}{io=0,kio=-1,kro=-1}][p=HttpParser{s=CLOSED,0 of -1},g=HttpGenerator{s=START},c=HttpChannelOverHttp@97df553{r=1,c=false,a=IDLE,uri=}] (org.eclipse.jetty.io.AbstractConnection)

Error logs attached; please let me know how to resolve this.

Any help would be greatly appreciated. 

Thanks,
Rao

Blazej Checinski

unread,
Sep 19, 2017, 4:19:18 AM9/19/17
to Confluent Platform
Hi Rao,
Any luck?

Kind regards,
Blazej

Bharath Raghu

unread,
Aug 2, 2019, 6:07:35 AM8/2/19
to Confluent Platform
Hi all,

Is there any solution available for this issue? I am stuck at this point, getting '{"error_code":500,"message":"Request timed out"}'.

NIRANJAN SAHOO

unread,
Sep 19, 2019, 1:57:11 AM9/19/19
to Confluent Platform
Hello ,

Active Tasks is showing 0 for me even though all configurations are correct.

task0.PNG

configuration.PNG

I am able to view the list of tables in the UI (screenshots attached).

Could you please help?

Thank you.

Ravi C

unread,
May 26, 2020, 1:21:15 PM5/26/20
to Confluent Platform
Today I observed in the Oracle DB that when I submit the connector with this configuration, the request is processed and the connector queries the database with the query below.

SELECT NULL AS table_cat,
       o.owner AS table_schem,
       o.object_name AS table_name,
       o.object_type AS table_type,
       NULL AS remarks
  FROM all_objects o
  WHERE o.owner LIKE :1 ESCAPE '/'
    AND o.object_name LIKE :2 ESCAPE '/'
    AND o.object_type IN ('xxx', 'TABLE')
  ORDER BY table_type, table_schem, table_name


The bind variables are '%' for both :1 and :2.

This query is submitted by the connector, and it never completes even though it returns only 2616 rows.
Because of this, I get a timeout after the poll interval.


If I kill the session from the database, the connector reports that the session was killed (see screenshots).


Please help with this. I have not been able to get any help on it, and I have no clue why the connector is behaving like this.
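(A hedged database-side avenue, assuming your DBA permits it: the all_objects lookup above reads the data dictionary, and stale dictionary statistics are a common reason such queries crawl, so refreshing them sometimes helps:)

-- Hedged remedy: refresh optimizer statistics on the data dictionary.
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;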








Ravi-MacBook-Pro:~ ranger$ curl -X POST http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -d '{

>                 "name": "jdbc_connector",

>                 "config": {

>                         "connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector",

>                         "connection.url": "jdbc:oracle:thin:@//localhost:32769/ORCLCDB.localdomain",

>                         "connection.user": "kafka_user",

>                         "connection.password": "oracle123",

>                         "topic.prefix": "oracle-01-",

>                         "table.whitelist" : "KAFKA_USER.KAFKA_TABLE",

>                         "mode":"timestamp+incrementing",

>                         "poll.interval.ms" : 3600,

>                         "incrementing.column.name":"ID",

>                         "validate.non.null": false,

>                         "name": "jdbc_connector"

>                         }

>                 }'

{"error_code":500,"message":"Request timed out"}Ravi-MacBook-Pro:~ ranger$ 







Ravi-MacBook-Pro:~ ranger$ curl -X POST http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -d '{

                "name": "jdbc_connector",

                "config": {

                        "connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector",

                        "connection.url": "jdbc:oracle:thin:@//localhost:32769/ORCLCDB.localdomain",

                        "connection.user": "kafka_user",

                        "connection.password": "oracle123",

                        "topic.prefix": "oracle-01-",

                        "table.whitelist" : "KAFKA_USER.KAFKA_TABLE",

                        "mode":"timestamp+incrementing",

                        "poll.interval.ms" : 3600,

                        "incrementing.column.name":"ID",

                        "validate.non.null": false,

                        "name": "jdbc_connector"

                        }

                }'

{"error_code":400,"message":"Connector configuration is invalid and contains the following 2 error(s):\nInvalid value java.sql.SQLRecoverableException: ORA-00028: your session has been killed\n for configuration Couldn't open connection to jdbc:oracle:thin:@//localhost:32769/ORCLCDB.localdomain\nInvalid value java.sql.SQLRecoverableException: ORA-00028: your session has been killed\n for configuration Couldn't open connection to jdbc:oracle:thin:@//localhost:32769/ORCLCDB.localdomain\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"}Ravi-MacBook-Pro:~ ranger$ 


Amit Sahu

unread,
May 26, 2020, 4:49:20 PM5/26/20
to confluent...@googlegroups.com
Hi,

It seems your database is slow. You can try increasing the consumer timeouts in the worker properties file. Tweak the configs below for your case.

session.timeout.ms

The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.

Type: int; Default: 10000; Importance: high



max.poll.interval.ms

The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms. This mirrors the behavior of a static consumer which has shutdown.

Type: int; Default: 300000; Valid Values: [1,...]; Importance: medium

-Amit

--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To unsubscribe from this group and stop receiving emails from it, send an email to confluent-platf...@googlegroups.com.

Kay Olusanya

unread,
Aug 11, 2020, 4:51:49 PM8/11/20
to Confluent Platform
Hi there, 
I'm hoping you can help with a similar issue; I've been experiencing pretty much the same error. I've tried ojdbc6, ojdbc7 and ojdbc8.
I am running within a Docker container, CP version 5.4.1.
That said, I've used the same config with the binaries and it works fine against the same database.
I've also used the Docker config with a MySQL database and that works too.
Any insight would be greatly appreciated.

Thanks
Kay 
