I am reading from a JDBC source connector (and subsequently writing to Elasticsearch with a sink connector). The stream works for a few thousand rows, but eventually the source task dies with a NullPointerException thrown inside the Netezza JDBC driver:
[2018-03-13 00:25:09,610] INFO Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:847)
[2018-03-13 00:25:09,885] INFO WorkerSourceTask{id=jdbc_source_netezza-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:158)
[2018-03-13 00:25:25,669] INFO 127.0.0.1 - - [13/Mar/2018:04:25:25 +0000] "POST /connectors HTTP/1.1" 409 75 281 (org.apache.kafka.connect.runtime.rest.RestServer:60)
[2018-03-13 00:25:46,962] INFO The database connection is invalid. Reconnecting... (io.confluent.connect.jdbc.util.CachedConnectionProvider:70)
[2018-03-13 00:25:47,138] INFO WorkerSourceTask{id=jdbc_source_netezza-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:306)
[2018-03-13 00:25:47,138] INFO WorkerSourceTask{id=jdbc_source_netezza-0} flushing 100 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:323)
[2018-03-13 00:25:47,153] INFO WorkerSourceTask{id=jdbc_source_netezza-0} Finished commitOffsets successfully in 15 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:405)
[2018-03-13 00:25:47,153] ERROR WorkerSourceTask{id=jdbc_source_netezza-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
java.lang.NullPointerException
at org.netezza.sql.NzConnection.receiveByte(NzConnection.java:543)
at org.netezza.sql.NzConnection.receiveChar(NzConnection.java:562)
at org.netezza.internal.QueryExecutor.update(QueryExecutor.java:322)
at org.netezza.sql.NzConnection.updateResultSet(NzConnection.java:2933)
at org.netezza.sql.NzResultSet.next(NzResultSet.java:1944)
at io.confluent.connect.jdbc.source.TableQuerier.next(TableQuerier.java:92)
at io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.next(TimestampIncrementingTableQuerier.java:55)
at io.confluent.connect.jdbc.source.JdbcSourceTask.poll(JdbcSourceTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:179)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-03-13 00:25:47,154] ERROR WorkerSourceTask{id=jdbc_source_netezza-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:173)
[2018-03-13 00:25:47,154] INFO [Producer clientId=producer-11] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:341)
[2018-03-13 00:26:09,504] INFO WorkerSinkTask{id=elasticsearch-sink-vuln_scan-0} Committing offsets asynchronously using sequence number 1: {netezza-vuln_scan-0=OffsetAndMetadata{offset=1800, metadata=''}} (org.apache.kafka.connect.runtime.WorkerSinkTask:311)
[2018-03-13 00:26:09,509] INFO WorkerSourceTask{id=jdbc_source_netezza-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:306)
[2018-03-13 00:26:09,509] INFO WorkerSourceTask{id=jdbc_source_netezza-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:323)
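For context, here is a rough sketch of the source connector config (submitted over the Connect REST API, as the POST /connectors line in the log shows). The connection URL, credentials, table, and column names are placeholders; the timestamp+incrementing mode is inferred from TimestampIncrementingTableQuerier in the stack trace, and the netezza- topic prefix from the netezza-vuln_scan topic the sink consumes:

```json
{
  "name": "jdbc_source_netezza",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:netezza://<host>:5480/<database>",
    "connection.user": "<user>",
    "connection.password": "<password>",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "<timestamp_column>",
    "incrementing.column.name": "<id_column>",
    "table.whitelist": "<table>",
    "topic.prefix": "netezza-",
    "poll.interval.ms": "5000",
    "batch.max.rows": "100"
  }
}
```

The NullPointerException shows up right after the "The database connection is invalid. Reconnecting..." line, so it looks like the task keeps iterating its ResultSet after the underlying Netezza connection has been recycled, and then it is killed and never recovers without a manual restart.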