I tried to manually create the three storage topics (for offsets, configs, and status) per the documentation; the only difference is that I could not set the replication factor to 2, because my demo cluster had only one Kafka broker. That did not fix the problem.
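For reference, this is roughly how I created them (a sketch, assuming a 2017-era Confluent setup where `kafka-topics` still takes `--zookeeper`; topic names come from my worker config below, partition counts are illustrative, and replication factor is 1 because of the single broker):

```shell
# Create the three Connect storage topics manually (single-broker demo,
# so replication factor is 1 instead of the recommended value).
# Connect's internal topics are expected to be log-compacted.
kafka-topics --create --zookeeper localhost:2181 \
  --topic quickstart-avro-sink-offsets \
  --partitions 25 --replication-factor 1 \
  --config cleanup.policy=compact

kafka-topics --create --zookeeper localhost:2181 \
  --topic quickstart-avro-sink-config \
  --partitions 1 --replication-factor 1 \
  --config cleanup.policy=compact

kafka-topics --create --zookeeper localhost:2181 \
  --topic quickstart-avro-sink-status \
  --partitions 5 --replication-factor 1 \
  --config cleanup.policy=compact
```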
However, the logs of the InfluxDB connector show:
[2017-06-15 22:58:04,100] WARN Commit of WorkerSinkTask{id=influxdb-sink-0} offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2017-06-15 22:59:04,099] INFO Empty list of records received. (com.datamountaineer.streamreactor.connect.influx.InfluxSinkTask)
[2017-06-15 22:59:04,100] INFO WorkerSinkTask{id=influxdb-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2017-06-15 22:59:04,100] WARN Commit of WorkerSinkTask{id=influxdb-sink-0} offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2017-06-15 23:00:04,099] INFO Empty list of records received. (com.datamountaineer.streamreactor.connect.influx.InfluxSinkTask)
[2017-06-15 23:00:04,099] INFO WorkerSinkTask{id=influxdb-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2017-06-15 23:00:04,099] WARN Commit of WorkerSinkTask{id=influxdb-sink-0} offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2017-06-15 23:01:04,099] INFO Empty list of records received. (com.datamountaineer.streamreactor.connect.influx.InfluxSinkTask)
[2017-06-15 23:01:04,099] INFO WorkerSinkTask{id=influxdb-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2017-06-15 23:01:04,099] WARN Commit of WorkerSinkTask{id=influxdb-sink-0} offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
I thought this might have been fixed in the latest Confluent Kafka Connect image, so I ran the following test with a MySQL database and the Confluent JDBC sink connector.
I combined knowledge from the following articles:
and wrote some data to a Kafka topic while the DB was up, then stopped the DB Docker container, wrote some more data while the DB was down, then started the DB container again and restarted the related connector.
All the data from the Kafka topic was transferred to the DB (the behaviour I expected from the InfluxDB connector, but which it currently does not show).
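The test sequence can be sketched like this (the container name `mysql-demo` and connector name `jdbc-sink` are placeholders for my actual setup; the Connect REST port matches `rest.port=38083` from the worker config below):

```shell
# 1. Produce some records to the Kafka topic while the DB is up,
#    then stop the database container:
docker stop mysql-demo

# 2. Produce more records while the DB is down, then bring it back:
docker start mysql-demo

# 3. Restart the connector via the Kafka Connect REST API:
curl -X POST http://localhost:38083/connectors/jdbc-sink/restart
```

After the restart, the JDBC sink delivered all records, including those produced while the DB was down.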
One strange thing is that when I subscribed to the topic configured as the offset topic for the connector, it was empty.
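I checked it roughly like this (assuming the console consumer shipped with the Confluent distribution, reading from the broker address in my worker config):

```shell
# Read the configured Connect offsets topic from the beginning;
# in my case this returned nothing.
kafka-console-consumer --bootstrap-server localhost:29092 \
  --topic quickstart-avro-sink-offsets \
  --from-beginning
```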
kafka-connect.properties:
rest.port=38083
config.storage.topic=quickstart-avro-sink-config
log4j.root.loglevel=DEBUG
key.converter=io.confluent.connect.avro.AvroConverter
offset.storage.topic=quickstart-avro-sink-offsets
internal.key.converter.schemas.enable=false
bootstrap.servers=localhost:29092
value.converter=io.confluent.connect.avro.AvroConverter
status.storage.topic=quickstart-avro-sink-status
internal.value.converter.schemas.enable=false
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
I have some questions related to that:
1) Does it mean that the related info was stored in, and later read from, the __consumer_offsets topic?
2) Is the fix for bug KAFKA-4942 not yet included in the latest Confluent Docker image of Kafka Connect?
3) If not, is the assumed correct behaviour to write this info to the specified Connect topic (quickstart-avro-sink-offsets in my example)?