On Friday, July 15, 2016 at 2:10:26 AM UTC+8, Tim Zeller wrote:

We are building a datastream around kafka-connect (Confluent 2.0.1 / Kafka 0.9) to parse text files and write each line to a topic. There is a threshold where the connector fails with:

ERROR Failed to flush WorkerSourceTask{id=conn1-0}, timed out while waiting for producer to flush outstanding messages

(then it dumps everything in the buffer to the log)

ERROR Failed to commit offsets for WorkerSourceTask{id=conn1-1} (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)

These errors occur when there are around 3000 lines in the input file. We have tried many different settings for offset.flush.timeout.ms and offset.flush.interval.ms, but the issue persists, and eventually we hit a Java heap / out-of-memory error. It seems like some offset buffer keeps growing beyond what the connector logic can handle. Is there any other config setting that would clear the offset buffer more frequently, or increase the size it can handle?
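For context, both of those are worker-level settings (in connect-standalone.properties or connect-distributed.properties). A sketch of the kind of values we tried; the numbers here are illustrative, not our exact configs:

    # how often the worker attempts to commit source offsets (default 60000)
    offset.flush.interval.ms=10000
    # how long to wait for outstanding producer sends before logging the
    # "timed out while waiting for producer to flush" error (default 5000)
    offset.flush.timeout.ms=20000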
Hi, you need to increase the producer buffer configs. Something like producer.something...
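The exact names aren't spelled out above, but these are presumably the standard Kafka producer settings, which a Connect worker forwards to its source tasks' producer via the producer. prefix. A minimal sketch with illustrative values, not a tested recommendation:

    # Connect worker config; the producer. prefix passes these through
    # to the producer used by source tasks
    producer.buffer.memory=67108864   # total bytes the producer may buffer (default 33554432)
    producer.batch.size=65536         # per-partition batch size in bytes (default 16384)
    producer.linger.ms=100            # wait briefly to fill larger batches (default 0)

Note that a larger buffer also means more in-flight data to drain at each offset commit, so it likely needs to go hand in hand with a larger offset.flush.timeout.ms.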
On Sep 7, 2018 10:59 AM, <foreve...@hotmail.com> wrote:
Have you resolved this issue the way @Dustin Cote suggested? I'm getting the same error as you.
Could you tell me the exact configs? Thank you; I'm still stuck on this problem...
On Friday, September 7, 2018 at 5:37:30 PM UTC+8, Darth wrote:
Hi, you need to increase the producer buffer configs. Something like producer.something...