Hey Confluent folks,
We are running Confluent Platform 3.1.1 with Kafka 0.10.1.0-cp2 in our Swarm cluster,
and we keep seeing this error when trying to consume messages:
[2016-12-13 02:08:13,846] ERROR Task dp threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.common.KafkaException: Record for partition dp at offset 234 is invalid, cause: Record is corrupt (stored crc = 1133837813, computed crc = 2330297257)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:743)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:682)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:425)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1045)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:317)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:235)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:172)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
At first we thought it might be related to https://github.com/confluentinc/kafka/commit/d2acd676c3eb0c11d0042bc3b9ae314165c68443,
but that change to the CRC update function only added a simple check, so we don't see how it could cause this. Can anyone kindly shed some light on it?
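For context, our understanding of the failing check in Fetcher.parseRecord is: the consumer recomputes a CRC32 over the fetched record bytes and compares it to the checksum stored with the record, throwing when they differ. A minimal sketch of that idea, assuming a standard java.util.zip.CRC32 (the class and method names here are ours, not Kafka's):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class CrcCheckSketch {

    // Compute a CRC32 checksum over the record bytes, as a broker or
    // producer would when the record is written.
    static long computeCrc(byte[] recordBytes) {
        CRC32 crc = new CRC32();
        crc.update(recordBytes, 0, recordBytes.length);
        return crc.getValue();
    }

    // Consumer-side validation analogous to the check that fails above:
    // recompute the CRC over the fetched bytes and compare to the stored value.
    static void validate(byte[] fetchedBytes, long storedCrc) {
        long computedCrc = computeCrc(fetchedBytes);
        if (computedCrc != storedCrc) {
            throw new RuntimeException("Record is corrupt (stored crc = "
                    + storedCrc + ", computed crc = " + computedCrc + ")");
        }
    }

    public static void main(String[] args) {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        long storedCrc = computeCrc(payload);
        validate(payload, storedCrc); // passes: bytes unchanged

        // Flipping a single byte reproduces the mismatch we see in the log.
        payload[0] ^= 0x01;
        try {
            validate(payload, storedCrc);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

So a stored/computed mismatch means the bytes on the wire (or on disk) no longer match what was checksummed at write time, which is why we suspect corruption somewhere between producer and consumer rather than the check itself.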
Thanks in advance.
cheng