Getting failed Index kafka tasks


Suprabhat

Jun 18, 2024, 7:22:07 AM
to Druid User

Hi Team, I am getting failed Kafka index tasks, and on checking the logs I found the following warnings:

  • WARN [NodeRoleWatcher[COORDINATOR]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Ignored event type [CONNECTION_RECONNECTED] for node watcher of role [coordinator].
  • WARN [main] org.apache.druid.indexing.common.config.TaskConfig - Batch processing mode argument value is null or not valid:[null], defaulting to[CLOSED_SEGMENTS]
  • WARN [main] org.eclipse.jetty.server.handler.gzip.GzipHandler - minGzipSize of 0 is inefficient for short content, break even is size 23
  • WARN [main] org.apache.druid.query.lookup.LookupReferencesManager - No lookups found for tier [__default], response [org.apache.druid.java.util.http.client.response.StringFullResponseHolder]

And the following error:

ERROR [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Encountered exception in run() before persisting.

org.apache.kafka.common.errors.InterruptException: java.lang.InterruptedException
  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:535) ~[?:?]
  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:296) ~[?:?]
  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:251) ~[?:?]
  at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1311) ~[?:?]
  at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1247) ~[?:?]
  at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1220) ~[?:?]
  at org.apache.druid.indexing.kafka.KafkaRecordSupplier.poll(KafkaRecordSupplier.java:132) ~[?:?]
  at org.apache.druid.indexing.kafka.IncrementalPublishingKafkaIndexTaskRunner.getRecords(IncrementalPublishingKafkaIndexTaskRunner.java:95) ~[?:?]
  at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.runInternal(SeekableStreamIndexTaskRunner.java:614) ~[druid-indexing-service-25.0.0.jar:25.0.0]
  at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.run(SeekableStreamIndexTaskRunner.java:266) ~[druid-indexing-service-25.0.0.jar:25.0.0]
  at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTask.runTask(SeekableStreamIndexTask.java:151) ~[druid-indexing-service-25.0.0.jar:25.0.0]
  at org.apache.druid.indexing.common.task.AbstractTask.run(AbstractTask.java:169) ~[druid-indexing-service-25.0.0.jar:25.0.0]
  at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:477) ~[druid-indexing-service-25.0.0.jar:25.0.0]
  at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:449) ~[druid-indexing-service-25.0.0.jar:25.0.0]
  at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_382]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_382]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_382]
  at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_382]
Caused by: java.lang.InterruptedException
  ... 18 more
============================================

I searched online and found that the following two can be the reasons:

  • Thread Interruption: The thread executing the Kafka consumer was interrupted (e.g., by another thread or due to a timeout).
  • Resource Constraints: Network glitches or resource limitations (CPU, memory) might lead to interruptions.
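For the first reason, here is a minimal, Kafka-free sketch of the mechanism: interrupting a thread that is blocked (as a thread inside `KafkaConsumer.poll()` can be) raises `java.lang.InterruptedException`, which the Kafka client then wraps in `org.apache.kafka.common.errors.InterruptException`. The class name and the `Thread.sleep` stand-in for the blocking poll are illustrative assumptions, not Druid code:

```java
// Sketch: a thread blocked in a call (here Thread.sleep as a stand-in for
// a blocking KafkaConsumer.poll()) receives InterruptedException when it
// is interrupted -- e.g. when Druid shuts a task down mid-poll.
public class InterruptDemo {
    static volatile boolean sawInterrupt = false;

    public static void main(String[] args) throws Exception {
        Thread poller = new Thread(() -> {
            try {
                // Stand-in for the blocking poll() call in the stack trace
                Thread.sleep(10_000);
            } catch (InterruptedException e) {
                // The Kafka client wraps this in InterruptException,
                // matching the "Caused by: java.lang.InterruptedException" above
                sawInterrupt = true;
            }
        });
        poller.start();
        poller.interrupt(); // another thread (e.g. task shutdown) interrupts it
        poller.join();
        System.out.println("interrupted=" + sawInterrupt);
    }
}
```

This is consistent with the trace: the interruption happens while the task thread is inside `ConsumerNetworkClient.poll`, so the root cause is whatever interrupted the task thread (for example, the task being stopped or timing out), not the poll itself.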

Can anybody please help identify what the exact reason might be?