Started receiving "Failed to lock" errors after moving to 0.10.2.0


Yun Olg

Mar 2, 2017, 4:59:49 PM
to Confluent Platform
I used to run this Streams application with 0.10.1.1-cp1. After moving to 0.10.2.0, I started receiving these errors at the end of restoring the existing state stores.
It then repeats this for every task and stays that way. When I restart the application, it downloads all of the state stores from scratch again.

- It is the only instance running.
- 16 partitions, 4 threads.
- Kafka brokers are still CP3.1.2.
- Since the ZooKeeper URL setting is deprecated, I removed it from the configuration. That is the only code change.

2017-03-02 20:49:16 [StreamThread-4] WARN  o.a.k.s.p.i.StreamThread - Could not create task 0_1. Will retry.
org.apache.kafka.streams.errors.LockException: task [0_1] Failed to lock the state directory: /var/lib/firefly/0_1
        at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:102)
        at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
        at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
        at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
        at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
        at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
        at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
        at org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
        at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
        at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
        at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)

In debug mode it continuously prints these afterwards.

2017-03-02 21:23:15 [kafka-coordinator-heartbeat-thread | fireflyYns] DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending Heartbeat request for group fireflyYns to coordinator xxx.xxx:9092 (id: 2147483647 rack: null)
2017-03-02 21:23:15 [kafka-coordinator-heartbeat-thread | fireflyYns] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt to heartbeat failed for group fireflyYns since it is rebalancing.

Damian Guy

Mar 3, 2017, 5:39:58 AM
to Confluent Platform
Hi Yun,

This is usually because one or more of the threads in your application is taking longer than max.poll.interval.ms between calls to poll(). That triggers a rebalance, and the threads that are taking too long are kicked out of the group. Those threads still hold the locks on the state directories, so when the new assignment happens the other threads are unable to get the lock. In that case we retry and back off until we can get the lock.

You probably need to increase max.poll.interval.ms to be at least as large as the time it takes to restore your state stores.
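
For illustration, a minimal sketch of that override (the application id and broker address are taken from the logs above; the 60-minute value is only an example of "larger than the restore time", not a recommendation):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsPollIntervalConfig {

    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fireflyYns");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "xxx.xxx:9092");

        // Allow each stream thread up to 60 minutes between poll() calls so
        // that a long state-store restore does not get the thread kicked out
        // of the group while it still holds the state-directory lock.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 60 * 60 * 1000);
        return props;
    }
}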

Thanks,
Damian 


Yun Olg

Mar 3, 2017, 8:11:33 AM
to Confluent Platform
Some followup:

1- Downgrading the Streams application to 0.10.1.1-cp1 helped. The application works normally again.

2- Initial changelog fetch takes approximately 40 minutes. Setting max.poll.interval.ms to such a high value doesn't seem like a good solution.

3- What I really don't understand is this: in the beginning it fetches the changelog topics up to the latest offset, then this deadlock happens. After I restart the Streams application, it starts fetching the changelogs from the earliest offset again.
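
For what it's worth, my understanding is that the already-restored changelog offsets are only remembered through the per-task .checkpoint file in the state directory, and that file is only written on a clean close; if the instance is killed while stuck in this retry loop, the next start has no checkpoint and restores from the earliest offset again. Below is a minimal sketch of closing the instance cleanly through a shutdown hook; the topic names are placeholders.

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class CleanShutdownExample {

    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topology

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fireflyYns");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "xxx.xxx:9092");

        final KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();

        // Close cleanly on shutdown so each task writes its .checkpoint file;
        // without that file the changelog topics are re-read from the earliest
        // offset on the next start.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}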

Eno Thereska

May 4, 2017, 3:07:52 PM
to Confluent Platform
Yunus,

We've fixed the lock issue in the 0.10.2.1 release. Would you mind trying it?

Thanks
Eno

Yunus Olgun

May 4, 2017, 3:51:57 PM
to Confluent Platform
Sure, I will try it and get back to you.

Thanks
Yunus

Kiran Gilvaz

Jun 8, 2017, 5:32:58 PM
to Confluent Platform
I'm running on 0.10.2.1 and I still see the issue.

Kiran

Eno Thereska

Jun 14, 2017, 9:41:08 AM
to Confluent Platform
Could you please share a log so we can see what's going on?

Thanks
Eno

Yunus Olgun

Jun 20, 2017, 9:34:31 AM
to Confluent Platform
Still happens.

20 13:07:06.208 [StreamThread-2] ERROR o.a.k.c.c.i.ConsumerCoordinator - User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group fireflyYns failed on partition assignment
org.apache.kafka.streams.errors.ProcessorStateException: Error opening store userAwardStore at location /var/lib/firefly/fireflyYns/0_127/rocksdb/userAwardStore
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:181)
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:151)
        at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:156)
        at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:40)
        at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
        at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
        at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:131)
        at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:63)
        at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
        at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:141)
        at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
        at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
        at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
        at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
        at org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
        at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
        at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
        at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
Caused by: org.rocksdb.RocksDBException: R
        at org.rocksdb.RocksDB.open(Native Method)
        at org.rocksdb.RocksDB.open(RocksDB.java:231)
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
        ... 23 common frames omitted

Eno Thereska

Jun 26, 2017, 9:15:48 AM
to Confluent Platform
Sorry for the delay. This looks like a RocksDB issue with a corrupt store. Have any of the RocksDB files been deleted by any chance? Would you mind doing a fresh run one more time and letting me know if you still get it?
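
One way to get a genuinely fresh run without deleting files by hand is to call cleanUp() on the KafkaStreams instance before starting it; it wipes this instance's local state directory so every store is rebuilt from its changelog topic. A rough sketch, with a placeholder topology and the application id from your logs:

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class FreshRunExample {

    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topology

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fireflyYns");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "xxx.xxx:9092");

        KafkaStreams streams = new KafkaStreams(builder, props);
        // cleanUp() deletes the local state directory for this application id
        // (e.g. under /var/lib/firefly) so the stores are restored from the
        // changelog topics; it must only be called while the instance is not
        // running, i.e. here, before start().
        streams.cleanUp();
        streams.start();
    }
}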

Thanks
Eno

RonnieXie

May 23, 2018, 5:33:14 AM
to Confluent Platform
Hi Yun,

Do you run 4 consumers?

I run 4 consumers in 4 child processes (uWSGI), and I also see this issue: "Heartbeat failed for group ti-myGroup because it is rebalancing".


On Friday, March 3, 2017 at 5:59:49 AM UTC+8, Yunus Olgun wrote: