I used to run this Streams application with 0.10.1.1-cp1. After moving to 0.10.2.0 I started receiving the errors below at the end of restoring the existing state stores.
Afterwards it repeats this for every task and stays that way. When I restart the application, it starts downloading all of the state stores from scratch again.
- It is the only instance running.
- 16 partitions, 4 threads.
- Kafka brokers are still CP 3.1.2.
- Since the zookeeper url setting is deprecated, I removed it from the configuration. That is the only code change.
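For context, this is roughly what the configuration looks like after the change. It is a reconstruction, not the actual code: the key names are the real Kafka config strings, but the values are taken from (or guessed at from) the logs below.

```java
import java.util.Properties;

public class FireflyConfig {
    // Reconstructed Streams configuration after the 0.10.2.0 upgrade.
    // "zookeeper.connect" is deprecated and no longer passed; everything
    // else is unchanged. Values are illustrative, taken from the logs.
    public static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "fireflyYns");      // the group id seen in the logs
        props.put("bootstrap.servers", "xxx.xxx:9092"); // broker address from the logs
        props.put("num.stream.threads", "4");           // 4 threads over 16 partitions
        props.put("state.dir", "/var/lib/firefly");     // state store path from the logs
        // props.put("zookeeper.connect", "...");       // removed: deprecated in 0.10.2.0
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```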
2017-03-02 20:49:16 [StreamThread-4] WARN o.a.k.s.p.i.StreamThread - Could not create task 0_1. Will retry.
org.apache.kafka.streams.errors.LockException: task [0_1] Failed to lock the state directory: /var/lib/firefly/0_1
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:102)
    at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
In debug mode it continuously prints the following afterwards.
2017-03-02 21:23:15 [kafka-coordinator-heartbeat-thread | fireflyYns] DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending Heartbeat request for group fireflyYns to coordinator xxx.xxx:9092 (id: 2147483647 rack: null)
2017-03-02 21:23:15 [kafka-coordinator-heartbeat-thread | fireflyYns] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt to heartbeat failed for group fireflyYns since it is rebalancing.
--
You received this message because you are subscribed to the Google Groups "Confluent Platform" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/confluent-platform/71b338b6-80a0-489c-938f-26c8f91a5be9%40googlegroups.com.
20 13:07:06.208 [StreamThread-2] ERROR o.a.k.c.c.i.ConsumerCoordinator - User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group fireflyYns failed on partition assignment
org.apache.kafka.streams.errors.ProcessorStateException: Error opening store userAwardStore at location /var/lib/firefly/fireflyYns/0_127/rocksdb/userAwardStore
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:181)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:151)
    at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:156)
    at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:40)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:131)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:63)
    at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:141)
    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
Caused by: org.rocksdb.RocksDBException: R
    at org.rocksdb.RocksDB.open(Native Method)
    at org.rocksdb.RocksDB.open(RocksDB.java:231)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
    ... 23 common frames omitted
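If I read the Streams source right, each task directory is guarded by an OS file lock on a file named ".lock", and the LockException above means FileChannel.tryLock() did not acquire it. Here is a small stand-alone probe I used to reason about it; the path is the one from my logs and the class name is my own, not part of Kafka. Note that tryLock() only returns null when another *process* holds the lock; a lock held by another thread in the same JVM throws OverlappingFileLockException instead, which may be exactly the case during a rebalance.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class TaskLockCheck {
    // Probes the same kind of lock Kafka Streams takes on a task directory's
    // ".lock" file. Returns true if the lock could be acquired (and releases
    // it immediately), false if some other process or thread holds it.
    public static boolean isLockFree(Path lockFile) throws IOException {
        try (FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock;
            try {
                lock = ch.tryLock();           // null => held by another process
            } catch (OverlappingFileLockException e) {
                return false;                  // held by this JVM (another thread)
            }
            if (lock == null) {
                return false;
            }
            lock.release();
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        // Task directory from my logs; falls back to a temp file for a dry run.
        Path lockFile = args.length > 0
                ? Paths.get(args[0])
                : Files.createTempFile("taskLockCheck", ".lock");
        System.out.println(lockFile + " lock free: " + isLockFree(lockFile));
    }
}
```

Running it against /var/lib/firefly/fireflyYns/0_127/.lock while the application is stuck would at least tell whether the lock is held by a foreign process or by the application's own threads.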