Hi all,
I am using Druid 0.8.1 with Cassandra as the deep storage. On the historical node I am getting the following error repeatedly.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2016-03-10T06:43:10,512 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[clickstream_2016-03-09T08:05:00.000Z_2016-03-09T08:10:00.000Z_2016-03-09T08:05:00.000Z], segment=DataSegment{size=41499, shardSpec=NoneShardSpec, metrics=[count], dimensions=[authenticated_user, browser_language, browser_viewable_resolution, campaign_medium, campaign_name, campaign_source, client_ip, color_depth, cvar_1_key, cvar_1_scope, cvar_1_slot, cvar_1_value, cvar_2_key, cvar_2_scope, cvar_2_slot, cvar_2_value, cvar_3_key, cvar_3_scope, cvar_3_slot, cvar_3_value, cvar_4_key, cvar_4_scope, cvar_4_slot, cvar_4_value, cvar_5_key, cvar_5_scope, cvar_5_slot, cvar_5_value, cvariables, encoding, event_action, event_category, event_label, event_value, flash_version, ga_hit_id, hostname, java_enabled, page_view_count, pid, referrer, request_page_title, request_type, request_uri, screen_resolution, tran_city, tran_country, tran_order, tran_product_category, tran_product_name, tran_product_sku, tran_product_unit_price, tran_shipping, tran_state, tran_tax, tran_total, ts_current_visit, ts_initial_visit, ts_previous_visit, user_agent, ute, ute_PixelId, visit_count, visitor_id], version='2016-03-09T08:05:00.000Z', loadSpec={type=c*, key=druid/clickstream/2016-03-09T08:05:00.000Z_2016-03-09T08:10:00.000Z/2016-03-09T08:05:00.000Z/0}, interval=2016-03-09T08:05:00.000Z/2016-03-09T08:10:00.000Z, dataSource='clickstream', binaryVersion='9'}}
io.druid.segment.loading.SegmentLoadingException: Exception loading segment[clickstream_2016-03-09T08:05:00.000Z_2016-03-09T08:10:00.000Z_2016-03-09T08:05:00.000Z]
at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:146) ~[druid-server-0.8.1.jar:0.8.1]
at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:171) [druid-server-0.8.1.jar:0.8.1]
at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:42) [druid-server-0.8.1.jar:0.8.1]
at io.druid.server.coordination.BaseZkCoordinator$1.childEvent(BaseZkCoordinator.java:115) [druid-server-0.8.1.jar:0.8.1]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:516) [curator-recipes-2.8.0.jar:?]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:510) [curator-recipes-2.8.0.jar:?]
at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) [curator-framework-2.8.0.jar:?]
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) [guava-16.0.1.jar:?]
at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:84) [curator-framework-2.8.0.jar:?]
at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:508) [curator-recipes-2.8.0.jar:?]
at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) [curator-recipes-2.8.0.jar:?]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:759) [curator-recipes-2.8.0.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_60]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
Caused by: java.lang.IllegalArgumentException: Could not resolve type id 'c*' into a subtype of [simple type, class io.druid.segment.loading.LoadSpec]
at [Source: N/A; line: -1, column: -1]
at com.fasterxml.jackson.databind.ObjectMapper._convert(ObjectMapper.java:2774) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.ObjectMapper.convertValue(ObjectMapper.java:2700) ~[jackson-databind-2.4.4.jar:2.4.4]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:140) ~[druid-server-0.8.1.jar:0.8.1]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:93) ~[druid-server-0.8.1.jar:0.8.1]
at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:151) ~[druid-server-0.8.1.jar:0.8.1]
at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:142) ~[druid-server-0.8.1.jar:0.8.1]
... 18 more
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Could not resolve type id 'c*' into a subtype of [simple type, class io.druid.segment.loading.LoadSpec]
at [Source: N/A; line: -1, column: -1]
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.DeserializationContext.unknownTypeException(DeserializationContext.java:862) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.jsontype.impl.TypeDeserializerBase._findDeserializer(TypeDeserializerBase.java:167) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:99) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:84) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:132) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:41) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.ObjectMapper._convert(ObjectMapper.java:2769) ~[jackson-databind-2.4.4.jar:2.4.4]
at com.fasterxml.jackson.databind.ObjectMapper.convertValue(ObjectMapper.java:2700) ~[jackson-databind-2.4.4.jar:2.4.4]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:140) ~[druid-server-0.8.1.jar:0.8.1]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:93) ~[druid-server-0.8.1.jar:0.8.1]
at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:151) ~[druid-server-0.8.1.jar:0.8.1]
at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:142) ~[druid-server-0.8.1.jar:0.8.1]
... 18 more
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
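If I read the trace correctly, the root cause is that the historical node cannot resolve the loadSpec type id 'c*' into a LoadSpec implementation. For reference, the loadSpec printed in the error above, reformatted as JSON, is:

{
  "type": "c*",
  "key": "druid/clickstream/2016-03-09T08:05:00.000Z_2016-03-09T08:10:00.000Z/2016-03-09T08:05:00.000Z/0"
}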
Here is my common configuration (a sketch of the historical node's own runtime.properties follows after it):
druid.extensions.coordinates=["io.druid.extensions:druid-examples","io.druid.extensions:druid-kafka-eight","io.druid.extensions:mysql-metadata-storage","io.druid.extensions:druid-cassandra-storage:0.8.1"]
druid.extensions.localRepository=extensions-repo
# Zookeeper
druid.zk.service.host=localhost
# Metadata Storage (use something like mysql in production by uncommenting properties below)
# by default druid will use derby
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://192.168.169.112:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid
druid.storage.type=c*
druid.storage.host=192.168.169.112:9160
druid.storage.keyspace=druid
# Query Cache (we use a simple 10mb heap-based local cache on the broker)
druid.cache.type=local
druid.cache.sizeInBytes=10000000
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
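For completeness, the historical node's own runtime.properties is along these lines (the port, cache path, and sizes below are illustrative placeholders rather than the exact values from my box):

druid.service=druid/historical
druid.port=8083
druid.segmentCache.locations=[{"path": "/tmp/druid/indexCache", "maxSize": 10000000000}]
druid.server.maxSize=10000000000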
Please suggest what I might be missing.