Unable to start Historical node with error - druid.segmentCache.locations - may not be empty


lpdr

Apr 10, 2017, 11:38:03 AM
to Druid User
I'm using HDFS for deep storage and I'm unable to start the Historical node with any combination of HDFS/local paths for the "druid.segmentCache.locations" parameter.
Can anyone please look at the parameters below and help me figure out what could be wrong?


conf/druid/historical/runtime.properties
=============================
druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7

# Segment storage
druid.segmentCache.locations=[{"path":"/tmp/druid-caching","maxSize"\:10000000000}]
druid.server.maxSize=10000000000

_common/common.runtime.properties
=============================
# Extensions
druid.extensions.loadList=["druid-hdfs-storage","mysql-metadata-storage"]

# Logging
druid.startup.logging.logProperties=true

# Zookeeper
druid.zk.service.host=x.x.x.x
druid.zk.paths.base=/druid

#
# Metadata storage
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.x.x.x:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid

# Deep storage
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments

# Query Cache (we use a simple 10mb heap-based local cache on the broker)
druid.cache.type=local
druid.cache.sizeInBytes=10000000

#
# Indexing service logs
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs

# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info

druid.indexer.runner.javaOpts=-server -Xmx1g -Xms1g -XX:NewSize=256m -XX:MaxNewSize=256m -XX:MaxDirectMemorySize=1g  -XX:+PrintGCDetails -XX:+PrintGCTimeStamps




ERROR
======
2017-04-10T08:28:25,722 ERROR [main] io.druid.cli.CliHistorical - Error when starting up.  Failing.
com.google.inject.ProvisionException: Unable to provision, see the following errors:

1) druid.segmentCache.locations - may not be empty
  at io.druid.guice.JsonConfigProvider.bind(JsonConfigProvider.java:131) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.StorageNodeModule)
  at io.druid.guice.JsonConfigProvider.bind(JsonConfigProvider.java:131) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.StorageNodeModule)
  while locating com.google.common.base.Supplier<io.druid.segment.loading.SegmentLoaderConfig>
  at io.druid.guice.JsonConfigProvider.bind(JsonConfigProvider.java:132) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.StorageNodeModule)
  while locating io.druid.segment.loading.SegmentLoaderConfig
    for the 2nd parameter of io.druid.segment.loading.SegmentLoaderLocalCacheManager.<init>(SegmentLoaderLocalCacheManager.java:59)
  while locating io.druid.segment.loading.SegmentLoaderLocalCacheManager
  at io.druid.guice.LocalDataStorageDruidModule.configure(LocalDataStorageDruidModule.java:53) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.LocalDataStorageDruidModule)
  while locating io.druid.segment.loading.SegmentLoader
    for the 1st parameter of io.druid.server.coordination.ServerManager.<init>(ServerManager.java:106)
  at io.druid.cli.CliHistorical$1.configure(CliHistorical.java:78) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.cli.CliHistorical$1)
  while locating io.druid.server.coordination.ServerManager
  at io.druid.cli.CliHistorical$1.configure(CliHistorical.java:80) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.cli.CliHistorical$1)
  while locating io.druid.query.QuerySegmentWalker
    for the 5th parameter of io.druid.server.QueryResource.<init>(QueryResource.java:110)
  while locating io.druid.server.QueryResource

1 error
        at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1028) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1050) ~[guice-4.1.0.jar:?]
        at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:153) ~[druid-api-0.9.2.jar:0.9.2]
        at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:101) [druid-services-0.9.2.jar:0.9.2]
        at io.druid.cli.ServerRunnable.run(ServerRunnable.java:40) [druid-services-0.9.2.jar:0.9.2]
        at io.druid.cli.Main.main(Main.java:106) [druid-services-0.9.2.jar:0.9.2]

Nishant Bangarwa

Apr 10, 2017, 11:50:28 AM
to Druid User
On a quick look, the configs seem fine.
Can you also share how you are starting the Historical node? Maybe the historical runtime.properties is not added to the classpath properly?
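
For reference, on Druid 0.9.2 a Historical is usually launched with both the _common and the historical config directories on the classpath, roughly like the sketch below (the paths assume the default distribution layout under your Druid install directory, so adjust them to your setup):

# run from the Druid install directory; both config dirs must be on the classpath
java `cat conf/druid/historical/jvm.config | xargs` \
  -classpath "conf/druid/_common:conf/druid/historical:lib/*" \
  io.druid.cli.Main server historical

If conf/druid/historical is missing from the classpath, the Historical would not see druid.segmentCache.locations at all and could fail with exactly this "may not be empty" error.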


lpdr

Apr 11, 2017, 11:24:55 AM
to Druid User
Thanks Nishant. 