Druid integration with HDFS (urgent)


Suman Banerjee

unread,
May 17, 2017, 10:30:13 AM5/17/17
to Druid Development
Hi, I am able to store the data on the local file system, but I am unable to use HDFS as deep storage.


I did 3 things:

1) Included the HDFS extension in my list of extensions.
2) Set the proper configs for HDFS.
3) Included the relevant Hadoop configuration files in the classpath of the nodes I am using.

Do we also need to include the Hadoop jars in the classpath while running Druid processes?

Here are my configs:

1# common.runtime.properties under conf-quickstart/druid/_common


#
# Licensed to Metamarkets Group Inc. (Metamarkets) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Metamarkets licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
druid.extensions.loadList=["druid-hdfs-storage"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.

#druid.extensions.hadoopDependenciesDir=/root/labtest/druid_hadoop/druid-0.10.0/conf-quickstart/druid/hadoop-dependencies

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=localhost
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
druid.metadata.storage.type=derby
druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
druid.metadata.storage.connector.host=localhost
druid.metadata.storage.connector.port=1527

# For MySQL:
#druid.metadata.storage.type=mysql
#druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid_quick/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid_quick/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info

-Duser.timezone=UTC


2# Added the Hadoop config files in conf-quickstart/druid/_common

3# java `cat conf-quickstart/druid/historical/jvm.config | xargs` -cp "conf-quickstart/druid/_common:conf-quickstart/druid/historical:lib/*"


But I am facing an exception during MR job execution.



2017-05-17T10:10:39,998 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_local805813083_0002 running in uber mode : false
2017-05-17T10:10:39,999 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 0% reduce 0%
2017-05-17T10:10:43,129 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map
2017-05-17T10:10:44,101 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 39% reduce 0%
2017-05-17T10:10:45,438 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - map > map
2017-05-17T10:10:45,439 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Starting flush of map output
2017-05-17T10:10:45,439 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Spilling map output
2017-05-17T10:10:45,439 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 16736001; bufvoid = 104857600
2017-05-17T10:10:45,439 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26057424(104229696); length = 156973/6553600
2017-05-17T10:10:45,726 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Finished spill 0
2017-05-17T10:10:45,730 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Task:attempt_local805813083_0002_m_000000_0 is done. And is in the process of committing
2017-05-17T10:10:45,794 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - map
2017-05-17T10:10:45,794 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Task 'attempt_local805813083_0002_m_000000_0' done.
2017-05-17T10:10:45,794 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local805813083_0002_m_000000_0
2017-05-17T10:10:45,795 INFO [Thread-42] org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2017-05-17T10:10:45,795 INFO [Thread-42] org.apache.hadoop.mapred.LocalJobRunner - Waiting for reduce tasks
2017-05-17T10:10:45,797 INFO [pool-23-thread-1] org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local805813083_0002_r_000000_0
2017-05-17T10:10:45,804 INFO [pool-23-thread-1] org.apache.hadoop.mapred.Task -  Using ResourceCalculatorProcessTree : [ ]
2017-05-17T10:10:45,804 INFO [pool-23-thread-1] org.apache.hadoop.mapred.ReduceTask - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@7b595d8a
2017-05-17T10:10:45,836 INFO [pool-23-thread-1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - MergerManager: memoryLimit=1336252800, maxSingleShuffleLimit=334063200, mergeThreshold=881926912, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2017-05-17T10:10:45,838 INFO [EventFetcher for fetching Map Completion Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher - attempt_local805813083_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2017-05-17T10:10:45,848 INFO [localfetcher#2] org.apache.hadoop.mapreduce.task.reduce.LocalFetcher - localfetcher#2 about to shuffle output of map attempt_local805813083_0002_m_000000_0 decomp: 16892979 len: 16892983 to MEMORY
2017-05-17T10:10:46,011 INFO [localfetcher#2] org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput - Read 16892979 bytes from map-output for attempt_local805813083_0002_m_000000_0
2017-05-17T10:10:46,011 INFO [localfetcher#2] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - closeInMemoryFile -> map-output of size: 16892979, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->16892979
2017-05-17T10:10:46,019 INFO [EventFetcher for fetching Map Completion Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher - EventFetcher is interrupted.. Returning
2017-05-17T10:10:46,032 INFO [pool-23-thread-1] org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2017-05-17T10:10:46,033 INFO [pool-23-thread-1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2017-05-17T10:10:46,035 INFO [pool-23-thread-1] org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2017-05-17T10:10:46,049 INFO [pool-23-thread-1] org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 16892927 bytes
2017-05-17T10:10:46,111 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 100% reduce 0%
2017-05-17T10:10:46,189 INFO [pool-23-thread-1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merged 1 segments, 16892979 bytes to disk to satisfy reduce memory limit
2017-05-17T10:10:46,189 INFO [pool-23-thread-1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 1 files, 16892983 bytes from disk
2017-05-17T10:10:46,190 INFO [pool-23-thread-1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 0 segments, 0 bytes from memory into reduce
2017-05-17T10:10:46,190 INFO [pool-23-thread-1] org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2017-05-17T10:10:46,191 INFO [pool-23-thread-1] org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 16892927 bytes
2017-05-17T10:10:46,191 INFO [pool-23-thread-1] org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2017-05-17T10:10:46,514 INFO [pool-23-thread-1] io.druid.indexer.HadoopDruidIndexerConfig - Running with config:
{
  "spec" : {
    "dataSchema" : {
      "dataSource" : "wikiticker",
      "parser" : {
        "type" : "hadoopyString",
        "parseSpec" : {
          "format" : "json",
          "dimensionsSpec" : {
            "dimensions" : [ "channel", "cityName", "comment", "countryIsoCode", "countryName", "isAnonymous", "isMinor", "isNew", "isRobot", "isUnpatrolled", "metroCode", "namespace", "page", "regionIsoCode", "regionName", "user" ]
          },
          "timestampSpec" : {
            "format" : "auto",
            "column" : "time"
          }
        }
      },
      "metricsSpec" : [ {
        "type" : "count",
        "name" : "count"
      }, {
        "type" : "longSum",
        "name" : "added",
        "fieldName" : "added",
        "expression" : null
      }, {
        "type" : "longSum",
        "name" : "deleted",
        "fieldName" : "deleted",
        "expression" : null
      }, {
        "type" : "longSum",
        "name" : "delta",
        "fieldName" : "delta",
        "expression" : null
      }, {
        "type" : "hyperUnique",
        "name" : "user_unique",
        "fieldName" : "user",
        "isInputHyperUnique" : false
      } ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "DAY",
        "queryGranularity" : {
          "type" : "none"
        },
        "rollup" : true,
        "intervals" : [ "2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.000Z" ]
      }
    },
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "quickstart/wikiticker-2015-09-12-sampled.json"
      },
      "metadataUpdateSpec" : null,
      "segmentOutputPath" : "file:/druid_quick/segments"
    },
    "tuningConfig" : {
      "type" : "hadoop",
      "workingPath" : "var/druid/hadoop-tmp",
      "version" : "2017-05-17T10:10:05.030Z",
      "partitionsSpec" : {
        "type" : "hashed",
        "targetPartitionSize" : 5000000,
        "maxPartitionSize" : 7500000,
        "assumeGrouped" : false,
        "numShards" : -1,
        "partitionDimensions" : [ ]
      },
      "shardSpecs" : {
        "1442016000000" : [ {
          "actualSpec" : {
            "type" : "none"
          },
          "shardNum" : 0
        } ]
      },
      "indexSpec" : {
        "bitmap" : {
          "type" : "concise"
        },
        "dimensionCompression" : "lz4",
        "metricCompression" : "lz4",
        "longEncoding" : "longs"
      },
      "maxRowsInMemory" : 75000,
      "leaveIntermediate" : false,
      "cleanupOnFailure" : true,
      "overwriteFiles" : false,
      "ignoreInvalidRows" : false,
      "jobProperties" : { },
      "combineText" : false,
      "useCombiner" : false,
      "buildV9Directly" : true,
      "numBackgroundPersistThreads" : 0,
      "forceExtendableShardSpecs" : false,
      "useExplicitVersion" : false
    },
    "uniqueId" : "1f3c150297c9432cafa8eced522572a8"
  }
}
2017-05-17T10:10:46,572 INFO [Thread-42] org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.
2017-05-17T10:10:46,575 WARN [Thread-42] org.apache.hadoop.mapred.LocalJobRunner - job_local805813083_0002
java.lang.Exception: java.io.IOException: No such file or directory
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.3.0.jar:?]
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529) [hadoop-mapreduce-client-common-2.3.0.jar:?]
Caused by: java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[?:1.8.0_131]
at java.io.File.createTempFile(File.java:2024) ~[?:1.8.0_131]
at java.io.File.createTempFile(File.java:2070) ~[?:1.8.0_131]
at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:569) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:478) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) ~[hadoop-mapreduce-client-common-2.3.0.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]
2017-05-17T10:10:47,112 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_local805813083_0002 failed with state FAILED due to: NA
2017-05-17T10:10:47,126 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 33
File System Counters
FILE: Number of bytes read=34215448
FILE: Number of bytes written=17309299
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Map input records=39244
Map output records=39244
Map output bytes=16736001
Map output materialized bytes=16892983
Input split bytes=320
Combine input records=0
Combine output records=0
Reduce input groups=0
Reduce shuffle bytes=16892983
Reduce input records=0
Reduce output records=0
Spilled Records=39244
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=3570
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=811073536
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters 
Bytes Read=0
File Output Format Counters 
Bytes Written=0
2017-05-17T10:10:47,134 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[var/druid/hadoop-tmp/wikiticker/2017-05-17T101005.030Z_1f3c150297c9432cafa8eced522572a8]
2017-05-17T10:10:47,164 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikiticker_2017-05-17T10:10:04.930Z, type=index_hadoop, dataSource=wikiticker}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:211) ~[druid-indexing-service-0.10.0.jar:0.10.0]
at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:223) ~[druid-indexing-service-0.10.0.jar:0.10.0]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.10.0.jar:0.10.0]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.10.0.jar:0.10.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:208) ~[druid-indexing-service-0.10.0.jar:0.10.0]
... 7 more
Caused by: io.druid.java.util.common.ISE: Job[class io.druid.indexer.IndexGeneratorJob] failed!
at io.druid.indexer.JobHelper.runJobs(JobHelper.java:369) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.10.0.jar:0.10.0]
at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:276) ~[druid-indexing-service-0.10.0.jar:0.10.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:208) ~[druid-indexing-service-0.10.0.jar:0.10.0]
... 7 more
2017-05-17T10:10:47,183 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_wikiticker_2017-05-17T10:10:04.930Z] status changed to [FAILED].
2017-05-17T10:10:47,202 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_wikiticker_2017-05-17T10:10:04.930Z",
  "status" : "FAILED",
  "duration" : 28376
}
2017-05-17T10:10:47,228 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.AbstractDataSegmentAnnouncer.stop()] on object[io.druid.server.coordination.BatchDataSegmentAnnouncer@73c48264].
2017-05-17T10:10:47,228 INFO [main] io.druid.server.coordination.AbstractDataSegmentAnnouncer - Stopping class io.druid.server.coordination.BatchDataSegmentAnnouncer with config[io.druid.server.initialization.ZkPathsConfig@22e2266d]
2017-05-17T10:10:47,229 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/announcements/sandbox.hortonworks.com:8100]
2017-05-17T10:10:47,290 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@2b736fee].
2017-05-17T10:10:47,290 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/sandbox.hortonworks.com:8100]
2017-05-17T10:10:47,422 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/sandbox.hortonworks.com:8100]
2017-05-17T10:10:47,422 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@44286963].
2017-05-17T10:10:47,422 INFO [main] io.druid.query.lookup.LookupReferencesManager - Stopping lookup factory references manager
2017-05-17T10:10:47,549 INFO [main] org.eclipse.jetty.server.AbstractConnector - Stopped ServerConnector@50b93353{HTTP/1.1,[http/1.1]}{0.0.0.0:8100}
2017-05-17T10:10:47,553 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@77cb452c{/,null,UNAVAILABLE}
2017-05-17T10:10:47,591 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.stop() throws java.lang.Exception] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@7c781c42].
2017-05-17T10:10:47,591 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.overlord.ThreadPoolTaskRunner.stop()] on object[io.druid.indexing.overlord.ThreadPoolTaskRunner@245253d8].
2017-05-17T10:10:47,635 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@2a334bac].
2017-05-17T10:10:47,658 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.announcement.Announcer.stop()] on object[io.druid.curator.announcement.Announcer@75e09567].
2017-05-17T10:10:47,663 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@36c281ed].
2017-05-17T10:10:47,663 INFO [main] io.druid.curator.CuratorModule - Stopping Curator
2017-05-17T10:10:47,664 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting
2017-05-17T10:10:47,777 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x15c154525550132 closed
2017-05-17T10:10:47,777 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x15c154525550132
2017-05-17T10:10:47,778 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.http.client.NettyHttpClient.stop()] on object[com.metamx.http.client.NettyHttpClient@1386313f].
2017-05-17T10:10:47,928 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.metrics.MonitorScheduler.stop()] on object[com.metamx.metrics.MonitorScheduler@d5556bf].
2017-05-17T10:10:47,928 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.emitter.service.ServiceEmitter.close() throws java.io.IOException] on object[com.metamx.emitter.service.ServiceEmitter@5854a18].
2017-05-17T10:10:47,928 INFO [main] com.metamx.emitter.core.LoggingEmitter - Close: started [false]
2017-05-17T10:10:47,928 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner.stop()] on object[io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner@4a37191a].






Parag Jain

unread,
May 17, 2017, 10:40:25 PM5/17/17
to druid-de...@googlegroups.com
Hello, maybe check whether the machine running the Hadoop job has a temp directory corresponding to System.getProperty("java.io.tmpdir"). If not, create the directory manually, give it appropriate write permissions, and try again. Also, please use the druid-user group for support questions.
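For reference, a quick standalone check along these lines (hypothetical class name, not part of Druid) would show what java.io.tmpdir resolves to for a given JVM and whether a temp file can actually be created there:

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch: print the JVM's temp directory and try the same
// two-argument File.createTempFile call that fails in the stack trace,
// which resolves against java.io.tmpdir.
public class TmpDirCheck {
    public static void main(String[] args) throws IOException {
        String tmpDir = System.getProperty("java.io.tmpdir");
        System.out.println("java.io.tmpdir = " + tmpDir);

        File dir = new File(tmpDir);
        if (!dir.isDirectory()) {
            System.err.println("Temp dir does not exist (or is not a directory): " + tmpDir);
            return;
        }

        // Two-arg overload: no explicit directory, so it falls back to java.io.tmpdir.
        File probe = File.createTempFile("druid-tmpdir-check", ".tmp");
        System.out.println("Successfully created " + probe.getAbsolutePath());
        probe.delete();
    }
}
```

Running it with the same jvm.config flags the peon uses would also reveal whether java.io.tmpdir is being overridden to a path that does not exist.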



Suman Banerjee

unread,
May 18, 2017, 1:33:44 AM5/18/17
to Druid Development
Thanks,

I can see the /tmp dir, and it has the proper permissions, as below:

drwxrwxrwt   1 root  root   12288 May 17 14:49 tmp

I am running Druid as root in an HDP 2.6 VM.

Please help me with what to check next.

Parag Jain

unread,
May 18, 2017, 9:38:35 AM5/18/17
to druid-de...@googlegroups.com
Not sure what else might be wrong; the exception clearly indicates that the temp dir does not exist. Maybe write a simple MapReduce job that creates a temp file the same way Druid does (https://github.com/druid-io/druid/blob/master/indexing-hadoop/src/main/java/io/druid/indexer/IndexGeneratorJob.java#L570) and see if that works.
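A rough sketch of such a repro job (hypothetical class, prefix, and path names; a simplified stand-in for Druid's actual reducer, which, per the stack trace, goes through the two-argument File.createTempFile and therefore depends on java.io.tmpdir):

```java
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Tiny MapReduce job whose reducer creates a temp file in java.io.tmpdir,
// mimicking the call shape of the failing frame in the log above.
public class TmpFileRepro {

    public static class PassThroughMapper extends Mapper<Object, Text, Text, NullWritable> {
        @Override
        protected void map(Object key, Text value, Context ctx) throws IOException, InterruptedException {
            ctx.write(value, NullWritable.get());
        }
    }

    public static class TempFileReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
        @Override
        protected void reduce(Text key, Iterable<NullWritable> values, Context ctx)
                throws IOException, InterruptedException {
            // Two-arg createTempFile resolves against java.io.tmpdir, like the failing call.
            File tmp = File.createTempFile("reproTmp", ".tmp");
            ctx.write(new Text("created " + tmp.getAbsolutePath()), NullWritable.get());
            tmp.delete();
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "tmpfile-repro");
        job.setJarByClass(TmpFileRepro.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setReducerClass(TempFileReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // any small text input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // non-existent output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

If you run it with the same JVM options the peon task uses (in particular any -Djava.io.tmpdir override), the reducer should hit the same IOException if the temp directory is really the problem.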


