Hi Druid team,
There are some problems with my Druid cluster. We are using Druid version 0.6.1691.
1. How do we enable the historical nodes to write to memcached and the brokers to read from memcached? Our current cache settings are:
On the broker:
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=false

On the historical:
druid.historical.cache.useCache=true
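Based on the caching docs, we think the memcached setup should look roughly like the lines below, but we are not sure these property names are exactly right for 0.6.x (they follow the newer docs), and the memcached hosts are just placeholders:

On the historical (populates the cache):
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=memcached
druid.cache.hosts=memcached1.example.com:11211,memcached2.example.com:11211
druid.cache.expiration=2592000
druid.cache.memcachedPrefix=druid

On the broker (reads the cache only, does not populate it):
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=false
druid.cache.type=memcached
druid.cache.hosts=memcached1.example.com:11211,memcached2.example.com:11211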
2. Some of our queries are very slow. The attachments contain the metrics for each kind of node. Could you give me some advice on tuning the Druid cluster?
--
Thanks.
Thanks very much for your quick reply.
Yes, I have checked the GC metrics on the broker and found heavy GC there, which is likely what caused the slow queries, so we need to tune the JVM start parameters (a rough example of the kind of flags we mean follows the cluster summary below). Attachments are the runtime.properties and the start scripts.

Our cluster:
40 historical machines, each with 40 CPUs and 128 GB memory
18 realtime machines, each with 40 CPUs and 128 GB memory; 2 realtime nodes are started on each machine.
5 broker machines, each with 40 CPUs and 128 GB memory.
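The actual heap settings are in the attached start script; for discussion, our broker start command is along these lines (the sizes and paths below are illustrative placeholders, not our real values):

# Illustrative broker start command; sizes and paths are placeholders.
# MaxDirectMemorySize has to cover druid.processing.buffer.sizeBytes * (numThreads + 1),
# which is 512 MB * 21 ~= 11 GB with the broker settings further down.
java -server \
  -Xms12g -Xmx12g \
  -XX:MaxDirectMemorySize=12g \
  -XX:+UseConcMarkSweepGC \
  -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Djava.io.tmpdir=/data/druid/tmp \
  -classpath "config/_common:config/broker:lib/*" \
  io.druid.cli.Main server broker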
Why doesn't the broker dispatch the query to the historical nodes in parallel when I submit a request to a broker? How do we configure it so the fan-out happens in parallel? Here is what I see in the broker log:
2015-09-10T19:03:22,880 INFO [qtp1781399452-77] com.metamx.http.client.pool.ChannelResourceFactory - Generating: http://broker185.kafka.game.dev.com:8083
2015-09-10T19:03:24,126 INFO [qtp1781399452-77] com.metamx.http.client.pool.ChannelResourceFactory - Generating: http://broker36.kafka.game.dev.com:8083
2015-09-10T19:03:24,714 INFO [qtp1781399452-77] com.metamx.http.client.pool.ChannelResourceFactory - Generating: http://broker20.kafka.game.dev.com:8083
2015-09-10T19:03:26,420 INFO [qtp1781399452-77] com.metamx.http.client.pool.ChannelResourceFactory - Generating: http://broker239.kafka.game.dev.com:8083
2015-09-10T19:03:26,952 INFO [qtp1781399452-77] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2015-09-10T19:03:26.952+08:00","service":"broker","host":"broker36.kafka.game.dev.com:8082","metric":"query/node/time","value":2828,"dataSource":"user_game_daily","duration":"PT1296000S","hasFilters":"true","id":"d8145fcd-83dd-45be-bde5-8a19811af6a3","interval":["2015-05-03T00:00:00.000+08:00/2015-05-04T00:00:00.000+08:00","2015-05-05T00:00:00.000+08:00/2015-05-06T00:00:00.000+08:00","2015-05-20T00:00:00.000+08:00/2015-05-21T00:00:00.000+08:00","2015-05-23T00:00:00.000+08:00/2015-05-28T00:00:00.000+08:00","2015-05-29T00:00:00.000+08:00/2015-06-01T00:00:00.000+08:00","2015-06-02T00:00:00.000+08:00/2015-06-06T00:00:00.000+08:00"],"numComplexMetrics":"0","numDimensions":"3","numMetrics":"6","server":"broker36.kafka.game.dev.com:8083","type":"groupBy"}]
2015-09-10T19:03:26,955 INFO [qtp1781399452-77] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2015-09-10T19:03:26.955+08:00","service":"broker","host":"broker36.kafka.game.dev.com:8082","metric":"query/node/time","value":2243,"dataSource":"user_game_daily","duration":"PT777600S","hasFilters":"true","id":"d8145fcd-83dd-45be-bde5-8a19811af6a3","interval":["2015-05-08T00:00:00.000+08:00/2015-05-10T00:00:00.000+08:00","2015-05-13T00:00:00.000+08:00/2015-05-18T00:00:00.000+08:00","2015-05-28T00:00:00.000+08:00/2015-05-29T00:00:00.000+08:00","2015-06-06T00:00:00.000+08:00/2015-06-07T00:00:00.000+08:00"],"numComplexMetrics":"0","numDimensions":"3","numMetrics":"6","server":"broker20.kafka.game.dev.com:8083","type":"groupBy"}]
2015-09-10T19:03:26,958 INFO [qtp1781399452-77] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2015-09-10T19:03:26.958+08:00","service":"broker","host":"broker36.kafka.game.dev.com:8082","metric":"query/node/time","value":540,"dataSource":"user_game_daily","duration":"PT518400S","hasFilters":"true","id":"d8145fcd-83dd-45be-bde5-8a19811af6a3","interval":["2015-05-10T00:00:00.000+08:00/2015-05-11T00:00:00.000+08:00","2015-05-12T00:00:00.000+08:00/2015-05-13T00:00:00.000+08:00","2015-05-18T00:00:00.000+08:00/2015-05-19T00:00:00.000+08:00","2015-05-21T00:00:00.000+08:00/2015-05-23T00:00:00.000+08:00","2015-06-07T00:00:00.000+08:00/2015-06-08T00:00:00.000+08:00"],"numComplexMetrics":"0","numDimensions":"3","numMetrics":"6","server":"broker239.kafka.game.dev.com:8083","type":"groupBy"}]
2015-09-10T19:03:26,960 INFO [qtp1781399452-77] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2015-09-10T19:03:26.960+08:00","service":"broker","host":"broker36.kafka.game.dev.com:8082","metric":"query/node/time","value":4082,"dataSource":"user_game_daily","duration":"PT777600S","hasFilters":"true","id":"d8145fcd-83dd-45be-bde5-8a19811af6a3","interval":["2015-05-01T00:00:00.000+08:00/2015-05-03T00:00:00.000+08:00","2015-05-04T00:00:00.000+08:00/2015-05-05T00:00:00.000+08:00","2015-05-06T00:00:00.000+08:00/2015-05-08T00:00:00.000+08:00","2015-05-11T00:00:00.000+08:00/2015-05-12T00:00:00.000+08:00","2015-05-19T00:00:00.000+08:00/2015-05-20T00:00:00.000+08:00","2015-06-01T00:00:00.000+08:00/2015-06-02T00:00:00.000+08:00","2015-06-08T00:00:00.000+08:00/2015-06-09T00:00:00.000+08:00"],"numComplexMetrics":"0","numDimensions":"3","numMetrics":"6","server":"broker185.kafka.game.dev.com:8083","type":"groupBy"}]
2015-09-10T19:03:27,137 INFO [qtp1781399452-77] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2015-09-10T19:03:27.137+08:00","service":"broker","host":"broker36.kafka.game.dev.com:8082","metric":"query/time","value":4265,"context":"{\"queryId\":\"d8145fcd-83dd-45be-bde5-8a19811af6a3\",\"timeout\":300000}","dataSource":"user_game_daily","duration":"PT3369600S","hasFilters":"true","id":"d8145fcd-83dd-45be-bde5-8a19811af6a3","interval":["2015-05-01T00:00:00.000+08:00/2015-06-09T00:00:00.000+08:00"],"remoteAddress":"10.21.42.62","type":"groupBy"}]
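If I read the metrics above correctly: the four query/node/time values for query id d8145fcd-83dd-45be-bde5-8a19811af6a3 are 2828 ms, 2243 ms, 540 ms and 4082 ms, which add up to about 9.7 s, while the overall query/time is 4265 ms, close to the largest single node time plus merge overhead. What looks sequential to me are the ChannelResourceFactory "Generating" lines between 19:03:22 and 19:03:26.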
My broker config:
#druid.host=localhost
druid.port=8082
druid.service=broker
# We enable using the local query cache here
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
#default 5
druid.broker.http.numConnections=20
druid.broker.http.readTimeout=PT5M
# For prod: set numThreads = # cores - 1, and sizeBytes to 512mb
druid.processing.buffer.sizeBytes=512000000
druid.processing.numThreads=20
#Number of threads for HTTP requests. default 10
druid.server.http.numThreads=30
The historical config:
druid.port=8083
druid.service=historical
# Intermediate buffer used per processing thread; larger buffers help longer topNs and groupBys.
# In prod: set sizeBytes = 512mb
druid.processing.buffer.sizeBytes=512000000
# Number of segments this node can scan in parallel.
# In prod: set numThreads = # cores - 1
druid.processing.numThreads=20
druid.server.http.numThreads=30
# maxSize should reflect the performance you want.
# Druid memory maps segments.
# memory_for_segments = total_memory - heap_size - (processing.buffer.sizeBytes * (processing.numThreads+1)) - JVM overhead (~1G)
# The greater the memory/disk ratio, the better performance you should see
druid.segmentCache.locations=[{"path": "/data/druid/data/indexCache", "maxSize"\: 500000000000}]
druid.monitoring.monitors=["io.druid.server.metrics.HistoricalMetricsMonitor", "com.metamx.metrics.JvmMonitor"]
# 500G (500000000000 bytes)
druid.server.maxSize=500000000000
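Plugging our numbers into the formula in the comment above (the heap size is in the attached start script, so the 12 GB here is just an assumed value):

memory_for_segments = 128 GB total
                      - 12 GB heap (assumed)
                      - 0.512 GB * (20 + 1) processing buffers ~= 10.8 GB
                      - ~1 GB JVM overhead
                    ~= 104 GB per historical

Against druid.server.maxSize = 500 GB, that is roughly a 1:5 memory-to-disk ratio on each node.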
thanks