# Broker JVM options
-server
-Xmx192g
-Xms192g
-XX:NewSize=6g
-XX:MaxNewSize=6g
-XX:MaxDirectMemorySize=64g
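With a heap this large, full collections under the JDK 8 defaults can pause for a very long time, which matches the "slow responsiveness" described below. A hedged sketch of additional jvm.config lines worth trying — these are standard HotSpot flags for the Java 8 era this setup appears to run, and the log path is hypothetical; verify against your JVM version:

```
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/var/log/druid/broker-gc.log
```

The GC log will show whether the heap is genuinely live at 192 GB or merely slow to be collected.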
# Processing threads and buffers
druid.processing.buffer.sizeBytes=2147483647
druid.processing.numThreads=11
druid.processing.numMergeBuffers=2
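As a sanity check on these values: the Druid performance docs suggest direct memory of roughly druid.processing.buffer.sizeBytes × (numThreads + numMergeBuffers + 1). A quick shell sketch using the settings above (the rule of thumb is quoted from memory — verify it against the docs for your Druid version):

```shell
# Rule-of-thumb direct-memory requirement for the processing buffers above.
SIZE_BYTES=2147483647        # druid.processing.buffer.sizeBytes
NUM_THREADS=11               # druid.processing.numThreads
NUM_MERGE_BUFFERS=2          # druid.processing.numMergeBuffers
NEEDED=$(( SIZE_BYTES * (NUM_THREADS + NUM_MERGE_BUFFERS + 1) ))
MAX_DIRECT=$(( 64 * 1024 * 1024 * 1024 ))   # -XX:MaxDirectMemorySize=64g
echo "direct memory needed: $NEEDED of $MAX_DIRECT bytes"
[ "$NEEDED" -le "$MAX_DIRECT" ] && echo "fits within MaxDirectMemorySize"
```

By that estimate the 64g direct-memory ceiling is comfortable (~30 GB needed), so the pressure here is on the heap, not on direct memory.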
Any ideas on how we should deal with this situation? Can we force the Broker to release its JVM memory immediately?
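On forcing a collection: a full GC can be requested from outside the JVM with jcmd, which ships with the JDK, though it is stop-the-world and will not help if the heap is genuinely live. A hedged sketch — force_broker_gc is a hypothetical helper, not a Druid tool:

```shell
# Hypothetical helper: ask a running Broker JVM for a full GC via jcmd.
# Note: jmap -histo:live <pid> also forces a full GC as a side effect.
force_broker_gc() {
  pid=${1:-}
  if [ -z "$pid" ]; then
    echo "usage: force_broker_gc <broker-pid>"
    return 1
  fi
  jcmd "$pid" GC.run   # stop-the-world full collection; use with care
}

force_broker_gc || true   # no pid given here, so this just prints usage
```

If used heap stays near the maximum even after GC.run, the memory is live (for example, held by in-flight query merge state), and forcing collections will not reclaim it.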
You received this message because you are subscribed to the Google Groups "Druid User" group.
Gian
# HTTP server threads
druid.server.http.numThreads=11
druid.server.http.maxIdleTime=PT15m
druid.broker.http.numConnections=50
druid.broker.http.readTimeout=PT15M
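One sizing tip from the Druid docs worth checking against this config: druid.server.http.numThreads on the data nodes (Historicals/MiddleManagers) should be set slightly higher than the Broker's druid.broker.http.numConnections, or heavy queries can queue waiting for HTTP threads. A hedged sketch — the 60 is a hypothetical data-node value, not taken from this config:

```shell
BROKER_CONNECTIONS=50       # druid.broker.http.numConnections above
DATA_NODE_HTTP_THREADS=60   # hypothetical druid.server.http.numThreads on data nodes
if [ "$DATA_NODE_HTTP_THREADS" -gt "$BROKER_CONNECTIONS" ]; then
  echo "ok: data nodes have headroom for the Broker connection pool"
fi
```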
On Jun 20, 2017, at 3:36 PM, Sebastian Zontek <s...@deep.bi> wrote:

Not sure if we need that heap. Please look at the chart below:
green line: jvm/mem/max
blue line: jvm/mem/used
Under normal load the Broker consumes ca. 30 GB. But when we start running those heavy queries, it consumes everything it has. As you can see, when we had 128 GB (~140M bytes on the chart) the Broker used almost all of it. Then we changed it to 192 GB, and even that amount was consumed entirely. So can I assume that we in fact need such a big heap?
When we reached those peaks we had to restart the Broker because of its slow responsiveness. Heap usage dropped to 10 GB afterwards.
We use JavaScript aggregators.