Resource limit exceeded

1,327 views

salman siddiqui

Jul 9, 2018, 1:43:28 AM
to Druid User
I am new to Druid.

I am using Druid 0.11.0 with groupBy v2. I want to query more than 500k rows in a groupBy, but when I do, this error occurs:

{
  "error": "Resource limit exceeded",
  "errorMessage": "Not enough dictionary space to execute this query. Try increasing druid.query.groupBy.maxMergingDictionarySize or enable disk spilling by setting druid.query.groupBy.maxOnDiskStorage to a positive number.",
  "errorClass": "io.druid.query.ResourceLimitExceededException",
  "host": "ubuntu:8083"
}

Can anyone help me figure out what I should do?

Suhas

Jul 9, 2018, 2:38:57 AM
to Druid User
Hey Salman,

Make sure to set those configuration parameters in both Broker and Historical (and Middle Manager, if you're using a real-time node).
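For reference, a minimal sketch of what those settings might look like in each node's runtime.properties. The values below are illustrative placeholders, not recommendations; size them to your heap and disk:

```
# broker/runtime.properties and historical/runtime.properties
# On-heap dictionary budget for groupBy v2 merging, in bytes
druid.query.groupBy.maxMergingDictionarySize=100000000
# Positive value enables disk spilling; max spill space per query, in bytes
druid.query.groupBy.maxOnDiskStorage=1000000000
```

Note that .properties files only treat `#` as a comment at the start of a line, so keep comments on their own lines.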

Suhas

Salman

Jul 9, 2018, 3:24:05 AM
to Druid User
I have set this configuration in the broker's runtime.properties:

druid.query.groupBy.maxMergingDictionarySize=900000000
druid.query.groupBy.maxOnDiskStorage=100000

I still get the same error.

Suhas

Jul 9, 2018, 3:41:57 AM
to Druid User
Hey Salman, 

Did you set it on the Historical node too? The merging dictionary is held on-heap, so make sure there's enough heap allocated.
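As an illustration only (heap and direct-memory sizes must be adapted to your hardware), a Historical jvm.config with more heap might look like:

```
# conf/druid/historical/jvm.config
-server
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=8g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
```

Keep in mind that -XX:MaxDirectMemorySize must be large enough for the off-heap processing buffers, roughly druid.processing.buffer.sizeBytes * (druid.processing.numThreads + druid.processing.numMergeBuffers + 1), or the node will refuse to start.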

Suhas

Salman

Jul 9, 2018, 4:50:18 AM
to Druid User
Yes, I set these configurations on the Historical too, but when I run the query from Postman, Postman stops responding and crashes every time. I am running the groupBy query over about 1 million (10 lakh) rows.


Suhas

Jul 10, 2018, 6:38:50 AM
to Druid User
Can you please share your configurations on Broker and Historical?

Suhas

Salman

Jul 10, 2018, 6:55:34 AM
to Druid User

I have changed the config many times and tried different settings, but it does not work.
broker_configuration.txt
historical_configuration.txt

Suhas

Jul 10, 2018, 9:02:02 AM
to Druid User
Can you also share their respective jvm.config files, along with your server's specs (total memory and number of cores)?
Also, right off the bat, I can tell that you haven't set the druid.query.groupBy.maxMergingDictionarySize parameter.

Suhas

Salman

Jul 11, 2018, 12:39:36 AM
to Druid User
I tried setting the druid.query.groupBy.maxMergingDictionarySize parameter, but it did not help.
jvm_config.txt

Salman

Jul 12, 2018, 2:35:47 AM
to Druid User
Hi Druid,
 
Can anyone help me figure out what I should do to solve this problem?

Jihoon Son

Jul 12, 2018, 12:58:49 PM
to druid...@googlegroups.com
Hi Salman,

Have you restarted the Historicals and Brokers after changing the configurations?

Jihoon

On Wed, Jul 11, 2018 at 11:35 PM Salman <salmansi...@gmail.com> wrote:
Hi Druid,
 
Can anyone help figure out what should i do to solve this problem?


Salman

Jul 16, 2018, 12:48:08 AM
to Druid User
Yes, I restarted the Historical and Broker after changing the configuration... but do I have to change the Historical and Broker configuration in conf-quickstart or in conf? I am still confused about this.

Salman

Jul 16, 2018, 7:11:30 AM
to Druid User
Hey Jihoon,

I have already tried these two options, but nothing worked for me:

"
1) Increasing druid.processing.buffer.sizeBytes. You need to set it for all your Historicals (http://druid.io/docs/latest/configuration/historical.html) and Brokers (http://druid.io/docs/latest/configuration/broker.html). If you have realtime nodes, you need to set it for them as well (http://druid.io/docs/latest/configuration/realtime.html).

2) Increasing druid.query.groupBy.maxOnDiskStorage to enable disk spilling (http://druid.io/docs/latest/querying/groupbyquery.html) "
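As a rough sketch of those two options together in a Historical's runtime.properties (the sizes below are placeholders to adapt to your RAM and core count, not recommendations):

```
# conf/druid/historical/runtime.properties
# Off-heap buffer per processing thread, in bytes (512 MB here)
druid.processing.buffer.sizeBytes=536870912
# Commonly set to (number of cores - 1)
druid.processing.numThreads=7
# Allow up to 1 GB of on-disk spill per groupBy query
druid.query.groupBy.maxOnDiskStorage=1073741824
```

The processing buffers are allocated off-heap, so they count against -XX:MaxDirectMemorySize in jvm.config, not the heap.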
                   

Salman

Jul 17, 2018, 3:22:51 AM
to Druid User

Hey Druid,
When I run the groupBy query it takes too much time. I have already increased `druid.query.groupBy.maxOnDiskStorage` and `druid.processing.buffer.sizeBytes`, but the query time is still very long. How can I lower the time it takes to query the data?

Suhas

Jul 17, 2018, 5:50:04 AM
to Druid User
Hey Salman,

One of the easiest ways is to make sure each of your segments is at least 300-500 MB.
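Segment size is mostly controlled at ingestion time. As an illustrative fragment (field names follow the batch index task spec; the row count is a placeholder to tune so segments land in the 300-500 MB range), the tuningConfig of an index task can target a number of rows per segment:

```
"tuningConfig": {
  "type": "index",
  "targetPartitionSize": 5000000
}
```

Coarsening segmentGranularity in the granularitySpec is another way to produce fewer, larger segments.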

Suhas

Salman

Jul 17, 2018, 5:54:28 AM
to Druid User
I have only 1 segment, with a size of 182 MB, and it still takes too much time.

GunWoo Kim

Jul 19, 2018, 11:28:23 AM
to Druid User
Hi Salman.

What are your Broker and Historical node server specs (total physical memory and number of cores)?
Are all node processes running on a single server, or is each node on a different server?

GunWoo Kim

Jul 19, 2018, 11:32:24 AM
to Druid User
Hi Salman, conf-quickstart contains the configurations for the quickstart guide (http://druid.io/docs/0.12.1/tutorials/quickstart.html).

If you are setting up a cluster for production or a PoC, use the conf directory.
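For example, starting a Historical from the cluster configuration reads jvm.config and runtime.properties from conf/ rather than conf-quickstart/ (command as in the clustering docs, run from the Druid install directory):

```
java `cat conf/druid/historical/jvm.config | xargs` \
  -cp "conf/druid/_common:conf/druid/historical:lib/*" \
  io.druid.cli.Main server historical
```

If you have been editing conf/ but launching with the quickstart scripts, your changes were never picked up, which would explain the unchanged error.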