Hey Team,
We use Druid in our production setup and we are trying to tune it for performance. This is the version I use:
Druid v0.12.1
- I understand that segment query latency depends on the number of cores across historicals versus the number of segments, and I have tuned this to reduce query time.
- Now, as mentioned in the above link, I am trying to reduce memory-mapping overhead and have set "druid.server.maxSize" to match my available physical memory, so that the historical behaves like an in-memory store, i.e. segments are served from physical memory.
- Despite doing this, I do not see a significant improvement in query performance (query/segment/time).
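For reference, the relevant settings in my historical runtime.properties look roughly like this (the sizes and paths below are illustrative, not my exact values):

```properties
# Historical runtime.properties (values illustrative)
# Cap on the total segment bytes this historical will serve; set close to
# available physical RAM so mapped segments can stay resident in the page cache.
druid.server.maxSize=250000000000

# Local-disk segment cache location(s); maxSize here should line up with the above.
druid.segmentCache.locations=[{"path":"/var/druid/segment-cache","maxSize":250000000000}]
```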
Is the approach I am using correct? I do not even see a spike in memory usage, so I suspect the segments are not actually being served from physical memory.
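In case it matters, this is how I have been checking memory usage on the historicals (Linux boxes). My understanding is that memory-mapped segment files show up as OS page cache ("buff/cache") rather than as process heap/RSS, so a plain process-memory check might miss them, which could explain why I see no spike:

```shell
# Overall memory: mapped segment files count toward "buff/cache", not the
# Druid process's own RSS.
free -h

# The Cached figure in /proc/meminfo includes mmap'd file pages.
grep -E '^(MemTotal|MemFree|Cached)' /proc/meminfo
```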
If every segment is only loaded into memory on demand anyway, would I fare better with compute-optimized machines rather than memory-optimized ones? Am I correct in thinking so?
Thanks
--Kiran.