"Cold" read-only index ad-hoc loading by query


Artem Shpynov

Oct 7, 2015, 4:29:39 AM
to heliosearch
Hi guys,

Currently we are using another Lucene-based solution, Elasticsearch, and it has one simple but significant architectural problem that makes further use of it impossible for us.

Our application is a log aggregator. We record data and provide the ability to search it. The data is split into several indexes by date.

We need to store it for a long period, several months to years, and the total index size is about 30-50 TB per hardware server.
We only need fast search over the few most recent days (a few seconds per query); all "historical" data can be searched much more slowly, up to 100 times slower (minutes and hours are acceptable).

But Elasticsearch pre-loads every known, open index into the heap, at least the Lucene "term index" part, which is about 0.5% of the total index size (125 GB per node in our case).

We want some kind of "frozen read-only" indexes that use heap only at query time, perhaps loading or warming up an index per query (yes, this can be slow) and unloading (freeing) it afterwards.


So we are investigating a replacement, and Solr / Heliosearch are potential candidates.

Does Heliosearch or Solr have this kind of feature?


Best Regards,

Artem Shpynov

Yonik Seeley

Oct 7, 2015, 4:13:51 PM
to helio...@googlegroups.com
Hi Artem,
Heliosearch is no longer under development, and we have been bringing
much of its functionality back into Solr. I'd follow up on the
solr-user list.

Solr can load/unload cores explicitly, and can "open-on-demand" via
the "lazy cores" functionality. I know of a number of companies using
this to support very large indexes with transient requests.
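
Roughly (from memory, so double-check against the reference guide for your
Solr version), a core is marked lazy/transient in its core.properties file:

  name=logs_2015_01_07
  loadOnStartup=false
  transient=true

and solr.xml caps how many transient cores are kept open at once; when the
cap is exceeded, the least recently used one is closed:

  <solr>
    <int name="transientCacheSize">8</int>
  </solr>

Cores can also be loaded/unloaded explicitly through the CoreAdmin API, e.g.

  http://localhost:8983/solr/admin/cores?action=UNLOAD&core=logs_2015_01_07

The core name and cache size above are just placeholders for illustration.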

-Yonik