In addition to competing for resources with HAPI, Lucene is only a library: it takes a lot more effort to build a high-performing (for both indexing and search), scalable, and stable search layer on top of it, and that is what Elasticsearch and Solr do. So I think you are heading in the right direction.
There is one major challenge, though, with Elasticsearch, or with any other search strategy that is not the database itself: what I call the "split memory" problem, or the "join" problem. It shows up when a query has multiple clauses, some fulfilled by the database and others by search. You can fetch 10,000 results from search, but that may not be enough to produce even a single result after the "join", and that can cause performance problems, an incomplete result set, or both.
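To make that concrete, here is a minimal, purely illustrative Java sketch (the class and method names are hypothetical, not HAPI's actual code) of what the application-level "join" looks like and why a bounded search page can silently drop matches:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical illustration of the "split memory" / "join" problem:
// one clause is answered by the search engine, another by the database,
// and the application layer has to intersect the two result sets itself.
public class SplitQueryJoinSketch {

    // Stand-in for the search engine: returns at most `limit` resource IDs
    // matching the full-text clause (e.g. a :text search parameter).
    static List<Long> searchClauseIds(int limit) {
        // ...in reality an Elasticsearch query with a capped page size
        return List.of(/* up to `limit` IDs, in relevance order */);
    }

    // Stand-in for the database: IDs satisfying the other clause
    // (e.g. a date range or token parameter indexed in Postgres).
    static Set<Long> databaseClauseIds() {
        return Set.of(/* IDs matching the SQL predicate */);
    }

    public static void main(String[] args) {
        // Fetch a bounded page of candidates from search...
        List<Long> searchHits = searchClauseIds(10_000);
        Set<Long> dbHits = databaseClauseIds();

        // ...then "join" in application memory.
        List<Long> finalResults = searchHits.stream()
                .filter(dbHits::contains)
                .collect(Collectors.toList());

        // If the true matches rank below the 10,000th search hit, they are
        // lost: the result set is incomplete, and raising the limit just to
        // re-join makes the query increasingly expensive.
        System.out.println("Results after join: " + finalResults.size());
    }
}
```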
Our overall experience with using Elasticsearch with HAPI is that it generally works well, but issues pop up once in a while, which is why we are experimenting with a Postgres-only configuration; this value set expansion issue was noticed during that experiment. With sufficient attention from the HAPI team, which I suspect it has not yet received, I think the Elasticsearch option is a viable solution.
Thanks,
Xiaocheng