Hi Kyle,

I'm not sure what is going on, but Lucene does not index either of those tables (it only indexes HFJ_RESOURCE and a few terminology tables). Are you perhaps referring to Derby? If so, you almost certainly want to migrate to a more scalable database platform (e.g. Postgres) if you're doing "real things" with the server.

There is already a scheduled process that expires old search result cache entries from the tables you list; it can be configured using the ExpireSearchResultsAfterMillis property.

Cheers,
James
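For illustration, a minimal sketch of what that configuration might look like in a Spring setup (the setter names follow what James describes and what appears later in this thread; the package name, the @Bean wiring, and the five-minute value are assumptions for the example, not taken from the thread):

import ca.uhn.fhir.jpa.dao.DaoConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FhirServerConfig {

    @Bean
    public DaoConfig daoConfig() {
        DaoConfig config = new DaoConfig();
        // Turn on the scheduled job that purges stale entries from
        // HFJ_SEARCH and HFJ_SEARCH_RESULT.
        config.setExpireSearchResults(true);
        // Expire cached search results five minutes after creation
        // (the value is in milliseconds).
        config.setExpireSearchResultsAfterMillis(5 * 60 * 1000L);
        return config;
    }
}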
On Fri, Nov 17, 2017 at 5:56 PM, <dark...@gmail.com> wrote:
Hey,

I've been running the JPA server (v2.5 most recently) in a container-based hosting environment (I believe they are using Docker). I've recently noticed that every time I redeploy the server, the Lucene indexes get wiped out and need to be rebuilt, which causes a fairly large heap spike and thereby requires extra heap space to bring the server back up.

One tactic I've used to get the server back up without increasing the container's memory limit (which is already at 2 GB) is to truncate the HFJ_SEARCH and HFJ_SEARCH_RESULT tables. Once the records are cleared out of those tables, the spike is small enough to handle. This leads me to a few questions:

1) Is there a reason why all of the records are still in the HFJ_SEARCH* tables? As far as I knew, the cached search results should expire after a minute by default. Is it safe to have a background process delete records from those tables once they are more than a minute old?

2) Do the HFJ_SEARCH* tables get indexed for any particular reason, or does Hibernate/Lucene just index everything by default?

3) Is it possible to completely disable Lucene indexing? I've read conflicting things about whether it is possible and what the correct approach is. Does any functionality break if indexing is disabled?

Thanks,
Kyle
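For reference, the manual cleanup described above might look something like this over plain JDBC (the table names come from the post; the connection details and the assumption that HFJ_SEARCH_RESULT must be cleared before HFJ_SEARCH because of a foreign key are mine):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SearchCacheCleanup {
    public static void main(String[] args) throws Exception {
        // The JDBC URL and credentials are placeholders; point them
        // at whatever database backs the JPA server.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/hapi", "user", "pass");
             Statement stmt = conn.createStatement()) {
            // Delete child rows first: HFJ_SEARCH_RESULT is assumed to
            // reference HFJ_SEARCH via a foreign key, so order matters.
            stmt.executeUpdate("DELETE FROM HFJ_SEARCH_RESULT");
            stmt.executeUpdate("DELETE FROM HFJ_SEARCH");
        }
    }
}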
Thanks, James. I set the following on my DaoConfig:

daoConfig.setExpireSearchResults(true);                // turn on the scheduled expiry process
daoConfig.setExpireSearchResultsAfterMillis(300000L);  // expire cached results after 5 minutes
I then watched as the HFJ_SEARCH table steadily shrank from over 300k records down to fewer than 50. Now when I hit a GC overhead limit error I am able to restart the server without issue (before, I had to manually truncate those search tables before restarting).
I'll still work on grabbing a heap dump to see if I can debug my heap memory issues; my guess is that the heap just needs to be bigger. Currently I'm only able to set the max heap to 1024m. Do you have any guidance on JVM tuning for particular use cases?
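One way to capture that dump automatically is to have the JVM write it when the GC overhead limit error (an OutOfMemoryError) is next hit. These are standard HotSpot flags; the heap size matches the 1024m limit above, and the dump path is a placeholder:

JAVA_OPTS="-Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps"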
Thanks again,
Kyle