Building and deploying a terminology server based on HAPI FHIR


Mohan

Nov 16, 2024, 12:36:10 PM
to HAPI FHIR
Hi everyone,
We are new to FHIR in general.
We are building a core EMR on top of FHIR.

Since no terminology server is available for us to use in our region,
we are thinking of deploying our own terminology server based on HAPI FHIR.

We understand that deploying and running a terminology server is not easy. However, HAPI FHIR has solid support for rolling out a terminology server for moderate use.

Our main initial use of this terminology server will be the $expand operation, to provide auto-complete functionality in the UI.

We plan to deploy many of the value sets defined at https://hl7.org/fhir/R4/terminologies-valuesets.html, which include FHIR-defined value sets as well as value sets drawing on SNOMED CT, LOINC, etc.

I was playing with the HAPI FHIR Docker instance and could $expand many of the FHIR internal value sets, for example http://localhost:8888/fhir/ValueSet/$expand?url=http://hl7.org/fhir/ValueSet/administrative-gender. However, for some, for example http://localhost:8888/fhir/ValueSet/$expand?url=http://hl7.org/fhir/ValueSet/account-type, it responds with the error `HAPI-0831: Expansion of ValueSet produced too many codes (maximum 1,000) - Operation aborted! - ValueSet with URL \"ValueSet.url[http://hl7.org/fhir/ValueSet/account-type]\" was expanded using an in-memory expansion`, even with `&count=20`.
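For context, the kind of call we would eventually make from our backend for the auto-complete use case looks roughly like this (a minimal sketch using the HAPI generic client; the base URL, ValueSet, filter text, and count are placeholders):

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.IntegerType;
import org.hl7.fhir.r4.model.Parameters;
import org.hl7.fhir.r4.model.StringType;
import org.hl7.fhir.r4.model.UriType;
import org.hl7.fhir.r4.model.ValueSet;

public class ExpandExample {
    public static void main(String[] args) {
        // Placeholder base URL of our test instance
        FhirContext ctx = FhirContext.forR4();
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8888/fhir");

        // $expand input: which ValueSet, the text the user typed, and a page size
        Parameters in = new Parameters();
        in.addParameter().setName("url").setValue(new UriType("http://hl7.org/fhir/ValueSet/administrative-gender"));
        in.addParameter().setName("filter").setValue(new StringType("fem"));
        in.addParameter().setName("count").setValue(new IntegerType(20));

        // Invoke ValueSet/$expand; the client wraps the returned ValueSet in a Parameters resource
        Parameters out = client.operation()
                .onType(ValueSet.class)
                .named("$expand")
                .withParameters(in)
                .execute();

        ValueSet expanded = (ValueSet) out.getParameterFirstRep().getResource();
        expanded.getExpansion().getContains()
                .forEach(c -> System.out.println(c.getCode() + " | " + c.getDisplay()));
    }
}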


Now to the implementation side. As I understand it, the steps are:
  1. Create a project based on one of the following:
    1. hapi-fhir-jpaserver-starter - contains many unrelated dependencies, so we may need to remove some.
    2. hapi-fhir-spring-boot-starter - will need the JPA server and validation dependencies added; not tested yet.
  2. Upload terminologies:
    1. FHIR built-in value sets are already available without any upload.
    2. External SNOMED CT, LOINC codes, etc. need to be uploaded (see the sketch after this list).
  3. Once uploaded successfully, we should be able to call the $expand operation, possibly with `filter` and `count` for larger value sets.
  4. We will display the results in the UI, possibly with auto-complete.
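For step 2.2, my understanding is that the external code systems can be loaded with the hapi-fhir-cli upload-terminology command (which wraps the $upload-external-code-system operation), roughly like this, with placeholder file names and port:

./hapi-fhir-cli upload-terminology -v r4 -t http://localhost:8888/fhir -u http://loinc.org -d Loinc_x.xx.zip
./hapi-fhir-cli upload-terminology -v r4 -t http://localhost:8888/fhir -u http://snomed.info/sct -d SnomedCT_xxx.zip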

There are a few questions that I would like to ask.
  1. Are the above steps enough to run a HAPI FHIR-based terminology server?
  2. Is any customization needed for the validation support modules or the ValidationSupportChain described at https://hapifhir.io/hapi-fhir/docs/validation/validation_support_modules.html? (See the sketch after this list for what I have in mind.)
  3. Do we need external Elasticsearch, or will the embedded indexing library be enough to support, say, <20 $expand operations per second? We will be testing this anyway, but please share any similar experiences.
  4. How can we solve the aforementioned error when expanding http://localhost:8888/fhir/ValueSet/$expand?url=http://hl7.org/fhir/ValueSet/account-type? Neither `filter` nor `count` solves the issue.
  5. Any suggestions, pointers, information, etc. would be appreciated.
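Regarding question 2, this is roughly the chain that page describes as I understand it (a minimal sketch of the standard modules; I assume the JPA starter already wires up something equivalent, which is partly why I am asking whether customization is needed):

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.context.support.DefaultProfileValidationSupport;
import ca.uhn.fhir.validation.FhirValidator;
import org.hl7.fhir.common.hapi.validation.support.CommonCodeSystemsTerminologyService;
import org.hl7.fhir.common.hapi.validation.support.InMemoryTerminologyServerValidationSupport;
import org.hl7.fhir.common.hapi.validation.support.ValidationSupportChain;
import org.hl7.fhir.common.hapi.validation.validator.FhirInstanceValidator;

public class ValidationChainSketch {
    public static FhirValidator buildValidator(FhirContext ctx) {
        // Chain of validation support modules, consulted in order
        ValidationSupportChain chain = new ValidationSupportChain(
                new DefaultProfileValidationSupport(ctx),           // built-in StructureDefinitions/ValueSets
                new CommonCodeSystemsTerminologyService(ctx),       // common code systems (UCUM, languages, ...)
                new InMemoryTerminologyServerValidationSupport(ctx) // in-memory code validation and expansion
        );

        FhirValidator validator = ctx.newValidator();
        validator.registerValidatorModule(new FhirInstanceValidator(chain));
        return validator;
    }
}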

Regards,
Mohan

Mohan

Jan 31, 2025, 6:56:59 AM
to HAPI FHIR
Hi folks,

I have made some progress.

1. I cloned hapi-fhir-jpaserver-starter.
2. I started the application on my laptop, with a file-based H2 database and local Lucene indexing.
3. I uploaded the LOINC zip and waited about 40 min for the expansion to complete.
4. Then I uploaded the SnomedCTxxxxxxxxxxxxx.zip file and waited about 30 min for some processing to complete.
5. (Optional) The DB size was near 2 GB; after `shutdown compact` it was less than 1 GB.
6. Then I tested $expand for various value sets; many of them still fail with errors like the following:


{
  "resourceType": "OperationOutcome",
  "issue": [ {
    "severity": "error",
    "code": "processing",
    "diagnostics": "HAPI-0831: Expansion of ValueSet produced too many codes (maximum 100) - Operation aborted! - ValueSet with URL \"ValueSet.url[http://hl7.org/fhir/ValueSet/observation-codes]\" was expanded using an in-memory expansion"
  } ]
}


{
  "resourceType": "OperationOutcome",
  "issue": [ {
    "severity": "error",
    "code": "processing",
    "diagnostics": "HAPI-0831: Expansion of ValueSet produced too many codes (maximum 50) - Operation aborted! - ValueSet with URL \"ValueSet.url[http://hl7.org/fhir/ValueSet/approach-site-codes]\" was expanded using an in-memory expansion"
  } ]
}

I have updated `hapi-fhir-jpaserver-starter` to use `hapi-fhir` version 7.6.1, the latest at this time. Still no luck.

I have seen a few similar posts in this group which explain that pre-expanded value sets should be fine to fetch.
However, I must be doing something wrong here.
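From those posts, my understanding is that HAPI pre-calculates stored ValueSets in a background task and only falls back to the capped in-memory expansion when a ValueSet has not been pre-expanded yet. The knobs seem to live on JpaStorageSettings; this is what I am planning to experiment with next (a sketch only, I am not certain these are the right settings or values):

// Sketch only - I am not certain these are the right settings or values
jpaStorageSettings.setPreExpandValueSets(true);     // I believe this enables the background pre-expansion task
jpaStorageSettings.setMaximumExpansionSize(10_000); // the cap quoted in the HAPI-0831 message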

Has anyone encountered this before?
Any help will be appreciated!

Thanks,
Mohan

Mohan

Feb 2, 2025, 8:04:03 AM
to HAPI FHIR
I made some tweaks to the expansion-size settings, which I am not sure are the right things to touch, but it is working:

// raise the limit behind the HAPI-0831 "too many codes" error
jpaStorageSettings.setMaximumExpansionSize(1_000_000);
// raise the corresponding cap for pre-expanded value sets (my understanding of this setting)
jpaStorageSettings.setPreExpandValueSetsMaxCount(1_000_000);

Started server with -Xms8G -Xmx8G

I added a response interceptor that writes the response to a local text file, because I could never get a large response back in the browser.
Surprisingly, the response JSON file is 1.3 GB, more than 20 million lines of pretty-printed JSON, for just the http://hl7.org/fhir/ValueSet/body-site value set.
Another issue is that `&offset=1&count=100` does not seem to work.
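For reference, the interceptor is roughly the following (a sketch; the output path is a placeholder and error handling is minimal). I register it on the server with registerInterceptor().

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import ca.uhn.fhir.interceptor.api.Hook;
import ca.uhn.fhir.interceptor.api.Interceptor;
import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import org.hl7.fhir.instance.model.api.IBaseResource;

@Interceptor
public class DumpResponseInterceptor {

    // Write every outgoing response resource to a local file so large expansions can be inspected offline
    @Hook(Pointcut.SERVER_OUTGOING_RESPONSE)
    public void dumpResponse(RequestDetails requestDetails, IBaseResource response) {
        if (response == null) {
            return;
        }
        String json = requestDetails.getFhirContext()
                .newJsonParser()
                .setPrettyPrint(true)
                .encodeResourceToString(response);
        try {
            Files.writeString(Path.of("/tmp/last-response.json"), json, StandardCharsets.UTF_8);
        } catch (Exception e) {
            // ignored in this sketch
        }
    }
}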

While this is working with the tweaks, I feel many things can be improved, such as:
- sending compressed JSON
- fixing the pagination of the response

I think I am not the only one having this issue.
Many people must have crossed this path.
I would appreciate it if you could share your experiences/challenges and/or the workarounds/solutions that you came up with.

Regards,
Mohan