Hello everyone,
I am setting up a FHIR server through the Docker image, and I am trying to understand several functions that need to be configured.
In particular, we are storing a large amount of data on our server (mostly synthetic data for now, for POC purposes). We then export patient data through the bulk export operation `$export`, which is enabled in application.yaml.
We are facing two issues that we have not found a way to control so far:
* Multiple jobs are created internally, as can be seen in the DB with the query `select id, start_time, stat from bt2_job_instance order by create_time desc limit 50;`
* Probably due to the large number of exported patients, once these files expire we experience Java OutOfMemoryError crashes (possibly Java is loading everything into memory, which is not enough). The current server is a 16 GB machine, and the configured Java memory options allow a maximum heap of 4 GB, based on the default values.
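One thing we tried as a first mitigation for the OOM was raising the heap ceiling. A minimal docker-compose sketch, assuming the standard `hapiproject/hapi` image (the image name and sizes here are our assumptions, not anything prescribed by the project); `JAVA_TOOL_OPTIONS` is picked up by any HotSpot JVM, so it works without modifying the image:

```yaml
# Sketch only: raise the JVM heap on a 16 GB host.
# Assumptions: hapiproject/hapi image, 12 GB max heap leaving
# headroom for the OS and off-heap memory. Adjust to your setup.
services:
  fhir:
    image: hapiproject/hapi:latest
    environment:
      # JAVA_TOOL_OPTIONS is read automatically by the JVM at startup.
      # The heap dump flag helps diagnose where the OOM actually occurs.
      JAVA_TOOL_OPTIONS: "-Xms4g -Xmx12g -XX:+HeapDumpOnOutOfMemoryError"
```

This only raises the ceiling, though; it does not answer whether the export can be made to stream instead of buffering, which is what the questions below are about.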
The questions that arise are:
- Can we somehow control the number of jobs that can run at the same time through FHIR? Can we limit them?
- Can we limit the maximum file size of the exported files? If so, would that solve the Java OOM issue we are experiencing, and what would be the impact? There seems to be an option called `file_max_capacity`, which defaults to 1000. Should that be used?
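For reference, this is roughly what we have in mind for application.yaml. This is a sketch under assumptions: `bulk_export_enabled` is the switch we already use, while the capacity key name below just mirrors the option mentioned above and may be spelled differently (or not exposed at all) in the starter version we run:

```yaml
# Sketch, not a verified config. Key names below the enabled flag
# are assumptions based on the option mentioned in the question.
hapi:
  fhir:
    bulk_export_enabled: true
    # If exposed by the starter: our understanding is that this caps the
    # number of resources written per output file before rolling over to
    # a new file (default 1000) -- i.e. a resource count, not bytes on disk.
    bulk_export_file_maximum_capacity: 1000
```

If that understanding is right, lowering it would produce more, smaller files per export rather than directly bounding memory use, which is part of what we would like confirmed.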
Thank you in advance for your responses.