Upgrading FHIR Database


Shivam Bhaskar

Jun 18, 2024, 7:32:38 AM
to HAPI FHIR
We are trying to upgrade our FHIR RDS PostgreSQL database from 11.17 to 16.1, but the upgrade gets stuck at "pg_dump: reading large objects" and fails.
How can I upgrade the database?
One option I am considering is to move all the large objects (I do not know what they are) out of the database into an S3 bucket and have the database refer to those objects in S3 instead of storing them internally. But we are not sure how to achieve that.

Our end goal is to upgrade the FHIR RDS PostgreSQL database from 11.17 to 16.1 with the least downtime possible.

James Agnew

Jun 18, 2024, 7:38:54 AM
to HAPI FHIR
You haven't mentioned which version of HAPI FHIR you are using (please see Getting Help for tips on how to ask effective questions here), but if you aren't already on the latest version of HAPI FHIR I'd recommend upgrading and then running a resource reindex job. We no longer use the pg_largeobject table in almost all cases because of the many issues it causes, so a reindex on a current HAPI FHIR release will migrate all data out of that table. That may well unstick whatever is causing that failure.

Cheers,
James

Shivam Bhaskar

Jun 20, 2024, 8:47:48 AM
to HAPI FHIR
Thank you for your prompt response, James. I was discussing the same thing with my team.

We are running HAPI FHIR version 6.2.1.

We are using an AWS RDS Aurora PostgreSQL database for it, which we wish to upgrade from version 11.17 to 16.1. However, the upgrade keeps failing at "pg_dump: reading large objects". We suspect this is due to pg_largeobject being too large.
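For anyone else debugging a similar failure, a quick sanity check is to measure pg_largeobject directly before attempting the upgrade. The sketch below (assuming standard PostgreSQL system catalogs; how you connect, via psql or a driver, is up to you) just builds the query to run:

```python
# Sketch: SQL to report how much space pg_largeobject occupies and how
# many large objects exist. pg_total_relation_size and
# pg_largeobject_metadata are standard PostgreSQL; run the output in psql
# or any Postgres client against the RDS instance.

def largeobject_size_sql() -> str:
    """Return SQL that reports pg_largeobject's total size and LO count."""
    return (
        "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject')) "
        "AS lo_size, "
        "(SELECT count(*) FROM pg_largeobject_metadata) AS lo_count;"
    )

print(largeobject_size_sql())
```

If the reported size is a large fraction of the database, that is consistent with pg_dump stalling while reading large objects.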

Based on your previous response, am I correct to assume that, instead of moving all large objects out of the database (to S3) and creating reference links in the database using some FHIR utility (which we are not sure how to achieve), we can simply upgrade to the 7.x major version and run the resource reindex job? Will that reduce the large-object size significantly enough that we can upgrade the database with the least downtime?
Are there any reference links you can share to help me and my team understand and proceed with this approach?

Regards,
Shivam Bhaskar 

James Agnew

Jun 20, 2024, 11:53:18 AM
to HAPI FHIR
Yup, that's correct. The reindex operation is described here (note this is Smile CDR documentation, but this part applies to the open source product too). You'll want to set optimizeStorage to ALL_VERSIONS in order to migrate all data away from the large-object table in your database.
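For reference, on a recent HAPI FHIR JPA server the reindex is typically kicked off via the server-level $reindex operation with a FHIR Parameters body. A minimal sketch (the base URL is a placeholder, and the parameter name follows the reindex documentation; verify it against your release before running):

```python
import json
import urllib.request

# Hypothetical server location; replace with your own HAPI FHIR base URL.
BASE_URL = "http://localhost:8080/fhir"

def reindex_parameters() -> dict:
    """Build the Parameters body for $reindex with optimizeStorage set
    to ALL_VERSIONS, so resource bodies in every version are migrated
    out of pg_largeobject."""
    return {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "optimizeStorage", "valueCode": "ALL_VERSIONS"},
        ],
    }

def start_reindex() -> None:
    """POST the $reindex operation to start the (asynchronous) batch job."""
    req = urllib.request.Request(
        f"{BASE_URL}/$reindex",
        data=json.dumps(reindex_parameters()).encode(),
        headers={"Content-Type": "application/fhir+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())

# Inspect the payload; call start_reindex() against a live server to run it.
print(json.dumps(reindex_parameters(), indent=2))
```

The job runs asynchronously in the batch framework, so plan for it to take a while on a large database before scheduling the RDS upgrade.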

Cheers,
James