Thank you for your prompt response, James. I have been discussing the same with my team.

We are running HAPI FHIR version 6.2.1.
We are using an AWS RDS Aurora PostgreSQL database, which we wish to upgrade from version 11.17 to 16.1. However, the upgrade keeps failing at "pg_dump: reading large objects"; we suspect this is because the pg_largeobject table has grown too large.
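In case it is useful, I believe the table size can be checked with something like the following (the connection details below are placeholders for our environment):

```shell
# Estimate the on-disk size of pg_largeobject, including its index and TOAST data.
# Host/user/database names are placeholders -- substitute your own Aurora endpoint.
psql -h my-cluster.cluster-example.us-east-1.rds.amazonaws.com -U hapi -d hapi -c \
  "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));"
```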
Based on your previous response, am I correct to assume the following? Instead of moving all large objects out of the database (to S3) and storing reference links in the database via some FHIR utility (which we are not sure how to achieve), we could simply upgrade to the 7th major version of HAPI FHIR and run the "resource reindex job". Would that reduce the large object size significantly enough to let us upgrade the database with minimal downtime?
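For context, our current understanding is that after upgrading we would kick off the reindex with something like the following (the base URL is a placeholder, and the parameter shape is our reading of the HAPI FHIR server-level $reindex operation, so please correct us if it is wrong):

```shell
# Hypothetical sketch: trigger HAPI FHIR's server-level $reindex batch job.
# Base URL is a placeholder; the "url" parameter selects which resources to reindex.
curl -X POST 'https://hapi.example.org/fhir/$reindex' \
  -H 'Content-Type: application/fhir+json' \
  -d '{
        "resourceType": "Parameters",
        "parameter": [
          { "name": "url", "valueString": "Binary?" }
        ]
      }'
```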
Are there any reference links you can share that would help me and my team understand this approach and proceed with it?
Regards,
Shivam Bhaskar