2025-06-26 15:14:03.905 CEST [65097] hapi@hapi ERROR: current transaction is aborted, commands ignored until end of transaction block
2025-06-26 15:14:03.905 CEST [65097] hapi@hapi STATEMENT: update hfj_resource rt1_0 set sp_index_status=$1 where rt1_0.res_id=$2
2025-06-26 15:14:05.073 CEST [65097] hapi@hapi ERROR: duplicate key value violates unique constraint "idx_codesystem_and_ver"
2025-06-26 15:14:05.073 CEST [65097] hapi@hapi DETAIL: Key (codesystem_pid, cs_version_id)=(3, 1.0.0) already exists.
2025-06-26 15:14:05.073 CEST [65097] hapi@hapi STATEMENT: update trm_codesystem_ver set cs_display=$1,codesystem_pid=$2,cs_version_id=$3,partition_id=$4,res_id=$5 where pid=$6
We tried the PostgreSQL commands REINDEX DATABASE and REINDEX SYSTEM, but even after that, running VACUUM FULL on the database does not reduce the 17 million entries in pg_largeobject.
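For clarity, the maintenance we attempted was along these lines (run as a superuser in psql; the database name `hapi` is assumed from the log lines above, and the exact invocations are reconstructed):

```sql
-- Rebuild all indexes in the database, then the system catalog indexes
REINDEX DATABASE hapi;
REINDEX SYSTEM hapi;

-- Rewrite tables to reclaim disk space; this did not shrink pg_largeobject
VACUUM FULL;
```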
Any idea how we could get rid of these large objects? Our database backups are really slow because of them.
Kind regards,
Chris
To view this discussion visit https://groups.google.com/d/msgid/hapi-fhir/b8f45289-ac41-4950-abd3-2cf600a76255n%40googlegroups.com.