We are implementing an SDOH Exchange flow for one of the HL7 connectathons. As part of this flow we create a Task resource in one Logica sandbox, and then another Task in a second sandbox. The important detail is that we add an additional custom profile to these tasks. After the task is created in the first sandbox, the request to the second sandbox fails with the following diagnostics:
"Failed to call access method: org.springframework.dao.DataIntegrityViolationException: could not execute batch; SQL [insert into HFJ_RES_TAG (PARTITION_DATE, PARTITION_ID, TAG_ID, RES_ID, RES_TYPE, PID) values (?, ?, ?, ?, ?, ?)]; constraint [null]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute batch"
I created a simplified example that reproduces the issue (using the HAPI FHIR Java library, v5.2.1):
---------------------------------------
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Task;

public static void main(String[] args) {
    FhirContext fhirContext = FhirContext.forR4();

    // Create a Task carrying the custom SDOH profile in the first sandbox
    IGenericClient client = fhirContext.newRestfulGenericClient(
        "https://api.logicahealth.org/GravitySandboxNew/open");
    Task t1 = new Task();
    t1.getMeta().addProfile(SDOHProfiles.TASK);
    client.create().resource(t1).execute();

    // Create a second Task with the same profile in the second sandbox
    IGenericClient cbroClient = fhirContext.newRestfulGenericClient(
        "https://api.logicahealth.org/GravitySDOHCBRO/open");
    Task t = new Task();
    t.getMeta().addProfile(SDOHProfiles.TASK);
    cbroClient.create().resource(t).execute();
}
---------------------------------------
It is reproducible about 99% of the time. I tried to emulate the same requests with Postman, but for some reason the failure is much harder to reproduce there, even though I set the same request headers that HAPI does. Even stranger: when I create the task in the first sandbox using HAPI and in the second sandbox using Postman, the failure again reproduces about 99% of the time. Requests to the second sandbox keep failing for another 10-30 seconds, and after that they succeed.
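For reference when replaying the request in Postman: the body HAPI serializes for the snippet above is just a Task whose only populated element is meta.profile. This stdlib-only sketch builds the equivalent JSON; the profile URL is a placeholder standing in for SDOHProfiles.TASK, which is not shown here.

```java
public class RawTaskBody {
    // Placeholder -- substitute the actual value of SDOHProfiles.TASK.
    static final String TASK_PROFILE = "http://example.org/StructureDefinition/sdoh-task";

    // Minimal Task body equivalent to what the HAPI client sends:
    // a Task resource with a single meta.profile entry.
    static String taskJson(String profile) {
        return "{\"resourceType\":\"Task\",\"meta\":{\"profile\":[\"" + profile + "\"]}}";
    }

    public static void main(String[] args) {
        // POST this body to <sandbox-base>/Task with
        // Content-Type: application/fhir+json
        System.out.println(taskJson(TASK_PROFILE));
    }
}
```

Posting this same body by hand lets you rule out differences in the serialized payload and isolate timing/headers as the variable.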