Deadlock exception on SQL Server for HFJ_FORCED_ID table


saji...@gmail.com

Feb 19, 2020, 7:16:22 PM
to HAPI FHIR

Hi all,
I am trying to POST some resources to a HAPI server with a SQL Server backend. However, some of the transactions fail with a deadlock exception. Looking at the SQL log, I can see that the lock occurs on this table:


[attachment: forced_id.PNG]



Enabling the Hibernate SQL logs in my app shows inserts and writes to the table at what appears to be the same time:


[attachment: forced_error.PNG]



Scenario:
I have a few hundred bundles of PractitionerRole resources that I am POSTing to the HAPI server in parallel, at about 50 bundles per second.
Each bundle contains 100 PractitionerRoles.
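
For reference, the upload loop looks roughly like this (a minimal sketch; the endpoint URL, FHIR version, and the buildBundles() helper are illustrative, not my exact code):

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelUploader {
    public static void main(String[] args) {
        // The generic client is thread-safe, so one instance can be shared by all workers.
        FhirContext ctx = FhirContext.forR4();
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir"); // placeholder URL
        List<Bundle> bundles = buildBundles(); // hypothetical helper: assembles the bundles
        ExecutorService pool = Executors.newFixedThreadPool(50); // targeting ~50 bundles/second
        for (Bundle bundle : bundles) {
            pool.submit(() -> client.transaction().withBundle(bundle).execute());
        }
        pool.shutdown();
    }

    private static List<Bundle> buildBundles() {
        // Stub: in the real code this builds transaction bundles of 100 PractitionerRoles each.
        return List.of();
    }
}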


I presume this happens when a forced_id is created several times for the resources concurrently, causing a deadlock.


Other databases did not exhibit this (tested on HSQLDB and Derby), so I think it is specific to the SQL Server engine. However, I am limited to using SQL Server due to circumstances, so I would like to resolve this issue.

I have also tried setting the transaction isolation level to READ_UNCOMMITTED, to no avail, at both the Spring Boot application level and the database level:


spring.jpa.properties.hibernate.connection.isolation=READ_UNCOMMITTED
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
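(As an aside, Hibernate also accepts the equivalent numeric JDBC constant for this property, where java.sql.Connection.TRANSACTION_READ_UNCOMMITTED is 1, in case the name form is not being parsed: spring.jpa.properties.hibernate.connection.isolation=1)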


Does anyone have some insight into this matter?


P.S. Also posted on chat.fhir.org.

James Agnew

Feb 20, 2020, 5:47:24 AM
to saji...@gmail.com, HAPI FHIR
Oddly enough, a fix for an unnecessary circular dependency between HFJ_RESOURCE and HFJ_FORCED_ID was merged just two days ago.

Would you be able to bump your HAPI FHIR dependencies to 3.3.0-SNAPSHOT to see if this fix helps?

Cheers,
James


Sajith Jamal

Feb 23, 2020, 4:11:32 PM
to HAPI FHIR
Hi James,

I tried as you suggested and bumped the version, but it did not solve the problem.

I am attaching some more screenshots below from the SQL log that show the actual query and the isolation level.
Oddly enough, they seem to deadlock on SELECT queries, which, to my understanding, should not happen.

[attachment: forced1.PNG]

[attachment: foreced2.PNG]

[attachment: foreced3.PNG]


Let me know if I can provide any more information to help resolve this problem.

regards,
Sajith

James Agnew

Feb 23, 2020, 5:32:09 PM
to Sajith Jamal, HAPI FHIR
I'm not sure exactly what is going on here, but I would speculate that it's a timing issue related to multiple threads trying to create a resource with the same client-assigned ID at the exact same time.

I'd recommend looking at your upload logic to see if you can avoid multiple threads inserting the same resource by ID at the same time.
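
One way to arrange that, sketched below, is to shard the uploads by ID so a given ID is only ever handled by one single-threaded worker. (This is nothing HAPI-specific; the method and its arguments are illustrative.)

import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.instance.model.api.IBaseResource;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ShardedUploader {
    // Route each resource to a worker by hashing its client-assigned ID; the
    // same ID always lands on the same single-threaded executor, so no two
    // threads can write the same resource at the same time.
    static void uploadSharded(IGenericClient client, List<IBaseResource> resources, int nWorkers) {
        ExecutorService[] workers = new ExecutorService[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
        for (IBaseResource resource : resources) {
            int shard = Math.floorMod(resource.getIdElement().getIdPart().hashCode(), nWorkers);
            workers[shard].submit(() -> client.update().resource(resource).execute());
        }
        for (ExecutorService worker : workers) {
            worker.shutdown();
        }
    }
}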


Sajith Jamal

Feb 24, 2020, 4:50:35 PM
to HAPI FHIR
Hmmm... the whole point of the multi-threaded insert approach was that HAPI does not yet have a bulk importer to rely on, and I needed a speedy way to insert around 500k resources.

After posting the issue on Stack Overflow and digging into it further based on the comments there (https://stackoverflow.com/questions/60367009/deadlock-exception-during-select), I was able to bypass the actual deadlock issue by setting the transaction isolation level to READ_UNCOMMITTED.
However, I would like your recommendation on whether this approach is correct, as you know better than I do how the forced_id table is actually used.

If you can recommend a different bulk insert approach that bypasses this problem entirely, I am open to that as well.

regards,
Sajith

James Agnew

Feb 24, 2020, 8:54:34 PM
to Sajith Jamal, HAPI FHIR
FWIW, I'm definitely not saying that you can't multithread ingestion; I just mean that you should be careful that your multithreading isn't causing multiple threads to write the same resource at the same time.

READ_UNCOMMITTED is fine if your workload is primarily write-oriented (i.e., you're doing backloading), so it's certainly fine if you only use it during backloading. It could potentially cause inconsistencies in search results when you're doing reads, since it allows uncommitted data to be read by other clients. Whether that is acceptable depends on your use case.
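
If it helps, one pattern for keeping the relaxed setting confined to the backload window (a sketch using Spring Boot profiles; the profile name is illustrative) is to put the property in a profile that you only activate while loading:

# application-backload.properties, activated with --spring.profiles.active=backload
spring.jpa.properties.hibernate.connection.isolation=READ_UNCOMMITTED

Then start the normal deployment without that profile, so it keeps the driver default (READ_COMMITTED on SQL Server).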

Cheers,
James


Sajith Jamal

Feb 24, 2020, 10:11:37 PM
to HAPI FHIR
Cool, thanks, that clears it up. And yes, during backloading nobody will be reading the data, so this approach should be fine.

thanks,
Sajith

pooja....@gmail.com

Apr 28, 2020, 6:40:37 AM
to HAPI FHIR
Hello Sajith,

I just started working with the HAPI FHIR server and am experiencing the same issue you were facing: inserting (PUT) 10 resources in one bundle, with 5 concurrent users and a loop count of 10 through JMeter, I get a LockAcquisitionException on the HFJ_FORCED_ID table.

I have read the entire email chain, and it concluded with setting the isolation level to READ_UNCOMMITTED.

Since you are an experienced user: did you find any other solution, or was this fixed in a later release?
 
Thanks
Pooja

Vivek Namboothiri

Dec 8, 2021, 12:18:22 AM
to HAPI FHIR
We have seen the same kind of deadlock behavior with the HAPI 5.3.0 and SQL Server 2017 combination.
The use case is parallel uploads of multiple bundles, each containing 2-100 resources, for the following resource types: Procedure, ProcedureRequest, AllergyIntolerance, and Composition. We have disabled the referential integrity check for this use case (a sketch of that setting follows the error log below). Here is the error we receive:

2021-12-07 09:01:25.634 [http-nio-8080-exec-9] WARN  o.h.e.jdbc.spi.SqlExceptionHelper [SqlExceptionHelper.java:137] SQL Error: 1205, SQLState: 40001
2021-12-07 09:01:25.634 [http-nio-8080-exec-9] ERROR o.h.e.jdbc.spi.SqlExceptionHelper [SqlExceptionHelper.java:142] Transaction (Process ID 145) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
2021-12-07 09:01:25.635 [http-nio-8080-exec-9] ERROR c.u.f.r.s.i.ExceptionHandlingInterceptor [ExceptionHandlingInterceptor.java:140] Failure during REST processing
ca.uhn.fhir.rest.server.exceptions.InternalErrorException: Failed to call access method: org.springframework.dao.CannotAcquireLockException: could not execute query; SQL [select forcedid0_.resource_type as col_0_0_, forcedid0_.resource_pid as col_1_0_, forcedid0_.forced_id as col_2_0_, resourceta1_.res_deleted_at as col_3_0_ from hfj_forced_id forcedid0_ inner join hfj_resource resourceta1_ on (resourceta1_.res_id=forcedid0_.resource_pid) where forcedid0_.resource_type=? and (forcedid0_.forced_id in (?))]; nested exception is org.hibernate.exception.LockAcquisitionException: could not execute query
        at org.springframework.boot.web.servlet.support.ErrorPageFilter.doFilter(ErrorPageFilter.java:126)
        at org.springframework.boot.web.servlet.support.ErrorPageFilter.access$000(ErrorPageFilter.java:64)
        at org.springframework.boot.web.servlet.support.ErrorPageFilter$1.doFilterInternal(ErrorPageFilter.java:101)
        at org.springframework.boot.web.servlet.support.ErrorPageFilter.doFilter(ErrorPageFilter.java:119)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
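
For reference, the referential-integrity setting mentioned above is what we change, roughly as follows (a sketch, assuming the HAPI JPA server's DaoConfig bean is defined in our own Spring configuration):

import ca.uhn.fhir.jpa.api.config.DaoConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LoaderConfig {
    @Bean
    public DaoConfig daoConfig() {
        DaoConfig config = new DaoConfig();
        // Skip reference-target validation during the parallel upload use case.
        config.setEnforceReferentialIntegrityOnWrite(false);
        return config;
    }
}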


Are there any constraints on parallel writes for HAPI with SQL Server? Is there any recommendation or best practice to follow, apart from what is mentioned earlier in this thread, so that this issue can be avoided? Parallel uploads are a very common use case.

Regards,
Vivek

James Agnew

Dec 8, 2021, 5:46:23 PM
to Vivek Namboothiri, HAPI FHIR
That version of HAPI FHIR is fairly old; does this still happen on the latest (5.6.0) release?

Cheers,
James


Vivek

Dec 8, 2021, 11:13:09 PM
to James Agnew, HAPI FHIR
Hi James,
The issue is reproducible with version 5.6.0 of HAPI as well.

Regards,
Vivek

Vivek

Dec 10, 2021, 9:19:23 AM
to James Agnew, HAPI FHIR
I looked further into this issue and tried the following with HAPI 5.6.0:
> Upgraded to SQL Server 2019
> Set READ_UNCOMMITTED at both the Spring Boot and database levels (for both SQL Server 2017 and 2019)
> Changed the SQL Server dependency version in pom.xml

None of these changes fixed the deadlock, which happens inside SQL Server. However, the same set of concurrent calls against a PostgreSQL database does not deadlock. We also found that the pom.xml dependency for SQL Server is not listed by default, though dependencies for others like PostgreSQL and MySQL are. Does this mean that HAPI does not officially support SQL Server in production deployments? In other words, which databases does the HAPI server support for production use cases?

Thanks in advance.

Regards,
Vivek