Load balancing multiple writers


Diego Victor de Jesus

Apr 26, 2021, 2:54:24 AM
to Orthanc Users
Hello. 

Since the new Orthanc 1.9.2 release, I have been wondering how one could implement a load balancer for multiple writers listening over DICOM. Is that possible with the DICOM protocol, or only with DICOMweb?

James Manners

Apr 26, 2021, 3:01:07 AM
to Orthanc Users
Hi Diego,

You could use NGINX to act as a TCP load balancer (https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/). I know you can use NGINX to proxy a simple DICOM connection, but I have not tried it to load balance the connections. Give it a go and let us know how it goes.
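As a rough illustration (untested, and the hostnames and port 4242 are only placeholders for your own setup), the stream block for two Orthanc writers could look something like this:

    # Sketch only: load balance incoming DICOM associations at the TCP level.
    # "orthanc-writer-1"/"orthanc-writer-2" and port 4242 are placeholders.
    stream {
        upstream orthanc_writers {
            least_conn;                    # pick the writer with the fewest open associations
            server orthanc-writer-1:4242;
            server orthanc-writer-2:4242;
        }

        server {
            listen 4242;                   # the port your modalities send C-STORE to
            proxy_pass orthanc_writers;
        }
    }

Note that this needs the stream module to be available in your NGINX build.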

There are other proxies you could use, such as HAProxy or Traefik.

James

James Manners • Director
Suite 3, Level 2, 10 Queens Road, Melbourne, Victoria 3004, Australia


Diego Victor de Jesus

Apr 26, 2021, 9:48:45 AM
to Orthanc Users
Hi James! 

I set up a test environment with Docker similar to what I already have in production. Since the NGINX Docker image comes with streaming support out of the box, it was easier than I expected.

However, I have seen no significant improvement in storage times with this new setup. I compared the storage times of 11 studies sent all at once (8 CR, 1 DX, 1 CT, 1 MR) between a 1-writer Orthanc architecture and an N-writer Orthanc architecture.
The times always ranged between 5 and 6 minutes, with the 1-Orthanc architecture being faster. Since it was a quick test, maybe I forgot some important configuration.

Anyway, the source code of the test project (created with Docker Compose) is attached to this post.

The tests were run under Docker on Windows 10.
multiple-writers-test.zip

Diego Victor de Jesus

Apr 27, 2021, 5:43:37 PM
to Orthanc Users
I thought about routing incoming studies to different modalities based on their type (CR, DX, etc.). The problem with routing is that I can only send the instance to another modality once it is already stored on the first one.

And since C-STORE doesn't have headers identifying the modality, NGINX can't make any routing decision beyond the least_conn setting.


Diego Victor de Jesus

Apr 27, 2021, 11:53:30 PM
to Orthanc Users
I was looking at ReceivedInstanceFilter, but I couldn't find an equivalent Python event. Perhaps I could use that event to send the instance to an Orthanc writer peer and reject it by returning false. I will get back when I have some results on that.

Sébastien Jodogne

Apr 28, 2021, 1:35:41 AM
to Orthanc Users
The "OrthancPluginRegisterIncomingHttpRequestFilter2()" primitive of the C/C++ SDK corresponds to Lua callback "ReceivedInstanceFilter()":

This C primitive is not automatically mapped by the code generation using Clang, but could certainly be manually wrapped:

In the meantime, consider writing a C/C++ plugin:

Sébastien-

Sébastien Jodogne

Apr 28, 2021, 1:37:16 AM
to Orthanc Users
Sorry, typo, I was referring to "OrthancPluginRegisterIncomingDicomInstanceFilter()":