Orthanc dying on receiving DICOM - how to troubleshoot?


Mark Hodge

Jul 12, 2016, 1:10:32 AM
to Orthanc Users
I have Orthanc receiving DICOM instances from an MRI scanner, using a modified version of AutoClassify.py to write each instance out to disk.

For several small MRI scans (1,600-3,000 images) it works fine; however, when I test with a 51,627-image scan it fails:

[inline image 1]

Once this happens, Orthanc seems to be restarted automatically every 14-20 minutes, judging from the logs (which contain no useful hints).

[inline image 2]
From AutoClassify I get the following message:


Writing new DICOM file: \\storage.hcs-p01.otago.ac.nz\its-pacs\DICOMExport\Unknown\QA Phantom NiCl - test\DMHDS_PHANTOM\MR - ep2d_fid 55 mins\1.3.12.2.1107.5.2.19.46231.2016071209113992632029081.dcm
Unable to write instance 614b7d22-8e421c89-8c5d2453-91d77e05-bcbe361d to the disk
Traceback (most recent call last):
  File "C:\admin\DICOM-export\DICOM-export.py", line 161, in <module>
    'limit' : 4   # Retrieve at most 4 changes at once
  File "C:\admin\DICOM-export\RestToolbox.py", line 58, in DoGet
    resp, content = h.request(uri + d, 'GET')
  File "C:\Admin\Python\lib\site-packages\httplib2\__init__.py", line 1314, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "C:\Admin\Python\lib\site-packages\httplib2\__init__.py", line 1064, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "C:\Admin\Python\lib\site-packages\httplib2\__init__.py", line 987, in _conn_request
    conn.connect()
  File "C:\Admin\Python\lib\http\client.py", line 826, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "C:\Admin\Python\lib\socket.py", line 711, in create_connection
    raise err
  File "C:\Admin\Python\lib\socket.py", line 702, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
>>>

Does anyone have any advice on how to troubleshoot what might be going on? Should I increase or decrease the 'limit', or try something else?
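
For reference, the relevant polling loop is essentially the one from the stock AutoClassify.py sample (a rough sketch is below; my modifications are only in the part that writes files to disk, which is elided here). The ConnectionRefusedError above is raised by this DoGet call, which suggests Orthanc itself was no longer listening when the script polled:

import time
import RestToolbox  # helper module shipped alongside the Orthanc samples

URL = 'http://localhost:8042'

current = 0
while True:
    # Ask Orthanc for the changes that occurred since the last poll
    r = RestToolbox.DoGet(URL + '/changes', {
        'since' : current,
        'limit' : 4   # Retrieve at most 4 changes at once
        })

    for change in r['Changes']:
        if change['ChangeType'] == 'NewInstance':
            # Download the new instance and write it out to disk (elided)
            pass

    current = r['Last']

    if r['Done']:
        time.sleep(1)  # No pending change, wait a bit before polling again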

Cheers,
Mark


Sébastien Jodogne

Jul 12, 2016, 1:48:05 AM
to Orthanc Users
Hello,

Have you enabled any plugins in Orthanc (notably the PostgreSQL plugin)? If so, please disable all of them, to check whether the problem lies within the core of Orthanc.

If the problem still appears using the default database engine (SQLite), make sure the drive containing the "OrthancStorage" folder is large enough to hold all the files.

Finally, please post the log file of Orthanc in "--verbose" mode here. Debugging on our side will not be possible unless you provide us with a set of problematic files and a way to reproduce the issue (e.g. with storescu).
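
For instance, a minimal "storescu" invocation to replay a folder of problematic DICOM files against Orthanc might look as follows (a sketch; adapt the called AE title, host, port and path to your setup):

storescu --verbose --scan-directories --call ORTHANC localhost 4242 C:\Temp\ProblematicFiles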

HTH,
Sébastien-

Mark Hodge

Jul 12, 2016, 5:29:21 PM
to Orthanc Users
Thanks Sébastien.

I'm still very much a novice with Orthanc and DICOM, so please bear with me.

Can you or anyone else tell me how to start Orthanc successfully from the command line in Windows with verbose mode enabled? (I have been running it as a service, so this isn't something I am familiar with.)

I tried the following; a window flashes open then closes, and nothing appears in the log file. Redirecting the output to a file with > also shows nothing apart from the password prompt text:

C:\Users\sstmarkh>runas /user:registry\orthanc-svc "c:\Program Files (x86)\Orthanc\Orthanc Server 1.1.0\Orthanc-1.1.0-Release.exe \"--verbose\""
Enter the password for registry\orthanc-svc:
Attempting to start c:\Program Files (x86)\Orthanc\Orthanc Server 1.1.0\Orthanc-1.1.0-Release.exe "--verbose" as user "registry\orthanc-svc" ...

I have not enabled any plugins, and the storage is on a file server with 1 TB currently assigned, so a storage limit is not the problem. I have attached my Configuration.json, but it is almost unmodified.

I have tried starting the service with --verbose in the Start parameters field, but I don't know whether that has done anything: nothing new appears in the log file, and when I upload a few known-good instances they appear in the Orthanc GUI but nothing is logged. How can I test whether verbose logging is enabled? Should the log file say that verbose logging is turned on? I have attached the log created when I started Orthanc with --verbose in the Start parameters field of the Orthanc Properties window in the Services manager.

Cheers,
Mark
Configuration.json
Orthanc-1.1.0-Release.log.20160713-092118.5200

Sébastien Jodogne

Jul 13, 2016, 2:48:48 AM
to Orthanc Users
Hello,

To ease things (especially with respect to user permissions), I suggest you stop the Orthanc service and manually start the command-line version of Orthanc that is available for download at:

Then, put your "Configuration.json" and the just-downloaded "Orthanc-1.1.0-Release.exe" into the same folder (e.g. "C:\Temp"). Finally, type the following in a command-line shell:

# cd C:\Temp
# Orthanc-1.1.0-Release.exe --verbose Configuration.json > Orthanc.log 2>&1

(The "2>&1" redirects the error channel, on which Orthanc writes its log, into the same file.)

HTH,
Sébastien-

Mark Hodge

Jul 14, 2016, 12:37:48 AM
to Orthanc Users
Thank you very much for that Sébastien. I have been trying this out today.

Since moving Orthanc's storage to a local disk and modifying the "Construct a target path" section of AutoClassify.py to use a "tidy" function that removes invalid characters from file names, all of our other small test scans have gone through successfully, and the expected number of instances has been exported by AutoClassify.py. The big test is tonight, when a scan that has previously broken Orthanc is transferred to us.

Here is the code I am using to strip characters that are invalid in Windows file names from the a to d variables used in the export path construction (also using PatientID instead of Patient Name):

    a = tidy('%s' % (GetTag(patient, 'PatientID')))
    b = tidy(GetTag(study, 'StudyDescription'))
    c = tidy('%s_%s' % (GetTag(series, 'Modality'), GetTag(series, 'SeriesDescription')))
    d = tidy('%s.dcm' % GetTag(instance, 'SOPInstanceUID'))
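
For context, these variables are then combined into the full target path, roughly as in the stock sample (a sketch; "target_root" stands for my export folder and is not the exact name used in the script):

    path = os.path.join(target_root, a, b, c)
    if not os.path.isdir(path):
        os.makedirs(path)   # Create the folder hierarchy on first use
    # The instance itself is then written to os.path.join(path, d)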


The function is as below - nice and simple, it just replaces spaces and invalid characters with an underscore:

def tidy(value):
    # Replace these characters with underscores, to ensure a valid file name and path
    for c in '\\/:*?"<>|! ':
        value = value.replace(c, '_')
    return value
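
For example, applied to one of the series descriptions from the log above:

>>> tidy('MR - ep2d_fid 55 mins')
'MR_-_ep2d_fid_55_mins'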


It may be useful to add some of your excellent advice to the "Troubleshooting DICOM communications" article at https://orthanc.chu.ulg.ac.be/book/faq/dicom.html

Will let you know how we get on with the large test scan.

Cheers,
Mark
Sébastien Jodogne

Jul 14, 2016, 3:50:16 AM
to Orthanc Users
Hello,

Thanks for your suggestions!

I have updated the sample scripts so as to better handle special characters (in the spirit of your "tidy()" function):

I have also added a FAQ entry to explain how to generate meaningful Orthanc logs (with the associated links in the Troubleshooting DICOM communications entry):

Kind regards,
Sébastien-

Mark Hodge

Jul 14, 2016, 5:34:38 PM
to Orthanc Users
Hi Sébastien.

I have attached the logs from last night's large copy (from both Orthanc and the AutoClassify script).

The relevant section is at the end of each log file. I'm wondering if this is caused by a race condition between Orthanc and AutoClassify.py, since the failure seems to occur at a random instance.

We are retrying a large scan today; if that works, I will suggest they try re-sending the large scan without AutoClassify active.

If you see anything else that suggests elsewhere to look in the logs please let me know.

Cheers,
Mark
Orthanc 707pm 15-07-2016 NZ.zip
log from AutoClassify.zip

Mark Hodge

Jul 15, 2016, 1:45:39 AM
to Orthanc Users
Hi Sébastien.

A scan has failed with the following snippet in the log (full log attached). It looks like it failed and then (I guess) a retry was started manually (I'm remote from the scanner, which is connected to us via a fiber link).

...
I0715 16:49:16.356463 ServerContext.cpp:260] New instance stored
E0715 16:49:16.372097 StoreScp.cpp:300] Store SCP Failed: DUL Peer Requested Release
E0715 16:49:46.544027 CommandDispatcher.cpp:877] DIMSE failure (aborting association): DIMSE No data available (timeout in non-blocking mode)
I0715 16:50:23.528476 CommandDispatcher.cpp:491] Association Received from AET ORGKSMR on IP 10.92.1.242
I0715 16:50:23.544095 CommandDispatcher.cpp:689] Association Acknowledged (Max Send PDV: 131060)
I0715 16:50:23.575351 ServerContext.cpp:264] Already stored
...

In another recent thread with the same error (https://groups.google.com/forum/#!topic/orthanc-users/6_n47MjHnas) you mentioned the following:

I have just added two new options to fine-tune the DICOM timeouts:

  // Set the timeout (in seconds) after which the DICOM associations
  // are closed by the Orthanc SCP (server) if no further DIMSE
  // command is received from the SCU (client).
  "DicomScpTimeout" : 30,

  // The timeout (in seconds) after which the DICOM associations are
  // considered as closed by the Orthanc SCU (client) if the remote
  // DICOM SCP (server) does not answer.
  "DicomScuTimeout" : 10,


In your case, you would need to increase "DicomScpTimeout".

Please could you tell me whether these options solve your issue?



I wonder whether this is what I need to try as well? If so, is there a separate command-line build of Orthanc that I need to download in order to use these configuration options? I have the Orthanc-1.1.0-Release.exe that I downloaded yesterday, as shown below (times are New Zealand local time):

[inline image]
And can you confirm what would be a reasonable DicomScpTimeout to use?

Cheers,
Mark





Orthanc.log

Sébastien Jodogne

Jul 15, 2016, 2:53:06 AM
to Orthanc Users
Hello,


On Friday, July 15, 2016 at 7:45:39 AM UTC+2, Mark Hodge wrote:
A scan has failed with the following snippet in the log (full log attached). It looks like it failed and then (I guess) a retry was started manually (I'm remote from the scanner, which is connected to us via a fiber link).

...
I0715 16:49:16.356463 ServerContext.cpp:260] New instance stored
E0715 16:49:16.372097 StoreScp.cpp:300] Store SCP Failed: DUL Peer Requested Release
E0715 16:49:46.544027 CommandDispatcher.cpp:877] DIMSE failure (aborting association): DIMSE No data available (timeout in non-blocking mode)
I0715 16:50:23.528476 CommandDispatcher.cpp:491] Association Received from AET ORGKSMR on IP 10.92.1.242
I0715 16:50:23.544095 CommandDispatcher.cpp:689] Association Acknowledged (Max Send PDV: 131060)
I0715 16:50:23.575351 ServerContext.cpp:264] Already stored
...

In another recent thread with the same error (https://groups.google.com/forum/#!topic/orthanc-users/6_n47MjHnas) you mentioned the following:

I have just added two new options to fine-tune the DICOM timeouts:

[...]

I wonder whether this is what I need to try as well?


Regarding this part of the problem, no, the newly-introduced option "DicomScpTimeout" will not help.

From what I read above, it is your remote modality (the SCU) that is having problems with the DICOM transmission. After sending some instances, the DICOM association gets broken, but your modality keeps the channel open (even though no data is exchanged over it). After 30 seconds, the timeout occurs in the Orthanc SCP, which closes the connection.

Please get in touch with the paid support of the manufacturer of your modality. They have full access to the source code of Orthanc (the converse is obviously not true), so they will be able to tell whether the problem lies in Orthanc or in your modality.

You could also try replacing Orthanc with "storescp" from DCMTK, to check whether the same problem occurs with a much more basic C-Store SCP than Orthanc.
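
For instance, a minimal "storescp" invocation might look as follows (a sketch; adapt the AE title, port and output folder to your setup):

storescp --verbose --aetitle STORESCP --output-directory C:\Temp\Incoming 4242

This starts a bare-bones Store SCP listening on port 4242 that writes every received instance into the given folder.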

Sébastien-

Mark Hodge

Jul 19, 2016, 12:32:09 AM
to Orthanc Users
Thanks Sébastien.

It may have been a problem at the scanner or workstation - the technician found corrupt data at their end, and since he rebuilt it we have now received four 11,700-image series without incident.

Hopefully we are sorted :-)

Cheers,
Mark

Sébastien Jodogne

Jul 19, 2016, 2:28:11 AM
to Orthanc Users
Great, this is nice news!

Regards,
Sébastien-

Mark Hodge

Jul 19, 2016, 6:29:48 PM
to Orthanc Users
It is looking fairly promising; we've received the expected number of instances twice now.

There is still a problem with AutoClassify.py leaving a few instances unexported, but I will discuss that in a separate thread.

Cheers,
Mark