Orthanc 1.5.6 eating /tmp disk continuously


Pär Kragsterman

Aug 28, 2019, 7:05:00 AM
to Orthanc Users
Dear group!

We've been running our production system on Orthanc 1.5.6 on Docker for roughly six months, and we see a behaviour which needs managing: the system continuously eats /tmp space inside the Docker container. Below you can see the content of the container's /tmp folder prior to today's cleaning activity. I've also attached an image from our disk monitoring of the system over the last six months. On that image, the blue line represents the DICOM storage growth (on a separate volume) and the yellow line the system volume growth, including the Docker container.

The system uses an RDS PostgreSQL database for indexing and a separate EBS volume for DICOM storage, so the growth is purely related to the inner workings of Orthanc.

We find that restarting the container makes some of these temp files go away, but not all. By removing the container and rebuilding it, the /tmp folder of course resets to zero.

A couple of questions:
1. Is there a way to make Orthanc manage its temp files so that they don't grow indefinitely? Maybe a system configuration parameter, or a cron job which can be scheduled.
2. Is the behaviour normal, or does it indicate some misconfiguration?

Thank you all for a great forum!
Pär
www.cmrad.com


root@d67874e5e343:/tmp# ls -l
total 2695360
-rw-r--r-- 1 root root 114623382 Aug 26 20:50 Orthanc-040209a4-0be5-4be2-be20-9f2ea8a86bbc
-rw-r--r-- 1 root root 521502922 Jul 24 09:21 Orthanc-063c0737-3a0b-4486-872e-18a4550eea47
-rw-r--r-- 1 root root   5926051 Jul 24 09:02 Orthanc-19c38036-b95e-428f-a0c7-be7d58a3d45d
-rw-r--r-- 1 root root  51177778 Jul 29 11:15 Orthanc-310ac083-e68e-4cac-adf7-6c62dcd64f6f
-rw-r--r-- 1 root root 114623382 Aug 26 19:10 Orthanc-389e26e0-abdb-4597-af19-73ba794c134f
-rw-r--r-- 1 root root  45888007 May  8 18:46 Orthanc-53ea677e-cca8-4ef1-94c8-b55751707b14
-rw-r--r-- 1 root root 114623382 Aug 26 19:09 Orthanc-5bca9c66-fc26-47da-9ee1-10a1c52c36d9
-rw-r--r-- 1 root root   9032716 Aug 26 20:53 Orthanc-5ccc1d00-4273-4821-a612-b427a12c3a6d
-rw-r--r-- 1 root root  58735156 Jul 29 11:17 Orthanc-6238ad88-c7fe-41a9-82f7-b18eca0dfea1
-rw-r--r-- 1 root root 114623382 Aug 26 19:10 Orthanc-6acc86e2-9670-41b5-b978-9c5c705231e2
-rw-r--r-- 1 root root  60428428 Aug  5 16:47 Orthanc-80ecd5ae-a238-439e-b3ee-47a694c593f0
-rw-r--r-- 1 root root 114623382 Aug 26 19:09 Orthanc-87d4da87-df24-447b-ad48-b7354bb2281b
-rw-r--r-- 1 root root 114623382 Aug 26 19:10 Orthanc-93c7556f-12c0-4726-9ca6-be58b69faaeb
-rw-r--r-- 1 root root 112004615 Jul 24 08:35 Orthanc-97e9ab87-5eb8-459c-a993-a6db8bee4668
-rw-r--r-- 1 root root    725912 Aug 19 12:40 Orthanc-9a64f11c-0e71-470f-8500-07c9789679de
-rw-r--r-- 1 root root 143512751 Aug 26 19:08 Orthanc-a0c30653-ba3e-477e-b0ef-6e66354fbf42
-rw-r--r-- 1 root root  69276993 Aug  5 14:24 Orthanc-a70ddd72-e368-4ec2-a7b8-5ddf6754745e
-rw-r--r-- 1 root root       990 Jul 20 18:05 Orthanc-b5d0adc8-c58a-4b87-aef0-74e291667fc8
-rw-r--r-- 1 root root  60428428 Aug  5 10:22 Orthanc-c449e87f-7e8f-48af-8dc7-78c54a14f5af
-rw-r--r-- 1 root root  48585850 May  8 18:48 Orthanc-c8005ffe-2264-4c73-a20a-f3d71245104f
-rw-r--r-- 1 root root 114623382 Aug 26 20:49 Orthanc-d7bf597d-55fa-467e-9d17-e7ebe19c8a39
-rw-r--r-- 1 root root  53122752 Aug  5 14:27 Orthanc-dd2a26df-dc7a-408e-bbb9-282bb5efeab7
-rw-r--r-- 1 root root  31771305 Aug 26 19:03 Orthanc-e4957bd6-5eab-42cf-9f30-c1a7ba4a7c24
-rw-r--r-- 1 root root 523741287 May  8 18:23 Orthanc-e8881bf6-d365-4333-8ebe-d354a89d7cf7
-rw-r--r-- 1 root root 114623382 Aug 26 19:04 Orthanc-f5882bfb-bb22-40fa-b33c-056c4e2f204e
-rw-r--r-- 1 root root  47119557 Jul 29 11:23 Orthanc-fe399ce9-80e5-4342-b08b-e5bb40b82dc2




[Attachment: orthanc disk growth 6m.png]

Sébastien Jodogne

Aug 28, 2019, 11:33:22 AM
to Orthanc Users
Hello,

Orthanc typically creates temporary files when it creates ZIP archives.

The fact that temporary files are left on the disk indicates that Orthanc has exited in an unclean way, without being able to complete its finalization (either because of a crash, because of a SIGKILL signal, or because of two SIGTERM signals sent in a row). You should check that the systemd/init services that start/stop your Docker containers are not too aggressive when stopping Orthanc.
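
For instance, a gentler shutdown could look as follows (the container name "orthanc" and the 60-second timeout are only an illustration to adapt to your setup):

$ # Give Orthanc up to 60 seconds to complete its finalization:
$ # by default, "docker stop" sends SIGKILL after only 10 seconds
$ docker stop --timeout 60 orthanc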

This might also result from a bug. If so, please post a minimal working example showing how Orthanc can create a temporary file without removing it, so that we can work on a fix.

I personally have no clue about the best practice for handling the "/tmp" folder in conjunction with Docker; maybe other users on this forum can give a hint.

HTH,
Sébastien-

Luiz Eduardo Guida Valmont

Aug 28, 2019, 12:01:42 PM
to Pär Kragsterman, Orthanc Users
Hi Pär!
Hi Sébastien!

If I may suggest something, check the file type of the leftover files. The ID after the "Orthanc-" prefix suggests that they are, as Sébastien said, a byproduct of compression. This should happen just before Orthanc sends archives as responses to ".../archive" URLs.
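
For instance, the standard "file" utility should reveal whether the leftovers are indeed ZIP archives (a quick check I would try, not something verified against these exact files):

$ file /tmp/Orthanc-*

ZIP archives would be reported as something like "Zip archive data".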

Unless you mapped your /tmp as a Docker volume, your logs suggest this instance ran for a very long time, because a couple of files date back to May 8th. Also, something odd happened starting at 19:00 on August 26th: there are many, many files from two days ago. Maybe a backup routine (or any other client, for that matter) is pulling files through the ".../archive" URL and timing out; it's tough to say much for now.

What you can do to mitigate the problem, however, is to actually mount your /tmp as a volume and have a cron job (or a Scheduled Task, in Windows lingo) run a "find -delete" on old /tmp files, like so:

$ find [mounted-tmp-volume-location] -type f -atime +1 -delete

This command will delete temporary files not accessed in more than one day (note the "+"; without it, find matches files that are exactly one day old).
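
Putting the volume and the cron job together, a sketch could look as follows; the image name "jodogne/orthanc", the host path "/srv/orthanc-tmp" and the schedule are illustrative, so adapt them to your deployment:

$ # Bind-mount the container's /tmp onto the host, so the cleanup can run there
$ docker run -d --name orthanc -v /srv/orthanc-tmp:/tmp jodogne/orthanc

# Crontab entry on the host: every night at 03:00, delete the Orthanc
# temporary files that have not been accessed for more than one day
0 3 * * * find /srv/orthanc-tmp -type f -name 'Orthanc-*' -atime +1 -delete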

Side questions: do you have any kind of plugin doing any sort of temporary file creation? Any plugin at all, or is it pure Orthanc?

HTH
--
Luiz Eduardo



Luiz Eduardo Guida Valmont

Aug 28, 2019, 12:03:11 PM
to Sébastien Jodogne, Orthanc Users
Hi, Sébastien!

I also have no clue about best practices for Dockerised "/tmp" directories. I just suggested a way to try to buy him some time while a proper solution is worked on.

Sébastien Jodogne

Aug 29, 2019, 5:44:39 AM
to Orthanc Users
Hello,

Here are three additions with respect to my message of yesterday:

1- Are you aware of the "TemporaryDirectory" and "MediaArchiveSize" configuration options? (See the sketch after this list.)

2- You might remove the temporary files matching the pattern "/tmp/Orthanc-*" as part of your startup script, which would clean up any orphan files before restarting Orthanc (see the sketch after this list).

3- If the second solution is not applicable (e.g. because you run more than one instance of Orthanc on your server), please note that I've just submitted a modification to include the process ID of Orthanc in the temporary files it generates.

This information is useful to detect orphan files that do not belong to any running instance of Orthanc, thereby opening the path to a cron job that would remove such orphan files. This feature will be part of the forthcoming 1.5.8 release.
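
To illustrate the first two points, here is a minimal sketch; the paths and values below are examples to adapt, not recommendations for your setup.

Excerpt from the Orthanc configuration file:

{
  "TemporaryDirectory" : "/var/lib/orthanc/tmp",
  "MediaArchiveSize" : 1
}

Excerpt from a startup script, to run just before launching Orthanc (this assumes a single instance of Orthanc on the machine):

$ rm -f /tmp/Orthanc-*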

HTH,
Sébastien-

Pär Kragsterman

Aug 29, 2019, 5:59:58 AM
to Orthanc Users
Thank you both for the great suggestions. I will go with an update that stabilises the situation through a housekeeping script, to make sure we stop the growth. Your suggestions for that script are appreciated.

A few more details. The only archive compression that takes place on this server is:
1. The odd download of a zipped study by our sysadmins to troubleshoot incidents. This is a rare activity.
2. The reception of gzipped transfers from the orthanc-proxies installed at hospitals. This is a very common event and, to my mind, the more likely source. The Transfers plugin is used.

If I'm able to replicate the behaviour in our non-production environment, I'll submit a minimal working example. However, the replication may not be straightforward, as it may depend on the heavy usage in production.

thanks again,
Pär

Sébastien Jodogne

Aug 30, 2019, 3:32:24 AM
to Orthanc Users
Fine, please keep us updated.

Sébastien-