Issues with Compression Behavior in dcm4chee – Original File Retained After Compression


Nagasaiakhilesh Gadamsetty

May 28, 2025, 1:53:47 AM
to dcm4che

Hi everyone,


I am using dcm4chee-arc-light 5.33.1 and working on enabling compression using the available compression rules. I'm facing a couple of issues and would appreciate your help in understanding and resolving them.

 

Issue 1: Original file not deleted after compression

When compression is successfully triggered (e.g., using a delay), I notice that:

  • A compressed copy of the DICOM object is created.
  • However, the original uncompressed file is also retained.
  • During WADO calls, the compressed file is returned (which is expected and good).
  • But the goal of compression is also to reduce storage usage, which isn’t achieved if the original remains.


Question:

Is this behavior expected?

Is there a configuration setting or cleanup mechanism in dcm4chee to delete the original uncompressed file after compression?

 

Issue 2: Compression not triggered without delay

In a separate scenario, when I configure compression without any delay, it does not seem to get triggered at all.


Question:

Is it mandatory to set a delay for compression to work, or is there a known issue or configuration required to trigger compression immediately?
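In case it helps, here is roughly how I understand the relevant settings to be laid out: the polling interval lives on the archive device and the delay on the compression rule. A minimal ldif-style sketch — the attribute names are my assumption based on the UI labels, so please verify them against the actual LDAP schema:

```ldif
# Archive device level: how often the scheduler looks for
# series whose scheduled compression time has passed
dcmCompressionPollingInterval: PT1M

# Compression rule level (object class dcmArchiveCompressionRule):
cn: compress-to-jpegls
dicomTransferSyntax: 1.2.840.10008.1.2.4.80   # JPEG-LS Lossless
dcmDuration: PT10M                            # delay before compression (attribute name assumed)
```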

 

Attachments (will be added below):

  1. Screenshot of compression polling interval setting (temp.png)
  2. Screenshot of compression rule configuration (temp.png)
  3. Relevant logs from dcm4chee:
    1. Storage log (temp.png)
    2. Compression log (temp.png)
  4. Screenshot of the file storage directory, showing both the compressed and uncompressed file (for simplicity I pushed just one DICOM file) (temp.png)


Any guidance would be greatly appreciated. Let me know if you need more details or configs.

 

Thanks in advance!

 

Best regards,

Akhilesh

Nagasaiakhilesh Gadamsetty

Jun 17, 2025, 12:10:00 PM
to dcm4che

Hi everyone,

I was able to resolve my earlier issue of the original uncompressed files being retained after compression by configuring the Purge Storage Polling Interval. Now the uncompressed images are correctly deleted post-compression. Thanks to everyone who looked into it.
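For anyone else hitting this, the setting that did the trick sits on the archive device. A minimal ldif-style sketch — the attribute name is my assumption from the UI label "Purge Storage Polling Interval", so check it against your schema:

```ldif
# Archive device level: enables the purge-storage scheduler, which
# periodically deletes objects (such as superseded uncompressed files)
# that are no longer referenced after compression
dcmPurgeStoragePollingInterval: PT5M
```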

However, I’m facing another issue now related to adding new instances to an existing (already compressed) series:


Issue: Compression not triggered for additional instances in same series

Here’s the scenario:

  • I push instances for a study/series — compression is triggered correctly based on the compression rules and delay.

  • After some time, I push more instances to the same series.

  • These newly added instances do not get compressed, even though the compression rule applies to them.

Upon investigation, I found that:

  • Compression is triggered only when a Compression Scheduled Date Time entry is created on the series, which seems to happen only when the series is first created.

  • Updates to an existing series (i.e., receiving new instances later) do not create/update the Compression Scheduler entry, so compression doesn’t kick in for those.


Question:

Is there a recommended way to ensure that additional instances sent later to an already-compressed series also get compressed?

Can this be achieved via:

  • Some additional configuration or flag?

  • A way to re-queue or refresh the series in the Compression Scheduler when new instances are received?

  • Any extension point or custom scheduler job you’d recommend?
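As a stopgap while waiting for guidance, I have been considering nudging the scheduler by resetting the series' scheduled compression time directly in the database. This is only a sketch with assumed table and column names — please verify them against the actual dcm4chee-arc schema before touching any production data:

```sql
-- Reschedule compression for one series by setting its scheduled
-- compression time to "now", so the next polling cycle picks it up.
-- Table and column names are assumptions; the Series Instance UID
-- below is a hypothetical placeholder.
UPDATE series
SET compress_time = CURRENT_TIMESTAMP
WHERE series_iuid = '1.2.840.113619.2.55.3.xxxx';
```

If this works, it would only be a manual workaround; a supported configuration flag or an automatic re-queue on new instances would obviously be preferable.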
