Hi everyone,
I am using dcm4chee-arc-light 5.33.1 and working on enabling compression in dcm4chee using the available compression rules. I'm facing a couple of issues and would appreciate your help in understanding and resolving them.
Issue 1: Original file not deleted after compression
When compression is successfully triggered (e.g., using a delay), I notice that the compressed file is created as expected, but the original uncompressed file is not deleted and remains on storage alongside it.
Questions:
Is this behavior expected?
Is there a configuration setting or cleanup mechanism in dcm4chee to delete the original uncompressed file after compression?
Issue 2: Compression not triggered without delay
In a separate scenario, when I configure a compression rule without any delay, compression does not seem to be triggered at all.
Question:
Is it mandatory to set a delay for compression to work, or is there a known issue or configuration required to trigger compression immediately?
Attachments (added below):
- Screenshot of the compression rule configuration
- Relevant logs from dcm4chee:
  - Storage log
  - Compression log
- Screenshot of the file storage directory, showing both the compressed and the uncompressed file (for simplicity, I have pushed just one DICOM file)
Any guidance would be greatly appreciated. Let me know if you need more details or configs.
Thanks in advance!
Best regards,
Akhilesh
Hi everyone,
I was able to resolve my earlier issue of the original uncompressed files being retained after compression by configuring the Purge Storage Polling Interval. Now the uncompressed images are correctly deleted post-compression. Thanks to everyone who looked into that.
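For anyone else hitting this, here is a minimal sketch of applying that setting outside the UI, assuming the default LDAP layout of dcm4chee-arc-light. The attribute name dcmPurgeStoragePollingInterval is from the archive's LDAP schema; the DN, host, and credentials below are placeholders for my deployment and will differ in yours.

```python
# Sketch: set the purge-storage polling interval on the archive device via
# LDAP. Assumes the default dcm4chee-arc-light LDAP layout; DN, host and
# credentials are placeholders.
from ldap3 import Connection, MODIFY_REPLACE

ARCHIVE_DEVICE_DN = (
    "dicomDeviceName=dcm4chee-arc,cn=Devices,"
    "cn=DICOM Configuration,dc=dcm4che,dc=org"
)

conn = Connection(
    "ldap://localhost:389",
    user="cn=admin,dc=dcm4che,dc=org",
    password="secret",
    auto_bind=True,
)

# PT5M = poll every 5 minutes (ISO 8601 duration, as used elsewhere in the config)
conn.modify(
    ARCHIVE_DEVICE_DN,
    {"dcmPurgeStoragePollingInterval": [(MODIFY_REPLACE, ["PT5M"])]},
)
print(conn.result)
conn.unbind()
```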
However, I’m now facing another issue, related to adding new instances to an existing (already compressed) series:
Here’s the scenario:
1. I push instances for a study/series; compression is triggered correctly based on the compression rules and delay.
2. After some time, I push more instances to the same series.
3. These newly added instances do not get compressed, even though the compression rule applies to them.
Upon investigation, I found that:
Compression is triggered only when a Compression Scheduled Date Time entry is created on the series, which seems to happen only when the series is first created.
Updates to an existing series (i.e., receiving new instances later) do not create or update that entry, so compression doesn’t kick in for the late instances. The small script after this list is how I confirmed they remain uncompressed on storage.
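For completeness, this is the quick check I used to confirm that the late-arriving instances are still stored uncompressed: it reads the transfer syntax of each file on the storage filesystem with pydicom. The storage path below is from my setup and is only illustrative.

```python
# Quick check: list which stored objects are still uncompressed by reading
# each file's transfer syntax. Requires pydicom; STORAGE_DIR is a placeholder
# for the archive's storage filesystem path.
from pathlib import Path
import pydicom

STORAGE_DIR = Path("/opt/dcm4chee/storage/fs1")

# Implicit/Explicit VR Little Endian = not compressed by any codec
UNCOMPRESSED = {"1.2.840.10008.1.2", "1.2.840.10008.1.2.1"}

for path in STORAGE_DIR.rglob("*"):
    if not path.is_file():
        continue
    try:
        ds = pydicom.dcmread(path, stop_before_pixels=True)
    except Exception:
        continue  # skip non-DICOM files in the store
    tsuid = ds.file_meta.TransferSyntaxUID
    status = "UNCOMPRESSED" if tsuid in UNCOMPRESSED else "compressed"
    print(f"{status}: {path} ({tsuid}, SOP {ds.SOPInstanceUID})")
```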
Is there a recommended way to ensure that additional instances sent later to an already-compressed series also get compressed?
Can this be achieved via:
- Some additional configuration or flag?
- A way to re-queue or refresh the series in the Compression Scheduler when new instances are received? (Roughly what I'm imagining is sketched below.)
- Any extension point or custom scheduler job you'd recommend?