How do I bulk delete files from DCM4CHE


IronStormCrow

Mar 22, 2013, 10:44:09 PM
to dcm...@googlegroups.com
My lab recently switched to a new DICOM server. I have been working to ensure that ALL images have been transferred properly.

I have identified roughly 7,000 files whose checksums do not match between servers. This implies that something went wrong when these particular files were transferred using dcmsnd.

I would like to delete the files from the new server in bulk and reuse dcmsnd to bring them over from the old one.

I am concerned that a raw delete from terminal (rm -f) will corrupt the underlying dcm4che database.

Is there a command that I can run to cleanly delete all of the files in my list? I would like to avoid using the GUI given the number of files.

Thank you.

fleetwoodfc

Mar 23, 2013, 7:43:00 AM
to dcm...@googlegroups.com

The ContentEditService has MBean operations that can be used to delete cleanly. These can be invoked remotely using the twiddle script provided in the dcm4chee bin folder.
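For reference, a minimal sketch of what such a twiddle loop can look like. The operation name `moveStudyToTrash` and the twiddle path are assumptions: first check what your installation actually exposes with `twiddle.sh info "dcm4chee.archive:service=ContentEditService"`, and add whatever -s/-u/-p server and credential flags your setup requires.

```shell
#!/bin/sh
# Print one twiddle invocation per Study Instance UID in a list file,
# so the commands can be reviewed before piping them into sh.
# ASSUMPTION: operation name "moveStudyToTrash" -- verify it first with:
#   twiddle.sh info "dcm4chee.archive:service=ContentEditService"
TWIDDLE="${TWIDDLE:-/opt/dcm4chee/bin/twiddle.sh}"   # adjust to your install path

gen_trash_cmds() {            # $1 = file with one Study Instance UID per line
    while IFS= read -r uid; do
        [ -n "$uid" ] || continue                    # skip blank lines
        printf '%s invoke "dcm4chee.archive:service=ContentEditService" moveStudyToTrash %s\n' \
            "$TWIDDLE" "$uid"
    done < "$1"
}

# Review first, then execute:
#   gen_trash_cmds uids.txt
#   gen_trash_cmds uids.txt | sh
```

Printing the commands instead of running them directly lets you sanity-check the list of 7,000 UIDs before anything is deleted.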

Alessio

Mar 25, 2013, 5:29:02 AM
to dcm...@googlegroups.com
On 03/23/2013 12:43 PM, fleetwoodfc wrote:
>
> The ContentEditService has Mbean operations that can be used to cleanly
> delete. These can be called remotely using the twiddle script provided in
> the dcm4chee bin folder.
>


Some time ago I had to trash a lot of studies.

I used this quick-and-dirty bash script:

<https://github.com/alcir/mystuff/blob/master/twiddletrash.sh>

The file passed as an argument to the script must contain a list of Study
Instance UIDs. To obtain this list, you can query the database:

SELECT DISTINCT(study.study_iuid)
FROM series
JOIN study ON series.study_fk = study.pk
JOIN patient ON study.patient_fk = patient.pk
JOIN instance ON instance.series_fk = series.pk
WHERE study.study_datetime >= '2000-01-01'
  AND study.study_datetime <= '2010-08-31';

Of course the query can be simpler.

After that you can empty the trash.

I think you can use the purgeStudy MBean if you want to delete them
directly instead of moving the studies to the trash.


Ciao
A

armin...@gmail.com

Apr 24, 2013, 3:56:49 PM
to dcm...@googlegroups.com
In our experience, the quickest way to consistently manipulate a large number of files (1 million+) is to access the web interface (wget, ...) from simple bash scripts, because starting twiddle for every StudyUID takes too much time.
Short answer: reply if you need a sample.
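A hypothetical sketch of that wget approach. The URL below is purely a placeholder: the real servlet path and parameters depend on the dcm4chee web interface version, so capture the actual request with your browser's developer tools and substitute it.

```shell
#!/bin/sh
# ASSUMPTION: the URL pattern below is a placeholder -- replace it with the
# request your dcm4chee web interface actually issues when trashing a study.
BASE="${BASE:-http://pacs.example.org:8080/dcm4chee-web}"

wget_trash_cmds() {           # $1 = file with one Study Instance UID per line
    while IFS= read -r uid; do
        [ -n "$uid" ] || continue                    # skip blank lines
        # Print the wget command for review; pipe through sh to run it.
        printf 'wget -q -O /dev/null "%s/trash?studyUID=%s"\n' "$BASE" "$uid"
    done < "$1"
}

# Usage:
#   wget_trash_cmds uids.txt        # review
#   wget_trash_cmds uids.txt | sh   # execute
```

One wget call per study avoids the JVM startup cost that makes per-study twiddle invocations slow at this scale.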

stbender

Sep 20, 2013, 4:18:05 AM
to dcm...@googlegroups.com
Hi,
I need to execute the compress() method of the CompressionService multiple times.
So it would be great if you could provide an example of these wget scripts.
Thank you!

florin ghimie

Sep 22, 2022, 9:46:12 AM
to dcm4che
If you are still looking for an answer, I developed a Python script to reject studies based on a date filter, -20220101 (it rejects all studies dated before 2022-01-01).
This will work if you have an unsecured dcm4chee-arc.
Hope it helps.

import requests
import json

pacs_url = 'http://<YOUR.DCM4CHEE.IP.ADDRESS>:8080/dcm4chee-arc/aets/DCM4CHEE'

# Query the archive for all studies with a Study Date before 2022-01-01
x = requests.get(pacs_url + '/rs/studies?StudyDate=-20220101')
studies_output = x.text
if studies_output:
    studies = json.loads(studies_output)

    for study in studies:
        # Log Patient's Name (0010,0010) and Study Date (0008,0020)
        print(study['00100010']['Value'][0]['Alphabetic'] + ' '
              + study['00080020']['Value'][0])
        # Retrieve URL (0008,1190) points back at this study on the archive
        link = study['00081190']['Value'][0]
        # Reject with code 113039^DCM, URL-encoded as 113039%5EDCM
        y = requests.post(link + '/reject/113039%5EDCM')