Proper Configuration of Online Cleanup and File Copy Directories?


Patrick Popek

Nov 3, 2014, 1:08:49 PM
to dcm...@googlegroups.com
I have a dcm4chee instance running on AWS, shipping files to nearline storage. What I've been struggling with is that it never successfully finds files to delete from ONLINE storage. With the changes below I can delete a significant amount of data, but I'm essentially telling it to stop checking whether data has been properly synced to nearline before deletion. Can someone confirm this, or make any recommendations on the deletion criteria? I want to delete files older than 5-10 weeks, unless the disk's free space drops below 20GB (in which case it should delete more aggressively). I also want to make sure that the files have indeed been copied from the archive to nearline (in this case an S3 bucket) prior to deletion.

Original Values: 
DeleteStudyIfNotAccessedFor: 8w
DeleteStudyOnlyIfNotAccessedFor: 1d
DeleteStudyOnlyIfStorageNotCommited: False 
DeleteStudyOnlyIfExternalRetrievable: True
InstanceAvailabilityOfExternalRetrievable: AUTO
DeleteStudyOnlyIfCopyOnMedia: False
DeleteStudyOnlyIfCopyOnFileSystemOfFileSystemGroup: NEARLINE_STORAGE
DeleteStudyOnlyIfCopyArchived: True
DeleteStudyOnlyIfCopyOnReadOnlyFileSystem: False
ScheduleStudiesForDeletionOnSeriesStored:  True
ScheduleStudiesForDeletionInterval:  10m

Changed Values:
DeleteStudyIfNotAccessedFor: 2w
DeleteStudyOnlyIfNotAccessedFor: 1d 
DeleteStudyOnlyIfStorageNotCommited: False 
DeleteStudyOnlyIfExternalRetrievable:  False
InstanceAvailabilityOfExternalRetrievable: AUTO
DeleteStudyOnlyIfCopyOnMedia: False
DeleteStudyOnlyIfCopyOnFileSystemOfFileSystemGroup: NEARLINE_STORAGE 
DeleteStudyOnlyIfCopyArchived:  False
DeleteStudyOnlyIfCopyOnReadOnlyFileSystem: False
ScheduleStudiesForDeletionOnSeriesStored: True
ScheduleStudiesForDeletionInterval: 10m
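
In other words, something like the sketch below is what I think I actually want: keep the check that a copy exists on the NEARLINE_STORAGE file system group (which I assume is what verifies the copy in S3), shorten the retention to 5 weeks, and express the 20GB threshold via MinimumFreeDiskSpace (I'm only guessing that is the right attribute for the low-space trigger):

DeleteStudyIfNotAccessedFor: 5w
DeleteStudyOnlyIfNotAccessedFor: 1d
DeleteStudyOnlyIfCopyOnFileSystemOfFileSystemGroup: NEARLINE_STORAGE
MinimumFreeDiskSpace: 20GB

What I can't tell is whether DeleteStudyOnlyIfExternalRetrievable and DeleteStudyOnlyIfCopyArchived also need to stay True for the "already copied to nearline" guarantee to hold, or whether the NEARLINE_STORAGE group check is enough on its own. That's the part I'd most like confirmed.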

My second issue: I've got FileCopy set up to output to a directory called tar-outgoing, but I can't find any deletion criteria for this outgoing directory. So my 500GB of ONLINE disk space filled up with roughly 250GB of archive and 250GB of the FileCopy outgoing folder (tar-outgoing). Is there a mechanism or service to clean up the file copy incoming/outgoing directories? If not, should I just schedule a cron job to delete files older than n days?

fleetwoodfc

Nov 5, 2014, 6:19:33 AM
Re: The second issue.
I have seen the same on Windows systems, where the temporary files never get deleted. I have looked at the code that handles this and it appears to be logically correct; however, I have also 'googled' the subject and found there is a bit more to the story:


and this thread mentions a bug (at least) in the Windows JVM 

P.S. So the cron job might be your best option for now.
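
Something along these lines, run nightly from cron, should do as a stopgap. This is just an untested sketch: the tar-outgoing path and the 7-day retention are placeholders you would adjust to your install, and you should keep the retention comfortably longer than FileCopy needs to finish with the files.

import os
import time

# Placeholder path - point this at your FileCopy tar-outgoing directory.
OUTGOING_DIR = "/path/to/tar-outgoing"
MAX_AGE_DAYS = 7  # only touch files older than this

cutoff = time.time() - MAX_AGE_DAYS * 24 * 60 * 60

for root, dirs, files in os.walk(OUTGOING_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            # Delete only files whose last modification time is older than the cutoff.
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
        except OSError:
            # File may have disappeared or be locked; skip it and move on.
            pass

Then a crontab entry like 0 2 * * * python /usr/local/bin/clean_tar_outgoing.py (hypothetical script location) would run it at 2am each night.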

fleetwoodfc

Nov 5, 2014, 8:43:22 AM
to dcm...@googlegroups.com
There is a showDeleterCriteria() method you can use that will convert your config into a human-readable form, e.g. your configuration will be:

All studies not accessed for 2w. And studies not accessed for 1d when running out of disk space! Deleter Criteria: 1) Copy on Filesystem Group [Ljava.lang.String;@d1cc89 (NEARLINE_STORAGE)

