This is an assumption that isn't universally correct, and its
implications appear to be vastly under-appreciated. One of the reasons
we *don't* delete files automatically is that you cannot know for
certain, given no other information, that no other process on the system
or elsewhere still needs that file. You are implicitly assuming that
because the file was uploaded through Django, that Django instance is
the only thing using the file. Other Django installs could be using it.
Other processes could be. It's impossible to tell.
So knowing when it is safe to delete files requires domain-specific
knowledge.
We need good ideas for addressing the denial-of-service-via-disk-
exhaustion attack. But, for example, there are patches floating around
(SmileyChris created a new version recently) to allow a FileField to
optionally delete files after saving a replacement. I'm not sure whether
Chris's patch saves first and then deletes, so that errors are handled
correctly; I haven't reviewed it in that level of depth yet. That
would be something you might enable if you were allowing public file
uploads.
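To make the general idea concrete (this is not Chris's patch, just a
rough sketch, and the model and field names here are hypothetical): the
old file is remembered before the save and only deleted once the
replacement has been written, so a failed save never destroys the
original.

    from django.db import models
    from django.db.models.signals import pre_save, post_save
    from django.dispatch import receiver


    class Upload(models.Model):
        attachment = models.FileField(upload_to="uploads/")


    @receiver(pre_save, sender=Upload)
    def remember_old_file(sender, instance, **kwargs):
        # Stash whatever file is currently stored so it can be removed later.
        if instance.pk:
            try:
                instance._old_file = sender.objects.get(pk=instance.pk).attachment
            except sender.DoesNotExist:
                instance._old_file = None
        else:
            instance._old_file = None


    @receiver(post_save, sender=Upload)
    def delete_replaced_file(sender, instance, **kwargs):
        # Only delete after the replacement has been saved successfully.
        old = getattr(instance, "_old_file", None)
        if old and old.name != instance.attachment.name:
            old.delete(save=False)  # remove the orphaned file from storage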
Or you might have a cronjob that periodically looks for orphaned files
and cleans them up (an easy enough approach today), and the frequency
with which it runs -- or how it is triggered externally -- depends upon
your available resources.
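A rough sketch of what such a cron-driven job might run, assuming the
same hypothetical Upload model as above: anything sitting under the
upload directory that no database row still references gets removed.

    import os

    from django.conf import settings

    from myapp.models import Upload  # hypothetical app and model


    def clean_orphaned_uploads():
        # Every path the database still refers to, relative to MEDIA_ROOT.
        referenced = set(
            Upload.objects.exclude(attachment="").values_list("attachment", flat=True)
        )
        upload_dir = os.path.join(settings.MEDIA_ROOT, "uploads")
        for root, dirs, files in os.walk(upload_dir):
            for name in files:
                path = os.path.join(root, name)
                relative = os.path.relpath(path, settings.MEDIA_ROOT).replace(os.sep, "/")
                if relative not in referenced:
                    os.remove(path)  # no row points at this file any more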
Realise that uploading with the same filename isn't the only way to use
up disk space. The filename is a pretty arbitrary thing to rely on for
safety, and it takes about 35 seconds to write a script that generates
hundreds of thousands of filenames for the same file if different
filenames are the only impediment to DoS'ing a system.
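Purely for illustration, producing distinct names for identical content
is a few lines:

    import uuid

    content = b"x" * (1024 * 1024)              # the same megabyte every time
    for _ in range(200000):
        filename = "%s.bin" % uuid.uuid4().hex  # a brand-new name on each pass
        # ...POST `content` under `filename` to the upload view...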
Regards,
Malcolm