This kind of thing usually happens when there are PC replacements and the
muppets copy their entire local drives to network storage "just so I don't
lose any data in the transfer", or when users do their own application
upgrades and keep multiple copies of 500MB data files "in case we have a
problem" (and that space never gets reclaimed) - that's what WE are here
for, guys!
It infuriates me that I have no immediate idea where the space is going - I
can track it retrospectively with scripts that walk the entire 300GB
filesystem, but I want something quicker.
Anyway, this set me to wondering how I could scan the entire fileserver for
file creation. There are plenty of examples of how to scan specific folders,
but I want to think bigger.
My first idea is to run NET FILE repeatedly and look for patterns, but I
don't think the output will be clear enough.
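For completeness, the polling version would be something like this rough,
untested sketch - it assumes a Windows console build with the Microsoft C
runtime, and the one-minute interval is just a guess:

/* Snapshot NET FILE once a minute and echo the output so it can be
 * redirected to a log and diffed later. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    char line[512];

    for (;;)
    {
        /* _popen runs the command and hands back its stdout as a stream */
        FILE *p = _popen("net file", "r");
        if (p == NULL)
            return 1;

        while (fgets(line, sizeof(line), p) != NULL)
            fputs(line, stdout);

        _pclose(p);
        fflush(stdout);
        Sleep(60 * 1000);   /* wait a minute before the next snapshot */
    }
    return 0;
}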
My second (better) idea is to read the Change Journal
(http://msdn.microsoft.com/library/en-us/fileio/base/change_journals.asp),
but I can't work out how to get information out of the USN_RECORD
structure - bear in mind I'm not a programmer, so the provided examples
only mean anything to me in an abstract sense; I don't have the tools or
skills to implement them.
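For anyone who can build it, the MSDN example boils down to roughly the
following. This is only a sketch, not a finished tool - it assumes the C:
volume, an already-active journal, administrator rights, and has almost no
error handling:

/* Open the volume, ask where the journal starts, read one buffer of
 * records and walk the USN_RECORD entries in it. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    USN_JOURNAL_DATA journal;
    READ_USN_JOURNAL_DATA readData = {0};
    BYTE buffer[65536];
    DWORD bytes;

    /* Open the volume itself, not a file on it (needs admin rights). */
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE)
        return 1;

    /* Find the journal ID and the first USN still held in the journal. */
    if (!DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                         &journal, sizeof(journal), &bytes, NULL))
        return 1;

    readData.StartUsn = journal.FirstUsn;
    readData.ReasonMask = 0xFFFFFFFF;      /* every reason flag */
    readData.UsnJournalID = journal.UsnJournalID;

    if (DeviceIoControl(hVol, FSCTL_READ_USN_JOURNAL, &readData,
                        sizeof(readData), buffer, sizeof(buffer),
                        &bytes, NULL))
    {
        /* The first 8 bytes of the output are the next USN to read from;
           the variable-length USN_RECORD entries follow. */
        DWORD offset = sizeof(USN);
        while (offset < bytes)
        {
            USN_RECORD *rec = (USN_RECORD *)(buffer + offset);
            wprintf(L"reason 0x%08lx  %.*ls\n", rec->Reason,
                    (int)(rec->FileNameLength / sizeof(WCHAR)),
                    rec->FileName);
            offset += rec->RecordLength;
        }
    }

    CloseHandle(hVol);
    return 0;
}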
Are there plans to write a COM object that could be scripted against? Or a
'skunkworks' tool to dump this into SQL, or even decipherable text?
For example, checking ParentFileReferenceNumber or USN_REASON_FILE_CREATE
over the last 20 minutes could highlight hotspots of file change or
identify individual large files.
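That check is not much code once the records are in hand - something like
this hypothetical helper, dropped inside the record-walking loop in the
sketch above. The TimeStamp in a USN_RECORD is UTC in 100-nanosecond
FILETIME units, so it compares directly against GetSystemTimeAsFileTime:

/* True if this record is a file-create logged within the last 20 minutes.
 * Records that pass could then be grouped by ParentFileReferenceNumber
 * to spot the directories being hit hardest. */
#include <windows.h>
#include <winioctl.h>

BOOL IsRecentCreate(const USN_RECORD *rec)
{
    const ULONGLONG TWENTY_MINUTES = 20ULL * 60ULL * 10000000ULL; /* 100-ns units */
    FILETIME ftNow;
    ULARGE_INTEGER now;

    if (!(rec->Reason & USN_REASON_FILE_CREATE))
        return FALSE;

    GetSystemTimeAsFileTime(&ftNow);          /* current time, UTC, FILETIME format */
    now.LowPart  = ftNow.dwLowDateTime;
    now.HighPart = ftNow.dwHighDateTime;

    /* rec->TimeStamp is also UTC in 100-ns FILETIME units */
    return (ULONGLONG)rec->TimeStamp.QuadPart >= now.QuadPart - TWENTY_MINUTES;
}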
If nothing else, I'll try VBScript and the wshAPIToolkit to see if I can get
anything useful.
Anyone know where the interfaces may be implemented?
regards
paul
"Paul Matear" <pa...@nospam.com> wrote in message
news:u$qvaic8D...@TK2MSFTNGP11.phx.gbl...
The audit option is "Create Files / Write Data" - even if I restrict it to
"This Folder and Subfolders" from the root of the data area, I'm not sure I
could take the hit of auditing the entire file system; in cases where they
are copying up an entire hard drive it would generate a significant number
of events...
I've been proposing quotas and HSM for a long time now, but the company line
is just to throw more disk at it. I've been stressing the problems this
causes for the DR strategy and the ongoing costs, but I'm not a salesman, so
I seem to have a problem getting my message across.
regards
paul
"Drew Cooper [MSFT]" <dc...@online.microsoft.com> wrote in message
news:%23vMmjsc...@TK2MSFTNGP09.phx.gbl...
Too bad you can't use quotas - seems more like the right tool for the job.
--
Drew Cooper [MSFT]
This posting is provided "AS IS" with no warranties, and confers no rights.
"Paul Matear" <pa...@nospam.com> wrote in message
news:eDnUl1c8...@TK2MSFTNGP09.phx.gbl...
I've used earlier versions of StorageCentral (when it was Precise/WQuinn)
and I agree it's a good tool, but I've never been permitted to use hard
quotas. I've got a couple of 500GB v5 licences for use in a new SAN due to
go online soon, so maybe I can use the stats from that to push the point.
I'd hoped to be able to use a simpler(?) method for the smaller utility
servers though...
regards
paul
"Lars Temme" <no_spam@L+A+R+S.T...@fsc.fiserv.com> wrote in message
news:usZA2ln8...@tk2msftngp13.phx.gbl...
You should try watchdisk from poweradmin.com. It's made for exactly
the problem you describe.