Windows Defrag

Margurite Vizarro

Aug 3, 2024, 3:47:12 PM
to tercsleektussran

The file system may have marked the volume as dirty, indicating possible corruption.
You must run chkdsk before you can defragment this volume or drive. You can determine if a volume is dirty by using the fsutil dirty command.
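To illustrate, a typical check-and-repair sequence from an elevated command prompt might look like the following (the drive letter C: is just an example):

```shell
:: Query whether the volume's dirty bit is set (requires administrator rights)
fsutil dirty query C:

:: If the volume is reported dirty, run chkdsk to repair it
:: (on the system volume this will be scheduled for the next reboot)
chkdsk C: /f

:: Once chkdsk completes cleanly, defragmentation can proceed
defrag C:
```

If the volume is in use, `chkdsk /f` will offer to schedule the check at the next restart rather than run immediately.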

To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure. As a security best practice, consider using Run As to perform this procedure.

A volume must have at least 15% free space for defrag to completely and adequately defragment it. defrag uses this space as a sorting area for file fragments. If a volume has less than 15% free space, defrag will only partially defragment it. To increase the free space on a volume, delete unneeded files or move them to another disk.

While defrag is analyzing and defragmenting a volume, it displays a blinking cursor. When defrag is finished analyzing and defragmenting the volume, it displays the analysis report, the defragmentation report, or both reports, and then exits to the command prompt.

Running the defrag command and Disk defragmenter are mutually exclusive. If you're using Disk defragmenter to defragment a volume and you run the defrag command at a command-line, the defrag command fails. Conversely, if you run the defrag command and open Disk defragmenter, the defragmentation options in Disk defragmenter are unavailable.

The defragmentation process runs as a scheduled maintenance task, which typically runs once a week. As an administrator, you can change how often the task runs by using the Optimize Drives app.
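For reference, the built-in task can be inspected from an elevated prompt; the task path below is the standard one on recent Windows versions:

```shell
:: Show the built-in scheduled defragmentation task and its trigger details
schtasks /query /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /v

:: Run the task on demand instead of waiting for the schedule
schtasks /run /tn "\Microsoft\Windows\Defrag\ScheduledDefrag"
```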

Traditional optimization processes include traditional defragmentation (for example, moving files to make them reasonably contiguous) and retrim. This is done once per month. However, if both traditional defragmentation and retrim are skipped, analysis isn't run. Changing the frequency of the scheduled task doesn't affect the once-per-month cadence for SSDs.

"He was told by a VMware engineer that if you run Windows defragmentation on the virtual server while also using de-duplication on the storage, it can cause corruption of data on the virtual server and/or the VMDK"

Yes... great data there. I've seen reallocate really help as well (the most dramatic example was a bunch of GroupWise servers on FC disk, where reallocate cut latencies by more than half). I'm still wrestling with when it makes sense to use "reallocate" vs. "reallocate -p" (whether there are any differences in how long they take to run, how they impact speed, exactly how much -p helps with snapshot deltas, etc.).

I'd add to your list the interaction with VSM. Data transferred after reallocate will be defragmented; after reallocate -p it will not be (at least if I correctly understand how it works). This may need to be taken into account if the destination is often used for tasks like backup verification.

I noticed that data updates are written to free blocks, meaning the original block is not updated but kept, since it is referenced by snapshots made earlier. So, may I conclude fragmentation is inherent to NetApp? May I conclude Windows defrag might cause volumes to run out of space? May I conclude that (in case we have enough free space in the volume) the chance that less physical IO is initiated after defrag is negligible, or even that in some cases the number of physical IOs might increase? May I conclude Windows will initiate fewer IOs since it thinks the data is sequential, but the resulting number of IOs on NetApp is unpredictable? May I conclude that the SQL command "set statistics io on" does not tell me the truth about the number of physical reads executed on NetApp (or any other disk virtualisation/SAN system), only the number of physical IOs Windows or SQL thinks have to be done?

When I read this, I start to wonder whether SQL Server index rebuilds might no longer be best practice, since they will have the same effect on snapshots as Windows defrag. May I conclude we benefit in HA, DR and fast restore, but that we should review best practices regarding IO optimisation?

I am trying to use it to optimize the files on the hard disk that Windows uses to boot up the system. I suspect that I need to schedule the defrag before Windows boots, but I am not succeeding at that with defrag. Is there other software that works for this?

Defrag is a Windows program and runs while Windows is running. You can schedule a disk check (chkdsk) to run before Windows completely loads, but not a defrag. There are tools that defragment outside of Windows, but Windows' built-in defrag is more than sufficient for most needs.

The Prefetch directory has one additional salutary function when used in conjunction with the built-in defragmenting tool. Every three days, during idle times, this utility rearranges program code, moving it to the outside of the disk to make it more efficient when loading (to force Windows to perform this optimization without having to do a full defragmentation, use the Defrag.exe command with the -b switch. For instance: defrag c: -b).

Apparently your computer already does this regularly, and unless you move massive files frequently across your hard disk drive and restart several times each day, you're not going to notice much of a benefit.

In my experience, people turn to defrag to speed up their systems much too quickly. I can count on one hand the times defragging has actually sped up systems that I have observed. And as a veteran of corporate and consumer IT support, that's saying something.

Set a scheduled defrag, don't bother with the -b option, and leave it at that. If your computer is slow, there are myriad other options you should look into that will be much more effective at speeding the system up.

As you note, defrag at boot time allows you to move files that are normally in use by the system after boot. There is actually one case where these files need to be moved: when you are attempting to Shrink a partition to recover space for other uses. Say your C: drive needs more space and you have a lot of junk on an F: drive that can be removed. I used to be able to do this, but in Windows 2008, I might free up 75% of the space on a drive and end up only being able to Shrink the drive to half its size. It sounds like the -b option now works against any attempt to free space at the end of a drive so it can shrink, by moving program files out.

I find virtual disks are constantly in need of such maintenance when space gets tight. Oh well, at least in Hyper-V I can compact a dynamic drive to recover the empty space in the middle. It just introduces a false sense of capacity when you look at what the VM thinks is available.

Save it as 1.bat on your desktop, then right-click it and run as admin. The first line tells it to wait until you've closed all running applications, so it will just sit there if you have anything running.

It depends on what you're trying to defrag. The point of a boot-time defrag is to attempt to defragment things like the master file table and the pagefile (which can be done using Sysinternals' PageDefrag or other tools). I've never noticed a huge performance difference after defragmenting my page file (on the other hand, on XP-era systems a normal defrag can seem like magic).

Actually, you can run defrag before Windows starts: just boot into the Recovery Environment, open a command prompt, go to the drive where Windows is installed (something like C:\Windows\System32\), then type defrag and it will run.
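A sketch of that sequence from the Recovery Environment command prompt (note that inside WinRE the Windows volume may be assigned a different drive letter than C:):

```shell
:: From the Recovery Environment command prompt, change to the Windows system directory
cd /d C:\Windows\System32

:: Defragment the Windows volume; /U prints progress, /V prints a verbose report
defrag C: /U /V
```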

I'm working on a tech-toolkit program, and included in this 'toolkit' will be a button which runs a defrag on the local disk. Currently the batch file I've made for this is simple; it just runs a basic fragmentation analysis:
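The batch file itself wasn't quoted in the post; a minimal analysis-only version might look like this (the /A switch asks defrag to analyze the volume without defragmenting it, and /V adds verbose statistics):

```shell
@echo off
:: Analyze fragmentation on the system drive without defragmenting (/A)
:: /V includes verbose statistics in the report
defrag %SystemDrive% /A /V
pause
```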

First of all: set your project's Platform Target property to Any CPU and untick the Prefer 32-bit option (99.9% this is the issue). Then... why start a batch that invokes the command when you can just do this?

You have full control over which drives, folders and files you defrag. Or simply use the default settings and let Defraggler do the work for you. Simple enough for every day users and flexible enough for advanced users.

So, I finally have my first host server (Windows Server 2012) up and running with two guest VMs (Windows Server 2012). I just wanted to ask whether it's okay to run the scheduled Windows defrag on the host and/or guests?

Defrag is pointless if you are using hardware RAID. It is even more pointless inside of a VM. You will basically be shuffling around data randomly as far as your hard drives are concerned. Windows has no insight into the block layout of your storage device and neither does the OS inside of a VM.

To fix this, we ran sdelete -z against the volume from the guest OS. Free space was zeroed out. When I re-thinned from ASM, the SAN volume shrank by about 90% of the reclaimable space. sdelete works beautifully when you have SAN guest integration tools.
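For context, the zero-fill step looks like the following (SDelete is a Sysinternals command-line tool; the drive letter is an example):

```shell
:: Zero out free space on the guest volume so the SAN can
:: recognize the blocks as unused and reclaim thin-provisioned space
sdelete -z C:
```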

A lot of 3rd-party defrag tools will defrag with no file-size limitation, which in turn can cause System Restore Points to start disappearing. Stranger still is the potential loss of overall hard disk space supposedly not tied to System Restore Points, etc., with hard disk space that mysteriously can't be reclaimed; just search the forums for that.

In my opinion, when using 3rd-party defrag tools on the system disk where Windows is installed, it's better to have them defrag only the fragmented files (i.e., via the file list that Defraggler and umpteen others have) and then let Windows itself deal with optimization. If you do things that way, your defrags will be done surprisingly fast, you won't have to disable the built-in optimization, and the built-in optimization won't have to undo a ton of stuff 3rd-party defrag tools do differently.
