Defrag HDD


Prisc Chandola

Aug 5, 2024, 9:55:54 AM
to chanpafica
You have full control over which drives, folders and files you defrag. Or simply use the default settings and let Defraggler do the work for you. Simple enough for everyday users and flexible enough for advanced users.

Volumes that the file system has marked as dirty indicate possible corruption.

You must run chkdsk before you can defragment this volume or drive. You can determine if a volume is dirty by using the fsutil dirty command.
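
If you want to check before running a full defrag, a quick check from an elevated Command Prompt looks roughly like this (C: is just a placeholder drive letter):

    :: Query the volume's dirty bit; "NOT Dirty" means it is safe to defragment
    fsutil dirty query C:

    :: If the volume is dirty, repair it first (may require a reboot for the system drive)
    chkdsk C: /f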


To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure. As a security best practice, consider using Run As to perform this procedure.


A volume must have at least 15% free space for defrag to completely and adequately defragment it. defrag uses this space as a sorting area for file fragments. If a volume has less than 15% free space, defrag will only partially defragment it. To increase the free space on a volume, delete unneeded files or move them to another disk.


While defrag is analyzing and defragmenting a volume, it displays a blinking cursor. When defrag is finished analyzing and defragmenting the volume, it displays the analysis report, the defragmentation report, or both reports, and then exits to the command prompt.
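
As a sketch (again assuming an elevated prompt and C: as the target volume), an analysis-only pass and a full defrag with progress output look something like this:

    :: Analyze only and print a verbose fragmentation report; makes no changes
    defrag C: /A /V

    :: Defragment the volume, showing progress (/U) and the full report (/V)
    defrag C: /U /V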


Running the defrag command and Disk Defragmenter are mutually exclusive. If you're using Disk Defragmenter to defragment a volume and you run the defrag command at a command line, the defrag command fails. Conversely, if you run the defrag command and open Disk Defragmenter, the defragmentation options in Disk Defragmenter are unavailable.


The defragmentation process runs as a scheduled maintenance task, which typically runs every week. As an administrator, you can change how often the task runs by using the Optimize Drives app.
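
If you'd rather check the schedule from the command line than open Optimize Drives, the underlying scheduled task can be inspected with schtasks; the task path below is the usual one on recent Windows builds, but verify it on your system:

    :: Show the weekly maintenance defrag task, including last and next run times
    schtasks /query /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /v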


For SSDs, traditional optimization processes include traditional defragmentation (for example, moving files to make them reasonably contiguous) and retrim. This is done once per month. However, if both traditional defragmentation and retrim are skipped, then analysis isn't run. Changing the frequency of the scheduled task doesn't affect the once-per-month cadence for SSDs.
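
If you don't want to wait for the monthly pass, you can trigger these manually; the flags below are standard defrag.exe switches, with C: standing in for whichever volume you mean:

    :: Retrim only (sends TRIM for the free space on an SSD volume)
    defrag C: /L

    :: Let defrag pick the proper optimization for the media type (retrim for SSDs, defrag for HDDs)
    defrag C: /O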


It's because something (often Windows Defender) has re-opened the files between you analyzing and actually clicking the defrag button.

You can't defrag files that are open.

The longer the gap between analyzing and defragging, the more likely it is that you'll see the message.


Run another analysis and it will not find as many fragmented files, if it finds any.

Any files that are still open will not be listed, but you may see the few 'aborted' ones again if they have since been closed, so just defrag them again.


The problem there is that at 90% capacity used, that disk is pretty full: you have run out of the free space required for a defragment to work.

Even the built-in Windows defragmenter won't work with an HDD that is that full.


A lot of what is listed there seems to be 'Features on Demand' packages. Are you actually using those features? If not, you can remove them.

-us/windows-hardware/manufacture/desktop/features-on-demand-v2--capabilities
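
If you decide to remove some, DISM does it; the capability name is something you should copy from the list output rather than guess:

    :: List installed Features on Demand (capabilities) from an elevated prompt
    DISM /Online /Get-Capabilities

    :: Remove one you don't use (substitute a name taken from the list above)
    DISM /Online /Remove-Capability /CapabilityName:<capability name from the list>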


"He was told by a VMWare engineer that if you run Windows defragmenation on the virtual server while also are using de-duplication on the storage, it can cause corruption of data on the virtual server and/or the VMDK"


Yes... great data there. I've seen reallocate really help as well (the most dramatic example was a bunch of GroupWise servers on FC disk where reallocate cut latencies by more than half). I'm still wrestling with when it makes sense to use "reallocate" vs. "reallocate -p" (whether there are any differences in how long they take to run, how they impact speed, exactly how much -p helps with snapshot deltas, etc.).
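
For reference, the two variants being compared look roughly like this in 7-Mode syntax (the volume path is a placeholder, and it's worth running a measure first; check the reallocate man page on your filer before trusting my flags):

    # Measure how poorly optimized the volume layout currently is
    reallocate measure /vol/vol1

    # One-time full logical reallocation of the volume
    reallocate start -f /vol/vol1

    # Physical reallocation (-p): optimizes block layout without rewriting logical
    # block numbers, which is supposed to be much gentler on snapshot deltas
    reallocate start -f -p /vol/vol1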


I'd add to your list the interaction with VSM. Data transferred after a reallocate will be defragmented, but after reallocate -p it will not (at least if I correctly understand how it works). This may need to be taken into account if the destination is often used for tasks like backup verification.


I noticed that data updates are written to free blocks, meaning the original block is not updated but kept, since it is referenced by snapshots made earlier. So, may I conclude fragmentation is inherent to NetApp? May I conclude Windows defrag might cause volumes to run out of space? May I conclude that (in case we have enough free space in the volume) the chance that less physical IO is initiated after a defrag is negligible, or even that in some cases the number of physical IOs might increase? May I conclude Windows will initiate fewer IOs since it thinks the data is sequentialized, but that the resulting number of IOs on NetApp is unpredictable? May I conclude that the SQL command "set statistics io on" does not tell me the truth about the number of physical reads executed on NetApp (or any other disk virtualisation/SAN system), only the number of physical IOs Windows or SQL thinks have to be done?


When I read this, I start to wonder whether SQL Server index rebuilds might no longer be best practice, since they will have the same effect on snapshots as Windows defrag. May I conclude that we benefit from HA, DR and fast restores, but that we should review best practices regarding IO optimisation?


It's never possible to fill the gaps completely, since it's unlikely you'll find files that fit them exactly; this is why Defraggler has the option "Defrag Freespace (allow fragmentation)", which fills the gaps at the expense of fragmentation.


If your files are scattered around the disk but not fragmented, then you're making it harder for Windows to store files contiguously, resulting in higher file fragmentation than if you had defragged the free space.


Note: defragmenting takes a long time, so if you decide to do this you should either be working directly on the console (i.e. with a keyboard and monitor attached directly to the system) or via SSH and screen (from the NerdPack plugin). If someone has success using Shell in a Box let me know, otherwise I recommend you avoid it.
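
For the SSH-and-screen route, something like this keeps the defrag alive if your connection drops (the session name is arbitrary):

    # Start a detachable session so xfs_fsr survives a dropped SSH connection
    screen -S defrag

    # ...run xfs_fsr inside the session, then detach with Ctrl-A d...

    # Reattach later to check on progress
    screen -r defrag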


Luckily, xfs_fsr creates its temp directory in the root of the disk, so it looks like a user share. To solve this I just went to the File Integrity Settings page and told it to exclude the .fsr directory (you'll have to start the defrag before you have the option to exclude the directory).


One of my drives had 7% directory fragmentation and no file fragmentation. I ran xfs_fsr but it had no effect. It is possible that Cache Dirs prevented changes to the directory structure, but I haven't looked into it.


Once you start the defrag, it will create the .xfs folder. Since it is in the root, it will look like a user share and you'll be able to exclude it. I modified the description to hopefully make that more clear.


Of my 4 data drives, only 1 of them was highly fragmented but only in the directory structure portion. That 4TB disk is 53% used at 2TB and had 17% directory fragmentation but only 6% file fragmentation. This drive is only used for TV Show episodes.


I wasn't able to find a "% complete" anywhere, so I don't know how to estimate how much longer it will run. It sounds like yours is running longer than mine did, but with such a small sample size I'm not sure what that proves.


One thing I did a couple of times was start another shell and re-run the xfs_db command while xfs_fsr was running, so I could see what progress it had made. It doesn't really tell you how much time is left though, since in my case it didn't take the fragmentation down to zero.
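
The check itself is just an xfs_db one-liner run read-only against the device, so it's safe to repeat while xfs_fsr is working; /dev/md1 is a placeholder for whatever your array calls the data disk:

    # Report fragmentation without modifying anything (-r = read-only)
    xfs_db -r -c "frag" /dev/md1       # overall
    xfs_db -r -c "frag -f" /dev/md1    # file data only
    xfs_db -r -c "frag -d" /dev/md1    # directories only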


Doing some more reading on this, and it seems you can defrag a particular file if desired. Also, you can give it a duration to run and it will only run for that long, but will produce a checkpoint file in /var/tmp/ so it can resume from that point the next time it's kicked off.
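
A couple of sketches of those options (the paths are placeholders; check man xfs_fsr for the exact behaviour on your version):

    # Defragment just one file instead of a whole filesystem
    xfs_fsr -v /mnt/disk1/media/some-large-file.mkv

    # Run for at most two hours (the -t value is in seconds); where it left off is
    # checkpointed under /var/tmp/ so the next invocation resumes from that point
    xfs_fsr -v -t 7200 /mnt/disk1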


To optimize a file, xfs_fsr creates a new copy of an existing fragmented file with fewer extents (fragments) than the original one had. Once the file contents are copied to the new file, the filesystem metadata is updated so that the new file replaces the old one. This implies that you need to have enough free space on the filesystem to store another copy of anything that you want to defragment. The free space issue extends to disk quotas as well; you cannot defragment a file if storing another complete copy of that file would exceed the disk quota of the user that owns that file.


By default, xfs_fsr will work on all your XFS drives for two hours (or a duration you specify) before stopping. I think the idea is that you could put it in a cron job and have it spend a few hours a day keeping things defragmented.
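
A minimal cron entry for that idea might look like the following (the schedule and duration are arbitrary examples):

    # Spend up to two hours defragmenting XFS filesystems every night at 03:00
    0 3 * * * /usr/sbin/xfs_fsr -t 7200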


The problem is that "all your XFS drives" includes SSDs, and I don't want it to defrag my SSD. It is possible to pass it a file that lists only the drives you want it to defrag, but I figured it would be easier to pass a single drive on the command line.
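
Either approach is just a different argument to the same command; the paths below are examples, not anything unRAID ships with:

    # Option A: hand it a single mount point so the SSD is never touched
    xfs_fsr -v /mnt/disk2

    # Option B: point it at an alternate mtab-style file (-m) that lists only the
    # filesystems you want defragmented
    xfs_fsr -v -m /boot/config/fsr-mtab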


I started on my second drive, which was 75% fragmented, and it apparently ignored my -t 21400 as it's been running for over 12 hours. I'm concerned that it may be locked up now, as the GUI is unresponsive and I can't access SMB shares or open another terminal to it. Has anyone else seen this? Any thoughts on how to recover?
