Defragmented Windows 10


Ophelia Gurin

Aug 5, 2024, 1:37:58 AM
to guipalira
I searched but couldn't find those articles. Not only that - I found many articles claiming the opposite. So, could someone please explain this? Perhaps, as with many other things, the answer is "it depends" - so what does it depend on?

Keep in mind that my question is not if there's a need for a 3rd party defragmenter, or if there's a need for scheduling a defrag, etc. My question is: On a Windows 10 system that is hardly ever idle, where all drives are the default NTFS, is there a need to use Windows' defragmenter?


As I often say, the guys who wrote the OS are pretty smart and usually know better (the initial releases of NTFS did not ship with a first-party defragmenter, and it was later found to be needed). While this article is a decade old, NTFS is NTFS, and much of what it says is probably still fundamentally true. NTFS is designed to minimise fragmentation, but eliminating it entirely without some online, in-process defragmentation is hard.


Windows doesn't constantly defragment your hard disk. It periodically checks and defragments if needed. Windows also tries its best to keep files compact as they're created - fragmentation mostly happens when files are modified.


So the scheduled, or even manual, defragmentation process is a preventive maintenance check. It's a little like checking your oil: you can pull out the dipstick and check, or have some fancy thing that checks it for you (some cars have that), and most of the time you'll be fine. If you aren't, you'll be in trouble and wish you had checked.


As such, I'd leave the defaults as they are. Manual defragmentation runs are probably no longer needed on Windows 7 and later - the system runs an automatic check every week by default and defragments as needed.


Older filesystems such as FAT store files in a linear order, and when a file is removed, the empty space it leaves is later filled with new data. This causes fragmentation, and in these cases defragmenting the hard disk will help.


Because a fragmented hard disk becomes slower, developers wanted a way around the issue. The NTFS filesystem stores files differently: because of how files are allocated, fragmentation is less likely to occur, and for that reason you will likely not need to defragment the drive as often.


Defragmentation is only unneeded if you don't mind a slow computer. Some filesystems need to be defragmented more often than others. The probability of fragments being created is higher in a nearly full partition than in a mostly empty one - and that rule is filesystem independent.
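That last point - a nearly full partition fragments faster - is easy to see with a toy model. The sketch below is purely illustrative (the first-fit allocator and the `sim` helper are invented for the demo, not how any real filesystem works): it counts how many separate extents a new file would need on a disk whose blocks are randomly in use.

```python
import random

def free_runs(free_map):
    """Yield the lengths of contiguous free-block runs."""
    run = 0
    for free in free_map:
        if free:
            run += 1
        elif run:
            yield run
            run = 0
    if run:
        yield run

def fragments_needed(free_map, size):
    """How many separate extents a new file of `size` blocks would
    occupy under naive first-fit allocation; None if it doesn't fit."""
    extents = 0
    for run in free_runs(free_map):
        size -= min(run, size)
        extents += 1
        if size == 0:
            return extents
    return None

def sim(used_fraction, disk=1000, file_size=20, trials=200):
    """Average extent count for a new file on a disk whose blocks
    are each used with probability `used_fraction`."""
    total = 0
    for _ in range(trials):
        fm = [random.random() > used_fraction for _ in range(disk)]
        frags = fragments_needed(fm, file_size)
        total += frags if frags is not None else disk  # no fit = worst case
    return total / trials

random.seed(1)
print("avg extents at 20% used:", sim(0.20))
print("avg extents at 90% used:", sim(0.90))
```

The fuller disk hands out many short free runs, so the same file lands in far more pieces.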


I know that Windows sometimes uses its defrag program to defragment my partition when the system goes idle. I also used to run defrag myself in order to improve disk utilization and boot-time performance.


But is there a way to know whether it has ever been executed, when it happened, and how many times? Does Windows keep a log in its registry or in Event Viewer that would reveal such information?


I believe that you have to create a scheduled task for it to log the activity. You could possibly set up a .bat file to do the logging as well. I've always found it odd that some Windows tasks are not logged automatically.
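For what it's worth, completed optimizer runs do usually show up in the Application event log under the "Microsoft-Windows-Defrag" provider (event ID 258 is the completion event on Windows 10 - worth verifying on your own system). Here's a small Python sketch that builds the matching wevtutil query:

```python
import subprocess

DEFRAG_PROVIDER = "Microsoft-Windows-Defrag"

def defrag_event_query(event_id=258, count=20):
    """Build a wevtutil command listing the most recent defrag
    events from the Application log, newest first."""
    xpath = (f"*[System[Provider[@Name='{DEFRAG_PROVIDER}'] "
             f"and (EventID={event_id})]]")
    return ["wevtutil", "qe", "Application",
            f"/q:{xpath}", "/rd:true", "/f:text", f"/c:{count}"]

# On a Windows box you would run it like this:
# print(subprocess.run(defrag_event_query(),
#                      capture_output=True, text=True).stdout)
print(" ".join(defrag_event_query()))
```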


I'm going to argue that you're asking the wrong question, because of one small fact: why do you care how often a drive has been defragmented? How would you know if those defrag operations all completed successfully? What purpose would that information serve to you?


You shouldn't care how often a drive is defragged; you should care how much it's fragmented, especially once that starts to impact performance. I personally don't know exactly where that point is, but I do know that on Windows you can run defrag X: /a /v to view a verbose (/v) analysis (/a) of the drive before defragmenting it. This gives you an approximate percentage of how fragmented the disk is.


In my opinion, anything higher than a few percent is worth a quick defrag pass. It would be trivial to write a batch file to automate this, defragging the drive only when it reaches a certain threshold. In that same batch file you could also log each defrag run to a file, so you could keep count if you wanted. As ioi also mentioned, you could use a scheduled task to do this.
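A rough sketch of that threshold-and-log idea, in Python rather than batch. The regex assumes the "fragmented space = N%" line that defrag X: /a /v prints on recent Windows versions, and defrag.log is a made-up path - adjust both for your system:

```python
import datetime
import re
import subprocess

THRESHOLD = 5            # percent fragmentation worth acting on
LOGFILE = "defrag.log"   # hypothetical log location

def parse_fragmentation(analysis_text):
    """Pull the fragmentation percentage out of `defrag X: /a /v` output;
    returns None if the expected line isn't found."""
    m = re.search(r"fragmented space\s*=\s*(\d+)\s*%", analysis_text, re.I)
    return int(m.group(1)) if m else None

def maybe_defrag(drive="C:"):
    """Analyze the drive, defrag only above THRESHOLD, and log the run."""
    analysis = subprocess.run(["defrag", drive, "/a", "/v"],
                              capture_output=True, text=True).stdout
    pct = parse_fragmentation(analysis)
    if pct is not None and pct > THRESHOLD:
        subprocess.run(["defrag", drive])
        with open(LOGFILE, "a") as log:
            log.write(f"{datetime.datetime.now().isoformat()} "
                      f"defragmented {drive} at {pct}%\n")
```

Run maybe_defrag() from a scheduled task and the log file doubles as your run history.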


On Windows Scheduled Tasks

Here there is a scheduled task called "ScheduledDefrag". Here I can see the next execution date and, if task history was previously enabled, the last run date.


There's a general rule of thumb or statement that "defragging an SSD is always a bad idea." I think we can agree we've all heard this before. We've all been told that SSDs don't last forever and when they die, they just poof and die. SSDs can only handle a finite number of writes before things start going bad. This is of course true of regular spinning rust hard drives, but the conventional wisdom around SSDs is to avoid writes that are perceived as unnecessary.


One of the most popular blog posts on the topic of defrag and SSDs under Windows is by Vadim Sterkin. Vadim's analysis has a lot going on. He can see that defrag is doing something, but it's not clear why, how, or for how long. What's the real story?


As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage, this queue may only grow to a maximum number of trim requests. If the queue is at its maximum size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.
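That bounded-queue-plus-Retrim behaviour can be sketched with a toy model (the class, queue size, and method names here are invented for illustration - the real NTFS/Storage Optimizer machinery is far more involved):

```python
class ToySsd:
    """Toy model: a filesystem queues TRIM hints for the device,
    drops them when the queue is full, and relies on a periodic
    Retrim pass to re-send hints for all free space."""

    def __init__(self, max_queue=4):
        self.max_queue = max_queue
        self.queue = []       # pending TRIM requests (block numbers)
        self.trimmed = set()  # blocks the device knows are free
        self.dropped = 0      # hints lost to a full queue

    def free_block(self, block):
        # Freeing space queues an asynchronous TRIM request...
        if len(self.queue) < self.max_queue:
            self.queue.append(block)
        else:
            self.dropped += 1  # ...unless the queue is already full

    def flush(self):
        # The device eventually processes whatever was queued.
        self.trimmed.update(self.queue)
        self.queue.clear()

    def retrim(self, all_free_blocks):
        """Periodic Retrim: re-send TRIM for *all* free space in
        chunks small enough to never overflow the queue."""
        blocks = list(all_free_blocks)
        for i in range(0, len(blocks), self.max_queue):
            self.queue.extend(blocks[i:i + self.max_queue])
            self.flush()

ssd = ToySsd(max_queue=4)
for b in range(10):       # free ten blocks in a burst
    ssd.free_block(b)
ssd.flush()
print("dropped hints:", ssd.dropped)   # 6 of 10 were lost
ssd.retrim(range(10))                  # the Retrim pass recovers them
print("device now knows about", len(ssd.trimmed), "free blocks")
```

The burst overflows the queue and most hints are lost, but the periodic Retrim walks all the free space in queue-sized chunks, so the device ends up with the complete picture anyway.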


When he says volume snapshots or "volsnap" he means the Volume Shadow Copy system in Windows. This is used and enabled by Windows System Restore when it takes a snapshot of your system and saves it so you can roll back to a previous system state. I used this just yesterday when I installed a bad driver. A bit of advanced info here - Defrag will only run on your SSD if volsnap is turned on, and volsnap is turned on by System Restore, as one needs the other. You could turn off System Restore if you want, but that turns off a pretty important safety net for Windows.


First, yes, your SSD will get intelligently defragmented once a month. Fragmentation, while less of a performance problem on SSDs than on traditional hard drives, is still a problem. SSDs *do* get fragmented.


It's also worth pointing out that what we (old-timers) think about as "defrag.exe" as a UI is really "optimize your storage" now. It was defrag in the past and now it's a larger disk health automated system.


Additionally, there is a maximum level of fragmentation that the file system can handle. Fragmentation has long been considered primarily a performance issue with traditional hard drives. When a disk gets fragmented, a single file can exist in pieces at different locations on the physical drive. The drive then needs to seek around collecting the pieces of the file, and that takes extra time.


This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file system's metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.


SSDs also have the concept of TRIM. While TRIM (and Retrim) is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and the schedule is managed by the same UI from the user's perspective. TRIM is a way to mark data blocks as no longer in use. Writing to empty blocks on an SSD is faster than writing to blocks in use, as those need to be erased before they can be written again. SSDs internally work very differently from traditional hard drives and don't inherently know which sectors are in use and which are free space - deleting something just means marking it as not in use. TRIM lets the operating system notify the SSD that a page is no longer in use, and this hint gives the SSD more information, which results in fewer writes and, theoretically, a longer operating life.


However, this stuff is handled by Windows today in 2014, and you can trust that it's "doing the right thing." Windows 7, along with 8 and 8.1, comes with appropriate and intelligent defaults, and you don't need to change them for optimal disk performance. This is also true of Server SKUs like Windows Server 2008 R2 and later.


No, Windows is not foolishly or blindly running a defrag on your SSD every night, and no, Windows defrag isn't shortening the life of your SSD unnecessarily. Modern SSDs don't work the same way that we are used to with traditional hard drives.


Yes, your SSD's file system sometimes needs a kind of defragmentation, and that's handled by Windows, monthly by default, when appropriate. The intent is to maximize performance and lifespan. If you disable defragmentation completely, you are taking the risk that your filesystem metadata could reach maximum fragmentation and potentially get you in trouble.


Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Defragmenting a hard drive can improve a computer's or laptop's performance and speed. To reduce fragmentation, a disk optimization tool typically uses compaction to free up larger contiguous areas of space. Certain disk defragmentation tools might also try to keep smaller files together, especially if they're often accessed sequentially.


Fragmentation doesn't happen as much on Linux filesystems, largely because of how they allocate space: files are scattered across the disk with free space around them, so a file can usually grow in place without fragmenting.
