Server 2019 Defrag


Shawna Erholm

Aug 3, 2024, 1:26:26 PM
to olgerlere

In fact, I never defragment the data on my servers. I haven't seen enough of a performance gain in file serving to make it worth the performance cost of the time a defrag takes. Most servers won't ever really finish defragmenting unless you take them offline for a few days. If you're using a relatively modern file system (which, unless you changed the defaults on Windows 2003, you are), it shouldn't matter much anyhow. And if you're running any sort of striped RAID, file fragmentation is largely a non-issue, since the data is already spread across many disks.

If I have a server where I really want the data clean and defragmented for some reason, I am far more likely to back it all up to tape, wipe the drive, and restore it. That writes everything back down in perfect, contiguous blocks.

Just about the only use-case I know of for defragmenting a Windows server is to improve backup performance. Backups are just about the only large scale sequential I/O a file-server does, and that's the kind of I/O that notices fragmentation. The kind of I/O file-servers do when users are hitting them is very random, and in that case fragmentation can sometimes improve performance.

At my old job we had a file-server that we'd just migrated to new hardware. Immediately after the migration, the backups were running on the order of 450MB/Minute (this was many years ago, mind). Two years later, that server was backing up around 300MB/Minute. We then defragged it for the first time, and speeds rose back to 450MB/Minute again.

The other use-case for defrag is a backup-to-disk system with the archive stored on NTFS. Backup and restore on that kind of volume is entirely sequential, and that notices fragmentation. However, if the underlying storage is abstracted enough (such as an HP EVA disk array), even this kind of I/O won't notice fragmentation.

If you are running specific types of applications that cause unavoidable fragmentation, you may wish to invest in a server-specific defragmentation program (these are designed to run continuously in the background and defrag when/if needed). In a Windows environment, the applications that cause unavoidable fragmentation are those that do a lot of lazy writing across multiple files. Most robust server-grade software avoids this, but something like a desktop download manager, and especially some BitTorrent clients, exhibits this kind of aggressive fragmentation behavior.

I ran Diskeeper on the servers in an earlier job and saw a measurable performance improvement on both file servers and application servers. I don't think we got near their published stats, but we definitely saw some benefits.

One tool to think about is Smart Defrag, by IOBit. It defrags in the background while your computer is idle, and has Deep Optimize among other capabilities. It seems useful, so you could put it on there and not have to worry about defragmenting.

If defrag time/scheduling is a concern, a background defrag solution like one of the Diskeeper Server editions (not free!) is a good choice. It defrags using only idle resources, so there ought to be no impact even on a production server. Some of our servers here use DK, and the admins seem pretty pleased with it.

BTW, some of the BT clients (utorrent comes to mind) have a pre-allocation option for the torrent, so there is no fragmentation during downloads, as long as there is sufficient contiguous free space to accommodate the file.
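You can do the same kind of pre-allocation yourself on NTFS. A minimal sketch using the built-in fsutil tool (the path and size here are examples only): creating the file at its full size in one step gives the file system a chance to allocate the space contiguously, instead of growing the file piecemeal as data arrives.

```shell
:: Run from an elevated cmd prompt. Path and size are placeholders.
:: Create a 100 MB (104857600-byte) zero-filled file in one allocation,
:: so later writes into it land in already-allocated space.
fsutil file createnew D:\downloads\placeholder.bin 104857600
```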

However, TechNet's article on doing Physical to Virtual conversions recommends defragmentation as a method to reduce the amount of time required to do a P2V. This is especially important if you have a limited maintenance window in which to complete your P2V.

To help minimize the time required for the imaging phase, perform a disk defragmentation on the source computer's hard drives. Also, ensure that you have a fast network connection between the source computer and the host.

Defrag cannot run on a volume that the file system has marked as dirty, which indicates possible corruption. You must run chkdsk before you can defragment that volume or drive. You can determine whether a volume is dirty by using the fsutil dirty command.
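The check-then-repair sequence described above looks like this at the command line (drive letter C: is just an example):

```shell
:: Query the dirty bit; prints e.g. "Volume - C: is NOT Dirty".
fsutil dirty query C:

:: If the volume is dirty, repair it first. On an in-use volume,
:: chkdsk will offer to schedule the check for the next reboot.
chkdsk C: /f

:: Only once the volume is clean, run the defragmenter.
defrag C:
```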

To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure. As a security best practice, consider using Run As to perform this procedure.

A volume must have at least 15% free space for defrag to defragment it completely. defrag uses this space as a sorting area for file fragments. If a volume has less than 15% free space, defrag will only partially defragment it. To increase the free space on a volume, delete unneeded files or move them to another disk.
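A quick way to verify that 15% headroom before starting, sketched in PowerShell (the drive letter is an example; Get-Volume is available on Server 2012 and later):

```shell
# Compute percent free on C: and compare against defrag's 15% guideline.
$vol = Get-Volume -DriveLetter C
$pctFree = [math]::Round(100 * $vol.SizeRemaining / $vol.Size, 1)
if ($pctFree -lt 15) {
    Write-Warning "Only $pctFree% free; defrag will be partial."
} else {
    Write-Output "$pctFree% free; enough sorting space for a full defrag."
}
```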

While defrag is analyzing and defragmenting a volume, it displays a blinking cursor. When defrag is finished analyzing and defragmenting the volume, it displays the analysis report, the defragmentation report, or both reports, and then exits to the command prompt.
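To get just the analysis report without actually moving any data, defrag has an analyze-only mode (switches as documented in the built-in defrag /? help):

```shell
:: Analyze only (/A), verbose report (/V), print progress (/U).
defrag C: /A /V /U
```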

Running the defrag command and Disk Defragmenter are mutually exclusive. If you're using Disk Defragmenter to defragment a volume and you run the defrag command at a command line, the defrag command fails. Conversely, if you run the defrag command and then open Disk Defragmenter, the defragmentation options in Disk Defragmenter are unavailable.

The defragmentation process runs as a scheduled maintenance task, which typically runs every week. As an administrator, you can change how often the task runs by using the Optimize Drives app.
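On recent Windows versions that weekly run is the built-in ScheduledDefrag task, which you can inspect from the command line (task path as shipped by Windows):

```shell
:: Show the maintenance task that Optimize Drives schedules.
schtasks /Query /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /V /FO LIST
```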

Traditional optimization processes include traditional defragmentation (for example, moving files to make them reasonably contiguous) and retrim. These run once per month. However, if both traditional defragmentation and retrim are skipped, the analysis isn't run either. Changing the frequency of the scheduled task doesn't affect the once-per-month cadence for SSDs.

My boss is listening to our MSP and believes that defragging the virtual server (2008 R2) at the guest OS level is going to make a noticeable performance difference. My boss loves our MSP. My position with the company is a very long story, and I'm not getting into that here.

This comes up time and time again; the simple answer is not to bother. Apart from adding I/O to the guest and the underlying disks, your data lives inside a flat file, and that flat file is what would actually need to be defragged. If you run thin-provisioned disks, a defragment will simply bloat them (expand them out), because it writes data out to new locations, so the space used increases unnecessarily.

But with any dynamic disk, or a thin disk on the VMware side, moving blocks inside the guest only expands the disk without any actual new content behind it: the blocks move, the disk grows, and the space used is more than it needs to be.

I agree there is some benefit, as I noted above, but the gain is more trouble and I/O than it's worth. It does depend on the disk structure (thick vs. thin provisioning, dynamic vs. static disks), but overall the gain is going to cost a lot of I/O and disk wear.

Be aware that from a VMware perspective it also interferes with Changed Block Tracking (CBT), which backups rely on. If tiered storage is in use, defragging is not recommended. There are lots of reasons against it and a few for it; the choice is ultimately yours.


In ORACLE, it is frowned upon to defrag a database at the Operating System level. There are different methods/strategies to reclaim unused space and keep row data together. Is this the same for SQL Server?

Defragmenting within the database (i.e., at the table level) has almost always solved our performance issues, so we never had to go to the extent of OS-level defragmentation. One of the reasons was that the database had to be shut down to unlock the files, and nobody really knew how long that would take. But I believe defragmenting at the DB level should yield most of the benefits.
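For SQL Server, the table-level equivalent is index maintenance. A minimal sketch via sqlcmd, using the standard fragmentation DMV and ALTER INDEX commands (server, database, and table names here are placeholders):

```shell
:: Check logical fragmentation for one table's indexes (names are examples).
sqlcmd -S localhost -d MyDb -Q "SELECT index_id, avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED');"

:: Lightweight, online defrag of those indexes; use REBUILD for
:: heavier fragmentation.
sqlcmd -S localhost -d MyDb -Q "ALTER INDEX ALL ON dbo.MyTable REORGANIZE;"
```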

Due to the underlying dynamics of disk head movement, any database system (Oracle, SQL Server, DB2, etc.) will suffer a performance loss if the files containing the database are badly fragmented. This applies regardless of operating system (Windows, Unix, Linux, etc.). The use of disk systems with a large cache, such as SANs, will hide most of the impact of file fragmentation.

Some SANs have no concept of a contiguous file at the physical level. These treat the disk tracks as a giant car park and put each track of data in the first free slot. However, even with these systems, if the file appears fragmented at the operating-system level, there is some CPU overhead in chaining through a large number of extents.

Regular growth and shrinkage of database files is a good way to fragment them. For this reason, autoshrink should be turned off, and SHRINKFILE should only be used if a permanent reduction in space usage is expected.
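On SQL Server, checking and disabling autoshrink is a one-liner per database via sqlcmd (server and database names are placeholders):

```shell
:: See which databases currently have autoshrink enabled.
sqlcmd -S localhost -Q "SELECT name, is_auto_shrink_on FROM sys.databases;"

:: Turn it off for a specific database.
sqlcmd -S localhost -Q "ALTER DATABASE [MyDb] SET AUTO_SHRINK OFF;"
```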

Typically, defragmentation at the physical file level requires the database files to be offline, because the defrag program ignores any files that are in use. Often this means the database manager must be stopped, since defragmenting one file or one database at a time is far less efficient and slower than defragmenting the whole disk. Because it is hard to predict how long a disk defrag will take, very few sites do regular file defrags, given the database outage required.
