
Defragment Util for Unix?


Jung-Hoon Park

Oct 25, 1995
Hello,

Just wondering... after having Linux installed on my PC and using
it quite a while, I've of course added, removed, modified, and replaced
a number of files. I suspect that my hard drive is quite fragmented as a
result by now, but haven't seen any 'defrag' util.'s like the one for DOS.

	Is there such a util anyway? Or is the defragmentation done
automatically and I just don't know?

Thanks.


Phil Edwards

Oct 26, 1995
With <46lu1s$g...@Mercury.mcs.com>,
it seems Jung-Hoon Park (blin...@MCS.COM) sez:


+ Just wondering... after having Linux installed on my PC and using
+ it quite a while, I've of course added, removed, modified, and replaced
+ a number of files. I suspect that my hard drive is quite fragmented as a
+ result by now, but haven't seen any 'defrag' util.'s like the one for DOS.

There isn't one.


+ Is there such a util anyway? Or is the defragmentation done
+ automatically and I just don't know?

No, there isn't. It isn't done "automatically," either, because
Unix filesystems don't suffer from fragmentation, by their design.
(Well, not more than 1% or 2%.) If you don't know this, then you
might not want to run Linux; it generally requires some hacking and
technical knowledge.


Luck++;
Phil

--
#include<std/disclaimer.h> The gods do not protect fools. Fools
finger pedw...@gamma.cs.wright.edu are protected by more capable fools.
email pedw...@valhalla.cs.wright.edu -Larry Niven

Helmut Springer

Oct 26, 1995
Phil Edwards (pedw...@valhalla.cs.wright.edu) wrote:
: (Well, not more than 1% or 2%.) If you don't know this, then you
: might not want to run Linux; it generally requires some hacking and
: technical knowledge.

Sorry, that's IMHO not true...

I bet there are many roots running UNIX who don't know anything at all
about the UNIX filesystem and fragmentation. And since they don't know this
problem, e.g. from DOS, they don't ask.

regards
delta

--
helmut 'delta' springer Computing Center Stuttgart University (RUS), FRG
de...@RUS.Uni-Stuttgart.DE InfoSystems, Unix/Net Consulting, StudBox
http://www.uni-stuttgart.de/delta/
phone : +49 711 1319-112 If you've got to do it,
FAX : +49 711 1319-203 do it with cold blood...

Glen Johnson

Oct 26, 1995
Phil Edwards (pedw...@valhalla.cs.wright.edu) wrote:
: With <46lu1s$g...@Mercury.mcs.com>,
: it seems Jung-Hoon Park (blin...@MCS.COM) sez:

: + Is there such a util anyway? Or is the defragmentation done
: + automatically and I just don't know?

: No, there isn't. It isn't done "automatically," either, because
: Unix filesystems don't suffer from fragmentation, by their design.

By design, UNIX file systems encourage fragmentation more than most
proprietary (non-DOS) systems, if you define fragmentation as
having the blocks of an individual file allocated non-contiguously
on the disk. UNIX file systems that follow the Berkeley design
(minfree, cylinder groups, rotational delay, maxbpg) will force the
blocks of a file to be scattered across the disk.
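[The scattering Glen describes can be put in rough numbers. The sketch below is a hypothetical illustration, not from the thread; the file size, group spacing, and disk size are all invented. It compares average head travel for a sequential read of a contiguously laid-out file against the same file spread across cylinder-group-like regions.]

```python
# A toy model (numbers invented) of how block placement affects
# seek distance during a sequential read of one file.

def avg_seek(blocks):
    """Mean absolute distance between consecutively read block addresses."""
    return sum(abs(b - a) for a, b in zip(blocks, blocks[1:])) / (len(blocks) - 1)

# A 100-block file laid out contiguously starting at block 1000...
contiguous = list(range(1000, 1100))

# ...versus the same file scattered over ten hypothetical "cylinder
# groups" spaced 10000 blocks apart, 10 blocks per group.
scattered = [g * 10000 + i for g in range(10) for i in range(10)]

print(avg_seek(contiguous))  # 1.0 block per read
print(avg_seek(scattered))   # roughly 909 blocks per read
```

[In the toy model the scattered layout costs hundreds of times more head travel for a single sequential reader; the rest of the thread is about why that worst case rarely dominates in practice.]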

Glen


Mark Davis

Oct 26, 1995
grj...@bga.com (Glen Johnson) writes:

>Phil Edwards (pedw...@valhalla.cs.wright.edu) wrote:
>: With <46lu1s$g...@Mercury.mcs.com>,
>: it seems Jung-Hoon Park (blin...@MCS.COM) sez:


>: + Is there such a util anyway? Or is the defragmentation done
>: + automatically and I just don't know?

>: No, there isn't.

There are utility programs available commercially for many Unixes to
defragment filesystems.

>: It isn't done "automatically," either, because
>: Unix filesystems don't suffer from fragmentation, by their design.

>By design, UNIX file systems encourage fragmentation more than most
>proprietary (non-DOS) systems, if you define fragmentation as
>having the blocks of an individual file allocated non-contiguously
>on the disk. UNIX file systems that follow the Berkeley design
>(minfree, cylinder groups, rotational delay, maxbpg) will force the
>blocks of a file to be scattered across the disk.

Yes, which means that fragmentation in most Unix filesystems is no big deal,
and most are intelligent enough that performance only suffers
greatly when the disk starts to get very full.

Otherwise, the easiest way to defragment a file system is to tar it to tape
(verify it), delete all the files, then restore it from tape.
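[Mark's archive / delete / restore cycle can be sketched in outline. The helper below is a hypothetical illustration, not from the thread: it uses Python's tarfile module and a tar file on disk in place of a tape drive, with error handling simplified. Nothing like this should be run on a live filesystem without a tested backup.]

```python
# Sketch of the archive / delete / restore cycle, with a tar file on
# disk standing in for tape. Hypothetical helper, simplified handling.
import os
import shutil
import tarfile

def rewrite_tree(tree, archive):
    """Archive `tree`, verify the member list is readable, delete the
    tree, then restore it so its files get freshly allocated blocks."""
    with tarfile.open(archive, "w") as tar:
        tar.add(tree, arcname=os.path.basename(tree))
    with tarfile.open(archive) as tar:   # the "verify it" step, minimally
        members = tar.getnames()
    shutil.rmtree(tree)                  # delete all the files
    with tarfile.open(archive) as tar:   # restore from the archive
        tar.extractall(os.path.dirname(tree) or ".")
    return members
```

[On restore the filesystem allocates blocks for each file afresh, which on a mostly empty disk tends to yield more contiguous layouts; though, as Glen notes later in the thread, a BSD-style filesystem may still deliberately spread the blocks.]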
--
/--------------------------------------------------------------------------\
| Mark A. Davis | Lake Taylor Hospital | Norfolk,VA (804)-461-5001x431 |
| Director/SysAdmin | Information Systems | ma...@taylor.infi.net |
\--------------------------------------------------------------------------/

Yoo Chul Chung

Oct 27, 1995
pedw...@valhalla.cs.wright.edu (Phil Edwards) wrote:
>With <46lu1s$g...@Mercury.mcs.com>,
>it seems Jung-Hoon Park (blin...@MCS.COM) sez:
>
>
>+ Just wondering... after having Linux installed on my PC and using
>+ it quite a while, I've of course added, removed, modified, and replaced
>+ a number of files. I suspect that my hard drive is quite fragmented as a
>+ result by now, but haven't seen any 'defrag' util.'s like the one for DOS.
>
>There isn't one.
>

Actually, there is one called defrag at sunsite. Look at the Linux FAQ
in /usr/doc/faq/faq (if you use Slackware). But since the ext2 filesystem
doesn't fragment much in the first place, no one really needs it.

--
Yoo Chul Chung
E-mail: wa...@power3.snu.ac.kr, Webpage: http://dosa3.snu.ac.kr/~wackoen
---
I'm using a computer with a broken mail system.
Forgive me if I don't reply to e-mail. (And mailbomb ro...@power3.snu.ac.kr! ;)

JaDe

Oct 27, 1995
In days of yore (25 Oct 1995 13:01:32 -0500)
Jung-Hoon Park (blin...@MCS.COM) proclaimed:

:Hello,
: Just wondering... after having Linux installed on my PC and using
:it quite a while, I've of course added, removed, modified, and replaced
:a number of files. I suspect that my hard drive is quite fragmented as a
:result by now, but haven't seen any 'defrag' util.'s like the one for DOS.
: Is there such a util anyway? Or is the defragmentation done
:automatically and I just don't know?

The ext2fs filesystem (which has been the predominant one among
Linux users for the last year or two) is designed to
minimize fragmentation during normal operation.

I'd recommend periodically (once a year or so -- once per
fiscal quarter for a busy server) doing a backup (or two)
and a restore cycle if you're still concerned about it.

This will do more to ensure that your backup policies and
procedures are usable than it will for your underlying filesystem
-- but it shouldn't hurt -- if your backups are good.


--
/> JaDe | Star <\
/< \|/ >\
*[/////|:::====================- --*-- -=====================:::|\\\\\]*
\< /|\ >/
\> jade...@netcom.com | star...@netcom.com </

Andrew R. Tefft

Oct 31, 1995
In article <475f3a$r...@giga.bga.com>,
grj...@bga.com (Glen Johnson) writes:
>
>The reason one would defrag a file system is to try to improve
>performance by reducing the average seek time to the data on the
>file system. If the blocks of the files that are most often accessed
>are spread across the entire disk instead of just a portion of the
>disk, on average it takes longer to access the data.

This is only valid if you can read all the blocks in one go.
In a normal unix filesystem, this is unlikely because there will
be all sorts of other reads/writes going on, giving you pretty
random head movement. This is why many unix filesystems spread
files out among cylinder groups.
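[Andy's point about interleaved access can be sketched with invented numbers (this is an illustration, not from the thread): two files, each perfectly contiguous, still force long seeks when reads alternate between them, so per-file contiguity buys little under a mixed workload.]

```python
# Invented numbers: interleaved reads defeat per-file contiguity.

def total_seek(accesses):
    """Total head travel over a sequence of block addresses."""
    return sum(abs(b - a) for a, b in zip(accesses, accesses[1:]))

file_a = list(range(0, 100))        # contiguous file near the disk start
file_b = list(range(50000, 50100))  # contiguous file far away

# Reads alternating between the two files, block by block:
interleaved = [blk for pair in zip(file_a, file_b) for blk in pair]

print(total_seek(file_a))       # 99: reading one file alone is cheap
print(total_seek(interleaved))  # millions: the head shuttles between regions
```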


--

Andy Tefft - new, expanded .sig - tef...@erie.ge.com


Glen Johnson

Oct 31, 1995
Andrew R. Tefft (tef...@erie.ge.com) wrote:
: In article <475f3a$r...@giga.bga.com>,
: grj...@bga.com (Glen Johnson) writes:
: This is only valid if you can read all the blocks in one go.
: In a normal unix filesystem, this is unlikely because there will
: be all sorts of other reads/writes going on.

No, this is also very valid for indexed file (very random access).
If my most frequently accessed data only occupies 25% of the file
system it is better to have all of the blocks in only 25% of the
FS instead of spread across the entire FS.

If you have a nearly full FS and every block is just as likely to
be accessed as any other block, then fragmented files make no
difference in performance. My experience has been that in most
commercial applications, there is a subset of the total files
that are more likely to be accessed. Also, many (most) system
admins try to not have full disks. So if a disk is only 2/3
full it is still better to have the blocks of the files filling
2/3 of the FS instead of spread across the entire FS.

These arguments are only valid for FSs that are on single disks.
Striping across multiple disks with multiple file systems alters
the packed-FS benefits.

Glen Johnson


Glen Johnson

Oct 31, 1995
Mark Davis (ma...@taylor.infi.net) wrote:
: grj...@bga.com (Glen Johnson) writes:

: >By design, UNIX file systems encourage fragmentation more than most
: >proprietary (non-DOS) systems, if you define fragmentation as
: >having the blocks of an individual file allocated non-contiguously
: >on the disk. UNIX file systems that follow the Berkeley design
: >(minfree, cylinder groups, rotational delay, maxbpg) will force the
: >blocks of a file to be scattered across the disk.

: Yes, which means that fragmentation in most Unix filesystems is no big deal-
: and most are intelligent to the point that performance only suffers
: greatly when the disk is starting to get very full.

The reason one would defrag a file system is to try to improve
performance by reducing the average seek time to the data on the
file system. If the blocks of the files that are most often accessed
are spread across the entire disk instead of just a portion of the
disk, on average it takes longer to access the data.

On a test HP file system that was only 10 percent full I followed your
advice and backed up the files, did a newfs, and restored the files.
There were 28 files on the file system. The largest one only had
3000 blocks in it. After backing up and restoring, the 3000 blocks
of the file were spread across 48000 blocks of the file system (first
block was 486 and last was 48754). Obviously, accessing the blocks
of this file would be faster if they were contiguous instead of
spread across three fourths of the disk.
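[For illustration, the spread in Glen's measurement works out as follows, computed here from only the figures in his post:]

```python
# The spread in the HP test, computed from the figures in the post.
blocks_in_file = 3000
first_block, last_block = 486, 48754

span = last_block - first_block + 1   # region of the FS the file occupies
spread_ratio = span / blocks_in_file

print(span)                   # 48269, roughly the 48000 cited
print(round(spread_ratio, 1))  # 16.1: about one file block per 16 FS blocks
```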

Glen


Andrew R. Tefft

Nov 1, 1995
In article <4761s6$a...@giga.bga.com>,
grj...@bga.com (Glen Johnson) writes:
>Andrew R. Tefft (tef...@erie.ge.com) wrote:
>: In article <475f3a$r...@giga.bga.com>,
>: grj...@bga.com (Glen Johnson) writes:
>: >
>: >The reason one would defrag a file system is to try to improve
>: >performance by reducing the average seek time to the data on the
>: >file system. If the blocks of the files that are most often accessed
>: >are spread across the entire disk instead of just a portion of the
>: >disk, on average it takes longer to access the data.
>
>: This is only valid if you can read all the blocks in one go.
>: In a normal unix filesystem, this is unlikely because there will
>
>No, this is also very valid for indexed file (very random access).

A filesystem with a single indexed database is not what I would
classify as "normal", and the allocation methods of such a database
are exactly why using a raw partition for this sort of thing is
recommended over a file in a filesystem.

Glen Johnson

Nov 1, 1995
: >No, this is also very valid for indexed file (very random access).

: A filesystem with a single indexed database is not what I would
: classify as "normal", and the allocation methods of such a database
: are exactly why using a raw partition for this sort of thing is
: recommended over a file in a filesystem.

Rereading my post I see that the missing 's' after file is confusing.
I meant to say indexed files. There are millions of sites (and we are
looking for them since that's where we make much of our money) that use
ISAMs which cannot be placed in raw partitions.

Glen

