
DEFRAG for Linux


From: der...@spry.com
Date: Jul 21, 1996

Hello,
Anyone know of a disk defragmenter for Linux? Does Linux even
need one, in the way that MS-DOS (and any other OS I've dealt with) does?

Any help is greatly appreciated. Thanks!


Please reply via email,

Derek Simkowiak
der...@spry.com

From: Jim Nance
Date: Jul 22, 1996

In article <4su965$m...@lal.interserv.net>, der...@spry.com wrote:

> Anyone know of a disk defragmenter for Linux? Does Linux even
>need one, in the way that MS-DOS (and any other OS I've dealt with) does?

The ext2 file system is designed not to need one. It sort of defrags itself
as it runs. Even so, I think someone wrote one at one time, but I have not
heard anything about it in several years.

Jim


From: Frank Wagner
Date: Jul 22, 1996

der...@spry.com wrote:

> Anyone know of a disk defragmenter for Linux? Does Linux even


Hello,

I don't know if it works, but I saw something like this at
sunsite.unc.edu/system/Filesystems/ext/defrag.pre-0.3.tar.gz

Frank

--
fr...@sazza.ruhr.de
2:2448/4501.45

From: Trent Piepho
Date: Jul 22, 1996

In article <4t0nvk$1...@sazza.ruhr.de>,

Frank Wagner <fr...@sazza.ruhr.de> wrote:
>der...@spry.com wrote:
>
>> Anyone know of a disk defragmenter for Linux? Does Linux even
>
>i don't known if it works, but i saw somthing like this at
>sunsite.unc.edu/system/Filesystems/ext/defrag.pre-0.3.tar.gz

I've got defrag 0.6; there might be a newer one. It does work, but as someone
else pointed out, ext2 is designed to resist fragmentation.


--
|Gazing up to the breeze of the heavens \ on a quest, meaning, reason |
|came to be, how it begun \ all alone in the family of the sun |
|curiosity teasing everyone \ on our home, third stone from the sun. |
|Trent Piepho (xy...@u.washington.edu) -- Metallica |

From: Christopher B. Browne
Date: Jul 23, 1996

In article <4t13li$s...@nntp5.u.washington.edu>, Trent Piepho wrote:
>I've got defrag 0.6, there might be a newer one. It does work, but as someone
>else pointed out, ext2 is designed to resist fragmentation.

The place where defragmentation actually proves useful with "modern"
file systems is in managing logical volumes. This means that one can
keep several disks busy seeking for desired information all at once.
(Probably only useful on a SCSI bus, although I keep hearing rumours
that IDE may someday support multiple simultaneous asynchronous
commands...)

If you assign 4 partitions to a volume, and decide you need one of them
back for some other purpose, you'll need to "defragment" data out of
the desired partition.

I should note that if one were to use striping, that actually represents
an extreme form of *explicit* fragmentation of disk space... A proper
"defrag" utility for LVM-ext3 should be capable of performing increases
in fragmentation as well as reductions, all in the name of improving
performance/reliability.

And I thought that using VFSes one *could* do some limited logical
volume management... Has anyone been doing this in practice?
--
Christopher B. Browne, cbbr...@conline.com, chris_...@sdt.com
Web: http://www.conline.com/~cbbrowne SAP Basis Consultant, UNIX Guy,
Linux Guy. "Windows? Ah... The Athena Project from MIT..."


From: Randy Young
Date: Jul 25, 1996

On 25 Jul 1996 19:41:34 +0930, David Colquhoun <ko...@adelaide.DIALix.oz.au> wrote:
>Also is there a utillity to calculate used and free space for an EXT2
>Filesystem.

There sure is, a very simple program. df is the command to use.

--
Randy Young
rwy...@pacbell.com
Speaking strictly for myself!


From: Daniel Jimenez
Date: Jul 25, 1996

In article <4t7h8m$hgr$1...@adelaide.DIALix.oz.au>,
David Colquhoun <ko...@adelaide.DIALix.oz.au> wrote:

>fr...@sazza.ruhr.de (Frank Wagner) writes:
>>der...@spry.com wrote:
>>> Anyone know of a disk defragmenter for Linux? Does Linux even
>
>>Hello,

>
>>i don't known if it works, but i saw somthing like this at
>>sunsite.unc.edu/system/Filesystems/ext/defrag.pre-0.3.tar.gz
>
>>Frank

>
>Also is there a utillity to calculate used and free space for an EXT2
>Filesystem.
>
>I'm thinking of something like DOS CHKDSK , for Linux .
>It is such a basic thing to know that I suspect it will be included in
>Slackware.
>
>I just cant find anything nomatter how hard I RTFM....
>
>Thanks.

'df' is the command to check disk usage. It will tell you, for each
filesystem mounted, how many blocks it has total, how many are free, and
how many are used. It will also tell you what percentage of user-accessible
space has been used up.
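A quick illustration of both commands; the mount point and directory below are arbitrary examples:

```shell
# Summarize free space per mounted filesystem, in 1K blocks:
df -k /
# Summarize how much space a directory tree is taking:
du -sk /tmp
```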

From the FM, a.k.a. the Linux FAQ (posted by Ian Jackson to
comp.os.linux.answers):

-----------------------------------------------------------------------------

Question 4.3. Is there a defragmenter for ext2fs etc. ?

Yes. There is a Linux filesystem defragmenter for ext2, minix and
old-style ext filesystems available on sunsite.unc.edu in
system/Filesystems/defrag-0.6.tar.gz.

Users of the ext2 filesystem can probably do without defrag since ext2
contains extra code to keep fragmentation reduced even in very full
filesystems.

-----------------------------------------------------------------------------
--
Daniel Jimenez ad...@crl.com, djim...@crl.com
ADRAS Computing http://www.crl.com/~adras

From: David Colquhoun
Date: Jul 25, 1996

fr...@sazza.ruhr.de (Frank Wagner) writes:

>der...@spry.com wrote:

>> Anyone know of a disk defragmenter for Linux? Does Linux even


>Hello,

>i don't known if it works, but i saw somthing like this at
>sunsite.unc.edu/system/Filesystems/ext/defrag.pre-0.3.tar.gz

>Frank

>--
>fr...@sazza.ruhr.de
>2:2448/4501.45

Also, is there a utility to calculate used and free space for an EXT2
filesystem?

I'm thinking of something like DOS CHKDSK, for Linux.
It is such a basic thing to know that I suspect it will be included in
Slackware.

I just can't find anything no matter how hard I RTFM....

Thanks.

--
------------------------------------------------------------------------
Sorry, Mail is broken.
*** ALL replies to NewsGroup Please ***
------------------------------------------------------------------------

From: Jeff Dege
Date: Jul 26, 1996

On 25 Jul 1996 19:41:34 +0930, David Colquhoun (ko...@adelaide.DIALix.oz.au) wrote:
:
: Also is there a utillity to calculate used and free space for an EXT2
: Filesystem.
:
: I'm thinking of something like DOS CHKDSK , for Linux .
: It is such a basic thing to know that I suspect it will be included in
: Slackware.
:
: I just cant find anything nomatter how hard I RTFM....

Have you tried apropos?

domus:~$ apropos disk | head
bdflush (8) - kernel daemon to flush dirty buffers back to disk.
cfdisk (8) - Curses based disk partition table manipulator for Linux
df (1) - summarize free disk space
du (1) - summarize disk usage
fd (4) - floppy disk device
fdformat (8) - Low-level formats a floppy disk
fdisk (8) - Partition table manipulator for Linux
fsync (2) - synchronize a file's in-core state with that on disk
hd (4) - MFM/IDE hard disk device
mformat (1) - add an MSDOS filesystem to a low-level formatted diskette.

As you can see above, Unix provides df and du to display disk used and
disk free. Most of what CHKDSK.EXE does, though, is to clean up
screwed up file systems. The Unix equivalent is fsck. I can nearly
guarantee, though, that your system is configured to run fsck on all of
your partitions every time you boot.
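That boot-time check is driven by the sixth field (fs_passno) of /etc/fstab; a sketch of what such entries might look like, with made-up device names:

```
# /etc/fstab -- the 6th field tells fsck the check order at boot:
# 1 = check first (the root fs), 2 = check afterwards, 0 = skip.
/dev/hda1   /       ext2   defaults   1  1
/dev/hda2   /usr    ext2   defaults   1  2
```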

--
Politician, n.:
An eel in the fundamental mud upon which the superstructure of
organized society is reared. When he wriggles, he mistakes the
agitation of his tail for the trembling of the edifice. As compared
with the statesman, he suffers the disadvantage of being alive.
-- Ambrose Bierce, "The Devil's Dictionary"


From: Tyson Bigler
Date: Jul 27, 1996

In article <4t7h8m$hgr$1...@adelaide.DIALix.oz.au>,
David Colquhoun <ko...@adelaide.DIALix.oz.au> wrote:
>fr...@sazza.ruhr.de (Frank Wagner) writes:
>
>>der...@spry.com wrote:
>
>>> Anyone know of a disk defragmenter for Linux? Does Linux even
>
>
>>Hello,
>
>>i don't known if it works, but i saw somthing like this at
>>sunsite.unc.edu/system/Filesystems/ext/defrag.pre-0.3.tar.gz
>
>>Frank
>
>>--
>>fr...@sazza.ruhr.de
>>2:2448/4501.45
>
>Also is there a utillity to calculate used and free space for an EXT2
>Filesystem.
>
>I'm thinking of something like DOS CHKDSK , for Linux .
>It is such a basic thing to know that I suspect it will be included in
>Slackware.

I've always been told (and am of the opinion) that you do not need to defrag
a Unix partition (whether it be ext2 or not). I could be wrong.

As for a "utility" to calculate free and used space, how 'bout
df [directory]

and to see things like "how much space is this directory taking?"
du -s *


not sure why you use chkdsk to do this under dos, but hey, to each his
own!


Tyson
--
/\ To Bill Gates: You can dress a turd any way you like\
\_| but it's still a turd! |
| ___________________________________________________|___
\_/______________________________________________________/

From: John Stevens
Date: Jul 29, 1996

In article <4t7h8m$hgr$1...@adelaide.DIALix.oz.au>,
David Colquhoun <ko...@adelaide.DIALix.oz.au> wrote:
>fr...@sazza.ruhr.de (Frank Wagner) writes:
>
>>der...@spry.com wrote:
>
>>> Anyone know of a disk defragmenter for Linux? Does Linux even
>
>
>>Hello,
>
>>i don't known if it works, but i saw somthing like this at
>>sunsite.unc.edu/system/Filesystems/ext/defrag.pre-0.3.tar.gz
>
>>Frank
>
>>--
>>fr...@sazza.ruhr.de
>>2:2448/4501.45
>
>Also is there a utillity to calculate used and free space for an EXT2
>Filesystem.
>
>I'm thinking of something like DOS CHKDSK , for Linux .
>It is such a basic thing to know that I suspect it will be included in
>Slackware.
>
>I just cant find anything nomatter how hard I RTFM....
>
>Thanks.

For used/free disk space, use df.

For checking a file system's integrity, use fsck (well, actually, the
appropriate fsck for the file system type, but fsck is the front end
program. . . )

I can't figure out why anybody would need a "defragmentation program" for an
ext2 type file system. . .

John S.

From: Dale Pontius
Date: Jul 30, 1996

In article <4t0csg$7...@rtp-avnt-gw.avanticorp.com>,
jln...@avanticorp.com (Jim Nance) writes:

>In article <4su965$m...@lal.interserv.net>, der...@spry.com wrote:
>
>> Anyone know of a disk defragmenter for Linux? Does Linux even
>>need one, in the way that MS-DOS (and any other OS I've dealt with) does?
>
>The ext2 file system is designed not to need one. It sort of defrags itself
>as it runs. Even so I think someone wrote one at one time, but I have not
>heard anything about it in several years.
>
I've run OS/2 on HPFS for years, and am starting to look into Linux.

This defrag question interested me, because HPFS does a very good
job of resisting fragmentation and 'self-healing', as well. It's
interesting to see the same claim made for ext2fs.

I think I'm fairly well aware of most of the differences between
ext2fs and HPFS. (security, links, and case-sensitive being the
biggest) But I have a few other questions...

How well does ext2fs recover from an improper shutdown? HPFS will
automatically do a CHKDSK on the next boot.

Does ext2fs have a lazy-writer, or does it need an explicit sync
of some sort to flush its write buffers? This can affect the previous
question, obviously.

Thanks,
Dale Pontius
(NOT speaking for IBM)


From: Uwe Bonnes
Date: Jul 30, 1996

Dale Pontius (pon...@btv.ibm.com) wrote:
:
: This defrag question interested me, because HPFS does a very good

: job of resisting fragmentation and 'self-healing', as well. It's
: interesting to see the same claim made for ext2fs.
:
: I think I'm fairly well aware of most of the differences between
: ext2fs and HPFS. (security, links, and case-sensitive being the
: biggest) But I have a few other questions...
:
: How well does ext2fs recover from an improper shutdown? HPFS will
: automatically do a CHKDSK on the next boot.

Very good.
:
: Does ext2fs have a lazy-writer, or does it need an explicit sync


: of some sort to flush its write buffers? This can affect the previous
: question, obviously.

The kernel has one.
--
Uwe Bonnes b...@elektron.ikp.physik.th-darmstadt.de

Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

From: Graham Swallow
Date: Jul 30, 1996

jd...@winternet.com (Jeff Dege) writes:
> On 25 Jul 1996 19:41:34 +0930, David Colquhoun (ko...@adelaide.DIALix.oz.au) wrote:
>:
>: Also is there a utillity to calculate used and free space for an EXT2
>: Filesystem.

You can get good statistics from:

echo stats | debugfs /dev/hda1 | grep used | nl

Each line is an ext2 group zone, almost a mini-fs in itself.
Files created within a directory _usually_ lie in the same zone.

Graham g...@trix.dircon.co.uk
--
-----------------------------------
http://www.users.dircon.co.uk/~trix <-- Linux Info Pages
http://trix.dircon.co.uk/ (dial-up) <-- Raven Kept Here
-----------------------------------

From: Trent Piepho
Date: Jul 31, 1996

In article <4tl5i2$1b...@mdnews.btv.ibm.com>,
Dale Pontius <pon...@btv.ibm.com> wrote:
[.. defrag stuff deleted ...]

>biggest) But I have a few other questions...
>
>How well does ext2fs recover from an improper shutdown? HPFS will
>automatically do a CHKDSK on the next boot.

ext2 also has a "clean" bit, so if you push reset, fsck will do a full
check; otherwise it will skip it. There is also a mount counter: when it
reaches some number, a full check is forced as well.
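One way to see the state flag and mount counters without touching a real disk is to run tune2fs against a throwaway ext2 image; a rough sketch, assuming e2fsprogs is installed (paths are arbitrary):

```shell
# Build a small ext2 filesystem in a plain file (-F allows non-devices),
# then dump its superblock fields, including state and mount counts.
dd if=/dev/zero of=/tmp/clean.img bs=1k count=512 2>/dev/null
mke2fs -F -q /tmp/clean.img
tune2fs -l /tmp/clean.img | grep -Ei 'state|mount count'
```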

>Does ext2fs have a lazy-writer, or does it need an explicit sync
>of some sort to flush its write buffers? This can affect the previous
>question, obviously.

It can be run in sync mode, where there is no write-behind. But in async mode
the buffers are flushed automatically. There is even a separate timeout for
data buffers and meta-data buffers.

From: pa...@fbkltd.com
Date: Aug 2, 1996

JL>In article <4su965$m...@lal.interserv.net>, der...@spry.com wrote:

JL>> Anyone know of a disk defragmenter for Linux? Does Linux even
JL>>need one, in the way that MS-DOS (and any other OS I've dealt with) does?

JL>The ext2 file system is designed not to need one. It sort of defrags itself
JL>as it runs. Even so I think someone wrote one at one time, but I have not
JL>heard anything about it in several years.

JL>Jim

Heehehe... <pondering why people defrag so much?>
There is a program to defrag the ext2 file system, but I don't know where it
is... ext2 takes care of that pretty well itself.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Fly by Knight, Ltd. BBS | Telnet to: fbkltd.com
"Armor Clad Online Entertainment" | Web Page: http://www.fbkltd.com
Voice: (619)754-0313 | Dialup #: (619)754-0129
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-


From: Kristian Köhntopp
Date: Aug 2, 1996

pon...@btv.ibm.com (Dale Pontius) writes:
>This defrag question interested me, because HPFS does a very good
>job of resisting fragmentation and 'self-healing', as well. It's
>interesting to see the same claim made for ext2fs.

This is because HPFS and ext2 are both descendants of the BSD
Fast File System (known as ffs or ufs on other Unices). They
are not descendants in the sense of "code inheritance", but they
share the same basic ideas about file system layout and data
structures.

>How well does ext2fs recover from an improper shutdown? HPFS will
>automatically do a CHKDSK on the next boot.

So will ext2. On startup Linux fires up the init program, which
will build the running system based on the information in
/etc/inittab. In inittab, a bunch of startup scripts is called
one way (the Slackware/BSD way) or the other (the RedHat/System
V way).

One of the very first things such a startup script does is to
call fsck for all filesystems mentioned in /etc/fstab. Those
file systems that have not been shut down properly on shutdown
are checked and cleaned. The others are flagged "okay" and left
alone. fsck is the UNIX equivalent of CHKDSK.

>Does ext2fs have a lazy-writer, or does it need an explicit sync
>of some sort to flush its write buffers? This can affect the previous
>question, obviously.

ext2 does not flush its buffers immediately in general. There
are exceptions to this: for example, files that have been
flagged "S" with the chattr command are always flushed to disk
immediately. But Linux runs an update daemon that
regularly flushes the buffer cache contents to disk,
approximately every 30 seconds. A user-level program can also
force such flushes for single files (fsync()) or the entire
system (sync()). Buffers for a file system are also flushed
upon unmount, of course.
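A minimal sketch of the two flush paths, assuming an ordinary writable /tmp (the file name is made up):

```shell
# Write a file: the data lands in the buffer cache first, and the
# update daemon would get it to disk within ~30 seconds anyway.
echo "important data" > /tmp/flushdemo.txt
# Force the entire buffer cache out now (the sync() mentioned above):
sync
cat /tmp/flushdemo.txt
```

(chattr +S on a file would instead make every write to it synchronous, at a performance cost.)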

Kristian
--
Sie haben sich auf der Grundlage harter Fakten eine feste Meinung
gebildet? Posten Sie sie doch einfach mal nach de.talk.bizarre.
Die kriegen dort beides wieder weich.

From: Craig Hagan
Date: Aug 5, 1996

Frank Wagner (fr...@sazza.ruhr.de) wrote:
: der...@spry.com wrote:

: > Anyone know of a disk defragmenter for Linux? Does Linux even

you really don't need one; the filesystem takes care of that (to
a great extent) on its own. HOWEVER, if your disk gets somewhere
around 95% full, then the fs may start fragmenting files to squeeze
them in.

a _real_ simple way to defrag is to back up to tape and restore
the FS.
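A sketch of that dump-and-restore cycle using tar, with a file in /tmp standing in for the tape; all paths are examples, and the destructive remake-the-filesystem step is only shown as a comment:

```shell
# Back up a tree ("to tape"):
mkdir -p /tmp/demo_src
echo "hello" > /tmp/demo_src/file1
tar -C /tmp/demo_src -cf /tmp/fsdump.tar .
# On a real partition one would now remake the fs -- destructive,
# so it stays commented out here:
#   umount /dev/hda6 && mke2fs /dev/hda6 && mount /dev/hda6 /mnt
# Restore; files are rewritten sequentially onto the fresh fs:
mkdir -p /tmp/demo_restore
tar -C /tmp/demo_restore -xf /tmp/fsdump.tar
```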

-- craig

From: Kristian Köhntopp
Date: Aug 5, 1996

ha...@news.crocker.com (Craig Hagan) writes:
>you really don't need one, the filesystem takes care of that (to
>a greater extent) on its own. HOWEVER, if your disk clears somewhere
>aroung 95% capacity, then the fs may start fragmenting files to squeeze
>them in.

Well, actually the FS will start to fragment your files much
earlier. On a system with multiple simultaneously open and growing
files, or on a system with much file creation and deletion, there
is simply no way to ensure that there will be no fragmentation
without severe performance penalties. OTOH this does not matter
much, because the buffer cache hides much of the incurred
performance penalty from you.

ext2 organizes the disk in block groups. With 1 KB block size,
these block groups cannot be larger than 8192 blocks (8 MB).
Each block group is basically a small filesystem in itself: At
the beginning of the bg is a little bit of administrative
information and then an array of inodes (256 blocks in default
configuration). The remainder of the bg is data blocks.

When you create a new file, the filesystem decides which block
group to use for this new file. Actually, the filesystem
decides in which bg the inode of the new file should reside.
The location of the inode later influences the location of the
data blocks. If you can read kernel C code, look at
/usr/src/linux/fs/ext2/ialloc.c:ext2_new_inode(). If you can't
read C code, look at that file anyway, but read the comment
before new_inode.

Basically, Linux does the following: if the newly created inode
is not a directory inode (it may be a regular file or some
special file), this inode must be contained in some directory.
Linux tries to place the new file's inode in the same bg as the
containing directory's inode.

Upon directory lookup operations and in file scans, the system
has to load all inodes of files in that directory. Putting
all files in a directory into the same bg makes this faster.
Data blocks of a file will be stored in the same bg as the
file's inode, so they are not far away either. And data blocks
of related files (related: being in the same dir) are stored
near each other...

If the newly created inode is a directory inode, though, the
system searches all block groups with more than the average
number of free inodes. Among these block groups it chooses the
one with the most free blocks (and this is what the code says,
whereas the comment above the code is a lie). Thus, the newly
created directory will be in a block group with many free inodes
and with plenty of space for data storage.


As I said, Linux tries to put the data blocks of a file in the
same bg as the inode. Actually, what Linux does is the
following (the policy is defined in fs/ext2/inode.c:
inode_getblk()): The first data block goes into the same bg as
the inode (if there is space). The next data block goes into
the same bg as the data block before it. Thus, Linux tries to put
the whole file into the same bg as the file's inode.

BSD FFS does the same, but forces a change of the bg for each
MB or so (tuneable parameter maxcontig) of file size written.
They do this to keep one large file from filling up a bg at
once. I feel that ext2 should do the same. In BSD, the FFS
changes to the bg with the most free blocks. In my private
modified ext2 the next bg is chosen to keep distances short.


When allocating a new block for a file, Linux assumes that the
file is going to grow further. It preallocates not only the new
block, but also the following 7 contiguous blocks, or at least as
many of those 7 as it can get. This is very similar to BSD 8 KB
blocks, but in Linux what BSD calls "fragments" are called
"blocks". The basic unit of allocation in the block bitmap is
the "fragment" in BSD and the "block" in Linux, thus both are
actually very similar.

The only difference is that the Linux solution is more
flexible. In Linux there need not be a fixed number of
"fragments" to form a "block" as in BSD. In fact, these groups
can be smaller if the fragmentation situation does not allow
for larger units. BSD also requires that blocks be aligned on
block boundaries, which is not the case in ext2. BSD reads
blocks in one swoop. Linux does so, too, for contiguous
blocks, in the clustering code for the file system (I hope; I
did not look this one up in the source :-).


What does all this mean to fragmentation in practice? Well, if
you put two files into different directories, they will be
stored in different block groups. You can see this, if you look
at their inode numbers:

root@white:/mnt# mkdir a b
root@white:/mnt# touch file1 a/file2 b/file3
root@white:/mnt# ls -lai
total 16
2 drwxr-xr-x 5 root root 1024 Aug 5 21:42 .
2 drwxr-xr-x 23 root root 1024 Aug 5 21:42 ..
2041 drwxr-xr-x 2 root root 1024 Aug 5 21:42 a
4081 drwxr-xr-x 2 root root 1024 Aug 5 21:42 b
12 -rw-r--r-- 1 root root 0 Aug 5 21:42 file1
11 drwxr-xr-x 2 root root 12288 Aug 5 21:42 lost+found
root@white:/mnt# ls -li a/file* b/file*
2042 -rw-r--r-- 1 root root 0 Aug 5 21:42 a/file2
4082 -rw-r--r-- 1 root root 0 Aug 5 21:42 b/file3

What do you see? First, the inode number of the file system's
root is (by definition) 2. The inode for file1 in the file
system's root is in the same bg as its parent directory: it has
inode number 12. The newly created directories a and b have
been allocated in different block groups, as can be deduced
from their inode numbers. Files in these directories are located
in the same bg as their parent directory.

You can also see that this rule is violated by the lost+found
directory. Being a directory, it should have been in a different
bg from the filesystem's root. Well, lost+found is artificially
created when mke2fs is run, and because the library mke2fs uses
for this purpose is limited and it is easier to do it in the
first bg, lost+found is forced into the first bg.

Let's have a look at allocation for a larger file:

root@white:/mnt# cp /etc/termcap .
root@white:/mnt# ls -li termcap
13 -rw-r--r-- 1 root root 183951 Aug 5 21:48 termcap
root@white:/mnt# debugfs /dev/hda6
debugfs: stat /termcap
Inode: 13 Type: regular Mode: 0644 Flags: 0x0 Version: 1
User: 0 Group: 0 Size: 183951
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 362
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x32065001 -- Mon Aug 5 21:48:17 1996
atime: 0x32065000 -- Mon Aug 5 21:48:16 1996
mtime: 0x32065001 -- Mon Aug 5 21:48:17 1996
BLOCKS:
274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289
290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305
306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321
322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337
338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353
354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369
370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385
386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401
402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417
418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433
434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449
450 451 452 453 454
TOTAL: 181

debugfs:

We copied the file /etc/termcap to our scratch partition and
fired up debugfs in read-only mode on this partition. Statting
the file gets us a dump of all the file's meta
information. Among this is the list of block numbers allocated
to the file, and they are perfectly contiguous.

Copy the same file to directories a and b and see how it gets
block numbers from the 8192 and 16384 range. Delete all three
files again.

How can we provoke fragmentation? Well, if we tried to do all
three operations at once, we would end up with three
unfragmented files in three different block groups. Try:

root@white:/mnt# cp /etc/termcap . & cp /etc/termcap a & cp /etc/termcap b &
[1] 27799
[2] 27800
[3] 27801
root@white:/mnt#
[1] Done cp /etc/termcap .
[2]- Done cp /etc/termcap a
[3]+ Done cp /etc/termcap b

Stat these files again and make sure they are unfragmented.

Now copy the file three times simultaneously to the same
directory:

root@white:/mnt# cp /etc/termcap x & cp /etc/termcap y & cp /etc/termcap z
[ del del del ]

and check the block allocation sequence:

root@white:/mnt# !deb
debugfs /dev/hda6
debugfs: stat /x
Inode: 13 Type: regular Mode: 0644 Flags: 0x0 Version: 1
User: 0 Group: 0 Size: 183951
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 362
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x32065241 -- Mon Aug 5 21:57:53 1996
atime: 0x32065240 -- Mon Aug 5 21:57:52 1996
mtime: 0x32065241 -- Mon Aug 5 21:57:53 1996
BLOCKS:
274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289
290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305
306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321
322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337
338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353
354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369
370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385
386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401
583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598
599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614
615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630
812 813 814 815 816
TOTAL: 181

Notice how the file became fragmented when the operating system
distributed CPU time to the different cp processes.

So files can become fragmented, if

a) two or more processes write to files in the same directory
simultaneously.
b) a block group fills up and block allocation spills over into
the following block groups.
c) repeated file creation and deletion of files in a directory
creates "holes" in the allocation scheme of a single block
group.

To demonstrate b), create a file larger than 8 MB and see what
blocks are allocated to it.

To demonstrate c),

root@white:/mnt# cp /etc/termcap x; cp /etc/termcap y; cp /etc/termcap z
root@white:/mnt# rm x
root@white:/mnt# cat /etc/termcap /etc/termcap > w

and see what blocks are allocated to this file. As you can see,
Linux tries to allocate blocks from the parent directory's bg
for file w, but does so from the start of the bg. Thus, this
file is made up of two large extents: one at the start of the
bg, and the rest located behind file z.


As you can see, fragmentation can strike quite early in the
life of a file system, if you know how to provoke it.
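For readers without a scratch partition: the same kind of inspection can be done against an ext2 image built in an ordinary file, assuming e2fsprogs (mke2fs, debugfs) is available and no root access is needed; all paths below are arbitrary examples:

```shell
# Build a small ext2 filesystem inside a plain file:
dd if=/dev/zero of=/tmp/ext2.img bs=1k count=2048 2>/dev/null
mke2fs -F -q -b 1024 /tmp/ext2.img
# Copy a file into the image with debugfs's write command
# (no mount required), then stat it to see its block allocation:
echo "some data" > /tmp/sample
debugfs -w -R "write /tmp/sample sample" /tmp/ext2.img 2>/dev/null
debugfs -R "stat sample" /tmp/ext2.img 2>/dev/null
```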

Kristian
--
Kristian Koehntopp, Wassilystrasse 30, 24113 Kiel, +49 431 688897

From: Mark Aitchison
Date: Aug 7, 1996

kr...@schulung.netuse.de wrote a darned good explanation of how ext2fs works,
including...

> BSD FFS does the same, but forces a change of the bg for each
> MB or so (tuneable parameter maxcontig) of file size written.
> They do this to keep one large file from filling up a bg at
> once. I feel that ext2 should do the same. In BSD, the FFS
> changes to the bg with the most free blocks. In my private
> modified ext2 the next bg is chosen to keep distances short.

and

> ... files can become fragmented, if


>
> a) two or more processes write to files in the same directory
> simultaneously.
> b) a block group fills up and block allocation spills over into
> the following block groups.
> c) repeated file creation and deletion of files in a directory
> creates "holes" in the allocation scheme of a single block
> group.

Some questions from that...
(1) Is the fragmentation that arises all that bad? Unlike FAT, the files aren't
    likely to have bits and pieces all over the place, but within a bg (and so
    little head movement is required). But big files, or ones too big for the
    space left in a well-filled bg, could get put into a bg anywhere on disk
    without your modified ext2; this doesn't sound so good. I don't suppose it
    happens much, but it could be important for large volatile files (news feeds?)

(2) Given the way modern disk hardware reads and buffers tracks, is there a
big bonus in making a block group equal to a multiple of the sectors per
cylinder?

(3) If a defragmenter did go through an ext2 file system and rearrange not just
    file sectors within a block group, but which bg's hold which directories
    (e.g. I can imagine cases where it is sensible to put several directories
    into the same bg), I presume that normal operation of Linux after that
    wouldn't be adversely affected (or mess up the defragmentation too quickly)?

(4) The idea of a maxcontig setting is okay for average files, but what if the
main job of the computer is to run a database, where you want that big file
    to be contiguous for efficiency? Similarly, a directory full of little
shell scripts or html files should be treated differently because there is
a low chance of them being extended much. Data General's AOS had the
contiguity of a file pretty much under user control. Even their earlier
RDOS operating system had three types of file (sequential, contiguous and
random), with sequential being used by default for typical text files where
it was most efficient, random for most others, but contiguous was there if
the user or software deemed it important. Even if Linux doesn't have this,
it should be possible for a smart program to go over the disk and not just
"defrag" in the DOS sense, but optimise what goes where based on heuristics
or on running a tuning program to monitor disk access during typical use.

(5) Is HPFS, or NTFS, or the (I think it was) Veritas Journalling FS that came
    with Unixware still better for some jobs? As far as I can see, the speed of
    ext2 beats the default SunOS file system in every respect. I have heard
that NTFS beats ext2 in (samba) file serving environments, and that HPFS
beats NTFS in most read-write situations, but I've also heard that ext2fs
is generally better that HPFS, so I'm confused!

-------------------------------------------------------------------------------
Mark Aitchison, Physics & Astronomy \_ Phone : +64 3 3642-947 a.h. 3371-225
University of Canterbury, </ Fax : +64 3 3642-469 or 3642-999
Christchurch, New Zealand. /) E-mail: phy...@csc.canterbury.ac.nz
#include <disclaimer.std> (/' "Computologist to the stars."
-------------------------------------------------------------------------------

Christopher B. Browne

unread,
Aug 8, 1996, 3:00:00 AM8/8/96
to

In article <4ub9na$c...@cantuc.canterbury.ac.nz>, Mark Aitchison wrote:
>> ... files can become fragmented, if

>>
>> a) two or more processes write to files in the same directory
>> simultaneously.
>> b) a block group fills up and block allocation spills over into
>> the following block groups.
>> c) repeated file creation and deletion of files in a directory
>> creates "holes" in the allocation scheme of a single block
>> group.
>
>Some questions from that...
>(1) Is the fragmentation that arises all that bad? Unlike FAT, the files aren't
> likely to have bits and pieces all over the place, but within a bg (and so
> little head movement is required). But big files, or ones too big for the
> space left in a well-filled bg, could get put into a bg anywhere on disk
> without your modified ext2; this doesn't sound so good. I don't suppose it
> happens much, but could be important for large volatile files (news feeds?)

News feeds would *not* be a good example of a problem area; news tends to
involve a *large* number of relatively small files. There was a recent
issue of DDJ where an author wrote about building a custom FS under QNX
specifically to handle news; there keep being rumors of people thinking
about writing such a FS for use with UNIX.

>(2) Given the way modern disk hardware reads and buffers tracks, is there a
> big bonus in making a block group equal to a multiple of the sectors per
> cylinder?

The buffering is still going to have a granularity that will make it
worthwhile to make sure that one isn't going to have 1.25 sectors involved.
If it results in throwing a "page" of cache away, that can have as much
effect as having to hit an extra track.

>(4) The idea of a maxcontig setting is okay for average files, but what if the
> main job of the computer is to run a database, where you want that big file
> to be contiguous for efficiency? Similarly, a directory full of little
> shell scripts or html files should be treated differently because there is
> a low chance of them being extended much. Data General's AOS had the
> contiguity of a file pretty much under user control. Even their earlier
> RDOS operating system had three types of file (sequential, contiguous and
> random), with sequential being used by default for typical text files where
> it was most efficient, random for most others, but contiguous was there if
> the user or software deemed it important. Even if Linux doesn't have this,
> it should be possible for a smart program to go over the disk and not just
> "defrag" in the DOS sense, but optimise what goes where based on heuristics
> or on running a tuning program to monitor disk access during typical use.

This strikes me as being a place for having two partitions with different
parameters. Stick big files on the "big file" partition, and little ones
on a "little file" partition. That's going to be robust. Having a
"vacuum" process running along isn't going to improve robustness of the
system.

I can't see it running enough to get *totally* debugged.

>(5) Is HPFS, or NTFS, or the (I think it was) Veritas Journalling FS that came
> with Unixware still better for some jobs? As far as I can see the speed of
> ext2 beats the SunOS default file system in every respect. I have heard
> that NTFS beats ext2 in (samba) file serving environments, and that HPFS
> beats NTFS in most read-write situations, but I've also heard that ext2fs
> is generally better than HPFS, so I'm confused!

I'd heard that ext2fs provided better results than NTFS when doing SMB
stuff; that may actually be a case of having an extra 16MB of disk cache
because the OS takes up less memory.

e.g. - Just as wild speculation, suppose NTFS on a 32MB NT box beats
ext2fs on a 16MB Linux machine, but put Linux on the 32MB box and you've
got a spare 16MB of cache and the SMB server *flies*. I'd hesitate to
suggest where "equality" would lie: in giving both of them identical
quantities of RAM, or in giving them both identical quantities of *free*
RAM (after loading the OS).

Kristian Köhntopp

unread,
Aug 8, 1996, 3:00:00 AM8/8/96
to

phy...@phys.canterbury.ac.nz (Mark Aitchison) writes:
>(2) Given the way modern disk hardware reads and buffers
> tracks, is there a big bonus in making a block group equal
> to a multiple of the sectors per cylinder?

Comparing the BSD ufs/ffs source code (I only have a BSDisc
from the stoneages here: 11/94) to the Linux ext2 source code
shows that BSD has a far more complex allocation mechanism for
new blocks than ext2 has (and that the source is somewhat more
cleanly structured, as far as my taste is concerned :-). This is
partly because of the relatively complex fragment handling, but
much of this code is for calculating the optimal placement of
new blocks. This code takes disk geometry, timing and some
properties of certain disk controllers into consideration.

With modern harddisks, many of these calculations are no
longer as useful as they were when this code was originally
written and published. For example, many modern harddisks no
longer have a rectangular (sectors * heads * tracks) geometry.
Instead harddisks have more sectors on the outer tracks and
data transfer rates are variable. Block mapping and linear
block addresses make such geometry considerations difficult to
obtain or untrustworthy (some disks lie about their geometry).

Geometry considerations fail completely when used with disks
that assign blocks horizontally, because their track-to-track
movement is faster than their head-to-head movement
(realignment of the head position is required for some disks,
making head switches slow). Geometry considerations also fail
when dealing with RAID arrays or other SCSI2SCSI adapters.

I don't believe that there is enough gain in writing
specialized code for certain hardware configurations. BSD's code
is already quite baroque in these matters, but not up to the
task of modelling nearly all variants of current hardware. If
the return on investment is worth writing such code, it should
probably not go into the general file system, but into either
the device driver or some intermediate layer. Otherwise there
would be no way to deal with different hardware which has
different preferences.

>(1) Is the fragmentation that arises all that bad?

The logic behind BSD assumes that you have many "normal" sized
files that will not be affected by maxcontig relocation at all.
Those large files that could clutter up single block groups are
split intentionally into reasonable sized chunks and
distributed over the disk. This is done to even out any
irregularities that these files may cause to the allocator.

This intentional fragmentation surely does little harm to
single process performance because the fragments are large (a
seek every few megabytes is perfectly in order). It can even
improve total system performance or responsiveness, because
during this seek the controller may be able to service requests
of other processes to another disk.

I don't like the ext2 approach, which ignores this problem
completely. In the case of your standard ext2 filesystem (1 KB
blocksize, 8 MB block groups, one inode for each 4 KB
diskspace), one 8 MB file (or several smaller large files) will
completely fill up one or multiple block groups. New files
assigned to this bg will get their inodes in this bg, but their
data will be in some remote block group.

ext2 does goal based allocation: It will try to allocate the
goal block first. If the goal is not available, it will do a
linear search for a free block. The first block allocation for
a file placed in a full bg will be for a goal in that same bg
(inducing a rather long linear search).

Later allocations will go to the remote bg directly, so the
penalty for filling up a bg with few files is not that high. It
just messes up the "inodes should be next to their data blocks"
paradigm, which was one of the reasons for introducing the
concept of block groups at all.
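The goal-based allocation described above can be sketched roughly like
this (a Python illustration with invented names; the real ext2 allocator
works on per-block-group bitmaps in C, so this only shows the "try the
goal, then search linearly" idea):

```python
def find_free_block(bitmap, goal):
    """Try the goal block first; otherwise do a linear search for a
    free block starting just after the goal, wrapping around."""
    n = len(bitmap)
    if not bitmap[goal]:          # goal block is free: take it
        bitmap[goal] = True
        return goal
    for i in range(1, n):         # linear search after the goal
        blk = (goal + i) % n
        if not bitmap[blk]:
            bitmap[blk] = True
            return blk
    return None                   # no free block at all
```

A file whose goal lies in a full bg pays for the long linear scan on its
first allocation, which is exactly the penalty described above.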


Fragmentation is not THAT bad as long as your fragments are
reasonably sized. In BSD, the smallest chunk a file can be
broken into is a block (8 KB), except at the end of file where
fragments are allocated. In ext2 there can be smaller units
(ext2 blocks are BSD fragments), but ext2 tries to avoid them
by doing preallocation, that is, allocating multiple contiguous
blocks at once.

One could experiment with different maximum settings for the
ext2 preallocator. Currently ext2 does 8 KB preallocation at
most. I changed this for my ext2 to be a dynamic value. My ext2
will preallocate at least 8 KB and at most 256 KB in one go. It
will try to double the file size, i.e. it preallocates 32 KB
for a file that is being extended at position 32 KB and so on.
If there is enough free contiguous space in a bg to hold the
entire file this will usually produce larger fragments, even if
there are multiple writers to the same bg.
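The dynamic preallocation policy described above (at least 8 KB, at most
256 KB, roughly doubling the file size on each extending write) can be
summarized as follows; this is a hypothetical illustration of the poster's
rule, not his actual patch:

```python
MIN_PREALLOC = 8 * 1024      # never preallocate less than 8 KB
MAX_PREALLOC = 256 * 1024    # never preallocate more than 256 KB

def prealloc_size(file_size):
    """Bytes to preallocate when extending a file of file_size bytes:
    double the file (i.e. preallocate file_size more), clamped to the
    [MIN_PREALLOC, MAX_PREALLOC] range."""
    return max(MIN_PREALLOC, min(file_size, MAX_PREALLOC))
```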

Excessively large preallocation can have bad side effects in
environments where disk space is low (large preallocations can
make your disk look full when it isn't) or where quota is tight
(same problem with respect to one user). Fixing this is easy, but I
didn't bother to do it yet.

BSD has the same problem: It will preallocate entire 8 KB
blocks at the end of a file when a >second< write is made to a
fragment, assuming that it handles a growing file. This
function is disabled automatically when disk space is tight
("Optimization switched from time to space.").

That same fix would be necessary for ext2 doing large
preallocations. But again I have no data proving that large
preallocations are a performance win at all.

>(3) If a defragmenter did go through an ext2 file system and
> rearrange not just file sectors within a block, but which
> bg's hold which directories (e.g. I can imagine cases where
> it is sensible to put several directories into the same
> bg), I presume that normal operation of Linux after that
> wouldn't be adversely affected (or mess up the
> defragmentation too quickly)?

Usually you can't even avoid putting multiple directories into
the same bg. For example,

root@white:~# df
[ ... ]
/dev/hda6 307851 16643 275309 6% /mnt

has 39 block groups, but your typical 300 MB file system will
have more than 39 directories. So even simultaneous writes to
different directories can go into the same bg, if both
directories happen to be within one bg. The inode placement
algorithm tries to optimize directory placement based on
information available >at the time of directory creation<. This
decision may turn out to be utterly wrong later, but ext2 never
tries to migrate directories to other block groups.

It may be useful to have some external usermode utility to do
such migration. Migrating a directory does not interfere with
normal ext2 operation as long as the filesystem is unmounted
while the defragmenter operates on it. ext2 won't mess up this
migration later, either, except when you delete and recreate files.


So any ext2 defragmenter has to play by the same rules as DOS
defragmenters do: Busy partitions will mess up more quickly
than static (mostly r/o) partitions do. It is just that ext2
will not mess up as quickly as DOS does, and when it does, it is
not as harmful to performance as it is under DOS.

Generally it is more useful to plan your partitions carefully
than to run a defragmenter every other day. Partitions should
be planned with lifetimes for files in mind. Scratch trees such
as /var/spool/news and /tmp should not be on the same partition
as your medium-lived home directories and your long-lived
system binaries. Makes backup and reinstallation easier, too.

Of course, a 95% full /var/spool/news will fragment like hell,
but there is nothing you can really do against it and anyway:
Who cares as long as it doesn't damage your other partitions?

As long as you don't partition with file lifetimes in mind,
your recently defragged partition will mess up a few days
later. If you plan partitions carefully, you probably won't
have to defrag ever.

>(5) Is HPFS, or NTFS, or the (I think it was) Veritas
> Journalling FS that came with Unixware still better for
> some jobs?

In "An Implementation of a Log-Structured File System for
UNIX", Seltzer, Bostic, McKusick and Staelin describe the BSD
LFS and compare its performance against two variants of FFS
under different artificially created loads. The usage pattern
created by "a bunch of developing students" is completely
different from the usage pattern created by a large database.
BSD LFS, which is very different from BSD FFS and Linux ext2,
performed very well in the first case, but not so well in the
second. Or to phrase it more to the point: Show me your source
and I will create THE pathologic benchmark.

ext2 performs quite well in comparison to other current
filesystems, so there can't be anything conceptually wrong with
it. The features ext2 is lacking are not performance features.
I would like to see an ext2 with variable inodes (currently
doing this), which can be grown live by simply adding some
disks (I know of no project in this direction) and where crash
recovery is not related to filesystem size (again, I know of no
project). AIX JFS, DEC AdvFS, and Microsoft NTFS come to mind.

Ingo Molnar

unread,
Aug 8, 1996, 3:00:00 AM8/8/96
to

Kristian Köhntopp (kr...@schulung.netuse.de) wrote:

[snip: cool facts about hard disks geometry]

: I don't believe that there is enough gain in writing


: specialized code for certain hardware configurations. BSDs code
: is already quite baroque in these matters, but not up to the
: task of modelling nearly all variants of current hardware. If
: the return of investment is worth writing such code, it should
: probably not go into the general file system, but into either
: the device driver or some intermediate layer. Otherwise there
: would be no way to deal with different hardware which has
: different preferences.

What about the following method:

Apart from the head-switch overhead thing, most disk geometries
could be transformed back into the original geometry. The original
geometry is quite simple, and one generic algorithm could be
written to handle it well. The only hardware-specific part would be
the inverse-mapping function. (not counting internal remapping)

The generic algorithm would be something like this:

We have a logical sector number / disk speed function. Based
on this function, we try to allocate in the outer regions first
(small logical sector numbers), in the inner regions only if
necessary (big logical sector numbers). All files have the same
speed priority (we handle them equally), we try to keep the average
transfer rate as high as possible.

We should estimate the costs of fragmenting a file and putting it
into an outer region versus not fragmenting it but putting it into
a (slower) inner region. This can be calculated based on the
following values:

1: time(fragmentA) + time(fragmentB) + latency(A,B)
2: time(nonfragmented)

Where 'time(region)' is the time to read a region described
by the (start_block,size) pair. If 1: < 2:, then the region
should be fragmented, if not then it should be written into
the inner part of the disk, unfragmented.

Other similar rules should be used to control the allocation
scheme. The 'time()' function is simply the hardware-dependent
remap+speed function. latency(x,y) is |x-y|*C, where C is a
hardware-dependent constant. The rest of the algorithm is hardware
independent.
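The cost comparison above can be written out directly; `time_fn` and the
seek constant `C` are the hardware-dependent inputs the poster describes,
and the function name is invented for illustration:

```python
def should_fragment(time_fn, frag_a, frag_b, contiguous, C):
    """Decide between fragmenting a file into two fast (outer) regions
    or storing it contiguously in a slower (inner) region.

    frag_a, frag_b, contiguous are (start_block, size) pairs;
    time_fn(start, size) is the hardware-dependent read-time model;
    latency(A, B) is modelled as |startA - startB| * C.
    """
    latency = abs(frag_a[0] - frag_b[0]) * C
    fragmented_cost = time_fn(*frag_a) + time_fn(*frag_b) + latency
    contiguous_cost = time_fn(*contiguous)
    return fragmented_cost < contiguous_cost   # rule 1: < 2: above
```

With a cheap seek the fragmented outer placement wins; as C grows, the
unfragmented inner placement becomes the better choice.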

[snip: a great explanation about fragmentation]

Any irreversible filesystem can be driven into a wrong state.
If we introduce post-allocation mechanisms, like the cleaner
in LFS, we have the possibility to 'fix' any wrong allocation.

IMHO, for application where allocation is important, extra information
should flow from the application into the filesystem. This can be
either done by implementing special filesystems, or by circumventing
the filesystem by using raw devices and other, similar hacks. Or ...
the filesystem could monitor usage patterns, and adapt its allocation
strategy to the patterns. This is overkill in most cases, and it
doesn't solve the fundamental problem: usage information is lost when
going to the filesystem level. Although, such allocation systems
could be used in cases where the information is really lost, for
example where random users whack the keyboard and nobody really
knows about the allocation scheme :))

If the usage pattern is known, then a filesystem could be used where
the allocation can be controlled by the application too (in a failsafe
way). Thus, process-dependent allocation hooks could be used (for
normal processes a default allocation function could be used).
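The per-process hook idea could look something like this toy sketch: each
process may register its own allocation function, with a failsafe default
for everyone else. All names here are invented for illustration:

```python
def default_alloc(free_blocks, size):
    """Default policy: first-fit run of `size` blocks over the free set."""
    for start in sorted(free_blocks):
        if all(start + i in free_blocks for i in range(size)):
            return start
    return None

class AllocatorTable:
    def __init__(self):
        self.hooks = {}                    # pid -> allocation function

    def register(self, pid, fn):
        self.hooks[pid] = fn

    def allocate(self, pid, free_blocks, size):
        fn = self.hooks.get(pid, default_alloc)
        start = fn(free_blocks, size)
        if start is None:                  # failsafe: fall back to default
            start = default_alloc(free_blocks, size)
        return start
```

The failsafe fallback is what makes the scheme safe to expose to
applications: a broken or greedy hook can't leave the allocator stuck.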

Personally, I like this hook thing the best: it's more flexible than
a per-application filesystem, and it's better than making expensive
statistics about usage patterns and adapting the patterns.

Dreaming about a filesystem where an Oracle server registers its
allocation function, and it runs on the same device as the news
server :) I think there is no real reason separating an SQL server
database from a newsserver database. [if both fit on one disk]

-- mingo

ps. and it's not true that an 'unfragmented' filesystem is ideal!
If the usage pattern requires 'file interleaving', then a cleverly
fragmented filesystem performs better than an unfragmented one.


David Bourgin

unread,
Aug 8, 1996, 3:00:00 AM8/8/96
to Dale Pontius, lna...@avanticorp.com

Dale Pontius wrote:
>
> In article <4t0csg$7...@rtp-avnt-gw.avanticorp.com>,
> jln...@avanticorp.com (Jim Nance) writes:
> >In article <4su965$m...@lal.interserv.net>, der...@spry.com wrote:
> >
> >> Anyone know of a disk defragmenter for Linux? Does Linux even
> >>need one, in the way that MS-DOS (and any other OS I've dealt with) does?
> >
> >The ext2 file system is designed not to need one. It sort of defrags itself
> >as it runs. Even so I think someone wrote one at one time, but I have not
> >heard anything about it in several years.
> >
> I've run OS/2 on HPFS for years, and am starting to look into Linux.

OS/2 is good but quite slow compared to kernel 2.0.x, and there
is really less software available than in the Unix world.

> This defrag question interested me, because HPFS does a very good
> job of resisting fragmentation and 'self-healing', as well. It's
> interesting to see the same claim made for ext2fs.

ext2fs is one of the most reliable and well-thought-out systems.
I have never tried to defrag my system, even though I have the latest
defrag version (0.6) handy.

> I think I'm fairly well aware of most of the differences between
> ext2fs and HPFS. (security, links, and case-sensitive being the

> biggest) But I have a few other questions...

HPFS is nice, but it's less reliable than ext2fs, because
ext2fs keeps backup copies of the superblock.

> How well does ext2fs recover from an improper shutdown? HPFS will
> automatically do a CHKDSK on the next boot.

This is also the case with fsck, and v1.04 does it nicely!

See at:
ftp://ftp.wsc.com/pub/freeware/linux/update.linux/
ftp://ftp.ibp.fr/pub/linux/update.linux/

> Does ext2fs have a lazy-writer, or does it need an explicit sync
> of some sort to flush its write buffers? This can affect the previous
> question, obviously.

See above.

> (NOT speaking for IBM)

Speaking for me...

Bye,
--
Web: http://www.accescyb.fr/~rezo1/homepage.html David
E-mail: dbou...@wsc.com
(Unsolicited commercial ads to this e-mail will be considered as abuse!)
David Bourgin - Netware/Unix administration/security.
I'm a netsurfer, and as such, a citizen of the world.

Kristian Köhntopp

unread,
Aug 9, 1996, 3:00:00 AM8/9/96
to

David Bourgin <dbou...@wsc.com> writes:
>HPFS is nice but it's less reliable compared to ext2fs due
>to superblock copies into ext2fs system.

As long as you create your ext2 filesystem with standard
parameters (or know your nonstandard parameters by heart) you
don't need any superblock backup to recreate a filesystem.
Superblock backups are just a convenient place to store the
options you used to create the fs. They are never updated after
they have been written by mke2fs.

Instead of using fsck with an alternate superblock, you could
also recreate the filesystem with "mke2fs -S" and fsck with a
regular superblock.

Demonstration:

// Create a filesystem with std parameters
root@white:~# mke2fs /dev/hda6
[ ... ]
root@white:~# mount -t ext2 /dev/hda6 /mnt
root@white:~# cp -R /etc/* /mnt
root@white:~# umount /mnt

// recreate the filesystem superblock, zapping all
// summary information. The primary superblock is
// now identical with all its backup copies again.
root@white:~# mke2fs -S /dev/hda6
[ ... ]

// use fsck to recreate the summary information
// and usage bitmaps again.
root@white:~# fsck /dev/hda6
Parallelizing fsck version 0.5b (14-Feb-95)
e2fsck 0.5b, 14-Feb-95 for EXT2 FS 0.5a, 95/03/19
/dev/hda6 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Fix summary information<y>? yes
[ tons of "error fixed" messages deleted ]
/dev/hda6: ***** FILE SYSTEM WAS MODIFIED *****
/dev/hda6: 290/79560 files, 13253/317992 blocks

// check that all is in order.
root@white:~# mount -t ext2 /dev/hda6 /mnt
root@white:~# ls /mnt | head -3
DIR_COLORS
HOSTNAME
NETWORKING
root@white:~# df /mnt
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/hda6 307851 3112 288840 1% /mnt
