How do I get 43.9% non-contiguous inodes? I thought
ext2 defragged on the fly? Is there a way to defrag
this or should I even care?
Many thanks,
--Tony
aew...@gbis.com
fsck.ext2 -cfvp /dev/sda4
32629 inodes used (2%)
14330 non-contiguous inodes (43.9%)
# of inodes with ind/dind/tind blocks: 13632/1066/3
3089023 blocks used (65%)
0 bad blocks
31832 regular files
781 directories
0 character device files
0 block device files
0 fifos
0 links
7 symbolic links (7 fast symbolic links)
0 sockets
--------
32620 files
You shouldn't particularly care. Bits of files scattered all over the
place are pretty good news on multiuser systems! It means the average
seek distance between requests from two different users is smaller.
Mind you, 43% is a record breaker. It could be achieved by writing lots
of little files, erasing half of them, then writing lots of files
exactly twice the size of the erased ones.
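Something along these lines would reproduce that pattern (a rough,
untested sketch -- the mount point, counts and sizes are all made up,
so only try it on a scratch filesystem):

# on a throwaway filesystem mounted at /mnt/scratch -- not on real data!
cd /mnt/scratch
for i in `seq 1 10000`; do
    dd if=/dev/zero of=small.$i bs=1k count=4 2>/dev/null
done
for i in `seq 1 2 10000`; do rm small.$i; done   # erase every other one
for i in `seq 1 5000`; do
    dd if=/dev/zero of=big.$i bs=1k count=8 2>/dev/null  # exactly twice the size
done

Each 8k file now needs two of the 4k holes the deletions left behind,
so just about every new file ends up non-contiguous.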
Peter
Don't know where you got the notion ext2 'defragged on the fly' -- it
does not. How you get non-contiguous inodes depends on your
filesystem usage -- typically lots of small files being created,
extended, and deleted.
HOWEVER: "Non-contiguous" is not necessarily a valid measure of
fragmentation, especially if you have a lot of bad blocks.
You can defrag with /sbin/e2defrag, if you have it installed. If not,
try your distribution source media or get the tarball from
ftp://metalab.unc.edu/pub/Linux/system/filesystems/defrag-0.70.tar.gz.
>>>>> IMPORTANT WARNING <<<<<
Read the docs carefully! DO NOT, DO NOT, DO NOT run defrag on a
mounted filesystem. Indeed, I would run defrag only after booting
from a separate root/boot disk, and after completing two backups.
Of course, I'm paranoid about just about everything :-)
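For what it's worth, the whole sequence would look roughly like this
(the device is just the one from the fsck output above, the mount point
is an example, and I'm going from memory on the options -- read the
package's own docs before trusting any of it):

# back it up first (twice, if you're as paranoid as I am)
dump -0f /backup/sda4.dump /dev/sda4
# then take the filesystem offline
umount /dev/sda4
e2fsck -f /dev/sda4     # must be clean before defragging
e2defrag /dev/sda4      # from the defrag-0.70 package
e2fsck -f /dev/sda4     # and check again afterwards
mount /dev/sda4 /home   # or wherever it normally lives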
For any user, having accessed the first sector of a file, the least amount of
time will be spent if subsequent sectors in the file reside on the same track
or those immediately adjacent. Any lengthy excursions of the head in gathering
all of the sectors in the file take time, and therefore waste time.
On a multiuser system the head will have to switch locations anyway.
On user reads, the users are waiting to be served. So the best performance
is gained when the data the users want next lies adjacent on disk, meaning
block 1 from user 1 is followed by block 1 of user 2, etc. Obviously this
means a severely scattered FS. And as time passes files are deleted and
recreated at will, but randomly located files will on average still keep
the distance the head must move to serve another user small.
You could probably even prove this mathematically, and I imagine it has
been done already; search for research papers on this topic if you want
more detailed info.
Eric
No. Stating something doesn't make it "conventional wisdom". This
particular old wives' tale is covered in several FAQs (not the Linux
FAQ, as far as I can see ... though it hints at it).
> For any user, having accessed the first sector of a file, the least amount of
> time will be spent if subsequent sectors in the file reside on the same track
> or those immediately adjacent. Any lengthy excursions of the head in gathering
Interestingly false. "For any user" is a meaningless phrase. Please
re-examine your preconceptions! The disk and the block device file
systems are completely unaware and uninterested in for whom or why
they are ultimately fetching or writing data. All that happens is
that the VFS layers get tired of caching or stalling and I/O
requests dribble downwards through the architecture ...
> all of the sectors in the file take time, and therefore waste time.
If by any chance the disk is not available at the moment data is
requested, the CPU will go find something useful to do, like finishing
up some more of your ssh ciphering for the window next door, which
speeds everything up. So you should be thankful if your data is nicely
spread out! And if it is nicely spread out, then the average on-disk
distance between data requests is reduced, so the system speeds up (and
the time to read a single long file increases, but that's a price worth
paying for good overall performance, since reading a single long file
is a very rare event).
Sorry, you can't argue qualitatively like this. You have to argue from
use profiles, and take other system measures into account. To help you,
here are my SCSI disk measurements:
Device using Narrow/Sync transfers at 40.0 MByte/sec, offset 63
Transinfo settings: current(10/63/0/0), goal(10/127/0/0), user(10/127/0/0)
Total transfers 341607 (256704 reads and 84903 writes)
            < 2K     2K+     4K+     8K+    16K+    32K+    64K+   128K+
Reads:    104918    1934  114636   12246   15079    3541    4350       0
Writes:    77172    5843     822      83     151      40     792       0
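Those numbers came out of the SCSI driver's /proc statistics, by the
way. If you want the same sort of profile for your own disks, something
like the following should show it on a 2.2-ish kernel, assuming the
low-level driver was built with statistics support (the exact path
depends on your controller and driver):

cat /proc/scsi/aic7xxx/0   # substitute your own driver name and host number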
Peter