
how to find biggest files on system


Chris Gosley

Aug 9, 2001, 2:12:09 AM
I have a problem with running out of disk space, and I can't figure out
which file(s) are taking up all the space.

Can someone please tell me a command that I can run from the root of the
filesystem which will show me all files in descending order of size, e.g.:

ls -la -"descending order of size"

ta


'Dungeon' Dave

Aug 10, 2001, 5:58:12 PM
.. and it came to pass that Chris Gosley <c...@ememory.com.au> uttered
forth:

du -ak | sort -nr | head -20

- Disk usage per file, sorted numerically in descending order (putting
largest at the top), showing the top 20 entries.

Amend as you see fit.
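[A self-contained way to try the pipeline above, using a throwaway scratch
directory instead of the filesystem root; the directory and file names here
are purely illustrative. Note that the listing includes directory totals as
well as individual files, which matters for the follow-up below:]

```shell
# Sketch: try the du pipeline on a scratch directory rather than /.
demo=$(mktemp -d)
mkdir "$demo/sub"
dd if=/dev/zero of="$demo/sub/big.log" bs=1024 count=512 2>/dev/null  # ~512 KB file

# Per-entry disk usage in KB, numerically reverse-sorted, top 20 entries:
du -ak "$demo" | sort -nr | head -20
```

Running it shows the directory totals sorting above the file itself.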
--

"Dungeon" Dave, in anti-harvest mode...


Juergen Pfann

Aug 12, 2001, 1:00:13 AM
'Dungeon' Dave wrote:
>
> .. and it came to pass that Chris Gosley <c...@ememory.com.au> uttered
> forth:
> >I have a problem with running out of disk space, and I can't figure out
> >which file(s) are taking up all the space.
> >
> >Can someone please tell me a command that I can run from the root of the
> >filesystem which will show me all files in descending order of size, e.g.:
> >
>
> du -ak | sort -nr | head -20
>
> - Disk usage per file, sorted numerically in descending order (putting
> largest at the top), showing the top 20 entries.
>

Did you check that out? I, for one, get only _directories'_ cumulative
disk space as the "Top 20"; not a single _file_ among them -
applying the above to my "normal" situation (3 Linux file systems,
3xVFAT, 1xNTFS mounted).

Alternatively:
find / -type f -ls | sort +6 -nr | head -20

- Similarly, the top 20 of all regular files (not directories, device
files, pipes and such), reverse-sorted on the seventh field of find's
ls-like output, which is the file size ("+6" means skip the first six
fields; the modern spelling is "-k 7").

Feel free to check which is faster.
The advantage of my suggestion is that you're more flexible to restrict
the search with additional criteria, such as only files greater than
1 MByte: simply add "-size +2048" (2048 512-byte blocks) to the "find"
command. Or, even better, restrict it to the root fs only ("-xdev");
this has the additional advantage that you don't get /proc/kcore,
which is not a "real" file (though "find" reports it as a regular file)
but only a mirror of your system RAM...
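[Combining these suggestions, a sketch on a scratch directory - substitute /
for the temp directory in real use; the file names and sizes are invented
for the demonstration:]

```shell
# Sketch: find-based variant on a scratch directory (use / in real life).
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.dat"   bs=1024 count=3072 2>/dev/null  # 3 MB
dd if=/dev/zero of="$demo/small.dat" bs=1024 count=100  2>/dev/null  # 100 KB

# Regular files over 1 MB (-size +2048 = 2048 512-byte blocks), staying on
# one filesystem (-xdev), reverse-sorted on the size field of -ls output:
find "$demo" -xdev -type f -size +2048 -ls | sort -nr -k7 | head -20
```

Only big.dat survives the size filter; small.dat is never listed.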

Juergen

'Dungeon' Dave

Aug 12, 2001, 10:46:39 AM
.. and it came to pass that Juergen Pfann <juerge...@t-online.de>
uttered forth:

>'Dungeon' Dave wrote:
>>
>> .. and it came to pass that Chris Gosley <c...@ememory.com.au> uttered
>> forth:
>> >I have a problem with running out of disk space, and I can't figure out
>> >which file(s) are taking up all the space.
>> >
>> >Can someone please tell me a command that I can run from the root of the
>> >filesystem which will show me all files in descending order of size, e.g.:
>> >
>>
>> du -ak | sort -nr | head -20
>>
>> - Disk usage per file, sorted numerically in descending order (putting
>> largest at the top), showing the top 20 entries.
>>
>
>Did you check that out? I, for one, get only _directories'_ cumulative
>disk space as the "Top 20"; not a single _file_ among them -
>applying the above to my "normal" situation ( 3 Linux file systems,
>3xVFAT, 1xNTFS mounted).

Bugger - point taken, as the top 20 largest entries are likely to be the
parent directories.

I guess a "df -kP" will tell you which filesystems are filling up first.
This should be a better starting point!

Rich Looke

Aug 22, 2001, 7:53:32 PM
You can use the sort command for that. I won't go into the details of the
sort switch settings you need because I'm not at a Unix/Linux station right
now to check the man page. But I can tell you that I've done it before, so
you can do it after looking at the man page.

Basically, what you need to do is use the ls command with the recursive
switch set and pipe that output into sort. The format of ls's output is
dependable with regard to which column the file size information is in, so
you can set up sort to sort by numerical value starting at one column and
ending at another.

So your command structure (sans the necessary switch settings and column
information for sort) would look something similar to this:

ls -laR / | sort -<switches> <column_info>

What's going to happen here is you'll get a sorted list of tens of thousands
of files whizzing past your screen in real time, so I'd either pipe the
output to more, or else send it to a file, as such:

ls -laR / | sort -<switches> <column_info> | more
ls -laR / | sort -<switches> <column_info> > /tmp/my_sorted_files.txt
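[Filling in the placeholder switches, under the assumption that the size is
the fifth whitespace-separated field of a typical ls -l line; a sketch on a
scratch directory, with the caveat that the recursive listing also contains
directory headers and "total" lines that get mixed into the sort:]

```shell
# Sketch: numeric reverse sort on field 5 of ls -l output (the size column).
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/huge.bin" bs=1024 count=2048 2>/dev/null  # 2 MB
dd if=/dev/zero of="$demo/tiny.bin" bs=1024 count=1    2>/dev/null  # 1 KB

ls -laR "$demo" | sort -k5 -nr | head -20
```

The biggest file ends up on the first line of output.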

Here's a trick I'll show you. You're really only interested in the really big
files, so why bother with the small ones? Let's say we're interested in
seeing all the files that are over, oh, say 25 Megabytes in size. I do this
kind of thing fairly often using one of my favorite commands: find. Here's
how you could set it up to find just the whoppers:

find / -size +25000k -exec ls -la {} \;

This will do a long listing of all the files over 25 megs. You want the long
listing so you know exactly how big each file is. The find command is a
really powerful command that you can use to look for all kinds of different
stuff in your file system: files that are older/newer than a certain date,
haven't been accessed in a certain amount of time, empty directories, files
owned by a given user or not owned at all, types of files, whether they be
directories, named pipes, block devices, character devices, etc.
Check out the man page for find. You'll see what I mean.
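[A short sketch of a few of the criteria mentioned above, on a scratch
directory with made-up contents - the "-size +25000k" example from the post
simply finds nothing here, since all the demo files are tiny:]

```shell
# Sketch: a few find criteria from the description above.
demo=$(mktemp -d)
mkdir "$demo/emptydir"
touch "$demo/afile"

find "$demo" -type f                                   # regular files only
find "$demo" -type d -empty                            # empty directories
find "$demo" -type f -size +25000k -exec ls -la {} \;  # the whoppers (none here)
```

Criteria combine by AND-ing, so each test narrows the result further.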

Hope it helps.

Rich Looke

(no direct reply requested. but if you do,
please remove "NOSPAM." from return address). Thanks!


"Chris Gosley" <c...@ememory.com.au> wrote in message
news:Wrqc7.49$YI3....@nsw.nnrp.telstra.net...
