Anyway, I was hoping someone might see something from the ‘df -h’
listing that would indicate a problem. The system really can’t do
anything and I keep getting a log message indicating that the system
is full. I’ll check on the process that is writing the ‘disk full’
message for another clue. The system now keeps rebooting itself.
Any help with this greatly appreciated,
Eric
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t1d0s0 34G 33G 0K 100% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.4G 1.5M 2.4G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                   34G   33G    0K   100%  /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                   34G   33G    0K   100%  /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 2.4G 376K 2.4G 1% /tmp
swap 2.4G 56K 2.4G 1% /var/run
/dev/dsk/c1t1d0s7 33G 827M 32G 3% /home
du -sk ./* | sort -n
check /var/adm/messages
rebooting itself?
Anything in /var/crash/`uname -n` ?
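A minimal sketch of that check, assuming the Solaris default savecore location (dumpadm(1M) would confirm the configured path on the actual box):

```shell
# Look for saved crash dumps in the default savecore directory.
# /var/crash/<hostname> is the Solaris default; dumpadm can confirm
# the configured location on a real system.
host=`uname -n`
crashdir=/var/crash/$host
if [ -d "$crashdir" ]; then
    du -ks "$crashdir"      # total size of saved dumps
    ls -l "$crashdir"       # unix.N / vmcore.N pairs, if any
else
    echo "no saved crash dumps under $crashdir"
fi
```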
Something has obviously been writing to your / partition. Doing an ls /
may provide a clue as to what. It could be everyone!
I don't see a separate partition for /var. In fact, I don't see a
separate partition for *anything* but /. It's customary to lay out your
disk with partitions for / plus /usr, /var, and so on. I think having a
separate /var is a very good idea! A look at /var as it now exists may
help you understand why.
It may also help you understand what sort of periodic cleanup you need
to be doing.
Writing things to / does not seem, to me, to be a good idea. Obviously
there are a few things that belong in /; e.g. /usr. I'm talking about
putting generic "junk" in there!
You might want to get yourself a good book or two on system administration!
Here's a sample layout that I've found satisfactory for a workstation
with, IIRC, a 20 GB disk. YMMV!
sunblok_$ df
/ (/dev/dsk/c0t0d0s0 ): 358354 blocks 388161 files
/proc (/proc ): 0 blocks 3829 files
/dev/fd (fd ): 0 blocks 0 files
/etc/mnttab (mnttab ): 0 blocks 0 files
/var/run (swap ): 2349648 blocks 26364 files
/tmp (swap ): 2349648 blocks 26364 files
/scratch (/dev/dsk/c1t1d0s0 ): 270112 blocks 503995 files
/export/home (/dev/dsk/c0t0d0s7 ):109668122 blocks 6724999 files
full of crash dumps? check /var/crash
What is your root partition full of????? When you learn that, you may
be able to figure out what's doing it and kill it!
<snip>
> What is your root partition full of????? When you learn that, you may
> be able to figure out what's doing it and kill it!
>
> <snip>
I would boot from an optical drive in single-user mode in order to
achieve that safely.
Then check /var/core, /var/adm/messages and, most importantly,
do a du to find out what directory tree is chewing away all that disk
space!
Check the output of
du -dak / | sort -n | tail
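A companion sketch (not from the thread): when a single runaway file rather than a whole tree is the culprit, find can list big files directly. It's demonstrated here on a throwaway directory with made-up names; on the real machine you'd point it at / with -xdev to stay on the root filesystem.

```shell
# Demo tree (paths made up for illustration).
demo=`mktemp -d`
dd if=/dev/zero of="$demo/fat.log" bs=1024 count=64 2>/dev/null
: > "$demo/thin.log"

# Files bigger than 64 512-byte blocks (~32 KB). On the real system:
#   find / -xdev -type f -size +20480 -ls    # files over ~10 MB
found=`find "$demo" -xdev -type f -size +64 2>/dev/null`
echo "$found"
```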
I eventually gave up and reinstalled Solaris 10 from scratch after
backing everything up. I managed to get on the internet with DHCP,
which I hadn't been able to do with Solaris 9, or after upgrading from
Solaris 9 to Solaris 10.
I never did find out exactly why /var kept filling up, but I did pin it
down to a bunch of 'core' files in /var/fm/fmd or some such.
I DID buy a book on Solaris 10 and I'm reading it now.
Eric
#!/bin/sh
# Summarize disk usage of each item in the given directories (or the
# current directory), smallest first, with a grand total.
/bin/echo "Results in kilobytes:"
if [ $# = 0 ]; then
    /usr/bin/du -k -s * | sort -n | /bin/awk \
        '{printf(" %9d %s\n",$1,$2);t+=$1} \
        END{printf("----------\n%10d kilobytes\n",t)}'
else
    /usr/bin/du -k -s "$@" | sort -n | /bin/awk \
        '{printf(" %9d %s\n",$1,$2);t+=$1} \
        END{printf("----------\n%10d kilobytes\n",t)}'
fi
There is actually a little more to it, as my original also has
options for printing out in megabytes or gigabytes, but that's
just a matter of dividing the number by the appropriate constant,
so it's not presented here.
I call it "dirsp", which is probably not the best name, but it's
short and hasn't been used by anything else yet.
Run it by first "cd" to the directory in question, then "dirsp".
Or, you can give it a directory as a parameter.
The biggest advantage I find is in sorting the output by size.
So I can "cd" into the sub-directory that's using up the most
space, then repeat the operation. I continue this way until I
isolate the file(s) or directory(ies) that are getting unusually
large. Then, knowing the perpetrator, I can do something about it.
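That drill-down can be sketched as a loop, assuming a POSIX shell; the demo tree and its names below are made up so the walk has something to find:

```shell
# Throwaway tree with the bulk of the space buried two levels down.
demo=`mktemp -d`
mkdir -p "$demo/pics/raw" "$demo/etc"
dd if=/dev/zero of="$demo/pics/raw/big.img" bs=1024 count=512 2>/dev/null

# Repeatedly descend into the largest immediate subdirectory.
dir=$demo
while :; do
    # du prints "size<TAB>path"; keep the path of the biggest entry.
    # When $dir has no subdirectories the glob fails and we stop.
    biggest=`du -ks "$dir"/*/ 2>/dev/null | sort -n | tail -1 | cut -f2`
    [ -z "$biggest" ] && break
    dir=$biggest
done
echo "space is concentrated under: $dir"
```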
It's too small and simple to take credit, but I hope someone might
find it useful.
/:-/
Try xdu -- it takes as input the output from a du, then shows
it graphically as nested boxes (box size represents disk usage).
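A hypothetical invocation (xdu reads du's output on stdin; only run it where xdu and an X display are available):

```shell
# Capture du output first; feed it to xdu only if xdu is present.
duout=`du -k /tmp 2>/dev/null`
if command -v xdu >/dev/null 2>&1; then
    printf '%s\n' "$duout" | xdu    # opens the nested-box display
else
    echo "xdu not installed; run: du -k <dir> | xdu"
fi
```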
David