
Too many files open in System - Why


Julian Warren

Feb 11, 2001, 6:40:59 PM
Dear All,

We are using a 15-minute cron job to run a Java app on a FreeBSD i386 server.
This application harvests web pages, collates the results, and FTPs them to
another of our servers.

The entire box regularly falls over: a telnet session responds to an "ls"
with "Too many files open in system".

It might be the case that the FTP fails when the target server appears full;
we have not yet proven this.

How do we find out how many files are really open, and which processes have
them open?
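
A sketch of what we might try, assuming the FreeBSD base-system tools
sysctl(8) and fstat(1) report these (the PID below is a placeholder):

# Compare the current system-wide open-file count against the kernel limit.
sysctl kern.openfiles kern.maxfiles
# List every open file, one line per descriptor, with owning process and PID.
fstat
# Roughly count the files held open by one suspect process (placeholder PID).
fstat -p 1234 | wc -l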

Does it make any difference how we launch the following script?

#!/bin/sh
BASE=/usr/home/ourservername/ourpathname/
JRE=/usr/local/jdk1.1.8/bin/jre
# Clean up JS Directories
rm -f $BASE/textfiles/*
cd $BASE
$JRE -classpath .:/usr/local/jdk1.1.8/lib/classes.zip:jaxp.jar:parser.jar \
    Spider
cd $BASE
ftp -in < ftpScript

The ftp script does not start with a shebang.
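
As I understand it, "ftp -in" reads that file as plain command input rather
than executing it, so the missing shebang should not matter. A sketch of the
general shape of such a command file (host, login, and filename here are
placeholders; with -n, auto-login is off, so a "user" line is needed):

open target.example.com
user ourusername ourpassword
put results.txt
bye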

Thanks in anticipation

Julian Warren

Julian Warren

Feb 11, 2001, 6:54:46 PM
Dear All

Further to my previous posting, this is the result of a ulimit query:

bash$ ulimit -a
core file size (blocks)     unlimited
data seg size (kbytes)      524288
file size (blocks)          unlimited
max memory size (kbytes)    unlimited
stack size (kbytes)         65536
cpu time (seconds)          unlimited
max user processes          531
pipe size (512 bytes)       1
open files                  1064
virtual memory (kbytes)     589824

Just supposing I do manage to find a way of locating stranded open files, is
there any way of forcing them closed?
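
My guess is that the only blunt instrument is killing the process that holds
them; a sketch, assuming fstat(1) can point at the owner (the filesystem path
and the PID are placeholders):

# List every open file on the filesystem that holds our working directory.
fstat -f /usr/home
# Then terminate the offending process so its descriptors are reclaimed
# (1234 is a placeholder PID taken from the fstat output).
kill -TERM 1234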

Regards

Julian Warren

"Julian Warren" <junk...@hotmail.com> wrote in message
news:OmFh6.6969$4a.178...@news.xtra.co.nz...

Jerry Heyman

Feb 14, 2001, 5:07:30 PM
In article <IzFh6.6976$4a.177...@news.xtra.co.nz>,

"Julian Warren" <junk...@hotmail.com> writes:
>[earlier post and ulimit output snipped]
>
>Just supposing I do manage to find a way of locating stranded open files, is
>there any way of forcing them closed?

The number of open files is generally limited on a per-process basis and tied
to a kernel table. I'm not familiar enough with the *BSD kernel to know where
to look for the constant, or whether it can be enlarged.
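
That said, I gather FreeBSD exposes the relevant tunables through sysctl(8);
a sketch I haven't verified myself (the value is illustrative):

# Inspect the system-wide and per-process open-file limits.
sysctl kern.maxfiles kern.maxfilesperproc
# Raise the system-wide table at runtime if it turns out to be too small.
sysctl -w kern.maxfiles=8192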

Question: does this command run okay from a user's command-line prompt?

I'm trying to work out whether the environment differs between the
interactive session and the cron job. One thing to do with the cron job
is to have it write out to a log file.

Change your cron job script's first line to #!/bin/sh -x, or add a 'set -x'
near the top of the file. Also put an 'env' command in the script so
that you can see ALL the environment variables available to you when
the script executes.
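
A sketch of how the top of the instrumented script would look:

#!/bin/sh -x
# -x traces every command to stderr; env dumps what cron actually provides.
env
BASE=/usr/home/ourservername/ourpathname/
# ... rest of the script unchanged ...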

Then from your crontab do the following:

0,15,30,45 * * * * /path/to/script > /path/to/log/file 2>&1

Good luck,

jerry

--
Jerry Heyman 919.224.1442 | Tivoli Systems |"Software is the
Build Infrastructure Architect | 3901 S Miami Blvd | difference between
Jerry....@tivoli.com | RTP, NC 27709 | hardware and reality"
http://www.acm.org/~heymanj
