I'm running a TCP server on top of the VxWorks stack.
The TCP server listens on port 5001 and is ready to accept any number of TCP
clients.
On a Windows NT machine, I have a TCP client program.
The client program connects to the TCP server but does not close the
socket.
This is done in a while loop 50 times, so we end up with 50 TCP clients
connected to the TCP server.
However, after accepting connections from 44 TCP clients, the TCP server gives
an error.
The error code is 0xd0004.
Can anyone help me out? Is there any limit on the number of connections that
can be opened?
Regards,
Raghunath
(O) 91-832-783615 ext 505
(R) 91-832-423024
This is a FAQ... you should really check previous postings to see if
you can find the answer (not to mention the docs). You need to change
NUM_FILES and recompile your kernel image - each socket takes a file
descriptor.
You can go up to 255 if you, or anything that you are using, makes use
of the select() feature (or if you just want to be safe), but since you
only need a few more, I'd suggest going to 75 or 100.
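For example, the change might look like this in config.h (just a sketch; the
value 100 is only my suggestion above, and depending on where your BSP picks
up the default you may need the #undef first):

    /* config.h: allow ~50 sockets plus normal file and driver use */
    #undef  NUM_FILES
    #define NUM_FILES 100

If the default is the usual 50, that would also explain why the server falls
over around 44 connections, since stdio and the files VxWorks itself holds
open take some of the slots. After the change, rebuild the kernel image and
reboot the target.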
HTH,
John...
"Raghunath Adhyapak" <ragh...@controlnet.co.in> wrote in message news:<NEBBJKOBKBDKADDJBJM...@controlnet.co.in>...
Raghunath Adhyapak wrote:
> Hi all,
>
> I'm running a TCP server on top of the VxWorks stack.
> The TCP server listens on port 5001 and is ready to accept any number of TCP
> clients.
>
> On a Windows NT machine, I have a TCP client program.
> The client program connects to the TCP server but does not close the
> socket.
> This is done in a while loop 50 times, so we end up with 50 TCP clients
> connected to the TCP server.
>
> However, after accepting connections from 44 TCP clients, the TCP server gives
> an error.
> The error code is 0xd0004.
>
> Can anyone help me out? Is there any limit on the number of connections that
> can be opened?
Sockets use file descriptors, and there is a limit on this resource; the
limit you actually hit also depends on the number of files you have open on
file systems and on the files VxWorks itself may have open for whatever
reason. (I use 64 as a guesstimate of the limit because I think select() is
limited to this; however, there may be a kernel setting to raise it.)
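If you want to see what is actually eating descriptors before picking a new
limit, the show routines on the target shell help (assuming they are included
in your image; the exact output varies between VxWorks versions):

    -> iosFdShow        (lists every open fd with its driver and name)
    -> inetstatShow     (lists active sockets and their TCP state)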
David
> Sockets use file descriptors, and there is a limit on this resource; the
> limit you actually hit also depends on the number of files you have open on
> file systems and on the files VxWorks itself may have open for whatever
> reason. (I use 64 as a guesstimate of the limit because I think select() is
> limited to this; however, there may be a kernel setting to raise it.)
To increase the number of files, you have to set this define (in config.h):
#define NUM_FILES <xxxx>
If I remember correctly, the implementation of select() imposes a limit of
256 on the number of sockets if you use this call.
Note also that each open socket uses a couple of "clusters" in the network
stack system pool, so you have to size this pool accordingly. You do this by
putting the needed values in the following defines, also in config.h:
/* Number of clusters in the network stack system pool.
 * The file netBufLib.h sets these to the default value 64 if they
 * are not defined previously (#ifndef). Values increased here to <xx>.
 */
#define NUM_SYS_64 <xx>
#define NUM_SYS_128 <yy>
#define NUM_SYS_256 <ww>
#define NUM_SYS_512 <zz>
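As a rough illustration (the numbers are purely hypothetical; size them from
your own traffic, and remember the defaults are 64 each), an override for a
target that has to carry around 50 simultaneous sockets might look like:

    #define NUM_SYS_64   128
    #define NUM_SYS_128  128
    #define NUM_SYS_256  64
    #define NUM_SYS_512  64

You can check how the pool is really being used at run time with
netStackSysPoolShow() from the target shell (part of the network show
routines) and grow whichever cluster size shows allocation failures.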
There is a very interesting document from Wind River ("Tornado 2.0
network stack configuration/performance issues") that explains how to
configure the IP stack for heavy use.
I got it from a Wind River engineer a few months ago.
What will happen if the network stack system pool is sized incorrectly?
Will it just limit the number of connections (like we are experiencing), or
will there be horrible results like bad memory accesses etc.? Could this be
a limiter on the number of sockets too, then?
Do you think you could point me at this document you mentioned?
Kevin
"Claudio Potenza" <cpot...@onetel.net.uk> wrote in message
news:2025ac82.01091...@posting.google.com...
There are two documents (very similar content): WTN51 and WTN53. Don't
worry about their references to Java and web server products; these can be
considered just examples of network-intensive applications. All tech notes
for T2 can be downloaded from the following URL:
following URL:
http://www.wrs.com/csdocs/product/t2/technote/index.shtml
You will need a WindSurf account/password to get to them though.
HTH,
John...
"Kevin Livingston" <Kevin.Li...@betalasermike.com> wrote in message news:<hf6o7.13$pz5.810@client>...
Based on my understanding, if you undersize your network memory pools
(there are three different pools), what will happen is:
1) if you run out of "network stack system pool" buffers (more precisely
called "clusters"), you will not be able to open sockets (errno = ENOBUFS);
2) if you run out of "network data system pool" clusters, your send()
calls will hang;
3) if you run out of "driver pool" clusters, you will lose incoming
packets.
There should be no other horrible things (crashes, bad memory accesses or
similar).
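A minimal sketch of how the first failure mode shows up in server code
(nothing here is specific to the original poster's program; the listening
socket is assumed to be set up elsewhere, and the back-off is just one way of
handling it):

    #include <vxWorks.h>
    #include <sockLib.h>
    #include <taskLib.h>
    #include <sysLib.h>
    #include <errno.h>
    #include <stdio.h>

    /* Accept connections forever, reporting pool/fd exhaustion. */
    void acceptLoop (int listenFd)
        {
        for (;;)
            {
            int clientFd = accept (listenFd, NULL, NULL);

            if (clientFd == ERROR)
                {
                if (errno == ENOBUFS)
                    printf ("accept: out of network stack clusters\n");
                else
                    printf ("accept failed, errno = %#x\n", errno);

                taskDelay (sysClkRateGet ());   /* back off one second */
                continue;
                }

            /* hand clientFd off to a worker task here */
            }
        }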
> Do you think you could point me at this document you mentioned?
No, it was simply given to me by a Wind River engineer.
When I go back to my home office (next week) I could try to send it to
you (if I am allowed to do so!)
The number of sockets is limited to 253 with NUM_FILES set to 253.
The discrepancy is because fd numbers 0, 1 and 2 are 'canned' and refer to
other fd numbers, so they are not counted as 'files'.
David
Buffers can be held up by connections in the TIME_WAIT state (which
happens on the side that does the first close(), e.g. some HTTP servers),
and this can slow things down considerably, and possibly halt traffic.
inetstatShow can display the state of the sockets.

The thing that comes to mind is to shorten TIME_WAIT, but to the best
of my knowledge there's really no good way to do this. In my opinion,
on modern Ethernet it's not necessary to wait a full minute, unless
you're sending data over a 300 bps modem to the moon or something. For
traffic that isn't absolutely critical (most of the time it isn't),
I've come up with a way to alleviate the problem, at least for us: at
each close(), allow a number of connections in TIME_WAIT, but once that
number has been reached, at each close() kill off one (or all, which is
what I do) of the connections in TIME_WAIT.

IIRC, there's a variable called tcpcb that points to the first TCP
control block and can be used to iterate through the blocks and find
TCP control blocks in the TCPS_TIME_WAIT state, which you can then
call tcp_close() on.
Surrounding the function with taskLock()/taskUnlock() is recommended.
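A rough sketch of that idea follows. The stack-internal names (the tcb list
head, struct inpcb / struct tcpcb, inp_next, inp_ppcb, t_state,
TCPS_TIME_WAIT and tcp_close()) are taken from the 4.4BSD-derived netinet
headers the Tornado 2 stack is built on; verify them against your own
headers and treat this as a starting point rather than a drop-in routine:

    #include <vxWorks.h>
    #include <taskLib.h>
    #include <sys/socket.h>
    #include <net/route.h>
    #include <netinet/in.h>
    #include <netinet/in_pcb.h>
    #include <netinet/tcp.h>
    #include <netinet/tcp_timer.h>
    #include <netinet/tcp_fsm.h>
    #include <netinet/tcp_var.h>

    /* head of the TCP protocol control block list in the BSD stack;
     * declare tcp_close() by hand if your headers don't prototype it */
    extern struct inpcb tcb;
    extern struct tcpcb *tcp_close (struct tcpcb *);

    /* Close every connection currently sitting in TIME_WAIT.
     * tcp_close() frees the pcb, so save the next pointer first. */
    void tcpTimeWaitReap (void)
        {
        struct inpcb *inp;
        struct inpcb *next;

        taskLock ();    /* keep tNetTask from changing the list under us */

        for (inp = tcb.inp_next; inp != &tcb; inp = next)
            {
            struct tcpcb *tp = (struct tcpcb *) inp->inp_ppcb;

            next = inp->inp_next;

            if (tp != NULL && tp->t_state == TCPS_TIME_WAIT)
                (void) tcp_close (tp);
            }

        taskUnlock ();
        }

Calling something like this from your close() path once you pass your chosen
TIME_WAIT threshold, as described above, keeps the reaping bounded.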
Mark S.