Should I use BSD sockets or go to a lower layer? What is the maximum
number of BSD sockets that could realistically be held open at the
same time on Linux? How do I go to a lower layer if I wanted to?
Resources? Any ideas would be great.
joe r
If you can implement a TCP stack in your userland code, you could use
the ethertap facility to really get down low. I'm planning to use it
for a special network monitoring tool.
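Not the ethertap device itself, but for a rough picture of what "down
low" looks like from userland, here is a minimal sketch using a Linux
PF_PACKET raw socket to read whole Ethernet frames; the buffer size is
arbitrary and it needs root:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>        /* htons() */
#include <linux/if_ether.h>   /* ETH_P_ALL */

int main(void)
{
    /* A raw packet socket sees every frame, below TCP and IP. */
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket(PF_PACKET)"); exit(1); }

    unsigned char frame[2048];
    for (;;) {
        long n = recv(fd, frame, sizeof frame, 0);
        if (n < 0) { perror("recv"); break; }
        printf("frame: %ld bytes\n", n);
    }
    close(fd);
    return 0;
}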
Otherwise I would suggest fanning the connections out over multiple
forked child processes that can (de-)multiplex the streams to the
parent or to each other as needed.
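A rough sketch of that fan-out, with a made-up port and child count
(the relaying between parent and children over pipes or socketpairs is
elided): pre-fork a few children that share one listening socket, so
each child accept()s and services only its own slice of the
connections.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define NCHILDREN 8      /* example value */
#define PORT      5190   /* example port  */

static void child_loop(int lfd)
{
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0) continue;
        /* ... add cfd to this child's poll()/select() set, shuttle the
           IM traffic, relay to the parent over a pipe or socketpair if
           the children need to talk to each other ... */
        close(cfd);
    }
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    if (lfd < 0) { perror("socket"); exit(1); }

    int on = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family      = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    sa.sin_port        = htons(PORT);
    if (bind(lfd, (struct sockaddr *)&sa, sizeof sa) < 0) { perror("bind"); exit(1); }
    if (listen(lfd, 128) < 0) { perror("listen"); exit(1); }

    /* Children inherit the listening socket and accept independently. */
    for (int i = 0; i < NCHILDREN; i++)
        if (fork() == 0) { child_loop(lfd); _exit(0); }

    for (;;) wait(NULL);   /* parent just reaps */
    return 0;
}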
The limit is the number of file descriptors available, usually 1024
or maybe 4096. That can be raised, though things could begin to get
slow at that point, since I believe the descriptors are not searched
using methods that scale well. OTOH, that many open connections could
slow down the whole system even if they are spread over a number of
processes. I'm just not sure, and hopefully a kernel guru will reply.
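For reference, the per-process limit can be inspected and raised from
the program itself with getrlimit()/setrlimit(); raising the hard
limit still takes root, and old kernels also have a global
file-descriptor ceiling. A minimal sketch:

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("soft limit %lu, hard limit %lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;          /* ask for the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit(RLIMIT_NOFILE)");

    return 0;
}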
--
| Phil Howard - KA9WGN | My current websites: linuxhomepage.com, ham.org
| phil (at) ipal.net +----------------------------------------------------
| Dallas - Texas - USA | phil-evaluates-email...@ipal.net
My advice to you would be to have one thread call poll() on each
group of 1,024 connections.
DS
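A minimal sketch of that idea, with made-up details: each thread owns
an array of up to 1,024 pollfd entries (filled in, with events set to
POLLIN, by whatever accepts the connections) and runs a loop like
this:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* One of these loops runs per thread, each watching its own group
   of up to ~1,024 descriptors. */
void poll_group(struct pollfd *fds, int nfds)
{
    char buf[4096];

    for (;;) {
        int ready = poll(fds, nfds, -1);     /* block until activity */
        if (ready < 0) { perror("poll"); return; }

        for (int i = 0; i < nfds && ready > 0; i++) {
            if (!(fds[i].revents & (POLLIN | POLLERR | POLLHUP)))
                continue;
            ready--;
            long n = read(fds[i].fd, buf, sizeof buf);
            if (n <= 0) {                    /* peer closed or error */
                close(fds[i].fd);
                fds[i].fd = -1;              /* poll() skips negative fds */
            } else {
                /* ... hand buf[0..n-1] to the IM logic ... */
            }
        }
    }
}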
Well, 300000 is not possible with TCP/IP. Your limit on one UNIX
system will be something like 48k. For 300000 or more, use
connectionless communication via UDP.
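A bare-bones sketch of that connectionless approach: one UDP socket,
no per-client descriptor at all, with clients told apart by the source
address of each datagram (the port number is just an example):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family      = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    sa.sin_port        = htons(5190);    /* example port */
    if (bind(fd, (struct sockaddr *)&sa, sizeof sa) < 0) { perror("bind"); return 1; }

    char buf[2048];
    struct sockaddr_in peer;
    socklen_t plen;
    for (;;) {
        plen = sizeof peer;
        long n = recvfrom(fd, buf, sizeof buf, 0,
                          (struct sockaddr *)&peer, &plen);
        if (n < 0) { perror("recvfrom"); break; }
        /* look up the client by peer address, handle the message,
           and sendto() replies or pushed IMs the same way */
    }
    return 0;
}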
--
================================================================
Aurel Balmosan | au...@xylo.owl.de, a...@orga.com
http://gaia.owl.de/~aurel/ |
================================================================
300,000 at one time? Please consider the implications:
If these 300k are active at *average* dialup rates (2.4 kbit/s
apiece), then you are talking about roughly 72 MByte/s of throughput
(arithmetic spelled out below). This is very hard to achieve on a
single x86 PC box:
a) Gigabit ethernet is at best 125 MB/s.
b) The PCI bus can only handle 133 MB/s flat-out.
c) Main memory read throughput is around 250 MB/s.
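To make the arithmetic explicit (the 2.4 kbit/s per-client rate is the
figure assumed above; the 72 MByte/s number lines up if you count
roughly 10 bits per byte of serial framing):

#include <stdio.h>

int main(void)
{
    double clients      = 300000.0;
    double bits_per_sec = 2400.0;                  /* 2.4 kbit/s each   */
    double total_bits   = clients * bits_per_sec;  /* 720,000,000 bit/s */

    printf("aggregate: %.0f Mbit/s\n", total_bits / 1e6);
    printf("  = %.0f MByte/s at 8 bits/byte\n", total_bits / 8.0 / 1e6);
    printf("  = %.0f MByte/s counting ~10 bits/byte of serial framing\n",
           total_bits / 10.0 / 1e6);
    return 0;
}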
Furthermore, you will need at least a 1 Gigabit/s connection to the
`net. The normal way is to have a cluster of PCs; I would recommend
at least 10, networked over 100baseTX to a high-speed router.
Now, if the 300,000 are not to have any sustained traffic at all,
that's a different story. Authentication is certainly reasonable, but
for that you don't need to keep TCP connections open: open-and-shut
them, or use UDP as another poster suggested.
-- Robert
There is no such limit.
> Will the performance decrease significantly when connections wave up to 300
> thousand or so?
You can't sustain 300,000 TCP connections on any operating system I
know of.
DS
"David Schwartz" <dav...@webmaster.com> wrote in message
news:39A30CB2...@webmaster.com...
>
> John Chen wrote:
> >
> > If the limit on the number of file descriptors in each process is
> > 1024, does it mean that each thread can hold 1024 connections?
>
> There is no such limit.
>
> > Will the performance decrease significantly when connections wave
> > up to 300 thousand or so?
>
Has anyone seen such a setup actually working? If yes, was it usable
with 100K open and active connections?
Maciej
> | I am building an IM server for Linux that will accept connections
> | and hold them open for async communication (very much the same as
> | AOL). I need some ideas about how to go about this.
...
> The limit is the number of file descriptors available, usually 1024
> or maybe 4096. That can be raised, though things could begin to get
> slow at that point, since I believe the descriptors are not searched
> using methods that scale well. OTOH, that many open connections could
> slow down the whole system even if they are spread over a number of
> processes. I'm just not sure, and hopefully a kernel guru will reply.
And don't forget to buy memory: every open TCP socket will consume
something from 8 KB to 64 KB (depending on your system defaults), even
more if window scaling is on.
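A small sketch of trimming the per-socket buffers so those defaults
don't eat all your RAM; 4 KB each way is an arbitrary example value,
the kernel may round or clamp it, and tiny buffers will of course cost
throughput. Call something like this on each accepted socket:

#include <stdio.h>
#include <sys/socket.h>

int shrink_buffers(int fd)
{
    int sz = 4096;   /* example: 4 KB send and receive buffers */

    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof sz) < 0 ||
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof sz) < 0) {
        perror("setsockopt(SO_RCVBUF/SO_SNDBUF)");
        return -1;
    }
    return 0;
}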
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Hans-Peter Huth Phone: +49 89 636 430 71 |
| Siemens AG, Dept. ZT IK 2 FAX: +49 89 636 51115 |
| Otto-Hahn-Ring 6 E-mail: Hans-Pe...@mchp.siemens.de|
|D-81730 Munich http://alpha.mchp.siemens.de (insiders only) |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IK 2 Internet Mobility. We're already in!
I think it can be done if you program everything in assembly.
However, this is a *nasty* job, because even the OS and the network
code would have to be written again (to gain maximum speed). And it
is unlikely that such an implementation would be very good, because
assembly becomes very opaque once it gets too large and complex. And
time must certainly not be an issue.
But I think, in theory, it can be done (if I can do it for a 3.57 MHz
8-bit Z80, it is certainly possible on a 32-bit (or is it 64-bit?)
AMD Athlon 1000). However, I also think it's best to divide the job
over several computers.
By the way, what terribly big system are you setting up if you expect
to handle up to 300,000 simultaneous connections???
~Grauw
--
>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<
email me: laur...@yahoo.com or ICQ: 10196372
visit my homepage at http://grauw.blehq.org/
>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<
"Laurens Holst" <laur...@geocities.com> wrote in message
news:8o2ll3$b9jea$1...@reader3.wxs.nl...
> > > Will the performance decrease significantly when connections wave
> > > up to 300 thousand or so?
...and this can be done with our nice LVS project for Linux, which
balances the connections across a cluster of real servers. Check it
out at http://www.linuxvirtualserver.org. We would be happy if you
could test it on your setup :)
Regards,
Roberto Nibali, ratz
Does someone have a reasonable estimate of the memory required per
connection to survive a routing glitch, or a set of extremely slow
client connections, where most of your connections will have sent a
full TCP window of data without getting an acknowledgment? Also, what
happens in Linux when this requirement exceeds available RAM/swap? I
think I have seen this situation crash an otherwise robust FreeBSD
box.
Les Mikesell
lesmi...@home.com
I think that could be as much as 64 KB per connection, and with large
windows enabled it could perhaps be even more; at 64 KB each, 300,000
connections could in the worst case tie up close to 20 GB of socket
buffer memory. But a newly opened TCP connection is supposed to start
with a much smaller window size and only expand it as it gets
acknowledgements. The problem would actually be worst on high-speed
lines and on lines with long latency, like satellite links.
I wouldn't be surprised if that could crash Linux. I have seen Linux
crash because too much memory was used for RAM disks; if the TCP
implementation is as bad as the RAM disk implementation, it would be
bad.
--
Kasper Dupont