
Linux server to hold thousands of tcp connections?


J R
Aug 19, 2000

I am building an IM server for Linux that will accept connections and
hold them open for asynchronous communication (much the same as AOL). I
need some ideas about how to go about this.

Should I use BSD sockets or go to a lower layer? What is the maximum
number of BSD sockets that could be held open at the same time on Linux?
How would I go to a lower layer if I wanted to? Resources? Any ideas
would be great.

joe r


phil-new...@ipal.net
Aug 20, 2000

If you can implement a TCP stack in your userland code, you could use
the ethertap facility to really get down low. I'm planning to use it
for a special network monitoring tool.

Otherwise I would suggest fanning the connections out over multiple
forked child processes that can (de-)multiplex the streams to the
parent or to each other as needed.

The limit is the number of file descriptors available, usually 1024 or
maybe 4096. This can be raised, but things could begin to get slow at
that point, since I believe the descriptors are not searched using
methods that scale well. On the other hand, that many open descriptors
could slow down the whole system even if they are spread over a number
of processes. I'm just not sure; hopefully a kernel guru will reply.

--
| Phil Howard - KA9WGN | My current websites: linuxhomepage.com, ham.org
| phil (at) ipal.net +----------------------------------------------------
| Dallas - Texas - USA | phil-evaluates-email...@ipal.net
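
The per-process descriptor limit Phil mentions can be queried and raised at
runtime. A minimal sketch in Python (assuming a Unix-like system with the
standard `resource` module; the 4096 target is just an example figure):

```python
import resource

# Query the current soft and hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# Raise the soft limit toward the hard limit; raising the hard limit
# itself requires root (or, on older kernels, a rebuild).
if hard == resource.RLIM_INFINITY:
    target = 4096
else:
    target = min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print("new soft limit:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

A process can only hold as many connections as this soft limit allows, so a
server expecting thousands of sockets has to raise it early, before
accepting connections.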

David Schwartz
Aug 20, 2000

J R wrote:
>
> I am building an IM server for Linux that will accept connections and
> hold them open for asynchronous communication (much the same as AOL). I
> need some ideas about how to go about this.
>
> Should I use BSD sockets or go to a lower layer? What is the maximum
> number of BSD sockets that could be held open at the same time on Linux?
> How would I go to a lower layer if I wanted to? Resources? Any ideas
> would be great.

My advice to you would be to have one thread 'poll' on every 1,024
connections.

DS
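
A minimal sketch of that pattern in Python (assuming `select.poll` is
available, i.e. a Linux/Unix system): one loop waits on many descriptors and
dispatches each ready one to a per-connection handler, so a single thread can
service its whole batch of connections.

```python
import select
import socket

def poll_once(poller, handlers, timeout_ms=1000):
    # Wait up to timeout_ms for events, then dispatch each ready
    # descriptor to its registered handler.
    for fd, event in poller.poll(timeout_ms):
        if event & (select.POLLIN | select.POLLHUP):
            handlers[fd](fd)

# Demonstration with a connected socket pair standing in for a client.
a, b = socket.socketpair()
received = []

poller = select.poll()
poller.register(b.fileno(), select.POLLIN)
handlers = {b.fileno(): lambda fd: received.append(b.recv(4096))}

a.sendall(b"hello")
poll_once(poller, handlers)
print(received)  # -> [b'hello']
a.close(); b.close()
```

A real server would run `poll_once` in a loop and register each accepted
connection with the poller; with multiple threads, each thread would own its
own poller and its own slice of the connections.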

John Chen
Aug 21, 2000

If the limit on the number of file descriptors in each process is 1024, does
that mean each thread can hold 1024 connections?
Will the performance decrease significantly when connections climb to 300
thousand or so?

Best regards!


Aurel Balmosan
Aug 22, 2000

In comp.os.linux.development.system John Chen <qch...@beijing.mot.com> wrote:
> If the limit number of file descriptors in each process is 1024, does it
> mean that each thread can hold 1024 connections?
> Will the performance decrease significantly when connections wave up to 300
> thousand or so?

> Best regards!

Well, 300,000 is not possible with TCP/IP. Your limit on one UNIX system will
be something like 48k. For 300,000 or more, use connectionless datagrams via
UDP.
--
================================================================
Aurel Balmosan | au...@xylo.owl.de, a...@orga.com
http://gaia.owl.de/~aurel/ |
================================================================
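
For what it's worth, the connectionless exchange Aurel suggests looks roughly
like this in Python (a sketch; the loopback address and the payloads are
arbitrary examples). Neither side holds a connection open, so no per-client
descriptor or TCP state accumulates on the server:

```python
import socket

# "Server" socket bound to an ephemeral port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

# A client fires off a datagram; no connection state is kept anywhere.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)

data, peer = server.recvfrom(2048)
server.sendto(b"pong", peer)      # reply to whoever sent the datagram
reply, _ = client.recvfrom(2048)
print(data, reply)  # -> b'ping' b'pong'
client.close(); server.close()
```

The trade-off is that the application must now handle loss, reordering, and
duplicate datagrams itself, which TCP would otherwise do.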

Robert Redelmeier
Aug 22, 2000

John Chen wrote:
>
> Will the performance decrease significantly when connections wave up
> to 300 thousand or so?

300,000 at one time? Please consider the implications:

If these 300k are active at *average* dialup speeds (2.4 kbit/s),
then you are talking about roughly 72 MByte/s of throughput (counting
about 10 bits per byte on the wire). This is very hard to achieve on a
single x86 PC box:

a) Gigabit Ethernet is at best 125 MB/s.
b) The PCI bus can only handle 133 MB/s flat-out.
c) Main memory read throughput is around 250 MB/s.

Furthermore, you will need at least a 1 Gigabit/s connection to the
`net. The normal way is to have a cluster of PCs; I would recommend at
least 10, networked over 100baseTX to a high-speed router.

Now, if the 300,000 are not to carry any sustained traffic at all,
that's a different story. Authentication is certainly reasonable, but
for that you don't need to keep TCP connections open. Open-and-shut,
or use UDP as another poster suggested.

-- Robert

David Schwartz
Aug 22, 2000

John Chen wrote:
>
> If the limit number of file descriptors in each process is 1024, does it
> mean that each thread can hold 1024 connections?

There is no such limit.

> Will the performance decrease significantly when connections wave up to 300
> thousand or so?

You can't sustain 300,000 TCP connections on any operating system I
know of.

DS

J R
Aug 22, 2000

I know for a fact that NetWare can support 100k tcp connections on a single
x86 box.

joe r

"David Schwartz" <dav...@webmaster.com> wrote in message
news:39A30CB2...@webmaster.com...
> John Chen wrote:
> >
> > If the limit number of file descriptors in each process is 1024, does it
> > mean that each thread can hold 1024 connections?
>
> There is no such limit.
>
> > Will the performance decrease significantly when connections wave up to
> > 300 thousand or so?

Maciej Golebiewski
Aug 23, 2000

> I know for a fact that NetWare can support 100k tcp connections on a single
> x86 box.

Has anyone seen it working? If yes, was it usable with 100K open and active
connections?

Maciej

Hans-Pe...@mchp.siemens.de
Aug 24, 2000

In comp.protocols.tcp-ip phil-new...@ipal.net wrote:

> In comp.os.linux.development.system J R <j...@j.com> wrote:

> | I am building an IM server for LINUX that will accept connections and
> | hold them open for asyn communication (very much the same as AOL). I
> | need some ideas about how to go about this.

...


> The limit is the number of file descriptors available, usually 1024 or
> maybe 4096, but this can be changed though things could begin to get
> slow at that point since I believe they are not searched using methods
> that are good at that scale. OTOH, that many open could be slow on the
> whole system even if spread over a number of processes. I'm just not
> sure and hopefully a kernel guru will reply.

And don't forget to buy memory. Every open TCP socket will consume
somewhere from 8 KB to 64 KB (depending on your system defaults), even
more if window scaling is on.

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Hans-Peter Huth Phone: +49 89 636 430 71 |
| Siemens AG, Dept. ZT IK 2 FAX: +49 89 636 51115 |
| Otto-Hahn-Ring 6 E-mail: Hans-Pe...@mchp.siemens.de|
|D-81730 Munich http://alpha.mchp.siemens.de (insiders only) |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IK 2 Internet Mobility. We're already in!
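
The per-socket buffer memory Hans-Peter describes can be inspected, and
shrunk for mostly-idle connections, through the SO_SNDBUF/SO_RCVBUF socket
options. A sketch in Python (the 8 KB request is just an example; the kernel
may round the value, and Linux typically doubles it for bookkeeping):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Default buffer sizes; this is roughly the per-socket memory cost.
default_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
default_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("default rcv/snd:", default_rcv, default_snd)

# Request smaller buffers for connections that carry little traffic.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8192)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8192)
shrunk = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("shrunk rcv:", shrunk)
s.close()
```

With tens of thousands of mostly-idle IM connections, trimming these buffers
is one of the few levers for keeping total socket memory within RAM.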

Laurens Holst
Aug 24, 2000

> > Will the performance decrease significantly when connections wave up to
> > 300 thousand or so?
>
> You can't sustain 300,000 TCP connections on any operating system I
> know of.

I think it could be done if you program everything in assembly. However,
that is a *nasty* job, since even the OS and the network stack would have to
be rewritten (to gain maximum speed). And the result is unlikely to be very
good, because assembly becomes very opaque once it grows large and complex.
And time must certainly not be an issue.

But I think that, in theory, it can be done (if I can do it on a 3.57 MHz
8-bit Z80, it is certainly possible on a 32-bit (or is it 64-bit?) AMD
Athlon 1000). However, I also think it's best to divide the job over several
computers.

By the way, what terribly big system are you setting up if you expect to
handle up to 300,000 simultaneous connections?


~Grauw


--
>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<
email me: laur...@yahoo.com or ICQ: 10196372
visit my homepage at http://grauw.blehq.org/
>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<

Dan Harpold
Aug 24, 2000

If you are hosting 300,000 active sessions on one box, you are taking a big
risk. You should spread that load across multiple boxes. Even if you _are_
able to spend the time to get it to work, one hardware failure will drop all
300,000 connections. You need some load balancing and redundancy.


"Laurens Holst" <laur...@geocities.com> wrote in message
news:8o2ll3$b9jea$1...@reader3.wxs.nl...
> > > Will the performance decrease significantly when connections wave up to
> > > 300 thousand or so?
Ratz
Aug 25, 2000

Dan Harpold wrote:
>
> If you are hosting 300,000 active sessions on one box, you are taking a big
> risk. You must spread that load across multiple boxes. Just think if you
> _are_ able to spend the time to get it to work, one hardware failure will
> crash all 300,000 connections. You need to have some load balancing and
> redundancy.

...and this can be done with our nice LVS project for Linux. Check it
out at http://www.linuxvirtualserver.org. We would be happy if you could
test it on your setup :)

Regards,
Roberto Nibali, ratz

Les Mikesell
Sep 9, 2000

"Dan Kegel" <da...@alumni.caltech.edu> wrote in message
news:39B47110...@alumni.caltech.edu...

>
> > What are the max # of
> > BSD sockets that could probably be held open at the same time on LINUX?
>
> Somewhere between 10,000 and 100,000 with the 2.4 kernel.
> I suggest starting out with a single-threaded model using
> poll(). You may never need to add additional threads; since
> you won't be doing any disk I/O or slow computations, nothing
> should block long enough for you to benefit from multiple threads.

Does someone have a reasonable estimate of the memory required per
connection to survive a routing glitch, or a set of extremely slow
client connections, where most of your connections will have sent a
full TCP window of data without getting an acknowledgment? Also, what
happens in Linux when this requirement exceeds available RAM/swap? I
think I have seen this situation crash an otherwise robust FreeBSD box.

Les Mikesell
lesmi...@home.com

Kasper Dupont
Sep 11, 2000

Les Mikesell wrote:
[...]

>
> Does someone have a reasonable estimate of the memory required per
> connection to survive a routing glitch or a set of extremely slow
> client connections where most of your connections will have sent
> a full TCP window of data without getting an acknowledgment. Also,
> what happens in Linux when this requirement exceeds available
> RAM/swap? I think I have seen this situation crash an otherwise robust
> freebsd box.
>
> Les Mikesell
> lesmi...@home.com

I think that could be as much as 64 KB per connection; with large
windows enabled it could perhaps be even more. But a newly opened TCP
connection is supposed to start with a much smaller window size and
only expand it as it gets acknowledgements. The problem would actually
be worst on high-speed lines and lines with long latency, like
satellite links.

I wouldn't be surprised if that could crash Linux. I have seen Linux
crash because too much memory was used for RAM disks; if the TCP
implementation is as bad as the RAM disk implementation, it would be bad.

--
Kasper Dupont
