It's true that DXSock currently outperforms Indy as a TCP server.
I doubt that the real world advantage is this significant.
There's a lot more to a socket library than how many connections
it will accept. Whether the fact that DXSock accepts more
concurrent connections than Indy makes it "better" will depend
on your requirements.
Grahame
Indy Pit Crew
thanks
herbert schuster
Probably someone should read something about threading and what some OSes
can do and what they cannot.
Regards,
Gregor
> Indy can handle way more than 200.
Can you put a number on that claim ?
Alex
>Can you put a number on that claim ?
I've had over a 1000 simultaneous HTTP connections (from localhost to
localhost) using TIdHTTP and TIdHTTPServer.
Actual HTTP Performance isn't very good like that.
What kind of numbers do you want?
Grahame
Indy Pit Crew
> >Can you put a number on that claim ?
>
> I've had over a 1000 simultaneous HTTP connections (from localhost to
> localhost) using TIdHTTP and TIdHTTPServer.
Thank you for the information.
> What kind of numbers do you want?
>
As stated in the original message: "can handle more than 50,000
concurrent connections from a single server" ...
Alex
The problem is that numbers like this really don't say much, as they vary
so much case by case. Environment, implementation, organization... these
are all variables that alter the maximum possible number of connections. I have no
doubt that DXSock is better than Indy at some things, but perhaps Indy
is better at others. Getting numbers to quantify this is really difficult.
I can state from personal experience, though, that Indy is ready for
primetime and is plenty of server for anything I've run across.
--
Jason Southwell
President & CEO
Arcana Technologies
And I've pushed it even higher in tests. Indy has limits and we admit that.
But to say that it has a limit of 200 is irresponsible, and offensive.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Want more Indy stuff? Try the Atozed Indy Portal at
http://www.atozedsoftware.com/
* More Free Demos
* Free Articles
* Extra Support
ELKNews - Get your free copy at http://www.atozedsoftware.com
Might want to check the TCP/IP FAQs. You'll find that most versions of MS's
server operating systems cannot even handle that. The one that can (Win2K,
IIRC) will be severely strained itself.
If you are hitting 50,000 concurrent connections, unless they are really low
traffic like a time server, you need to be moving to multiple boxes anyway,
because either the OS will start to have trouble or the network card won't
have enough bandwidth.
And I don't see big benchmarks about "World's fastest time server", so I'm
guessing it's not a huge market.
But again, if your goal is to show statistics, then I can see they might be
useful.
I just did a quick calculation, and unless I made a mistake, 50,000
connections on a 100 Mbit card would mean 250 bytes a second per connection.
So again, if you want to have the speed of a 2400 baud modem, I guess it
might be useful.
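For anyone who wants to check that arithmetic, here is a quick back-of-envelope sketch (Python used purely as a calculator; the figures are taken from the paragraph above):

```python
# 100 Mbit NIC shared evenly across 50,000 concurrent connections.
link_bits_per_sec = 100 * 1000 * 1000        # 100 Mbit/s
link_bytes_per_sec = link_bits_per_sec // 8  # 12,500,000 bytes/s

connections = 50000
per_connection = link_bytes_per_sec / connections
print(per_connection)  # 250.0 bytes/s per connection, ignoring all overhead
```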
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Need extra help with an Indy problem?
http://www.atozedsoftware.com/indy/support/
That's not quite what I meant.
DXSock can handle 50,000 connections. But is that a usable
number? how was it generated? Were the connections in question
doing anything? Is this a real world number? The problem with
all these numbers is that there's no clear real world criteria. My
requirements are different from yours.
Maybe you do have an application where the outright number of
dormant connections is the major criterion of a network library's
performance. I've never seen one before.
If you post some specific criteria about connections, then we
have something to go on. My 1000 connections were sitting
in a loop retrieving a single page. I got to about 100 pages/sec
before maxing out at about 15 connections, and adding connections
only slowed things down. If we're going to benchmark, then as
a network library user, the benchmark that matters to me is:
* all http requests channelled to a single procedure
function GetPage(AUrl: string): String;
begin
  Result := 'This is a page from ' + AUrl;
end;
* a graph showing the throughput of pages sent to the client
per second on the Y Axis, against concurrent connections
on the left,
* along with a report providing client and server code, along
with performance characteristics of computers and networks
* and packet trace for the duration of the tests
Not that I'd bother to look at all that - but if it's worth comparing,
then it's worth comparing properly ;-)
Grahame
Indy Pit Crew
> If you post some specific criteria about connections, then we
> have something to go on.
An echo server that receives a 1k message from a client every 20
seconds or so.
Alex
They didn't provide exact numbers.
There is a question then: who said that UNKNOWN(!!!) number that is greater
than 200 is less than UNKNOWN(!!!) number that is greater than 50,000?
In other words, if X > 200 and Y > 50000 then which one (X or Y) is greater?
Answer: it depends on exact values of X and Y. :)
Ok, I've found an exact value of Y. It's 799.
1. It's far less than promised 50000.
2. It still says nothing about X.
--
Regards
Illya Kysil, software developer
Delphi/C/C++/C#/Java/Forth/Assembler
If it is NOT SOURCE, it is NOT SOFTWARE. (C) NASA
I wonder whether they could make that secure ;-)
Grahame
> They didn't provide exact numbers.
> There is a question then: who said that UNKNOWN(!!!) number that is greater
> than 200 is less than UNKNOWN(!!!) number that is greater than 50,000?
> In other words, if X > 200 and Y > 50000 then which one (X or Y) is greater?
> Answer: it depends on exact values of X and Y. :)
Laugh and relax.
Sounds a bit strange... At least my machine runs out of sockets way
before that number of connections can be achieved.
--
Markku Uttula
URL: http://www.disconova.com/utu/ "Are you hot? Or at least cute?"
MAIL: markku...@disconova.com "If not, are you at least easy?"
<g> Given that they managed to let buffer overflows make their way into the
simple TCP/IP services, I wouldn't be surprised if they had a surprise or
two in reserve for us there :)
Good luck,
Stephane
I believe that it is 50,000 requests per second, not necessarily concurrent
connections. I also believe that the 200 number for Indy was rps for
Winshoes.
DXSock works pretty well with a dual DS3 routing to a gigabit internal
processing network.
Although it was in early May when I evaluated it, I believe that Indy 9 has
a multi-threaded-connection demo showing how many concurrent connections
that Indy was servicing. I had it up to about 750 concurrent connections
that worked pretty well. However, as I incremented it beyond that, the demo
application displayed that it was dropping connections. I no longer remember
the name of the demo, but it's not in the Indy 8 release.
With DXSock I routinely have over 1,000 concurrent connections.
When you have a large number of threads it is critical that the stack size
be downsized accordingly. One of my applications typically runs over 5000
threads. The threads were designed to operate mostly in a blocked-wait
state, and utilization is pretty evenly distributed across processors.
Regards,
Jeff Crump
I've run it on my development machine PII-400(1 CPU)/512MB RAM/Windows XP Pro.
The very first results:
1) it can create only 2021 threads per process(!!!);
2) it takes almost 40-50 seconds to execute the loop
(i.e. WinXP can't create more than ~1000 threads per second);
3) the CPU is 100% busy and is totally wasted.
IMHO the results speak for themselves.
program MaxThreads;
{$APPTYPE CONSOLE}

uses
  Windows, SysUtils;

function ThreadProc(Param: Pointer): Cardinal; stdcall;
begin
  Sleep(10000);
  Result := 0;
end;

const
  MaxThreadCount = 50000;

var
  ThreadId: DWORD;
  StartTime: TDateTime;
  I: Integer;

begin
  StartTime := Now;
  for I := 0 to MaxThreadCount - 1 do
  begin
    // The returned thread handle is deliberately ignored here;
    // a real application should CloseHandle it.
    CreateThread(nil, 1024, @ThreadProc, nil, 0, ThreadId);
  end;
  WriteLn(Format('Processing time (msecs): %f',
    [(Now - StartTime) * MSecsPerDay]));
  ReadLn;
end.
PS: DO NOT EVER RUN THIS PROGRAM FROM DELPHI!
You must change the project's stack size downward. Delphi creates each thread
with the same stack size as the application.
> 2) it takes almost 40-50 seconds to execute the loop
> (i.e. WinXP can't create more than ~1000 threads per second);
The default thread size is 1MB, so 2000+ threads puts the application past
the 2GB memory limit.
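The arithmetic, for anyone checking (Python used purely as a calculator; the 2GB figure is the per-process user address space on 32-bit Windows):

```python
# Address space reserved by default-sized 1 MB thread stacks.
MB = 1024 * 1024
default_stack = 1 * MB   # Delphi reserves the EXE's stack size per thread
threads = 2000

reserved = threads * default_stack
print(reserved / (1024 ** 3))  # ~1.95 GB, right at the 2 GB user limit
```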
> 3) CPU is 100% busy and is totally wasted.
> IMHO the results are saying for themselfs.
Please change the project stack size to $10000 or lower and it will work
fine.
You can't get much past 100 under the Delphi debugger - it gets slower
and slower and eventually you access violate. This is an issue with the
debugger; it happens in any threaded code.
Remarkably enough, this is 200 threads when the single process is
both client and server. Perhaps this is where the 200 number comes
from.
We know that the Indy server kernel is 1 thread per connection, and
therefore none of this is a surprise. And yes, you will get better numbers
out of other designs. I'm doubtful that it matters in the real world.
Personally, I run a number of production web and soap servers using
TIdHTTPServer (though they usually identify themselves as IIS), and
I haven't had any problems. My highest hit rate isn't very high though -
only about 10/sec.
Grahame
"Alex Brainman" <brai...@sussan.com.au> wrote in message
news:3da1...@newsgroups.borland.com...
> Please change the project stack size to $10000 or lower and it will work
> fine.
You can't make it lower than $10000.
I've set it to $10000. And what?
1. Ok, I can create more than 8000 threads.
But then my OS said that it can't save data to C:\$Mft, i.e. the whole system was blown up.
2. Anyway, it takes 70 seconds to execute the loop.
> You can't make it lower than $10000.
Agreed.
> I've set it to $10000. And what?
> 1. Ok, I can create more than 8000 threads.
> But then my OS said that it can't save data to C:\$Mft, i.e. the whole
> system was blown up.
I don't have that problem on Win2000 Pro or Win2000 Server.
> 2. Anyway, it takes 70 seconds to execute the loop.
>
What's setup time compared against service time? When I dynamically create a
datamodule I prepare all queries in advance; it takes longer on the front
end, less time during execution (For BDE components anyway)
10 connections per second is still a rather large number. In fact, it
sums up to 850k+ connections per day. Now, if that's the mean connection
count per second, and considering that a typical connection lasts 8 seconds
(which is pretty long for a connection unless you're handling large
files), that leaves us with more than 100k hits per day, which is the
maximum number of connections recommended by MS for a typical IIS server
(it's also the threshold of the "high volume" server setting in IIS).
I'm still impressed :)
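The daily figure is easy to check (Python used purely as a calculator, with the 10/sec rate quoted above):

```python
# Mean connection rate extrapolated over a day.
per_sec = 10
per_day = per_sec * 24 * 60 * 60
print(per_day)  # 864000, i.e. the 850k+ connections per day mentioned above
```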
Good luck,
Stephane
function ThreadProc(Param: Pointer): Cardinal; stdcall;
begin
  Sleep(10000);
  Result := 0;
end;
...
TC := MaxThreadCount;
while TC > 0 do
begin
  // 16K thread stack instead of the 1MB default
  if CreateThread(nil, $4000, @ThreadProc, nil, CREATE_SUSPENDED, ThreadId) <> 0 then
  begin
    Inc(ThreadCount);
    Dec(TC);
  end
  else
  begin
    Sleep(1); // creation failed; back off briefly and retry
  end;
end;
...
...
I have no desire to get into it with you. You win.
DXSock works for me. Indy works for you.
Jeff Crump
>It seems I've built a very simple test to prove DXSock advertising to be a dirty lie.
>This program creates 50000 threads which do almost nothing - a sleep for 10 (ten) seconds
>ONLY.
pmfji, I have no experience whatsoever with DXSock, but from what I
have read, they use completion ports for socket communication.
This is the best way to create scalable servers in Windows.
From:
http://msdn.microsoft.com/msdnmag/issues/1000/Winsock/Winsock.asp
::The overlapped I/O mechanism in Win32® allows an application to initiate an operation and receive notification of its completion later.
::This is especially useful for operations that take a long time to complete. The thread that initiates the overlapped operation is then free
::to do other things while the overlapped request completes behind the scenes.
It seems that a thread can be used for more than one socket
connection. It could be that your test is therefore not representative
of DXSock.
just my 2 cents,
Martijn Brinkers
Completion ports won't overcome the bandwidth or socket limitations discussed
here, only the threading issues.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Want to keep up to date with Indy?
Join Indy News - it's free!
http://www.atozedsoftware.com/indy/news/
Martijn Brinkers
> ok, code appended below.
Does that mean I get the code to look at? Because I don't see
anything ...
> on my notebook (1GHz) 1000 connections
> for this is only 5% utilization. We don't get beyond 1000
> connections. I'll have to have more of a look but the primary thread
> just hangs a little short of 1000 client threads created. I don't know why.
So, for my example, the "1000 connections" is the number ?
Alex
> > An echo server that receives a 1k message from a client every 20
> > seconds or so.
>
> Yes, and I see a big market for echo servers. ...
If you're *really* interested, I'll change it a bit: a message server
where clients send a 1k text message to "a selected one/group or all
currently connected" clients every 20 seconds. I hope my new proposal
better satisfies your "real life" criteria.
Alex
huh? I'll append it (again?) (btw, it's real rough - I had about 20min to
knock it up, and I had imagined I'd be testing fewer threads than I turned
out to be testing)
> > on my notebook (1GHz) 1000 connections
> > for this is only 5% utilization. We don't get beyond 1000
> > connections. I'll have to have more of a look but the primary thread
> > just hangs a little short of 1000 client threads created. I don't know why.
>
> So, for my example, the "1000 connections" is the number ?
this was with client and server in the same process. Since the
limit appeared to be process/thread related, you can call it around
2000 server side connections. This morning I tried again and it
went to about 4000 before it all went belly up. Presumably this
is starting to run out of sockets? I didn't have time to debug it.
I would've thought that 2000 would be enough for a real system?
proviso: this was localhost only, so it wasn't running across
an actual card/network. I assume that this will stress the code
more.
Grahame
program Project1;
{$APPTYPE CONSOLE}

uses
  SysUtils,
  SyncObjs,
  IdTCPServer,
  IdTCPClient,
  Windows;

type
  THelper = class(TObject)
  private
    procedure ServerExecute(AThread: TIdPeerThread);
  end;

  TStats = record
    Loops: cardinal;
    Wait: TDateTime;
  end;

{ THelper }

procedure THelper.ServerExecute(AThread: TIdPeerThread);
var
  s: string;
begin
  repeat
    s := AThread.Connection.ReadLn;
    write('.');
    AThread.Connection.WriteLn(s);
  until not AThread.Connection.Connected;
end;

var
  GServer: TIdTCPServer;
  GHelper: THelper;
  GLock: TCriticalSection;
  GStopClients: boolean;
  GClientCount: cardinal;
  GClientStats: array of TStats;

procedure StartServer;
begin
  GClientCount := 0;
  GStopClients := false;
  GLock := TCriticalSection.Create;
  GHelper := THelper.Create;
  GServer := TIdTCPServer.Create(nil);
  GServer.DefaultPort := 23455;
  GServer.OnExecute := GHelper.ServerExecute;
  GServer.Active := true;
end;

function Client(Parameter: Pointer): Integer;
var
  s1, s2: string;
  i: integer;
  LClient: TIdTCPClient;
  LId: cardinal;
  LStart: TDateTime;
begin
  GLock.Enter;
  try
    inc(GClientCount);
    SetLength(GClientStats, GClientCount);
    LId := GClientCount - 1;
    GClientStats[LId].Loops := 0;
    GClientStats[LId].Wait := 0;
  finally
    GLock.Leave;
  end;
  SetLength(s1, 1024);
  for i := 1 to 1024 do
    s1[i] := chr((i mod 64) + 32);
  LClient := TIdTCPClient.Create(nil);
  try
    LClient.Host := '127.0.0.1';
    LClient.Port := 23455;
    LClient.Connect;
    repeat
      LStart := now;
      LClient.WriteLn(s1);
      s2 := LClient.ReadLn;
      if s1 <> s2 then
      begin
        try
          raise exception.Create('returned wrong data');
        except
          // suppress
        end;
      end;
      GLock.Enter;
      try
        inc(GClientStats[LId].Loops);
        GClientStats[LId].Wait := GClientStats[LId].Wait + (now - LStart);
      finally
        GLock.Leave;
      end;
      for i := 0 to 20 do
      begin
        if GStopClients then break;
        sleep(1000);
      end;
    until GStopClients;
  finally
    FreeAndNil(LClient);
  end;
  GLock.Enter;
  try
    dec(GClientCount);
  finally
    GLock.Leave;
  end;
end;

procedure StartClient;
var
  LDummy: Cardinal;
begin
  CloseHandle(BeginThread(nil, 8192, @Client, nil, 0, LDummy));
  sleep(random(100));
end;

procedure StopClients;
var
  LDone: boolean;
begin
  GStopClients := true;
  repeat
    sleep(50);
    GLock.Enter;
    try
      LDone := GClientCount = 0;
    finally
      GLock.Leave;
    end;
  until LDone;
end;

procedure StopServer;
begin
  FreeAndNil(GServer);
  FreeAndNil(GHelper);
  FreeAndNil(GLock);
end;

procedure DisplayResults;
var
  i: integer;
  LLoops: cardinal;
  LWait: TDateTime;
begin
  writeln(inttostr(Length(GClientStats)) + ' threads');
  LLoops := 0;
  LWait := 0;
  for i := Low(GClientStats) to High(GClientStats) do
  begin
    LLoops := LLoops + GClientStats[i].Loops;
    LWait := LWait + GClientStats[i].Wait;
  end;
  writeln('Average Loop count per thread: ',
    (LLoops / Length(GClientStats)):4:4);
  writeln('Average delay (ms): ',
    (LWait * (24 * 60 * 60 * 1000) / Length(GClientStats)):4:4);
end;

var
  s: string;
  i: integer;
begin
  IsMultiThread := true;
  Write('Enter number of threads:');
  readln(s);
  StartServer;
  for i := 1 to StrToInt(s) do
    StartClient;
  writeln('Press any key to stop');
  readln;
  StopClients;
  StopServer;
  DisplayResults;
  writeln('Press any key to close');
  readln;
end.
> ... I'll append it (again?) ...
Got it this time. Thank you.
> I would've thought that 2000 would be enough for a real system?
So, 2000 that is.
Alex
Correct me if I'm wrong. I've tried your program and, looking at Task
Manager, I can see it tops out at 1978 threads, regardless of the
requested number. So, I presume, some threads/connections just
disappear into oblivion without an error message or anything. So,
with your assumption about client and server being in one process, it
puts us at 1978 server connections tops.
Alex
Yes and no. Firstly, on my system yesterday it was 1976. Today it was
around 4700. The limit isn't server connections, it's threads. But if we
get to 4700, then we're starting to get to the system limit for concurrent
TCP/IP connections?
Grahame
There are a few protocols like this, but they are quite rare. If you look at
all the protocols (mail, news, http, and so on), none of the major protocols
has this need.
The only protocols that might need this are things like ICQ, stock quotes,
etc... and those should be UDP because of their needs. So the need you
demonstrate should be UDP anyway....
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Need extra help with an Indy problem?
http://www.atozedsoftware.com/indy/support/
What else is your computer doing? How much RAM does it have? How fast is it?
Etc. (I don't really want to know)
>
> > The limit isn't server connections, it's threads.
>
> I don't care about threads, we were talking about "client
> connections", threads or no threads.
In Indy, one connection = one thread. Two threads if the client connection
and the server connection are on the same machine and you don't share the
client connections in just a few (or one) thread.
Programming a server that serves multiple connections in a single thread is
possible, and sometimes done, but it really makes programming it a pain. I
work with a similar event model where a single thread serves multiple
events sequentially. It means the programmer has to break the task for each
connection down into small units, so that they can be cycled through and
no thread waiting at the end times out. Basically, it is putting the
threading at an application-programmer level, rather than an OS or compiler
level, and this is the wrong place to put such complexity, IMO.
My point, so inarticulately made, is that the whole premise "product xyz can
do 999999999 concurrent connections, whereas product abc can only do 9!" is
not a qualified statement! For some of your programs DXSocks might be
suitable, for others Indy. You can even pick and mix them in the same
product! There is no simple answer because the only person who knows your
requirements is you... and one of your requirements might be cost ;-)
Will.
> There are a few such like this, but they are quite rare. If you look
> at all the protocols, mail, news, http, and so on, none of the major
> protocols has this need.
>
And how often do we get to write mail, news, or http servers?
> The only protocols that might need this are things like ICQ, stock
> quotes, etc... and those should be UDP because of needs. So the need
> you demonstrate, should be UDP anyways....
>
I guess you know better what I should do. But I was looking for an
answer to my question ...
Alex
who knows? I don't
> > The limit isn't server connections, it's threads.
>
> I don't care about threads, we were talking about "client
> connections", threads or no threads.
Well, I know you don't care about threads. But under
the current design of Indy, it would appear that the number
of connections that can be held open at a given time is
limited by the effective system thread limit, whatever that
is. Unless under some circumstances this exceeds the
number of connections that TCP/IP can support (another
branch of this thread suggests this is around 4k?)
So I reiterate my point. The limit is a system thread limit.
This isn't the simple story you appear to be looking for, but I
can't do anything about that
Grahame
Not in the version I have (circa July 2001), and from what I understand not
in 3 (Disclaimer: but I could be wrong).
DXSock is a pretty good product, but I moved to my own code so as to implement
overlapped IO on multiple sockets per thread. I didn't go fully completion
port as you described; I just used overlapped reads and writes and a few
other tricks, squeezing 62 connections per thread.
But these methods are not a panacea, there are serious considerations beyond
just how you read and write to sockets. The thread method with all its
disadvantages at least allows you to encapsulate some possibly time
consuming operations per thread without excluding other threads (ok, you
might limit resource access, like to a database say, but that's more
resource management).
With n-per-thread architectures, you really need all your disk reads and
writes to be overlapped as well, and you can't do that when you're using
your favourite local database to validate users. If you do access it within
the thread, you potentially exclude the other 61 connections for a critical
number of ms when they should be doing something else. 62 connections
validating users and every connection suffers.
Not to mention how complicated it is writing architectures like this. My
code now handles multiple server services and clients in the same overlapped
engine, but that came after much sweat and tears. Unfortunately, the nature
of it is that it does affect your secondary code as well, notably on disk
access that now needs to be overlapped too, and on any blocking calls that
would have existed in the threaded version.
Yes there are a number of ways to address all of this, but AFAIK neither
Indy nor DXSock think that way yet. IMO however, it would be very difficult
to write a component based framework and expect people to start using it
without really understanding the issues that overlapped/completion ports
bring about.
My 2 Euro Cent ;)
D.
Exactly. But Indy 10 takes care of this - you can have 1 thread, multiple
connections, but program it just like Indy 9.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Qualified help FAST with Indy Experts Support
from the experts themselves:
I've stated this several times in the past, but that limit is pretty high,
and high enough for over 98% of the needs.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Want more Indy stuff? Try the Atozed Indy Portal at
http://www.atozedsoftware.com/
* More Free Demos
* Free Articles
* Extra Support
HTTP servers are one of the most implemented things, followed by mail servers.
And the other protocols implemented typically follow the pattern of mail, news,
etc.. and how their commands and structures work.
> I guess you know better what I should do. But I was looking for an
> answer to my question ...
Maybe I missed it, can you restate it?
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Want to keep up to date with Indy?
Join Indy News - it's free!
http://www.atozedsoftware.com/indy/news/
ELKNews - Get your free copy at http://www.atozedsoftware.com
Indy 10 might interest you. :)
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Need extra help with an Indy problem?
http://www.atozedsoftware.com/indy/support/
ELKNews - Get your free copy at http://www.atozedsoftware.com
It's actually pretty easy to use. I wouldn't recommend trying to read the core
that handles it, but the user end is quite easy.
> support - although you guys do deserve credit for the amount of support you
> give to Indy here.
Thanks.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Want to keep up to date with Indy?
Join Indy News - it's free!
http://www.atozedsoftware.com/indy/news/
ELKNews - Get your free copy at http://www.atozedsoftware.com
Well, not since version 7, I think. See:
Text file explaining the new protocol
http://www.rejetto.com/icq/data/ICQv7proto.zip
Quote:
>These notes are about the protocol used by ICQ 2000+ for client-server
>communication. Since version 2000 the protocol drastically changed. It was
>over UDP, and now it is over TCP.
Martijn Brinkers
> HTTP servers are one of the most implemented things, followed by
> mail servers.
For educational purposes, you mean <g>?
> Maybe I missed it, can you restate it?
You're free to reread the thread you're replying to.
Alex
> The server will get a number from the client, wait up to 2 seconds and
> send back the modulo 128 of the number it received. The client sends its
> thread handle as the number. Notifications to the main thread (for
> statistics) are done exclusively with
... interlocked variables.
Good luck,
Stephane
I ran tests on how many connections Indy 9 could handle per second, and
things break down REAL fast, because I ran into the "Socket Error 10048,
Address already in use" issue. This is in no way Indy's fault, but the test
becomes kind of pointless. Say I can achieve 200 connects per second; that
means I have (5000-1024)/200 ~= 20 seconds before I run out of sockets.
I did get much higher numbers than 200 connects per second, but it started
dropping connections for some reason, probably due to some error on my part.
After each run, I had to wait for 4 minutes for the sockets to be released :-(
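The arithmetic behind that ~20 seconds, spelled out (Python used purely as a calculator; 1025-5000 is the default Windows ephemeral client port range):

```python
# Ephemeral client ports vs. connect rate.
usable_ports = 5000 - 1024     # default Windows ephemeral range is 1025-5000
connects_per_sec = 200

seconds_until_exhausted = usable_ports / connects_per_sec
print(seconds_until_exhausted)  # 19.88 seconds, the ~20 s quoted above

# And with TIME_WAIT holding each port for ~240 s, sustaining this rate
# would need 200 * 240 = 48,000 ports, far more than are available.
```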
cheers,
m
Thread pooling probably won't improve these numbers since it is only
effective for HTTP-type servers where you have a large number of clients
but where the average session time is small.
I'll do the test with the new Indy 10 code, but you will have to wait for
that a bit since it's not yet completely ready.
> I ran tests on how many connections Indy 9 could handle per second, and
> things break down REAL fast, because I ran into the "Socket Error 10048,
> Address already in use" issue. This is in no way Indy's fault, but the
> test becomes kind of pointless. Say I can achieve 200 connects per second;
> that means I have (5000-1024)/200 ~= 20 seconds before I run out of sockets.
That's strange... It is actually a message that only happens when you try
to listen on a socket to a local port. It shouldn't happen in clients...
> I did get much higher numbers than 200 connects per second, but it started
> dropping connections for some reason, probably due to some error on my
> part.
>
> After each run, I had to wait for 4 minutes, to release the sockets :-(
Yeah, that's a problem with some TCP stack for server sockets. Are you sure
you didn't use a new server component for each connection ?
Good luck,
Stephane
Yeah, you're of course right.
> I'll do the test with the new Indy 10 code, but you will have to wait for
> that a bit since it's not yet completely ready.
Just say when!
> > I ran tests on how many connections Indy 9 could handle per second, and
> > things break down REAL fast, because I ran into the "Socket Error 10048,
> > Address already in use" issue. This is in no way Indy's fault, but the
> > test becomes kind of pointless. Say I can achieve 200 connects per
> > second, that means I have (5000-1024)/200 ~= 20 seconds before I run
> > out of sockets.
>
> That's strange... It is actually a message that only happens when you try
> to listen on a socket to a local port. It shouldn't happen in clients...
I could send you the code. After extensive running, there are a lot of
TIME_WAIT ports in the netstat list, even on the client computer. I'm not
sure why that is. As you say, client sockets shouldn't go into TIME_WAIT
(and cause Socket Error 10048).
> > After each run, I had to wait for 4 minutes, to release the sockets :-(
>
> Yeah, that's a problem with some TCP stack for server sockets. Are you
sure
> you didn't use a new server component for each connection ?
Yep.
cheers,
m
I think you mean borland.public.attachment (no .delphi)
cheers,
m
Notes:
1/ Do NOT toy with the client too much. It hasn't been foolproofed and
playing with the buttons will definitely have "unexpected results".
2/ I haven't placed too much intelligence in the server statistics, in
particular in the exception handling.
3/ For more accurate results, run the client and server on different machines.
Feel free to post comments on the code or on the results you encountered
(good or bad).
Good luck,
Stephane
Right, right...
Good luck,
Stephane
> I've ran some TCP tests yesterday and ...
The man I was looking for!
> I'll post the source of the client and server in the attachment NGs
> later today.
Will, certainly, check it out.
> ... Feel free to toy with it but don't push too many buttons at once:
> it's not built to be foolproof and you can probably get improper answers
> if you start doing more than one test at once.
Noted.
Thanks again.
Alex
> >> Can you put a number on that claim ?
>
> Stephane is posting details using DEFAULT Indy management. No pooling,
> no nothing. So the "minimum ceiling". His tests correspond with what I
> have always stated.
So, Stephane's findings should give me the answer. I just don't
understand why it is qualified as "minimum ceiling"; the whole
point of my question was "max number of client connections". If you
have any corrections to his code to improve the numbers, please, let
me know. I'm after "best possible", not DEFAULT, not "minimum ceiling"
...
> This type is in the 2% I've talked about and usually only needed for
> chat servers, etc... Most uses use UDP for this, with the exception of
> most chat servers.
So you've said it before. I'll remember it now.
Alex
> Feel free to post comments on the code or on the results you
> encountered (good or bad).
I'm using:
- Delphi 5 (build 6.18) Update Pack 1
- ftp://indyten:ind...@ftp.nevrona.com
- Windows 2000 (Build 2195; Service Pack 3)
- 512M of RAM
- single processor (something Gig or whatever)
I've compiled your source; small problem, probably related to a changed
form save format. I'm guessing you're using a newer version of Delphi.
I've used the "Multithreaded job test" button only, because it does what
I'm looking for: simultaneous connections with low traffic. The client app
is unusable; it crashes on me (without any errors reported) nearly
every time, and more so as you get to bigger numbers (150-200
connections). I ended up writing my own client (using my own small
wrapper for WinSock) to imitate your client instead.
The server is quite flaky too if you connect too many (~2000) clients
or disconnect a client in an unexpected place (anywhere but before it
goes into the receive at the start of the client packet). It works well
until "Exception count" goes non-zero. From that moment it behaves
strangely, with "Clients Connected" going up and down, up and down
without any real clients being connected or disconnected. "Exception
count" goes really high (like 500,000) and CPU usage hits 100%.
I couldn't connect more than 1982 clients: the server goes bonkers and
all new clients get connected, but no conversation happens, so, I
gather, new connections just sit blocked on IO. I don't count them as
"connected" because they're not functioning.
To sum it up: the app needs to be fixed before you can use it even as
a demo, and the limit on the number of simultaneous connections seems
to be somewhere around 1982.
Would be happy to make any adjustments to my test, if you advise me
how.
Alex
Indy 10 is anything but stable right now. I would not recommend running any
tests or basing any valuable code on it in its current form.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Qualified help FAST with Indy Experts Support
from the experts themselves:
http://www.atozedsoftware.com/indy/support/
ELKNews - Get your free copy at http://www.atozedsoftware.com
A minimum ceiling means the "max" connections that Indy can handle under a
"default" design with no optimizations. I've always said that it's around
1,000, which Stephane's tests show.
This is concurrent, mind you; non-concurrent does not have a limit.
> point of my question was "max number of client connections"; if you
> have any corrections to his code to improve the numbers, please, let
> me know. I'm after "best possible", not DEFAULT, not "minimum ceiling"
For concurrent you'd have to switch a lot of code in the current models.
That's one of the things Indy 10 is addressing with more server models.
For faster reconnects, you can of course use thread pooling.
> > - ftp://indyten:ind...@ftp.nevrona.com
>
> Indy 10 is anything but stable right now.
I just go by:
"Stephane Grobety" <gro...@fulgan.com> wrote in message
news:3da3daf2$1...@newsgroups.borland.com...
>
> ... It is designed to use Indy 10 ...
>
so if he advises me to change, I will.
> I would not recommend running any
> tests or basing any valuable code on it in is current form.
I'll keep your advice in mind.
Alex
> ... I've always said that it's around 1,000,
> which Stephane's tests show.
So 1000 is the magic number ?
> This is concurrent mind you, non concurrent does not have a limit.
Understood.
> For concurrent you'd have to switch a lot of code in the current
> models. That's one of the things Indy 10 is addressing with more
> server models.
So for more than 1000 connections, it WILL come in Indy 10 ?
> For faster reconnects, you can of course use thread pooling.
Not interested.
Alex
More or less yes.
> So for more then 1000 connections, it WILL come in Indy 10 ?
Short of you writing a lot of code, yes.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Need extra help with an Indy problem?
http://www.atozedsoftware.com/indy/support/
ELKNews - Get your free copy at http://www.atozedsoftware.com
> > So for more than 1000 connections, it WILL come in Indy 10 ?
>
> Short of you writing a lot of code, yes.
>
I've missed that ... But we'll deal with it when we get there.
Alex
yes, I noted that too. Something isn't thread safe down there.
It only shows up under *heavy* load. But damned if I know what.
Any suggestions?
BTW, for all readers still reading this thread, Indy10 will
probably not work on win95 OSR1 machines once the
next code release is made (a few weeks?). Worst case is
that your application may not actually load. I will be keeping
an eye on this because it will affect some of my clients (yes, I
still have some running win95 OSR1 or release A or whatever
M$ ended up calling it)
Grahame
Indy Pit Crew
Woaw, that's an OLD version of windows you're running ;)
> I've compiled your source, small problem, probably, related to Form
> save format changed, I'm guessing, you're using newer version of
> Delphi.
I'm using Delphi 6 enterprise SP 2. Sorry not to have mentioned that.
> I've used "Multithreaded job test" button only, because it does what
> I'm looking for: simaltaneous connections with low traffic. Client app
> is unuseable, it crashes on me (without any errors reported) nearly
> every time and more so as you get to bigger numbers (150-200
> connections). I've end up writing my own client (using my own small
> wrapper for WinSock) to immitate your client instead.
Funny... I was able to connect 1000 threads without any trouble with both
the client and server running on the same machine. The default thread
number I used (500) has never failed to work.
Maybe you should check the project stack size: I have changed mine to 1024
in order not to run into trouble with stack space allocation, but that might
not be reflected in the source I provided. Even without that change,
it should have worked properly.
> The server is quite flaky too if you connect too many (~2000) clients
> or disconnect client in unexpected place (anywhere but before it goes
> into receive of start of client packet). It works well until
> "Exception count" would goes non zero. From that moment it behaves
> strange, with "Clients Connected" start going up and down, up and down
> without any real clients being connected or disconnected. "Exception
> count" goes really high (like 500,000) and CPU usage goes 100%.
That's what I experienced at around 3k connections (a bit lower than that
actually). I have others reporting numbers similar to mine.
> I couldn't connect more then 1982 clients: server goes bonkers and all
> new clients get connected, but no conversation happening, so, I
> gather, new connections just sit blocked on IO, I don't count them as
> "connected" because they're not functioning.
Well, I didn't do it, but it would be trivially easy to add a "timeout
counter" to the client threads in order to have a demonstration of what
you've described. I can't say I have noticed such an effect myself (AFAIK,
all the threads that connected DID send and receive the data on a regular
basis or they wouldn't have disconnected properly when I signaled the "stop
all" event) but I could have missed it somehow. I know that when you reach
the server limit, however, some threads do get permanently stuck (I'm
assuming it's in one of the read or write calls).
> To sum it up: the app needs to be fixed before you can use it even as
> demo, and the limit on number of simultaneous connections, seems to be
> somewhere around 1982.
Could you be more precise on what kind of "fix" you had to do ? I have
reports of others that, AFAIK, were able to run both the client and the
server "as is" (although, as I stated, neither has been "foolproofed" to
the level of a commercial application).
> Would be happy to make any adjustments to my test, if you advise me
> how.
Well, if you're referring to the number of connections, please check the
stack size of the server app. For the client, I would be glad to help if
you explained what you mean by "client app is unusable".
Good luck,
Stephane
Actually, the code was designed to run with the Indy 10 version of the code
that was on the public FTP as long as it was available, so my guess is that
you used the same code as myself or something really close to it. I have a
version running with the new code but it's nowhere near stable enough for
prime time, I'm afraid.
Good luck,
Stephane
bah - linux. Almost anything will pull x-windows over. actually, seriously,
something in the debugger/ide does
> > BTW, for all readers still reading this thread, Indy10 will probably
> > not work on win95 OSR1 machines once the next code release is made
> > (a few weeks?). Worst case is that your application may not actually
> > load. I will be keeping an eye on this because it will affect some of
> > my clients (yes, I still have some running win95 OSR1 or release A or
> > whatever M$ ended up calling it)
>
> Why?
cause we will be using windows fibers, and the entry points for
windows fibers were not available on at least some win95 systems.
depends whether I get round to making them dynamically bound,
or someone else does it
Grahame
> bah - linux. Almost anything will pull x-windows over. actually,
> seriously, something in the debugger/ide does
Not x-windows, but sometimes wineserver. Though you should never run kylix
debugger and vmware at once, lots of trouble with that.
> cause we will be using windows fibers, and the entry points for windows
> fibers were not available on at least some win95 systems. depends whether
> I get round to making them dynamically bound, or someone else does it
Ah ok.
johannes
> Actually, the code was designed to run with the Indy 10 version of
> the code that was on the public FTP as long as it was available so my
> guess is that you used the same code as myself or something really
> close to it.
I've downloaded Indy from ftp://indyten:ind...@ftp.nevrona.com
yesterday, so, I guess, it is the same code.
> I have a version running with the new code but it's nowhere near
> stable enough for prime time I'm afraid.
Well, it'll get better.
Alex
> ... I know I would never assume that anything comms related was
> impossible to them just because it was impossible for me to do.
Very good assumption on your part !!!
Alex
PS: I really hope you don't read it personally <BG>.
--
Regards
Illya Kysil, software developer
Delphi/C/C++/C#/Java/Forth/Assembler
If it is NOT SOURCE, it is NOT SOFTWARE. (C) NASA
> Then any new thread will get default 1MB RESERVED stack space,
> which makes 2K threads enough to eat all memory available to a process
> in standard Win XXX Pro.
> In Server editions one can use up to 3GB.
The binary from the DXSock group to test how many threads can be created by
your OS will work on Windows 95, 98, Me, NT, 2K or XP and create more than
2000 threads. Not that I would refer to your assertions as a "dirty lie" or
anything; that would be polemics (trollery).
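The stack-reservation point is easy to demonstrate outside Delphi. Here is a minimal Python sketch (my own illustration, not the DXSock test binary): an idle thread costs address space rather than CPU, so shrinking the per-thread stack reservation lets far more threads coexist in one process.

```python
import threading

# The default per-thread stack reservation on Windows is ~1 MB, which
# caps a 2 GB process at roughly 2,000 threads. With a 128 KB
# reservation, the same address space fits far more threads.
threading.stack_size(128 * 1024)

stop = threading.Event()

def idle_worker():
    # Park the thread; an idle thread consumes address space, not CPU.
    stop.wait()

threads = [threading.Thread(target=idle_worker) for _ in range(500)]
for t in threads:
    t.start()

alive = sum(t.is_alive() for t in threads)
print(f"threads running: {alive}")  # threads running: 500

stop.set()
for t in threads:
    t.join()
```

The 500 here is arbitrary; the same pattern scales to tens of thousands on a machine with enough address space, which is the whole "trick" being argued about above.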
> If it is NOT SOURCE, it is NOT SOFTWARE. (C) NASA
Nice quote and I totally agree, but it wasn't NASA, it was RMS, in this essay:
http://www.virtualschool.edu/mon/ElectronicProperty/StallmanSoftwareShouldBeFree
rgds,
Richard Morris
> Am I misunderstanding something here? How can 50.000 connections be
> open at one time?
Yes, just not using the OS's defaults.
You need to configure Windows properly.
More info on the BPDX newsgroups.
How true.
see the binary posted in Attachments, in the b.p.d.Internet.Winsock thread
"DXSock 3.0 are better than Indy?"
On my XP Pro Toshiba laptop with no tweaking at all (and while running msn
messenger/ICQ/and a bunch of other apps) I get 37,000 threads, with some
tweaking of the system and extra RAM that could be increased to well over
50,000.
rgds,
Richard Morris
> Probably someone should read something about threading and what some
> OS can do and what can not.
And somebody else should actually try and see if something claimed on the
MSDN is actually right before giving up hope
or assuming that's the ultimate truth.
Check RAM's post in attachments. I got bored after 30000 concurrent threads
on my Athlon 1800, 1Gb RAM but I could have continued
for a long time.
Maybe you should read comparison pages more carefully before putting words
in other people's mouths.
DXSock handles 800 connections WHEN Indy handles 200 and Borland sockets
handle 170ish.
It doesn't say ANYWHERE that DXSock handles 50000 *when* Indy only handles
200.
It is true that DXSock outperforms *any* other offer out there as far as
maximum concurrent threads at the same time.
The trick is very, very simple and does work like magic. I would ask Ozz
directly for his demo that demonstrates
how you can reach 50000 threads on a decent machine.
Not many people (including MS) know how to do this, but I assure you it's
real, as Ozz can easily prove to you.
--
Alessandro Federici
-System Architect
-Borland Certified Consultant
-DXSquad Team Member
Homepages:
www.remobjects.com (home of the RemObjects SDK)
www.msdelphi.com (home of the DSOAP Toolkit)
www.projectdionysus.com (home of the best thing that ever happened in
the Delphi world)
EMail: al...@msdelphi.com
hi
That's pretty interesting. Is the DXSock team happy to post any information
about the technique so that it can be used elsewhere, or is there a royalty
in the secret that not even MS knows?
Grahame
> That's pretty interesting. Is the DXSock team happy to post any
> information about the technique so that it can be used elsewhere, or
> is there a royalty in the secret that not even MS knows?
Ask Ozz directly.
He told me how he managed to do it two days ago and I was like "WTF, duh!
Obvious!".
Other than that I cannot really tell you anything more but maybe if you ask
him he'd gonna share it with you too.
I am being serious, no joke here and no sarcasm.
We met at the Chicago Midway airport to discuss about our collaboration and
finally met in person and , in between
other things I asked him how the heck he managed to do that. Whe he told me
I was shoked by the simplicity of the solution.
If you shrink the thread stack down small enough surely you can, but that's
not supported on all versions of Windows. XP does it with the Ex calls.
But more to the point - what good is 37,000 threads? You can't schedule them
all. The size of a quantum and context switch overhead would make the system
quite slow if a reasonable number of them were active.
> But more to the point - what good is 37,000 threads?
It really doesn't matter to the terms of this aggression-thread.
Somebody claimed it's not possible and also said the BPDX site claims that
DXSock handles 50.000 threads *when*
Indy handles 200. This is a total (and looks to me intentional, BTW)
misrepresentation of what is on the site, which clearly states
the test environment and the purpose of the test.
So, let me understand. When some people here couldn't manage how to do such
a simple thing, it was a lie.
Now that it's demonstrated how that is possible (and utterly simple), you
guys switch to the usefulness of it?
Just make up your minds, guys!
This whole thread looks to me like a pathetic attempt to misrepresent BPDX.
I might be wrong,
but this switch of subject really makes me even stronger in this position.
> > Min stack size: $00004000
> > Max stack size: $00100000
> Then any new thread will get default 1MB RESERVED stack space,
> which makes 2K threads enough to eat all memory available to a process
> in standard Win XXX Pro.
> In Server editions one can use up to 3GB.
That explains <g> ...
Alex
In article <3d9ec251$1...@newsgroups.borland.com>, h.sch...@shnet.at
says...
> The Developer of the DXSock 3.0 Components from BrainPatchworkDX say that
> their server components can handle more than 50.000 concurrent connections
> from a single server while the indy components can handle only 200.
> Can anybody explain me if it is true, and if so, why ?
>
> thanks
>
> herbert schuster
actually, you don't differ with most of the posts here.
Everyone agrees that DXSock outperforms Indy.
At question are whether:
1. The difference is as wide as claimed
2. The difference is claimed to be as wide as this (below)
(I tried to check but www.dxsock.com has been down for hours)
3. Whether this difference is actually significant, and if so, when
4. Whether any one is still reading this thread
Grahame
Indy Pit Crew
Creating several thousand threads is no great feat (none at all, actually).
Creating 50000 active threads, each having a connected and active client, is,
however, not possible in a single process (and even doing so on a single
machine is probably not possible).
Yes, the fact that the Indy component uses Borland's TThread class means
that it will use the same stack size as the project initially (one meg)
and that this stack size is the ultimate limiting factor of the
"default" Indy server component. Yes, you can push that limit further up by
changing the compiler defaults, but you won't get much out of it because of
other factors.
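The arithmetic behind that ceiling is worth spelling out. A back-of-the-envelope sketch (using the standard Win32 defaults, not numbers measured from Indy itself):

```python
# A default 32-bit Windows process gets 2 GB of user-mode address space,
# and each thread reserves 1 MB of stack by default. Dividing one by the
# other gives the theoretical thread ceiling, before the EXE image, heap
# and DLLs take their share of the address space.
user_address_space = 2 * 1024**3   # 2 GB of user-mode address space
reserved_stack     = 1 * 1024**2   # 1 MB default per-thread reservation

ceiling = user_address_space // reserved_stack
print(ceiling)  # 2048
```

The 2048 result, minus the address space consumed by everything that isn't a thread stack, lines up neatly with the ~1982-connection limit Alex observed earlier in the thread.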
I state again: in Indy 8, 9 and the version of 10 available to the public,
the "reasonable" number of concurrently connected clients per server process
is 1000; it can be pushed to 2k without losing anything but performance and
could be pushed up to 3k in some cases (but you WILL have trouble).
Can DXSock handle more connections ? I honestly don't know because a/ there
is no trial version anywhere to test, b/ I don't even know for sure what
kind of technology DXSock uses.
What I'm sure of is that, if DXSock uses the same technique as Indy (that
is, one thread per connection, no socket pooling), then it can't possibly
outperform Indy by a great deal. AFAIK, using socket pooling and IOCP is
the only way to push that number higher, but it would have other performance
impacts (like being mainly effective if you have a large number of connected
clients but only a small number of active ones).
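To make the alternative model concrete, here is a hedged Python sketch of the readiness-loop idea (the `selectors` module is a portable stand-in for IOCP, which is Windows-only): one thread services every connection, so the per-thread stack cost discussed above disappears entirely.

```python
import selectors
import socket

# One thread, many sockets: a readiness loop in the spirit of the
# socket-pooling / IOCP model. Instead of parking a thread per client,
# a single loop asks the OS which sockets are ready and handles them.
sel = selectors.DefaultSelector()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port on loopback
listener.listen(128)
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

def serve_once():
    # Handle one batch of ready sockets; echo whatever arrives.
    for key, _ in sel.select(timeout=1.0):
        sock = key.fileobj
        if sock is listener:
            conn, _addr = sock.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)        # echo back to the client
            else:
                sel.unregister(sock)      # client closed: clean up
                sock.close()

# Tiny loopback demo: connect, send, let the loop run, read the echo.
client = socket.create_connection(listener.getsockname())
client.sendall(b"ping")
serve_once()   # first pass accepts the connection
serve_once()   # second pass reads "ping" and echoes it
reply = client.recv(4096)
print(reply)   # b'ping'
```

The trade-off Stephane mentions shows up here too: the loop shines when many clients are connected but few are active at once, since idle connections cost one registry entry instead of one thread.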
Good luck,
Stephane
You'll forgive me for not going to that web page, the DXSock web site being
down as I write this, but this is simply no information at all. First, it
only speaks about the "first" set of tests. Second, it says nothing about
the type and amount of data. Third, it only makes vague claims about the
hardware used. Fourth, it simply omits to speak of the type of test that
was done (client and server on the same machine ? Different machines ? etc.),
type of OS used or even type of network card (in a BW test, an Intel server
NIC will SMOKE your OEM 3C90x card).
Sorry, that quotation is only a way to hide the fact that no information
was given about how these "tests" were conducted.
(Ok, I didn't specify what NICs I used in my test. I'll correct that by
stating that the cards used on the client machines are a 3com Megahertz
10/100 PCMCIA card and a 3C920 onboard chip, while the server is using an
Intel PRO 100+ server card running over a 100 megs network with an old but
good quality 3COM hub, while no other machines on the segment were doing
anything special: it was lunch time).
Good luck,
Stephane
how long is he going to be down?
Grahame
replace "any" with "enough" and you get more accurate sentance. It doesn't
change anything to the bottom line though:; The claims made on that web
page are not supported by any fact and the web page doesn't give enough
information even to juge the numbers it gives (I'm not even talking about
reproducing them). In ANY kind of experiment or benchmark, that is called a
fake.
> What --> "accept a connection, read the inbound data, write this data back
> out, and disconnect."
> How --> using DXSock servers or the others in the list.
>
> It doesn't matter how big the data is.
There is no need for me to quote more to show that you don't have a good
understanding on how TCP/IP works.
> Assume it is the minimum TCP packet
> side (which I think could be enough to handle binary requests of a
> RemObjects server or very small SOAP requests from the raw socket
> prospective).
"Assuming" isn't part of this game. If you use minimum-sized packets without
option, you will have a very different result than if you use packets that
match the path MTU exactly. This can make a manyfold difference in
throughput.
As for "very small SOAP requests", there is no such thing: SOAP is, by
its very nature, very expensive in size and the packets it generates
are often bigger than the Ethernet MTU.
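As a concrete illustration of why this matters for benchmarking (my own Python sketch, not anything from the benchmark under discussion): Nagle's algorithm is on by default, so sub-MTU writes can be held back and coalesced while waiting for ACKs, and a tiny-packet benchmark partly measures that option rather than the library. TCP_NODELAY turns it off.

```python
import socket

# A fresh TCP socket has Nagle's algorithm enabled: small writes may be
# held back and coalesced while waiting for ACKs. Setting TCP_NODELAY
# disables that, so each small packet goes out immediately.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

default = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

print("Nagle on by default:", default == 0)   # True
print("Nagle disabled now:", nodelay != 0)    # True
sock.close()
```

Two echo benchmarks that differ only in this one option (or in whether their payloads fill the path MTU) can report wildly different throughput, which is exactly why the unstated test parameters matter.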
> > (Ok, I didn't specify what NICs I used in my test. I'll correct that by
> > stating that the ards used on client machinee are a 3com Megaherz 10/100
>
> So you have home user kind of hardware.
Not quite. The NICs in the clients are not high-performance ones since they
are in laptops. The HUB is professional quality and so is the server NIC.
I'm not using gigabit ethernet, though.
> I don't know for sure what hardware Ozz used but knowing the amount of
> money
> he spent in Cisco routers and the kind of work he does
> I wouldn't expect him to work JUST with small NICs like yours.
> That kind of performance is for high speed/load servers. Those don't use
> 100Mb nics.
LOL, putting money in CISCO routers is not a guarantee of performance.
CISCO makes hardware that is very nice to manage and does also produce
ISP-grade routing hardware. But given that it takes him several days to
replace a "network connection" (since that's the reason you gave for his
web site being down), it can't be more than semi-pro hardware like the one
you'll see in many small enterprise networks.
But: 1/ he's the only one that can answer that; 2/ I don't really care
for the answer. The main point is that he published "benchmarks" that are
not only very suspicious looking, but also given without information on
how they were generated, and that is NOT being honest to the reader.
> Anyhow, I will put together some benchmarks for RO myself and I will
> ask you for more information on what kind of information you'd find
> useful.
Like source code of the program used to generate these benchmarks (client
and server), exact hardware and software specs, testing methodology, what
was measured and how.
> There's one thing I have to agree upon anyways: BPDX components are NOT as
> straighforward to use as the Indy ones. The learning curve is higher but
> once you get there it really makes sense.
That's irrelevant to the discussion.
Stephane
> If you use minimum-sized packets
> without nagle option, you will have a very different result than if you
> use packets that match the path MTU exactly. This can make a manyfold
> difference in throughput.
The missing word was "nagle" between "without" and "option".
Stephane
--
Robert Love - (rlove at slcdug.org)
Salt Lake City Delphi Users Group - http://www.slcdug.org
Delphi JEDI - http://www.delphi-jedi.org
Turbopower TPX Member - http://www.turbopower.com
> > The Developer of the DXSock 3.0 Components from BrainPatchworkDX
> > say that their server components can handle more than 50.000
> > concurrent connections from a single server while the indy
> > components can handle only 200.
> > Can anybody explain me if it is true, and if so, why ?
you said that ...
> Maybe you should read more carefully comparison pages before putting words
> in other people's mouth.
> It doesn't say ANYWHERE that DXSock handles 50000 *when* Indy only handles
> 200
along with some other accusations that people had made these claims up.
Check http://www.bpdx.com/components/dxsock/comparison/:
Maximum connections:
Indy 9.x: 200+
DXSock 3: 50,000+
I think that bpdx.com is the place where the claims were made up.
BTW, part of this thread was a reasoned explanation of why 50,000
concurrent TCP connections was actually impossible, before we got
diverted into threads.....
Grahame
Not entirely but the site is up.
They are still fixing the newsgroups.