
[simonhf@gmail.com: Fwd: event iocp request]


Joshua N Pritikin

Oct 25, 2007, 10:58:49 PM
to perl...@perl.org
I don't do Windows so...

--
Make April 15 just another day, visit http://fairtax.org

Marc Lehmann

unread,
Nov 22, 2007, 10:43:16 PM11/22/07
to Joshua N Pritikin, perl...@perl.org, sim...@gmail.com
On Thu, Oct 25, 2007 at 07:58:49PM -0700, Joshua N Pritikin <jpri...@pobox.com> wrote:
> I'm a big fan of Event... mainly because it works with great stability on
> both Win32 & Linux. There's only one thing bugging me... On Linux,
> Event can handle into the high hundreds of sockets before things start
> to get too slow. However, on Win32 there seems to be a limitation of
> 64 sockets...

That's due to windows limitations; you can recompile Event with
FD_SETSIZE=1024 or so (but this makes it even slower).
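
As a rough sketch of what that means at the C level (not a tested recipe):
on Win32 the FD_SETSIZE override only takes effect if it is defined before
the winsock headers are included, so it is usually passed on the compiler
command line when rebuilding.

    /* sketch: raising the select() limit on Win32 by defining FD_SETSIZE
       before <winsock2.h>; normally passed when rebuilding, e.g.
       cl /DFD_SETSIZE=1024 ... */
    #define FD_SETSIZE 1024            /* must come before winsock2.h */
    #include <winsock2.h>
    #include <stdio.h>

    int main(void)
    {
        fd_set readable;
        FD_ZERO(&readable);            /* can now track up to 1024 sockets */
        printf("FD_SETSIZE = %d\n", FD_SETSIZE);
        return 0;
    }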

> being used. Do you know a workaround for this? If not, is there
> any chance of you implementing Win32 IOCP in Event?

IOCP is not really event-based, it is I/O-based, so it cannot sensibly
be integrated into Event, as the models are completely different.

> I was always hoping that he might build it into libevent (which also
> suffers from the 64 handle limit problem on Win32).

There is a replacement library called libev which doesn't suffer from
that, and also has a perl module interface.
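
As an illustration of the programming model (a sketch only, written against
the current libev documentation; in the earliest releases ev_run/ev_break
were named ev_loop/ev_unloop), a minimal C program with one readiness
watcher looks roughly like this; the perl EV module exposes the same
watcher concepts:

    /* sketch: a minimal libev readiness watcher */
    #include <ev.h>
    #include <stdio.h>

    /* called whenever fd 0 (stdin) becomes readable */
    static void stdin_cb(struct ev_loop *loop, ev_io *w, int revents)
    {
        printf("stdin is readable\n");
        ev_io_stop(loop, w);              /* stop watching ...            */
        ev_break(loop, EVBREAK_ALL);      /* ... and leave the event loop */
    }

    int main(void)
    {
        struct ev_loop *loop = EV_DEFAULT;
        ev_io stdin_watcher;

        ev_io_init(&stdin_watcher, stdin_cb, 0 /* fd */, EV_READ);
        ev_io_start(loop, &stdin_watcher);

        ev_run(loop, 0);                  /* blocks until ev_break */
        return 0;
    }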

Btw., libevent also has an event-based model and is incompatible with IOCP
in its event core (there is a buffering module that could use it for I/O,
though).

> libevent sources. Anyway, is there any chance of building his library
> directly into Event?

Unlikely, as the library doesn't deliver events for I/O readiness, which is
all that Event is about.

--
The choice of a Deliantra, the free code+content MORPG
-----==- _GNU_ http://www.deliantra.net
----==-- _ generation
---==---(_)__ __ ____ __ Marc Lehmann
--==---/ / _ \/ // /\ \/ / p...@goof.com
-=====/_/_//_/\_,_/ /_/\_\

Marc Lehmann

Nov 26, 2007, 2:23:35 AM
to Simon Hardy-Francis, perl...@perl.org
On Sun, Nov 25, 2007 at 08:07:39PM -0800, Simon Hardy-Francis <sim...@gmail.com> wrote:
> I was grepping the libev source code for FD_SETSIZE (which apparently
> is set by default to 64 for Win32). Have you ever tried setting
> FD_SETSIZE to a larger value?

Yes, it works about everywhere I tested, although the fd_set is only
used on windows by default. It does work on windows, too, although
overall performance on windows makes it basically pointless to use many
sockets there; the only thing you can hope for on windows is that your
programs work with reduced performance.

> I found this mail here suggesting that
> a value of 16384 works fine:
> http://www.mail-archive.com/d...@apr.apache.org/msg14069.html

Yes, with "newer" windows versions, winsock works around the kernel limit
of 64 objects to wait for per thread (which is hardcoded into windows), so
more actually works.

The drawback is that fd_set operations on windows are O(n) for everything
(as opposed to O(1) almost everywhere else), so increasing the fd set size
is not automatically a big win.
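
For context (a simplified sketch, not the real header): winsock declares
fd_set as a counted array of SOCKET handles rather than a bitmap, which is
why every FD_SET/FD_ISSET turns into a linear scan.

    /* simplified stand-in for the fd_set layout in winsock2.h; the real
       header uses u_int and SOCKET, and FD_ISSET scans fd_array linearly */
    #ifndef FD_SETSIZE
    #define FD_SETSIZE 64                       /* winsock default */
    #endif

    typedef struct win_fd_set_sketch {
        unsigned int  fd_count;                 /* how many sockets are set */
        unsigned long fd_array[FD_SETSIZE];     /* the socket handles       */
    } win_fd_set_sketch;

    /* the moral equivalent of FD_ISSET: an O(n) search */
    static int win_fd_isset_sketch(unsigned long s, const win_fd_set_sketch *set)
    {
        unsigned int i;
        for (i = 0; i < set->fd_count; i++)
            if (set->fd_array[i] == s)
                return 1;
        return 0;
    }

    int main(void)
    {
        win_fd_set_sketch set = { 1, { 5 } };   /* one socket, handle 5 */
        return win_fd_isset_sketch(5, &set) ? 0 : 1;
    }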

> Recently I was testing epoll on Ubuntu and managed to get 100k
> connected sockets without problem.

If you are careful you can get that with select, too, in almost the same
time even :)

> then top reported a memory usage which led me to believe that an
> average socket connection used 9.5KB.

That of course depends mainly on the socket buffers used (and how much
of them the connection actually uses). Linux is quite good at scaling
these dynamically, but if you really have a lot of connections, resizing
the socket buffers below the minimum might turn out to be a big win (for
example, for an nntp server you rarely need more than 0.5kb of receive
window+buffer).
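
To make the buffer-shrinking idea concrete (a sketch assuming an ordinary
Linux/Unix TCP socket; the kernel may round or clamp the value), the
per-socket knob is setsockopt with SO_RCVBUF:

    /* sketch: requesting a small per-socket receive buffer; the 512 bytes
       mirror the ~0.5kb nntp example above, the kernel may clamp/round it */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int size = 512;

        if (fd < 0) { perror("socket"); return 1; }

        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof size) < 0)
            perror("setsockopt(SO_RCVBUF)");

        close(fd);
        return 0;
    }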

> Have you tried libev with large amounts of sockets?

Well, quite obviously the benchmark was run with 100k sockets. I do not
usually have that many sockets in my own daemons, which rarely have more
than 10k connections.

Since everything is dynamically sized (excluding select on windows),
and the complexity of all fd operations inside libev is amortised O(1),
seeing _libev_ behaviour with more fds is relatively uninteresting. What
is more interesting is kernel behaviour (but there are no surprises
there), and most interesting of all is not actually performance, but
correctness.

As a simple example, freebsd (and to a lesser extent openbsd and darwin)
has some very loud proponents of kqueue over other such mechanisms (such
as the teensy linux epoll), and indeed kqueue is marginally better by
design (but suffers from the same design problems epoll does). But when I
ported rxvt-unicode to libev (mostly to iron out portability problems in
libev, as rxvt-unicode is deployed on a wide range of platforms), we found
out that the situation is:

openbsd: broken according to reports I received
freebsd: broken (gives endless readiness notifications for e.g. ptys, or none at all)
darwin: completely broken (doesn't work correctly even for sockets in most versions)
netbsd: works!

This means that all the horrible slowness of poll is not an important
issue when there is no working replacement mechanism for it. For a generic
event handling library, kqueue is not an option on those systems, because
it isn't a generic event handling interface (it's documented as one, but
it doesn't work in practice).

Windows has similar issues: the readiness notification system does not
exist on windows, mostly because windows doesn't have a generic I/O
model, so there is similar breakage. Unfortunately, on windows, there
is no workaround possible (short of providing your own read/write and
basically the full unix API).

(Btw., libev has the ability to embed kqueue into a poll-based loop, for
those platforms where you know that sockets work with kqueue, but this has
not yet been exposed to perl.)
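
For those curious what that embedding looks like at the C level, this
follows the ev_embed idiom from the libev documentation (a sketch, not
something shipped with the perl bindings):

    /* sketch: embed a kqueue-only loop for sockets inside the default loop,
       following the ev_embed example in the libev documentation */
    #include <ev.h>

    int main(void)
    {
        struct ev_loop *loop = EV_DEFAULT;   /* outer loop (e.g. poll-based) */
        struct ev_loop *socket_loop = 0;     /* inner loop, sockets only     */
        static ev_embed embed;

        /* create the kqueue loop only where it is usable but not recommended */
        if (ev_supported_backends() & ~ev_recommended_backends() & EVBACKEND_KQUEUE)
            if ((socket_loop = ev_loop_new(EVBACKEND_KQUEUE)))
            {
                ev_embed_init(&embed, 0, socket_loop);  /* 0 = sweep automatically */
                ev_embed_start(loop, &embed);
            }

        if (!socket_loop)
            socket_loop = loop;   /* fall back to the outer loop */

        /* register socket watchers on socket_loop, everything else on loop */
        ev_run(loop, 0);
        return 0;
    }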

> Do you happen to know what the maximum number of sockets is for Win32?

There isn't "the maximum"; there are many such limits, and they depend on
the windows version (for example, server or not) and configuration. There
is also the handle limit: many windows versions simply cannot hand out
more than 16k handles per process, etc. etc. I'd check the msdn
documentation and run your own tests.

Simon Hardy-Francis

Nov 25, 2007, 11:07:39 PM
to Marc Lehmann, Joshua N Pritikin, perl...@perl.org
On Nov 22, 2007 7:43 PM, Marc Lehmann <sch...@schmorp.de> wrote:
> > I was always hoping that he might build it into libevent (which also
> > suffers from the 64 handle limit problem on Win32).
>
> There is a replacement library called libev which doesn't suffer from
> that, and also has a perl module interface.

Thanks for the tip about the new libev -- I'll give it a try.

I was grepping the libev source code for FD_SETSIZE (which apparently
is set by default to 64 for Win32). Have you ever tried setting
FD_SETSIZE to a larger value? I found this mail here suggesting that
a value of 16384 works fine:
http://www.mail-archive.com/d...@apr.apache.org/msg14069.html

Recently I was testing epoll on Ubuntu and managed to get 100k
connected sockets without problem. With the 100k sockets connected,
top reported a memory usage which led me to believe that an average
socket connection used 9.5KB. Have you tried libev with large amounts
of sockets? Do you happen to know what the maximum number of sockets
is for Win32?

-- Simon
