>can not find channel named "sock66"
> while executing
>"fileevent $sock writable [list socketinit $jb $jt $ms $pn]"
> (procedure "socketevents" line 8)
> invoked from within
>"socketevents 0 jt6522049 {0 tid0xb3d9cbb0 sock66 127.0.0.1 1336 ...
> ("after" script)
Just wondering, does Tcl have internal limits, related to sockets,
that could explain this?
I suppose I could look at the source, but it helps if you know where
to look ...
--
Internet service
http://www.isp2dial.com/
Tcl does not have limits, but the OS might.
The message means that by the time you attempted to put the fileevent
command on the channel, the channel had already been closed.
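One defensive sketch of a way around that race (the helper name is mine, not from John's code): check that the channel still exists before scheduling the handler.

```tcl
# A sketch: only schedule the fileevent if the channel still exists in
# this interpreter. [file channels $name] returns the name iff open.
proc safeFileevent {sock script} {
    if {[file channels $sock] ne ""} {
        fileevent $sock writable $script
        return 1
    }
    return 0    ;# channel already gone; caller can clean up instead
}
```

Of course the channel can still be closed between the check and the fileevent by other event handlers, so a [catch] around the fileevent is the belt to go with these braces.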
--
+--------------------------------+---------------------------------------+
| Gerald W. Lester |
|"The man who fights for his ideals is the man who is alive." - Cervantes|
+------------------------------------------------------------------------+
>>can not find channel named "sock66"
>> while executing
>>"fileevent $sock writable [list socketinit $jb $jt $ms $pn]"
>> (procedure "socketevents" line 8)
>> invoked from within
>>"socketevents 0 jt6522049 {0 tid0xb3d9cbb0 sock66 127.0.0.1 1336 ...
>> ("after" script)
>Just wondering, does Tcl have internal limits, related to sockets,
>that could explain this?
Err, never mind.
I think my backend imap server (run from xinetd) is refusing the
connection, and I forgot to use fconfigure to check for errors before
handing the socket down to my worker thread.
This stuff is hard to get right ...
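The check in question can be sketched like this, assuming an asynchronous [socket -async] connect; the proc names (onConnect, handoff) are illustrative, and handoff stands in for whatever the caller does with a good socket.

```tcl
# Sketch: connect asynchronously, then verify the connect actually
# succeeded before trusting the channel. Proc names are illustrative.
proc asyncConnect {host port} {
    set sock [socket -async $host $port]
    fconfigure $sock -blocking 0
    # the socket becomes writable once the connect completes (or fails)
    fileevent $sock writable [list onConnect $sock]
    return $sock
}
proc onConnect {sock} {
    fileevent $sock writable {}         ;# one-shot: clear the handler
    set err [fconfigure $sock -error]   ;# "" means the connect succeeded
    if {$err ne ""} {
        close $sock
        puts stderr "connect failed: $err"
        return
    }
    handoff $sock   ;# e.g. transfer to a worker (supplied by the caller)
}
```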
>Tcl does not have limits, but the OS might.
>The message means that by the time you attempted to put the fileevent
>command on the channel, the channel had already been closed.
Yes, now I see, I found another defect in my code. Ugh. I wish I
could find someone else to blame ...
This stuff is easier to get right in Tcl than in most other programming
languages. (Asynchronous programming is tricky enough that many people
think that threaded synchronous programming is easier. If only that was
really true...)
Donal.
>John Kelly wrote:
To use events or threads, that is the question.
I'm interested in massive scalability that can "go fast at all costs,"
and I read Zeus web server uses a select/event model. I wondered, why
not use a select/event model in a hybrid design with threads, or even
separate pids, to broaden the load supporting structure?
Since Tcl has threads and events, it's been a useful tool for modeling
the idea. It's hard to go fast on a large scale, and it helps to have
a modeling tool that exposes weaknesses and defects in the design.
The answer is easy: only use threads if you've determined absolutely
that you *need* threads. Otherwise, use events.
If you can't definitively answer the question "do I really need
threads?", attempting to use threads will only give you headaches.
That's my humble opinion, anyway.
I've found you can go one better.
Use events to do all your I/O. If something outside your control
blocks(*), or computes too long on a multi-CPU machine, bring in the
transaction data in the main thread, fire off a thread to process the
data, and send the result back out with the main thread.
(*) SQL connections, COM object interfaces, etc.
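With the Thread package, that pattern can be sketched like so; crunch is a stand-in for the blocking call, and the names are illustrative.

```tcl
package require Thread

# Sketch of the pattern above: all I/O stays in the main thread's event
# loop; the blocking work (SQL, COM, heavy computation) runs elsewhere.
set worker [thread::create {
    proc crunch {data} {
        # stand-in for the blocking call
        return [string toupper $data]
    }
    thread::wait    ;# sit in the event loop, waiting for scripts
}]

# Post the job without blocking this thread; the result is delivered
# into ::result back here, which the event loop (vwait) picks up.
thread::send -async $worker [list crunch "request payload"] ::result
vwait ::result
puts $::result
```

In a real server you would use a variable trace or a per-request callback rather than vwait, so the main event loop keeps servicing other connections while the worker computes.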
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
Yes. And looking more closely at John's case, we may even explain why
it is easier: the proper, balanced, use of exceptions in the library.
Opening a file descriptor, failing, and using fconfigure on it, is
_very_ hard to write in Tcl (kudos to John for succeeding anyway ;-),
while in many other languages a special return value is used instead
of an exception. (Come to think of it, even I wonder how John managed
to get a file descriptor from a rejected connection, given the
callback style of [socket -server]...)
-Alex
Typo. I meant:
> (Come to think of it, I even wonder ...
-Alex
>If you can't definitively answer the question "do I really need
>threads?", attempting to use threads will only give you headaches.
You may have a point. My experimental server seems to be stretching
the limits of Tcl's thread capability.
Last night I thought it was my fault, but after fixing my bug, now I
see something else. My "connection refused" error is normal, that's
because the back end imap server is overloaded with too many incoming
connections from my Tcl proxy server.
But before I can even get that far, I see a problem with Tcl threads.
I accept incoming sockets in a main thread, and transfer them to
worker threads using thread::transfer. Using a test script, I blast
connection requests to the server, the count determined by a command
line argument.
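Roughly, the accept-and-transfer flow looks like this; a sketch only, with illustrative proc names, a single shared worker instead of a pool, and a trivial echo standing in for the real handling.

```tcl
package require Thread

# Sketch of accept-in-main, serve-in-worker via thread::transfer.
set worker [thread::create {
    proc serve {sock} {
        fconfigure $sock -blocking 0 -buffering line
        fileevent $sock readable [list onLine $sock]
    }
    proc onLine {sock} {
        if {[gets $sock line] < 0} {
            if {[eof $sock]} { close $sock }
            return
        }
        puts $sock $line    ;# echo it back (stand-in for real handling)
    }
    thread::wait
}]

proc accept {sock addr port} {
    # move the channel to the worker, then tell it the channel's name
    thread::transfer $::worker $sock
    thread::send -async $::worker [list serve $sock]
}
# usage: socket -server accept $listenPort ; vwait forever
```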
It works fine up to 30. But after that, some debugging output shows
that Tcl gets stuck using the same two socket names over and over:
go -- jt6712371 sock61 127.0.0.1 143 -- sock60 127.0.0.1 3147
go -- jt2635630 sock63 127.0.0.1 143 -- sock62 127.0.0.1 3149
go -- jt9804835 sock65 127.0.0.1 143 -- sock64 127.0.0.1 3151
go -- jt3275858 sock67 127.0.0.1 143 -- sock66 127.0.0.1 3153
go -- jt6800889 sock67 127.0.0.1 143 -- sock66 127.0.0.1 3155
go -- jt6943829 sock67 127.0.0.1 143 -- sock66 127.0.0.1 3157
go -- jt5429430 sock67 127.0.0.1 143 -- sock66 127.0.0.1 3159
go -- jt2197973 sock67 127.0.0.1 143 -- sock66 127.0.0.1 3161
go -- jt1866061 sock67 127.0.0.1 143 -- sock66 127.0.0.1 3163
go -- jt9452841 sock67 127.0.0.1 143 -- sock66 127.0.0.1 3165
Now if I comment out my code that does the thread transfer, leaving
all the sockets in the same main thread where they first get accepted,
Tcl can handle that OK:
go -- jt6712371 sock61 127.0.0.1 143 -- sock60 127.0.0.1 2495
go -- jt2635630 sock63 127.0.0.1 143 -- sock62 127.0.0.1 2497
go -- jt9804835 sock65 127.0.0.1 143 -- sock64 127.0.0.1 2499
go -- jt3275858 sock67 127.0.0.1 143 -- sock66 127.0.0.1 2501
go -- jt6800889 sock69 127.0.0.1 143 -- sock68 127.0.0.1 2502
go -- jt6943829 sock71 127.0.0.1 143 -- sock70 127.0.0.1 2503
go -- jt5429430 sock73 127.0.0.1 143 -- sock72 127.0.0.1 2504
go -- jt2197973 sock75 127.0.0.1 143 -- sock74 127.0.0.1 2505
go -- jt1866061 sock77 127.0.0.1 143 -- sock76 127.0.0.1 2506
go -- jt9452841 sock79 127.0.0.1 143 -- sock78 127.0.0.1 2507
go -- jt2650185 sock81 127.0.0.1 143 -- sock80 127.0.0.1 2508
go -- jt7134259 sock85 127.0.0.1 143 -- sock84 127.0.0.1 2510
go -- jt6295587 sock83 127.0.0.1 143 -- sock82 127.0.0.1 2509
For me, Tcl thread::transfer is not very useful if it can't handle
stress, and that dashes any idea of using Tcl for my server.
Unfortunately, I don't have much more time to spend on this, so this
is about as far as I can go towards submitting a bug report.
>> This stuff is easier to get right in Tcl than in most other programming
>> languages. (Asynchronous programming is tricky enough that many people
>> think that threaded synchronous programming is easier. If only that was
>> really true...)
>Yes. And looking more closely at John's case, we may even explain why
>it is easier: the proper, balanced, use of exceptions in the library.
>Opening a file descriptor, failing, and using fconfigure on it, is
>_very_ hard to write in Tcl (kudos to John for succeeding anyway ;-),
Yeah, I got that working pretty good. But the Tcl thread support just
doesn't cut it for me. See related post ...
>I wonder how John managed to get a file descriptor from a rejected
>connection, given the callback style of [socket -server]...)
You can examine the code, I'll post a link to it later ...
You have a 30-processor machine? If not, and you only have one
processor, then because Tcl threads are native OS threads you'll be
loading that machine quite hard. I suspect that it is far better in
practice to think in
terms of sending packets of work to a pool of threads, since then you
can scale that correctly for your hardware and get maximum performance.
Luckily, the thread pool (tpool) stuff in the Thread package makes this
style of programming easy.
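A minimal tpool sketch; the pool sizes and the work proc are illustrative.

```tcl
package require Thread

# Bound the pool to the hardware instead of spawning per-connection
# threads; sizes and the work proc are illustrative.
set pool [tpool::create -minworkers 2 -maxworkers 4 -initcmd {
    proc work {data} {
        # stand-in for the real per-request computation
        return [string toupper $data]
    }
}]

set job [tpool::post $pool [list work "hello"]]
tpool::wait $pool $job          ;# spins the event loop until done
puts [tpool::get $pool $job]
```

tpool::post queues the script if all workers are busy, so the pool naturally throttles load to whatever -maxworkers you choose for your hardware.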
Donal.
Not sure exactly what part of the Tcl thread support is to blame here,
but I am sympathetic to the "forget threads" approach anyway... So
let's get back to the roots of the problem:
(a) Can your server be expressed as accept()+fork()+exec() (like
inetd, nowait mode)
(b) Can it be expressed as (a) with persistence (inetd, wait mode)
(c) If (a) or (b) is true, but performance is inadequate, do you have
evidence about what exactly is the bottleneck:
- the fork() overhead
- the init overhead of the exec()ed tclsh
- the context switch penalty among many simultaneously living
children
Depending on your answers to the above, I believe we could propose
varied prognoses about Tcl's ability to satisfy your needs.
-Alex
That's easy as there's only one way: asynchronously connecting sockets.
The file descriptor's there, but there's nobody home... :-)
Donal.
>> But the Tcl thread support just doesn't cut it for me.
>Not sure exactly what part of the Tcl thread support is to blame here,
The part where you detach a socket from one thread, and then attach it
in a different thread.
>(a) Can your server be expressed as accept()+fork()+exec() (like
>inetd, nowait mode)
>(b) Can it be expressed as (a) with persistence (inetd, wait mode)
>(c) If (a) or (b) is true, but performance is inadequate, do you have
>evidence about what exactly is the bottleneck:
> - the fork() overhead
> - the init overhead of the exec()ed tclsh
> - the context switch penalty among many simultaneously living
I don't know the answer to all those questions. All I know is, I
believe a select/event model with horizontal scaling, using either
threads or pids, is what I'm interested in.
If anyone wants to see the code that causes the socket/thread transfer
problem, here's the link I promised:
ftp://isp2dial.com/imapwwwbui/imapwwwbui-0.002.tgz
Hmm, that might be unrelated, or might be a related issue:
http://sourceforge.net/tracker/index.php?func=detail&aid=1555698&group_id=10894&atid=110894
Michael
Very strange. You designed a complex thing without knowing whether a
much simpler thing like inetd would do the job? (Sorry, but I won't
dive into your code to dig that.)
In case you don't know what inetd does (sorry if you do), here it
goes:
# inetd's core, sketched in Tcl: exec one handler process per
# connection, wiring the socket to the child's stdin/stdout
proc insok {sok args} {
    eval exec $::cmd <@ $sok >@ $sok 2>@ stderr &
    close $sok
}
socket -server insok $port
vwait forever
(except it is a native binary of course)
-Alex
>> For me, Tcl thread::transfer is not very useful, if it can't handle
>> stress, and that dashes any idea of using Tcl for my server.
>
>Hmm, that might be unrelated, or might be a related issue:
>http://sourceforge.net/tracker/index.php?func=detail&aid=1555698&group_id=10894&atid=110894
Yes, two sockets with the same name, that's what I see. Sounds
related. Thanks for the pointer, Michael.
Well, inetd does quite a bit more than that (especially logging and
managing multiple sockets). But that's definitely the core of what it does.
Did you know that Tcl 8.4 is happy working as the contained program in
such situations? It will understand those connected sockets to be
sockets, allowing the script to examine where the connection is coming
from instead of just thinking that they're a generic channel...
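That is, when inetd (nowait mode) hands the program a connected socket as stdin, a script can ask for the socket-specific options directly; a sketch, with the catch covering the case where stdin is a plain channel:

```tcl
# Sketch: under inetd in nowait mode, stdin *is* the connected socket,
# so from Tcl 8.4 on its socket options show through.
if {![catch {fconfigure stdin -peername} peer]} {
    # peer is {address hostname port} of the connecting client
    foreach {addr host port} $peer break
    puts stderr "connection from $host ($addr), port $port"
} else {
    # stdin is a plain channel (run from a terminal, say): no peer info
}
```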
Donal.
>If anyone wants to see the code that causes the socket/thread transfer
>problem, here's the link I promised:
>ftp://isp2dial.com/imapwwwbui/imapwwwbui-0.002.tgz
That link is gone now, but here's an updated link:
ftp://isp2dial.com/imapwwwbui/imapwwwbui-0.003.tgz
There were only minor editorial changes between the two; there is no
difference in function. Version 0.003 will likely be the last of my
Tcl experiment; it's time to move on. Libevent looks interesting ...
Wow -- [fconfigure stdin] returning the socket-specific part: how
wonderful.
How long has it been so? 8.4.0?
I never noticed the release note relieving that long-standing
favourite of mine!
-Alex
So you've tried threads, *not* the pure event-driven paradigm (with
subprocesses), then you give up Tcl, switch to another environment,
and there try the event-driven approach, still not knowing whether
your case would be covered by inetd? What's the rationale?
-Alex
Code diving indicates that this has been so since 8.4a3, specifically
2001-06-18 according to ChangeLog.2001, but for some reason it's not
logged in the changes file. Bug number is #219137.
> I never noticed the release note relieving that long-standing
> favourite of mine!
Most of the time, if you notice such things then there's something
wrong... :-D
Donal.