TclSOAP & SSL -- Can I reuse connections?

David Bleicher

Nov 28, 2001, 4:00:20 PM
Howdy,

Looking at the TclSOAP docs and code, there is a function to get a handle
(token) to the most recently used HTTP call. My short question is: Is
there any way to reuse that same connection for subsequent SOAP calls within
a given script? In other words, I'd like to create/configure a single HTTP
connection and then make multiple SOAP calls over that connection.

The long version of the question follows. Thanks,

David
---
--
-

Long Version: Why I care --

I'm using TclSOAP and TLS extensions to do SOAP over HTTP over SSL with
client certificates. To my utter delight, everything works, and worked
right from the start. Sweet! Now, being greedy, I want it to work fast.
Here's the problem:

It's my understanding that each and every call to a SOAP method (remote
function) does the following:

1. Opens the socket
2. Sends / Receives SOAP over HTTP
3. Closes the socket

If so, then when I use SSL and client certs the list grows for every single
call:

1. Open the socket
2. Perform SSL Handshake
----(A) Verify Server
---------(Local file I/O on SVR/CA cert)
----(B) Verify Client
---------(Local file I/O on Client cert)
---------(Retrieve password for private key)
---------(Decrypt private key if encrypted)
---------(Send client public key)
----(C) Generate session key
----(D) Resend all encrypted with session key
3. Send SOAP over HTTP (over SSL)
4. Close socket.

Step two (2) is pretty darned expensive to perform for each and every method
call, especially as it may perform multiple file I/O calls each time.
Rather than jumping blindly forward, I thought I'd ask for
opinions/thoughts/guidance:

1. Can we support persistent connections so that we avoid repeating the
handshake for every call?
2. How? (or) Why not?

My guess is that a custom transport could be created in the SOAP.tcl file
that requires (as a parameter) an HTTP handle, and then uses it. I'm not
sure if this is the best approach, or what pitfalls it might create. I'll
take a crack at this myself if nobody has any other ideas. Hopefully,
somebody has some better ideas :-)

On a side note, the TLS extension expects certificate data to be passed to
it by file name. Anyone know a TCL way to fool it into getting this data
from an internal script variable? If so, at least I could avoid the
repetitive file I/O.

Pat Thoyts

Nov 29, 2001, 10:40:42 AM
"David Bleicher" <mo...@geofinity.com> wrote in message news:<u0ak5d3...@corp.supernews.com>...

> Howdy,
>
> Looking at the TclSOAP docs and code, there is a function to get a handle
> (token) to the most recently used HTTP call. My short question is: Is
> there anyway to reuse that same connection for subsequent SOAP calls within
> a given script? In other words, I'd like to create/configure a single HTTP
> connection and then make multiple SOAP calls over that connection.
>

The short answer is maybe :)

I've never tried to do this but using SSL with TclSOAP is something I
want to get done. Unfortunately I do most of my work behind a
particularly obnoxious firewall, which can make exploring the
capabilities of something like SSL difficult.

The token you get is provided by the standard tcl http package. I
believe that this package is doing an open and close on the socket
before returning the data to you. The token is actually an array. You
can see the channel that was in use as $::http::x(sock) but the
following shows the problem:

(tcl) 201 % package require http
2.4
(tcl) 202 % set tok [http::geturl http://localhost//]
::http::1
(tcl) 204 % fconfigure $::http::1(sock)
can not find channel named "sock412"

I notice that http::geturl -querychannel allows us to POST data _FROM_
a channel to the remote server. I don't see anything letting us
perform the queries over a previously opened channel. Anyone know
better?

I guess we could use http::register to feed in a previously opened
channel. We then can override http::Finish and fix it to not close the
socket. In fact the following works:

(tcl) 205 % proc ::http::close {channel} {
return
}
(tcl) 206 % set tok [http::geturl http://localhost/]
::http::2
(tcl) 207 % fconfigure $::http::2(sock)
-blocking 0 -buffering full -buffersize 8192 -encoding iso8859-1
-eofchar {{} {}} -translation {auto crlf} -peername {172.0.0.1
localhost 80} -sockname {127.0.0.1 localhost 2462}
(tcl) 208 % close $::http::2(sock)

However, here I'm using tkcon under tcl8.3 on Windows 2000, and while
the socket is open the response time for tkcon is really bad. I'm not
sure why.
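Sketching the http::register idea (untested, and "cachedSocket" is a purely hypothetical name): the registered handler hands back a cached channel instead of opening a fresh one, so the TLS handshake would happen only once. http::geturl would presumably still need the close override above to stop the package tearing the channel down after each request.

```tcl
package require http
package require tls

## Keep one channel around and reuse it while it is still open.
set ::cachedSock ""

proc cachedSocket {args} {
    ## Reopen only if we have no channel yet, or the old one has died
    ## (fconfigure errors out on a closed channel).
    if {$::cachedSock eq "" || [catch {fconfigure $::cachedSock}]} {
        set ::cachedSock [eval ::tls::socket $args]
    }
    return $::cachedSock
}

http::register https 443 cachedSocket
```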


Can you post a summary of your successes with SOAP and TLS? I'd like to
be able to do some testing of this but haven't yet had the time to work
my way through the SSL end of things.

OK, so that wasn't so short...

> The long version of the question follows. Thanks,
>
> David
> ---
> --
> -
>
> Long Version: Why I care --
>
> I'm using TclSOAP and TLS extensions to do SOAP over HTTP over SSL with
> client certificates. To my utter delight, everything works, and worked
> right from the start. Sweet! Now, being greedy, I want it to work fast.
> Here's the problem:
>
> Its my understanding that each and every call to a SOAP method (remote
> function) does the following:
>
> 1. Opens the socket
> 2. Sends / Receives SOAP over HTTP
> 3. Closes the socket

Yup. See above.

>
> If so, then when I use SSL and client certs the list grows for every single
> call:
>
> 1. Open the socket
> 2. Perform SSL Handshake
> ----(A) Verify Server
> ---------(Local file I/O on SVR/CA cert)
> ----(B) Verify Client
> ---------(Local file I/O on Client cert)
> ---------(Retrieve password for private key)
> ---------(Decrypt private key if encrypted)
> ---------(Send client public key)
> ----(C) Generate session key
> ----(D) Resend all encrypted with session key
> 3. Send SOAP over HTTP (over SSL)
> 4. Close socket.
>
> Step two (2) is pretty darned expensive to perform for each and every method
> call. Especially as it may perform multiple file I/O calls each time.
> Rather than jumping blindly forward, I thought I'd ask for
> opinions/thoughts/guidance:
>
> 1. Can we support persistent connections so that we avoid repeating the
> handshake for every call?
> 2. How? (or) Why not?

Maybe - see above.

>
> My guess is that a custom transport could be created in the SOAP.tcl file
> that requires (as a parameter) an HTTP handle, and then uses it. I'm not
> sure if this is the best approach, or what pitfalls it might create. I'll
> take a crack at this myself if nobody has any other ideas. Hopefully,
> somebody has some better ideas :-)

Certainly. Depending upon the reason for the slow response time I
mention above, I see no problem with enabling a persistent connection
for some cases. There is a move to put in a 'destroy' procedure to tear
down 'SOAP::create'-created methods. The channel destruction could be
done in this proc handily.

>
> On a side note, the TLS extension expects certificate data to be passed to
> it by file name. Anyone know a TCL way to fool it into getting this data
> from an internal script variable? If so, at least I could avoid the
> repetitive file I/O.

Feel free to post patches, comments etc. to the sourceforge project at
http://sourceforge.net/projects/tclsoap
Placing bug/feature requests helps to get them into the pipeline, and
future work can take such suggestions into consideration.

Cheers,

Pat Thoyts

David Bleicher

Nov 29, 2001, 3:39:07 PM
Pat,

Thanks for the analysis and suggestions. I'm studying them now and will
post comments as I figure things out. In the meantime, you asked:

> Can you post a summary of your successes with SOAP and TLS as I'd like
> to
> be able to do some testing of this but havn't yet had the time to work
> my way through the SSL end of things.

Sure can. I had created a number of SOAP functions on an Apache/Linux
server using PHP/SOAP4X (for some) and Perl/SOAPLite (for others). I happen
to be using Win2k (and sometimes WinXP) on the client side; both work the
same. For TCL, I am using ActiveState's Win2k binary build of TclPro
(1.5.0.1 beta) which is based on TCL 8.3.4. I'm using version 1.6.1 of your
excellent TclSOAP package, and version 1.4.1 of the TLS Win2k binary.

In order to test SSL capabilities, I wanted to keep the SOAP call simple
(trivial) and so created a service called "upit". Upit takes one string as
a parameter ("instring") and returns the upper-case equivalent. As an
aside, I'm not making any use of the URI yet, so the value I'm using is
bogus. So to start:

package require http
package require SOAP
SOAP::create upit -uri "Fake_URI" \
    -proxy "http://www.fakewebsite.com/soap/" \
    -params {"instring" "string"}
puts [upit "string to convert"]

This worked fine and (as you can guess) prints "STRING TO CONVERT" on the
console. My next step was to turn on SSL for the website (using mod_ssl)
and try the same service over HTTPS. To do this, I only had to add two
lines (and change the proxy to "https"):

package require http
package require SOAP
package require tls                       ;# use the tls package
http::register https 443 ::tls::socket    ;# register the https transport
SOAP::create upit -uri "Fake_URI" \
    -proxy "https://www.fakewebsite.com/soap/" \
    -params {"instring" "string"}
puts [upit "string to convert"]

This also worked! Almost too easy. My next step was to change the server
(again using mod_ssl) so that it required a client certificate for accessing
the URL (SOAP proxy). For the client certificate in question, I used a
PKCS12 cert that I had already installed in IE. To use it with TCL/TLS, I
had to export it (from the IE certificate store) and then convert it to
"PEM" format (using openssl). This created a single file that contained
both my private and public keys. When you create a pkcs12 file, you must
choose whether or not to password protect (encrypt) your private key. If
you don't encrypt it, then you only have to add one more line to the script
to make things work:

package require http
package require SOAP
package require tls
tls::init -certfile c:\\name_of_cert_file.pem   ;# tell tls where to find the certificate
http::register https 443 ::tls::socket
SOAP::create upit -uri "Fake_URI" \
    -proxy "https://www.fakewebsite.com/soap/" \
    -params {"instring" "string"}
puts [upit "string to convert"]

Everything still works. Two notes: if you don't use PKCS12, then the
private key is stored in a separate file. In that case, you will also need
to add a "tls::init -keyfile <path>" command. Also, not encrypting your
private key is a "Bad Idea(tm)". A better idea is to encrypt your private
key when you export it. If you do so, then the tls package will need to
know what the password is so that it can decrypt your private key. The tls
package allows you to redefine a procedure named "tls::password" in your
script, and will automatically call this proc whenever it needs to read a
private key that is encrypted. The tls package expects the proc to return
the password necessary. So, to support an encrypted private key in a pkcs12
cert, here's the finished program:

package require http
package require SOAP
package require tls
proc tls::password {} {
    ## Do something to prompt the user for a password
    return "this_is_the_password"
}
tls::init -certfile c:\\name_of_cert_file.pem
http::register https 443 ::tls::socket
SOAP::create upit -uri "Fake_URI" \
    -proxy "https://www.fakewebsite.com/soap/" \
    -params {"instring" "string"}
puts [upit "string to convert"]

That's it! Everything works, and I can now call my little function "upit".
The only caveat I can find is the one from my original post: the whole SSL
handshake process has to occur every time I use the upit function. In
other words, if I call upit six times in a row, I have to do the whole SSL
handshake six times. My next step is to study the suggestions you provided,
and try to figure out a way to reuse the socket so that I only perform the
handshake once.

Thanks again for your help,

David
---
--
-


Erik Leunissen

Nov 30, 2001, 3:09:17 AM
Confirmative message:

Some months ago I received an answer from Brent Welch concerning an
issue closely related to the one raised in this thread. He stated that
the http package (release 2.3 at that time) does not support keep-alive
connections. The http package's functionality appears to be limited to
HTTP 1.0 features.

If someone manages to hack keep-alive support into the http package I'm
interested, because I also believe it would make SSL connections much
more efficient.

Greetings,

Erik Leunissen.
====================

Pat Thoyts

Nov 30, 2001, 9:33:17 AM
"David Bleicher" <mo...@geofinity.com> wrote in message news:<u0d79jn...@corp.supernews.com>...
[snip]

>
> This also worked! Almost too easy. My next step was to change the server
> (again using mod_ssl) so that it required a client certificate for accessing
> the URL (SOAP proxy). For the client certificate in question, I used a
> PKCS12 cert that I had already installed in IE. To use it with TCL/TLS, I
> had to export it (from the IE certificate store) and then convert it to
> "PEM" format (using openssl). This created a single file that contained
> both my private and public keys. When you create a pkcs12 file, you must
> choose whether or not to password protect (encrypt) your private key. If
> you don't encrypt it, then you only have to add one more line to the script
> to make things work:
>
> package require http
> package require SOAP
> package require tls
> tls::init -certfile c:\\name_of_cert_file.pem ## Tell tls where to find
> the certificate
> http::register https 443 ::tls::socket
> SOAP::create upit -uri "Fake_URI" \
> -proxy "https://www.fakewebsite.com/soap/" \
> -params {"instring" "string"}
> puts [upit "string to convert"]


This is an excellent example, thank you! I had no idea about this
certificate business but I'm glad it has been so painless.

I had better enhance the documentation :)

Pat Thoyts

David Bleicher

Nov 30, 2001, 12:29:42 PM
Erik Leunissen <e.leu...@hccnet.nl> wrote in message news:<3C073EAD...@hccnet.nl>...

> Confirmative message:
>
> Some month's ago I received an answer from Brent Welch concerning an
> issue closely related to the one raised in this thread. He stated that
> the http package (release 2.3 at that time) does not support keep-alive
> connections. The http-package's functionality appears to be limited to
> HTTP 1.0 features.
>
> If someone manages to hack keep-alive support into the http-package I'm
> interested, because I also believe they make SSL connections much more
> efficient.
>
> Greetings,
>
> Erik Leunissen.
> ====================
>

Erik,

Thanks for the note. You are right on the money and, unfortunately,
you have answered my question. I say unfortunately, because it wasn't
the answer I wanted. It is, of course, the >>right<< answer, and I
appreciate it.

Having played with direct socket access to my server last night, it
seems clear now that I would have to use keep-alive to accomplish what
I am trying to do. So far, I have gotten multiple requests/responses
to go over a single socket connection by reading/writing directly to
the tls socket. However, I haven't read the HTTP1.1 spec closely
enough to make this work consistently, or the way I want it to. I'll
keep plugging, but it seems likely that any solution I come up with
will be specific not only to the Apache server I use, but also to the
particular configuration of that server I have implemented. Bummer.
We'll see...

Pat,

I inadvertently broke this thread and posted my response to your
message under the heading "TclSOAP & SSL [LONG]". I hope you find it,
and find it valuable. I didn't cross-post it to the sourceforge site,
but will if you like. Just let me know. Ignoring persistent
connections, TclSOAP over HTTP over SSL works fine right now, with or
without client certificates. Thanks!

My current thinking is that I'll have to find a work-around for HTTP
keep-alive (per Erik's message above) before I can use the persistent
connection with TclSOAP. If I can figure this out, even just for my
own server configuration, I will then try to fit it back into SOAP and
pass along any code I come up with. That's a big if :) --

Thanks to both of you for your help,

David
---
--
-

David Bleicher

Nov 30, 2001, 4:47:05 PM
Pat.T...@bigfoot.com (Pat Thoyts) wrote in message
>
> This is an excellent example, thank you! I had no idea about this
> certificate business but I'm glad it has been so painless.
>
> I had better enhance the documentation :)
>
> Pat Thoyts

I'm happy to oblige. BTW: The examples I gave showed the use of
encryption and client authentication. They don't show how to validate
that the server is "who it claims to be". I was using my own server
and watching the logs, so I didn't worry about it at the time.

The TLS package supports server authentication as well with the
-cafile, -cadir, -request, and -require switches to the ::tls::init
command. While I haven't tried it myself, I suspect that one more
line of code is all it would take. Kudos to Matt Newman for the TLS
package, and to everyone behind the amazing OpenSSL toolkit.
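To illustrate that guess (an untested sketch; the CA-file path here is hypothetical), enabling server verification with the switches mentioned above might look like:

```tcl
package require http
package require tls

## -cafile names the CA certificate that signed the server's cert;
## -require 1 makes the handshake fail if the server doesn't verify.
tls::init -certfile c:\\name_of_cert_file.pem \
          -cafile   c:\\ca_cert.pem \
          -require  1
http::register https 443 ::tls::socket
```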

Feel free to use any of the examples in your own docs. If you'd like
a "TclSOAP over SSL HOWTO" I'd be happy to whip one up and submit it
to the sourceforge site, just let me know.

Pat Thoyts

Nov 30, 2001, 8:12:32 PM
mo...@geofinity.com (David Bleicher) writes:

That would be an excellent plan. Much better than having me translate
SSL ideas into anything comprehensible.

--
Pat Thoyts http://www.zsplat.freeserve.co.uk/resume.html
To reply, rot13 the return address or read the X-Address header.
PGP fingerprint 2C 6E 98 07 2C 59 C8 97 10 CE 11 E6 04 E0 B9 DD

Erik Leunissen

Dec 1, 2001, 6:48:15 AM
to Pat.T...@bigfoot.com, mo...@geofinity.com
Another thought that just came up:

Maybe some ideas about implementing persistent connections could be
derived from the code for the Tcl http server (the tclhttpd package,
also by Brent Welch).

Tclhttpd *does* support keep-alives. It does so by:

- investigating whether a socket should be closed or not, after a
response has been returned to the client;
- if not, it cleans up the state associated with the connection (i.e. a
socket reset instead of a close).

I think it may be worthwhile to take a closer look at the tclhttpd code
in order to get an idea as to what extent the concept may be applied to
the client side, maybe using Pat Thoyts's earlier suggestions in this
thread with respect to adjusted usage of the http procs.

Please note that I am unfamiliar with TclSOAP and how it relates to
HTTP, but I am (more or less) familiar with the combination of http and
tclhttpd for a client-server application.

Greetings,

Erik Leunissen.
==========================

Chang LI

Dec 1, 2001, 8:32:03 AM
"David Bleicher" <mo...@geofinity.com> wrote in message news:<u0d79jn...@corp.supernews.com>...
> ---

Where can I download SOAP to test your examples?

Chang

David Bleicher

Dec 1, 2001, 11:45:32 AM
"Chang LI" <chang...@hotmail.com> wrote in message
news:89cc6e1f.01120...@posting.google.com...

Chang,

The client code is written in TCL and is included (complete) in the post.
It requires:

The TclSOAP package available at: http://tclsoap.sourceforge.net/
The TLS package available at: http://tls.sourceforge.net
The OpenSSL toolkit (required by the TLS package) available at:
http://www.openssl.org/

The server code is written in PHP and runs on an Apache server with mod_ssl
(it's a private server, I cannot provide public access to it). There is no
download for the code, but here is the complete source:

<?php
include("class.soap_client.php");
include("class.soap_server.php");
$server = new soap_server;
$server->add_to_map("upit", array("string"), array("string"));

function upit($instring) {
    return strtoupper($instring);
}
$server->service($HTTP_RAW_POST_DATA);
?>

To use it, you'll need your own web server with:

PHP available at http://www.php.net/
The "SOAPX4" package available at http://dietrich.ganx4.com/soapx4/

If you want to rewrite the server code in TCL, please refer to Pat's
documentation on TclSOAP from the site above. I have not tested this yet,
but I'm guessing that the TCL server code (Pat, are you watching???) that
would match my example client code would look SOMETHING like:

## UNTESTED CODE -- May not work at all!
## --soap1.tcl-- Provides "upit" SOAP method
##
package require SOAP::Domain
SOAP::Domain::register -prefix /soap -namespace Fake_URI

proc Fake_URI::/upit {instring} {
    return [string toupper $instring]
}


I hope that answers your question.

David
---
--
-


David Bleicher

Dec 1, 2001, 12:50:02 PM
"Erik Leunissen" <e.leu...@hccnet.nl> wrote in message
news:3C08C37F...@hccnet.nl...
<snip>

> Tclhttpd *does* support keep-alives. It does so by:
>
> - investigating whether a socket should be closed or not, after a
> response has been returned to the client;
> - if not, it cleans up the state associated with the connection (i.e. a
> socket reset instead of a close).
>
<snip>

Erik,

Excellent suggestion, I'll take a look at the tclhttpd code when I get a
minute. My current obstacle is very close to what you are referring to. My
client code doesn't know how to (reliably) determine if the web server is
done sending for a particular request. This is probably just because I
still (shame on me) haven't examined the spec. Just to broadcast my
ignorance (I have plenty to spare :), here's a non-functional example. Note
the >>faulty<< "while" loops that read from the socket:

package require tls
set s [::tls::socket www.fakesitename.net 443]
fconfigure $s -buffering none
fconfigure stdout -buffering none
puts $s "GET /index.html HTTP/1.1"
puts $s "Host: www.fakesitename.net:443"
puts $s "\n"
while {[gets $s content] >= 0} { puts $content }
puts "\nREQUEST 2\n"
puts $s "GET /index.html HTTP/1.1"
puts $s "Host: www.fakesitename.net:443"
puts $s "\n"
while {[gets $s content] >= 0} { puts $content }
close $s

If you run this code, the "while" loop will just sit there continuously
trying to read the socket until it times out. This is because it doesn't
get an EOF from the gets statement (duh). If, however, you modify the while
loop and give it a test that >>accurately<< determines that the first
request is done sending, then everything works. Just to prove this (this is
not a solution, just a lame hack) you can modify the while loop to test the
"content" of the response. For example, I know that the last line in the
"index.html" file is "</html>". So, if I rewrite the while loops as:

while {[gets $s content] >= 0} {
    puts $content
    if {[string match "</html>*" $content]} {break}
}

It works. My server log clearly indicates that both requests went over the
same socket, and that the SSL handshake occurred only once. Of course, this
is NOT a good general purpose test <sheepish grin>

So, my current question is "What is the general purpose test?" In other
words, "How do I know when the server is done sending the first response?"
I suspect that if I were to "RTFM" on HTTP 1.1 I'd already know the answer,
but I haven't gotten there yet (again, shame on me). If you already know
the answer, and are willing to let me "cheat", I'd appreciate hearing it.
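For what it's worth, the general-purpose test HTTP/1.1 provides lives in the headers: the server either announces the body size in a Content-Length header, or uses the chunked transfer coding. A minimal sketch of the Content-Length case (assuming no chunked encoding, which would need extra handling):

```tcl
## Read one HTTP response from an open channel $s, stopping after
## exactly Content-Length bytes of body instead of waiting for EOF.
proc readResponse {s} {
    set length 0
    ## The status line and headers end at the first empty line.
    while {[gets $s line] > 0} {
        if {[regexp -nocase {^Content-Length:\s*(\d+)} $line -> n]} {
            set length $n
        }
    }
    ## Now read exactly $length bytes of body.
    ## (For binary-safe reading, fconfigure $s -translation binary first.)
    return [read $s $length]
}
```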

In the mean time, I'll keep plugging...

David
---
--
-


Erik Leunissen

Dec 1, 2001, 4:16:17 PM
David,

Again some thoughts interspersed into your original message:

>
> minute. My current obstacle is very close to what you are referring to. My
> client code doesn't know how to (reliably) determine if the web server is
> done sending for a particular request.

Maybe I missed something, but my first response is: why do you want to
code this yourself when there is the http package, where this problem
must have been addressed (regardless of whether the connection is
supposed to persist or not)?

As for the while loop you mention: the approach taken in http (and
tclhttpd, for that matter) is event-driven, which is fundamentally
different from using a while loop as in your example code.

I myself was initially unfamiliar with event-driven programming, and it
took me some time to grasp its advantages and to get accustomed to a
totally different way of thinking about program flow control. Right now,
I am quite enthusiastic about its efficiency for I/O-bound processes
like network applications.

For event-driven behaviour the code for the http package uses calls to
the following commands:
- fileevent
- after
Maybe it is wise to take a look at the man pages for these commands to
get acquainted with the Tcl event loop and how to use it for your
purpose.

An event-driven approach eliminates your problem of code that doesn't
know when to begin its tasks.

As for a condition to detect end of a response on the socket, I looked
into the source code of the http package. I'm not quite sure how this is
achieved.

Hope to have helped thus far,

Erik Leunissen
=====================

David Bleicher

Dec 2, 2001, 12:26:59 AM
"Erik Leunissen" <e.leu...@hccnet.nl> wrote in message
news:3C0948A1...@hccnet.nl...

> David,
>
> Again some thoughts interspersed into your original message:
>
> >
> > minute. My current obstacle is very close to what you are referring to.
My
> > client code doesn't know how to (reliably) determine if the web server
is
> > done sending for a particular request.
>
> Maybe I missed something, but my first response is: why do you want to
> code this yourself if there is the http package where this problem must
> have been adressed (regardless whether the connection is supposed to
> persist or not)?

Fair question. I don't want to code any of this myself. I've benefitted
from using the http package for years now, and would like nothing more than
to use it for this problem as well. That said, it's a fairly sophisticated
package and I'd hoped to start small by trying to understand the keep-alive
issue by itself, before trying to tackle the inner workings of the whole
http package. I set out to write a trivial script that just exercises
keep-alive, knowing that once I had that working, the great http package
would be there to do all the rest of the work for me.

I'm not saying that this is the best strategy, just the one I happened to
choose.

>
> As for the while loop, you mention:
> The approach taken in http (and tclhttpd for that matter) is
> event-driven, which is fundamentally different from using a while loop
> as in your example code.
>
> Because initially, I myself was unfamiliar with event-driven
> programming, it took me some time to grasp its advantages and to get
> accustomed to a totally different way of thinking about program flow
> control. Right now, I am quite enthusiastic about its efficiency for I/O
> bound processes like in network applications.
>
> For event-driven behaviour the code for the http package uses calls to
> the following commands:
> - fileevent
> - after
> Maybe it is wise to take a look at the man pages for these commands to
> get acquainted with the Tcl event loop and how to use it for your
> purpose.
>
> An event driven approach annihilates your problem of using code that
> doesn't know when to begin its tasks.

Yes, it does do that, and I fully agree that an event-driven approach is
better suited to network programming. The "fileevent" approach does solve
the problem of knowing when to start reading. Unfortunately, it shifts the
problem to another domain. With fileevent, I know when data is ready but I
don't know how much is ready, or which request the data belongs to. Here is
my previous (throwaway) code, re-implemented as an event-driven program:

package require tls
set s [::tls::socket www.fakesitename.net 443]

proc showIt {s} {
    set partial [read $s 10]
    puts -nonewline $partial
    if {[string length $partial] != 10} {
        close $s
        exit
    }
}

fileevent $s readable [list showIt $s]

## Now sending request 1
puts $s "GET /index.html HTTP/1.1"
puts $s "Host: www.fakesitename.net:443"
puts $s "\n"

## Now sending request 2
puts $s "GET /index2.html HTTP/1.1"
puts $s "Host: www.fakesitename.net:443"
puts $s "\n"

flush $s

vwait forever

Now, the above code works, and is quite fast. The connection is persistent
and used for both requests (verified). Unfortunately, what comes out is the
combined results of both requests. There is no event that indicates where
one response ends, and the other begins. I'm back to examining the content.
Faster code, nicer code, but still essentially the same problem.

> As for a condition to detect end of a response on the socket, I looked
> into the source code of the http package. I'm not quite sure how this is
> achieved.

I finally looked as well and the answer (as near as I can tell) is "It isn't
achieved". And if you think about it, why should it? Since the http
package only supports HTTP 1.0, and doesn't support keep-alive, there will
only ever be one response per socket. Since that is the case, ALL data that
shows up on that socket will be associated with that ONE request. In other
words, read everything the socket has into a single response.

The code that does the reading in the http package is at the bottom of the
"http::Event" procedure. In my copy of the code (v2.4) look at lines
717-736. I believe you will see code that does almost exactly what my
(newly event-driven) sample does above. If there isn't a callback
associated with the token, it just reads an arbitrary amount of data from
the socket, and appends it to the "body" element in the state array. This
code gets called (by fileevent) again and again until there is no data left.
It doesn't differentiate between multiple responses, because it knows (well,
Brent knew) that there is only one response per socket. Please take a look
if you have the time, I'd sure appreciate a second opinion.

Just to add insult to injury, there's one more nasty bit. In HTTP 1.0
(without keep-alive), the web server is kind enough to signal when it is
done sending data. The data can be fully read, and the socket closed
immediately. When keep-alive is present (e.g. HTTP 1.1), there's no EOF to
test for and the socket will stay open until the server-configured timeout
has occurred. As a result, there's actually a bug in my example code above.
The final call to "showIt" will sit there listening to the socket until the
timeout has occurred. Once it times out, it will display the final few
bytes of data and then exit gracefully. Until the timeout, however, it just
sits there. No test that I can identify (eof, fblocked, return length,
etc.) has offered a decent workaround. Any ideas?

So now that I'm on the "event" track (BTW: thanks for the suggestion, it
really is the right way to go) my next challenge is to figure out how to
split the bundle back into two responses. The only thing that comes to mind
is that each response in the bundle starts with the "HTTP/1.1" header line.
It would be easy to slice on this, and I think the spec supports doing that.
Any thoughts? As an aside, I'm making the assumption (and we all know what
a bad idea that is) that responses will be returned in the same order they
were requested. If not, then I have another problem. Any idea if the
standard provides this guarantee?
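Two data points that may help. First, HTTP/1.1 does guarantee ordering: a server must send responses to pipelined requests in the same order the requests were received, so slicing the stream sequentially is legitimate. Second, rather than slicing on "HTTP/1.1", the headers themselves say where each body ends, via Content-Length or the chunked transfer coding. An event-driven sketch along those lines (untested, and it assumes every response carries a Content-Length header with no chunked encoding):

```tcl
## Per-response state machine: collect headers until the blank line,
## then read exactly Content-Length bytes of body, then reset for the
## next response on the same channel.
set state headers
set length 0
set body ""

proc onReadable {s} {
    global state length body
    if {$state eq "headers"} {
        if {[gets $s line] < 0} { return }    ;# no complete line yet
        if {[regexp -nocase {^Content-Length:\s*(\d+)} $line -> n]} {
            set length $n
        } elseif {$line eq ""} {
            set state body                    ;# blank line ends headers
        }
    } else {
        append body [read $s [expr {$length - [string length $body]}]]
        if {[string length $body] == $length} {
            puts "--- one complete response ---"
            puts $body
            set state headers; set length 0; set body ""
        }
    }
}

fileevent $s readable [list onReadable $s]
```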

David
---
--
-

Erik Leunissen

Dec 2, 2001, 10:43:45 AM
to mo...@geofinity.com
David Bleicher wrote:
>

> Fair question. I don't want to code any of this myself. I've benefitted
> from using the http package for years now, and would like nothing more than
> to use it for this problem as well. That said, its a fairly sophisticated
> package and I'd hoped to start small by trying to understand the keep-alive
> issue by itself, before trying to tackle the inner workings of the whole
> http package. I set out to write a trivial script that just exercises
> keep-alive, knowing that once I had that working, the great http package
> would be there to do all the rest of the work for me.
>

I understand. That does make sense.


> ....


> I finally looked as well and the answer (as near as I can tell) is "It isn't
> achieved". And if you think about it, why should it? Since the http
> package only supports HTTP 1.0, and doesn't support keep-alive, there will
> only ever be one response per socket. Since that is the case, ALL data that
> shows up on that socket will be associated with that ONE request. In other
> words, read everything the socket has into a single response.

> ....

I believe that even if a connection is intended for a single request,
and the client simply reads all the data returned from the server, it
still needs to know when to stop reading from the socket.

> The code that does the reading in the http package is at the bottom of the
> "http::Event" procedure. In my copy of the code (v2.4) look at lines
> 717-736. I believe you will see code that does almost exactly what my
> (newly event-driven) sample does above. If there isn't a callback
> associated with the token, it just reads an arbitrary amount of data from
> the socket, and appends it to the "body" element in the state array. This
> code gets called (by fileevent) again and again
> until there is no data left.

^^^^^^^^^^^^^^^^^^^^^^^^^^^

If that is the case, then wouldn't it mean that you found your stop
condition right here?

> It doesn't differentiate between multiple responses, because it knows (well,
> Brent knew) that there is only one response per socket. Please take a look
> if you have the time, I'd sure appreciate a second opinion.

I'm going to. I'm getting quite interested in this problem.

>
> Just to add insult to injury, there's one more nasty bit. In HTTP 1.0
> (without keep-alive), the web server is kind enough to signal when it is
> done sending data. The data can be fully read, and the socket closed
> immediately. When keep-alive is present (e.g. HTTP 1.1), there's no EOF to
> test for

Darn!
This stop condition is different from `until there is no data left'.
We need to be sure about the stop condition that is actually being used.
I agree that if HTTP/1.0 relies on eof (because in that case the server
closes the socket), then we cannot use that condition for keep-alive
connections.

> and the socket will stay open until the server-configured timeout
> has occurred. As a result, there's actually a bug in my example code above.
> The final call to "showIt" will sit there listening to the socket until the
> timeout has occurred. Once it times-out, it will display the final few
> bytes of data and then exit gracefully. Until the timeout, however, it just
> sits there. No test that I can identify (eof, fblocked, return length,
> etc.) has offered a decent workaround. Any ideas?


Many tests can be devised, but I think it is wise not to violate the
specification.
I've dug into the code of Tclhttpd because it must tackle the same
problem when receiving a request on a keep-alive socket. Furthermore, I
looked for any signals that are sent to the client that could be used
for determination of a stop condition for reading.
(Line numbers below refer to the file httpd.tcl from the package
tclhttpd3.2.1.)


I think that the following is the best example that I could find in the
code:

tclhttpd resets the read callback by doing

fileevent $sock readable {}

when reading post data in proc HttpdReadPost. It appears that the
mime-headers (whatever that may be) give information in advance about
the content length. The value for the content length is used to
determine the size of the read.

I suspect that this same technique is used at the client side because
tclhttpd sends a header with information about the content-length. See
procedures Httpd_ReturnFile (line 1098) and Httpd_ReturnData (line
1145).

If the client pre-reads the header information, then it can know how
much data will follow.
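To sketch what that buys the client (the helper name here is mine, purely for illustration): with a Content-Length known in advance, "stop reading" becomes a byte count rather than an eof test.

```tcl
# Sketch: decide whether a counted response body is complete.
# A totalsize <= 0 means no Content-Length was seen, so the caller
# must fall back to the eof condition instead.
proc doneReading {currentsize totalsize} {
    return [expr {$totalsize > 0 && $currentsize >= $totalsize}]
}
```

In the read callback one could then do `fileevent $sock readable {}` once this returns true — the same trick tclhttpd uses — leaving the socket open for the next request.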

>
> So now that I'm on the "event" track (BTW: thanks for the suggestion, it
> really is the right way to go) my next challenge is to figure out how to
> split the bundle back into two responses. The only thing that comes to mind
> is that each response in the bundle starts with the "HTTP/1.1" header line.
> It would be easy to slice on this, and I think the spec supports doing that.
> Any thoughts?


Since we have already started digging into this, we could make it a
joint effort to:
1. come up with a design for keep-alive support in the Tcl http package
2. estimate the amount of time needed to implement it
3. do the coding.

As for step 1.
I'd really care for an adjustment/extension of the Tcl-http package
that:
- extends the functionality with the keep-alive feature
- does not violate the spec.
- works for your purpose
- works in combination with Tclhttpd

In any case I'm willing to commit to steps 1 and 2. Depending on the
outcomes of step 2 and the amount of spare time left for me, I'm willing
to contribute to coding. Mind that I am not a professional programmer.

How about this?


> As an aside, I'm making the assumption (and we all know what
> a bad idea that is) that responses will be returned in the same order they
> were requested. If not, then I have another problem. Any idea if the
> standard provides this guarantee?
>

I think this is a stage where we should not proceed without consulting
the specification.

Greetings,

Erik Leunissen.
===================

> David
> ---
> --
> -

Pat Thoyts

Dec 4, 2001, 11:33:01 AM
to
Here is a preliminary patch to the http 2.4 file from tcl8.3.4 which
adds HTTP keepalive support. The extra options are -keepalive
[true|false], -socketvar varname, -protocol version.

For example:

set tok1 [http::geturl $url/page.html -keepalive 1 -socketvar ::s]
set tok2 [http::geturl $url/pic1.gif -keepalive 1 -socketvar ::s]
set tok3 [http::geturl $url/pic2.gif -keepalive 0 -socketvar ::s]

For the first call, variable ::s is unset and so a new connection is
created and the HTTP data obtained. For the second and third calls ::s
contains the name of the channel and so this channel is re-used to fetch
the next two requests. In the final call we indicate that we don't
intend to re-use the channel and so it will be closed when the request
completes.

HTTP servers typically close an open connection after a short time if
it's not in use. When this happens the variable will be set to {} and so
the following sequence will correctly open the connection twice
(provided the timeout occurred).

set tok1 [http::geturl $url/page.html -keepalive 1 -socketvar ::s]
after 10000 ;# let the remote host timeout
set tok2 [http::geturl $url/pic1.gif -keepalive 0 -socketvar ::s]

I'm interested in comments about this user interface suggestion.
Note that I now collect the text data using binary translation so that
I can compare the provided Content-Length value with the amount received
and know when the data has been received. We need to perform an 'auto'
translation once the request has completed - Suggestions?
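One possibility I'm considering (untested sketch; the sample bytes are invented, and iso8859-1 is just the RFC default charset): collect the body in binary, then convert afterwards.

```tcl
# Sketch: emulate the 'auto' translation after the fact, once the
# whole counted body has arrived.
set raw "caf\xe9\r\nplain text"               ;# bytes as read with -translation binary
set body [encoding convertfrom iso8859-1 $raw]
set body [string map {\r\n \n \r \n} $body]   ;# crlf/cr -> lf, like 'auto'
```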

I also need to check that this doesn't mess up older servers using
HTTP/1.0 or HTTP/0.9.

Pat Thoyts


PGP fingerprint 2C 6E 98 07 2C 59 C8 97 10 CE 11 E6 04 E0 B9 DD


*** http.tcl.orig Tue Sep 11 02:52:22 2001
--- http.tcl Tue Dec 04 16:19:02 2001
***************
*** 161,166 ****
--- 161,169 ----
set state(error) [list $errormsg $errorInfo $errorCode]
set state(status) error
}
+ if {[info exists state(-socketvar)] && [info exists $state(-socketvar)]} {
+ set $state(-socketvar) {}
+ }
catch {close $state(sock)}
catch {after cancel $state(after)}
if {[info exists state(-command)] && !$skipCB} {
***************
*** 179,185 ****

# http::reset --
#
! # See documentaion for details.
#
# Arguments:
# token Connection token.
--- 182,188 ----

# http::reset --
#
! # See documentation for details.
#
# Arguments:
# token Connection token.
***************
*** 242,247 ****
--- 245,253 ----
-timeout 0
-type application/x-www-form-urlencoded
-queryprogress {}
+ -protocol 1.1
+ -keepalive 0
+ -socketvar {}
state header
meta {}
coding {}
***************
*** 257,263 ****
set state(charset) $defaultCharset
set options {-binary -blocksize -channel -command -handler
-headers \
-progress -query -queryblocksize -querychannel -queryprogress\
! -validate -timeout -type}
set usage [join $options ", "]
regsub -all -- - $options {} options
set pat ^-([join $options |])$
--- 263,269 ----
set state(charset) $defaultCharset
set options {-binary -blocksize -channel -command -handler
-headers \
-progress -query -queryblocksize -querychannel -queryprogress\
! -validate -timeout -type -protocol -keepalive -socketvar}
set usage [join $options ", "]
regsub -all -- - $options {} options
set pat ^-([join $options |])$
***************
*** 330,353 ****
set async ""
}

! # If we are using the proxy, we must pass in the full URL that
! # includes the server name.
!
! if {[info exists phost] && [string length $phost]} {
! set srvurl $url
! set conStat [catch {eval $defcmd $async {$phost $pport}} s]
! } else {
! set conStat [catch {eval $defcmd $async {$host $port}} s]
! }
! if {$conStat} {
!
! # something went wrong while trying to establish the connection
! # Clean up after events and such, but DON'T call the command callback
! # (if available) because we're going to throw an exception from here
! # instead.
! Finish $token "" 1
! cleanup $token
! return -code error $s
}
set state(sock) $s

--- 336,365 ----
set async ""
}

! # See if we are supposed to use a previously opened channel.
! upvar $state(-socketvar) s
! if {![info exists s] || $s == {}} {
!
! # If we are using the proxy, we must pass in the full URL that
! # includes the server name.
!
! if {[info exists phost] && [string length $phost]} {
! set srvurl $url
! set conStat [catch {eval $defcmd $async {$phost $pport}} s]
! } else {
! set conStat [catch {eval $defcmd $async {$host $port}} s]
! }
! if {$conStat} {
!
! # something went wrong while trying to establish the
! # connection Clean up after events and such, but DON'T
! # call the command callback (if available) because we're
! # going to throw an exception from here instead.
!
! Finish $token "" 1
! cleanup $token
! return -code error $s
! }
}
set state(sock) $s

***************
*** 402,408 ****
}

if {[catch {
! puts $s "$how $srvurl HTTP/1.0"
puts $s "Accept: $http(-accept)"
puts $s "Host: $host:$port"
puts $s "User-Agent: $http(-useragent)"
--- 414,420 ----
}

if {[catch {
! puts $s "$how $srvurl HTTP/$state(-protocol)"
puts $s "Accept: $http(-accept)"
puts $s "Host: $host:$port"
puts $s "User-Agent: $http(-useragent)"
***************
*** 662,668 ****
set s $state(sock)

if {[eof $s]} {
! Eof $token
return
}
if {[string equal $state(state) "header"]} {
--- 674,680 ----
set s $state(sock)

if {[eof $s]} {
! Eof $token 1
return
}
if {[string equal $state(state) "header"]} {
***************
*** 686,692 ****
set idx [lsearch -exact $encodings \
[string tolower $state(charset)]]
if {$idx >= 0} {
! fconfigure $s -encoding [lindex $encodings $idx]
}
}
if {[info exists state(-channel)] && \
--- 698,705 ----
set idx [lsearch -exact $encodings \
[string tolower $state(charset)]]
if {$idx >= 0} {
! fconfigure $s -encoding [lindex $encodings $idx] \
! -translation {binary crlf}
}
}
if {[info exists state(-channel)] && \
***************
*** 727,732 ****
--- 740,748 ----
if {$n >= 0} {
incr state(currentsize) $n
}
+ if {$state(currentsize) >= $state(totalsize)} {
+ Eof $token
+ }
} err]} {
Finish $token $err
} else {
***************
*** 799,805 ****
# Side Effects
# Clean up the socket

! proc http::Eof {token} {
variable $token
upvar 0 $token state
if {[string equal $state(state) "header"]} {
--- 815,821 ----
# Side Effects
# Clean up the socket

! proc http::Eof {token {force 0}} {
variable $token
upvar 0 $token state
if {[string equal $state(state) "header"]} {
***************
*** 809,815 ****
set state(status) ok
}
set state(state) eof
! Finish $token
}

# http::wait --
--- 825,835 ----
set state(status) ok
}
set state(state) eof
! if {$state(-keepalive) && ! $force} {
! catch {after cancel $state(after)}
! } else {
! Finish $token
! }
}

# http::wait --

Erik Leunissen

Dec 4, 2001, 12:53:08 PM
to Pat Thoyts
Pat,

Thanks for coming up with a concrete proposal.

First comments:

1. The interface using the options -keepalive, -socketvar is quite clear
to me.

2. As far as my limited overview reaches (N.B.): I didn't detect
anything you didn't cover.

3. I missed what happened with the case, where the [gets $s line]
receives zero bytes (in proc http::Event). In the original code three
cases were distinguished:
$n == -1
$n == 0
$n > 0

In the code you supplied below, I see that the latter case has been
replaced with the condition $n >= 0, and I can't see what happened to
the original code for the case $n == 0.

Another matter: how about sending your patch for comment to Brent Welch
at some stage?

Greetings,

Erik Leunissen.
--

Replace `fake' by `hccnet' to obtain the real e-mail address.

Pat Thoyts wrote:



Pat Thoyts

Dec 4, 2001, 1:53:57 PM
to
Pat.T...@bigfoot.com (Pat Thoyts) writes:

>Here is a preliminary patch to the http 2.4 file from tcl8.3.4 which

[snip]

Hmm. I think that patch has been a bit mangled by the web posting
software. Once again:

For example:

--
Pat Thoyts http://www.zsplat.freeserve.co.uk/resume.html
To reply, rot13 the return address or read the X-Address header.

David Bleicher

Dec 4, 2001, 10:37:13 PM
to
Pat,

First, thanks! Second, wow!

The patch works. I've tested it (from windows -> Unix/Apache) in a number
of scenarios, all successfully. It works with tls (e.g. SSL), plain or with
client certificate authentication. In the latter case (with certs) the
speed-up for subsequent calls is dramatic. I haven't benchmarked it yet,
but subjectively it makes a BIG difference.

I've also tested this with dynamic (chunked transfer) php content. I was
sure this was gonna be a gotcha (silly me), but it worked like a champ. The
toughest test was all of the above (dynamic content over SSL with client
cert auth) and it sailed through. The speed-up here was just as noticeable
with the first connection "seeming" to take a second or so, and subsequent
connections "seeming" instant. In all cases, my server log confirmed the
persistent connections were working:

[04/Dec/2001 21:09:11 19945] [info] Awaiting re-negotiation handshake
[04/Dec/2001 21:09:11 19945] [info] Connection: Client IP: xx.xx.xx.xx,
Protocol: SSLv3, Cipher: EDH-RSA-DES-CBC3-SHA (168/168 bits)
[04/Dec/2001 21:09:13 19945] [info] Subsequent (No.2) HTTPS request
received for child 3 (server www.xxx.xxx:443)
[04/Dec/2001 21:09:14 19945] [info] Subsequent (No.3) HTTPS request
received for child 3 (server www.xxx.xxx:443)
[04/Dec/2001 21:09:15 19945] [info] Subsequent (No.4) HTTPS request
received for child 3 (server www.xxx.xxx:443)
[04/Dec/2001 21:09:31 19945] [info] Connection to child 3 closed with
standard shutdown (server www.xxx.xxx:443, client xx.xx.xx.xx)

> HTTP servers typically close an open connection after a short time if it's
> not in use. When this happens the variable will be set to {} and so the
> following sequence will correctly open the connection twice (provided the
> timeout occurred).

Yep. Sure does...

> I'm interested in comments about this user interface suggestion.

The interface makes perfect sense to me. My one thought regards the
addition of both the -protocol and -keepalive switches. I understand the
need for both, and appreciate the default for the -protocol switch. That
said, the keep-alive support in HTTP 1.0 (kinda, sorta) with the Connection:
Keep-Alive and the Keep-Alive headers may cause some confusion. Someone
looking at the interface as it stands might assume that they could gain 1.0
spec Keep-Alive support by specifying -protocol 1.0 -keepalive 1 on the
request. I'm not sure why anyone would want to do this (unless they are
forced to use an ancient server?) but my testing indicates that this doesn't
work. Specifically, when specifying -protocol 1.0, connections do NOT
persist, regardless of the value of -keepalive. Note, the connections still
work fine, they just don't persist.

I can't see the value in hacking 1.0 keep-alive support into the package.
I'd feel very comfortable with a documentation statement that "keep-alive
support is provided for HTTP/1.1 requests only". Someone else may disagree,
though again, I can't imagine why.

> Note that I now collect the text data using binary translation so that
> I can compare the provided Content-Length value with the amount received
> and know when the data has been received. We need to perform an
> 'auto' translation once the request has completed - Suggestions?

I'll take a look if Erik doesn't beat me to it.

>
> I also need to check that this doesn't mess up older servers using
> HTTP/1.0 or HTTP/0.9.
>

Hmmm... I'll see if I can find one. Any idea what might still be in use
that we need to consider? I'm guessing that the issue will be the default
value of -protocol. Currently, you have this set to 1.1 regardless of
whether or not the user wishes to perform keep-alive (or even provides the
extended parameters). For backwards compatibility, perhaps we should
default to 1.0 (current http package value) and introduce logic to set it to
1.1 only if keep-alive is requested?
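Something like this, perhaps (sketch only; the proc name is invented and the wiring into geturl's option parsing is hand-waved):

```tcl
# Sketch of the suggested default: HTTP/1.0 for backwards compatibility,
# bumped to 1.1 only when the caller asks for keep-alive, unless an
# explicit -protocol value was given (which always wins).
proc defaultProtocol {keepalive {explicit ""}} {
    if {$explicit ne ""} { return $explicit }
    return [expr {$keepalive ? "1.1" : "1.0"}]
}
```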

On a related topic, the current patch has a side effect that if I use the
old syntax for geturl (i.e. without specifying the -socketvar or -keepalive
switches) the first request works, but subsequent requests break with the
message 'can not find channel named "sockXXX"' where XXX is (presumably)
the socket number from the previous call. I'm guessing this means
that -socketvar is not being unset when using the old syntax. I'll take a
look at this tomorrow if I get the chance.

Once again, thanks for doing this. I was way off on a tangent (Erik, to his
credit, was trying to put me back on track) and who knows how long it would
have been before I even started to approach an answer. Imagine my delight,
after going away for a day, to come back and find it all done. Nice.

David
---
--
-

Pat Thoyts

Dec 5, 2001, 5:30:10 AM
to
"David Bleicher" <mo...@geofinity.com> wrote in message news:<u0r5jb2...@corp.supernews.com>...

> Pat,
>
> First, thanks! Second, wow!
>
> The patch works.
[snip]

Excellent!

> The interface makes perfect sense to me. My one thought regards the
> addition of both the -protocol and -keepalive switches. I understand the
> need for both, and appreciate the default for the -protocol switch. That
> said, the keep-alive support in HTTP 1.0 (kinda, sorta) with the Connection:
> Keep-Alive and the Keep-Alive headers may cause some confusion. Someone
> looking at the interface as it stands might assume that they could gain 1.0
> spec Keep-Alive support by specifying -protocol 1.0 -keepalive 1 on the
> request. I'm not sure why anyone would want to do this (unless they are
> forced to use an ancient server?) but my testing indicates that this doesn't
> work. Specifically, when specifying -protocol 1.0, connections do NOT
> persist, regardless of the value of -keepalive. Note, the connections still
> work fine, they just don't persist.

That's because I didn't get around to inserting a Connection: Keep-alive
header for the 1.0 case. I've now tried this with Apache and it works OK. My
feeling is that 1.1 should be the default as most servers now use this. I could
be wrong though.

>
> I can't see the value in hacking 1.0 keep-alive support into the package.
> I'd feel very comfortable with a documentation statement that "keep-alive
> support is provided for HTTP/1.1 requests only". Someone else may disagree,
> though again, I can't imagine why.

Didn't take much :)

if { $state(-protocol) == 1.0 && $state(-keepalive)} {
puts $s "Connection: Keep-Alive"
}

> On a related topic, the current patch has a side effect that if I use the
> old syntax for geturl (i.e. without specifying the -socketvar or -keepalive
> switches) the first request works, but subsequent requests break with the
> message 'can not find channel named "sockXXX"' where XXX is (presumably)
> the socket number from the previous call. I'm guessing this means
> that -socketvar is not being unset when using the old syntax. I'll take a
> look at this tomorrow if I get the chance.

Doh! I broke it at the last minute when I started tidying up. Fixed now. I'll
issue another patch today for comment and then submit it to SourceForge Tcl for
discussion there.

Must check the proxying functionality too.

Pat Thoyts

Pat Thoyts

Dec 5, 2001, 4:44:11 PM
to
OK, version 2 of a patch to provide HTTP/1.1 protocol support
to the Tcl http package - in particular for persistent connection
support.

The changes involve sending a new header or not depending upon the
protocol version and checking for both end-of-file and the counted end
of data using the Content-Length header. The content conversions have
had to be moved to the Eof procedure so that we can correctly count
the number of bytes received.

I have run the tcl8.3.4/tests/http.test with this and passed all except
those I expect to fail. Plus I've been running TkChat using the patched
http package without any trouble.

For the new usage:

set tok1 [http::geturl $url/page.html -keepalive 1 -socketvar ::s]
set tok2 [http::geturl $url/pic1.gif -keepalive 1 -socketvar ::s]
set tok3 [http::geturl $url/pic2.gif -keepalive 0 -socketvar ::s]

I've been checking strange charsets too using Apache's index.html.ru.utf8
and so on. These are all fine so far.

I fixed some typos too!

Please give it a try with your favourite Tcl web application (I've tried
TclSOAP already!) and let me know of any problems. I'll then submit this
to sourceforge - or do we have to do a TIP for this sort of thing?



Pat Thoyts http://www.zsplat.freeserve.co.uk/resume.html
To reply, rot13 the return address or read the X-Address header.

PGP fingerprint 2C 6E 98 07 2C 59 C8 97 10 CE 11 E6 04 E0 B9 DD

*** http.tcl.orig Tue Sep 11 02:52:22 2001
--- http.tcl Wed Dec 05 15:48:18 2001
***************
*** 21,26 ****
--- 21,28 ----
# "ioerror" status in favor of raising an error
# 2.4 Added -binary option to http::geturl and charset element
# to the state array.
+ # 2.5 Added HTTP/1.1 support for persistent connections. New options
+ # -protocol, -keepalive, -socketvar.

package require Tcl 8.2
# keep this in sync with pkgIndex.tcl
***************
*** 65,71 ****

# http::register --


#
! # See documentaion for details.
#
# Arguments:

# proto URL protocol prefix, e.g. https
--- 67,73 ----

# http::register --


#
! # See documentation for details.
#
# Arguments:

# proto URL protocol prefix, e.g. https
***************
*** 100,106 ****

# http::config --


#
! # See documentaion for details.
#
# Arguments:

# args Options parsed by the procedure.
--- 102,108 ----

# http::config --


#
! # See documentation for details.
#
# Arguments:

# args Options parsed by the procedure.
***************
*** 161,166 ****
--- 163,171 ----


set state(error) [list $errormsg $errorInfo $errorCode]
set state(status) error
}
+ if {[info exists state(-socketvar)] && [info exists $state(-socketvar)]} {
+ set $state(-socketvar) {}
+ }
catch {close $state(sock)}
catch {after cancel $state(after)}
if {[info exists state(-command)] && !$skipCB} {
***************
*** 179,185 ****

# http::reset --
#
! # See documentaion for details.
#
# Arguments:
# token Connection token.

--- 184,190 ----



# http::reset --
#
! # See documentation for details.
#
# Arguments:
# token Connection token.
***************
*** 242,247 ****

--- 247,256 ----


-timeout 0
-type application/x-www-form-urlencoded
-queryprogress {}
+ -protocol 1.1
+ -keepalive 0
+ -socketvar {}

+ binary false


state header
meta {}
coding {}
***************
*** 257,263 ****
set state(charset) $defaultCharset
set options {-binary -blocksize -channel -command -handler -headers \
-progress -query -queryblocksize -querychannel -queryprogress\
! -validate -timeout -type}
set usage [join $options ", "]
regsub -all -- - $options {} options
set pat ^-([join $options |])$

--- 266,272 ----

--- 339,370 ----


set async ""
}

! # See if we are supposed to use a previously opened channel.

! if {$state(-socketvar) != {}} {

*** 402,411 ****


}

if {[catch {
! puts $s "$how $srvurl HTTP/1.0"
puts $s "Accept: $http(-accept)"
puts $s "Host: $host:$port"
puts $s "User-Agent: $http(-useragent)"

foreach {key value} $state(-headers) {
regsub -all \[\n\r\] $value {} value
set key [string trim $key]
--- 419,434 ----


}

if {[catch {
! puts $s "$how $srvurl HTTP/$state(-protocol)"
puts $s "Accept: $http(-accept)"
puts $s "Host: $host:$port"
puts $s "User-Agent: $http(-useragent)"

+ if { $state(-protocol) == 1.0 && $state(-keepalive)} {
+ puts $s "Connection: Keep-Alive"
+ }
+ if { $state(-protocol) > 1.0 && ! $state(-keepalive) } {
+ puts $s "Connection: close" ;# RFC2616 sec 8.1.2.1
+ }
foreach {key value} $state(-headers) {
regsub -all \[\n\r\] $value {} value
set key [string trim $key]
***************
*** 662,693 ****


set s $state(sock)

if {[eof $s]} {
! Eof $token
return
}
if {[string equal $state(state) "header"]} {

if {[catch {gets $s line} n]} {
Finish $token $n
} elseif {$n == 0} {
! variable encodings
set state(state) body
if {$state(-binary) || ![regexp -nocase ^text $state(type)] || \
[regexp gzip|compress $state(coding)]} {
# Turn off conversions for non-text data
! fconfigure $s -translation binary
if {[info exists state(-channel)]} {
fconfigure $state(-channel) -translation binary
}
- } else {
- # If we are getting text, set the incoming channel's
- # encoding correctly. iso8859-1 is the RFC default, but
- # this could be any IANA charset. However, we only know
- # how to convert what we have encodings for.
- set idx [lsearch -exact $encodings \
- [string tolower $state(charset)]]
- if {$idx >= 0} {
- fconfigure $s -encoding [lindex $encodings $idx]
- }


}
if {[info exists state(-channel)] && \

![info exists state(-handler)]} {
--- 685,710 ----


set s $state(sock)

if {[eof $s]} {
! Eof $token 1
return
}
if {[string equal $state(state) "header"]} {

if {[catch {gets $s line} n]} {
Finish $token $n
} elseif {$n == 0} {
! # We ignore HTTP/1.1 100 Continue returns. RFC2616 sec 8.2.3
! if {[lindex $state(http) 1] == 100} {
! return
! }
set state(state) body
+ fconfigure $s -translation binary
if {$state(-binary) || ![regexp -nocase ^text $state(type)] || \
[regexp gzip|compress $state(coding)]} {
# Turn off conversions for non-text data
! set state(binary) true
if {[info exists state(-channel)]} {
fconfigure $state(-channel) -translation binary


}
}
if {[info exists state(-channel)] && \

![info exists state(-handler)]} {
***************
*** 727,732 ****
--- 744,754 ----


if {$n >= 0} {
incr state(currentsize) $n
}

+ # If Content-Length - check for end of data.
+ if {$state(totalsize) > 0 \
+ && $state(currentsize) >= $state(totalsize)} {


+ Eof $token
+ }
} err]} {
Finish $token $err
} else {
***************
*** 799,805 ****
# Side Effects
# Clean up the socket

! proc http::Eof {token} {
variable $token
upvar 0 $token state
if {[string equal $state(state) "header"]} {

--- 821,827 ----


# Side Effects
# Clean up the socket

! proc http::Eof {token {force 0}} {
variable $token
upvar 0 $token state
if {[string equal $state(state) "header"]} {
***************

*** 808,820 ****
} else {


set state(status) ok
}
set state(state) eof
! Finish $token
}

# http::wait --

#
! # See documentaion for details.
#
# Arguments:
# token Connection token.

--- 830,868 ----
} else {
set state(status) ok
}
+
+ if {! $state(binary)} {
+
+ # If we are getting text, set the data's encoding
+ # correctly. iso8859-1 is the RFC default, but
+ # this could be any IANA charset. However, we
+ # only know how to convert what we have encodings
+ # for.
+
+ variable encodings
+ set idx [lsearch -exact $encodings \
+ [string tolower $state(charset)]]
+ if {$idx >= 0} {
+ set state(body) [encoding convertfrom \
+ [lindex $encodings $idx] \
+ $state(body)]
+ }
+
+ # Translate text line endings
+ set state(body) [string map {\r\n \n \r \n} $state(body)]
+ }
+

set state(state) eof
! if {$state(-keepalive) && ! $force} {
! catch {after cancel $state(after)}
! } else {
! Finish $token
! }
}

# http::wait --

#
! # See documentation for details.
#
# Arguments:
# token Connection token.
***************

*** 836,842 ****

# http::formatQuery --


#
! # See documentaion for details.

# Call http::formatQuery with an even number of arguments, where
# the first is a name, the second is a value, the third is another
# name, and so on.
--- 884,890 ----

# http::formatQuery --


#
! # See documentation for details.

# Call http::formatQuery with an even number of arguments, where
# the first is a name, the second is a value, the third is another
# name, and so on.

Erik Leunissen

Dec 6, 2001, 5:03:57 AM
to
Erik Leunissen wrote:

> ...

> 3. I missed what happened with the case, where the [gets $s line]
> receives zero bytes (in proc http::Event). In the original code three
> cases were distinguished:
> $n == -1
> $n == 0
> $n > 0

> ...

But Erik Leunissen's vision was blurred at the time he wrote that.
Remark withdrawn.

Greetings,

Erik.

lvi...@yahoo.com

Dec 7, 2001, 9:39:57 AM
to

According to Pat Thoyts <Cng.G...@ovtsbbg.pbz>:
: I'll then submit this

:to sourceforge - or do we have to do a TIP for this sort of thing?


It is up to the individual extension project whether they use TIP or
something similar, or whether they just accept enhancements and modifications.

Pop a note onto the tclhttpd mailing list and check.

--
"I know of vanishingly few people ... who choose to use ksh." "I'm a minority!"
<URL: mailto:lvi...@cas.org> <URL: http://www.purl.org/NET/lvirden/>
Even if explicitly stated to the contrary, nothing in this posting
should be construed as representing my employer's opinions.

Pat Thoyts

Dec 7, 2001, 4:34:05 PM
to
lvi...@yahoo.com writes:

>According to Pat Thoyts <Cng.G...@ovtsbbg.pbz>:
>: I'll then submit this
>:to sourceforge - or do we have to do a TIP for this sort of thing?
>
>
>It is up to the individual extension project whether they use TIP or
>something similar, or whether they just accept enhancements and modifications.
>
>Pop a note onto the tclhttpd mailing list and check.
>

This isn't for tclhttpd - this is for tcl core.

--

Chang LI

Dec 8, 2001, 12:20:12 PM
to
Pat Thoyts <Cng.G...@ovtsbbg.pbz> wrote in message news:<wkd71q3...@zsplat.freeserve.co.uk>...

Following is the result of running all.tcl (in tests) on NT with
Tcl 8.3.4.
What is wrong with the configuration? Does XMLRPC work on Windows?

==== xmlrpc-1.1 XMLRPC Method creation FAILED
==== Contents of test case:

XMLRPC::create xmlrpcTest -uri urn:xmlrpc-Test -proxy
http://localhost:8015/rpc/test -params { "arg" "string" } -name
"tests.xmlrpcTest" -transport ::XMLRPC::transport_test

==== Test generated error:
invalid command name "::::SOAP::xmlrpc_request"
---- Result should have been:
::xmlrpcTest
==== xmlrpc-1.1 FAILED


==== xmlrpc-1.2 XMLRPC cget URI value FAILED
==== Contents of test case:

catch {XMLRPC::cget ::xmlrpcTest -uri} result
set result

---- Result was:
invalid command name "SOAP::cget"
---- Result should have been:
urn:xmlrpc-Test
==== xmlrpc-1.2 FAILED


==== xmlrpc-1.3 Reset the URI value FAILED
==== Contents of test case:

catch {XMLRPC::configure ::xmlrpcTest -uri urn:new-xmlrpc-Test}
result
set result

---- Result was:
invalid command name "SOAP::configure"
---- Result should have been:
::xmlrpcTest
==== xmlrpc-1.3 FAILED


==== xmlrpc-1.4 XMLRPC cget the new URI value FAILED
==== Contents of test case:

catch {XMLRPC::cget ::xmlrpcTest -uri} result
set result

---- Result was:
invalid command name "SOAP::cget"
---- Result should have been:
urn:new-xmlrpc-Test
==== xmlrpc-1.4 FAILED


==== xmlrpc-2.1 XML generation with no arguments FAILED
==== Contents of test case:

catch {::xmlrpcTest} result
set result

---- Result was:
invalid command name "::xmlrpcTest"
---- Result should have been:
wrong # args: should be "xmlrpcTest arg"
==== xmlrpc-2.1 FAILED


==== xmlrpc-2.2 XML generation with one argument FAILED
==== Contents of test case:

if { ! [catch {::xmlrpcTest testParameter} result] } {
set result $::XMLRPC::testXML
}
set result

---- Result was:
invalid command name "::xmlrpcTest"
---- Result should have been:
<?xml version='1.0'?>
<methodCall><methodName>tests.xmlrpcTest</methodName><params><param><value><string>testParameter</string></value></param></params></methodCall>
==== xmlrpc-2.2 FAILED


==== xmlrpc-2.3 XML generation with two arguments FAILED
==== Contents of test case:

set failed [catch {::XMLRPC::configure ::xmlrpcTest -params {
"text" "string" "number" "double" }} result]
if { ! $failed } {
set failed [catch {::xmlrpcTest textParam 1.3} result]
}
if { ! $failed } {
set result $::XMLRPC::testXML
}
set result

---- Result was:
invalid command name "SOAP::configure"
---- Result should have been:
<?xml version='1.0'?>
<methodCall><methodName>tests.xmlrpcTest</methodName><params><param><value><string>textParam</string></value></param><param><value><double>1.3</double></value></param></params></methodCall>
==== xmlrpc-2.3 FAILED


==== xmlrpc-2.4 XML generation with array argument FAILED
==== Contents of test case:

set failed [catch {::XMLRPC::configure ::xmlrpcTest -params {nums
int()} } result]
if { ! $failed } {
set failed [catch {::xmlrpcTest {1 2 3}} result]
}
if { ! $failed } {
set result $::XMLRPC::testXML
}
set result

---- Result was:
invalid command name "SOAP::configure"
---- Result should have been:
<?xml version='1.0'?>
<methodCall><methodName>tests.xmlrpcTest</methodName><params><param><value><array><data><value><int>1</int></value><value><int>2</int></value><value><int>3</int></value></data></array></value></param></params></methodCall>
==== xmlrpc-2.4 FAILED


==== xmlrpc-2.5 XML generation with struct argument FAILED
==== Contents of test case:

set failed [catch {::XMLRPC::configure ::xmlrpcTest -params
{point struct} } result]
if { ! $failed } {
set failed [catch {::xmlrpcTest {x 1 y 2}} result]
}
if { ! $failed } {
set result $::XMLRPC::testXML
}
set result

---- Result was:
invalid command name "SOAP::configure"
---- Result should have been:
<?xml version='1.0'?>
<methodCall><methodName>tests.xmlrpcTest</methodName><params><param><value><struct><member><name>x</name><value><int>1</int></value></member><member><name>y</name><value><int>2</int></value></member></struct></value></param></params></methodCall>
==== xmlrpc-2.5 FAILED


==== xmlrpc-3.1 XMLRPC reply processing FAILED
==== Contents of test case:

proc replyProc {methodName xml} {
set xml "<methodResponse><params><param><value>FIXED</value>"
append xml "</param></params></methodResponse>"
return $xml
}
set failed [catch {::XMLRPC::configure ::xmlrpcTest -params {text
string num double} -replyProc [namespace current]::replyProc} result]
if { ! $failed } {
set failed [catch {::xmlrpcTest textParam 1.3} result]
}
set result

---- Result was:
invalid command name "SOAP::configure"
---- Result should have been:
FIXED
==== xmlrpc-3.1 FAILED


==== xmlrpc-3.2 reply post processing via -postProc FAILED
==== Contents of test case:

proc replyProc {methodName xml} {
set xml "<methodResponse><params>"
append xml "<param><value>FIXED</value></param>"
append xml "<param><value><double>1.3</double></value></param>"
append xml "</params></methodResponse>"
return $xml
}

proc postProc { methodName reply } {
return [lindex $reply 1]
}
set failed [catch {::XMLRPC::configure ::xmlrpcTest -replyProc
[namespace current]::replyProc -postProc [namespace
current]::postProc} result]
if { ! $failed } {
set failed [catch {::xmlrpcTest textParam 1.3} result]
}
set result

---- Result was:
invalid command name "SOAP::configure"
---- Result should have been:
1.3
==== xmlrpc-3.2 FAILED

xpath.test
all.tcl: Total 40 Passed 29 Skipped 0 Failed 11
Sourced 2 Test Files.
Files with failing tests: xmlrpc.test

Chang

Phil Dietz
Dec 11, 2001, 11:45:23 AM
One quibble about the keepalive patch:

If the remote side closes the keepalive socket, shouldn't the HTTP
package sense this and open a new socket instead of returning
state(status)=eof?

So code becomes clunky, as everyone must make the call twice now:
set x [http::geturl https://www.fedex.com/ -timeout 10000 -keepalive 1 -socketvar ::s]
upvar #0 $x state
# server must have disconnected our keepalive socket... send again
# (and reopen the keepalive socket)
if { $state(status) == "eof" } {
    set x [http::geturl https://www.fedex.com/ -timeout 10000 -keepalive 1 -socketvar ::s]
    upvar #0 $x state
}
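That double call could be hidden in a small helper so callers only write it once. A minimal sketch (not from the patch itself; it assumes the patched http package with the -keepalive and -socketvar options, and the geturl_retry name is made up):

```tcl
package require http

# Hypothetical wrapper around the patched http::geturl: if the server
# has already dropped the keepalive socket (status "eof"), clean up
# the stale token and retry once on a fresh connection.
proc geturl_retry {url args} {
    set tok [eval [list http::geturl $url] $args]
    upvar #0 $tok state
    if {[string equal $state(status) eof]} {
        http::cleanup $tok
        set tok [eval [list http::geturl $url] $args]
    }
    return $tok
}
```

A caller would then write `set x [geturl_retry https://www.fedex.com/ -timeout 10000 -keepalive 1 -socketvar ::s]` and never see the eof case.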

Other than that quibble, it works really well.

Phil

Pat Thoyts
Dec 11, 2001, 7:47:21 PM
ped...@west.com (Phil Dietz) writes:

>One quibble about the keepalive patch:
>
>If the remote side closes the keepalive socket, shouldn't the HTTP
>package sense this and open a new socket instead of returning
>state(status)=eof?
>
>So code will become clunky as everyone must do the call 2 times now:

(Replied via e-mail)

In brief: "Works for me."

--

Phil Dietz
Dec 12, 2001, 3:38:52 PM
Pat Thoyts <Cng.G...@ovtsbbg.pbz> wrote in message news:<wkofl5g...@zsplat.freeserve.co.uk>...

> ped...@west.com (Phil Dietz) writes:
>
> >One quibble about the keepalive patch:
> >
> >If the remote side closes the keepalive socket, shouldn't the HTTP
> >package sense this and open a new socket instead of returning
> >state(status)=eof?
> >
> >So code will become clunky as everyone must do the call 2 times now:
>
> (Replied via e-mail)
>
> In brief: "Works for me."

I'm thinking I didn't patch the file right.
I had to do it by hand, as 'patch' failed with both the http.tcl from
8.3.4 and 8.4.a.3.
