Since lots of people, including me, are having trouble using TLS
extension I am trying to use the following as a temporary workaround:
set fp [open "|openssl s_server -accept 9009" r+]; # (you will need a
server.pem file in the current working directory).
When I connect to localhost:9009 with a web browser, I am able to use
"gets $fp" to retrieve the request from the web browser. However, when
I send a response back to the browser, using "puts $fp 'html data'",
the browser will not receive the response until I close the connection
(close $fp). I tried "flush $fp", that didn't work. Any ideas how to
fix that?
Thanks
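For reference, the workaround can be wired up roughly like this. This is only a sketch and untested; the -quiet flag and the header-reading loop are my additions, not from the original post:

```tcl
# Assumes a server.pem in the current working directory, as above.
# -quiet keeps s_server's own handshake chatter out of the pipe.
set fp [open "|openssl s_server -accept 9009 -quiet" r+]
fconfigure $fp -translation crlf -buffering none

# Read the request line and headers; [gets] returns 0 at the
# blank line that terminates the header block.
while {[gets $fp line] > 0} {
    puts "request: $line"
}
```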
Hello,
I think the usage is the following, but I haven't tried it:
package require tls 1.5
set fp [::tls::socket -server acceptCmd -require 1 -cafile caPublic.pem \
    -certfile server.pem 9009]
There's more details at http://wiki.tcl.tk/2630
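A minimal accept handler to go with the [::tls::socket -server acceptCmd ...] call might look like this. Again a sketch, untested; acceptCmd and handleRead are the hypothetical names from above:

```tcl
package require tls 1.5

proc acceptCmd {sock addr port} {
    # One channel per incoming connection; read it line by line
    # from the event loop.
    fconfigure $sock -blocking 0 -translation auto
    fileevent $sock readable [list handleRead $sock]
}

proc handleRead {sock} {
    if {[eof $sock]} {
        close $sock
        return
    }
    if {[gets $sock line] >= 0} {
        # process $line here
    }
}

set srv [::tls::socket -server acceptCmd -require 1 \
    -cafile caPublic.pem -certfile server.pem 9009]
vwait forever
```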
When you say "will not receive", are you sure it's not rather "will
not display" ?
I mean, have you used a network sniffer, and seen the bytes sent over
the wire only in one big burst at the end ? If so, that's baffling. If
not, maybe you can clarify the HTTP part (not described in your post).
Depending on the headers in your HTTP response, the browser may (or
may not) accept to display things before the end: Content-Length vs.
Transfer-Encoding: chunked.
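The two framing options mentioned above could look like this (a sketch, untested; the status line and Content-Type value are illustrative, and $chan is assumed to be an already-open channel with crlf translation):

```tcl
# Option 1: Content-Length lets the browser render as soon as the
# announced number of body bytes has arrived.
proc sendWithLength {chan body} {
    puts $chan "HTTP/1.1 200 OK"
    puts $chan "Content-Type: text/html"
    puts $chan "Content-Length: [string length $body]"
    puts $chan ""
    puts -nonewline $chan $body
    flush $chan
}

# Option 2: chunked framing; each chunk is a hex size line followed
# by the data, and a zero-size chunk terminates the body.
proc sendChunked {chan body} {
    puts $chan "HTTP/1.1 200 OK"
    puts $chan "Content-Type: text/html"
    puts $chan "Transfer-Encoding: chunked"
    puts $chan ""
    puts $chan [format %x [string length $body]]
    puts $chan $body
    puts $chan 0
    puts $chan ""
    flush $chan
}
```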
-Alex
I experience the same symptom as described here, but only with specific
data, even if server and client live on the same host.
It is a very strange phenomenon:
Specific GIF images do not get delivered, while others do. This is very
reproducible on my system (Linux, TLS1.5, tcl8.4.17).
I'm sort of surprised because until a week ago I'd never seen this
behaviour, despite having used TLS for years.
I verified that these same images, when sent over a regular,
non-encrypted channel, do get delivered.
As of now I'm still unsure whether the congestion occurs at the server
side (sending the image), or at the client side (receiving the image).
A sniffer should reveal that ... to be continued, more experimenting
needed to narrow down the circumstances where this behaviour occurs.
Erik
--
leunissen@ nl | Merge the left part of these two lines into one,
e. hccnet. | respecting a character's position in a line.
By any chance, did you [fconfigure -translation binary] ?
A CRLF translation might insert or delete a byte and wreck the size
assumption...
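To make the point concrete, a sketch of sending an image with the translation disabled (untested; variable names are mine, and the separate [open]/[fconfigure] pair is used for Tcl 8.4 compatibility):

```tcl
# Without -translation binary, a 0x0a byte inside the GIF can be
# rewritten as 0x0d 0x0a on output, so the byte count on the wire
# no longer matches what the reader expects.
fconfigure $sock -translation binary

set f [open image.gif r]
fconfigure $f -translation binary
set data [read $f]
close $f

puts -nonewline $sock $data
flush $sock
```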
-Alex
I'm transferring images all the time: -translation binary has been taken
care of. The weird thing is that while most GIF images are being
received correctly, a few specific GIF images (ceteris paribus) are not
being delivered until I manually terminate the server process (the
sending process).
The cause of this oddity must lie somewhere else, deeper down.
Then let's dig :-)
Can you send me one of your offending GIFs, preferably a small one ?
Along with the server-side writer code of course.
-Alex
I'd guess that the problem is somewhere on the server side, on the
grounds that it's very easy to make sure that the client side is right
since it's written in Tcl, right? :-)
Donal.
Yes, the client side is tclhttp (see sourceforge), a variant of the
regular Tcl http client package
(http://sourceforge.net/projects/tclhttp1-1/).
However, I just found out that the congestion takes place at the client.
The GIF image (size: 61403 bytes) is read except for the last 4215 bytes.
The read handler is still registered (says [fileevent $sock readable]),
so why are these last 4215 bytes not being read? Inspecting with [file
readable $sock], the socket isn't readable any more !!! Hmpff (baffled)
Another interesting fact: running the client on a windows host, while
leaving the server where it has always been (on the same linux host
where the client is normally running), doesn't have the problematic
behaviour. It receives the "offending" GIF image properly.
The server side writer is a plain Httpd_ReturnData from the
tclhttpd3.5.1 package.
I send one of the offending GIF images to you by email because the news
group manager refuses it.
Please see also the news in my response to Donal's post in this thread.
Greetings,
Since the problem arises on a unix client, can you strace it ? Try to
identify the socket's fd with a preliminary run, so that you can add
"-e read=$fd" to the strace, so that we can see all the bytes...
-Alex
Yes, cool, especially because HTTP/1.1 connection keepalive is a
requirement for sensible use of SSL/TLS.
> Are we going to see the core http/1.0 package updated to http/1.1
> with code from this package anytime soon???
Good point.
Not really(*). That's why SSL has an SSL session ID in the initial
handshake.
(*) For some definition of "sensible" of course.
--
Darren New / San Diego, CA, USA (PST)
"That's pretty. Where's that?"
"It's the Age of Channelwood."
"We should go there on vacation some time."
Hi Darren,
I have never known otherwise than:
1. not keeping connections alive (as in HTTP/1.0) means: each
transaction consumes a connection (i.e. a connection is always closed
after a transaction has been completed).
2. SSL/TLS performs a (relatively expensive) handshake for each
connection created.
Conclusion
==========
Using SSL without connection keepalive means that the SSL handshake is
performed for each transaction (which is what I deem not sensible in
most cases).
Could you please indicate where my knowledge/logic is going wrong (and
possibly where an SSL session ID intervenes in the process)?
Thanks,
Sensible or not, it's expensive as it's a full bidirectional handshake
with a significant amount of computation (as the code uses public key
crypto to establish a session key). Keeping the connection alive avoids
these costs. HTTP session keys are something else; there's many layers
of complexity in HTTP, and the SSL part and the session part are in
different parts of the stack.
> Could you please indicate where my knowledge/logic is going wrong (and
> possibly where an SSL session ID intervenes in the process)?
In theory, it is possible to resume a previous session despite dropping
the socket. I wouldn't be at all surprised if that failed to work; it's
not necessary for HTTP (which is basically stateless, and for good
reason) and so probably isn't tested nearly as thoroughly as you might hope.
Donal.
Read the SSL spec. The session ID is like an SSL cookie that lets the
server pick up the SSL session with minimal handshaking (and no need to
regenerate shared secrets etc). I suspect it was specifically included
to support protocols that disconnect after every command like HTTP.
Sure, it's still more overhead than keeping the session open; hence my
footnote.
In my experience, it works just fine. It's exactly what SSL session IDs
are for, and it's been in there (as far as I remember) since day one,
since SSL was on HTTP long before keepalive was.
>
> set fp [open "|openssl s_server -accept 9009" r+]; # (you will need a
> server.pem file in the current working directory).
>
> When I connect to localhost:9009 with a web browser, I am able to use
> "gets $fp" to retrieve the request from the web browser. However, when
> I send a response back to the browser, using "puts $fp 'html data'",
> the browser will not receive the response until I close the connection
> (close $fp). I tried "flush $fp", that didn't work. Any ideas how to
> fix that?
>
Hello vitick,
I'm investigating this issue with the help of Alexandre Ferrieux.
Currently, we believe that a race condition related to the SSL secured
channel is causing the misbehaviour.
In my case, I use the Tcl tls1.5 extension. You appear not to use tls1.5
and nevertheless experience the same type of symptom as I do. Therefore,
the problem may be inside the openssl library which is a common factor
in both our cases. However, I'm unsure about the SSL functionality that
is employed at the client side of your setup.
In order to find this out, I would very much like to know about your
"openssl s_server" exercise above:
- which browser were you using?
- which platform were you running on?
Because you made the client connect to localhost, I assume that server
process and client process were both running under the same
(non-windows) operating system. If that wasn't always the case, please
let me know.
I'd be grateful for your cooperation,
Erik Leunissen
Erik,
I was able to make it work; it was just a matter of sending the
correct sequence of HTTP headers back to the browser.
I used Firefox. There was some extra SSL data that had to be read back
(to clear the buffer) after the browser received and displayed the
response. I didn't dwell much on it, just used "gets $fp" until all
data was absorbed.
It was a good educational experience. I realized that I can't really
use it for anything more than one connection at a time. I can't open a
socket for each incoming connection, so basically it seems pretty
useless for anything serious.
---Victor
Just so I'm clear: You've found you can't use the TLS extension to open
more than 1 SSL socket for each incoming connection at a time?!?! That
doesn't seem right! Can anyone corroborate?
We are not talking about the TLS extension. We are talking about calling
the openssl command from Tcl.
If you're always spawning a child binding to the same port, then I'd
say the error is rather between keyboard and chair ;-)
-Alex
So, basically, I cannot use the following code:
while {[gets $sock data] >= 0} {...} -- which is the code used most of
the time in examples on how to get the request data from the client.
I instead ended up using something like this:
set counter 0
while {![eof $sock]} {
    incr counter
    if {$counter >= 1000} {break}
    gets $sock data
    if {$data == ""} {
        continue
    }
    ...
    process the data
    ...
}
I hope this will get fixed, because timing out is not a good solution
(it hasn't happened to me yet with the timeout count set to 1000, but in
some cases it probably will). I want to read and process every
request correctly.
Of course, there is probably a better solution. That's where people
like Alex Ferrieux often come to the rescue. ;)
    if {[gets $sock data] < 0} {continue}
or
    gets $sock data
    if {$data == ""} {continue}
I was just too sleepy. :)
I'm flattered :-) , but unfortunately, even after an intensive trace
analysis with Erik, we've not yet formally managed to blame TLS. To my
knowledge, Erik is working on extracting a simpler subcase from his
https problem.
However, looking back at this thread, I realize that we drifted a bit
from your case, which may be different from Erik's after all, or even
better, have the very same root cause and be easier to identify !
If the failing case with TLS is as simple as it seems, maybe you can
get in contact with TLS's author via SF. I'm sorry not to offer as
much in-depth help as you were expecting, but I never used SSL (except
through the ssh executable), let alone TLS, and setting it up here is
a bit resource-consuming :-{
However, once you get an analysis that can be projected back on my lay-
Tcler's mindset (talking about fileevents and stacked channels), I'm
interested.
Best regards,
-Alex
Indeed.
As a first, random scenario I removed http from the equation; only to
find that the problematic behaviour didn't show up anymore. This was a
rough/random experiment, in which I did not take care to ensure that all
channel properties were the same as in the problem case (such as buffering).
So it appears that finding a reduced scenario that still exhibits the
problem requires more effort, especially a more systematic approach.
Need more time ...
Erik
There are two modifications that I would suggest; they may more
effectively prevent the empty reads (and hopefully confirm/deny
whether we're experiencing the same problem).
1. Use [fileevent] and a read handler
=====================================
#
# This makes your read commands execute
# only when the channel is readable
#
proc handleRead {sock} {
    if {[catch {gets $sock line} n] || [eof $sock]} {
        error "broken channel"
    }
    if {$n == 0} {
        # The infamous empty read occurred.
        # Simply bail out and let the next file event call
        # this read handler again, hopefully with more
        # substantial data.
        #
        # May be combined with the ($n < 0) case below
        #
        return
    }
    if {$n < 0} {
        # Incomplete line.
        # May be combined with the ($n == 0) case above
        #
        return
    }
    #
    # process $line here
    #
}
fileevent $sock readable [list handleRead $sock]
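One detail worth making explicit here (my addition, not part of the original suggestion): with a readable handler the channel should normally be non-blocking, otherwise [gets] can stall the whole event loop while waiting for a complete line:

```tcl
# Non-blocking, so [gets] returns -1 instead of blocking when only
# part of a line has arrived -- exactly the ($n < 0) case above.
fconfigure $sock -blocking 0
fileevent $sock readable [list handleRead $sock]
```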
2. Insert a wait/sleep into the read handler
============================================
proc handleRead {sock} {
    after 20 ;# <==== inserted
    if {[catch {gets $sock line} n] || [eof $sock]} {
        error "broken channel"
    }
    if {$n == 0} {
        # The infamous empty read occurred.
        # Simply bail out and let the next file event call
        # this read handler again, hopefully with more
        # substantial data.
        #
        # May be combined with the ($n < 0) case below
        #
        return
    }
    if {$n < 0} {
        # Incomplete line.
        # May be combined with the ($n == 0) case above
        #
        return
    }
    #
    # process $line here
    #
}
If the inserted [after] command affects the number of empty reads, then
we're probably experiencing the same race condition.
I'd be grateful if you tried these and reported back.
Erik
I will do that as soon as I get some free time. Should be soon.
---Victor
Release notes:
http://sourceforge.net/project/shownotes.php?release_id=585820&group_id=13248
Download:
http://sourceforge.net/project/showfiles.php?group_id=13248
Not in my case.
Erik.
Hello Erik,
Your suggestion worked. Using [fileevent] to wait for each line of
data reduced the number of empty reads to 2-3 instead of 200-400+.
Using [after 20] did not do anything. The only problem I have is with
Konqueror (I tested with Konqueror, Opera and Firefox).
Konqueror closes the connection after each request and does not honor
the "keep-alive" header.
Thanks,
---Victor
Hmm, sorry to be the naysayer, but I had the hope that together we'd
dig the bug out instead of burying it deeper :-)
From what you and Erik (and others) wrote, it looks like TLS is known
to yield empty reads. That alone should be enough to get us on a
furious bug-hunting session. Jeff, Pat, any taker ?
-Alex
Thanks for testing this Victor.
> Using [after 20] did not do anything.
And if you let the wait time vary (up to 100 ms), does that make a
difference?
> The only problem I have is with
> Konqueror (I tested with Konqueror, Opera and Firefox).
> Konqueror closes the connection after each request and does not honor
> the "keep-alive" header.
With respect to your original post:
> When I connect to localhost:9009 with a web browser, I am able to use
> "gets $fp" to retrieve the request from the web browser. However, when
> I send a response back to the browser, using "puts $fp 'html data'",
> the browser will not receive the response until I close the connection
> (close $fp). I tried "flush $fp", that didn't work. Any ideas how to
> fix that?
Do you still perceive this kind of choking behaviour?
Greetings,
Erik.
>
> Thanks,
>
> ---Victor
Yesterday I've managed to create a reduced setup that reproduces the
problem of the choking reads (along with the odd empty reads); no http
involved, just Tcl and TLS. I am in the process of cleaning it up and
writing the text for a bug report.
I expect that the maintainers will want to have this reduced setup (as
I know you agree).
It's too late to do it now, but I intend to send you the reduced setup,
(even before finishing the bug report), so you'll be the first to be
able to play with it.
Greetings,
Erik
> -Alex
Excellent, thanks Erik ! (and TIA for the strace ;-)
-Alex