
openssl from tcl


vit...@gmail.com

Mar 13, 2008, 4:34:13 PM
to
Hello all,

Since lots of people, including me, are having trouble using the TLS
extension, I am trying to use the following as a temporary workaround:

set fp [open "|openssl s_server -accept 9009" r+]; # (you will need a
server.pem file in the current working directory).

When I connect to localhost:9009 with a web browser, I am able to use
"gets $fp" to retrieve the request from the web browser. However, when
I send a response back to the browser, using "puts $fp 'html data'",
the browser will not receive the response until I close the connection
(close $fp). I tried "flush $fp", that didn't work. Any ideas how to
fix that?

Thanks
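An editorial aside: one frequent cause of exactly this symptom is Tcl's default full buffering on pipe channels. A minimal, untested sketch of the settings worth checking, assuming the same `openssl s_server` pipeline as above:

```tcl
# Open a bidirectional pipe to the openssl child
# (a server.pem file is needed in the current directory).
set fp [open "|openssl s_server -accept 9009" r+]

# Line-buffer the Tcl side of the pipe so each [puts] is pushed
# out immediately; the default full buffering only flushes on
# [close], which matches the symptom described above.
fconfigure $fp -buffering line -blocking 0
```

Even with this, openssl's own stdio buffering on its end of the pipe can still delay output; that half cannot be controlled from Tcl.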

David Gravereaux

Mar 13, 2008, 5:26:25 PM
to


Hello,

I think the usage is the following, but I haven't tried it:

package require tls 1.5
set fp [::tls::socket -server acceptCmd -require 1 -cafile caPublic.pem
-certfile server.pem 9009]

There are more details at http://wiki.tcl.tk/2630
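For completeness, a fuller sketch of how that server form is typically wired up (the handler names are illustrative and this is untested; the option values are taken from the snippet above):

```tcl
package require tls 1.5

# Called by Tcl for every incoming connection.
proc acceptCmd {chan addr port} {
    fconfigure $chan -buffering line -blocking 0
    fileevent $chan readable [list handleLine $chan]
}

# Read one line per readable event; close on EOF.
proc handleLine {chan} {
    if {[gets $chan line] >= 0} {
        puts "from $chan: $line"
    } elseif {[eof $chan]} {
        close $chan
    }
}

set srv [::tls::socket -server acceptCmd -require 1 \
    -cafile caPublic.pem -certfile server.pem 9009]
vwait forever    ;# enter the event loop
```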



USCode

Mar 19, 2008, 1:41:59 PM
to
What was the resolution here?
Is there a general problem with bugs in the TLS extension or just
problems USING the TLS extension? I naively assumed it was just a Tcl
wrapper over the OpenSSL library and so it should be relatively stable
but perhaps there's more involved there...?

Alexandre Ferrieux

Mar 19, 2008, 3:58:24 PM
to

When you say "will not receive", are you sure it's not rather "will
not display"?
I mean, have you used a network sniffer and seen the bytes sent over
the wire only in one big burst at the end? If so, that's baffling. If
not, maybe you can clarify the HTTP part (not described in your post).
Depending on the headers in your HTTP response, the browser may (or
may not) display the content before the end of the response:
Content-Length vs. Transfer-Encoding: chunked.

-Alex
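To make the header point concrete, here is a hedged sketch of a minimal response a browser can render without waiting for the connection to close (assuming the piped channel from the original post):

```tcl
set body "<html><body>hello</body></html>"

# An explicit Content-Length tells the browser exactly when the
# response is complete, so it need not wait for EOF.
puts $fp "HTTP/1.0 200 OK"
puts $fp "Content-Type: text/html"
puts $fp "Content-Length: [string length $body]"
puts $fp ""
puts $fp $body
flush $fp
```

(Note that [puts] appends a newline and the channel's -translation setting decides whether it goes out as LF or CRLF; browsers accept either for headers, but a byte-exact Content-Length matters for the body.)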

Erik Leunissen

Mar 19, 2008, 6:24:35 PM
to

I experience the same symptom as described here, but only with specific
data, even if server and client live on the same host.

It is a very strange phenomenon:
Specific GIF images do not get delivered, while others do. This is very
reproducible on my system (Linux, TLS1.5, tcl8.4.17).
I'm sort of surprised because until a week ago I've never seen this
behaviour while having used TLS for years.

I verified that these same images, sent over a regular, non-encrypted
channel, do get delivered.

As of now I'm still unsure whether the congestion occurs at the server
side (sending the image), or at the client side (receiving the image).

A sniffer should reveal that ... to be continued, more experimenting
needed to narrow down the circumstances where this behaviour occurs.

Erik
--
leunissen@ nl | Merge the left part of these two lines into one,
e. hccnet. | respecting a character's position in a line.

Alexandre Ferrieux

Mar 19, 2008, 7:33:26 PM
to

By any chance, did you [fconfigure -translation binary] ?
A CRLF translation might insert or delete a byte and wreck the size
assumption...

-Alex
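For reference, the configuration being asked about; it must be applied on both ends of the transfer, or a stray CRLF translation silently changes the byte count:

```tcl
# Pass the GIF bytes through untouched: no end-of-line
# translation and no character-encoding conversion.
fconfigure $sock -translation binary
```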

Erik Leunissen

Mar 20, 2008, 4:05:10 PM
to
>
> By any chance, did you [fconfigure -translation binary] ?
> A CRLF translation might insert or delete a byte and wreck the size
> assumption...
>

I'm transferring images all the time: -translation binary has been taken
care of. The weird thing is that while most GIF images are being
received correctly, a few specific GIF images (ceteris paribus) are not
being delivered until I manually terminate the server process (the
sending process).

The cause for this oddity must be found somewhere else/deeper

Alexandre Ferrieux

Mar 20, 2008, 4:30:38 PM
to
On Mar 20, 9:05 pm, Erik Leunissen <l...@the.footer.invalid> wrote:
> > By any chance, did you [fconfigure -translation binary] ?
> > A CRLF translation might insert or delete a byte and wreck the size
> > assumption...
>
> I'm transferring images all the time: -translation binary has been taken
> care of. The weird thing is that while most GIF images are being
> received  correctly, a few specific GIF images (ceteris paribus) are not
> being delivered until I manually terminate the server process (the
> sending process).
>
> The cause for this oddity must be found somewhere else/deeper

Then let's dig :-)
Can you send me one of your offending GIFs, preferably a small one ?
Along with the server-side writer code of course.

-Alex

Donal K. Fellows

Mar 20, 2008, 5:28:09 PM
to
Erik Leunissen wrote:
> The cause for this oddity must be found somewhere else/deeper

I'd guess that the problem is somewhere on the server side, on the
grounds that it's very easy to make sure that the client side is right
since it's written in Tcl, right? :-)

Donal.

Erik Leunissen

Mar 20, 2008, 6:10:58 PM
to
Donal K. Fellows wrote:
>
> I'd guess that the problem is somewhere on the server side, on the
> grounds that it's very easy to make sure that the client side is right
> since it's written in Tcl, right? :-)
>

Yes, the client side is tclhttp (see sourceforge), a variant of the
regular Tcl http client package
(http://sourceforge.net/projects/tclhttp1-1/).

However, I just found out that the congestion takes place at the client.
The GIF image (size: 61403 bytes) is being read up to the last 4215 bytes.

The read handler is still registered (says [fileevent $sock readable]),
so why are these last 4215 bytes not being read? Inspecting with [file
readable $sock], the socket isn't readable any more!!! Hmpff (baffled)

Another interesting fact: running the client on a windows host, while
leaving the server where it has always been (on the same linux host
where the client is normally running), doesn't have the problematic
behaviour. It receives the "offending" GIF image properly.

Erik Leunissen

Mar 20, 2008, 6:13:07 PM
to
Alexandre Ferrieux wrote:
>
> Then let's dig :-)
> Can you send me one of your offending GIFs, preferably a small one ?
> Along with the server-side writer code of course.
>

The server side writer is a plain Httpd_ReturnData from the
tclhttpd3.5.1 package.

I'll send one of the offending GIF images to you by email because the
newsgroup manager refuses it.

Please see also the news in my response to Donal's post in this thread.

Greetings,

USCode

Mar 20, 2008, 7:09:32 PM
to
Erik Leunissen wrote:
> Yes, the client side is tclhttp (see sourceforge), a variant of the
> regular Tcl http client package
> (http://sourceforge.net/projects/tclhttp1-1/).
>
I just took a look at the tclhttp1-1 package in Sourceforge and it adds
http/1.1 features to the current core http/1.0 package ... cool! Are we
going to see the core http/1.0 package updated to http/1.1 with code
from this package anytime soon???

Alexandre Ferrieux

Mar 21, 2008, 4:11:56 AM
to

Since the problem arises on a Unix client, can you strace it? Try to
identify the socket's fd with a preliminary run, so that you can add
"-e read=$fd" to the strace invocation and we can see all the bytes...

-Alex

Erik Leunissen

Mar 21, 2008, 4:45:44 AM
to
USCode wrote:
> I just took a look at the tclhttp1-1 package in Sourceforge and it adds
> http/1.1 features to the current core http/1.0 package ... cool!

Yes, cool especially because HTTP/1.1 connection keepalive is a
requirement for sensible use of SSL/TLS.

Are we
> going to see the core http/1.0 package updated to http/1.1 with code
> from this package anytime soon???

Good point.

Darren New

Mar 21, 2008, 12:11:44 PM
to
Erik Leunissen wrote:
> Yes, cool especially because HTTP/1.1 connection keepalive is a
> requirement for sensible use of SSL/TLS.

Not really(*). That's why SSL has an SSL session ID in the initial
handshake.


(*) For some definition of "sensible" of course.

--
Darren New / San Diego, CA, USA (PST)
"That's pretty. Where's that?"
"It's the Age of Channelwood."
"We should go there on vacation some time."

Erik Leunissen

Mar 21, 2008, 12:54:31 PM
to
Darren New wrote:
>
> Not really(*). That's why SSL has an SSL session ID in the initial
> handshake.
>

Hi Darren,

I have never known otherwise than:

1. not keeping connections alive (as in HTTP/1.0) means: each
transaction consumes a connection (i.e. a connection is always closed
after a transaction has been completed).

2. SSL/TLS performs a (relatively expensive) handshake for each
connection created.

Conclusion
==========
Using SSL without connection keepalive means that the SSL handshake is
performed for each transaction (which is what I deem not sensible in
most cases).


Could you please indicate where my knowledge/logic is going wrong (and
possibly where an SSL session ID intervenes in the process)?

Thanks,

Donal K. Fellows

Mar 21, 2008, 1:07:43 PM
to
Erik Leunissen wrote:
> Using SSL without connection keepalive means that the SSL handshake is
> performed for each transaction (which is what I deem not sensible in
> most cases).

Sensible or not, it's expensive as it's a full bidirectional handshake
with a significant amount of computation (as the code uses public key
crypto to establish a session key). Keeping the connection alive avoids
these costs. HTTP session keys are something else; there are many
layers of complexity in HTTP, and the SSL part and the session part
are in different parts of the stack.

> Could you please indicate where my knowledge/logic is going wrong (and
> possibly where an SSL session ID intervenes in the process)?

In theory, it is possible to resume a previous session despite dropping
the socket. I wouldn't be at all surprised if that failed to work; it's
not necessary for HTTP (which is basically stateless, and for good
reason) and so probably isn't tested nearly as thoroughly as you might hope.

Donal.

Darren New

Mar 21, 2008, 1:26:48 PM
to
Erik Leunissen wrote:
> Could you please indicate where my knowledge/logic is going wrong (and
> possibly where an SSL session ID intervenes in the process)?

Read the SSL spec. The session ID is like an SSL cookie that lets the
server pick up the SSL session with minimal handshaking (and no need to
regenerate shared secrets etc). I suspect it was specifically included
to support protocols that disconnect after every command like HTTP.

Sure, it's still more overhead than keeping the session open; hence my
footnote.

Darren New

Mar 21, 2008, 1:28:03 PM
to
Donal K. Fellows wrote:
> In theory, it is possible to resume a previous session despite dropping
> the socket. I wouldn't be at all surprised if that failed to work; it's
> not necessary for HTTP (which is basically stateless, and for good
> reason) and so probably isn't tested nearly as thoroughly as you might
> hope.

In my experience, it works just fine. It's exactly what SSL session IDs
are for, and it's been in there (as far as I remember) since day one,
since SSL was on HTTP long before keepalive was.

Erik Leunissen

Mar 26, 2008, 3:09:00 AM
to
vit...@gmail.com wrote:

>
> set fp [open "|openssl s_server -accept 9009" r+]; # (you will need a
> server.pem file in the current working directory).
>
> When I connect to localhost:9009 with a web browser, I am able to use
> "gets $fp" to retrieve the request from the web browser. However, when
> I send a response back to the browser, using "puts $fp 'html data'",
> the browser will not receive the response until I close the connection
> (close $fp). I tried "flush $fp", that didn't work. Any ideas how to
> fix that?
>

Hello vitick,

I'm investigating this issue with the help of Alexandre Ferrieux.
Currently, we believe that a race condition related to the SSL-secured
channel is causing the misbehaviour.

In my case, I use the Tcl tls1.5 extension. You appear not to use tls1.5
and nevertheless experience the same type of symptom as I do. Therefore,
the problem may be inside the openssl library which is a common factor
in both our cases. However, I'm unsure about the SSL functionality that
is employed at the client side of your setup.

In order to find this out, I would very much like to know about your
"openssl s_server" exercise above:

- which browser were you using?
- which platform were you running on?

Because you made the client connect to localhost, I assume that server
process and client process were both running under the same
(non-windows) operating system. If that wasn't always the case, please
let me know.


I'd be grateful for your cooperation,

Erik Leunissen

vit...@gmail.com

Mar 31, 2008, 10:51:59 PM
to
On Mar 26, 12:09 am, Erik Leunissen <l...@the.footer.invalid> wrote:
> vit...@gmail.com wrote:
>
> > set fp [open "|openssl s_server -accept 9009" r+]; # (you will need a

> > server.pem file in the current working directory).
>
> > When I connect to localhost:9009 with a web browser, I am able to use
> > "gets $fp" to retrieve the request from the web browser. However, when
> > I send a response back to the browser, using "puts $fp 'html data'",
> > the browser will not receive the response until I close the connection
> > (close $fp). I tried "flush $fp", that didn't work. Any ideas how to
> > fix that?
>
> Hello vitick,
>
> I'm investigating this issue with the help of Alexandre Ferrieux.
> Currently, we believe that a race condition related to the SSL secured
> channel is causing the misbehaviour.
>
> In my case, I use the Tcl tls1.5 extension. You appear not to use tls1.5

> and nevertheless experience the same type of symptom as I do. Therefore,
> the problem may be inside the openssl library which is a common factor

> in both our cases. However, I'm unsure about the SSL functionality that
> is employed at the client side of your setup.
>
> In order to find this out, I would very much like to know about your
> "openssl s_server" exercise above:

>
> - which browser you have been using?
> - which platform were you running on?
>
> Because you made the client connect to localhost, I assume that server
> process and client process were both running under the same
> (non-windows) operating system. If that wasn't always the case, please
> let me know.
>
> I'd be grateful for your cooperation,
>
> Erik Leunissen
> --
> leunissen@ nl | Merge the left part of these two lines into one,
> e. hccnet. | respecting a character's position in a line.

Eric,
I was able to make it work; it was just a matter of sending the
correct sequence of HTTP headers back to the browser.
I used Firefox. There was some extra SSL data that had to be read back
(to clear the buffer) after the browser received and displayed the
response; I didn't dwell much on it, just used "gets $fp" until all
data was absorbed.
It was a good educational experience. I realized that I can't really
use it for anything more than one connection at a time. I can't open a
socket for each incoming connection, so basically it seems pretty
useless for anything serious.

---Victor

USCode

Apr 1, 2008, 12:36:05 PM
to
vit...@gmail.com wrote:
> Eric,
> I was able to make it work, it was just a matter of sending the
> correct sequence of http headers back to the browser.
> I used Firefox. There was some extra SSL data that has to be read back
> (to clear the buffer) after the browser received and displayed the
> response, I didn't dwell much on it, just used "gets $fp" until all
> data was absorbed.
> It was a good educational experience. I realized that I can't really
> use it for anything more than one connection at a time. I can't open a
> socket for each incoming connection, so basically it seems pretty
> useless for anything serious.
>
> ---Victor

Just so I'm clear: You've found you can't use the TLS extension to open
more than 1 SSL socket for each incoming connection at a time?!?! That
doesn't seem right! Can anyone corroborate?

vit...@gmail.com

Apr 1, 2008, 12:50:22 PM
to

We are not talking about TLS. We are talking about calling the openssl
command from TCL.


Alexandre Ferrieux

Apr 1, 2008, 5:24:36 PM
to
> command from TCL.

If you're always spawning a child binding to the same port, then I'd
say the error is rather between keyboard and chair ;-)

-Alex

Message has been deleted

vit...@gmail.com

Apr 9, 2008, 10:51:25 AM
to
I have finally gotten back to the TLS code and there is definitely
some kind of race condition problem where you often get lots of empty
data at the beginning of the request. I have spent some time figuring
a way out of it, and the only way I could deal with it is to retry
"gets $sock" after each empty read until I get some good data. I
don't like that solution at all because it might loop indefinitely, so
I had to set up a counter and stop after some number of [gets $sock]
calls. The counter has not exceeded 500 so far, but you never know.

So, basically, I cannot use the following code:
while {[gets $sock data] >= 0} {...}
which is the code used in most examples of how to read request data
from the client.

I instead ended up using something like this:

set counter 0
while {![eof $sock]} {
    incr counter
    if {$counter >= 1000} {break}
    gets $sock data
    if {$data == ""} {
        continue
    }
    # ...
    # process the data
    # ...
}

I hope this will get fixed, because bailing out on a counter is not a
good solution (it hasn't happened yet with the limit set to 1000, but
in some cases it probably will). I want to read and process every
request correctly.
Of course, there is probably a better solution. That's where people
like Alex Ferrieux often come to the rescue. ;)

vit...@gmail.com

Apr 9, 2008, 11:06:26 AM
to
I have posted and removed one message where I had incorrect code.
Google lets you do that. For other systems, please disregard the code
where it says if {![gets $sock data] >= 0} {continue}
It should have been:

if {[gets $sock data] < 0} {continue}

or

gets $sock data
if {$data == ""} {continue}

I was just too sleepy. :)

Alexandre Ferrieux

Apr 9, 2008, 11:09:50 AM
to
On Apr 9, 4:51 pm, vit...@gmail.com wrote:
> Of course, there is probably a better solution. That's where people
> like Alex Ferrieux often come to the rescue. ;)

I'm flattered :-) , but unfortunately, even after an intensive trace
analysis with Erik, we've not yet formally managed to blame TLS. To my
knowledge, Erik is working on extracting a simpler subcase from his
https problem.

However, looking back at this thread, I realize that we drifted a bit
from your case, which may be different from Erik's after all, or even
better, have the very same root cause and be easier to identify!

If the failing case with TLS is as simple as it seems, maybe you can
get in contact with TLS's author via SF. I'm sorry not to offer as
much in-depth help as you were expecting, but I never used SSL (except
through the ssh executable), let alone TLS, and setting it up here is
a bit resource-consuming :-{

However, once you get an analysis that can be projected back onto my
lay-Tcler's mindset (talking about fileevents and stacked channels),
I'm interested.

Best regards,

-Alex

Erik Leunissen

Apr 9, 2008, 3:11:13 PM
to Alexandre Ferrieux
Alexandre Ferrieux wrote:
>
> I'm flattered :-) , but unfortunately, even after an intensive trace
> analysis with Erik, we've not yet formally managed to blame TLS. To my
> knowledge, Erik is working on extracting a simpler subcase from his
> https problem.
>

Indeed.

As a first, random scenario I removed http from the equation; only to
find that the problematic behaviour didn't show up anymore. This was a
rough/random experiment, in which I did not take care to ensure that all
channel properties were the same as in the problem case (such as buffering).

So it appears that finding a reduced scenario that still exhibits the
problem requires more effort, especially a more systematic approach.

Need more time ...

Erik

Erik Leunissen

Apr 9, 2008, 4:27:36 PM
to
vit...@gmail.com wrote:
> ... I have spent some time

> figuring a way out of it and the only way I could deal with it is to
> retry "gets sock" after each empty data until I get some good data.

There are two modifications that I would suggest, which may more
effectively prevent the empty reads (and hopefully confirm/deny
whether we're experiencing the same problem).

1. Use [fileevent] and a read handler
=====================================
#
# This makes your read commands execute
# only when the channel is readable
#
proc handleRead {sock} {
    if {[catch {gets $sock line} n] || [eof $sock]} {
        error "broken channel"
    }
    if {$n == 0} {
        # The infamous empty read occurred.
        # Simply bail out and let the next file event call
        # this read handler again, hopefully with more
        # substantial data.
        #
        # May be combined with the ($n < 0) case below
        #
        return
    }
    if {$n < 0} {
        # Incomplete line.
        # May be combined with the ($n == 0) case above
        #
        return
    }
    #
    # process $line here
    #
}
fileevent $sock readable [list handleRead $sock]


2. Insert a wait/sleep into the read handler
============================================
proc handleRead {sock} {
    after 20 ;# <==== inserted
    if {[catch {gets $sock line} n] || [eof $sock]} {
        error "broken channel"
    }
    if {$n == 0} {
        # The infamous empty read occurred.
        # Simply bail out and let the next file event call
        # this read handler again, hopefully with more
        # substantial data.
        #
        # May be combined with the ($n < 0) case below
        #
        return
    }
    if {$n < 0} {
        # Incomplete line.
        # May be combined with the ($n == 0) case above
        #
        return
    }
    #
    # process $line here
    #
}

If the inserted [after] command affects the number of empty reads, then
we're probably experiencing the same race condition.

I'd be grateful if you tried these and reported back.

Erik

vit...@gmail.com

Apr 12, 2008, 1:20:22 PM
to

I will do that as soon as I get some free time. Should be soon.

---Victor

xet7

Apr 13, 2008, 3:16:27 PM
to
Does tls 1.6 version, that fixes some bugs, help in these issues?

Erik Leunissen

Apr 13, 2008, 4:53:15 PM
to
xet7 wrote:
> Does tls 1.6 version, that fixes some bugs, help in these issues?
>

Not in my case.

Erik.

vit...@gmail.com

Apr 14, 2008, 4:28:44 PM
to
On Apr 9, 1:27 pm, Erik Leunissen <l...@the.footer.invalid> wrote:

Hello Eric,

Your suggestion worked. Using "fileevents" to wait for each line of
data dropped the number of empty reads to 2-3 instead of 200-400+.
Using [after 20] did not do anything. The only problem I have is with
Konqueror (I tested with Konqueror, Opera and Firefox).
Konqueror closes the connection after each request and does not honor
the "keep-alive" header.

Thanks,

---Victor

Alexandre Ferrieux

Apr 14, 2008, 5:05:45 PM
to

Hmm, sorry to be the naysayer, but I had the hope that together we'd
dig the bug out instead of burying it deeper :-)

From what you and Erik (and others) wrote, it looks like TLS is known
to yield empty reads. That alone should be enough to get us on a
furious bug-hunting session. Jeff, Pat, any taker ?

-Alex

Erik Leunissen

Apr 14, 2008, 5:08:14 PM
to
vit...@gmail.com wrote:
>
> Hello Eric,
>
> Your suggestion worked. Using "fileevents" to wait for each line of
> data stopped the number of empty reads to 2-3 instead of 200-400+.

Thanks for testing this Victor.


> Using [after 20] did not do anything.

And if you let the wait time vary (up to 100 ms), does that make a
difference?

> The only problem I have is with
> Konqueror (I tested with Konqueror, Opera and Firefox).
> Konqueror closes the connection after each request and does not honor
> the "keep-alive" header.

With respect to your original post:

> When I connect to localhost:9009 with a web browser, I am able to use
> "gets $fp" to retrieve the request from the web browser. However, when
> I send a response back to the browser, using "puts $fp 'html data'",
> the browser will not receive the response until I close the connection
> (close $fp). I tried "flush $fp", that didn't work. Any ideas how to
> fix that?

Do you still perceive this kind of choking behaviour?


Greetings,

Erik.


>
> Thanks,
>
> ---Victor

Erik Leunissen

Apr 14, 2008, 5:27:55 PM
to Alexandre Ferrieux
Alexandre Ferrieux wrote in c.l.t.:

>
> From what you and Erik (and others) wrote, it looks like TLS is known
> to yield empty reads. That alone should be enough to get us on a
> furious bug-hunting session. Jeff, Pat, any taker ?
>

Yesterday I managed to create a reduced setup that reproduces the
problem of the choking reads (along with the odd empty reads); no http
involved, just Tcl and TLS. I am in the process of cleaning it up and
writing the text for a bug report.

I expect that the maintainers will want to have this reduced setup (as
I know you agree).

It's too late to do it now, but I intend to send you the reduced setup
(even before finishing the bug report), so you'll be the first to be
able to play with it.


Greetings,

Erik


> -Alex

Alexandre Ferrieux

Apr 14, 2008, 5:37:18 PM
to

Excellent, thanks Erik ! (and TIA for the strace ;-)

-Alex
