On Mac OS X, I’ve unwrapped the latest tclhttpd starkit distribution
and replaced the htdocs directory with mine; then I wrapped everything
and copied it onto a Win XP box. If I try to download a .swf file,
nothing happens: tclhttpd is still responsive, but the request just hangs there.
However, if I start it with the -docRoot flag, everything works as
expected!
On Mac OS X it works correctly, whether using the starkit docroot or an
external one. Every other file type (text, movies) is served correctly
regardless of the OS and the docroot used.
I’m banging my head against the wall... Does anyone have any suggestions?
Thank you in advance!
--
Giorgio Valoti
Not an expert on tclhttpd, but a few questions:
* Can you use wget/curl to download a fake .swf file, or a real .swf
file?
* Or is the problem that an .swf application cannot access the .swf
file?
* What is wrong with using the -docRoot flag, if this makes everything
work?
Some web servers use OS-based mapping files to serve content; it is
possible that XP doesn't have the correct mapping, or that the mapping is
missing on XP or on your particular OS installation. (I know some Perl
scripts do this.)
> On Jun 16, 8:37 am, Giorgio Valoti <giorgi...@me.com> wrote:
>> Hi all,
>> I’m getting a very weird bug(?) with tclhttpd.
>>
>> On Mac OS X, I’ve unwrapped the latest tclhttpd starkit distribution
>> and replaced the htdocs directory with mine; wrapped everything and
>> copied it onto a Win XP box. If I try to download a .swf file nothing
>> happens: tclhttpd is still responsive but the request just hangs there.
>> However, if I start it with the -docRoot flag everything works as
>> expected!
>>
>> On Mac OS X it works correctly, whether using the starkit docroot or an
>> external one. Every other file type (text, movies) is served correctly
>> regardless of the OS and the docroot used.
>>
>> I’m banging my head against the wall... Does anyone have any suggestion?
>
> Not an expert on tclhttpd, but a few questions:
>
> * Can you use wget/curl to download a fake .swf file, a real .swf
> file?
I’ve used curl with the real .swf file, and it hangs after receiving
the headers.
> * Or is the problem that an .swf application cannot access the .swf
> file?
> * What is wrong with using the -docRoot flag if this makes everything
> work?
According to the requirements, the final app should be distributed as a
starpack, so everything should be self-contained.
>
> Some web servers use OS based mapping files to serve content, it is
> possible that XP doesn't have the correct mapping, or the file is
> missing or doesn't exist on XP, or your particular OS installation. (I
> know some perl scripts do this).
Since starkits use a vfs, this shouldn’t be a problem. On the other
hand, a mapping missing from the starkit would explain why using a
regular directory with -docRoot doesn’t show any problem.
--
Giorgio Valoti
> […]
>>
>> * Can you use wget/curl to download a fake .swf file, a real .swf
>> file?
>
> I’ve used curl with the real .swf and it hangs after having received
> the headers.
Just tried with a fake .swf. Same result.
> […]
--
Giorgio Valoti
Just a thought,
/Ashok
Sometimes I use telnet (from Linux/Unix) to actually test buggy
applications; wget/curl are sometimes just too smart.
Also: what are the headers?
Another thing to test is copying the file and changing the file
extension to something likely unused: .sswwff. Compare the headers
received. Web servers and clients should handle any content they don't
understand as binary data (*/*). Some clients peek at the content, some
peek at the file extension, but I think the correct behavior is to
consult the Content-Type header. My guess is that it is a
combination of local configuration and a server/client mismatch.
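If telnet is awkward to script, the same raw request can be sent from a short Python sketch. This is just a diagnostic aid, not part of tclhttpd; the default host, port, and path simply mirror the values used in this thread:

```python
import socket

def build_request(path, host, port):
    # Minimal HTTP/1.1 request; "Connection: close" asks the server to
    # end the transfer by closing the socket, so we can read until EOF
    # instead of interpreting headers like a smart client would.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

def raw_get(path, host="localhost", port=8015, timeout=10):
    # Send the bare request and return everything the server writes
    # back: status line, headers, and body, with no client-side smarts.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_request(path, host, port))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# Example, with tclhttpd running locally:
#   print(raw_get("/images/Space.gif"))
```

If the output stops right after the blank line that ends the headers, the server is stalling in the body transfer rather than in request parsing.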
And it’s worse than that. I took a step back and tried the standard
starkit distribution. The home page is shown as expected; however, the
GIF spacer at htdocs/images/Space.gif hangs (same as “my” .swf
files).
Then I tried to:
- add a copy of htdocs/links/README.txt as htdocs/links/LEGGIMI.txt: OK
- add a copy of htdocs/links/README.txt as htdocs/links/test.txt: OK
- edit test.txt, deleting all content except the first paragraph: OK
- same thing, keeping only the first 65 chars: OK
- same thing, keeping only the first 64 chars: HANGS
And guess what? Space.gif is a 49-byte file.
So I put a copy of one of the SWFs, which is way bigger than 64 bytes,
under htdocs/links and… it hangs as well.
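To pin down whether file size really is the trigger, one could drop a batch of probe files of assorted sizes into htdocs and request each one in turn. A small Python sketch for generating them; the file names and the particular sizes are my own choices, not from the thread:

```python
import os

def make_probe_files(directory, sizes=(48, 63, 64, 65, 128, 4096)):
    # Create one file per size, filled with a printable byte, so the
    # observed 64-byte boundary (and Space.gif's 49 bytes) can be
    # bracketed by simply requesting probe-<n>.txt for each size.
    os.makedirs(directory, exist_ok=True)
    paths = []
    for n in sizes:
        path = os.path.join(directory, f"probe-{n}.txt")
        with open(path, "wb") as f:
            f.write(b"x" * n)
        paths.append(path)
    return paths
```

Requesting each probe file over both the starkit docroot and a plain -docRoot directory would show whether the threshold is consistent or, as the SWF result suggests, size is not the whole story.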
--
Giorgio Valoti
wget http://www.tcl.tk/starkits/tclhttpd.kit
wget http://www.kroc.tk/fichiers/tclkit-cli.zip
unzip tclkit-cli.zip
wget http://www.flashanywhere.net/content/file/24700/Vuvuzela-Button.swf
./tclkit-cli tclhttpd.kit -docRoot .
Then I browse to http://localhost:8015/ with Safari, and when I click on
Vuvuzela-Button.swf it works as expected.
--
David Zolli
Until you use something like telnet, or Tcl's socket, to try a raw
request and show us what you typed and the full result, it is hard to
help much further. From what I understand, it isn't hanging; it is
just returning some headers. Telnet will maybe show some data getting
sent.
Have you tried it under a Windows box?
--
Giorgio Valoti
> […]
>
> Until you use something like telnet, or Tcl's socket to try a raw
> request, and show us what you typed and the full result, it is hard to
> help much further. From what I understand, it isn't hanging, it is
> just returning some headers. Telnet will maybe show some data getting
> sent.
With curl I don’t see any response header:
C:\Documents and Settings\admin\Desktop\curl-7.19.5>curl -v http://localhost:8015/images/Space.gif
* About to connect() to localhost port 8015 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8015 (#0)
> GET /images/Space.gif HTTP/1.1
> User-Agent: curl/7.19.5 (i586-pc-mingw32msvc) libcurl/7.19.5 zlib/1.2.3
> Host: localhost:8015
> Accept: */*
>
But, as you said, maybe with telnet I could get some more details? I’ll
try it as soon as I can get back to my dev machine.
--
Giorgio Valoti
Error processing main startup script "Z:\Downloads\tclhttpd.kit\bin
\httpdthread.tcl".
couldn't open "/tmp/tclhttpd.default": no such file or directory
while executing
"open $Config(AuthDefaultFile) w 0660"
/tmp/tclhttpd.default doesn't look like a valid path on XP.
--
David Zolli
Yes, I’ve patched it to look for the TMP or TEMP environment variable
before falling back to /tmp.
--
Giorgio Valoti
> On 2010-06-17 17:36:11 +0200, tom.rmadilo said:
>
>> […]
>>
>> Until you use something like telnet, or Tcl's socket to try a raw
>> request, and show us what you typed and the full result, it is hard to
>> help much further. From what I understand, it isn't hanging, it is
>> just returning some headers. Telnet will maybe show some data getting
>> sent.
>
> With curl I don’t see any response header:
>
> C:\Documents and Settings\admin\Desktop\curl-7.19.5>curl -v http://localhost:8015/images/Space.gif
> * About to connect() to localhost port 8015 (#0)
> * Trying 127.0.0.1... connected
> * Connected to localhost (127.0.0.1) port 8015 (#0)
>> GET /images/Space.gif HTTP/1.1
>> User-Agent: curl/7.19.5 (i586-pc-mingw32msvc) libcurl/7.19.5 zlib/1.2.3
>> Host: localhost:8015
>> Accept: */*
I have reduced the problem to this line (in proc Httpd_ReturnFile in httpd.tcl):
fcopy $in $sock -command [list HttpdCopyDone $in $sock $close]
which copies the contents of the file to the socket. For some reason
it blocks when certain files are requested on Windows XP and
tclhttpd serves the file from a starkit.
Any clues?
--
Giorgio Valoti
Filed a bug: #3018050
--
Giorgio Valoti
Is it possible the response code makes another request back to the
same server? I have an application which runs under regular Tcl
[socket] but hangs under tclhttpd. I gave up looking for the reason,
but months later realized that I had messed up my fcopy command. What I
would look for is an [fcopy] which doesn't go into the background with
a -command option, or doesn't place a file into nonblocking mode. It
is also possible that some code, for some reason, puts one of the
channels back into blocking mode, or tries to work with the $in
or $sock channel directly. Maybe there is some way to monitor channel
configuration changes during a background fcopy. (Also, all file
transfers should use -translation binary, which isn't the
default under Windows.)
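For illustration only, since tclhttpd itself is Tcl: the pattern described above, handing the copy to the event loop in binary mode and finishing up in a completion callback, corresponds roughly to this Python asyncio sketch (the function name and chunk size are my own, not tclhttpd's):

```python
import asyncio

async def serve_file(path, reader, writer, chunk=4096):
    # Rough analogue of tclhttpd's background [fcopy]: stream the file
    # to the peer in chunks, yielding to the event loop instead of
    # blocking it, then run the "copy done" step (closing the socket,
    # playing the role of HttpdCopyDone). The reader is unused here
    # but kept so this can serve as a start_server callback.
    with open(path, "rb") as f:          # binary mode, as with -translation binary
        while True:
            data = f.read(chunk)
            if not data:
                break
            writer.write(data)
            await writer.drain()         # cooperative; never blocks the loop
    writer.close()
    await writer.wait_closed()
```

The key property, in both worlds, is that the copy must stay cooperative: if either endpoint is switched back into blocking mode mid-transfer, one stuck client can stall every other connection served by the same event loop.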
> […]
>>
>> I have reduced the problem to line (proc Httpd_ReturnFile on httpd.tcl):
>>
>> fcopy $in $sock -command [list HttpdCopyDone $in $sock $close]
>>
>> which copies the contents of the file to the socket. For some reasons
>> it blocks when some particular files are requested on Windows XP and
>> tclhttpd serves the file from a starkit.
>
> Is it possible the response code makes another request back to the
> same server? I have an application which runs under regular tcl
> [socket], but hangs under tclhttpd. I gave up looking for the reason,
> but months later considered that I messed up my fcopy command. What I
> would look for is an [fcopy] which doesn't go into the background with
> a -command option and doesn't place a file into nonblocking mode. It
> is also possible that some code, for some reason puts one of the
> channels back into blocking mode, or tries to work with either the $in
> or $sock channels. Maybe there is some way to monitor channel
> configuration changes during a background fcopy. (Also, all file
> transfers should be in -translation binary mode, which isn't the
> default under windows).
Overall, the fcopy calls in the tclhttpd code are: one in cgi.tcl,
which shouldn't be involved in a plain file request; two in
Httpd_CopyPostData, likewise not involved; and the one cited before,
used in Httpd_ReturnFile:
...
set in [open $path] ;# checking should already be done
fconfigure $in -translation binary -blocking 1
if {$offset != 0} {
    seek $in $offset
}
fconfigure $sock -translation binary -blocking $Httpd(sockblock)
set data(infile) $in
Httpd_Suspend $sock 0
fcopy $in $sock -command [list HttpdCopyDone $in $sock $close]
...
--
Giorgio Valoti
So unless this server is handling this connection in an independent
thread, it could block the entire server.
> fconfigure $sock -translation binary -blocking $Httpd(sockblock)
> set data(infile) $in
> Httpd_Suspend $sock 0
> fcopy $in $sock -command [list HttpdCopyDone $in $sock $close]
Same here: HTTP servers shouldn't block unless they are running a
thread for each request. If the request makes a request back to
itself, you could expect a lockup or stall.
Going in and out of blocking mode could also be a problem.
> […]
>> the last is the one cited before and
>> it’s used on Httpd_ReturnFile:
>>
>> ...
>> set in [open $path] ;# checking should already be done
>> fconfigure $in -translation binary -blocking 1
>> if {$offset != 0} {
>> seek $in $offset
>> }
>
> So unless this server is handling this connection in an independent
> thread, it could block the entire server.
>
>> fconfigure $sock -translation binary -blocking $Httpd(sockblock)
>> set data(infile) $in
>> Httpd_Suspend $sock 0
>> fcopy $in $sock -command [list HttpdCopyDone $in $sock $close]
>
> Same here: http servers shouldn't block unless they are running
> threads for each request. If the request makes a request back to
> itself, you could expect a lockup/stall whatever.
>
> Going in and out of blocking mode could be a problem.
I have configured tclhttpd to run in multithreaded mode. However, if I
understand you correctly, we should see a lockup when tclhttpd is run
with multithreading off. Am I correct?
--
Giorgio Valoti