The Tcl Contributed Sources archive can now be accessed directly over
the web, including downloading any of the files, at
Accessing the archive this way from your web browser is significantly more
efficient than with FTP, and should be much faster as well.
Karl Lehenbauer
--
"Egon, this reminds me of the time you
were going to drill a hole in your head."
"That would have worked if you
hadn't stopped me."
--
[[Send Tcl/Tk announcements to tcl-an...@mitchell.org
Send administrivia to tcl-announ...@mitchell.org
The primary Tcl/Tk archive is ftp://ftp.neosoft.com/pub/tcl/]]
>The Tcl Contributed Sources archive can now be accessed directly over
>the web, including downloading any of the files, at
Thanks for running this archive and making it accessible over the web.
Unfortunately, you all seem to have color monitors ... on a greyscale
(64 color) monitor, the text comes out black on black (i.e., invisible).
Sure would appreciate it if you could make your coloring greyscale/monochrome-
compatible ...
Ciao,
Stefan.
--
Dr. Stefan Wrobel, GMD, FIT.KI, Schloss Birlinghoven, D-53754 Sankt Augustin
Tel.: +49/2241/14-0, Fax: -2889 EMail: stefan...@gmd.de
WWW http://nathan.gmd.de/persons/stefan.wrobel.html
Secr.: D. Boethgen Tel. -2731, EMail: dagmar....@gmd.de
Argh! OK, the text and links are all brighter than 50% grey on a black
background at www.neosoft.com/tcl, and black on white below that.
Tomorrow when I get back to work I'll check on the '10 and see if it actually
worked.
More improvements on the way... (s i g h)...
:
: Accessing the archive this way from your web browser is significantly more
: efficient than with FTP, and should be much faster as well.
Why? Both HTTP and FTP use TCP. Accessing the same precompressed data should
give nearly the same network throughput, if you are not using proxy servers.
I don't like proxies because in my experience they even cache incomplete data
in case of network problems, but FTP can restart in the middle with REGET.
Bye, Heribert (da...@ifk20.mach.uni-karlsruhe.de)
The problem is that the overhead of establishing an FTP connection is
much higher -- there is an entire login sequence, new process spawned,
etc, and people browsing FTP sites with webclients cause an FTP login and
logout every time they click on a directory.
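A sketch (not from the thread; Python used for illustration) of the asymmetry described above: a stateless web client must replay the whole anonymous-FTP session for every directory click, while HTTP needs a single request. The command names come from RFC 959; the exact sequence a given client sends may vary.

```python
# Illustrative only: the FTP control-channel commands a stateless web
# client replays for every single directory click, versus the one
# request HTTP needs for the same listing.

def ftp_commands_per_click(directory):
    """Full anonymous-FTP session needed to list one directory."""
    return [
        "USER anonymous",      # login sequence, repeated each time
        "PASS guest@",
        "TYPE A",              # ASCII mode for the listing
        "PASV",                # ask for a second (data) connection
        "CWD " + directory,
        "LIST",
        "QUIT",
    ]

def http_requests_per_click(directory):
    """HTTP needs a single request on a single connection."""
    return ["GET " + directory + " HTTP/1.0"]

print(len(ftp_commands_per_click("/pub/tcl")))   # 7 round trips
print(len(http_requests_per_click("/pub/tcl")))  # 1
```

On top of the extra round trips, the server also spawns a process and consults the password and group files for each of those logins.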
: The problem is that the overhead of establishing an FTP connection is
: much higher -- there is an entire login sequence, new process spawned,
: etc, and people browsing FTP sites with webclients cause an FTP login and
: logout every time they click on a directory.
I see.
I normally browse FTP sites within the same FTP session, or I use xarchie.
I guess the webclients should try to cache by keeping the FTP session open
for a while...
Bye, Heribert (da...@ifk20.mach.uni-karlsruhe.de)
Because the HTTP connection is a stateless one, meaning that the connection
between the HTTP server and the client only needs to be open during the
specific data transfer, whereas the FTP server has to maintain an open
connection until it times out or the user closes it.
--
:s Larry W. Virden INET: lvi...@cas.org
:s <URL:http://www.teraform.com/%7Elvirden/> <*> O-
:s Unless explicitly stated to the contrary, nothing in this posting should
:s be construed as representing my employer's opinions.
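To make the "stateless" point concrete, here is a minimal HTTP/1.0 request builder (a sketch, not from the thread). The request is entirely self-describing, so the server can answer it and drop the connection immediately, keeping no per-user session state.

```python
def build_http_get(host, path):
    # One self-contained request; the server can close the connection
    # as soon as the body is sent -- no login, no session to maintain.
    return ("GET {} HTTP/1.0\r\n"
            "Host: {}\r\n"
            "\r\n").format(path, host).encode("ascii")

req = build_http_get("www.neosoft.com", "/tcl")
print(req.decode("ascii").splitlines()[0])  # GET /tcl HTTP/1.0
```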
> : Accessing the archive this way from your web browser is significantly more
> : efficient than with FTP, and should be much faster as well.
>
> Why?
something not yet mentioned is that FTP makes a second TCP connection
for a data transfer. HTTP only uses one (per fetch). ever done an mget
with a really laaarge file in the middle, and had it fail just after that
file? that's (usually) the command circuit timing out.
> but FTP can restart in the middle with REGET.
a nice feature i've never seen implemented :)
--
Hume Smith <hcls...@isisnet.com> Alumnus Against Advantage Acadia
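The second (data) connection mentioned above is what the PASV reply negotiates: the server's 227 response encodes the host and port the client must dial for the transfer, while the command circuit sits idle (and can time out). A small parser, as a sketch in Python, per the RFC 959 reply format:

```python
import re

def parse_pasv(reply):
    """Extract the host and port for FTP's second (data) TCP connection
    from a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a 227 reply: " + reply)
    nums = [int(x) for x in m.groups()]
    host = ".".join(str(n) for n in nums[:4])
    port = nums[4] * 256 + nums[5]   # port split into two bytes
    return host, port

print(parse_pasv("227 Entering Passive Mode (129,13,32,5,8,33)"))
# ('129.13.32.5', 2081)
```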
: In article <4l3q0n$f...@nz12.rz.uni-karlsruhe.de>
DA...@ifk20.mach.uni-karlsruhe.de (Heribert Dahms) writes:
:
: > : Accessing the archive this way from your web browser is significantly
: > : more efficient than with FTP, and should be much faster as well.
: >
: > Why?
:
: something not yet mentioned is that FTP makes a second TCP connection
: for a data transfer. HTTP only uses one (per fetch). ever done an mget
: with a really laaarge file in the middle, and had it fail just after that
: file? that's (usually) the command circuit timing out.
:
Yes, I forgot to mention that FTP usually makes a second connection.
Now that you mention it, I realize I've had such mget problems...
But then, for the second attempt I use a simple get followed by as many
regets as needed.
And I do not care how many resources it consumes if I urgently need
something from an overseas server without a mirror and can finally get it
with a dozen reget attempts, while HTTP hangs every time and I get nothing!
The worst was a 3.5MB file which took me a *week* at an average 0.017Kb/sec
and hundreds of failed connection attempts!
Floppies via airmail would have been much faster, but net is free 8-)
: > but FTP can restart in the middle with REGET.
:
: a nice feature i've never seen implemented :)
:
What are you using?
The three Unix platforms I currently use all have it (HP-UX, Linux, Alpha).
Bye, Heribert (da...@ifk20.mach.uni-karlsruhe.de)
> What are you using?
ncftp/NetBSD-1.0/Amiga. maybe i just haven't looked hard enough,
but i had no idea it was possible until i read the FTP RFC itself.
: > but FTP can restart in the middle with REGET.
:
: a nice feature i've never seen implemented :)
It is common in servers (for Unix, at least), but not in clients! If
you can find a client that supports RESTART (the protocol command
behind reget), there is a good chance that you will be able to use it
with the various FTP sites out there.
Has anyone written an FTP client in Tcl? It ought not to be too hard
to add RESTART, at least for binary. Unix servers seem to treat the
RESTART parameter as a byte offset at which to start a stream mode
transfer.
--
Owen Rees <rt...@ansa.co.uk>
All my news postings and e-mail are personal opinions.
Information about ANSA is at <URL:http://www.ansa.co.uk/>.
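No Tcl FTP client is shown in the thread, but as an illustration of how little is involved, here is a reget sketch in Python using the standard ftplib, which exposes RESTART directly: passing `rest=` to `retrbinary` makes the library send `REST <offset>` before `RETR`, exactly the byte-offset behaviour described above. The host and paths are placeholders.

```python
import os
from ftplib import FTP

def resume_offset(local_path):
    """Byte offset to restart from: the size of the partial local file."""
    try:
        return os.path.getsize(local_path)
    except OSError:
        return 0    # no partial file yet; start from the beginning

def reget(host, remote_path, local_path):
    """Resume a binary download via the RESTART mechanism.

    ftplib sends 'REST <offset>' before 'RETR' when rest= is given,
    so the server starts the stream at that byte offset.
    """
    offset = resume_offset(local_path)
    ftp = FTP(host)
    ftp.login()                       # anonymous login
    with open(local_path, "ab") as f:
        ftp.retrbinary("RETR " + remote_path, f.write, rest=offset)
    ftp.quit()
```

Appending (`"ab"`) plus the offset from the partial file's size is all the client-side bookkeeping reget needs for binary transfers.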
The issue isn't the resource impact on users - but on the server from which
the user is attempting to fetch.
:
:: > but FTP can restart in the middle with REGET.
::
:: a nice feature i've never seen implemented :)
::
:What are you using?
:The 3 major platforms, unixes, I currently use have it (hp-ux, Linux, Alpha).
I am using Sun Solaris 2.4 and it doesn't have anything called REGET
in its help message or man page.
: Hume Smith (hcls...@isisnet.com) wrote:
: : In article <4l3q0n$f...@nz12.rz.uni-karlsruhe.de>
DA...@ifk20.mach.uni-karlsruhe.de (Heribert Dahms) writes:
:
: : > but FTP can restart in the middle with REGET.
: :
: : a nice feature i've never seen implemented :)
:
: It is common in servers (for Unix, at least), but not in clients! If
: you can find a client that supports RESTART (the protocol command
: behind reget), there is a good chance that you will be able to use it
: with the various FTP sites out there.
:
or QUOTE may be useful...
: Has anyone written an FTP client in Tcl? It ought not to be too hard
: to add RESTART, at least for binary. Unix servers seem to treat the
you'll need binary I/O in Tcl first, of course.
: RESTART parameter as a byte offset at which to start a stream mode
: transfer.
Yes, in my unintentional experience it restarted at the last 512-byte boundary.
Bye, Heribert (da...@ifk20.mach.uni-karlsruhe.de)
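If a server snaps the restart point to the last 512-byte boundary as observed above, a cautious client can round its own offset down first, so the resumed stream never leaves a gap (a few hundred bytes get re-fetched instead). A one-liner sketch; the boundary is the one reported in the thread and may differ between servers:

```python
BLOCK = 512   # boundary observed in the thread; may vary by server

def safe_restart_offset(bytes_already_received):
    """Round down to a 512-byte boundary so a server that restarts on
    such boundaries never skips data; the overlap is simply rewritten."""
    return bytes_already_received - (bytes_already_received % BLOCK)

print(safe_restart_offset(70000))  # 69632
```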
: According to Heribert Dahms <DA...@ifk20.mach.uni-karlsruhe.de>:
: :And I do not care how much resources it consumes if I really urgently need
: :something from an overseas server without mirror and can finally get it with
: :dozen attempts via reget while http keeps hanging everytime and I get
nothing!
:
: The issue isn't the resource impact on users - but on the server from which
: the user is attempting to fetch.
:
I knew that the server was meant 8-)
If you want something which uses very low server resources, look at FSP.
It uses UDP and only a few bytes of state information per "connection", but in
fact there's no connection: at most one unacknowledged packet is outstanding.
Depending on the timeout and retry settings, even longer network breakdowns
can be survived, so there's no need for REGET! You can also easily limit the
total network bandwidth consumed by all the clients, and it doesn't need root
privilege. Try finding it at ftp.germany.eu.net ...
Bye, Heribert (da...@ifk20.mach.uni-karlsruhe.de)
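A toy stop-and-wait transfer over loopback UDP, in the spirit of the FSP description above: at most one unacknowledged packet is ever outstanding, the client retries the same offset on timeout, and the "server" keeps no state beyond the packet in hand. This is an illustration in Python, not the real FSP protocol or its packet format.

```python
import socket
import threading

CHUNK = 4   # tiny chunk size so the stop-and-wait loop is visible

def serve(sock, data):
    """Answer each request packet b'<offset>' with that chunk of data.

    Stateless: every reply is computed from the request alone."""
    while True:
        req, addr = sock.recvfrom(64)
        if req == b"stop":
            return
        off = int(req.decode())
        sock.sendto(data[off:off + CHUNK], addr)

def fetch(server_addr, total_len):
    """Request one chunk at a time; resend the same offset on timeout."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(0.5)
    out = b""
    off = 0
    while off < total_len:
        s.sendto(str(off).encode(), server_addr)
        try:
            chunk, _ = s.recvfrom(64)
        except socket.timeout:
            continue                 # lost packet: retry same offset
        out += chunk
        off += len(chunk)
    s.sendto(b"stop", server_addr)
    s.close()
    return out

data = b"Hello, FSP world"
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
addr = srv.getsockname()
t = threading.Thread(target=serve, args=(srv, data))
t.start()
result = fetch(addr, len(data))
t.join()
srv.close()
print(result)
```

Because the client names the offset in every request, a crash or network outage at any point costs nothing: the next request simply asks for the same offset again, which is why no REGET is needed.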