A month or two ago, I started a discussion about Tcl versus other
popular scripting languages. I was very new to Tcl, but within a short
time I fell in love with it. It came as a blessing from the sky,
helping me quickly develop a binary-based internet server. I found out
about Tcl via Expect, which was excellent at automating question-and-
answer dialogs. I wondered why I had never heard of Tcl before, and
neither had my colleagues. The discussion I started then led to a
myriad of ideas from others on how to promote Tcl.
I'm an experienced C/C++ programmer. Perl claims to be structured in
similar ways, but its self-proclaimed similarity with C is all the more
misleading, since basic things like passing pointers (references) are
like C but subtly different, and thus confusing. And then there's the
odd choice of special characters: why pass references as \$var instead
of &var or &$var? And more of the same.
Anyway, since Perl not only permits but even seems to encourage
sloppy, unstructured programming, with lots of odd characters and
context-dependent meaning (you declare an array as @var but reference
its members as $var[], among other error-prone things), the
readability of Perl code is nearly as challenging as that of assembler
code. That makes it equally challenging to maintain, whether by someone
else or by oneself a year later.
But, working with Perl-minded colleagues, and having been asked to
maintain and improve their sloppy Perl code, I was forced to pick up a
Perl book, a popular one I think. I couldn't find its publication year,
but it's "The Complete Reference" by Martin C. Brown, from the
publisher Osborne.
In its first few pages, it compares Perl with other languages.
Comparing it with Python, it says something to the effect that "Python
is object oriented, very structured, for structure-minded people... but
I couldn't think of one thing that would make Python better than Perl"
(obviously he doesn't find structure important; I thought it was one of
the most important things for serious coding).
But more interesting is what it said about TCL. I quote pieces:
"Tcl was never developed as a general-purpose scripting language,
although many people use it as such"
"Strings are null terminated (as they are in C). This means that Tcl
cannot be used for handling binary data."
(First they compare string handling with C, then claim it cannot
handle binary data. Logically this would imply that C cannot handle
binary data either. A bad comparison; I'm just saying his statement
isn't logical... It must be an old book too, as Tcl handles binary data
just fine.)
"Tcl is also generally slower on iterative operations over strings"
"You cannot pass arrays by value or by reference; they can only be
passed by name"
(upvar and uplevel allow for this and are actually even more powerful.
Besides, Perl references aren't real references like in C, so I'm not
sure how this makes Perl more powerful or useful in this regard. In
Perl you can't declare your expected parameters in your subroutines;
you have to parse them in the subroutine body from a generic argument
array (@_). You can do this in Tcl too, but you can also specify them
in the procedure header.)
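To illustrate the pass-by-name point: here is a minimal sketch of upvar linking a caller's array into a procedure (the proc and variable names are invented for illustration):

```tcl
# incrAll: bump every entry of the caller's array by $amount.
# The array is passed by *name*; upvar links it into this scope.
proc incrAll {arrayName amount} {
    upvar 1 $arrayName a          ;# "a" is now an alias for the caller's array
    foreach key [array names a] {
        incr a($key) $amount
    }
}

array set scores {alice 1 bob 2}
incrAll scores 10                 ;# note: the name "scores", not its value
puts $scores(alice)               ;# prints 11
puts $scores(bob)                 ;# prints 12
```

The same mechanism works for scalars and lists, which is why "by name" in Tcl covers what other languages do with references.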
"Perl supports the entire POSIX function set, but Tcl supports a much
smaller subset of the POSIX function set, even using external
packages."
"It should be clear from this description that Perl is a better
alternative to Tcl in situations where you want easy access to the rest
of the OS. Most significantly, Tcl will never be a general-purpose
scripting language."
That Tcl lacks many of the readily available packages Perl enjoys
thanks to its large contributing community is one thing... but to doom
a language to "never being general-purpose" is a rather blunt
statement, I think. Perl too started out as just a text-processing and
listing tool.
But there are some valid statements made too:
"Tcl is generally slower than Perl", because "strings are converted to
numbers only when calculation is required", "arrays are stored whithin
what Perl would treat as a hash. Accessing true Tcl array is therefor
slightly slower, as it has to look up associative entries in order to
decipher the true values".
"Unlike Perl, which parses the script before optimizing and then
executing, Tcl is a true interpreter. This reduces optimization options
for Tcl."
"same Tcl interpretation technique also means that only way to debug
Tcl code and search for syntactic errors is to actually execute the
code."
I remember the days when simple BASIC could be compiled into extremely
fast binary code.
Perl has a binary compiler, but it does not produce machine code; it
bundles a virtual machine into the binary together with the Perl
opcodes.
I thought any interpreted language could be compiled into machine code.
There are plenty of Tcl packages available, scattered all over the
place. I still believe one of the major advantages of Perl over Tcl,
and a reason for its success, is CPAN, a large central collection of
packages that are easy to download and install. (The problem with Perl
is that uninstalling packages is not as easy.) How is this with Tcl?
Isn't it possible to create such a CPAN-like repository for Tcl
packages, and bundle Tcl with a simple 'install <package>' feature,
with dependency checking?
Forget about promoting Tcl via magazines or whatever: first create a
CPAN-like network, then tell people about it. Otherwise you'll lose
their interest as quickly as you gained it, if you don't lower the
barrier for installing, extending, and contributing code.
I believe such a network should have the highest priority for promoting
Tcl to a larger audience. Tcl is actually so easy to learn that it
could, for this reason alone, have an edge over Perl.
Tcl has some weaknesses (which in many cases are also its strengths,
like EverythingIsAString). But I don't see why Tcl couldn't be extended
to include some non-string data types for speed. From what I
understand, the array is also an exception to the EIAS rule.
Sorry to bring this up again, but although in the last discussion
people got motivated, and ideas flew all over the place on how to make
Tcl a bigger success (after some bitter discussion of why Perl and
other languages gained more popularity: being included in default
distributions), we seem to be back where we started. Then someone said:
"Stop talking, start doing, or nothing gets done." A wise comment, but
one thing was missing: what should we start doing first?
I strongly believe in a CPAN-like network where it's as easy to install
packages as it is to contribute. It could be as simple as "tclupload
<package> <description>", with a "tclsearch <package|description>" and
a "tclinstall".
As for the actual storage space: if we cannot find someone to sponsor
the server space, we could start with public mirroring sites or
BitTorrent-like networks.
Are there people who agree with me and my priority list, or who have
better ideas? If so, the next thing to do is make a TO-DO list, broken
up into concrete steps, and then look for volunteers to take
responsibility for some of those steps. The further we break down the
steps, the easier it becomes for people to contribute as little or as
much of their time as they wish.
I'm sure this thread will include discussions and people willing to
contribute who then 'get lost' among other posts, so I'd like to have a
place where people can 'sign up' to contribute. I have no such place
now, no mailing list or anything. If someone does, please let me
know; otherwise you can get your name listed by sending an e-mail to
reageer (at) yahoo dot you know, and we can start organizing something
more seriously.
Lisa (alter-ego).
Simon Geard
> "Strings are null terminated (as they are in C). This means that Tcl
> cannot be used for handling binary data."
That hasn't been true since Tcl 8.0, I think.
http://www.tcl.tk/man/tcl8.4/TclLib/StringObj.htm
The 'binary' command is very powerful for binary operations.
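For example, a small sketch using the standard binary format/scan subcommands, which directly disproves the null-termination claim:

```tcl
# Pack a 32-bit big-endian integer; the value 1 encodes as \x00\x00\x00\x01.
set bytes [binary format I 1]
puts [string length $bytes]   ;# prints 4 -- the three NUL bytes terminate nothing
binary scan $bytes I value    ;# unpack it again
puts $value                   ;# prints 1
```

Tcl strings carry an explicit length internally, so embedded NULs are just ordinary data.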
Robert
A better statement of fact would have been, "Tcl was not initially conceived
as a general-purpose scripting language, although many people use it as such"
> "Strings are null terminated (as they are in C). This means that Tcl
> cannot be used for handling binary data."
> (first they compare string handling with C, then claim it can not
> handle binary data.. Logically this implies that C can not handle
> binary data either. Bad comparison, but I'm just saying his statement
> is not logical ... must be an old book too, as Tcl handles binary data
> just fine).
Statement is well out of date, this has been false since 8.0.
> ...
> "Perl supports the entire POSIX function set, but Tcl supports a much
> smaller subset of the POSIX function set, even using external
> packages."
> "It should be clear from this description that Perl is a better
> alternative to Tcl in situations where you want easy access to the rest
> of the OS. Most significantly, Tcl will never be a general-purpose
> scripting language."
True and very false. Tcl has extensions that can access most if not all OS
features and is easily extended to access others. Tcl proper does not
provide low-level access directly to all OS features; instead it provides
OS-independent ways (and has for at least 10 years) to access the most
common features.
> ...
> But there are some valid statements made too:
>
> "Tcl is generally slower than Perl", because "strings are converted to
> numbers only when calculation is required",
That statement is well out of date; it has been "false" since 8.0. Tcl
keeps a dual representation of data. If you use the value of a variable
over and over again in calculations, the contents will not be converted
back and forth to a string. The string representation will be created only
when needed -- but the numeric representation will still exist.
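A small sketch of that observable behavior (variable names invented; the internal representations themselves are not visible from script level):

```tcl
set n 6
incr n 7                     ;# n now carries an integer internal rep
set double [expr {$n * 2}]   ;# arithmetic reuses the int rep; no re-parsing of "13"
puts $n                      ;# the string rep "13" is generated here, on demand
puts $double                 ;# prints 26
```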
> "arrays are stored whithin
> what Perl would treat as a hash. Accessing true Tcl array is therefor
> slightly slower, as it has to look up associative entries in order to
> decipher the true values".
This is comparing apples and oranges, because the two merely share a
"name" -- he should (I think) be comparing Perl arrays to Tcl lists.
> "Unlike Perl, which parses the script before optimizing and then
> executing, Tcl is a true interpreter. This reduces optimization options
> for Tcl."
That statement is also well out of date; it has been "false" since 8.0.
Tcl compiles and optimizes procedures on the fly, the first time they are
called. Tcl actually interprets the compiled and optimized byte code, not
the strings.
> "same Tcl interpretation technique also means that only way to debug
> Tcl code and search for syntactic errors is to actually execute the
> code."
This is true. It is also, I think, true for Perl -- if you create a
string on the fly in Perl and tell it to evaluate it.
> I remember the days that simple "basic" could be compiled into
> extremely fast binary code.
> Perl has binary compiler, but is actually not machien code, but some
> code (virtual machine) that is included in the binary together with the
> perl opcodes.
> I thought any interpreted language could be compiled into machine code.
See statement above.
> There are plenty of Tcl packages available, scattered all over the
> place. I still believe one of the major advantages and success of Perl
> over Tcl is CPAN, an large central collection of packages that are easy
> to 'download and install'. The problem with Perl is that uninstalling
> packages is not as easy. How is this with Tcl? Isn't it possible to
> create such a CPAN like repository for Tcl packages, and bundle Tcl
> with a simple 'install <package>" feature, with dependency checking?
Yes and no. This has been talked about for years. Part of the problem is
that a lot of Tclers, myself included, do not want a CPAN with its tons of
bad and buggy code -- we would rather have a smaller but higher-quality
set of modules, e.g. Tcllib (http://tcllib.sf.net).
> ...
> Tcl has some weaknesses (that in many cases are its strengths, like
> EverytingIsAString). But I don't see why Tcl couldn't be extended to
> include some non string data types for speed. From what I understand,
> the array is also an exception to the EIAS rule.
The correct statement since 8.0 is "everything can be converted to a
string" -- and this is even true of arrays, in a way (i.e. [array get
myArrayName]).
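For example, a hypothetical round-trip through [array get] and [array set] (array and element names invented):

```tcl
array set colors {red #f00 green #0f0}
set serialized [array get colors]   ;# a plain list, e.g. "red #f00 green #0f0" (element order unspecified)
array set copy $serialized          ;# rebuild a fresh array from the string form
puts $copy(red)                     ;# prints #f00
```

The array itself has no single string value, but its contents convert to and from one losslessly.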
> I strongly believe in a CPAN like network where it's as easy to install
> packages as it is to contribute. It could be as simple as "tclupload
> <package> <description>" with a "tclsearch <package|description>" and
> "tclinstall".
>...
Then do it and let us see it.
--
+--------------------------------+---------------------------------------+
| Gerald W. Lester |
|"The man who fights for his ideals is the man who is alive." - Cervantes|
+------------------------------------------------------------------------+
1. The tclkit-related sdx utility: doing 'sdx update' is an extremely
cool way to get or update starkits from a repository.
See for example:
http://wiki.tcl.tk/11181
2. The STAN tools create a kind of package repository similar to debs
http://wiki.tcl.tk/12370
3. The CANTCL tools create a package repository with RDF / Dublin Core
metadata
http://wiki.tcl.tk/1961
4. Joe English maintains a nice list of Tcl packages in his GUTTER system
http://wiki.tcl.tk/gutter
5. The Tcl Modules TIP for a new 'packaging format' should make
building a repository for end users (for 8.5) easier
http://wiki.tcl.tk/12999
Maybe I even missed some, but for all of these the code is freely
available as a starting point for a repository.
One main thing a repository probably needs in order to get off the
ground is a reasonable amount of seeding with useful packages for the
common platforms.
Michael
What on earth does that mean!? Answers on a postcard please...
> "Strings are null terminated (as they are in C). This means that Tcl
> cannot be used for handling binary data."
That's not been true since, oh, 1997.
> "Tcl is also generally slower on iterative operations over strings"
Depends on the operation.
> "You cannot pass arrays by value or by reference; they can only be
> passed by name"
Actually, Tcl these days passes virtually everything by reference (the
main exceptions are probably certain callback scripts and messages sent
between threads), but values are copy-on-write (i.e. a value is cloned
whenever a change would otherwise make it appear mutable).
The downside of mutable references (such as Perl and many other
languages have) is that they are amazing sources of mysterious bugs: a
piece of code in one place changes a value that has accidentally been
shared, resulting in a strange change somewhere entirely different
(often in another module). This is an amazingly confusing thing to run
across as a bug, and Tcl is just about completely clear of these things.
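A minimal sketch of that copy-on-write behavior (variable names invented):

```tcl
set a {1 2 3}
set b $a        ;# b shares a's internal value (reference counted), no copy yet
lappend b 4     ;# the shared value is duplicated here, at the moment of the write
puts $a         ;# prints "1 2 3" -- a is untouched
puts $b         ;# prints "1 2 3 4"
```

From script level it looks like pure value semantics; the sharing is only a performance optimization.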
> "Perl supports the entire POSIX function set, but Tcl supports a much
> smaller subset of the POSIX function set, even using external
> packages."
Perl doesn't support all POSIX stuff actually. I've run across holes in
its coverage in the past. :-) Also, it doesn't do much to make the POSIX
APIs "Perl-like"; using select() for example is really *much* harder in
Perl than in Tcl. And Tcl provides cross-platform versions of virtually
all those APIs that it does support, so your code has a much greater
chance of being ported transparently from Unix to, say, Windows without
changes. Doing so with Perl is... unusual.
> "It should be clear from this description that Perl is a better
> alternative to Tcl in situations where you want easy access to the rest
> of the OS. Most significantly, Tcl will never be a general-purpose
> scripting language."
I think this comes down to the author saying "you guys aren't allowed to
compete and I'll make up rules to say just that, coz yuo iz teh l@m3r
5uxxx0r5!!!!!!!" or something like that. I tend to ignore such foolish
opinions, as they're not usually grounded in fact.
Happily you've spotted these statements for the FUD they are too. :-)
> "Tcl is generally slower than Perl", because "strings are converted to
> numbers only when calculation is required", "arrays are stored whithin
> what Perl would treat as a hash. Accessing true Tcl array is therefor
> slightly slower, as it has to look up associative entries in order to
> decipher the true values".
Well, Tcl arrays are only really comparable with Perl hashes anyway. But
that's not exact, and 8.4 includes support for operations that make
using a variable holding a Tcl list about as fast as Perl arrays. (Yeah,
I know they're not precisely comparable. So sue me.)
> "Unlike Perl, which parses the script before optimizing and then
> executing, Tcl is a true interpreter. This reduces optimization options
> for Tcl."
Untrue. Tcl uses an on-the-fly bytecode compiler. But the level at which
we factored the bytecodes is such that optimizers are less important
than in a language like Perl (where each individual bytecode does quite
a bit less). It's just a different design.
> "same Tcl interpretation technique also means that only way to debug
> Tcl code and search for syntactic errors is to actually execute the
> code."
Tcl has a long history of supporting "foreign" languages. Classic
examples of this are available as the expect and sqlite3 packages. If we
were going to force everything to be simply statically checkable, we'd
throw this well-loved baby out with the bathwater. (You might as well
take Perl to task for requiring the putting of foreign language elements
into strings. It's just a silly argument.) Instead, we leave the
provision of static checking tools to the community to develop; several
are available as third-party products.
Donal.
>
>> "same Tcl interpretation technique also means that only way to debug
>> Tcl code and search for syntactic errors is to actually execute the
>> code."
>
>
> This is true. It is also, I think, true for Perl -- If you create on
> the fly a string in Perl and tell it to evaluate it.
How popular is creating and using code like that in Perl or Python?
The barrier between code and data is perceived as high
in any typed language, isn't it?
Either it is data or it is code, never both.
uwe
Lisa Pearlson wrote:
> There are plenty of Tcl packages available, scattered all over the
> place. I still believe one of the major advantages and success of Perl
> over Tcl is CPAN, an large central collection of packages that are easy
> to 'download and install'. The problem with Perl is that uninstalling
> packages is not as easy. How is this with Tcl? Isn't it possible to
> create such a CPAN like repository for Tcl packages, and bundle Tcl
> with a simple 'install <package>" feature, with dependency checking?
Hmm, many Tcl extensions have very few, if any, dependencies.
It is very simple to install and uninstall Tcl extensions: since they
are loaded from directories listed in the ::auto_path variable, it is
just a matter of putting the extension directory, with a valid
pkgIndex.tcl file, in one of those locations -- or removing it from
there when it's no longer needed. You can even modify ::auto_path at
runtime to include another directory containing extensions. And
Tclkit-driven programming makes depending on the right versions, and
deploying them, sooo easy...
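As a sketch of that mechanism, assuming a hypothetical extension called "greet" living in /some/dir/greet/ -- its pkgIndex.tcl needs only a 'package ifneeded' line, and a script picks it up via auto_path:

```tcl
# /some/dir/greet/pkgIndex.tcl -- $dir is supplied by the package loader:
package ifneeded greet 1.0 [list source [file join $dir greet.tcl]]

# Any script can then load it with:
lappend auto_path /some/dir
package require greet

# Uninstalling is just deleting the /some/dir/greet directory again.
```

(The "greet" package, its version, and the paths are made up; the commands are the standard loading mechanism.)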
But sadly we live in a world where people are used to complicated and
confusing installation routines that scatter files and versions all
over the system. Then mechanisms are invented to simplify this process
again, called installers. It's all... well, business as usual. Human
beings seem to tend toward complicating things.
But I agree with you that the extensions are too scattered across the
net -- I am often in the situation of not finding the right thing at
the right time. A solution would be to aggregate the latest releases,
with descriptions, at http://www.tcl.tk/. That is the first entry point
to Tcl/Tk-related information on the net, and I think an extension
repository is well placed there -- maybe under the "software" section
(I remember it was something like that before...).
Extensions should be listed there, together with a short description,
a link to the extension homepage, and a direct download link. I think I
have seen such a site at some point, but it was out of date.
Maintenance could be made easy with a modified wikit that integrates
into www.tcl.tk. I think the best way is for extension developers to
update and maintain their own sections in this wiki themselves.
It would also be a good starting point for installer-like scripts --
but I consider that a secondary requirement. The most important thing
would be aggregation/integration of the scattered extensions.
> Forget about promoting Tcl via magazines or whatever.. first create a
> CPAN like network, then tell people about it, otherwise you'll lose
> their interest as quickly as you gained it when you don't lower the
> barrier for installing and extending and contributing code.
That's what my suggestion above is about.
> Tcl has some weaknesses (that in many cases are its strengths, like
> EverytingIsAString). But I don't see why Tcl couldn't be extended to
> include some non string data types for speed. From what I understand,
> the array is also an exception to the EIAS rule.
Actually, the EIAS rule is visible only to the Tcl developer.
Internally, the values are held in a smart way that makes efficient and
fast conversion possible at any time.
Eckhard
Perl wasn't originally developed "as a general-purpose scripting
language", either.
I certainly use Tcl as a general-purpose language.
>Anyway, since Perl not only requires, but even seems to encourage
>sloppy, unstructured programming, ...
I don't equate allowing "sloppy unstructured programming" with
encouraging same.
I have seen plenty of sloppy unstructured programming in
pedantic-structure languages & it is very hard to fix & improve. I'm
having to do that right now with some badly done Java code & it's a
nightmare compared to a lot of mediocre C & Perl code I've had to work
with.
Just because a language forces structure doesn't mean it will result
in good, supportable, reusable code.
A structure-less language can be used to create good, supportable,
reusable code.
It's up to the coder.
>... "Pyton is object oriented, very structured, for structured minded
>people.. but I couldn't think of one thing that would make Python
>better than Perl" (obviously he doesn't find structure to be
>important.. I thought it was one of the most important things for
>serious coding).
The point is a design based on good structure, not that which the
language forces.
As for TCL, I am using it because a RedHat distro comes with Tcl/Tk,
but not with Perl's Tk nor Python's Tk, & I'm working in an
environment where adding the Tk packages is a huge bureaucratic
nightmare. So I use the tool that works, I write good, structured,
supportable, reusable code, & just get on with it.
Oh, yeah, & I tend to view all the object-class-method-template
blither as obfuscation...:-)
--
<> Robert Geer & Donna Tomky | |||| We sure |||| <>
<> bg...@xmission.com | == == find it == == <>
<> dto...@xmission.com | == == enchanting == == <>
<> Albuquerque, NM USA | |||| here! |||| <>
In addition to other people's replies I'd like to add a few comments:
> "You cannot pass arrays by value or by reference; they can only be
> passed by name"
Arrays aside (why can't Tcl handle them the way Jim does?), I find
Tcl's concept of variables very similar to C's, and very comfortable to
think about. Forget fancy stuff like by-reference or by-name. In C,
from the programmer's perspective, there are only two kinds of access
to values: through a variable or through a pointer. Tcl emulates this
perfectly (though some will consider it bad form) by allowing access
via the variable's name as well as the variable itself. (EIAS -- by the
way, this is actually a commonly requested 'feature' in C/C++, as
witnessed by the various requests on Usenet for ways to access the
symbol table.)
Hence getting the value of a variable:
set foo
getting the value "pointed" to by the variable:
set [set foo]
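Putting those two commands together as a small, self-contained sketch (variable names invented):

```tcl
set foo bar            ;# foo holds the *name* of another variable
set bar 42
puts [set foo]         ;# prints bar -- like reading  p  in C
puts [set [set foo]]   ;# prints 42  -- like reading *p  in C
```

(Tcl's shorthand $foo is equivalent to [set foo], so $$foo-style double dereference is spelled [set $foo].)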
> I remember the days that simple "basic" could be compiled into
> extremely fast binary code.
> Perl has binary compiler, but is actually not machien code, but some
> code (virtual machine) that is included in the binary together with the
> perl opcodes.
> I thought any interpreted language could be compiled into machine code.
I used to think that too. But Tcl is so much more than just a simple
programming language. The problem is that Tcl code can basically
"interpret" itself. Consider the problem of compiling the eval or
source command. The compiler would need the data being sourced or
evaled in order to compile it. In itself this is not much of a problem,
but consider the following (very dangerous) code:
set channel [socket www.somehost.com 1337]
fconfigure $channel -blocking 1
set code ""
while 1 {
append code [gets $channel]\n   ;# keep the newline that gets strips
if {[info complete $code]} {
puts $channel [eval $code]
flush $channel
set code ""
}
}
Now, how is the compiler supposed to compile code coming in from a
socket? Would the compiled program need to include an embedded compiler
so that the code can be compiled and then executed at runtime?
<snipped the rest>
I guess so. Perl also has an eval function, which is probably why Perl
compilers embed a compiler in the exe.
If the embedded compiler were included only for situations where the
evaled code isn't a static piece of Tcl, as part of the compiler's
optimization, I think it would be a very useful compiler.
I hardly ever use eval myself, but I'm not sure what the packages I use
do.
Lisa
Buggy code can hurt the image of Tcl. However, if Tcl is the best thing
out there but its user base remains very limited -- because the shop
window full of useful things is missing and people have to google for
'something that lets me do this' -- then development of Tcl will
stagnate, hurting all of us.
Having a large user base will create more buggy code, but also more
quality code, and new Tcl applications, additions, and tools we never
dreamed of ourselves.
Compare it to Debian packages: there are 'experimental' and 'stable'
releases. Aside from making it easy for users to contribute code, one
could also make it easy for the community to give feedback on
contributions, either "positive" or "negative" (feeding a simple star
rating), with optional details in the form of bug reports.
Besides, who decides now which of the Tcl packages you can find (with
some difficulty) are quality and which are buggy? Maybe I wrote some
quality code but don't know where to publish it? Yes, you can figure
everything out with some effort, but that's the whole issue: it should
take minimal effort to participate, and there should be a central
place -- like pear.php.net.
It would be better to start with a web/wiki-based repository, before
building tools to automate some of these things.
I think packages.tcl.tk would be a good URL, with a repository similar
to pear.php.net.
It would have a web interface for browsing the packages, as well as
HTTP/FTP access to the files, so they can be easily downloaded and
installed, and later easily automated with download/install scripts.
Is something similar to http://pear.php.net also what you had in mind?
Lisa
I didn't arrive at that by false logical deduction; I said "not only
does Perl allow, it even encourages" (I mean those behind the Perl
philosophy) sloppy programming.
Obviously they don't call it sloppy themselves, but 'powerful'.
The more cryptic and 'short' the code, the more it tickles their
nerd-strings.
Rather than writing things out over multiple lines for clarity, they
aim to write it all in a single line, so their Perl code looks like
one big regular expression.
A colleague of mine had a piece of code like this:
if ( $myhash->{body}->{route}->{service}->{protocol} eq 'smtp' ) {
...
} elsif ( $myhash->{body}->{route}->{service}->{protocol} eq 'ftp' ) {
...
} elsif ($myhash->{body}->{route}->{service}->{protocol} eq 'http') {
...
}
My suggestion to write:
my $protocol = $myhash->{body}->{route}->{service}->{protocol};
and test on $protocol, rather than writing out the hash selection a
dozen times, was answered with "you don't want to use an extra variable
when you don't have to; it goes against Perl philosophy".
Whateverrrrr.. I don't understand why Perl doesn't have a 'switch'
statement.
Anyway, this is not even the best illustration. Look at any Perl code
and try to understand it just by reading it: reading C code is quite a
bit easier than reading Perl code. Nobody denies it, but they seem to
be proud of it.
Lisa
It's good for some stuff, and deeply obfuscating for other stuff.
Indeed, going by the evidence of some non-Tcl software I work with, it
takes an object system to attain the true nirvana of obfuscation where
you can stare at a piece of code for days and still not understand it at
all! :-( But a well-designed object system in a suitable domain is a
great help. I suppose it's the usual rule: use the right tool for the
job. (It's just a shame that so many programmers see OO as their only
hammer.)
Donal.
(bin) 61 % proc y {b} {return [expr {$b -1}]}
(bin) 62 % y 7
6
(bin) 63 % set x {proc y {a} {return [expr {$a + 1}]}}
proc y {a} {return [expr {$a + 1}]}
(bin) 64 % eval $x
(bin) 65 % y 7
8
(bin) 66 %
Assume procedure y is in a compiled script. When the eval statement is
executed, the compiled code would have to be discarded, and any
references (calls) to the compiled code would become obsolete. Not
impossible -- but isn't that just an interpreter we would end up with?
I haven't looked into the history of Perl's switchlessness.
I can tell you a couple of things:
A. good stylists use hashes and/or block-level
assignment to $_ or the ternary operator
to compose source that's arguably more
powerful than any C-like switching. Here's
an example for the case at hand:
for ($myhash->{body}->{route}->{service}->{protocol}) {
/smtp/ and do {...; last;};
/ftp/ and do {...; last;};
/http/ and do {...; last;};
}
This *is* Perl, and there are MANY variations
on this approach.
B. There's an existing switch extension <URL:
http://kobesearch.cpan.org/htdocs/perl/Switch.html >
which Perl 6 will largely replicate with the "given"
keyword.
I think you use it more than you think you do. Anything that takes a
block of code to execute, like the body of the "while" command, uses
eval in some sense.
while {![eof $socket]} [read $socket]
--
Darren New / San Diego, CA, USA (PST)
"I think these anchovies are spoiled.
They're not flat."
Compiling, unlike interpreting, is a very slow process. I don't mean
compiling to bytecodes -- which can be done lazily or partially, as Tcl
does -- I mean compiling to machine instructions.
Simple bytecode compilation targeting a virtual machine designed to run
your language is quite fast, as demonstrated by Perl, Python, and Tcl.
But compiling to real machine instructions is much harder, because the
compiled code also needs to do a lot of the work required to make a
real CPU tick, whereas bytecodes can expect the virtual machine (the
interpreter) to do the right thing. One example is instruction
scheduling on some RISC machines.
So my code above might actually be *slower* running as real machine
code than being interpreted, since the incoming packets would
constantly invoke the embedded compiler.
One solution to this is of course to embed an interpreter along with
the compiled code, so that anything requiring dynamic evaluation of
data can simply be interpreted. Of course, once you have embedded the
interpreter, it is very tempting to simply eval the whole code, as
freewrap, mktclapp, and starpacks do.
Eval is not the only thing that's hard for the compiler to digest.
Consider how hard it would be to compile the following bits of Tcl
code:
# Overriding a built in function/standard library:
rename puts _original_puts
proc puts {args} {
set f [open log.txt a]
_original_puts $f $args
close $f
eval _original_puts $args
}
# Completely removing a built in function:
rename open {}
# The unknown hack:
proc unknown {args} {
if {[lindex $args 1] == "="} {
upvar 1 [lindex $args 0] v
set v [eval expr [lrange $args 2 end]]
} else {
error "invalid command name [lindex $args 0]"
}
}
foo = 10
bar = 2
foo = $foo * $bar
puts $foo
# The proc that changes itself:
proc alreadyDone {} {
proc alreadyDone {} {return 1}
return 0
}
And many others. Originally I used Tcl as a quick prototyping platform
for C, since superficially the syntax looks similar and Tcl has lots of
commands which are similar to standard C functions. But the more I use
Tcl, and the more I get used to Tcl's way of thinking, the less trivial
it is to convert my Tcl code to C. Some of Tcl's functionality, like
[info] and fileevents, requires lots of scaffolding in C/C++.
> I think packages.tcl.tk would be a good url, with similar repository as
> pear.php.net.
Yes, that is great. packages.tcl.tk sounds very good. We also have to
be aware of the fact that it *must* be an eye catcher - like
pear.php.net is. People are much more easily convinced of something
when it looks good... and software, as well as the Web, evolves more
and more into product design.
BTW, I think that wiki.tcl.tk could use some polish too. A neat, good
looking interface always comes in handy.
> Is something similar to http://pear.php.net also what you had in mind?
Not initially - I used PHP sparingly until now and when I used it, I
was always comfortable with the default packages... on Debian/Ubuntu
you can apt-get them and on Windows there is this Xamp thing which
installs everything.
So I have to admit that I have only just seen pear.php.net for the
first time - but it makes a very good impression ;-).
The only question is, who should do it? Who is good at
programming/designing web sites and has access to the tcl.tk URL to
place the site there? Once it is there and announced, I am sure that
extension developers will put their packages and descriptions there.
The packages should be organized by category and the site should have a
"latest updates" section as well. Also, it would be good to have a
programs/scripts section where ready to use Tcl/Tk programs can be
browsed and downloaded.
Another point is - as seen on pear.php.net - a feedback/proposals
section where users can request extensions or new features in
extensions and where this can be discussed...
Eckhard
Eckhard Lehmann wrote:
> Lisa Pearlson wrote:
>
>
>>I think packages.tcl.tk would be a good url, with similar repository as
>>pear.php.net.
In times gone by a package repository was discussed: CANTCL in 1999, TIP 55,
> http://groups.google.com/group/comp.lang.tcl/browse_frm/thread/b1abfaf2e50d1d7c
and some more threads.
On another note:
I am averse to addon installations that bypass
the native package management of the host system (if present).
This completely sabotages the efforts of any distribution to
keep a system well patched.
( http://lwn.net/Articles/175882/ --> http://blogs.zdnet.com/Ou/?p=172 ).
uwe
While these things are true, they:
- put people off, leaving Linux for nerds only, making it harder for the
technology to escape dusty attics and change the world.
- throw up a steep learning curve, and refuse to add the option for people
to find something even if they don't know what they are looking for, by
walking through GUIs.
In similar fashion, yes, a lot is already available for TCL, but it takes
some effort to find.. and even more isn't available, or has never left
someone's dusty attic.
This type of 'nerdish' fundamentalism, that is only focused on technology,
not on the 'glitter' that is needed to leverage and support that technology,
is doomed. It's not one or the other..
The purpose of making TCL popular, is not that popularity itself, but that
it will help grow the available tools, packages and other activity that will
benefit all of us, will allow us to do more, quicker and easier.
And those who don't want to 'bother', because they believe in a lean-and-mean
Tcl only (which should always remain possible), and who don't wish to
support, or even oppose, easy access and contribution, will see yet again
that great tools and technology lose to crappy ones: Windows vs. Linux,
Perl vs. Tcl, VHS vs. Video2000.
Yes, Tcl is just another tool, and Perl is a better choice for some
purposes. But the world doesn't use Tcl for the things it does much better
than Perl, simply because they don't know about Tcl, or they get frustrated
for not being able to find the right packages because they're either hidden
or don't exist. That's a pity.. and I don't understand why the founders and
contributors for Tcl, don't hurt for this more than I do, who hasn't
invested as much energy in Tcl yet.
I'm quite convinced that making such repository is much less effort than
creating Tcl. So, it's like a great engine, without a shiny wrapper. And the
shiny wrapper is indeed the first thing that will be judged and create the
image.. And when nobody bothers to use Tcl because of it, they will never
see its true beauty and ease.
From personal experience, I am currently using mysqltcl in tcllib, that I
installed as a debian package. I find mysqltcl too limited, and I saw online
that there is a new version that uses namespaces, but I cannot install it.
First, I don't know how to install it manually. I can figure it out and
spend half a day on it to do it, but .. I didn't want to bother, like most
people.
I know that there are other mysql interface packages, but I didn't want to
struggle trying to install them, while breaking debian package manager and
versioning..
So, why did I use debian packages at all, rather than manually installing
the most recent version of tcllib package? Well, because it was easy.. and I
wanted to get my job done quickly and not be forced to learn about the
internals right away to get started.. that goes against 'easy to learn Tcl'.
And I'm much more technically minded and knowledgeable than the average Perl
user. People are getting their jobs done in Perl, even if it would take them
half the time and effort in Tcl.. but they didn't switch, because Perl was
just so much easier: easy to install, and easy to find and install one of
its huge collection of readily available packages.
I love Tcl syntax, ease of use, without losing power and flexibility, or
even much performance.. but I hate everything else about it.. I just can't
find the things I need.. Are there packages available to handle SSL/TLS? Are
there packages to easily create MIME messages and e-mail them? Are there
packages to easily parse XML data?
I'm sure that some or all of these can be answered with YES, but.. I know
for sure Perl has that, and more.. and this difference between Perl and Tcl
is not because of technological differences, but because the community is
big.
So, I'm a bit disappointed in how few people support this idea of a
repository. It looks like people have run out of time and energy, ... which
makes sense, because the smaller the community, the more each individual is
expected to do, to keep things going.. Things are getting to be harder and
harder to do, compared to other languages like php and perl, because of the
lack of readily available and easily accessible packages.
> The only question is, who should do it? Who is good at
> programming/designing web sites and has access to the tcl.tk URL to
> place the site there? Once it is there and announced, I am sure that
> extension developers will put their packages and descriptions there.
> The packages should be organized by category and the site should have a
> "latest updates" section as well. Also, it would be good to have a
> programs/scripts section where ready to use Tcl/Tk programs can be
> browsed and downloaded.
> Another point is - as seen on pear.php.net - a feedback/proposals
> section where users can request extensions or new features in
> extensions and where this can be discussed...
>
>
> Eckhard
>
I am not as good a web programmer/designer as those who do it
professionally on a regular basis, but I could get it working.. My time is
limited too, like everybody else's: I have 2 jobs, a full time one during
the week, and I run my own business with my own clients on weekends (I go
on only 4 hours of sleep a night, and if there were more Tcl packages
available, perhaps I could get more hours of sleep :)). Even so, I'd be
willing to contribute significantly to such a repository.
Perhaps there is readily available repository-like web code out there (like
discussion board and blogging web packages).
My problem is that I do not yet know enough about Tcl packages, to know how
to design the website and backend database in the best way.. I have ideas as
to how the site should look and can be browsed, but not enough about the
technical design behind it..
I imagine we start out with some kind of central FTP site, with a directory
structure based on categories, similar to pear.php.net, then a simple
website (php) that lets one browse this directory structure and download
(wget) the package.
But, .. I don't know yet how to manually install packages.. and not break
versioning info.. so I guess I need to study this more deeply, so I can come
up with (tcl based?) script that can automatically wget the package and
install it.
Lisa
> I imagine we start out with some kind of central FTP site, with a
> directory structure based on categories, similar to pear.php.net,
> then a simple website (php) that lets one browse this directory
> structure
Tcl! Tcl! :) We have the Rivet! And Tclhttpd!
> and download (wget) the package.
A binary configured [socket] is just fine. It would make the engine
independent from external tools (e.g. Windows hasn't got wget).
> But, .. I don't know yet how to manually install packages.. and not
> break versioning info.. so I guess I need to study this more deeply,
> so I can come up with (tcl based?) script that can automatically wget
> the package and install it.
That's a complicated subject indeed. I think that Tcl should move toward
unifying its packages. Yes, I'm thinking about starpacks.
--
Pozdrawiam (Greetings)!
Googie
If you have a web site that contains Tcl packages, then today you can mount
that web site via the HTTP VFS, add the path where you mounted it to
::auto_path, and package require any packages there.
As to installing those same packages on your local system, that may require
privileges that you, as the running user, do not have.
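A minimal sketch of what that might look like, assuming the tclvfs extension is installed and using a hypothetical repository URL (vfs::http::Mount is tclvfs's HTTP mount command; this needs network access at run time):

```tcl
# Hedged sketch -- http://packages.example.org/tcl is a hypothetical
# repository URL; requires the tclvfs extension and network access.
package require vfs::http

# Mount the remote web site so it looks like a local directory...
vfs::http::Mount http://packages.example.org/tcl /remotepkgs

# ...then let [package require] search it like any other library dir.
lappend auto_path /remotepkgs
# package require somePackage   ;# hypothetical package, fetched over HTTP
```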
> If you have a web site that contains Tcl packages, then today you can
> mount that web site via the HTTP VFS, add the path where you mounted it
> to ::auto_path, and package require any packages there.
>
> As to installing those same packages on your local system, that may
> require privileges that you, as the running user, do not have.
>
is there a cache VFS module one could stack on top?
uwe
FWIW, Windows has pretty much everything Linux has. The keyword to
search for is "win32". So you'd google {wget win32}.
>I love Tcl syntax, ease of use, without losing power and flexibility, or
>even much performance.. but I hate everything else about it.. I just can't
>find the things I need.. Are there packages available to handle SSL/TLS? Are
>there packages to easily create MIME messages and e-mail them? Are there
>packages to easily parse XML data?
Can you find what you're looking for here?
<URL: http://www.flightlab.com/~joe/gutter/ >
Tcl packages to do all of these things are of course available,
but it's hard to find them if you don't know where to look.
The gutter repository hopes to make them easier to find.
>So, I'm a bit disappointed in how few people support this idea of a
>repository.
Quite the contrary! If there are two things that every Tcler
agrees on, it's that (1) Tcl should have a CPAN-like extension
repository, and (2) someone else should build it :-)
But seriously, there have been a number of attempts to build
such a thing in the past, but few of them have really taken off.
(With the possible exception of the Starkit Distribution Archive,
<URL: http://mini.net/sdarchive >, which is still active, and
the now-defunct neosoft archive, which was really popular in
its day.)
--Joe English
There is a good chance of finding wget on any Linux and BSD out of the box;
I don't know about OSX, AIX, SUN*.
But Windows XP in a standard installation?
uwe
The same can be said of Tcl. So? ;-)
On the surface, this looks like a disaster waiting to happen. Today this
would never work for a commercial product, and does not seem reliable enough
for even a private program. When the Internet, last-mile Internet
connections, and web-sites start having 99.99999% uptime then it might be
useful, but that is years away.
> > Tcl! Tcl! :) We have the Rivet! And Tclhttpd!
Yes, ideally it should be done with Rivet or Tclhttpd - or Websh.
> If you have a web site that contains Tcl packages, then today you can mount
> that web site via the HTTP VFS, add the path where you mounted it to
> ::auto_list, and package require any packages there.
The point is to have a unique aggregation of all available packages
where Tcl developers can search/browse/download the extension they
actually need - compiled for their platform and ready to use by just
placing it on ::auto_path or in [myprogram].vfs/lib or elsewhere. It's
about announcing and making the extensions popular, making it *easy*
for users/developers to find them.
I don't know why or how earlier trials to do such a thing failed. But
it is really time now to do it.
> As to installing those same packages on your local system, that may require
> privileges that you as the running user does not have.
It is not necessary to install the package in a place where you don't
have write privileges, once you have *found* it ;-).
Eckhard
Robert
Funny but true!!
Robert
That is a very good start... Things that remain to do are: to put it on
a more "common" url (like packages.tcl.tk), and to provide editing
capabilities for extension developers. Search capabilities should be
available too, of course. Extension developers should be able to upload
or provide links to binary builds...
> Quite the contrary! If there are two things that every Tcler
> agrees on, it's that (1) Tcl should have a CPAN-like extension
> repository, and (2) someone else should build it :-)
I am not the best skilled person in web *design*, but I would not
hesitate to participate in building the site, on the backend or
frontend.
Eckhard
> That is a very good start... Things that remain to do are: to put it on
> a more "common" url (like packages.tcl.tk), and to provide editing
> capabilities for extension developers. Search capabilities should be
> available too, of course. Extension developers should be able to upload
> or provide links to binary builds...
Yes to the common URL. I think Jeff (or rmax?) is in charge of tcl.tk -
just ask.
If there is anything I can do (noob with Tcl) let me know.
Robert
Unfortunately for you this:
/smtp/ and do {...; last;};
/ftp/ and do {...; last;};
/http/ and do {...; last;};
is still:
for (@array)
{
    if ($_ =~ /smtp/)
    {
        ...
    }
    else {
        if ($_ =~ /ftp/)
        {
            ...
        }
        else {
            if ($_ =~ /http/)
            {
                ...
            }
        }
    }
}
You can use elsif to clean it up while still being readable.
For all your fancy jargon, Perl still follows the if/then/else model,
which seems to follow the assembly "jump" conditional model (jne, jr, je, jlt...).
Unless Perl somehow uses different CPU instructions, it's a good bet
nothing more is taking place in your switch statement.
The C switch statement works the same way in assembly. There are *NO*
performance improvements, unless you want to delineate some CPU
instructions that change the code path in a more efficient manner than
if/then/else...
robic0
This msg can't be responded to..
> that changes the code path in a more efficient manner than if/then/else...
Really depends on what machine you are running on and what compiler you are
using now does it not?
I know of one machine that was popular in the '80s and '90s whose assembly
language had a Compute CRC instruction (among many other "high level"
instructions) -- and at least one high level language compiler that
generated it (the VAX and BLISS, respectively).
> Can you find what you're looking for here?
>
> <URL: http://www.flightlab.com/~joe/gutter/ >
I could even live with the name. ;-)
And I would like to have (and help with) the package
descriptions being a bit more detailed.
Like (notorious blt fan that I am) mentioning
that blt
has vector
has math ops on vector
has a tabnotebook
has reparenting for tabnotebooks
that expect
has signal handling
that tclx
has signal handling
and things like that.
there is a lot of unadvertised functionality
and paradigms buried in some packages.
The name quite often does not tell all.
uwe
I see great potential for a very convenient user experience. It would fit
so well with the rest of the TCL experience.. Let's keep this idea hot. I
will try to look into a repository like website, study tcl packages in
more detail.. (like, do we provide packages as source code.. or do some
of them require compiling? I'm sure some do.. perl packages are
downloaded as source, then unpacked, make, make test, make install'ed).
Once the repository is there and people upload their packages, making
an install script should be little effort.. the main thing is just to
get all packages published in one place.
Darren New wrote:
Lisa Pearlson wrote:
> You should compare it to debian packages. There are 'experimental' and
> 'stable' releases. Aside from making it easy for users to contribute
> code, one could also make it easy for the community to give feedback on
> such contribution that is either "positive" or "negative" (used for a
> simple star-rating score) with optional details about 'bug
> reports'.
From your comments I guess you never really tried to contribute code to
the debian package system. And you most likely never used the stable
Debian distribution. The debian system is neither easy nor up to date.
(Still Debian is very impressive.)
> Besides, who decides now that some available tcl packages you can find
> (with some difficulty now) is quality and another is buggy? Maybe I
> made some quality code, but don't know where to publish it? Yes, you
> can figure everything out with some effort.. but that's the whole
> issue. It should be minimal effort to participate, and there should be
> a central place.
sourceforge, c.l.t and wiki.tcl.tk are very central places to
participate. Also IMHO the TIP process is unique in its simplicity and
efficiency to integrate new core features.
I think we're confusing each other. You seem to be focusing
on either semantic subtleties or performance; I'm frankly
unsure which. *My* follow-up was aimed at stylistic issues.
> B. There's an existing switch extension <URL:
> http://kobesearch.cpan.org/htdocs/perl/Switch.html >
> which Perl 6 will largely replicate with the "given"
> keyword.
Impressive. 32? different types for the switch "object"
to handle that are described with differentiated behaviour.
How do you ascertain that one of these possible types is not your foot?
uwe
Nice idea, but not needed for a first version. Getting a unified
terminology for such things takes much longer than you expect, and we
can muddle along without it[*]. As long as the description format itself
is such that we can add such features later, we'll be fine.
Donal.
[* Remember the principle that the Best is the enemy of the Good. ]
High-performance Perl does something different (forgive me if I get the
syntax wrong here, but I don't write Perl very often):
BEGIN {
    %switch = (
        smtp => sub { ... },
        ftp  => sub { ... },
        http => sub { ... },
    );
}
for ($myhash->{body}->{route}->{service}->{protocol}) {
    $switch{$_}->() if defined $switch{$_};
}
Funnily enough, Tcl 8.5 does something very similar behind the scenes
with some kinds of [switch] too...
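For comparison, here's a hedged sketch of the same dispatch-table idea written directly in Tcl (the keys and handler scripts are made up for illustration):

```tcl
# Hedged sketch: an array mapping protocol names to scripts, evaluated
# on demand -- the Tcl analogue of the Perl hash-of-subs above.
array set handler {
    smtp {set result "handled mail"}
    ftp  {set result "handled files"}
    http {set result "handled web"}
}

set proto http
if {[info exists handler($proto)]} {
    eval $handler($proto)
}
puts $result   ;# prints "handled web"
```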
Donal.
use Switch;
switch ($val) {
case 1 { print "number 1" }
case "a" { print "string a" }
case [1..10,42] { print "number in list" }
case (@array) { print "number in list" }
case /\w+/ { print "pattern" }
case qr/\w+/ { print "pattern" }
case (%hash) { print "entry in hash" }
case (\%hash) { print "entry in hash" }
case (\&sub) { print "arg to subroutine" }
else { print "previous case not true" }
}
I believe it has had this since before 2003 as the perldoc says:
"This document describes version 2.10 of Switch, released Dec 29,
2003."
HTH,
Robert
That's not necessarily true. Those of you with long memories will
probably remember TurboPascal, and that compiled to machine code pretty
quickly indeed, even on machines that weren't fast 20 years ago (i.e. my
first PC). By contrast, C and C++ have been relatively slow languages to
compile for quite a long time, largely because of the large number of
optimizations that compilers try to apply, and the idiotic system of
header file including as a way of coupling to libraries...
Donal.
Ouch! Earl, that sounds so harsh, it makes me think I misunderstand you.
I entirely agree that any reliability assessment needs to examine network
connections critically; sometime we can talk about my feelings regarding
RPC and SOA as a whole. At the same time, there's no question in my mind
that plenty of "commercial products" (hosted backup, CRM, Sun's
Grid-from-the-wire, the Microsoft Live line, eBay and Amazon SOA fixtures,
...) already exist, and work to the extent that salaries are getting paid.
Maybe Gerald has described something with more problems than benefits. I
don't see it that way, though; what makes VFS so nearly-indescribably cool
is that it makes it easy to switch between "package require ..." resolving
to a remote access (perhaps on a temporary basis, and perhaps with
intelligent caching), and some kind of more conventional local read.
My conclusion: Gerald's suggestion deserves more careful attention.
Cameron points to several Internet applications that work today.
But then, who said it had to be all internet -- you can have local
webservers on your intranet.
Also, not all applications need 99.99999% uptime -- I know several successful
commercial sites/applications that get nowhere near that.
Maybe you should start thinking outside of the "it needs to look exactly
like CPAN" box.
Donal,
The problem may well be that a large part of the people joining discussions
now have only experienced recent implementations of C/C++/C#/Java and do not
have the historic perspective to understand that the small view of the world
that they have had does not always apply to the entire world.
>>> If you have a web site that contains Tcl packages, then today
>>> you can mount that web site via the HTTP VFS, add the path
>>> where you mounted it to ::auto_list, and package require any
>>> packages there.
>> On the surface, this looks like a disaster waiting to happen. Today this
>> would never work for a commercial product, and does not seem reliable
>> enough
> Also, not all applications need 99.99999% uptime -- I know several
> successful commercial sites/applications that get nowhere near that.
Repeating my last post on this:
stack a (smart) cache module into vfs (and there is cvs-vfs);
if it's been loaded once, the cache will provide it after disconnect.
The smartness will fetch an updated version if and when available.
uwe
Certainly. My point was not that Googie's comment isn't valid. I just
know many people who think (for example) that Linux is better because it
has GIMP and Apache and ... and a whole bunch of other stuff that's been
ported to Windows already. I just thought I'd point out how one finds
such things, if one is using Windows but needs such a tool. My comment
was informational, rather than disputatious.
Yes. I elaborate the existence proof: commercial anti-virus
products do this, from what I understand, in that they work
"unplugged", but they "call home" as they can for updates.
In the Tcl world, I'm fairly sure Steve Redler, Jean-Claude,
I, and probably others have delivered applications that embed
intelligence about updating themselves by means of VFS.
>>stack a (smart)cache module into vfs ( and there is cvs-vfs),
>>if its been loaded once the cache will provide it after disconnect.
>>the smartness will fetch an updated version if and when available.
> Yes. I elaborate the existence proof: commercial anti-virus
> products do this, from what I understand, in that they work
> "unplugged", but they "call home" as they can for updates.
From what I have seen/heard, this is more like mirror or rsync,
and prone to botchups like this one,
which leads IMHO to some versioning requirement or backout mechanism
in an implementation.
>
> In the Tcl world, I'm fairly sure Steve Redler, Jean-Claude,
> I, and probably others have delivered applications that embed
> intelligence about updating themselves by means of VFS.
In the real world, Coda does this as an fs.
There should be some more distributed file systems that support
disconnect.
uwe
Cameron,
You mean there are people who have not at one time or another delivered
applications that embed intelligence about updating themselves by means of VFS?
Do they use a wire wrap tool to do their programming?
> Do they use a wire wrap tool to do their programming?
do you put your transferbooth in the living room?
OTOH:
I've built state machines that way: 7474, 7402, 1N4148,
miles of AWG32 wire, lots of prickly pins.
And I still, on occasion, use my OK-Tools WireWrap Gun.
hey, don't shove my wheelchair!
uwe
I agree with this. This is the model I have adopted for an application I am
developing. However, the apps I have written get their packages from
Tcllib, which is installed on each computer that has Tcl installed. I'm not
sure how much benefit I will gain by just having Tcllib installed on one
computer and the other computers attach to it to get the needed packages.
All computers eventually break and if that one computer goes down then the
entire system would be inoperative. Maybe that is a fair tradeoff, but
before I can justify it I would need to be gaining a lot more convenience
than just not having to install Tcllib on various computers.
> Also, not all applications need 99.99999% uptime -- I know several
> successful commercial sites/applications that get nowhere near that.
>
That's true. I was being overly dramatic.
> Maybe you should start thinking outside of the "it needs to look exactly
> like CPAN" box.
>
I really don't know what CPAN is. I just use Tcllib that I get from the Tcl
website, which appears to be a good method for grouping and distributing
packages.
Sorry.
> I entirely agree that any reliability assessment needs to examine
> network connections critically; sometime we can talk about my feelings
> regarding RPC and SOA as a whole. At the same time, there's no question
> in my mind that plenty of "commercial products" (hosted backup, CRM,
> Sun's Grid-from-the-wire, the Microsoft Live line, eBay and Amazon SOA
> fixtures,....) already exist, and work to the extent that salaries are
> getting paid.
>
Sure they work as long as the Internet is up. Just earlier today my DSL
line was going up and down from the current storm, and don't even think
about trying to get the phone company, or ISP, to examine the line
and try to determine where moisture is getting into it. However, the apps
you describe are different than having a program on your computer that, once
it is started, then has to go out over the Internet to find the appropriate
packages before it will run. That is what I understood him describing.
Yes, currently there are programs that will go out over the Internet and
determine if they need to be updated, and handle the update. But, if the
Internet is down, or the update site is down, they will still run versus a
program that will not start since it cannot get the appropriate packages.
As an example, I use the Tcl online documentation
(http://www.tcl.tk/man/tcl8.4/TclCmd/contents.htm) as I am programming and
when the site is down it then becomes a pain as I need to find my Tcl book.
The point is if I get upset when I cannot access online documentation how
are customers going to react when their programs will not even start since
they cannot get the required packages?
No, I don't put the transferbooth in the living room -- not safe from dialin
thieves if you do!
Of course, right now there is not much of anything in my livingroom (I live
in the New Orleans area -- Katrina, you know).
I know--that's only a survey. Well, look at <URL:
http://www.eweek.com/article2/0,1759,1699460,00.asp >, or
<URL: http://www.oracle.com/ondemand/index.html >, or ...
I'm not arguing that all Tcl practitioners do the same. I'm
only reporting that, from my vantage point, it's not inherently
impossible for any particular application.
Good idea -- any ideas for how this could be best presented?
I just updated BLT's description field to list more of its
features, but two things occur to me: (1) it *still* isn't any
easier to find out that BLT provides vectors unless you already
know that BLT provides vectors; and (2) maintaining a list of
features provided by each package is best done as a collaborative
effort -- collecting all that raw data is more work than I'm
prepared to do on my own.
Maybe a tagging system would work for this? (All the cool Web
apps have tag systems nowadays...)
Also -- BLT has a "tabset" widget; Tix, BWidgets, and Tile have "notebook"
widgets; mkwidgets has a "tabcontrol"; and iWidgets has a "tabnotebook".
These are all different names for the same kind of widget; a user should
be able to find references to all of the above packages when looking up
any of those keywords. (Topic maps maybe?)
Also also -- a lot of the things people are likely to look for
are provided by tcllib, but the gutter website doesn't give any
indication that one might want to look there. A feature map
would be especially useful here.
--Joe English
Get lots of these files ;-) cry for help..
Sort this into trees with:
A: package as first item
B: usage as first item
C: supp_OS as first item
D: ... ...
The best "public" solution would be a template oriented wiki.
uwe
Joe English wrote:
> >> <URL: http://www.flightlab.com/~joe/gutter/ >
> >
> >I could even live with the name. ;-)
I think an easy to remember URL is *very* important. As a Perl
developer you know to search for modules on www.cpan.org, not on
http://some.obscure.university.subdomain/~ecky/perl/stuff/. For php you
search for packages on pear.php.net, and everyone knows this. Why
should it be more complicated for Tcl?
Joe, do you think it is possible to register packages.tcl.tk as a
subdomain of tcl.tk and move your site to this domain? And do you like
this idea?
> Good idea -- any ideas for how this could be best presented?
For the start, the simple descriptions on a common url (packages.tcl.tk
;-) would do. But to make the maintenance of packages with descriptions
and features easy in the long term, a more or less sophisticated web
application should be built, with database backend (could be metakit,
maybe) and dynamic content, search and so on. Part of this application
should be a package registration system, which provides entry
capabilities for features & descriptions and enforces, or at least
encourages, the upload of binary builds and .kit files.
It is much work to build such a system, but the outcome would be very
helpful and attract new developers to Tcl. If the work is split among
several interested individuals working together as a team, that would
make the task easier as a whole. I would like to participate as well.
As the webapp framework, a Tcl solution should be used - Rivet, Tclhttpd
with the tml templating system, or Websh, or AOLserver - I don't know
which is best (although my favourite would be Rivet)...
Eckhard
gutter.tcl.tk
mhh:
packages.tcl.tk
though the meaning is obvious the problem with "packages" is that
it it is about as sexy .. (oh well i don't have a comparison handy
think of grimy folds and incontinence) need a word that sticks with tcl
there once was discussion about
cantcl
what about
toolbox, lunchbox, sesame,
boox ( Binary Or Other eXtensions )
O'Reilly seems to see tcl as monkey business and bird (kiwi?) stuff.
now what about
>
> Joe, do you think it is possible to register packages.tcl.tk as
> subdomain to tcl.tk and move your site to this domain? Respectively -
> do you like this idea?
>
>
>>Good idea -- any ideas for how this could be best presented?
>
>
> For the start, the simple descriptions on a common url (packages.tcl.tk
> ;-) would do. But to make the maintenance of packages with descriptions
> and features easy in the long term, a more or less sophisticated web
> application should be built, with database backend (could be metakit,
> maybe) and dynamic content, search and so on. Part of this application
> should be a package registration system, which provides entry
> capabilities for features & descriptions and enforces or at least
see my discussion with Joe English.
> encourages the upload of binary builds and .kit files.
provide some help in the form of *.spec files and the <debian equivalent> .. for
installation through distribution package management.
> It is much work to build such a system, but the outcome would be very
> helpful and attract new developers to Tcl. If the work is split on
> several interested individuals working together as a team, that would
> make the task easier as a whole. I would like to participate as well.
> As webapp framework a Tcl solution should take place - Rivet, Tclhttpd
> with the tml templating system or websh, or AOL server - I don't know
> which is best (although my favourite would be Rivet)...
all the webstuff/cgi i have ever done was written in shellscript,
but i'd try to help anyway.
>
>
> Eckhard
>
uwe
Tcllib's titchy. There's not much need for using live remote versions of
it. But that's definitely not universally true; some libraries are
*much* bigger than the whole of Tcllib, and can gain substantially from
being delivered "just in time".
Donal.
Registering a server and/or domain is the easy bit. Putting something
there (or at least to a staging/experimental site) is what we ought to
solve first.
Note that if we allow package upload, we *have* to have a mechanism for
access control of some kind. I trust you guys to upload only non-evil
package implementations, but I don't trust everyone in the world the
same way. :-)
Donal.
That's why we've got to keep on telling them. Help them understand that
the world is richer than they perceive it to be. I refuse to give up on
this point...
Donal.
I would rather suggest that this be something more like Gentoo's Portage.
Apart from the usual storage and dependency handling, it also has "use flags",
which, for example, state that some package is available only on some
platforms, or that on some platforms the package is still unstable.
I don't think that "making buggy packages" is a valuable argument anyway.
Presently we have a situation where both quality and buggy packages can be
found by googling. Some of them don't compile without knowing some very
specific details, as with Incr Tcl.
> Besides, who decides now that some available tcl packages you can find
> (with some difficulty now) is quality and another is buggy? Maybe I
> made some quality code, but don't know where to publish it? Yes, you
> can figure everything out with some effort.. but that's the whole
> issue. It should be minimal effort to participate, and there should be
> a central place.
I also think that the problem is not with storing packages appropriately, but
with managing them. Just one central place with a list of all available packages
for Tcl (with the possibility to fetch and compile them, just like Gentoo Linux
does); then they can be stored elsewhere. With the possibility to report
feedback/problems etc.
--
// _ ___ Michal "Sektor" Malecki <sektor(whirl)kis.p.lodz.pl>
\\ L_ |/ `| /^\ ,() <ethourhs(O)gmail.com>
// \_ |\ \/ \_/ /\ C++ bez cholesterolu: http://www.intercon.pl/~sektor/cbx
"I am allergic to Java because programming in Java reminds me casting spells"
Yes, of course. But Debian was used as an example of packaging, not as the base
idea. I think it would be better to base it on Gentoo Portage instead.
> sourceforge, c.l.t and wiki.tcl.tk are very central places to
> participate.
This will not replace a formalized distribution system with uniform
installation procedures and a bug-report system.
> Also IMHO the TIP process is unique in its simplicity and
> efficiency to integrate new core features.
This is a totally different thing.
I would rather see software.tcl.tk and updating this way:
% tcsup update
% tcsup install IncrTcl -exact 3.2
# (Will spawn wget, fetch sources, unpack, compile and install)
[TCSUP stands for Tcl Central Software Update Place - or Procedure/Point as
you like :) ]
There may be a problem on Windows; there, tcsup would first have to be installed
manually, leaving it up to the user how to use or install support tools, like
wget. On Windows, anyway, packages would be available as binaries only; for
Windows that is enough. This needs only wget, tar and gzip/bzip2 to install.
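A minimal sketch of what such a tcsup client could look like in pure Tcl. To be clear, the repository URL, proc names and archive naming scheme below are all invented for illustration; the build step is just the usual configure/make dance on Unix:

```tcl
# Hypothetical sketch of a tcsup client; the repository URL and
# archive naming scheme are invented for illustration.
package require http

set ::tcsup_repo "http://software.tcl.tk/packages"

# Fetch a source tarball from the (assumed) central repository.
proc tcsup_fetch {name version} {
    set file "$name-$version.tar.gz"
    set url "$::tcsup_repo/$file"
    set chan [open $file w]
    fconfigure $chan -translation binary
    set tok [http::geturl $url -channel $chan]
    close $chan
    set code [http::ncode $tok]
    http::cleanup $tok
    if {$code != 200} { error "fetch failed for $url (HTTP $code)" }
    return $file
}

# Unpack, configure, build and install - Unix only; on Windows one
# would unpack a pre-built binary archive into the lib path instead.
proc tcsup_install {name version} {
    set file [tcsup_fetch $name $version]
    exec tar xzf $file
    exec sh -c "cd $name-$version && ./configure && make install"
}
```

With that, `tcsup install IncrTcl -exact 3.2` reduces to a thin option-parsing wrapper around tcsup_install - no external wget needed, since the core http package does the fetching.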
> Extensions should take place there, together with a short description,
> a link out to the extension homepage and a direct download link.
And issue database.
> I think I have seen such a site somewhen, but it was out of date.
> Maintenance could be made easy with a modified wikit, that integrates
> in www.tcl.tk. I think the best way is, that extension developers
> update/maintain their section in this wiki by themselves.
I think it would be best to have one central point, where new packages with
their home sites and maintainers would be registered. Bug reports would be
redirected to them as well.
> It would also be a good starting point for installer like scripts - but
> I consider this as second requirement. The most important thing would
> be aggregation/integration of the scattered extensions.
Once it is done, this "second requirement" will not be a problem.
In this case, with apologies to Santayana, "those who do not know history
are doomed to repeat the good bits very late, if at all". 8-)
- Adrian
Very good. So it has to be easy to make some tclwget?
> On 17 Mar 2006 01:32:46 -0800, Lisa Pearlson wrote:
>> You should compare it to debian packages. There are 'experimental' and
>> 'stable' releases. Aside from making it easy for users to contribute
>> code, one could also make it easy for the community to give feedback on
>> such contribution that is either "positive" or "negative" (used for a
>> simple score for star rating) with optional details about 'bug
>> reports'.
>
> I would rather suggest that this be something more like Gentoo's Portage.
> Apart of usual storage and dependencies it has also "use flags", which, for
> example, state that some package is available only on some platforms, or on
> some platforms that package is still unstable.
Suggesting yet another system: DarwinPorts. It is written in Tcl
already, handles compilation and even has a GUI tool, also written in
Tcl/Tk (only for OS X now, but obviously quite portable).
It also has the added virtue that it downloads the actual code from its
home site, so it does not need a repository (except for the ports, which
are not much more than some info, optional patches and other meta
stuff).
Jochem
--
"A designer knows he has arrived at perfection not when there is no
longer anything to add, but when there is no longer anything to take away."
- Antoine de Saint-Exupery
> > On 17 Mar 2006 01:32:46 -0800, Lisa Pearlson wrote:
> >> You should compare it to debian packages. There are 'experimental' and
> >> 'stable' releases. Aside from making it easy for users to contribute
> >> code, one could also make it easy for the community to give feedback on
> >> such contribution that is either "positive" or "negative" (used for a
> >> simple score for star rating) with optional details about 'bug
> >> reports'.
> >
> > I would rather suggest that this be something more like Gentoo's Portage.
> > Apart of usual storage and dependencies it has also "use flags", which, for
> > example, state that some package is available only on some platforms, or on
> > some platforms that package is still unstable.
> Suggesting yet another system: DarwinPorts. It is written in Tcl
> already, handles compilation and even has a GUI tool, also written in
> Tcl/Tk (only for OS X now, but obviously quite portable).
Much better, then. Of course, compilation should be available on demand. I
hope it does not use any external tools for fetching and unpacking the
packages?
> It also has the added virtue that it downloads the actual code from its
> home site, so it does not need a repository (except for the ports, which
> are not much more than some info, optional patches and other meta
> stuff).
Gentoo's Portage does this as well. However, DarwinPorts will probably be better as a
template, since Gentoo Portage is scripted in Python.
In regard to your specific questions, the answers, as you
suspected, are all, "Yes":
<URL: http://wiki.tcl.tk/ssl >
<URL: http://wiki.tcl.tk/attachment >
<URL: http://wiki.tcl.tk/xml >
Is that because it has been "talk" for so long and no "action"?
Robert
Yes, there has been action. Steve Cassidy's CANTCL has 30-ish packages
available. I don't know why it hasn't gone any further -- it's been
online for several years.
Mainly, I am a Tcl developer, and I write TONS of databases in
Tclhttpd, and I am the lord high network engineer at a non-profit with
a really fat pipe to the Internet. Oh yeah, and I have a pretty cool
domain name: etoyoc.com (also .net, .org) that has been lodged in
google for so many years that I could put up a page about "Eskimo Pie
Sand Castles" and it would be the first item in a search the
next time my site is indexed. (Which, looking at my logs, is several
times daily.)
Here is the plan.
Instead of worrying about indexes and spec files and all that happy
crap, let's index the information into Sqlite. Develop some scripts to
synchronize one's personal repository with a central one (I nominate
me). Downloading packages is cake: use the http package. Sqlite at this
point is compiled for just about everyone's platform and happily runs
inside the interpreter.
For large collections of objects, our repository will also maintain a
tclIndex file and the paths for Tcl to find them. For every platform we
designate a directory where the packages are installed, preferably
relative to the local Tcl interpreter.
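A rough sketch of the Sqlite side of this plan - the table layout and column names here are my own guesses, not an agreed format:

```tcl
# Hypothetical package index in Sqlite; the schema is invented
# for illustration only.
package require sqlite3

sqlite3 db :memory:

db eval {
    CREATE TABLE packages (
        name     TEXT,
        version  TEXT,
        platform TEXT,
        url      TEXT,
        md5      TEXT,
        PRIMARY KEY (name, version, platform)
    )
}

# Synchronizing a personal repository with the central one could be
# little more than replaying INSERT OR REPLACE statements fetched
# over http.
db eval {
    INSERT OR REPLACE INTO packages VALUES
        ('foobar', '0.01', 'Linux',
         'http://example.org/foobar-0.01.tar.gz',
         'FC7EB2BA07710DECECC688BCEA5BB323')
}

db eval {SELECT name, version FROM packages} row {
    puts "$row(name) $row(version)"
}
```

Since the whole index lives in one Sqlite file, "syncing with the central repository" can literally mean downloading one file with the http package.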
On the developer end, we provide a website with pre-canned forms for
filling in meta-information and documentation. Man files, HTML pages,
even LaTeX and PDF documents can be auto-generated by scripts. We
provide an upload button for new versions of the code. We can even
provide automatically generated test scripts for debugging.
And to add icing to the cake, everything about the package handling
system will itself be a package and be written 100% in Tcl.
I'm in the midst of a massive re-write of my intranet application. It
uses a pile of in-house software libraries and itcl code. It will make
an excellent test case to exercise a new package management system.
(I'm currently just dumping the code base to a designated directory and
rsyncing copies to each of the servers that need it.)
I have a website that is currently indexing Magic the Gathering cards.
I'll probably have that reworked by the end of the afternoon at:
-Sean
You could have uploaded code 'approved' first, after you virus-scan it, or
perhaps even study the code first.
You could also warn users that the code they are looking at hasn't been
verified yet, or accomplish the same indirectly by allowing people to vote
on packages as to how cool/reliable/useful they are, and how many voted.
I initially expect all existing (and already used) packages to be available
in this central repository system.. that would already make life so much
easier (currently I have no idea what all is available already)...
Lisa
"Donal K. Fellows" <donal.k...@manchester.ac.uk> wrote in message
news:dvoopm$2lhv$1...@godfrey.mcc.ac.uk...
<packages>
<package name='foobar'>
<author>Your name here</author>
<version>0.01</version>
<description>
This is the fabulous foobar package that allows you to foo
and bar.
</description>
<platforms>
Windows
Solaris
HP/UX
</platforms>
<location>path or url/to/file</location>
<md5>FC7EB2BA07710DECECC688BCEA5BB323</md5>
</package>
</packages>
This would be easy for the authors to supply and the site maintainer to
modify. Alternatively, you could just have one xml file per package, so
that if you have foo.tcl then foo.xml would describe that file and the
author does everything. I kind of like that way.
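For what it's worth, reading such a descriptor from Tcl is straightforward with tdom - just one plausible choice of XML parser, and the document below is a trimmed copy of the example above:

```tcl
# Parse a per-package descriptor with tdom; the descriptor format
# follows the XML example above.
package require tdom

set xml {
    <packages>
        <package name='foobar'>
            <author>Your name here</author>
            <version>0.01</version>
            <location>path/to/file</location>
            <md5>FC7EB2BA07710DECECC688BCEA5BB323</md5>
        </package>
    </packages>
}

set doc [dom parse $xml]
set root [$doc documentElement]
foreach pkg [$root selectNodes package] {
    set name    [$pkg getAttribute name]
    ;# string(...) makes XPath return the element's text directly
    set version [$pkg selectNodes string(version)]
    puts "$name $version"
}
$doc delete
```

The same loop works unchanged whether there is one big repository.xml or one foo.xml per package.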
Then you can wrap a console and gui around it.
tan> search foo*
1. foo 1.1
2. foo 1.2
3. foobar 5.0
tan> install 1
** installing foo 1.1 **
** foo 1.1 installed **
tan> upgrade foo
** current version 1.1 **
** upgrading foo to 1.2 **
** foo 1.2 installed **
tan> search bar*
1. bar 0.5
2. bar 0.7
tan> query bar
** bar 0.5 is currently installed **
tan> upgrade bar -archive
** moving bar 0.5 to archive directory **
** installing bar 0.7 **
** bar 0.7 installed **
tan> describe bar
** bar 0.7 **
** author: someone **
** description: this is bar the foo packager **
** platforms: Windows, Solaris, HP/UX, Linux **
tan> query installed packages (or query *)
** a list of all the installed packages with versions **
tan> delete bar
** deleting bar 0.7 **
** bar 0.7 deleted **
tan>
You could use the minimalist wget [http://wiki.tcl.tk/12871], which uses
only the Tcl core, to fetch the packages. The fetch could add a
step to verify the md5 of the file, comparing it to the one in the
repository.xml file or the package's xml file. You could also use the
stronger SHA-256 instead of MD5.
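A sketch of that verification step, using tcllib's md5 package (the proc name is made up):

```tcl
# Verify a downloaded file against the checksum published in the
# repository metadata; relies on tcllib's md5 package.
package require md5

proc verify_md5 {file expected} {
    # md5::md5 -hex -file computes the hex digest of the file's
    # contents without us slurping it into a Tcl string ourselves.
    set actual [md5::md5 -hex -file $file]
    string equal -nocase $actual $expected
}
```

A SHA-256 variant would look identical with tcllib's sha256 package substituted in.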
Something like that...
Robert
or even save this info in a database ;). Metakit should be fine for a
start.
The web application could be done with Tclhttpd... I am just now writing
a "quick & go" web app using Tclhttpd and Metakit (don't have much time
for this), and I am surprised how powerful these tools are.
When more sophisticated database backends become necessary, the db
access functionality can be changed later (I am thinking of separating
database access functionality completely from the html presentation, of
course).
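As a sketch of that separation: the Tclhttpd pages would only ever call a couple of storage procs like these, so swapping Metakit for a heavier backend later stays painless. The view layout and proc names are invented for illustration:

```tcl
# Thin storage layer over Metakit so the presentation code never
# touches the backend directly; view and proc names are invented.
package require Mk4tcl

# Opening without a filename gives an in-memory database; pass a
# filename for a persistent one.
mk::file open repo
mk::view layout repo.packages {name version description url}

proc pkg_add {name version description url} {
    mk::row append repo.packages \
        name $name version $version description $description url $url
}

proc pkg_find {pattern} {
    set result {}
    foreach row [mk::select repo.packages -glob name $pattern] {
        lappend result [mk::get repo.packages!$row name version]
    }
    return $result
}
```

If a "real" database becomes necessary later, only pkg_add and pkg_find need rewriting; every page that calls them is untouched.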
Eckhard
Robert
FWIW, if you (or anyone else) wants to experiment,
the raw data for the gutter site is available here:
<URL: http://www.flightlab.com/~joe/gutter/packages.xml >
Schema is here:
<URL: http://www.flightlab.com/~joe/gutter/packages.rnc >
[ Earlier ]
>This would be easy for the authors to supply and the site maintainer to
>modify. Alternately you could just have one xml file per package; so
>that if you have foo.tcl then foo.xml would describe that file and the
>author does everything. I kind of like that way.
For the gutter, I decided early on that package authors shouldn't
be responsible for maintaining their own package records.
Rolling out a new release involves a _ton_ of grunt-work --
especially if you use SourceForge -- this would just be One
More Thing You Have To Do. It's a lot less work for a single
editor (me) to keep an eye on c.l.t., FreshMeat, SourceForge, etc.,
for new release announcements than it would be for every package
author to submit updates to a(nother) central site.
--Joe English
Robert
Isn't that one of the things that happens in response to the breaking
of the Seventh Seal? :-) (Jump that hurdle when we get to it, not
before.)
Donal.
Robert
IMO, something not unlike what happened when I left the community for
over 10 years after running the original Contrib Archive for over a year.
Either Joe hands it off (as I did), or the ball drops but is picked up
by an enthusiastic party (which I think was what happened after I
handed it off 8-).
Either way, Tcl life goes on...
- Adrian
I know I've tried for a year to do something about the fact that
comp.lang.tcl.announce is currently in limbo because no one can find a
way to contact the moderator to get him to set up a replacement/alternative
backup...
--
<URL: http://wiki.tcl.tk/ > Indescribable,uncontainable,all powerful,untameable
Even if explicitly stated to the contrary, nothing in this posting
should be construed as representing my employer's opinions.
<URL: mailto:lvi...@gmail.com > <URL: http://www.purl.org/NET/lvirden/ >
The complaints that I generally hear usually fall into these categories:
1. It's too hard to find whether there is an extension I want
2. It's too hard to find the source for an extension that I know I want
3. It's too hard to build the source for an extension that I know I want and
that I've found
4. My employer/customer/whatever doesn't want to use separately downloaded
extensions - they only want to use tcl... why don't you
put extension XYZ into the tcl core?
Finding the name of an extension, to me, always seemed like a matter
for google. If google isn't finding it, that means that it wasn't
documented well enough or that it doesn't exist, or at least is no longer
around on the internet.
Finding the source, for me, is typically the same thing - as long as
the developer is still around on the net. Google works fine for this.
However, the building of the extensions is what is painful. And that's where,
when CPAN works, its beauty lies. Not as a search engine (though the
editorially controlled categories and descriptions are useful), and not even
as a location from which to download the modules. But in the fact that a
common build technology is simple to use. And of course, the biggest failure,
in my mind, is the cases in CPAN where the module writer deviates from the
default structure and the resulting module doesn't build... During this
particular iteration of discussion of the topic, I haven't noticed
(perhaps it has yet to arrive here) anyone discuss potential solutions to
this most critical problem.
And of course, the way to make all of this appear as if the extensions were
a part of Tcl would be to have a single command which, when executed, would
download tcl and build it (or optionally download a pre-built tcl), and then,
when additional extensions are required, would download those in the same
fashion. So the entire database of software would be, in essence, one
integrated environment, where the addition of an extension isn't seen as
something separate. And of course one would want an interface where problems,
when reported, would get routed to the appropriate developer (or to a team
which would then assign them appropriately).