I have read the first article John Ousterhout wrote on Tcl. In it he mentions
that the original Tcl library was "about 27000 bytes of object code".
I would find a small, basic, version of Tcl very useful for several embedded
system test tools I have to maintain - the later versions are just too big.
Has anyone got a copy of the original sources? Or any suggestions as to where
they might be found? I can't seem to find anything earlier than version 3.
Any help would be greatly appreciated. <smile>
Richard Thompson
*********************************************************************
Richard Thompson Tel: (301) 428 7094 (inc Voicemail)
Hughes Network Systems Fax: (301) 428 2801
11717 Exploration Lane Email: rtho...@hns.com
Germantown MD 20876
**********************************************************************
"We can lick gravity but sometimes the paperwork is overwhelming."
Werner von Braun
It is time for Scriptics and the Tcl community to redefine the Tcl core.
A very small core can be defined, and everything else can be extensions.
Some deprecated commands can also be defined as automatically loaded extensions.
The list, string, file, array, and regexp commands can be moved out of the core.
They may be defined as automatically loaded extensions, brought in as needed.
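To make this concrete, here is a rough sketch in C++ against the old
string-based Tcl C API (the package name and the stub command body are
hypothetical, not real Tcl code) of what one unbundled command group
could look like:

    #include <tcl.h>

    /* Hypothetical stub standing in for the real list machinery. */
    static int LappendCmd(ClientData, Tcl_Interp *interp, int, char **)
    {
        Tcl_SetResult(interp, (char *) "lappend: stub only", TCL_STATIC);
        return TCL_OK;
    }

    /* Entry point the loader looks for: each command group moved out
     * of the core becomes a package like this, pulled in on first use
     * by the auto-load machinery. */
    extern "C" int Listext_Init(Tcl_Interp *interp)
    {
        Tcl_CreateCommand(interp, (char *) "lappend", LappendCmd,
                          NULL, NULL);
        /* ... lindex, lsort, and the rest of the group ... */
        return Tcl_PkgProvide(interp, (char *) "Listext", (char *) "1.0");
    }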
The deprecated array commands such as array startsearch, nextelement, anymore, and
donesearch should be thrown away. There are some other similar commands.
If we do not want to downsize Tcl and set a standard, Tcl may face
fragmentation later. Two versions of Tcl seem obvious: one for embedded applications
with a tiny core, the other the normal Tcl 8.0-based core.
How many people would like to have multiple Tcl cores?
Do we need a "TclCE" and a "Tcl98"?
Those are open questions.
Best Regards,
Chang
>rtho...@hns.com writes:
>It is time for Scriptics and the Tcl community to redefine the Tcl core.
>A very small core can be defined, and everything else can be extensions.
>Some deprecated commands can also be defined as automatically loaded extensions.
>The list, string, file, array, and regexp commands can be moved out of the core.
>They may be defined as automatically loaded extensions, brought in as needed.
I agree, a small Tcl core would be great for embedded applications where
the current full version is way too big.
rossc
--
Mr Ross Rodney Cartlidge, Data Network Manager, Information Technology Services
Building H08, The University of Sydney, NSW, 2006, Australia
R.Car...@isu.usyd.edu.au
Ph: +61 2 9351 5506 Fax: +61 2 9351 5001
Mobile: +61 412 551 283
Here's another vote for a small core and standard auto-load
"extensions".
Chris
--
Rens-se-LEER is a county. RENS-se-ler is a city. R-P-I is a school!
>Here's another vote for a small core and standard auto-load
>"extensions".
And another.
This seems to surface regularly for those of us who use (or want
to use) Tcl for embedded systems.
Because of the growth of the Tcl core, Tcl has become less and
less well-suited for embedded systems work ever since Tcl7.5
(when 'socket' moved into the core). I still think Tcl is smaller,
cleaner, and easier to embed than any other famous scripting
language that I can think of (with the possible exception of Python,
which I plead complete ignorance about).
Is it time to organize a volunteer effort? If such an effort was
organized, would Scriptics be amenable to incorporating the
changes back into the "official" core in a timely fashion?
Zach Frey
--
Zach Frey, Engineering Staff | Voice: (734) 975-7361
Open Networks Engineering, Inc. (ONE) | Fax: (734) 975-6940
2725 S. Industrial Highway, Suite 100 | Email: z...@one.com
Ann Arbor, MI 48104 | Web: http://www.one.com/
One concern I would have would be the run-time performance penalty (if
any) for loading these "automatically loaded" extensions.
I think I would rather have an approach that assumes that the small core
is oriented for embedding, and what's included in that core is determined
at build time, rather than run time. (Although allowing "automatically
loaded" extensions at run time might still be implemented.)
I'm probably inclined to keep all of the data types in the core, plus the
obvious commands for manipulating them (so I would include all the list
commands, for and foreach, string, etc.)
Things like clock, file, socket, trace, and format might be "less needed"
in the core.
We almost need a "relationship diagram" for the different commands, to see
how they are related and how tightly bound they are.
Certainly tracing the history of tcl back to see what was added and when
(and WHY) would say a lot about what commands are truly "core" to the
language.
Certainly, if the need for a smaller core could be met by using an older
version of the language, then no current development time would be lost in
giving developers yet something else to do.
David S. Cargo
However, tossing out the list commands from the core seems
counterproductive, since many, many commands return values
as a Tcl list. In addition, the list commands operate only
on memory, so they are not a platform dependency.
Constructing a dependency diagram is good.
Take into account dependencies at the C API level
as well. For instance, the switch command has
a -regexp option, which means that the regexp support
in the C API has to be in the core. Therefore
the regexp/regsub commands are going to just be
wrappers around the C API function and contribute
very little to the size of the core.
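For example, a script-level regexp command could be little more than the
following sketch. It calls the real Tcl_RegExpMatch() from the string-based
C API, but the command itself is a simplified stand-in, not the actual
implementation:

    #include <tcl.h>

    /* miniregexp pattern string  --  leaves 1 or 0 in the result. */
    static int MiniRegexpCmd(ClientData, Tcl_Interp *interp,
                             int argc, char *argv[])
    {
        if (argc != 3) {
            Tcl_SetResult(interp,
                (char *) "usage: miniregexp pattern string", TCL_STATIC);
            return TCL_ERROR;
        }
        int matched = Tcl_RegExpMatch(interp, argv[2], argv[1]);
        if (matched < 0) {
            return TCL_ERROR;                /* malformed pattern */
        }
        Tcl_SetResult(interp, (char *) (matched ? "1" : "0"), TCL_STATIC);
        return TCL_OK;
    }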
David
There are quite a few commands which have a regular
expression option but that does not mean that the
regular expressions cannot / should not be removed from
the core. If the regular expression stuff is not in
the core and cannot be loaded then the commands would
just have to fail.
Also what is needed on an embedded system varies from
system to system and application to application. Maybe
rather than trying to define what should be in the core
we should restructure the core into a form which is
easier to customise to the way people need it, e.g. split it
up into functional groups. The groups should be as
self contained as possible but obviously they will
have dependencies. The dependency graph *should* not
have any cycles in it, although I think that it probably
will. The source could then be restructured so that
the individual groups can be separately compiled,
possibly into their own little archives. Then you simply
link the different bits together.
When defining the groups some Tcl commands may be in
two (or more) groups. e.g. open may be in the 'file'
group but open | may be in the 'process' group.
--
Paul Duffin
DT/6000 Development Email: pdu...@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880 International: +44 1962-816880
And another.
--
<URL:mailto:lvi...@cas.org> Quote: In heaven, there is no panic,
<*> O- <URL:http://www.purl.org/NET/lvirden/> | only planning.
Unless explicitly stated to the contrary, nothing in this posting
should be construed as representing my employer's opinions.
> There are quite a few commands which have a regular
> expression option but that does not mean that the
> regular expressions cannot / should not be removed from
> the core. If the regular expression stuff is not in
> the core and cannot be loaded then the commands would
> just have to fail.
>
> Also what is needed on an embedded system varies from
> system to system and application to application. Maybe
> rather than trying to define what should be in the core
> we should restructure the core into a form which is
> easier to customise to the way people need it, e.g. split it
> up into functional groups. The groups should be as
> self contained as possible but obviously they will
> have dependencies. The dependency graph *should* not
> have any cycles in it, although I think that it probably
> will. The source could then be restructured so that
> the individual groups can be separately compiled,
> possibly into their own little archives. Then you simply
> link the different bits together.
>
> When defining the groups some Tcl commands may be in
> two (or more) groups. e.g. open may be in the 'file'
> group but open | may be in the 'process' group.
Perhaps the most useful thing is to remove *all* commands from the core.
Would that be possible? The core would then be just the parsing and
evaluation engine, and *maybe* the byte compiler (though I wish the byte
compiler could be loaded (or not) on demand). If all commands were
optional but auto-loadable an embedded systems developer could pick and
choose exactly what she needs.
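As a sketch only (today's core obviously still ships its built-ins), an
embedder facing such a commandless core would write something like:

    #include <tcl.h>

    /* Toy command so the sketch is self-contained. */
    static int HelloCmd(ClientData, Tcl_Interp *interp, int, char **)
    {
        Tcl_SetResult(interp, (char *) "hello", TCL_STATIC);
        return TCL_OK;
    }

    int main()
    {
        /* The hypothetical minimal core: parser and evaluator only. */
        Tcl_Interp *interp = Tcl_CreateInterp();

        /* Register exactly the commands this application needs. */
        Tcl_CreateCommand(interp, (char *) "hello", HelloCmd, NULL, NULL);

        Tcl_Eval(interp, (char *) "hello");
        Tcl_DeleteInterp(interp);
        return 0;
    }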
I'm far from being an expert at how/if/where/when the core
can/should/might be split up and reduced, but I've been around enough to
think it's probably very non-trivial. And yet, most probably agree it
would be a Good Thing.
--
Bryan Oakley
ChannelPoint, Inc.
> :>Here's another vote for a small core and standard auto-load
> :>"extensions".
> :And another.
> And another.
--
http://www.demailly.com/~dl/
Actually, I've seen hardware embedded programmers ask for the byte compiler
code to be optional...
There are several problems with building a core with no commands. 1. There
would be no way to add new commands if you did this <grin>. 2. Another
stressor we deal with here in clt on a regular basis is how to get one
executable with everything in it. The more that is extracted, the more
one must deal with that issue. 3. The more that is unbundled, the more
costly it will be (in terms of I/O, user and real time) to run small Tcl
programs. Sometime compare the startup 'costs' of running simple Perl
commands from the command line vs. awk, for instance. Yes, users can do a
lot more with Perl than awk, even from the command line. But try to do
something simple like printing the 1st column of every line in a file -
I would expect that awk, with its smaller size and lesser capabilities,
but with everything the simple task needs in the executable, should be
faster in general.
Another possibility is that we could do this as a consulting project
at Scriptics, but that would depend on finding an organization to
foot the bill...
-John-
Zach Frey wrote:
>
> Christopher Nelson wrote in message <363707...@pinebush.com>...
>
> >Here's another vote for a small core and standard auto-load
> >"extensions".
>
> And another.
>
> This seems to surface regularly for those of us who use (or want
> to use) Tcl for embedded systems.
>
> Because of the growth of the Tcl core, Tcl has become less and
> less well-suited for embedded systems work ever since Tcl7.5
> (when 'socket' moved into the core). I still think Tcl is smaller,
> cleaner, and easier to embed than any other famous scripting
> language that I can think of (with the possible exception of Python,
> which I plead complete ignorance about).
>
> Is it time to organize a volunteer effort? If such an effort was
> organized, would Scriptics be amenable to incorporating the
> changes back into the "official" core in a timely fashion?
>
> Zach Frey
>
--
________________________________________________________________________
John Ousterhout 650-843-6902 tel
Chief Executive Officer 650-843-6909 fax
Scriptics Corporation john.ou...@scriptics.com
The Tcl Platform Company http://www.scriptics.com
Thanks, Dr. Ousterhout.
It looks like what this needs is a volunteer. I'm probably
crazy for doing this, but ... if no one else wants to organize
this, I can. I should at least be able to provide a project web
site, coordination of effort, and some testing. I need to check
with my employer about how much code I can contribute under what
conditions.
If no one else speaks up in the next few weeks, I'll get started.
Zach
--
Zach Frey, Engineering Staff | Voice: (734) 975-7361
Open Networks Engineering, Inc. (ONE) | Fax: (734) 975-6940
2725 S. Industrial Highway, Suite 100 | Email: z...@one.com
Ann Arbor, MI 48104 | Web: http://www.one.com/
A shrunk-down version of Tcl does not have to be incompatible
with the full-size version. If the system is layered properly,
the features of the full-size version can be implemented as
modules loaded dynamically into a minimal interpreter.
E.g., I'd see layering something like this:
                          Scripts
    ------------------------------------------------
            Tk      Networking      other packages
               \         |         /
    Runtime      \       |       /
               Event loop + reactive IO
                         |
     Core Tcl language interpreter + IO abstractions
This would allow those of us who'd like to safely embed a simple
Tcl interpreter into an application to do so easily. It would
also allow those of us who want to use Tcl as an application
framework to do so. I, and I guess many readers of this
newsgroup, fall into both of those camps at one time or another.
However, it has become progressively harder to use Tcl in the
former role without modifying the core code, which is a shame,
since that is what it was originally designed for.
Cheers,
Nat.
--
+------------------------------------------+---------------------+
| Name: Nat Pryce MEng ACGI | Dept. of Computing, |
| Email: n...@doc.ic.ac.uk | Imperial College, |
| Tel: +44 (0)171 594 8394 | 180 Queen's Gate, |
| Fax: +44 (0)171 581 8024 | London SW7 2BZ, |
| WWW: http://www-dse.doc.ic.ac.uk/~np2 | United Kingdom |
+------------------------------------------+---------------------+
The second approach would be to find the functions that are strongly
needed for a smaller tcl. This recognizes that tcl is an applications
language, and that some features are absolutely required in order to
develop useful applications.
There are a couple of ways to do this. One is retrospective: look at
what applications were written before with earlier versions of tcl (where
the assumption is that fewer features were available in earlier versions).
If similar applications could be written in "small tcl," then "small tcl"
might be complete enough. The other approach would be to survey potential
"small tcl" users to see what they need.
If this effort is going to work, there might need to be a site with enough
power to run a cvs archive, a mailing list, a mailing list archive with a
query front end, and a way of posting questionnaires (and looking at the
results). (Or, alternatively, another news group instead of a mailing
list.)
It might be nice if there were a tcl program that could read other tcl
programs and determine what tcl features were used by the tcl program that
was read. One could turn such a program loose on some archives and get
an idea about what features were really used.
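A crude first cut at such a scanner (in C++) might look like the sketch
below. It only looks at the first word of each line, ignoring line
continuations, brackets, and braces, and the watch list is just an example:

    #include <fstream>
    #include <iostream>
    #include <set>
    #include <sstream>
    #include <string>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            std::cerr << "usage: tclscan script.tcl\n";
            return 1;
        }
        const char *watch[] = { "list", "lindex", "string",
                                "regexp", "socket", "clock" };
        std::set<std::string> known(watch, watch + 6), seen;

        std::ifstream in(argv[1]);
        std::string line, first;
        while (std::getline(in, line)) {
            std::istringstream words(line);
            if (words >> first && known.count(first))
                seen.insert(first);          // command position only
        }
        for (std::set<std::string>::iterator i = seen.begin();
             i != seen.end(); ++i)
            std::cout << *i << '\n';
        return 0;
    }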
Another approach would be to modify tcl itself so that it could report on
what features were used. This might be handy to have for people wanting
to do test coverage of tcl itself, and could be done independent of the
small tcl effort.
David S. Cargo
In that case, make sure at least to synchronize with existing efforts in
that direction (e.g. Laurent Demailly's stcl:
http://www.demailly.com/~dl/stcl.html)
-Alex
Thanks. I had previously looked at Laurent's STCL page, and it seemed
a bit more ambitious than what I have in mind. But I definitely will
sync up with him when I get started.
If anybody knows of any other efforts in this direction, please let me know.
Zach
--Guido van Rossum (home page: http://www.python.org/~guido/)
Wow, Python's father, no less! We are honoured!
How could we disagree with an intention to share more, and reinvent
fewer wheels? Okay, to some extent most have already been reinvented (I
mean duplicated between Tcl and Python). Still, this is obviously,
definitely an interesting standpoint for the remaining ones.
So, let's go ahead and write down the specs of 'the ultimate C-level OS
abstraction library', first by merging the Tcl and Python ones (with
duplicates in blinking red).
I have minimal knowledge of Python, so please do your part :)
Tcl features (random order, off the top of my head; I've removed those I
know are outside Python, like typelessness; some are more language- than
OS-abstraction-oriented, but anyway...):
- (as you suggest) buffered I/O (always on input, configurable on
output)
- event-driven I/O (and a more general event system, including timers
and GUI)
- non-blocking I/O
- sockets
- pipes to a spawned process
- (nearly) full descriptor abstraction (channels)
- binary format/scan
- transparent text end-of-line translation
- refcounted garbage collector
- (as you suggest) hash tables (arrays)
- O(1)-access lists
- powerful and easy to use exception mechanism
- namespaces
- introspection: control structures are first-class procs
- seamless integration of strings and lists (i.e. concatenation of 2
proper lists is a pure-string operation, no need for a specific
operator).
- loadable extensions (see the backlinking problem, cf. Paul Duffin's
upcoming beautiful solution)
- substring, glob-style, and regexp tools.
Tcl *missing* features (if you have solutions from the Python world,
these are candidates for the very first things we'd buy!!!):
- working event-driven and nonblocking I/O on pipes and serial devices
on Windows
- COM (we've got OpTcl, but it doesn't get close to the amazing
Win32COM)
The above is obviously incomplete. It is only intended as a starting
point for a full, pointwise comparison and (hopefully) cooperation.
Your turn! ;-)
-Alex
Super/Simple/Small/Safe seems to be an oxymoron to me, IMHO. I'd say simple
and smaller first, but with special attention paid to the language extension
mechanism, so super and safe can be added (statically or dynamically) later
by user (application programmer) choice!
Fly.
I suggest a Tcl and Python merger! Seriously! Source-wise and organizational-
wise. The potential is enormous! I had a dream the other day, which got me
excited for several days. I'd like to share it with everybody, when I can see
it more clearly...
[snipped, lots of ideas on what might be good candidates for teaming up]
Can this really be happening? Pinch, ouch: yes, it's not a dream...
Imagine the following "Scripting 2000 toolbox", let's call it "S2K":
- A small set of independent function groups, such as hashing, base io,
dynamic vectors, string utilities, regex, etc. Coded in C or C++.
And another one whose time has come, IMO: memory-mapped file access.
- This core is based on proven software, aiming for maximum MODULARITY.
Function groups are planned to evolve / be added right from the start.
- Choose C++ as interface language, and make very careful use of the
template mechanism to avoid inheritance and virtual members, except
where these mechanisms add value and *increase* performance.
- Use the virtual dispatch mechanism of C++ as the binding layer for
extensions. Add wrappers for C and Tcl/Python specific aspects.
- Consider using C++ exceptions, if this does not prevent portability.
A major gain would be to benefit from C++'s automatic stack cleanup.
- Use C++ class wrappers to take care of all reference-counting details.
This is easy, automatic reference counting works really well with C++.
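As a sketch of that last point (the types are made up for the example, not
Tcl's or Python's real object structs):

    // A tiny intrusive reference-counting handle.
    template <class T>            // T must expose an int refCount field
    class Ref {
    public:
        explicit Ref(T *p = 0) : ptr(p) { if (ptr) ++ptr->refCount; }
        Ref(const Ref &r) : ptr(r.ptr)  { if (ptr) ++ptr->refCount; }
        ~Ref() { release(); }
        Ref &operator=(const Ref &r) {
            if (ptr != r.ptr) {
                release();
                ptr = r.ptr;
                if (ptr) ++ptr->refCount;
            }
            return *this;
        }
        T *operator->() const { return ptr; }
    private:
        void release() { if (ptr && --ptr->refCount == 0) delete ptr; }
        T *ptr;
    };

    struct Obj { int refCount; Obj() : refCount(0) { } /* payload... */ };

Every copy, assignment, and scope exit adjusts the count automatically,
which is exactly the bookkeeping that is so easy to get wrong by hand in C.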
Before this starts to sound like a huge wish-list, I have to add that I
think it can be done without creating a monster. This should not be an
attempt to re-invent wheels yet again, but a way to leverage whatever
code there is right now, from areas as varied as Tcl, Python, Perl, and
what have you (STL, Icon, Scheme, the list is endless). A good example
would be Henry Spencer's regular expression code, or the PCRE version,
which might turn out to be an excellent choice. With templates, such
code could be parametrized on class/type, allowing the use of the same
code even with the very different object models of Python and Tcl.
The underlying assumptions are, that general-purpose scripting languages
all have very similar requirements (hashing, string handling, dynamic
vectors are all good examples of this), and that a common code base will
attract maximum attention from the most talented minds in the world to
optimize and take such generic function groups to new heights.
C++ makes very good sense IMO, because of the following considerations:
- Templates allow code to be reused no matter what the low level
implementations look like. It really is possible to reuse the
perfect sorting code, despite the fact that strings are often
implemented in very different ways (null bytes vs. <ptr,len>),
to name one example. There is no performance penalty. (See the
sketch after this list.)
- Virtual dispatch vectors are a very efficient mechanism for
late binding in statically typed / compiled languages. A more
generic dynamic extension mechanism could greatly reduce the
amount of work needed to connect *all* scripting languages to
many sorts of language-independent tools and libraries.
- C++ is a 99% superset of C, and does not sacrifice speed. As
a matter of fact, I claim that it adds opportunities for more
performance in the same way that algorithms in C can nowadays
outperform assembly code by using much smarter algorithms.
- Today's C++ compilers no longer imply huge runtimes, nor do
they limit the choice of platforms as much as they used to.
- All new software engineering developments are now taking place
in C++ instead of C, the STL being a prime example. Though a
choice of STL would need to be carefully considered, there are
probably many other valuable C++ bodies of code to pick from.
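Here is the kind of thing the first point means, as a sketch (both string
structs are invented for the example; real implementations differ):

    #include <cstddef>
    #include <cstring>

    struct CStr {                        // null-terminated flavour
        const char *s;
        const char *data() const   { return s; }
        std::size_t length() const { return std::strlen(s); }
    };

    struct PLStr {                       // <pointer,length> flavour
        const char *p;
        std::size_t n;
        const char *data() const   { return p; }
        std::size_t length() const { return n; }
    };

    // One generic lexicographic compare serves both representations.
    template <class S>
    bool lessStr(const S &a, const S &b)
    {
        std::size_t n = a.length() < b.length() ? a.length() : b.length();
        int c = std::memcmp(a.data(), b.data(), n);
        return c < 0 || (c == 0 && a.length() < b.length());
    }

A sort routine instantiated with lessStr<CStr> or lessStr<PLStr> reuses
identical source code, with the comparison inlined and no virtual-call
penalty.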
Does this rule out C? No, with suitable wrappers existing code can be
accommodated just as well. This matters especially for extensions.
What about Guile? Well, that's a different goal. Guile, as far as I
understand it, aims higher and wants to reimplement several scripting
languages on top of a Scheme-based language. Some of the function
groups mentioned here might perhaps be useful to Guile - I don't know.
Collecting existing code and making it fit into a common code base is
mostly orthogonal to the idea of building with Guile, I would expect.
How does this help Tcl? This could be used to arrive at a smaller Tcl
core. Tiny even. How about a Tcl-only version which does not support
traces (variable/command), and which does not support Tk? There are
plenty of simple scripts which do not need this extra capability. It
would not surprise me if it turns out that adding such a fundamental
idea as variable tracing could even be done through templates.
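As a purely speculative sketch (nothing like Tcl's real variable code),
the trace hook could be a template parameter, so a build without tracing
pays nothing for the feature:

    #include <iostream>
    #include <string>

    struct NoTrace {
        static void onWrite(const std::string &, const std::string &) { }
    };

    struct LogTrace {
        static void onWrite(const std::string &name, const std::string &val) {
            std::cerr << "trace: " << name << " <- " << val << '\n';
        }
    };

    template <class Trace>
    class Var {
    public:
        Var(const std::string &n) : name(n) { }
        void set(const std::string &v) {
            value = v;
            Trace::onWrite(name, value);  // compiles to nothing for NoTrace
        }
        const std::string &get() const { return value; }
    private:
        std::string name, value;
    };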
How does this help Python? By adopting this approach, Python could be
the driving force behind such new modularity, and offer many of its very
powerful data structures as spin-off. It would benefit by having even
more people involved in optimizing and improving the algorithms used.
How does this help scripting in general? How about finding new ways to
package and deploy extensions, for ALL languages? Or installers which
benefit from common code? Or more generalized test/porting tools? Or
pluggable string/container implementation experiments? Or combining
efforts to port to embedded/palmtops? The list is really endless...
But isn't scripting language X supposed to fight and destroy language Y?
For three reasons the answer is no, IMO. First of all, Tcl and Python
appear to be so different that they look very much complementary, more
like a macro language versus a data-abstraction language than anything
else. Second, there are plenty of technical similarities which might
turn out to require no trade-offs whatsoever. After AWK, Perl, Tcl, and
Python - we all *know* how to get hashing and dynamic vectors "right".
Finally, diversity is not only good, it is in fact the only way to move
forward and spur evolution. Software engineering can now be taken to a
higher level, and focus on new language designs and data model designs.
Core data structures and core algorithms really are solved, just like
the development of C runtime libraries is basically a thing of the past.
IMO, it is time to move to higher levels of abstraction.
Ok, but isn't Perl the big evil we all desperately want to go away? Nah,
there is plenty of room on this planet. The current investment in Perl
is most likely a multiple of Tcl and Python taken together. It is not
realistic to expect anyone to destroy that investment. And who knows,
people from the Perl community might well embrace this approach, and
even contribute substantially. Either way is fine with me. In terms of
technology (read: performance), Perl still has quite a lot to offer.
All of today's modern scripting languages have more commonality than
fundamental differences, despite everyone's personal preferences and
those ever-recurring discussions of language syntax and semantics.
Let's try to move forward. Let's share and combine the best there is.
Not as a goal, but simply as a side effect of striving for excellence.
Scripting, its flexible and portable solutions, will benefit from it.
-- Jean-Claude
Well, due to the many differences in syntax and semantics, I'd not put
a right lot of money on it happening any time soon. But sharing some
core components (like RE and hashing systems) will likely benefit both
communities.
Donal.
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
Department of Computer Science, University of Manchester, U.K. +44-161-275-6137
--
If you staple a penguin to the head of a gnu, the result won't catch herring...
: C++ makes very good sense IMO, because of the following considerations:
: - C++ is a 99% superset of C, and does not sacrifice speed. As
: a matter of fact, I claim that it adds opportunities for more
: performance in the same way that algorithms in C can nowadays
: outperform assembly code by using much smarter algorithms.
: - Today's C++ compilers no longer imply huge runtimes, nor do
: they limit the choice of platforms as much as they used to.
: Does this rule out C? No, with suitable wrappers existing code can be
: accomodated just as well. This matters especially for extensions.
I guess you haven't read all of c.l.t about C++ problems in the last years?
There are at least an order of magnitude more/bigger portability issues
in C++ than in C: the main program must be compiled as C++, not C, and
linking together different C++ compilers' output is in practice impossible!
And then correct loading/initialisation of C++ shared library extensions...
I've been burned by C++, so I avoid it and now use (mostly) ANSI C.
Bye, Heribert (da...@ifk20.mach.uni-karlsruhe.de)
How about USE -- a highly customizable, scalable and fully componentized
"Unified Scripting Environment"!
> - A small set of independent function groups, such as hashing, base io,
> dynamic vectors, string utilities, regex, etc. Coded in C or C++.
> And another one whose time has come, IMO: memory-mapped file access.
> - This core is based on proven software, aiming for maximum MODULARITY.
> Function groups are planned to evolve / be added right from the start.
> - Choose C++ as interface language, and make very careful use of the
> template mechanism to avoid inheritance and virtual members, except
> where these mechanisms add value and *increase* performance.
> - Use the virtual dispatch mechanism of C++ as the binding layer for
> extensions. Add wrappers for C and Tcl/Python specific aspects.
> - Consider using C++ exceptions, if this does not prevent portability.
> A major gain would be to benefit from C++'s automatic stack cleanup.
> - Use C++ class wrappers to take care of all reference-counting details.
> This is easy, automatic reference counting works really well with C++.
If you want to use C++ as a tool box builder, I could not agree more that we
should take advantage of the template mechanism for genericity and
performance; however, I'd stay away from C++ exceptions, which are an
unnecessary performance (space and time) and portability burden for a generic
library builder. The script exception mechanism itself should be
independent of/orthogonal to that of the tool box/extension language.
> The underlying assumptions are, that general-purpose scripting languages
> all have very similar requirements (hashing, string handling, dynamic
> vectors are all good examples of this), and that a common code base will
> attract maximum attention from the most talented minds in the world to
> optimize and take such generic function groups to new heights.
Agreed.
> How does this help scripting in general? How about finding new ways to
> package and deploy extensions, for ALL languages? Or installers which
> benefit from common code? Or more generalized test/porting tools? Or
> pluggable string/container implementation experiments? Or combining
> efforts to port to embedded/palmtops? The list is really endless...
We should learn from the JVM fiasco, and maintain ONE open source USE
VM/plugin (itself modularized, with language-specific parts in extensions), so
our USE bytecode really runs EVERYWHERE!
> Ok, but isn't Perl the big evil we all desperatly want to go away? Nah,
> there is plenty of room on this planet. The current investment in Perl
> is most likely a multiple of Tcl and Python taken together. It is not
> realistic to expect anyone to destroy that investment. And who knows,
> people from the Perl community might well embrace this approach, and
> even contribute substantially. Either way is fine with me. In terms of
> technology (read: performance), Perl still has quite a lot to offer.
I'd include Perl in USE too.
> All of today's modern scripting languages have more commonality than
> fundamental differences, despite everyone's personal preferences and
> those ever-recurring discussions of language syntax and semantics.
My contribution to this USE idea: make the language parsers/compilers
themselves modules/extensions/components of USE. So people can choose their
favorite syntax and semantics, while maximizing the reusability of the
underlying generic facilities (general data structures (dynamic hashes and
vectors, etc.) and tools (regexp, Tk, etc.), bytecode, plugin/VM)! Tcl/Tk is
a logical starting point because of its meta-language nature.
Imagine:
People like me can use:
Practcl (Perl in USE, Practical is the first word of Perl!) or
Pytcl (Python in USE, still pronounced as Python, a mandatory language
to learn for all my employees, if I ever have some :-)
Language purists can use:
Scl (scheme in USE, pronounced as sicko :-)
Scientists/engineers can use:
Mathcl (MATLAB in USE! make sure the M-files work under it:-)
E-Music composers can use:
Musicl (A multimedia scripting language for MIDI/MOD!)
Religious workers can use:
Miracl (to illustrate/glorify the creation of the universe of course!:-)
or we can just use the smaller Tcl - small, simple and embeddable!
And they ALL run in ALL browsers on ALL platforms! And because of the
component nature of USE only necessary components are installed for a
particular usage!
> Let's try to move forward. Let's share and combine the best there is.
> Not as a goal, but simply as a side effect of striving for excellence.
> Scripting, its flexible and portable solutions, will benefit from it.
Excellent post, Jean-Claude. Looks like my dream is also shared by others.
Let's start by building a high-performance generic scripting template
library. I hope this common dream will bring the three scripting amigos
(Guido, John and Larry) together to make the world a better place for the
next century! Heck, let's start a "Unified Scripting Environment Consortium"
(USEC), while we are at it...
Fly.
P.S. Although I haven't programmed extensively in Python, it strikes me as a
very clean and versatile scripting language (if you haven't, check it out at
http://www.python.org) (I have >10k lines of commercial coding experience in
each of C++, Perl and Tcl). The thing in Python that irks me the most is the
overloaded ":" usage (used for list slicing, dictionary definition and
indentation/nesting) which is only good for Python obfuscation and makes it
difficult to determine the correct indentation levels in an editor, unless
you have an elaborate Python mode with a Python parser builtin! Yuck!!! (real
world algorithms COULD have very complicated and deep nesting levels, make
them a nightmare to view/edit/maintain in Python) Come on, Guido, just use
the ONE TRUE curly brackets for indentation!
Please see my other post regarding "Unified Scripting Environment" (USE)
http://x4.dejanews.com/getdoc.xp?AN=413697699
Fly.
I do hope that whatever the results, the design will be such that much
of this is able to be provided as dynamically loadable libraries. In this
way, this new thread can then support the original thread's intent - a
small workable Tcl.
--
<URL:mailto:lvi...@cas.org> Quote: Saving the world before bedtime.
<*> O- <URL:http://www.purl.org/NET/lvirden/>
Has g++ been ported to all platforms currently supported by Python and
Tcl (VMS, OS/2, Macintosh, ...)?
guidova...@my-dejanews.com wrote in message
<72qn74$lmn$1...@nnrp1.dejanews.com>...
>I would like to add support for this idea from an unexpected corner:
>Python. Jean-Claude Wippler has suggested to me that there are many
>facilities that are present in both the Python and the Tcl runtime,
>and that we'd do society a favor by sharing implementation. For example,
>hash tables and buffered file I/O are present in both languages, and
>neither
>has something particularly exciting to add -- these are just fundamentals
>that each language needs. Any takers?
>
>--Guido van Rossum (home page: http://www.python.org/~guido/)
>
I don't know about OS/2 or the Mac. Since the g++ installation
notes include a "FOR VMS USERS" section, I'd assume VMS support.
While I like the dream of a Grand Unified Scripting Toolkit Object
(GUSTO), a C++ implementation would currently be limiting in the
embedded systems workplace (IMHO). ANSI C is still the common
denominator here. While many RTOS ("Real-Time OS") vendors do use
gcc/g++ as their native toolchain, many do not. For example, I
doubt that the MRI (http://www.mentorg.com/microtec/), Diab
(http://www.ddi.com/) or SDS (http://www.sdsi.com) toolchains
are going to go away soon. Tying S2K, or GUSTO, or whatever,
to a particular C++ implementation does not sound like a good
idea to me.
Some of the selling points to me of Tcl as a language were its
nice, clean, ANSI C implementation, and its relatively few
dependencies on system calls. This promised to minimize the
pain of porting Tcl to the RTOS-du-jour. Well, the system call
dependency list has grown, leading to my desire for "Smaller Tcl".
If the base implementation goes to C++, that would likely lead
to a new desire for a "Smaller, C-only Tcl".
I am still learning the intricacies of C++, so I don't know if
the following suggestion is practical or not. When factoring out
these various features that would make up S2K/USE/GUSTO, would
it be practical to have an underlying C API that is wrapped by
templatized C++ classes? That way, we could still have the best
of all worlds.
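Roughly what I am picturing, as a sketch only (it wraps Tcl's real
hash-table calls; whether this buys anything is exactly my question):

    #include <tcl.h>

    template <class V>
    class TypedHash {
    public:
        TypedHash()  { Tcl_InitHashTable(&tbl, TCL_STRING_KEYS); }
        ~TypedHash() { Tcl_DeleteHashTable(&tbl); }

        void put(const char *key, V *value) {
            int isNew;
            Tcl_HashEntry *e =
                Tcl_CreateHashEntry(&tbl, (char *) key, &isNew);
            Tcl_SetHashValue(e, (ClientData) value);
        }
        V *get(const char *key) {
            Tcl_HashEntry *e = Tcl_FindHashEntry(&tbl, (char *) key);
            return e ? (V *) Tcl_GetHashValue(e) : 0;
        }
    private:
        Tcl_HashTable tbl;           // the plain C API underneath
    };

C-only embedders would keep calling the Tcl_*HashTable functions directly;
C++ users would get type safety on top, for free.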
Zach Frey
> While I like the dream of a Grand Unified Scripting Toolkit Object
> (GUSTO), a C++ implementation would currently be limiting in the
> embedded systems workplace (IMHO). ANSI C is still the common
> denominator here.
Agreed. C compilers generate excellent code nowadays, and ANSI C is the
assembler of the XXI century...
> I am still learning the intricacies of C++, so I don't know if
> the following suggestion is practical or not. When factoring out
> these various features that would make up S2K/USE/GUSTO, would
> it be practical to have an underlying C API that is wrapped by
> templatized C++ classes? That way, we could still have the best
> of all worlds.
Yes it could be done. I, however, think we have another solution on our
hands, powerful, open source, clear, clean and all that:
Eiffel. SmallEiffel (aka GNU Eiffel) is an Eiffel to ANSI C compiler (also
to JVM), extremely fast (astonishingly fast, actually) and easy to use.
Eiffel, as a language, has a nice object model, design by contract, which
is powerful and clean, and a small core language plus powerful libraries,
all implemented in Eiffel.
You can check by yourself at http://www.loria.fr/SmallEiffel
Although I don't usually program in Eiffel (yet), I like it very much; it
is really clear and clean, with nice models, and safe, self-documenting code
can be written easily. And the libraries are starting to grow fast.
Just some food for thought...
Best regards,
David@
PS- just a note, SmallEiffel is written in Eiffel... D@
--
<sig> <who> David Suárez de Lis
<uri> mailto:ak...@fantasia.usc.es
<from><institution> University of Santiago de Compostela
<country> SPAIN <federation> EU </from>
</sig>
I don't really want a monolithic unified API/Object, which is against the idea
of scalability. That's why I proposed the scalable, componentized "Unified
Scripting Environment", which is different in philosophy from any existing
scripting architecture trying to provide a unifying API.
> embedded systems workplace (IMHO). ANSI C is still the common
> denominator here. While many RTOS ("Real-Time OS") vendors do use
> gcc/g++ as their native toolchain, many do not. For example, I
> doubt that the MRI (http://www.mentorg.com/microtec/), Diab
> (http://www.ddi.com/) or SDS (http://www.sdsi.com) toolchains
> are going to go away soon. Tying S2K, or GUSTO, or whatever,
> to a particular C++ implementation does not sound like a good
> idea to me.
Although true to some degree, I think the purpose of our toolkit is mostly for
the next century. We already have a C++ standard. We can safely assume a
popular subset of C++ facilities, especially potential performance enhancements
like templates (which are like type-safe macros), will be available for such
platforms "real soon now". g++ is already good enough and a good candidate.
> Some of the selling points to me of Tcl as a language were its
> nice, clean, ANSI C implementation, and its relatively few
> dependencies on system calls. This promised to minimize the
> pain of porting Tcl to the RTOS-du-jour. Well, the system call
> dependency list has grown, leading to my desire for "Smaller Tcl".
> If the base implementation goes to C++, that would likely lead
> to a new desire for a "Smaller, C-only Tcl".
This is not a really appropriate analogy. I think the motivation for a smaller
Tcl core is not really about the size per se; it's about a general unwillingness
to drag around unnecessary features for simple usages. The thing people
really want is scalability, up AND down. The appropriate/smart usage of
templates can actually help the compiler optimize away unused logic and storage,
and make the size of a final component for a particular platform actually smaller
than the C-only implementation. We have gained a lot of experience with
templates from toolkits like STL and ATL.
> I am still learning the intricacies of C++, so I don't know if
> the following suggestion is practical or not. When factoring out
> these various features that would make up S2K/USE/GUSTO, would
> it be practical to have an underlying C API that is wrapped by
> templatized C++ classes? That way, we could still have the best
> of all worlds.
Unfortunately, we can't have the best of both worlds with templates. A good
template class library assists an optimizing compiler in many ways, since
template instantiation happens at compile time. Using an underlying C API
defeats most of the performance benefit of templates, making them just type-safe
wrappers around unsafe void* C APIs, which isn't really worth the effort, IMHO.
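The classic small-scale illustration of the difference, using only
standard library calls:

    #include <algorithm>
    #include <cstdlib>

    extern "C" int cmpInt(const void *a, const void *b)
    {
        int x = *(const int *) a, y = *(const int *) b;
        return (x > y) - (x < y);
    }

    void sortBoth(int *v, int n)
    {
        std::sort(v, v + n);                    // comparison inlined
        std::qsort(v, n, sizeof(int), cmpInt);  // indirect call per compare
    }

The template version lets the compiler see through and inline the
comparison; the void* version forces a function call for every compare,
which is the price an underlying C API imposes.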
Fly.
>In article <73ehp8$og9$1...@nnrp1.dejanews.com>,
Good, and I agree. The architecture of Small Tcl should be scalable up and down.
The current Tcl core is not scalable down. That is the problem. I do not think
adding threads to the core is important.
>This is not a really appropriate analogy. I think the motivation for a smaller
>Tcl core is not really about the size per se; it's about a general unwillingness
>to drag around unnecessary features for simple usages. The thing people
>really want is scalability, up AND down. The appropriate/smart usage of
>templates can actually help the compiler optimize away unused logic and storage,
>and make the size of a final component for a particular platform actually smaller
>than the C-only implementation. We have gained a lot of experience with
>templates from toolkits like STL and ATL.
>> I am still learning the intricacies of C++, so I don't know if
>> the following suggestion is practical or not. When factoring out
>> these various features that would make up S2K/USE/GUSTO, would
>> it be practical to have an underlying C API that is wrapped by
>> templatized C++ classes? That way, we could still have the best
>> of all worlds.
C++ is good for implementing libraries. But I do not think it is quite useful
for the core. C is a very good choice for the small core.
>Unfortunately, we can't have the best of both worlds with templates. A good
>template class library assists an optimizing compiler in many ways, since
>template instantiation happens at compile time. Using an underlying C API
>defeats most of the performance benefit of templates, making them just type-safe
>wrappers around unsafe void* C APIs, which isn't really worth the effort, IMHO.
>Fly.
Chang
That kite won't fly, since the time/effort cost of reimplementing
loads of currently-working code from C into Eiffel and getting an
efficient and correct implementation out of the translator back-end
are real killers.
Any unification effort between the various scripting systems
implementations should start from code that works as of *now*. It is
the only way of delivering anything sane within a reasonable length of
time.
C++ is a bad idea because:
o We can't use the C++ standard library. Too buggy across different
versions of it - we don't want to have to wait for a couple of
years for properly compliant versions of the library to be
released by compiler vendors. It would also add bloat, which is a
real problem in some applications.
o We can't use templates because they are too often implemented
badly or even have only a subset of the functionality of the
standard. I believe some compilers even omit support for
templates altogether. (See the comments above about vendors and
bloat.)
o We can't use exceptions because they are too often implemented
badly. (I seem to be repeating myself...)
o Objects don't help much when you're trying to make a good regexp
engine...
o Overloading (especially of operators) is actively a bad idea, as
it makes it significantly more difficult to maintain the library.
That would be a Really Bad Thing.
This leaves us with only really the features that are present in C
anyway, so we might as well use C and be sure that no developer with
more desire-for-Kewl-Feetyorz than sense will make life difficult for
everyone else. ANSI C++ is far too immature at this point to be worth
it, and other variants of C++ vary too much (in areas that are
interesting to a project such as this) to be properly useful.
Remember, we want to have a basic common lib inside a couple of months
and we want it to continue to work for years and be portable across
the maximum number of platforms - this really restricts our options a
lot.
Maybe, maybe not. How does gcc 2.8 (or egcs) score on these issues?
> o Objects don't help much when you're trying to make a good regexp
> engine...
Correct. And there's no need to reinvent wheels. Just wrap the C code.
> [...] ANSI C++ is far too immature at this point to be worth it
How so? C++ *is* C (except for a few details). I've used C++ for some
10 years now, and am deeply puzzled by C vs. C++ discussions...
While staying out of the three first objections you raise, I've been
able to get quite far with C++ on lots and lots of platforms. The
object side of things, virtual member calls, constructors/destructors,
and inlining make the transition still *very* much worthwhile, IMO.
Even without STL/templates and exceptions, if needs be.
> Remember, we want to have a basic common lib inside a couple of months
> and we want it to continue to work for years and be portable across
> the maximum number of platforms - this really restricts our options a
> lot.
I don't quite see how. Gcc/egcs, plus perhaps MSVC and/or CodeWarrior
are not going to go away - and they'll get even better still, I expect.
This is not a proposal to *reduce* anyone's options. It is a way to
offer something in addition to what is already happening. With the
mechanisms added by C++ (and I hope STL, but that's clearly a separate
decision), I expect that we could come up with a very modular structure,
which will scale in several dimensions.
Btw, my approach would be to work from existing code - and gradually
adjust sections. Even very fundamental changes can be tackled this way,
given the proper tools and source code browsers.
-- Jean-Claude
Peter Allworth.
Donal K. Fellows wrote:
> In article <73h5v5$sh4$1...@nnrp1.dejanews.com>, <fl...@my-dejanews.com> wrote:
> > Although true to some degree, I think the purpose of our toolkit is
> > mostly for the next century. We already have a C++ standard. We
> > can safely assume a popular subset of C++ facilities, especially
> > potential performance enhancements like templates (which are like
> > type-safe macros), will be available for such platforms "real soon
> > now". g++ is already good enough and a good candidate.
>
> C++ is a bad idea because:
>
> o We can't use the C++ standard library. Too buggy across different
> versions of it - we don't want to have to wait for a couple of
> years for properly compliant versions of the library to be
> released by compiler vendors. It would also add bloat, which is a
> real problem in some applications.
>
> o We can't use templates because they are too often implemented
> badly or even have only a subset of the functionality of the
> standard. I believe some compilers even omit support for
> templates altogether. (See the comments above about vendors and
> bloat.)
>
> o We can't use exceptions because they are too often implemented
> badly. (I seem to be repeating myself...)
>
> o Objects don't help much when you're trying to make a good regexp
> engine...
>
> o Overloading (especially of operators) is actively a bad idea, as
> it makes it significantly more difficult to maintain the library.
> That would be a Really Bad Thing.
>
> This leaves us with only really the features that are present in C
> anyway, so we might as well use C and be sure that no developer with
> more desire-for-Kewl-Feetyorz than sense will make life difficult for
> everyone else. ANSI C++ is far too immature at this point to be worth
> it, and other variants of C++ vary too much (in areas that are
> interesting to a project such as this) to be properly useful.
> Remember, we want to have a basic common lib inside a couple of months
> and we want it to continue to work for years and be portable across
> the maximum number of platforms - this really restricts our options a
> lot.
>
[..]
> > Eiffel, as a language, has a nice object model, design by contract, which
> > is powerful and clean, and a small core language plus powerful libraries,
> > all implemented in Eiffel.
>
> That kite won't fly, since the time/effort cost of reimplementing
> loads of currently-working code from C into Eiffel and getting an
> efficient and correct implementation out of the translator back-end
> are real killers.
>
> Any unification effort between the various scripting systems
> implementations should start from code that works as of *now*. It is
> the only way of delivering anything sane within a reasonable length of
> time.
This view is not uncommon and needs correction - too many people, who
often only have a passing knowledge of Eiffel, have the view that Eiffel
is an island and that C++ is the only way to interface C to the OO world.
Eiffel interfaces C (``makes good glue'') in a way much better than C++.
A relavant extract from a Bertrand Meyer (the Eiffel language designer)
article might help:
%%
The Eiffel view is that we should have a language that is
object-oriented all the way through, while providing clear
and clean communication paths with the non-O-O world, in
particular the C world.
-- Bertrand Meyer, http://www.elj.com/eiffel/bm/eiffelview/
%%
A quote from another article:
%%
The [Eiffel] language is pure O-O, but the implementation relies on
C. This is in fact the best way to combine O-O and C: don't mix the
paradigms at the language level (which is the best way to get much
confusion and lose the principal benefits of object technology);
instead, keep a clean, simple, consistent O-O language; and rely on
the ubiquity, portability and efficiency of C compilers to build on
a solid technology base and take advantage of the mass of C and C++
code available.
-- Bertrand Meyer, http://www.elj.com/eiffel/bm/hp-project/
%%
The fact that Eiffel has just become SWIGGable (see the SWIG home page,
http://www.swig.org/) should also further reinforce the fact that Eiffel
does interface with the external (C) world and that there is *not* a need
to re-implement everything in Eiffel .. you can leverage from past efforts.
With time, there will come the opportunity to move C code to Eiffel, but
often this might not be necessary or even preferable.
I hope this helps clarify the situation.
Geoff Eldridge -- ``Eiffel, Back to the Future''
-- ge...@elj.com
-- Eiffel Liberty Daily: http://www.elj.com/
In article <Pine.LNX.3.96.98112...@fantasia.usc.es>,
David S de Lis <ak...@fantasia.usc.es> wrote:
> On Tue, 24 Nov 1998 z...@one.com wrote:
>
> > While I like the dream of a Grand Unified Scripting Toolkit Object
> > (GUSTO), a C++ implementation would currently be limiting in the
> > embedded systems workplace (IMHO). ANSI C is still the common
> > denominator here.
>
> Agreed. C compilers generate excellent code nowadays, and ANSI C is the
> assembler of the XXI century...
>
> > I am still learning the intricacies of C++, so I don't know if
> > the following suggestion is practical or not. When factoring out
> > these various features that would make up S2K/USE/GUSTO, would
> > it be practical to have an underlying C API that is wrapped by
> > templatized C++ classes? That way, we could still have the best
> > of all worlds.
>
> Yes it could be done. I, however, think we have another solution on our
> hands, powerful, open source, clear, clean and all that:
>
> Eiffel.
Cameron Laird and Kathryn Soraiz, in their SunWorld `Regular Expressions'
article, provide a current review of Eiffel in their Nov 15 update titled:
Getting acquainted with Eiffel
http://www.sunworld.com/swol-11-1998/swol-11-regex.html#2
> .. SmallEiffel (aka GNU Eiffel) is an Eiffel to ANSI C compiler (also
> to JVM), extremely fast (astonishingly fast, actually) and easy to use.
This paper describes some of the optimisations you get:
http://www.elj.com/elj-win32/ooplsa97-se-paper.html
or in full (pdf format):
http://www.loria.fr/projets/SmallEiffel/papers/papers.html#OOPSLA97
Eiffel interfaces C (``makes good glue'') in a way much better than C++:
http://www.elj.com/eiffel/bm/eiffelview/
%%
The Eiffel view is that we should have a language that is
object-oriented all the way through, while providing clear
and clean communication paths with the non-O-O world, in
particular the C world.
-- Bertrand Meyer
%%
> Eiffel, as a language, has a nice object model, design by contract, which
> is powerful and clean, and a small core language plus powerful libraries,
> all implemented in Eiffel.
>
> You can check by yourself at http://www.loria.fr/SmallEiffel
>
> Although I don't usually program in Eiffel (yet), I like it very much; it
> is really clear and clean, with nice models, and safe, self-documenting code
> can be written easily. And the libraries are starting to grow fast.
Win32 users might be interested to check out the elj-win32 SmallEiffel
distribution, which has a convenient IDE called SEEd, developed by Steven
White, that offers class and error navigation along with syntax highlighting:
http://www.elj.com/elj-win32/seed/
The next release of SmallEiffel is just about ready and this is the plan
for the next release of elj-win32:
http://www.dejanews.com/getdoc.xp?AN=399577958
> Just some food for thought...
.. and some more ..
Eiffel Liberty: http://www.elj.com/
Object Oriented Software Engineering in Eiffel by Jean-Marc Jezequel
http://www.irisa.fr/pampa/EPEE/oosewe.pdf
(The first 4 or so chapters from his excellent book)
Eiffel: An Advanced Introduction by Alan Snyder and Brian Vetter
http://www.elj.com/eiffel/intro/
Object Oriented Programming in Eiffel by Robert Rist
http://www.socs.uts.edu.au/~rist/eiffel/
(A complete online version of the 2nd Edition of his Prentice Hall
Book)
> Best regards,
> David@
>
> PS- just a note, SmallEiffel is written in Eiffel... D@
The Perl6 porters (Project Topaz) decided against using SmallEiffel
in favour of C++ because they believed it was inappropriate for compiler
development and should only be used for application development, even
in the face of the above fact. As Chip said:
%%
As a result of hopping around this book OOSC2 (no one could read
the whole thing in less than a week), I have new respect for
Eiffel. Meyer provides good reasons for his decisions. For general
application development I now rank Eiffel well above C++.
-- Chip Salzenberg, 28 Aug 98
%%
I rank Eiffel above C++ now for general application development.
Eiffel is great. No sarcasm here.
-- Chip Salzenberg, 29 Aug 98
%%
I don't want to revive the whole E*ff*l discussion, but: Perl is
more like an operating system than it is like an application. It's
a vital part of the infrastructure for millions of programmers;
each millisecond we save will be repaid million- or billionfold.
-- Chip Salzenberg, 23 Nov 98
%%
The other reason Eiffel was `deep-sixed' in favour of C++ was a design
requirement for dynamic linking:
%%
John Tobey: Cool! Will that sort of thing (my-overloading)
be implementable as a dynamic module?
Chip Salzenberg: That's a design requirement -- one that
deep-sixed E*ff*l, actually.
-- Perl6-porters, 23 Nov 98
%%
GNU Eiffel is now very close to having such a facility through the
guiding efforts of Patrick Doyle:
--8<----
This is the SEDL project: SmallEiffel Dynamic Linking, which
requires a little more open-source effort (that Topaz could
have brought) to polish it off properly.
* Readme: http://www-ug.eecg.toronto.edu/~doylep/README
* tarball: http://www-ug.eecg.toronto.edu/~doylep/dynamic.tar.gz
-->8----
It has been interesting to watch the perl6-porters mailing list:
http://www.egroups.com/list/perl6-porters/info.html
They have given C++ compilers a lot of scope to become a whole lot more
compatible than they currently are. There are also some concerns already
about loss of efficiency due to the use of inheritance and virtual tables.
With SmallEiffel you just don't get these concerns - see the above paper.
Any community wanting to develop/move their underlying implementation
language from C to C++ (the Python people are already thinking about
this for Python 2) would do well to join the perl6-porters mailing list.
SmallEiffel remains an option well worth investigating and is IMHO ideal
for compiler development, contrary to the perl6-porters' decision.
Geoff Eldridge -- ``Eiffel, Back to the Future''
-- ge...@elj.com
-- Eiffel Liberty Daily: http://www.elj.com/new/
Sorry Jean-Claude, here I'm 100% with Donal.
I don't want to replay a St-Barthelemy. I have only concrete experience
of how much C++ written by others (which would be the general case with
Tcl, right?) can be the vehicle of extremely bad design. Why is C++
different from C in this regard? Because with an OO language the
programmer is induced into writing an extra layer which is left to the
"user" (I mean client of his library) in the C case: the class/instance
layout. The key is that it demands much wisdom to get it right the first
time in most cases, especially when you are designing within a framework
loaded with interdependencies. The net result is that a C library
written by an average programmer can be used reasonably (because its
features are more likely to be atomic), while a C++ set of classes
written by an average OO-freak is likely to be a nightmare (remember the
SunXTL case).
Summary: building a small, beautiful, universal... thing is already
enough work. Fighting the OO war can be postponed until after that.
-Alex
Volker
Active Template Library. A set of templates and
helpers that makes it easier to write ActiveX/COM
components in C++.
someone else wrote:
> Active Template Library the latest rubbish from M$. Supposed to be
> 'lighter' than MFC. In practice it's just more of the obscurantist
> hypeware that keeps M$ in business.
MFC is a GUI framework.
ATL is a COM support library.
Two different things.
(given Tcl/Tk's excellent portability, one would
have thought that Tcl'ers would be more tolerant
than this. but nevermind...)
/F
MFC does contain a COM support part; that's where comparison with ATL
makes sense. In *that* context, it can be said that it is somewhat
lighter (and more powerful: see Free-Threaded model), but not quite
enough to get out of the 'obscurantist hypeware' category. So Robin is
right.
-Alex
In article <73lko0$fq2$1...@nnrp1.dejanews.com>, <ge...@elj.com> wrote:
> In article <73jvnu$14r$1...@m1.cs.man.ac.uk>,
> fell...@cs.man.ac.uk (Donal K. Fellows) wrote:
>> That kite won't fly, since the time/effort cost of reimplementing
>> loads of currently-working code from C into Eiffel and getting an
>> efficient and correct implementation out of the translator back-end
>> are real killers.
>>
>> Any unification effort between the various scripting systems
>> implementations should start from code that works as of *now*. It is
>> the only way of delivering anything sane within a reasonable length of
>> time.
>
> This view is not uncommon and needs correction - too many people, who
> often only have a passing knowledge of Eiffel, have the view that Eiffel
> is an island and that C++ is the only way to interface C to the OO world.
I did not say that. I did not (mean to) imply that. For the record,
I believe that C++ is a really awful interface between C and the OO
world (I've seen worse, but that's a whole 'nother moan...)
> Eiffel interfaces C (``makes good glue'') in a way much better than C++.
> A relavant extract from a Bertrand Meyer (the Eiffel language designer)
> article might help:
Not really. The problem is that we are not looking to start a new
project to produce a whole collection of facilities that can be used
as a common core library by new scripting languages, but rather we are
looking to share common pre-existing features between existing
scripting languages that already have well-engineered and very heavily
tested implementations already, in order that we can maximise the
solidity and efficiency of the language systems based on it.
At least with the Tcl library, you'd do well to remember there is a
full test suite with it that checks the correctness of the
implementation as built. Redeveloping all that is not the sort of
thing that anyone wants to do, and considerations like that are among
my chief concerns at this point.
[ Good quotes elided ]
> The fact that Eiffel has just become SWIGGable, see the SWIG home page
>
> http://www.swig.org/
>
> should also further reinforce the fact that Eiffel does interface with
> the external (C) world and that there is *not* a need to re-implement
everything in Eiffel .. you can leverage past efforts. With time,
there will come the opportunity to move C code to Eiffel, but often this
might not be necessary or even preferable.
I do not dispute that Eiffel can work with C. I dispute that *any*
reimplementation can achieve the consistent quality, efficiency,
testability and documentation of those components on the critical
paths of major scripting languages in a couple of months. I
*seriously* dispute the sanity of anyone suggesting that such a
project be undertaken unless they justify very carefully the reasons
for such an action.
A promise that everything is going to be great doesn't count for
masses when everything is already great anyway! :^)
> I hope this helps clarify the situation.
It doesn't, because you've completely misunderstood where we're coming
from. That makes things a little tricky.
As a matter of fact, I'm in the middle of writing a compiler in Eiffel,
and I must say that even without the language advantages, I would have
chosen it for efficiency alone. I'm free to use any kind of abstraction
and OO modelling I want, confident that the compiler will optimize
it away unless it actually requires runtime decisions.
Far from the image that Eiffel seems to have of producing huge, bloated
executables, I now use it precisely because SmallEiffel produces more
efficient C code than I could ever write by hand. SmallEiffel completely
separates readability/maintainability from efficiency: you
write your code in a very readable/maintainable way, and the compiler
produces this horribly ugly C code that runs like lightning.
-PD
--
--
Patrick Doyle TINCC
doy...@ecf.toronto.edu
Oops! Pressed send instead of edit...
I should emphasise that this is not an attack on Eiffel. It is an
attack on any reimplementation from scratch of core Tcl (and Python if
I remember the rest of this thread correctly) services. The current
standard is so high that major work will almost inevitably introduce
more bugs than it fixes. I don't fancy 6-12 months of fixing bizarre
and subtle bugs that were first stamped out several years ago, or that
never existed in the previous implementation.
As I get older, I find my programming is getting more and more
defensive...
How long has that been out? How widely deployed is it? Do we really
want to bind all and sundry to a particular development tool?
>> o Objects don't help much when you're trying to make a good regexp
>> engine...
>
> Correct. And there's no need to reinvent wheels. Just wrap the C code.
Just leave it unwrapped and working fine. I don't want to see wheels
within wheels either.
>> [...] ANSI C++ is far too immature at this point to be worth it
>
> How so? C++ *is* C (except for a few details). I've used C++ for some
> 10 years now, and am deeply puzzled by C vs. C++ discussions...
The subset that is stable is mostly not very useful, and some people
(a fair proportion of the users/developers) do not have access to a
good C++ implementation.
> While staying out of the three first objections you raise, I've been
> able to get quite far with C++ on lots and lots of platforms. The
> object side of things, virtual member calls, constructors/destructors,
> and inlining make the transition still *very* much worthwhile, IMO.
> Even without STL/templates and exceptions, if needs be.
OO buys little unless it is designed in from the beginning. We want a
solution sooner than *that*. And inlining makes debugging orders of
magnitude harder. Hard-won experience tells me that.
We should write C, as it is a *known*good*implementation*route*.
Comprendez?
> [...] We should write C, as it is a *known*good*implementation*route*.
If I may summarize it all, you're saying: let's stick with what we have.
No one is stopping you, of course.
You also seem to imply that using C++ (or Eiffel) is a complete rewrite.
This is not something I've said, I'm sorry if that is how it sounded.
All I've wanted to point out is that something "other than C" has helped
*me* to write more robust, more modular, and much larger pieces of code.
(I'd like to write "beyond C", but I'm afraid it would be misunderstood)
Anyway, the voices on c.l.t so far almost unanimously say: "stick to C".
-- Jean-Claude
egcs is pretty good, but unfortunately C++ compilers do not
"mix and match" very well, mainly due to incompatible "name mangling"
schemes.
So if (for example) a vendor gives you a library compiled with
Sun C++, you cannot link it with g++ generated code. :-(
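The usual partial workaround is to keep the boundary between differently
compiled code down at plain C linkage. A minimal sketch of the standard
header idiom (the library name and functions below are made up for
illustration):

    /* mylib.h -- a hypothetical header usable from both C and C++ */
    #ifdef __cplusplus
    extern "C" {    /* these symbols get C linkage: no name mangling */
    #endif

    int MyLib_Init(void);
    int MyLib_DoWork(int count);

    #ifdef __cplusplus
    }
    #endif

Of course this only helps for a C-level API; it does nothing to make two
compilers' classes, vtables or exceptions interoperate.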
>While staying out of the three first objections you raise, I've been
>able to get quite far with C++ on lots and lots of platforms. The
>object side of things, virtual member calls, constructors/destructors,
>and inlining make the transition still *very* much worthwhile, IMO.
>Even without STL/templates and exceptions, if needs be.
I also like the nice features of C++. I wrapped some of the Tcl C
library and found it very convenient and less error-prone, especially
relating to memory management.
Mark.
ma...@usai.asiainfo.com
Mark Harrison at AsiaInfo Computer Networks,
Beijing, China
For the Tk-core I'd like to have a little class library which makes it
easier to write a Tk-widget from scratch.
To model the widget hierarchy, C++ would fit much better than the
current C-implementation with the "horror-cast-technology"
(that little difference between TkWindow* and Tk_Window you know)
and it would make the implementation *smaller*.
If I look through the Tk-widget code I get this déjà vu feeling ...
(type "grep SetBackground tk8.0/generic/*.c" ).
To extend a Tk-C-widget with some features I have two ways:
1. creating a patch (if I want to have pain with every new version)
2. copying the original and making my private widget from it; as a bad
side effect this blows up the code...
If I had a choice between using C++ for the Tcl-core or the Tk-core
I'd vote for the Tk-core. Oh , and the same opinion if someone
speaks about "C" vs. "Java"
Just my 0.02 DM
Regards, Leo
--
Leo Schubert, Brueckner&Jarosch Ing.-GmbH Erfurt, Germany 99084
Andreasstr. 37, TEL +49=361-21240.14, FAX .19, EMAIL l...@bj-ig.de
I'm with you toward the end of the spectrum that esteems stuff that's
working right, and doing it now. Maturity of a software technology is
worth a lot to me.
I intended to insert a paragraph here about interfaces vs.
implementations, and what John has achieved with Tcl. I haven't figured
out yet how to say it without risking more contention than is my aim, so
I'll leave it for another time. What I can contribute, though, are some
background elements that might interest you, Donal.
Please don't see the contributors to this thread as callow interlopers
willing to sacrifice Tcl for problematic gains. They're operating in the
context of these realities:
* Perl is on schedule for a re-implementation in C++ from C.
* Python has been re-implemented with surprising, even shocking,
  success, in Java, from C. Pythonia plans to maintain both of these
  implementations on distinct development tracks.
* Among the design decisions for Python 2.0 still to be decided is
  language of implementation. It likely will be C--but that's not
  certain.
* Eiffel zealots have very impressive comparative experiences in moving
  from C ...
>
>At least with the Tcl library, you'd do well to remember there is a
>full test suite with it that checks the correctness of the
>implemenation as built. Redeveloping all that is not the sort of
>thing that anyone wants to do, and considerations like that are among
>my chief concerns at this point.
Donal, I don't follow this point; what does the
language of implementation of the core have to
do with the value crystallized in the test suite?
.
.
.
>I do not dispute that Eiffel can work with C. I dispute that *any*
>reimplementation can achieve the consistent quality, efficiency,
>testability and documentation of those components on the critical
>paths of major scripting languages in a couple of months. I
What I've heard Jean-Claude most concretely propose, I believe, is
co-operation between language communities on identifiable common
elements. Regular expressions are the landmark example, but certainly
Unicoding, network protocol facilities, GUI bindings, threading, ...
look to me like other ripe areas. I'm making a distinction with which
others might not agree. Evaluation of arithmetic expressions is a
function all the scripting languages share. There's little incentive to
work together on this, though, because it's generally not an active area
of enhancement. I think there's no strong argument in favor of touching
stuff that already works well. Several of us *do* want, though, to set
up systems to multiply the effectiveness of implementers working in
"hot" areas.
Perhaps I'm muddying the waters. Maybe some people are talking about
scrapping all of Tcl's existing implementation and redoing it in Eiffel.
I think it's more likely that several are thinking about systems that
subsume Tcl ...
.
.
.
'Nother contextual point: in your amendment to the previous post, you
described the "defensiveness" of your own coding. I think that way, too.
However, just as we working with Tcl share such key cultural elements as
Ousterhout's deprecation of threads, the code-data duality, the
irrelevance of performance (rightly considered), and so on, Bertrand
Meyer has written papers which sensitize Eiffelists to the limits of
defensive programming.
--
Cameron Laird http://starbase.neosoft.com/~claird/home.html
cla...@NeoSoft.com +1 281 996 8546 FAX
>
> "Donal K. Fellows" wrote:
> [...]
> > C++ is a bad idea because:
> >
> > o We can't use the C++ standard library. Too buggy [...]
> > o We can't use templates [...] implemented badly [...]
> > o We can't use exceptions [...] implemented badly [...]
>
> Maybe, maybe not. How does gcc 2.8 (or egcs) score on these issues?
> How so? C++ *is* C (except for a few details). I've used C++ for some
> 10 years now, and am deeply puzzled by C vs. C++ discussions...
>
> While staying out of the three first objections you raise, I've been
> able to get quite far with C++ on lots and lots of platforms. The
> object side of things, virtual member calls, constructors/destructors,
> and inlining make the transition still *very* much worthwhile, IMO.
> Even without STL/templates and exceptions, if needs be.
>
I am pretty new to C++, but in regard to developing a portable
library, how does one deal with different name-mangling schemes for
different C++ compilers, short of compiling everything with the same
compiler? I've heard this mentioned as a serious problem, but I am not
very familiar with all the issues. Thanks.
ron
I can see most of the native components being in C++ in 3.5 years :-)
However, given the time constraints of any real-world project, a balanced
approach (read: compromise) for now seems to be:
1. Use ANSI-C instead of K&R for core libs.
   Get rid of those distracting macros and ifdefs. Everybody seems to agree(?)
2. Write C++ friendly C code (CRM, Harbison & Steele, required reading)
   a. So that a C++ compiler can compile the core and link with C++
      extensions without modification. Much work has been done by
      the pluspatch already.
   b. Make generic code like the dynamic vector and associative array (hash)
      easily replaceable, so that if a higher performance C++ implementation
      becomes available, the user can have a choice (see the sketch below).
      I would think proper abstraction/layering is endorsed by most
      C programmers?
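For point 2b, a hypothetical sketch of what "easily replaceable" could
look like in plain C: hide the hash behind a small table of function
pointers, so another implementation (C++ or otherwise) can be dropped in
at build time. Every name here is made up for illustration:

    typedef struct HashOps {
        void *(*create)(void);                      /* new empty table */
        int (*put)(void *table, const char *key, void *value);
        void *(*get)(void *table, const char *key); /* NULL if absent */
        void (*destroy)(void *table);               /* free everything */
    } HashOps;

    /* The core calls hashing only through this pointer; a build (or an
     * embedder) may point it at a higher-performance implementation. */
    extern const HashOps *coreHashOps;

The price is one indirect call per operation, which is the usual cost of
this kind of layering.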
Now that we have solved the implementation language issue, let's brainstorm on a
tougher/more fundamental issue: componentization:
1. Dynamic Linking.
It has always been a sore point for any portable project. We want to
design a scheme that requires minimal system support, instead of just
assuming a Unix shared lib model, which itself isn't uniform across
Unix variants. We might have to invent a simpler compiler independent
alternative to MS' basic COM mechanism (no DCOM stuff, we have CORBA
for that purpose). This approach makes it possible to use older versions
of extensions in new cores without recompilation, which is a major gain.
2. Packaging.
I'd like a way of distributing a component with one URL. I'd like to
see something like the pluspatch support for wrapping supporting scripts
as static chars.
3. Component Registry
pkgIndex.tcl is a primitive component registry, as is our traditional
setting of path env vars in rc files. I can think of a local registry
hierarchy built upon the file system (instead of some proprietary database)
and a DNS-like DCR (distributed component registry), which can tell you the
component id, even a URL, given the component name and version.
Fly.
Active Template Library from Microsoft for writing COM components, which is
available in source code in VC++ 5/6. It has quite a few neat usages of C++
templates that could generate code optimized both for speed and space.
STL and ATL illustrate a trend in industry moving away from fanatic object-
orientation to a more balanced approach, i.e., use genericity to express cross
domain algorithms and use object-orientation to manage complexity within a
domain.
There seems to be some intense hatred against any MS constructs in general in
non-Windows camps, which I think is irrational. You have to know your enemy to
defeat it...
What scripting languages have currently been implemented in C++ ?
I really feel a rewrite into any language other than C is a wrong direction
for Tcl to move - particular for those of us who want to see a small
interpreter.
A large interpreter appears less useful for embedded applications.
--
<URL: mailto:lvi...@cas.org> Quote: Saving the world before bedtime.
<*> O- <URL: http://www.purl.org/NET/lvirden/>
Unless explicitly stated to the contrary, nothing in this posting
should be construed as representing my employer's opinions.
We've found nothing but headaches and pain trying to use gcc 2.8 and egcs
on Solaris SPARC ...
that is, trying to get the g++ portion of these to work. So far, we've
got people using the last few versions of these, because each one has
different sets of bugs stopping people from using them...
At this time of nite I don't have the specific details - it was recounted
to me in the hallways back from lunch last week.
The reason I ask this question is that I want to research the success and
problems that have been encountered in the task.
Intense hatred does not always equal irrationality. Some people
hate MS because of the damage to the computing environment it has caused.
However, I don't see why ignorance of ATL implies hatred - I've no
interest in (or need for) personal software development on Windows whatsoever,
and if I am so blessed may be able to go to my grave in such a condition...
Not everyone has access to a C++ compiler (I know this may seem
amazing to some people.) There are many issues involved (licensing is
very important here) and it is easier to write highly memory efficient
code in C (this is ultra-important in the embedded market.)
Furthermore, there is widespread incompatibility in C++ name-mangling
between compilers. Ideally this wouldn't be the case, but ideally we
wouldn't be writing in either C or C++, but rather in a higher-level
language that gave more support to doing what we wanted!
> 2. Write C++ friendly C code (CRM, Harbison & Steele, required reading)
> a. So that C++ compiler can compile the core and link with C++
> extensions without modification. Much work has been done by
> the pluspatch already.
That is good, except it would be nice if it was also possible to use a
K&R compiler too. I know that is a bit trickier.
Of course, speaking personally, I prefer the K&R style of function
definition (when combined with an ANSI-C declaration), as it makes the
code more readable in the common case where many arguments have the
same type, and it still helps the rest of the time too.
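For anyone who hasn't seen the combination, a made-up example of the
style being described (ClampValue is not a real Tcl function):

    /* ANSI-C declaration, e.g. in a header: */
    extern int ClampValue(int value, int lower, int upper);

    /* K&R-style definition: the shared type is stated just once. */
    int
    ClampValue(value, lower, upper)
        int value, lower, upper;
    {
        if (value < lower) return lower;
        if (value > upper) return upper;
        return value;
    }

One caveat: the mixture is only safe when the parameter types survive
the default argument promotions (int and double do; char, short and
float do not).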
> b. Make generic code like dynamic vector, associate array (hash)
> easily replaceable, so that if a higher performance C++
> implementation becomes available, user can have a choice.
> I would think proper abstraction/layering is endorsed by most
> C programmers?
That is already possible at a source level. It would be nice to have
an abstracted access to the numEntries field of the Tcl_HashTable
structure. That is a very useful field to be able to reference
(especially since the only alternative is to iterate across the whole
table!)
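Even a documented macro would do. A hypothetical sketch of the kind of
accessor being asked for (the macro name is made up; numEntries is the
real field):

    /* Report the entry count without exposing the rest of the
     * Tcl_HashTable layout or walking the whole table. */
    #define Tcl_HashTableSize(tablePtr) ((tablePtr)->numEntries)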
> Now that we have solved implementation language issue, let's brainstorm on a
> tougher/more fundamental issue: componentization:
>
> 1. Dynamic Linking.
> It has always been a sore point for any portable projects. We want to
> design a scheme that requires minimal system support, instead of just
> assumming a Unix shared lib model, which itself isn't uniform across
> Unix variants. We might have to invent a simpler compiler independent
> alternative to MS' basic COM mechanism (no DCOM stuff, we have CORBA
> for that purpose). This approach makes it possible to use older versions
> of extensions in new cores without recompilation, which is a major gain.
Talk to Paul Duffin <pdu...@hursley.ibm.com> about this. Note that
some systems just plain stink in this respect (thinking hard about AIX
right now...)
> 2. Packaging.
> I'd like a way of distributing a component with one URL. I'd like to
> see something like the pluspatch support for wrapping supporting scripts
> as static chars.
There are problems with that when you move away from pure Tcl
components that operate correctly in a safe interpreter. Establishing
that a component won't do anything bad is difficult with a non-safe
Tcl script, and much tougher with a binary extension. Plus, there are
many distribution methods (compare zipped distributions, with
compressed tarfiles, with gzipped tarfiles, with pure tarfiles, with
making each file available separately) and when you are distributing
binaries you have the problem that you really need to distribute
binaries for each supported platform as well as having a method of
dealing with unsupported platforms.
Then there are the concomitant "political" problems...
> 3. Component Registry
> pkgIndex.tcl is a primitive component registry, as our traditional setting
> path env's in rc files. I can think of a local registry hierarchy built
> upon the file system (instead of some proprietary database) and DNS like
> DCR (distributed component registry), which can tell you the component id,
> even an URL given the component name and version.
That only works when you've resolved the problems in (2) above. Any
thoughts on whether it should integrate with CORBA?
I don't want always to be on the bleeding edge; it hurts. I strongly
urge Tclers to avoid changing from C to C++.
--
Robin Becker
What was once a small powerful and easily embedded script language, has
exploded into a large cumbersome and not so easy to embed general purpose
language. Why not just make that fatal leap and push Tcl/Tk into the
compiled visual development language arena where it could compete with
Visual Basic and Delphi?
I feel, (and some others who have expressed similar sentiments), that Tcl
has completely lost its direction. It was a GOOD embedded language.
Now it is so fat and bloated with API's that we have to discuss
which c/c++ compiler to use, and system implementations, and standardizing
libraries and such.
Tcl7.4 was simple enough that we embedded it on an in-house multiprocessor
board running a Real Time Kernel in ONE WEEK - and we were able to develop
and test scripts for it on the Sun workstations. How many people are doing
that with Tcl8+ I wonder. Hence the discussion elsewhere in this newsgroup
concerned with a newer, simpler Tcl.
Tcl needs to find its roots again: Small, Simple, Powerful. Anything more
and we might just as well use a compiled language - the debuggers are that
much better and the resulting programs are that much faster too.
Andrew
lvi...@cas.org wrote in message <739207$g4j$1...@srv38s4u.cas.org>...
:In <3653FCC7...@equi4.com> j...@equi4.com writes:
:: - Today's C++ compilers no longer imply huge runtimes, nor do
:: they limit the choice of platforms as much as they used to.
Has g++ been ported to all platforms currently supported by Python and
Tcl (VMS, OS/2, Macintosh, ...)?
Hey Larry, I was talking about "hatred against MS **CONSTRUCTS** in general"
without carefully examining them. Hatred against Bill Gates and his
corporation is however justified by the track record.
> However, I don't see why ignorance of ATL implies hatred - I've no
> interest in (or need for) personal software development on Windows whatsoever,
> and if I am so blessed may be able to go to my grave in such a condition...
No, ignorance of anything does NOT imply hatred. See Robin's post about
"hatred" :-) I always thought you were one of the cool-headed ones in this
newsgroup, oh well...
MetaCard is written in C++. There are some compatibility problems
between the compilers on UNIX/Windows/MacOS, but these are generally
minor. The primary advantage of C++ is better developer productivity
and improved code reliability (e.g., memory leaks are *much* less of
a problem because of destructors). Performance isn't an issue given
proper design: MetaCard still runs faster than Tcl 8.0.
Regards,
Scott
> --
> <URL: mailto:lvi...@cas.org> Quote: Saving the world before bedtime.
> <*> O- <URL: http://www.purl.org/NET/lvirden/>
> Unless explicitly stated to the contrary, nothing in this posting
> should be construed as representing my employer's opinions.
--
*********************************************************
Scott Raney ra...@metacard.com http://www.metacard.com
MetaCard: Scripting, without all the syntactic gymnastics
No longer - I've redefined the topic to encompass yours <grin>.
:What was once a small powerful and easily embedded script language, has
:exploded into a large cumbersome and not so easy to embed general purpose
:language. Why not just make that fatal leap and push Tcl/Tk into the
:compiled
:visual development language arena where it could compete with Visual Basic
:and Delphi?
I have to say that one of my concerns with the direction of
'competing' (and I know, AF, that you are
talking rhetorically here...) is that I'm not convinced that Tcl is a
very good large language right now.
As some of you can imagine, I tend to read quite a number of the problems
reported here. For 18+ months I've been concerned that the weight of
a massive user community such as Windows would push Tcl from the round hole
it was designed to fit into a square hole.
:I feel, (and some others who have expressed similar sentiments), that Tcl
:has completely lost its direction. It was a GOOD embedded language.
:Now it is so fat and bloated with API's that we have to discuss
:which c/c++ compiler to use, and system implementations, and standardizing
:libraries and such.
And I don't think that anyone disagrees with having the APIs defined -
just not bundled into the base interpreter.
> In article <365DEA47...@zeta.org.au>,
> Peter Allworth <lin...@zeta.org.au> wrote:
> <snip>
> >For my own part, I would recommend that the new core be written in strict ANSI C.
> <snip>
> .
> I still rely on the compilers bundled with HP-UX and
> SunOS. I know many others find them unusable. I'm
> willing to consider abandoning them.
>
I see your point and this is clearly a reason why Tcl endeavoured to support K&R
originally.
I must say, though, you're lucky to get a compiler bundled with your OS. I bought my
SparcStation three years ago and Solaris 2.5 (as it was then) came with no compiler at
all.
You _had_ to download a precompiled GNU binary and use that to build the latest
release.
(Or else pay SunSoft a fortune for their official compiler.)
All in all I'm very happy with the result.
PeterA.
Hey, we are talking about building stuff for a higher-level language :-)
To me the object syntax, exception handling and resource management without
sacrificing performance alone would justify C++ if there weren't the
incompatibility issues and the strict portability requirements. When writing
native extensions, do you prefer (in EVERY function that uses Tcl_*):
    Tcl_SomeType *stuff = Tcl_AllocateSomeResources(interp, blah...);

    if (Tcl_DoSomething(interp, stuff, blah...) != TCL_OK) {
        /* handle_error1 here */
        goto error_all;
    } else if (Tcl_DoSomethingElse(interp, stuff, blah...) != TCL_OK) {
        /* handle_error2 here */
        goto error_all;
    }

    /* do more work here */

    /* release resources here */
    Tcl_ReleaseSomeResource(interp, stuff);
    return TCL_OK;

error_all:
    /* release resources on exceptions */
    Tcl_ReleaseSomeResource(interp, stuff);
    return TCL_ERROR;

or just

    Tcl_SmartObject stuff(interp);
    interp->doSomething(stuff, blah...);
    stuff.doSomethingElse(blah...);
Note that exceptions can be handled elsewhere. The result is much cleaner,
more robust code. The choice is pretty clear based upon the comparison.
However, the incompatibility issue (namely name mangling and vtable layout
etc.) is really too big to justify the language switch for core
implementations. But a USTL (Unified Scripting Template Library) _outside_
the core for writing extensions in C++ efficiently (see above comparison) is
justified, when general portability is not a concern. Extension writers
should not have to create a solution for everybody on every platform. They
should have a choice.
> > 2. Write C++ friendly C code (CRM, Harbison & Steele, required reading)
> > a. So that C++ compiler can compile the core and link with C++
> > extensions without modification. Much work has been done by
> > the pluspatch already.
>
> That is good, except it would be nice if it was also possible to use a
> K&R compiler too. I know that is a bit trickier.
Let's do a survey: please list any platforms in use (your garage doesn't
count :) where K&R is the ONLY option. I cannot think of any platform in
existence that has this requirement. Getting rid of the baggage of K&R is a
major gain, as ANSI C is a much better defined language for writing portable
code than K&R.
> Of course, speaking personally, I prefer the K&R style of function
> definition (when combined with an ANSI-C declaration,) as it makes the
> code more readable in the common case where many arguments have the
> same type, and it still helps the rest of the time too.
When designed properly, very few functions need many arguments. This is a
minor cosmetic issue. I personally prefer strict ANSI prototyping (the idea
is to let the compiler catch the mismatches for you), as it has helped catch
a lot of potential bugs in cross-language development, e.g., mixing C/C++ and
Fortran in large high performance parallel code.
> > Now that we have solved implementation language issue, let's brainstorm on a
> > tougher/more fundamental issue: componentization:
> >
> > 1. Dynamic Linking.
> > It has always been a sore point for any portable projects. We want to
> > design a scheme that requires minimal system support, instead of just
> > assumming a Unix shared lib model, which itself isn't uniform across
> > Unix variants. We might have to invent a simpler compiler independent
> > alternative to MS' basic COM mechanism (no DCOM stuff, we have CORBA
> > for that purpose). This approach makes it possible to use older versions
> > of extensions in new cores without recompilation, which is a major gain.
>
> Talk to Paul Duffin <pdu...@hursley.ibm.com> about this. Note that
> some systems just plain stink in this respect (thinking hard about AIX
> right now...)
Although AIX is kinda weird in terms of shared libs (actually everything,
compared with normal Unices), I can always extract the shr.o from
mysharedlib.a and rename the shr.o as mysharedlib.so, so "load" in Tcl works
fine. Back linking (loading a shared lib from a statically linked wish) works
on AIX. IMHO, it's better than Linux and Windows, which is the primary reason
Paul is working on a more portable solution.
> > 2. Packaging.
> > I'd like a way of distributing a component with one URL. I'd like to
> > see something like the pluspatch support for wrapping supporting scripts
> > as static chars.
>
> There are problems with that when you move away from pure Tcl
> components that operate correctly in a safe interpreter. Establishing
> that a component won't do anything bad is difficult with a non-safe
> Tcl script, and much tougher with a binary extension. Plus, there are
> many distribution methods (compare zipped distributions, with
> compressed tarfiles, with gzipped tarfiles, with pure tarfiles, with
> making each file available separately) and when you are distributing
> binaries you have the problem that you really need to distribute
> binaries for each supported platform as well as having a method of
> dealing with unsupported platforms.
You can always reject a native extension based upon certain security
criteria. I don't like the idea of any builtin security policies in the core.
Security policies should be customizable by users (site configurators). What
I want is a consistent and extensible interface to deal with the packaging
issue.
> Then there are the concomitant "political" problems...
>
> > 3. Component Registry
> > pkgIndex.tcl is a primitive component registry, as our traditional setting
> > path env's in rc files. I can think of a local registry hierarchy built
> > upon the file system (instead of some proprietary database) and DNS like
> > DCR (distributed component registry), which can tell you the component id,
> > even an URL given the component name and version.
>
> That only works when you've resolved the problems in (2) above. Any
> thoughts on whether it should integrate with CORBA?
No CORBA involved; we can use a simple HTTP-like ASCII protocol over the
socket - it's trivial to write a DCR in pure Tcl. Performance is not a
concern here, as traffic will be nowhere near DNS and HTTP on a busy web
site.
The idea is to have a minimal but powerful enough extension mechanism built
into the core that can be easily extended with components. All the external
component handling itself, say a Dynamic Component Manager (DCM), should be
a component that is compiler switchable, so the minimal embedded Tcl doesn't
have to deal with the issue at all. Looks like we now have two classes of
components: loadable components that depend upon the DCM and static
components that don't. Obviously the DCM is a static component.
Oh no it doesn't. At least not after it core dumps !!!!! :-)
> Regards,
> Scott
>
> --
> *********************************************************
> Scott Raney ra...@metacard.com http://www.metacard.com
> MetaCard: Scripting, without all the syntactic gymnastics
Gymnastics is one of the best forms of exercise around. Exercise
improves your performance. I guess you could say that MetaCard is the
couch potato's language ;-).
--
Paul Duffin
DT/6000 Development Email: pdu...@hursley.ibm.com
IBM UK Laboratories Ltd., Hursley Park nr. Winchester
Internal: 7-246880 International: +44 1962-816880
These also come with ANSI C.
> 2. some base classes like a hash-class and a list-class
> (nowadays everybody have to write down the list-code again and
> again, *this* IS errorprone and tedious)
And invite the anti-pattern of subclassing standard utility classes?
No thanks!
[...]
I can see far more sense in using OO for implementing Tk widgets, but
I'm worried about going with C++ due to its well-known problems.
There are currently embarrassingly many problems with interoperability
between C++ compilers. Sure, the same source will compile on all of
them, but often you don't have the source, and you can end up with
object libraries compiled with many different compilers. At that
point, being a pure C library makes all the difference.
And C is far more widely deployed than C++ (though most places wanting
to build Tk will have access to a C++ compiler.) If we do change the
implementation language though, we should also question whether C++ is
the right choice, or whether going to something else that targets C
(for compatibility) would be better still.
From a maintenance, debugging and correctness PoV, the former. From a
lazy programmer PoV, the latter. Experience says to go with the
former as I always have trouble with cute hacks later on.
I do accept the saving due to exceptions though. I prefer the way
Java handles this sort of thing (with a finally clause) as that
is easier to understand 6 months later. :^)
But if you want to add in a nice Tcl-level traceback of what went
wrong in the C/C++ code, you're most of the way towards losing that
nice shortly-coded version, and the natural lifetime of the object
might well not be neatly lexically-scoped...
>> Of course, speaking personally, I prefer the K&R style of function
>> definition (when combined with an ANSI-C declaration,) as it makes the
>> code more readable in the common case where many arguments have the
>> same type, and it still helps the rest of the time too.
>
> When designed properly, very few function needs many arguments. This is a
> minor cosmetic issue.
Many very useful functions require quite a lot of parameters (e.g.
Tcl_GetIndexFromObj with 6 arguments - 5 in C++ notation - and no easy
way of reducing that without making the function less useful) and many
others take a few that are all of the same type. And sometimes the
alternative - packing everything up in a struct that is passed in by
reference - is a performance drain and PITA.
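For reference, that prototype looks roughly like this (quoted from
memory, so treat the exact types with suspicion):

    int Tcl_GetIndexFromObj(Tcl_Interp *interp, Tcl_Obj *objPtr,
            char **tablePtr, char *msg, int flags, int *indexPtr);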
> I personally prefer strict ANSI prototyping (the idea is to let
> compiler catch the miss matches for you), as it has helped catching
> a lot of potential bugs in cross language development, e.g., mixing
> C/C++ and Fortran in large high performance parallel code.
I use strict prototypes and K&R definitions. My compiler allows this
and it makes my code easier to read (IMAO!)
>> There are problems with that when you move away from pure Tcl
>> components that operate correctly in a safe interpreter. Establishing
>> that a component won't do anything bad is difficult with a non-safe
>> Tcl script, and much tougher with a binary extension. Plus, there are
>> many distribution methods (compare zipped distributions, with
>> compressed tarfiles, with gzipped tarfiles, with pure tarfiles, with
>> making each file available separately) and when you are distributing
>> binaries you have the problem that you really need to distribute
>> binaries for each supported platform as well as having a method of
>> dealing with unsupported platforms.
>
> You can always reject a native extension based upon certain security
> criteria. I don't like the idea of any builtin security policies in the core.
> Security policies should be customizable by users (site configurators). What
> I want is a consistent and extensible interface to deal with the packaging
> issue.
The problems are:
a) Security/Trust. Trusting binaries is a Bad Idea unless your
security architecture is very good indeed. Java approaches
this. C/C++ doesn't. Leaving your business-critical data open
to modification by kewl...@hacker.evil.org doesn't seem smart to
me...
b) Binaries are platform specific. How do you resolve the problems
you get (as a library publisher) when a remote user of a library
requests a binary for a platform you do not support? There are
NO non-blecherous solutions to this.
c) Politics. Some places (luckily not here) are very uneasy about
software that opens new sockets. Especially to places outside
their intranet. And over a fairly expensive and slow link. Tcl
is in use in (many of) these places now.
>> That only works when you've resolved the problems in (2) above. Any
>> thoughts on whether it should integrate with CORBA?
>
> No CORBA involved, we can use simple HTTP like ASCII protocol over
> the socket, it's trivial to write a DCR in pure Tcl. Performance is
> not a concern here, as traffic will be nowhere near DNS and HTTP on
> a busy web site.
I know you don't need to use CORBA. But should CORBA-interoperation
be on the list of near-core things?
> The idea is to have a minimal but powerful enough extension
> mechanism builtin the core that can be easily extended with
> components. All external component handling stuff itself, say
> Dynamic Component Manager (DCM), should be a component that is
> compiler switchable, so the minimal embedded tcl doesn't have to
> deal with the issue at all. Looks like we now have two class of
> components, loadable components that depend upon DCM and static
> components that doesn't. Obviously DCM is a static component.
I don't know how to express this quite right, but something in that
paragraph above triggers my "Not In My Software Package, Guv" reflex.
I'm not sure what the precise problems are, but my shoulder-blades can
feel them lurking in there.
If you did go ahead with this, you would probably need to support, as
a reasonable minimum for every non-pure-Tcl package:
i386-Win32
alpha-Win32
ppc-Macintosh
i386-Linux[*]
i386-FreeBSD
i386-Solaris
sparc-Solaris
sparc-SunOS4
mips-Irix[**]
etc.
I don't know of any compiler capable of cross-compiling on demand to
all of those. And so you must either force the requiring end to
compile (not something for runtime!) or make the requiring end fail to
work. Yuck.
Donal.
[* There are additional problems with differences between libc
versions here. ]
[** I think I've got this one right... ]
Is MetaCard ported (even in just a cut-down version) to PalmOS and
WindowsCE yet? What about being an extension/control language on a
network router?
Donal.
Guys, there is already an implementation of Tcl in a nice OO language
with a proper exception system and binary compatibility across multiple
platforms. The Jacl package provides a working Tcl interpreter
implemented completely in Java. It solves many of the problems you
pointed out, and the only real drawback is the runtime speed. Perhaps it
would be better to improve Jacl instead of starting over with something new.
Just my two cents
Mo DeJong
dej...@cs.umn.edu
Not yet. We're looking at implementing Windows CE support because
these systems have the CPU/RAM/screen resolution to make a graphical
environment like MetaCard practical to run (PalmOS systems don't).
> What about being an extension/control language on a
>network router?
No, but then that isn't what MetaCard is used for. It's a graphical
development environment. It's kind of what you'd get with
Tcl+Tk+SpecTcl+TclPro, but with better integration and a much shorter
learning curve. It also supports object property persistence, which
you don't get with the Tcl combination.
Regards,
Scott
>
>Donal.
>--
>Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
>Department of Computer Science, University of Manchester, U.K. +44-161-275-6137
>--
>If you staple a penguin to the head of a gnu, the result won't catch herring...
--
***************************************************************
Scott Raney ra...@metacard.com http://www.metacard.com
Tcl and ksh: syntactic gymnastics
MetaCard: it does what you think
::What was once a small powerful and easily embedded script language, has
::exploded into a large cumbersome and not so easy to embed general purpose
::language. Why not just make that fatal leap and push Tcl/Tk into the
::compiled visual development language arena where it could compete with
::Visual Basic and Delphi?
:I have to say that one of my concerns with the direction of
:'competing' (and I know, AF, that you are
:talking rhetorically here...) is that I'm not convinced that Tcl is a
:very good large language right now.
:As some of you can imagine, I tend to read quite a number of the problems
:reported here. For 18+ months I've been concerned that the weight of
:a massive user community such as Windows would push Tcl from the round hole
:it was designed to fit into a square hole.
Well, what I'm thinking more of is the newer additions like the bytecode
compiler. This technology seems out of place in a "script" / "interpreted"
language such as Tcl. It seems to me that the bytecode compiler and the
API's it spawned are more of a response to Python than a real improvement to
Tcl. Hence the following comment:
::I feel, (and some others who have expressed similar sentiments), that Tcl
::has completely lost its direction. It was a GOOD embedded language.
::Now it is so fat and bloated with API's that we have to discuss
::which c/c++ compiler to use, and system implementations, and standardizing
::libraries and such.
:And I don't think that anyone disagrees with having the APIs defined -
:just not bundled into the base interpreter.
It's not the API's really, it's whether or not Tcl is a good embedded
language anymore. Is it still easily embedded? Not all of it.
It was a simple task to expose the original API outward to the ActiveX
world, making Tcl easily used by such awkward languages as Visual Basic or
PowerSoft.
People who download the source code to AxTcl/TclOCX will notice right away
that I didn't bother with the new object API - too much trouble!
Sorry to ramble on, folks. I just feel that the best embeddable script
engine around is about to get lost in the hype.
I agree that one should not just arbitrarily hate specific software
constructs without carefully examining them. I would hope that, as well,
people would not arbitrarily love specific software constructs without
carefully examining them as well. ActiveX is an example I've watched over
the years as people just seem to go 'nuts' over without considering the
dangers involved...
:> However, I don't see why ignorance of ATL implies hatred - I've no
:> interest in (or need for) personal software development on Windows whatsoever,
:> and if I am so blessed may be able to go to my grave in such a condition...
:
:No, ignorance of anything does NOT imply hatred. See Robin's post about
:"hatred" :-) I always thought you are one of the cool headed in this news
:group, oh well...
Sorry if you take my strongly held prejudice against MS software as
somehow being hot headed. For years now I have dealt with thousands of
emails, phonecalls, etc. a month. Of those, I have yet to have a
contact indicating a good experience with Windows...
My first hands on exposure came from attempting to assist a user by
installing Netscape on a Windows 3.x system. I followed all the instructions.
After 3 days of hardware and OS people examining the machine, the end
result was a total scrubbing of the disk and starting over again.
I've taken a half dozen or more classes on Windows 95 and apps. I've used
Powerpoint 97 for my church announcements for two months.
I use a Windows NT interface to look at powerpoint docs, Word docs, etc.
thru a Solaris window.
I have yet to see one case where the Windows app was one I could bear using
any longer than legally necessary.
Over the years, I have used Apple IIs/IIes/IIgs/III/Lisa/Macintosh, TRS80,
CP/M, Amiga, Atari, Commodore PET / 64, DEC PDP 8, PDP 11 (multiple OSes),
PDP 20, VAX Unix/VMS, IBM MVS, dozens of Unixen, and studied another
1/2 dozen or more OSes. Never mind the 2-3 dozen programming languages
I've used over the past 25 years.
On all these systems, I have struggled with poor apps, poor OSes, poor
programming languages, poor hardware support, etc.
I myself find Windows to be very nearly the worst of all these systems in
terms of users trying to get done what they want.
Perhaps you didn't get the point here. If you have multiple resources that
need to be managed manually, with manual exception handling, you are asking
for trouble with the unnecessary complexity, which affects the maintainability
and the robustness. The latter is not just cute hacks; it has a clean syntax
with well defined semantics, so you can focus on your business logic.
> > When designed properly, very few function needs many arguments. This is a
> > minor cosmetic issue.
>
> Many very useful functions require quite a lot of parameters (e.g.
> Tcl_GetIndexFromObj with 6 arguments - 5 in C++ notation - and no easy
> way of reducing that without making the function less useful) and many
> others take a few that are all of the same type. And sometimes the
> alternative - packing everything up in a struct that is passed in by
> reference - is a performance drain and PITA.
When you have that kind of function you need to document it anyway, for
which I prefer the ANSI style:

    Return_type my_func(Type1 v1,   /* blah1 */
                        Type2 v2,   /* blah2 */
                        ...
                        Typen vn)   /* blahn */
    {
        ...
    }
> > I personally prefer strict ANSI prototyping (the idea is to let
> > compiler catch the miss matches for you), as it has helped catching
> > a lot of potential bugs in cross language development, e.g., mixing
> > C/C++ and Fortran in large high performance parallel code.
>
> I use strict prototypes and K&R definitions. My compiler allows this
> and it makes my code easier to read (IMAO!)
This is highly subjective. I find the ANSI style easier to read, IMHO. BTW, I
didn't see you mention a system that meets my survey requirement (K&R is
the only option).
> > You can always reject a native extension based upon certain security
> > criteria. I don't like the idea of any builtin security policies in the
core.
> > Security policies should be customizable by users (site configurators). What
> > I want is a consistent and extensible interface to deal with the packaging
> > issue.
>
> The problems are:
> a) Security/Trust. Trusting binaries is a Bad Idea unless your
> security architecture is very good indeed. Java approaches
> this. C/C++ doesn't. Leaving your business-critical data open
> to modification kewl...@hacker.evil.org doesn't seem smart to
> me...
I think my suggestion doesn't prevent you from using pure Tcl or whatever.
The language should always be an enabler rather than an enforcer. I just
don't want the core to enforce any hardcoded security policy. It's just good
software engineering practice to advocate loose coupling. We don't want to
preclude the possibility of adding a static component later that can deal
with digital signatures for checking the integrity of the installation.
> b) Binaries are platform specific. How do you resolve the problems
> you get (as a library publisher) when a remote user of a library
> requests a binary for a platform you do not support? There are
> NO non-blecherous solutions to this.
If you write binary components, you reduce the portability of those
components. I am not asking everybody to write binary components. If you want
your cool components to be used everywhere, use pure script. There are many
applications where the scripting language is just an integration tool/glue for
various native components, and you really don't care about running your
massive parallel simulation on any consumer level PC or Mac. It is the
diversity of applications that demands a scalable, componentized scripting
environment.
> c) Politics. Some places (luckily not here) are very uneasy about
> software that opens new sockets. Especially to places outside
> their intranet. And over a fairly expensive and slow link. Tcl
> is in use in (many of) these places now.
Looks like your major concern is to write tclets that run everywhere, say in
a browser. For this application, I have suggested an open source VM/plugin
that is site customizable for security. So if your tclet wants to open a
socket when the local policy doesn't allow it, the VM would just pop up a
message like "Our policy doesn't allow...the tclet will not run". Or in the
case of a binary component, it could check the digital signature and
determine if it's from a trusted source.
> I know you don't need to use CORBA. But should CORBA-interoperation
> be on the list of near-core things?
Yes, it could be one of the static components, see below.
Perhaps I didn't express myself clearly enough. Not all native components
need to support all platforms; it depends upon the application. I was talking
about the decomposition of the current core here. The current Tcl core needs
to support all these platforms, right? I just want to break the current big
monolithic core into a tiny kernel and a few static components, which are of
course native and need to support all platforms. No additional maintenance
overhead here.
Component layers:
1. Portable tiny kernel
   - Provides meta-language support and bootstrapping/linking with
     static components.
2. Static components
   - Provide facilities that are linked with the kernel at compile time,
     e.g. the default language parser (Tcl of course, or any language that
     uses the meta-language support in the kernel); basic networking
     support; basic filesystem support; dynamic component support; and
     other static components for specific platforms. The purpose of static
     components is to provide component configuration at compile time so
     that only the needed components are linked in.
3. Dynamically loadable components
   - All other components, including pure script (in the default parser
     language) and binary components.
     In order for a binary component to be dynamically loadable, it must
     conform to a certain specification/protocol that Paul is working on
     right now.
Now we have a highly customizable, componentized Tcl. A minimalist Tcl
for an embedded system could contain only the tiny kernel, the Tcl parser
and a customized static component for the specific hardware. (Note the
absence of the filesystem and networking components here.) On the other
hand, we could have a powerful enterprise integration environment with
dynamically loadable components including, of course, Tk. I hope I have
gotten my points across this time.
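To make the static-component layer concrete, here is a hypothetical
sketch of compile-time component configuration: the build selects what
gets linked in by ifdef-ing one table. Every name below except
Tcl_Interp is made up for illustration:

    typedef struct StaticComponent {
        const char *name;                    /* e.g. "parser", "net" */
        int (*initProc)(Tcl_Interp *interp); /* called at startup */
    } StaticComponent;

    static const StaticComponent staticComponents[] = {
        { "parser", Parser_Init },           /* default language parser */
    #ifdef WITH_NET
        { "net", Net_Init },                 /* basic networking support */
    #endif
    #ifdef WITH_DCM
        { "dcm", Dcm_Init },                 /* dynamic component manager */
    #endif
        { NULL, NULL }                       /* terminator */
    };

The kernel would simply walk this table at startup; a minimal embedded
build compiles with both ifdefs off.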
A higher-level language does not require an OO, whiz-bangified
implementation. The C Tcl implementation is *one* existence proof of
this.
>To me the object syntax, exception handling and resource management without
>sacrificing performance alone would justify C++ if there weren't the
>incompatibility issues and the strict portability requirements.
That "if" is the problem. I *want* the strict portability requirements.
I've been working on evangelizing Tcl within my company for a few years now.
Its key selling point *is* its portability, or at least the promise thereof.
(Second on the list is Expect.) Right now, we could use Tcl on, oh, about
six or seven RTOSes, plus the common (and some uncommon) Unices, with
Windows NT an impending danger.
C solutions are tough enough to make reliable and portable over this mix
of environments. C++ is ... worse.
>> > 2. Write C++ friendly C code (CRM, Harbison & Steele, required reading)
>> > a. So that C++ compiler can compile the core and link with C++
>> > extensions without modification. Much work has been done by
>> > the pluspatch already.
>>
>> That is good, except it would be nice if it was also possible to use a
>> K&R compiler too. I know that is a bit trickier.
>
>Let's do a survey: please list any platforms in use (your garage doesn't
>count :) where K&R is the ONLY option. I cannot think of any platform in
>existence that has this requirement. Getting rid of the baggage of K&R is a
>major gain, as ANSI C is a much better defined language for writing portable
>code than K&R.
Why shouldn't garages count?
I'm serious. I would like Tcl to be a good "embeddable" language, not just
in Dr. O's original sense of "embeddable in a Unix application as the macro
language" but embeddable in the sense of "suitable for use in an embedded
system." Of the major scripting languages that I am familiar with, Tcl has
unique strengths in this area. This downward scalability has good
application for garage-shop projects.
Even in the commercial realm, all of the real-time OS market surveys that
I've seen over the last several years indicate that the #1 RTOS is "I wrote
my own for the project", with the various commercial vendors (Wind River,
ISI, Lynx, Microware, etc.) fighting it out for the next places. That's
changing: "roll your own RTOS" is becoming less popular, and most RTOSes
are looking more and more like POSIX. But it's still not something to
discount.
>> Of course, speaking personally, I prefer the K&R style of function
>> definition (when combined with an ANSI-C declaration,) as it makes the
>> code more readable in the common case where many arguments have the
>> same type, and it still helps the rest of the time too.
IMO, the classic K&R vs ANSI C debate is a non-issue. While I love the new
ANSI features, especially prototypes, writing code to compile for K&R
compilers while taking advantage of ANSI prototyping when available is a
solved problem.
Last I checked, the Tcl core already did this. I think it's the right way
to go.
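For the curious, the trick looks like this (the Tcl headers spell the
macro _ANSI_ARGS_; the name below is changed to keep the sketch
self-contained):

    #if defined(__STDC__) || defined(__cplusplus)
    #   define PROTO_ARGS(x) x    /* ANSI/C++: keep the prototype */
    #else
    #   define PROTO_ARGS(x) ()   /* K&R: the argument list vanishes */
    #endif

    extern int Tcl_Eval PROTO_ARGS((Tcl_Interp *interp, char *script));

The double parentheses let the whole argument list travel through the
macro as a single argument.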
Zach Frey
zfrey at bright dot net
For whatever it's worth, Andrew, (since nobody else is jumping in here to
defend you) I strongly agree with what you've been saying here, and vented a
similar sentiment on the MacTcl list a while ago. And (although I now rather
regret the fairly whiny tone in which I expressed it at the time) the core
idea holds. It does seem like the stable, interpreted, simpler Tcl/Tk of a
couple years ago was a thing of unique utility, flexibility, and beauty (not
to mention the ability to run basically anywhere interesting), and I too am a
bit confused by the push to turn Tcl/Tk into something other than that when
there are already so many other mature languages and environments for doing
most of those other things. This confusion (and this message) should not be
interpreted as disrespect for the large quantities of talent, good intentions,
and hard work that have gone into all those changes, nor as a rejection of
progress per se-- but maybe just a reminder that it's important to try to keep
a clear view of what each tool is really good for, and that it's important to
remember to be careful to recognize and preserve that which works really well.
Sometimes when you need a fork it's better to just put your spoon down and
pick up the fork, instead of carving your spoon into a fork just because the
spoon happens to already be in your hand. Because then you won't have a spoon
any more, just forks.
-- Chris '...He of the unpopular opinions, ducking for cover now...' Grigg
------------------
please remove the 'SPAMLESS' if replying directly
I was not talking about the ActiveX mechanism per se. I was talking about the
current ATL implementation, which takes advantage of the power of C++
templates quite well, as a code example only. From a library design point of
view, it has quite a few merits over, say, stuff like MFC. Not that I like
the overall design of downloading and linking in DLLs over the net too
easily. There are some fundamentally good things from MS, like COM; not that
I like their design and/or implementation, but the idea (not originally
theirs, but they did implement one) of an interface-based,
language-independent binary component is a "good thing". This means COM
components generated by VB, Delphi, or VC++ can be used interchangeably.
There is no counterpart in the Unix world yet (CORBA is the counterpart of
DCOM, since CORBA objects HAVE TO live in different processes, which is a
real drag for non-distributed applications), where libraries/objects from
different compilers of the same language (esp. C++) on the same machine
would not work with each other, let alone be language independent...
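To make "interface-based binary component" concrete: at the C level, the
essence of COM's binary contract is roughly the layout sketched below (a
rough sketch with invented names, not real COM declarations). An object is
just a pointer to a table of function pointers, so any compiler or language
that can lay out that struct can produce or consume such components:

    /* Rough sketch of an interface-based binary component in plain C.
     * Only the memory layout matters; this is why components built by
     * different compilers/languages can interoperate. */
    typedef struct ICounter ICounter;

    typedef struct ICounterVtbl {
        int  (*AddRef)(ICounter *self);     /* Lifetime management... */
        int  (*Release)(ICounter *self);    /* ...by reference counting. */
        void (*Increment)(ICounter *self);  /* The actual functionality. */
        int  (*Value)(ICounter *self);
    } ICounterVtbl;

    struct ICounter {
        const ICounterVtbl *vtbl;   /* The only field a client may assume. */
    };

    /* A client, whatever compiler produced it, calls through the table:
     *     counter->vtbl->Increment(counter);
     *     n = counter->vtbl->Value(counter);
     */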
Just hatred itself is not enough to defeat MS. We have to use our open and
collective wisdom to do something better and make it easily accessible to
everyone. When enough people think there are better alternatives to MS, the
evil empire will crumble...
> Over the years, I have used Apple IIs/IIes/IIgs/III/Lisa/Macintosh, TRS80,
> CP/M, Amiga, Atari, Commodore PET / 64, DEC PDP 8, PDP 11 (multiple OSes),
> PDP 20, VAX Unix/VMS, IBM MVS, dozens of Unixen, and studied another
> 1/2 dozen or more OSes. Never mind the 2-3 dozen programming languages
> I've used over the past 25 years.
>
> On all these systems, I have struggled with poor apps, poor OSes, poor
> programming languages, poor hardware support, etc.
>
> I myself find Windows to be very nearly the worst of all these systems in
> terms of users trying to get done what they want.
This is highly subjective; as if MS got its current market share without any
technical merit. IBM and Intel should also take much of the responsibility
for what MS is today...
So it's a case of "Come back in 5 years' time when the object format has
stabilised on each platform"? If that's not an example of embarrassing
interoperability problems, what is?
As it stands C++ only really works if all the objects in an
application are built with the same compiler. This is only an option
at industry juggernauts or with Free Software fanatics. Everyone else
who buys in external libraries in object-only form is left up the
creek sans paddle.
Furthermore, C++ has other problems (inline function^H^H^Hmethod
declarations in headers that make it harder to develop modules
independently and often blow code size up to over 100K for a single
header file[*], etc.)
C++ is a good thing when done right, and it is a horrific mess when
done wrong. There are a lot of programmers that should never have
been let near it.
Donal.
[* I know that's bad practise. But it is the sort of idiocy that
nobody sane would try in C, and that I've seen others dumped in the
middle of all too often with C++. After a while, you start to
consider blaming the language... ]
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
Department of Computer Science, University of Manchester, U.K. +44-161-275-6137
--
"And remember, evidence is nothing." - Stacy Strock <sp...@adisfwb.com>
Do you think you could rewrite that while making sure that you get the
quoting right so that I at least can tell who wrote what? I suppose I
could do it by going back, printing the messages and studying them
side by side, but I don't feel that I should have to! Instead, I'll
ignore the message completely...
Donal.
I get the point. I just think that C++ is a long way from being the best
solution, as it is too much of a write-only language.
> This is highly subjective. I found ANSI style easier to read, IMHO.
That I can understand.
> BTW, I didn't see you mentioning a system that meets my survey
> requirement (K&R is the only option)
I was neatly ignoring that part, since it was something that was
better answered by others with more experience in the weirder edges of
the OS world.
> I think my suggestion doesn't prevent you from using pure tcl or whatever.
> The language should always be an enabler rather than an enforcer. I just
> don't want the core to enforce any hardcoded security policy. It's just good
> software engineering practice to advocate loose coupling. We don't want to
> preclude the possibility of adding a static component later that can deal
> with digital signatures for checking the integrity of the installation.
For pure Tcl, downloaded modules only make sense in some areas anyway.
And they almost universally don't make sense for core capabilities,
which is what I thought this thread was originally about. If I were
making a smaller, more embeddable Tcl, all the networking code would
be among the first things to go. It is big, and there are many
applications and usage situations where it is useless.
> If you write binary components, you reduced the portability of these
> components. I am not asking everybody to write binary components. If you want
> your cool components to be used everywhere, use pure script. There are many
> applications, where scripting language is just an integration tool/glue for
> various native components, and you really don't care about running your
> massive parallel simulation on any consumer level PC or Mac. It is the
> diversity of the application that demand a scalable, componentized scripting
> environment.
Oh! So I can instead get everything I want by using NFS or AFS? I
think I'd prefer to stick to the existing tech here.
> Looks like your major concern is to write tclets that run everywhere say in a
> browser.
No. I was ignoring that area completely as that is somewhere where
the downloading of foreign executable content is a given anyway.
> For this application, I have suggested a open source VM/plugin that
> is site customizable for securities. So if your tclet wants to open
> a socket, when the local policy doesn't allow it, the VM would just
> pop a message like "Our policy doesn't not allow...the tclet will
> not run". Or in the case of binary component, it could check the
> digital signature and determine if it's from a trusted source.
I was thinking about the dynamic loading of core components into a
core Tcl interpreter (the compiler/bytecode-executer would be one of
the parts I would load, if possible, as would be procedures and
namespaces.) Doing this sort of thing requires the loading of non-Tcl
code at runtime. Hence, it is impractical to do it from a central
repository without good OS support (see a few paragraphs above.)
Digital signatures for binaries do not resolve all the issues by
themselves, as I have varying levels of trust. C and C++-based
implementations only really support binary levels of trust, as you
either let them execute (usually as yourself) or you don't.
[ much elided ]
> Component layers:
>
> 1. Portable tiny kernel
> - Provides meta-language support and bootstrapping/linking with
> static components.
>
> 2. Static components
> - Provide facilities that are linked with the kernel at compile time,
> e.g. the default language parser (Tcl of course, or any language that
> uses the meta-language support in the kernel); basic networking support;
> basic filesystem support; dynamic component support; and other static
> components for specific platforms. The purpose of static components is
> to provide component configuration at compile time so that only needed
> components are linked in.
>
> 3. Dynamically loadable components
> - All other components, including pure script (in the default parser
> language) and binary components.
> In order for a binary component to be dynamically loadable, it must
> conform to a certain specification/protocol that Paul is working on
> right now.
I think there could be a *long* discussion on exactly what component
should be where...
> Now we have a highly customizable, componentized Tcl. A minimalist Tcl
> for an embedded system could contain only the tiny kernel, Tcl parser and
> customized static components for the specific hardware. (Note the absence
> of the filesystem and networking components here.) On the other hand, we
> could have a powerful enterprise integration environment with dynamically
> loadable components, including of course Tk. I hope that I have made my
> points clear this time.
Dynamic loading of binary components other than those that are
completely trusted is *extremely* contentious. Accessing remote
binary components (via CORBA, say) is less so since you can make sure
that the parts that live within your "security zone" are trusted and
well-behaved in that they only give external access to those resources
that they are explicitly permitted to.
All of which raises the question: just what is a security zone? I'm not
wholly sure, but you could say that there is no security checking
between components within a zone. In Tcl terms, it would probably be
an interpreter, and in Java terms it would probably be a ClassLoader.
SZs nest, of course, since a corporate firewall also delimits a zone.
I can't remember where I first encountered the concept, or what jargon
they used either... :^)
This post is too long and not coherent enough. Nor is it sufficiently
diplomatic, but I'm suffering from an application that I can either run
until the problem with overallocation[*] of memory shows up, or run under
a debugger to get useful information out of it. But not both at the same
time. <cue screams of anguish> This is a bit(!) frustrating...
Donal.
[* Not a leak precisely. The memory gets deallocated again, but the
peak usage is stupidly large given the amount I think is needed to
run the app, which causes serious problems... ]
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
Department of Computer Science, University of Manchester, U.K. +44-161-275-6137
--
>>SmallEiffel remains an option well worth investigating and is IMHO ideal
>>for compiler development, contrary to perl6-porters decision.
The main problems for a core are modularity, API, and performance.
There is no OO inheritance needed. ANSI C is the best choice of
language. We need to separate the core and its library. At present
the architecture of the Tcl core is not satisfactory. The arguments
about which language to use for the implementation are missing the
point. At present the better question is: what are the requirements of
the Small Tcl core? For example, I require it to be less than 50K. Within
this limitation we have to choose carefully what we need.
What the small Tcl core needs is reorganization. It is evolution,
not revolution.
Chang
>I feel, (and some others who have expressed similar sentiments), that Tcl
>has completely lost its direction. It was a GOOD embedded language.
>Now it is so fat and bloated with API's that we have to discuss
>which c/c++ compiler to use, and system implementations, and standardizing
>libraries and such.
I agree. That is why we need a Small Tcl. Tcl 8.0 or 8.1 is useless in
embedded applications. We really need a 50K Tcl core. That would provide
the ability to scale up. Currently Tcl 8.0 and 8.1 cannot scale down!
By this standard Tcl 8.0 has no scalability. It is unfortunate. I may list
some requirements for Small Tcl later. The number one requirement is
scalability (up and down)!
>Tcl7.4 was simple enough that we embedded it on an in-house multiprocessor
>board running a Real Time Kernel in ONE WEEK - and we were able to develop
>and test scripts for it on the Sun workstations. How many people are doing
>that with Tcl8+ I wonder. Hence the discussion elsewhere in this newsgroup
>concerned with a newer simple Tcl.
>Tcl needs to find its roots again: Small, Simple, Powerful. Anything more
>and we might just as well use a compiled language - the debuggers are that
>much better and the resulting programs are that much faster too.
Right. That is the task of Small Tcl.
Chang
>Andrew
>lvi...@cas.org wrote in message <739207$g4j$1...@srv38s4u.cas.org>...
>:In <3653FCC7...@equi4.com> j...@equi4.com writes:
>:: - Today's C++ compilers no longer imply huge runtimes, nor do
>:: they limit the choice of platforms as much as they used to.
>Has g++ been ported to all platforms currently supported by Python and
>Tcl (VMS, OS/2, Macintosh, ...).
>--
I tried a few times a while back to persuade some of my
colleagues/superiors to put a Tcl interpreter in some embedded devices,
without success. Lately, with the bloat of 8.x, I didn't even bother
asking. It is simply impossible. Now if a (much) smaller Tcl core
would make that feasible, I might have another shot at it. But adding
religious wars like C++ vs C on top of the argument seems way out of
the question.
$0.02
/Carmel Lau
cl...@home.com
(Opinions are mine)
In particular, I believe that the SmallEiffel and Sather compilers share
some interesting implementation advantages:
* Code is generated by walking the call tree from 'main'. That is,
only those routines which might ever provably be called are actually
emitted in the output. This can be a very big win with fat
parameterized library classes: only those few routines which are
actually used get compiled. (See the toy sketch after this list.)
* They generate very efficient dispatch tables, with no penalty for
multiple inheritance. SmallEiffel seems to have an even more efficient
and highly specialized dispatch implementation.
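A toy sketch of that call-tree walk (an invented adjacency-matrix
encoding, nothing to do with either compiler's real data structures):
mark everything transitively reachable from main() and emit only that.

    #include <stdio.h>

    #define NROUTINES 6

    /* calls[i][j] != 0 means routine i may call routine j.
     * Routine 0 stands for 'main'. */
    static const int calls[NROUTINES][NROUTINES] = {
        {0, 1, 1, 0, 0, 0},    /* main calls r1 and r2.    */
        {0, 0, 0, 1, 0, 0},    /* r1 calls r3.             */
        {0, 0, 0, 0, 0, 0},    /* r2 calls nothing.        */
        {0, 0, 0, 0, 0, 0},    /* r3 calls nothing.        */
        {0, 0, 0, 0, 0, 1},    /* r4 (dead) calls r5.      */
        {0, 0, 0, 0, 0, 0},    /* r5 (dead) calls nothing. */
    };

    static int emitted[NROUTINES];

    static void
    mark(int r)
    {
        int j;

        if (emitted[r]) {
            return;             /* Already scheduled for emission. */
        }
        emitted[r] = 1;         /* Provably callable: emit it. */
        for (j = 0; j < NROUTINES; j++) {
            if (calls[r][j]) {
                mark(j);
            }
        }
    }

    int
    main(void)
    {
        int i;

        mark(0);                /* Walk the call tree from 'main'. */
        for (i = 0; i < NROUTINES; i++) {
            printf("routine %d: %s\n", i, emitted[i] ? "emit" : "drop");
        }
        return 0;
    }

Running this emits routines 0-3 and drops the dead pair, which is the
whole idea, writ small.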
:Patrick Doyle TINCC
:doy...@ecf.toronto.edu
--
* Matthew B. Kennel/Institute for Nonlinear Science, UCSD
*
* "do || !do; try: Command not found"
* /sbin/yoda --help
In article <742i9m$rmr$1...@nnrp1.dejanews.com>,
fl...@my-dejanews.com wrote:
<much other topics elided>
> Component layers:
>
> 1. Portable tiny kernel
> - Provides meta-language support and bootstrapping/linking with
> static components.
>
> 2. Static components
> - Provide facilities that are linked with the kernel at compile time,
> e.g. the default language parser (Tcl of course, or any language that
> uses the meta-language support in the kernel); basic networking support;
> basic filesystem support; dynamic component support; and other static
> components for specific platforms. The purpose of static components is
> to provide component configuration at compile time so that only needed
> components are linked in.
>
> 3. Dynamically loadable components
> - All other components, including pure script (in the default parser
> language) and binary components, which depend upon the dynamic
> component support -- a static component.
>
> Now we have a highly customizable, componentized Tcl. A minimalist Tcl
> for an embedded system could contain only the tiny kernel, Tcl parser and
> customized static components for the specific hardware. (Note the absence
> of the filesystem and networking components here.) On the other hand, we
> could have a powerful enterprise integration environment with dynamically
> loadable components, including of course Tk. I hope that I have made my
> points clear this time.
Well, but what features DO you want to have in?
In the interest of finding a solution, I'll give a list of features
I want to have in the small Tcl:
* Traces, events, waits. This is really high on my list
for embedded stuff. The OS should be able to generate an event
within Tcl (and associate data with it), so that later, when
Tcl runs in an idle cycle of the OS, it can process this event.
(A sketch of what this could look like follows after the list.)
* Namespaces/packages. I could live with Tcl-only packages for
small Tcl, with bigger stuff having to be statically linked into
Tcl.
What do I not need?
* I/O. The whole channel concept could be an extension.
In embedded work I've got to write a send function anyway when sending
data to some piece of hardware. Receiving stuff could be handled by
event queues.
* exec :-)
* How does "after <some time>" work? I think we don't need an extra
timer for Tcl, when every RTOS already offers several.
* The bytecode compiler.
* That object interface?
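To make the first wish concrete, here is a rough, untested sketch against
the stock Tcl event-queue API (Tcl_Event / Tcl_QueueEvent, present since
Tcl 7.5); the MyDeviceEvent type and the DeviceDataArrived script
procedure are invented for illustration:

    #include <stdio.h>
    #include <tcl.h>

    /* Custom event record; the Tcl_Event header must come first. */
    typedef struct MyDeviceEvent {
        Tcl_Event header;        /* Generic event fields (proc, nextPtr). */
        int deviceId;            /* Which device raised the event. */
        int value;               /* Data associated with the event. */
    } MyDeviceEvent;

    static Tcl_Interp *deviceInterp;  /* Set once at initialization. */

    /* Runs later, from the Tcl event loop, in the OS's idle cycle. */
    static int
    DeviceEventProc(Tcl_Event *evPtr, int flags)
    {
        MyDeviceEvent *dev = (MyDeviceEvent *) evPtr;
        char script[80];

        (void) flags;            /* This sketch processes unconditionally. */
        sprintf(script, "DeviceDataArrived %d %d",
                dev->deviceId, dev->value);
        Tcl_Eval(deviceInterp, script);
        return 1;                /* Done: Tcl removes and frees the event. */
    }

    /* Called from the driver side to post data for later handling. */
    void
    PostDeviceEvent(int deviceId, int value)
    {
        MyDeviceEvent *dev =
                (MyDeviceEvent *) Tcl_Alloc(sizeof(MyDeviceEvent));

        dev->header.proc = DeviceEventProc;
        dev->deviceId = deviceId;
        dev->value = value;
        Tcl_QueueEvent((Tcl_Event *) dev, TCL_QUEUE_TAIL);
    }

The driver side posts and returns immediately; the script runs later when
the event loop gets a cycle. (The stock queue isn't interrupt-safe, so on
a real RTOS the posting would have to happen from task context or behind
a lock.)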
Greetings!
Volker
I would not call Java a nice OO language at all. I think that it is
horrendous. Also, although it is supposed to be binary compatible across
multiple platforms, it does rely on all those platforms having the same
version (there are a lot of incompatibilities between different
versions) and on having completely compatible virtual machine
implementations.
I for one would not like Tcl to be reliant on Java for its
implementation.
How would you access the RTOS's timer from Tcl then?
> * The bytecode compiler.
> * That object interface?
>
> Greetings!
> Volker
When you create a list of features that you need / don't need in a
small core, you should list Tcl features rather than implementation
issues, and how you want to / do use them. You should also give us some
idea of how large you want the core to be, how much memory it is allowed
to use, etc., as that information is more important than how it is
implemented underneath.
Anyone got any concrete figures about memory requirements?
Creating a Tcl core which will satisfy all the requirements of everyone
who wants to embed it is impossible. What we should aim to do is
structure it so that it is easy to simplify.
The first thing we need to do is create a list of function groups that
Tcl supports. Once we have that then we can sort them against some
criteria (usefulness vs cost) and decide which ones should be in the
core and which ones should be statically bound in and which ones should
be dynamically loadable.
Here is a first stab at a list. The number after each group is the
approximate percentage of the total core that those functions take:
    namespaces                3.7
    regular expressions       1.1
    packages                  1.0
    dynamic loading           2.3
    file management           5.4
    input / output           12.1
        + sockets            +0.4
    date / time               4.2
    processes                 8.8
    events / notifier         2.0
    tcl commands             15.0
    variables                 5.2
    basic objects             4.0
    error strings             1.7
    compiler                 10.5
    virtual machine           6.4
    interp stuff              5.5
Networking should really be a dynamically linked component.
I think Tcl should be re-architected a bit like this:
Layer 1 (lowest): Tcl core language
Flow control, variables, traces, procs, namespaces,
packages, dynamic loading, basic file i/o
Layer 2: event loop
Dispatching, vwait, after, fileevent etc.
Everything else as dynamically loaded components above layer 1 or 2:
Networking (on layer 2)
Tk (on layer 2)
Incr Tcl (on layer 1)
Safe Tcl (on layer 1)
Other components can be layered above dynamically loaded packages:
Img (on Tk)
Tix (on Tk)
Tcl-DP (on networking, and layer 2)
The "package require" command will ensure that dynamically loaded
components can only be loaded into an interpreter with the correct
support -- be it the event loop or other dynamically loaded components.
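As an illustrative sketch of how that could look for a loadable networking
component (the "Net" component and the "eventloop" package name for layer 2
are invented; Tcl_PkgRequire/Tcl_PkgProvide are the stock package API), the
component's init function can enforce its own layering:

    #include <tcl.h>

    /*
     * Init function for a hypothetical dynamically loadable "Net"
     * component. It refuses to load unless the layer it sits on is
     * already present in the interpreter.
     */
    int
    Net_Init(Tcl_Interp *interp)
    {
        /* Insist on the (hypothetical) layer-2 event loop package. */
        if (Tcl_PkgRequire(interp, "eventloop", "1.0", 0) == NULL) {
            return TCL_ERROR;
        }
        /* ... create the socket commands here ... */
        return Tcl_PkgProvide(interp, "Net", "1.0");
    }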
I would really like to see Tcl return to its roots as a simple,
embedded scripting language. It should be possible to embed a
basic language interpreter into an application without modifying
the Tcl library itself.
However, I also find Tcl very useful as a platform to build cross
platform tools. Good layering that cleanly separates the different
concerns of the Tcl/Tk toolkit will ensure that Tcl will support
both uses.
fl...@my-dejanews.com wrote:
>
> I felt I needed to make these more important points stand out from the
> other issues.
>
> [component layers proposal elided -- quoted in full earlier in the thread]
--
+------------------------------------------+---------------------+
| Name: Nat Pryce MEng ACGI | Dept. of Computing, |
| Email: n...@doc.ic.ac.uk | Imperial College, |
| Tel: +44 (0)171 594 8394 | 180 Queen's Gate, |
| Fax: +44 (0)171 581 8024 | London SW7 2BZ, |
| WWW: http://www-dse.doc.ic.ac.uk/~np2 | United Kingdom |
+------------------------------------------+---------------------+
In what way? (And no, I am not Angus Deayton...)
> Also although it is supposed to be binary compatible across multiple
> platforms it does rely on all those platforms having the same
> version (there are a lot of incompatibilities between different
> versions) and also having completely compatible virtual machine
> implementations.
You mean the fact that some software companies decided to try to
scupper the language and had the Big Bad Court tell them to stop it?
Or is it the fact that the conformance and testing suites are not
tight enough yet?
> I for one would not like Tcl to be reliant on Java for its
> implementation.
Certainly not at the present time, but in the future...?
Donal.
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
Department of Computer Science, University of Manchester, U.K. +44-161-275-6137
--
My personal opinion is that it would be a better use of people's time
to work on other projects. There are zillions of new things you
could write for Tcl that would provide much more immediate benefits.
-John-
Jean-Claude Wippler wrote:
>
> Alexandre Ferrieux wrote (in reply to Guido van Rossum's comments):
> [...]
> > How could we disagree with an intention to share more, and reinvent
> > fewer wheels ? Okay, to some extent most have already been reinvented
> > (I mean duplicated between Tcl and Python). Still, this is obviously,
> > definitely an interesting standpoint for the remaining ones.
> >
> > So, let's go ahead and write down the specs of 'the ultimate C-level
> > OS abstraction library', first by merging the Tcl and Python ones
> > (with duplicates in blinking red).
>
> [snipped, lots of ideas on what might be good candidates for teaming up]
>
> Can this really be happening? Pinch, ouch: yes, it's not a dream...
>
> Imagine the following "Scripting 2000 toolbox", let's call it "S2K":
>
> - A small set of independent function groups, such as hashing, base io,
> dynamic vectors, string utilities, regex, etc. Coded in C or C++.
> And another one whose time has come, IMO: memory mapped file access.
> - This core is based on proven software, aiming for maximum MODULARITY.
> Function groups are planned to evolve / be-added right from the start.
> - Choose C++ as interface language, and make very careful use of the
> template mechanism to avoid inheritance and virtual members, except
> where these mechanisms add value and *increase* performance.
> - Use the virtual dispatch mechanism of C++ as the binding layer for
> extensions. Add wrappers for C and Tcl/Python specific aspects.
> - Consider using C++ exceptions, if this does not prevent portability.
> A major gain would be to benefit from C++'s automatic stack cleanup.
> - Use C++ class wrappers to take care of all reference-counting details.
> This is easy, automatic reference counting works really well with C++.
>
> Before this starts to sound like a huge wish-list, I have to add that I
> think it can be done without creating a monster. This should not be an
> attempt to re-invent wheels yet again, but a way to leverage whatever
> code there is right now, from areas as varied as Tcl, Python, Perl, and
> what have you (STL, Icon, Scheme, the list is endless). A good example
> would be Henry Spencer's regular expression code, or the PCRE version,
> which might turn out to be an excellent choice. With templates, such
> code could be parametrized on class/type, allowing the use of the same
> code even with the very different object models of Python and Tcl.
>
> The underlying assumptions are, that general-purpose scripting languages
> all have very similar requirements (hashing, string handling, dynamic
> vectors are all good examples of this), and that a common code base will
> attract maximum attention from the most talented minds in the world to
> optimize and take such generic function groups to new heights.
>
> C++ makes very good sense IMO, because of the following considerations:
> - Templates allow code to be reused no matter what the low level
> implementations look like. It really is possible to reuse the
> perfect sorting code, despite the fact that strings are often
> implemented in very different ways (null bytes vs. <ptr,len>),
> to name one example. There is no performance penalty.
> - Virtual dispatch vectors are a very efficient mechanism for
> late binding in statically typed / compiled languages. A more
> generic dynamic extension mechanism could greatly reduce the
> amount of work needed to connect *all* scripting languages to
> many sorts of language-independent tools and libraries.
> - C++ is a 99% superset of C, and does not sacrifice speed. As
> a matter of fact, I claim that it adds opportunities for more
> performance in the same way that algorithms in C can nowadays
> outperform assembly code by using much smarter algorithms.
> - Today's C++ compilers no longer imply huge runtimes, nor do
> they limit the choice of platforms as much as they used to.
> - All new software engineering developments are now taking place
> in C++ instead of C, the STL being a prime example. Though a
> choice of STL would need to be carefully considered, there are
> probably many other valuable C++ bodies of code to pick from.
>
> Does this rule out C? No, with suitable wrappers existing code can be
> accommodated just as well. This matters especially for extensions.
>
> What about Guile? Well, that's a different goal. Guile, as far as I
> understand it, aims higher and wants to reimplement several scripting
> languages on top of a Scheme-based language. Some of the function
> groups mentioned here might perhaps be useful to Guile - I don't know.
> Collecting existing code and making it fit into a common code base is
> mostly orthogonal to the idea of building with Guile, I would expect.
>
> How does this help Tcl? This could be used to arrive at a smaller Tcl
> core. Tiny even. How about a Tcl-only version which does not support
> traces (variable/command), and which does not support Tk? There are
> plenty of simple scripts which do not need this extra capability. It
> would not surprise me if it turns out that adding such a fundamental
> idea as variable tracing could even be done through templates.
>
> How does this help Python? By adopting this approach, Python could be
> the driving force behind such new modularity, and offer many of its very
> powerful data structures as spin-off. It would benefit by having even
> more people involved in optimizing and improving the algorithms used.
>
> How does this help scripting in general? How about finding new ways to
> package and deploy extensions, for ALL languages? Or installers which
> benefit from common code? Or more generalized test/porting tools? Or
> pluggable string/container implementation experiments? Or combining
> efforts to port to embedded/palmtops? The list is really endless...
>
> But isn't scripting language X supposed to fight and destroy language Y?
> For three reasons the answer is no, IMO. First of all, Tcl and Python
> appear to be so different that they look very much complementary, more
> like a macro language versus a data-abstraction language than anything
> else. Second, there are plenty of technical similarities which might
> turn out to require no trade-offs whatsoever. After AWK, Perl, Tcl, and
> Python - we all *know* how to get hashing and dynamic vectors "right".
> Finally, diversity is not only good, it is in fact the only way to move
> forward and spur evolution. Software engineering can now be taken to a
> higher level, and focus on new language designs and data model designs.
> Core data structures and core algorithms really are solved, just like
> the development of C runtime libraries is basically a thing of the past.
> IMO, it is time to move to higher levels of abstraction.
>
> Ok, but isn't Perl the big evil we all desperately want to go away? Nah,
> there is plenty of room on this planet. The current investment in Perl
> is most likely a multiple of Tcl and Python taken together. It is not
> realistic to expect anyone to destroy that investment. And who knows,
> people from the Perl community might well embrace this approach, and
> even contribute substantially. Either way is fine with me. In terms of
> technology (read: performance), Perl still has quite a lot to offer.
>
> All of today's modern scripting languages have more commonality than
> fundamental differences, despite everyone's personal preferences and
> those ever-recurring discussions of language syntax and semantics.
>
> Let's try to move forward. Let's share and combine the best there is.
> Not as a goal, but simply as a side effect of striving for excellence.
> Scripting, its flexible and portable solutions, will benefit from it.
>
> -- Jean-Claude
--
________________________________________________________________________
John Ousterhout 650-843-6902 tel
Chief Executive Officer 650-843-6909 fax
Scriptics Corporation john.ou...@scriptics.com
The Tcl Platform Company http://www.scriptics.com
The purpose of static components is to get the advantage of componentization
without the overhead of a dynamic loading mechanism, as a portable dynamic
loading mechanism is non-trivial to implement and probably incurs more
runtime overhead. This is important for embedded-system configurations. On
the other hand, any static component can easily be converted into a
dynamically linked component, and vice versa, if one wishes.
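A sketch of what that compile-time configuration could look like (the
CFG_* switches and the Net/Fs components are invented; Tcl_StaticPackage
is the stock call for registering compiled-in packages):

    #include <tcl.h>

    #ifdef CFG_NETWORKING
    extern int Net_Init(Tcl_Interp *interp);   /* Hypothetical component. */
    #endif
    #ifdef CFG_FILESYS
    extern int Fs_Init(Tcl_Interp *interp);    /* Hypothetical component. */
    #endif

    /* Application initialization: only the components compiled in are
     * registered, so an embedded build pays nothing for the rest. */
    int
    Tcl_AppInit(Tcl_Interp *interp)
    {
        if (Tcl_Init(interp) == TCL_ERROR) {
            return TCL_ERROR;
        }
    #ifdef CFG_NETWORKING
        if (Net_Init(interp) == TCL_ERROR) {
            return TCL_ERROR;
        }
        Tcl_StaticPackage(interp, "Net", Net_Init,
                (Tcl_PackageInitProc *) NULL);
    #endif
    #ifdef CFG_FILESYS
        if (Fs_Init(interp) == TCL_ERROR) {
            return TCL_ERROR;
        }
        Tcl_StaticPackage(interp, "Fs", Fs_Init,
                (Tcl_PackageInitProc *) NULL);
    #endif
        return TCL_OK;
    }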
> Layer 2: event loop
> Dispatching, vwait, after, fileevent etc.
They can be on the same level as dynamic loading and filesystem support,
etc., depending on your perspective. E.g., they can be more important than
dynamic loading and file I/O on an embedded system. So from my component
point of view, every configurable (excludable) unit that doesn't rely on
dynamic loading is a static component.
Too many layers are artifacts of particular domains, and unnecessary.
> Everything else as dynamically loaded components above layer 1 or 2:
> Networking (on layer 2)
> Tk (on layer 2)
> Incr Tcl (on layer 1)
> Safe Tcl (on layer 1)
>
> Other components can be layered above dynamically loaded packages:
> Img (on Tk)
> Tix (on Tk)
> Tcl-DP (on networking, and layer 2)
See comments above.
> The "package require" command will ensure that dynamically loaded
> components can only be loaded into an interpreter with the correct
> support -- be it the event loop or other dynamically loaded components.
Agreed.
> I would really like to see Tcl return to its roots as a simple,
> embedded scripting language. It should be possible to embed a
> basic language interpreter into an application without modifying
> the Tcl library itself.
Many embedded systems don't need/support dynamic loading. That's why I
introduced static components.
We cannot talk about components without talking about their interfaces. Once
they are defined with proper abstraction, our work is half done.
Basically it comes down to maturity.
Java seems to be missing a lot of features which I, for one, have come to
expect from an object-oriented language. I know that these features are
not necessary for an object-oriented language, but they can make life a
lot easier (and also a lot more complicated if you are not careful):
1) Multiple inheritance.
2) Templates.
3) Enumerations.
Also, some of the features have not been carefully thought out, or have
major problems:
1) Anonymous classes do not have proper closure.
2) AWT.
I am in no way implying that C++ (which, as most of you will have guessed,
is the OO language I know best) is perfect; in fact it is far from perfect.
It is just that it has had the benefit of time (and the handicap of trying
to be a super C). Java and C++ are aimed at two different markets, and I
feel that C++ fits its market better than Java fits its own.
> > Also although it is supposed to be binary compatible across multiple
> > platforms it does rely on all those platforms having the same
> > version (there are a lot of incompatibilities between different
> > versions) and also having completely compatible virtual machine
> > implementations.
>
> You mean the fact that some software companies decided to try to
> scupper the language and had the Big Bad Court tell them to stop it?
> Or is it the fact that the conformance and testing suites are not
> tight enough yet?
>
Both, plus the fact that the language definition and the classes seem to
change so much and are incompatible from one release to the next. I am
sure that it will settle down, but by that time there will be so many
different versions out there that it will be very difficult to write
truly cross-platform code.
> > I for one would not like Tcl to be reliant on Java for its
> > implementation.
>
> Certainly not at the present time, but in the future...?
>
I cannot conceive of a time when C will be replaced by Java; it has not
even been replaced by C++.