
New version of C-like interpreter available


John Nagle

unread,
Mar 30, 1994, 2:35:17 AM3/30/94
to
da...@pacific.mps.ohio-state.edu ("John E. Davis") writes:
> The latest version of my C-like interpreter S-Lang is available. UNLIKE
>PREVIOUS VERSIONS, THE NEWEST VERSION MAY BE USED OR DISTRIBUTED WITHOUT ANY
>FEE OR ROYALTY. Details on where to ftp S-Lang from are presented below.

> What is S-Lang? S-Lang is a very powerful stack-based embedded interpreter
>with a C-like language interface. This means that you can embed S-Lang into
>your C program to give it a powerful scripting language with a friendly syntax
>that resembles C. For example, here is an S-Lang function that computes the
>[...] of a matrix:
>[example elided]

I took a brief look at this, and it's an interesting little system,
but C/C++ programmers may find it a bit strange. It has manual stack
manipulation, like Forth; you can push multiple return values on the
stack, swap the top two entries on the stack, and similar Forth-like
operations. You can also get the stack out of sync by failing to use
a return value from a function.

Multiple return values are syntactically difficult. Most attempts
at a syntax are painful. Common LISP is the most popular language with
this feature, and even there, it doesn't fit well with the rest of the
language. Mesa, the old Xerox PARC language, had multiple returns done
well, but the Mesa approach of treating a function call as a structure
constructor followed by a one-argument function call returning a structure
of return values never really caught on, mostly because it didn't match
the type systems of C or Pascal.

John Nagle


Jon Leech

unread,
Mar 30, 1994, 11:33:50 AM3/30/94
to
[followups to comp.lang.misc only]

In article <DAVIS.94M...@pacific.mps.ohio-state.edu>, da...@pacific.mps.ohio-state.edu ("John E. Davis") writes:
|> Finally, consider the following. Someone asked me to provide a comparison
|> between TCL and S-Lang. To be honest, I do not know TCL but I picked up the
|> package and wrote a simple test routine. Here are both routines (TCL and
|> S-Lang). I think that most C programmers will find the S-Lang function
|> more understandable.
|> [examples elided]

Tcl is not C, nor is it intended to be. It is an easily embedded,
extensible command language interpreter with significant string and list
processing capability. Tcl shines at such tasks, not at number-crunching
(even your file processing examples would be considerably more compact in
Tcl than in S-Lang).

People who are interested in Tcl should visit comp.lang.tcl and learn
more for themselves.

Jon
__@/

Stefan Monnier

unread,
Mar 30, 1994, 12:11:40 PM3/30/94
to
In article <nagleCn...@netcom.com>, John Nagle <na...@netcom.com> wrote:
> language. Mesa, the old Xerox PARC language, had multiple returns done
> well, but the Mesa approach of treating a function call as a structure
> constructor followed by a one-argument function call returning a structure
> of return values never really caught on, mostly because it didn't match
> the type systems of C or Pascal.

Well, SML, Haskell and several other current languages use the same
"single structured argument" style of parameter and return value
passing. Of course they also use currying, which is often nicer,
but still: the Mesa approach is far from dead!


Stefan
--

-----------------------------------------------------
-- On the average, people seem to be acting normal --
-----------------------------------------------------

Peter da Silva

unread,
Mar 30, 1994, 1:09:17 PM3/30/94
to
In article <DAVIS.94M...@pacific.mps.ohio-state.edu>,
John E. Davis <da...@amy.tch.harvard.edu> wrote:
> proc test1 {i0 n} {
> set sum 0
! for {set i $i0} {$i < $n} {incr i} {
>
! incr sum $i
> }
> return $sum
> }

TCL isn't intended for numeric work, but the above changes should greatly
increase the readability of this fragment.

Peter da Silva

unread,
Mar 30, 1994, 1:14:05 PM3/30/94
to
In article <CnIvA...@wizzy.com>, Andy Rabagliati <an...@wizzy.com> wrote:
> Well, the Tk part (X-windows interface) may soon be married to perl, as
> Tkperl.

The people who think Perl is a better language than TCL really mystify me.
Perl is *truly* baroque and awful... all the worst aspects of the ever more
complex UNIX shells without the dataflow model.

Gimme something like Smalltalk, Postscript, or any of the Lisp family. Tcl
isn't perfect, but it's a reasonable compromise between strings and lists.

Jerome T Holland

unread,
Mar 30, 1994, 9:03:40 PM3/30/94
to
In article <nagleCn...@netcom.com>, John Nagle <na...@netcom.com> wrote:

One proprietary language I used extensively on PDP-11's simply used a
semicolon to separate argument *values* from result *lvalues*. It worked
like a charm:
foo(a,b,c;x,y,z)
All args and vals were by value. This example takes a, b, c as args, returns
values to x, y, and z.

You could also use the first result as a normal function value:
x=foo(a,b,c;y,z)
That caused a little work for the compiler writer, but nothing substantial.

The worst part was converting this to C. There was a noticeable decrement
in the clarity of the code.

Jerry Holland
--
Parsifal Software | Parsifal -> AnaGram | 800-879-2577
P.O. Box 219 | AnaGram -> parsers | Voice/Fax 508-358-2564
Wayland, MA 01778 | parsers -> results, fast | jhol...@world.std.com

John Nagle

unread,
Mar 31, 1994, 12:39:33 AM3/31/94
to
da...@pacific.mps.ohio-state.edu ("John E. Davis") writes:
>Finally, consider the following. Someone asked me to provide a comparison
>between TCL and S-Lang. To be honest, I do not know TCL but I picked up the
>package and wrote a simple test routine. Here are both routines (TCL and
>S-Lang). I think that most C programmers will find the S-Lang function
>more understandable.

TCL is truly awful. Painful syntax, weird scope rules, etc.
But it is becoming popular because it is free and talks to X-windows.
I hope somebody comes up with a popular alternative soon.

John Nagle

Marc Wachowitz

unread,
Mar 31, 1994, 2:24:50 AM3/31/94
to
John Nagle (na...@netcom.com) wrote:
> TCL is truly awful. Painful syntax, weird scope rules, etc.
> But it is becoming popular because it is free and talks to X-windows.
> I hope somebody comes up with a popular alternative soon.

Try elk, a scheme interpreter designed to be an extension language. I know
that there are several libraries available for the X window system, though
I don't know how high level the access to X is (I've never programmed X).

------------------------------------------------------------------------------
* wonder everyday * nothing in particular * all is special *
Marc Wachowitz <m...@ipx2.rz.uni-mannheim.de>

Marc Wachowitz

unread,
Mar 31, 1994, 2:39:02 AM3/31/94
to
I wrote:
> Try elk, a scheme interpreter designed to be an extension language. I know
> that there are several libraries available for the X window system, though
> I don't know how high level the access to X is (I've never programmed X).

Just forgot to mention: it's available from ftp.x.org:/contrib/elk-2.2.tar.gz

Andy Rabagliati

unread,
Mar 31, 1994, 4:20:40 AM3/31/94
to
In article <nagleCn...@netcom.com>, John Nagle <na...@netcom.com> wrote:

> TCL is truly awful. Painful syntax, weird scope rules, etc.
>But it is becoming popular because it is free and talks to X-windows.
>I hope somebody comes up with a popular alternative soon.
>

Well, the Tk part (X-windows interface) may soon be married to perl, as
Tkperl. Seems like just the thing to soak up all those spare machine
cycles and megabytes you were wondering what to do with. But it does
seem a good match.

Cheers, Andy.

PS - [ I presume the crossposting is still appropriate, but maybe we
should be following up to comp.lang.misc soon ]

Oliver Laumann

unread,
Mar 31, 1994, 4:25:55 AM3/31/94
to
[I have removed comp.lang.c and comp.lang.c++ from the Newsgroups line]

In article <nagleCn...@netcom.com>, John Nagle <na...@netcom.com> wrote:

> TCL is truly awful. Painful syntax, weird scope rules, etc.
> But it is becoming popular because it is free and talks to X-windows.
> I hope somebody comes up with a popular alternative soon.

The alternative is there -- it's the Scheme language: simple syntax;
lexical scoping; freely available (a wealth of different implementations);
talks to the X window system.

Also, don't forget that Tcl was never designed to be used as a
programming language for large, free-standing applications, but as an
(embeddable) extension language for applications written in C.
When used like this, typical Tcl ``scripts'' are rather short; the
syntax actually isn't that painful when Tcl is used for its original
purpose (UNIX shell syntax isn't that painful either provided that
you don't attempt to write large shell programs).

Michael Salmon

unread,
Mar 31, 1994, 4:35:37 AM3/31/94
to
In article <nagleCn...@netcom.com>

It's obvious that you have taken an in-depth look at Tcl. I
would like to hear what you find so painful about the syntax and so
weird about the scoping rules. Funnily enough you didn't mention the
biggest problem of all, quoting.

To clear up a misconception as well, Tcl is an extensible language, i.e.
you can easily add your own commands (too easily, some might say) and
one of Tcl's extensions is Tk, which then uses Xlib to talk to an X11
server.

--

Michael Salmon

#include <standard.disclaimer>
#include <witty.saying>
#include <fancy.pseudo.graphics>

Ericsson Telecom AB
Stockholm

squeedy

unread,
Mar 31, 1994, 5:48:20 AM3/31/94
to
i remember also that the MIT CLU language (B. Liskov) had features
that supported multiple return values from a function. it's been
well over 10 years since i used that language, so details escape me...
--
hwajin
PEACEFUL STAR

Marcus Daniels

unread,
Mar 31, 1994, 6:18:06 AM3/31/94
to

TCL and TkPerl.

Civilization is indeed coming to an end.

Erick Gallesio

unread,
Mar 31, 1994, 7:17:27 AM3/31/94
to
Here is an excerpt of the announcement I've already posted in comp.lang.tcl:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
STk is a graphical package which relies on Tk and the Scheme programming
language. Concretely it can be seen as J. Ousterhout's Tk package where
the Tcl language has been replaced by Scheme.

Features of STk
---------------

* All the commands defined by the Tk toolkit are available to the STk
interpreter (tk commands are seen as a special type of objects by the
interpreter).

* Callbacks are expressed in Scheme

* Tk variables (such as -textvariable) are reflected back into Scheme
as Scheme variables.

* supports Tk 3.6

* the interpreter conforms to R4RS

* a CLOS/Dylan-like OO extension named STklos

* A set of STklos classes have been defined to manipulate Tk commands
(menu, buttons, scales, canvas, canvas items) as Scheme objects.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

STk is available by anonymous ftp at kaolin.unice.fr (193.48.229.225).


--

-------------------------------------------------------------------------------
Erick Gallesio tel : (33) 92-96-51-53
ESSI - I3S fax : (33) 92-96-51-55
Universite de Nice - Sophia Antipolis email : e...@unice.fr
Route des colles
BP 145
06903 Sophia Antipolis CEDEX
-------------------------------------------------------------------------------

klaus u schallhorn

unread,
Mar 31, 1994, 2:06:16 PM3/31/94
to
In article <MARCUS.94M...@tdb.ee.pdx.edu> mar...@ee.pdx.edu (Marcus Daniels) writes:
>
>TCL and TkPerl.
>
>Civilization is indeed coming to an end.

But then, maybe not. Some people seem to be awake ;-)

Brent S Noorda

unread,
Mar 31, 1994, 4:17:52 PM3/31/94
to
Anyone interested in C-like interpreted languages might find my
Cmm language interesting. Cmm is "C-Em-Em" or "C-minus-minus", also
known as "C minus the hard stuff". It is just this "hard stuff",
namely pointers and type-declarations/data-layout, that makes
C difficult to learn and inappropriate for command-line or
application-macro/scripting purposes.

Now Cmm completes the C family of languages: Cmm/C/C++. Each member
of this family shares the basic C syntax, and each member can talk
to the others, but each family member is best suited for a different
domain.

I know what you're thinking: C minus pointers and types is not C! But
if you take a few minutes to look at Cmm then you'll learn that in
the appropriate command-line or macro domain, Cmm IS C. The Cmm
standard library is even identical to the C standard library with
the single exception of bsearch, which returns an index into the
array instead of a pointer.

I would like to tell more here, but you can probably best get an idea
of the utility of Cmm by downloading and testing the CEnvi Cmm interpreters
for DOS, Windows, or OS/2. The latest version of this shareware,
version 1.009, is available via anonymous ftp from world.std.com or
ftp.std.com in the pub directory (I can also mail on request). The
files are cenvi2.zip, cenviw.zip, or cenvid.zip for CEnvi for
OS/2, Windows, and DOS versions, respectively, containing the
interpreter, documents, and oodles of sample Cmm utilities. The same
ftp site has cenvi2.txt, cenviw.txt, and cenvid.txt for a list
of what's in the zip files.

I'm also interested in hearing from application developers who would
like to incorporate the Cmm-engine in their own applications, want
to help with Remote-cmm (running Cmm functions on remote computers),
or want to port CEnvi to different platforms (DOS, DOS32, WIN, WIN32,
NT, and PM are already taken). CEnvi was created as a test-bed
and demonstration platform for the Cmm language, but has proved
so useful by itself that it has taken on a life of its own and
is currently marketed as $38 shareware (all platforms included).

Any questions?

(P.S. Cmm is not to be confused with C--, which is another fine
variant of C, but takes off in a different direction)

Brent
b...@world.std.com

Suresh Kumar

unread,
Mar 31, 1994, 4:35:30 PM3/31/94
to
In <CnIB2...@world.std.com>, jhol...@world.std.com (Jerome T Holland) writes:
>In article <nagleCn...@netcom.com>, John Nagle <na...@netcom.com> wrote:
>>da...@pacific.mps.ohio-state.edu ("John E. Davis") writes:
>>
>> Multiple return values are syntactically difficult. Most attempts
>>at a syntax are painful.
>> John Nagle
>
>One proprietary language I used extensively on PDP-11's simply used a
>semicolon to separate argument *values* from result *lvalues*. It worked
>like a charm:
> foo(a,b,c;x,y,z)
>All args and vals were by value. This example takes a, b, c as args, returns
>values to x, y, and z.
>

MATLAB has a nice syntax for multiple return values:

[retval1, retval2, retval3]= Foo(A, B, c, d);

For example,

[X,D]=eig(A,B);
returns a diagonal matrix D of generalized eigenvalues, along with the
matrix X of eigenvectors.


==============================================================================
Suresh Kumar "It is possible to live nicely even in a palace"
sur...@watson.ibm.com - Marcus Aurelius
==============================================================================

Ray Johnson

unread,
Mar 31, 1994, 5:37:21 PM3/31/94
to
In article <nagleCn...@netcom.com> na...@netcom.com (John Nagle) writes:

Tcl is good for what it was designed for: an embeddable command language.
It is not good for what it has become, a full-blown scripting language
used for application development. It's popular not because it is free,
but because Tk (the windowing extension to Tcl) makes writing interactive
GUIs very easy. (Especially with things like the 'packer'.)

Ray

--
_____________________________________
Ray Johnson
Lockheed Artificial Intelligence Center

Larry Wall

unread,
Mar 31, 1994, 8:35:48 PM3/31/94
to
In article <nagleCn...@netcom.com> na...@netcom.com (John Nagle) writes:
: TCL is truly awful. Painful syntax, weird scope rules, etc.

: But it is becoming popular because it is free and talks to X-windows.
: I hope somebody comes up with a popular alternative soon.

If you don't mind playing with an alpha, Malcolm Beattie <mbea...@ox.ac.uk>
has a version of tkperl out. It's based on Perl 5 so it's pretty nicely
object-oriented, and will likely be available as a dynamic library to
avoid having to keep an extra executable lying around. (Besides, who
really wants to pronounce tkposixisqlsnmpperl all the time?)

Larry Wall
lw...@netlabs.com

dave...@news.delphi.com

unread,
Apr 1, 1994, 7:34:05 PM4/1/94
to
jhol...@world.std.com (Jerome T Holland) writes:

>In article <nagleCn...@netcom.com>, John Nagle <na...@netcom.com> wrote:
>>da...@pacific.mps.ohio-state.edu ("John E. Davis") writes:
>>
>> Multiple return values are syntactically difficult. Most attempts
>>at a syntax are painful. Common LISP is the most popular language with
>>this feature, and even there, it doesn't fit well with the rest of the
>>language. Mesa, the old Xerox PARC language, had multiple returns done
>>well, but the Mesa approach of treating a function call as a structure
>>constructor followed by a one-argument function call returning a structure
>>of return values never really caught on, mostly because it didn't match
>>the type systems of C or Pascal.
>>
>> John Nagle

In Python, it's trivial:

A function can return multiple values:

return a, b

Then, you get the values the same way:

x, y = foo()

You can also just get a tuple of the return values:

tup = foo() # tup[0] is a, tup[1] is b


Very simple and convenient.

Joseph H Allen

unread,
Apr 1, 1994, 11:16:52 PM4/1/94
to
In article <2nieht$j...@news.delphi.com> dave...@news.delphi.com (DAVEG...@DELPHI.COM) writes:
>In Python, it's trivial:

>A function can return multiple values:

> return a, b

>Then, you get the values the same way:

> x, y = foo()

This causes an ambiguity in:
bar(x,y,a,b=foo())

I think python resolves this by not allowing assignments in function calls.
It would have been better if the syntax worked like this:

return {a,b}
{x,y}=foo()
bar(x,y,{a,b}=foo())
--
/* jha...@world.std.com (192.74.137.5) */ /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

George M. Sipe

unread,
Apr 3, 1994, 1:38:50 PM4/3/94
to
In article <CnJs...@world.std.com>, Brent S Noorda <b...@world.std.com> wrote:
>[deleted text]

>Now Cmm completes the C family of languages: Cmm/C/C++. Each member
>[deleted text]

>for DOS, Windows, or OS/2. The latest version of this shareware,

Stop right there.

Shareware library. Two words that, when used together, raise red flags for me.

Unlike John Davis' S-Lang (to which this is a follow-up) and John
Ousterhout's very popular TCL, your software is license-encumbered.
Offering this as shareware is your right, but it will decrease interest
in it by several orders of magnitude. If you find that there is not
strong commercial demand for your software, please consider offering it
unencumbered like S-Lang, TCL, and other excellent embedded
command languages which are freely available *and* may be freely used
(i.e. also not encumbered by the GPL). If Cmm does not reward you with
$, perhaps it could reward you with fame.

--
Manager, Strategic Services - (404) 728-8062 - Georg...@Pyramid.com

Peter da Silva

unread,
Apr 3, 1994, 8:15:54 PM4/3/94
to
In article <1994Apr4.1...@netlabs.com>,
Larry Wall <lw...@netlabs.com> wrote:
> In article <id.7I7...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
> : The people who think Perl is a better language than TCL really mystify me.

> : Perl is *truly* baroque and awful... all the worst aspects of the ever more
> : complex UNIX shells without the dataflow model.

> Tsk, tsk, Peter. Still upset because Perl wouldn't run on your 286 box? :-)

Hardly. Tcl beyond 4.0 won't either. Tk *certainly* won't.

I appreciate that lots of people really like Perl because it's handy, and
convenient, and so on. They undoubtedly get a lot done with it. It's still
ugly and unreadable, and I'm speaking as someone who *likes* a lot of other
languages people dismiss as ugly and unreadable (for example Lisp, C, Forth,
Postscript, etc...).

Come on, you're not going to tell me that Perl is elegant, readable, coherent,
and so on? What does it stand for again?
--
Peter da Silva `-_-'
Network Management Technology Incorporated 'U`
1601 Industrial Blvd. Sugar Land, TX 77478 USA
+1 713 274 5180 "Hast Du heute schon Deinen Wolf umarmt?"

Peter da Silva

unread,
Apr 3, 1994, 8:19:50 PM4/3/94
to
In article <OZ.94Ap...@ursa.sis.yorku.ca>,
ozan s. yigit <o...@ursa.sis.yorku.ca> wrote:
> define private array path-sub-parts(path)
> {
> local result;
> local a;
> local i;
>
> result = array-new(5);
> result[0] = path;
> i = 1;
> a = string-match(path, "(.*)/.*");
> while (array-size(a) > 0)
> {
> path = a[1];
> result[i] = path;
> a = string-match(path, "(.*)/.*");
> i = i + 1;
> }
> return (result);
> }

Oh, that's interesting. Looks like a nice language.

How does it disambiguate:

define private match(a, s)
{
...
}

local string;

string = string-match(path, "...");

Steven D. Majewski

unread,
Apr 4, 1994, 2:22:37 AM4/4/94
to
In article <2nieht$j...@news.delphi.com>,

Actually, Python returns a SINGLE value, which may be a
composite item like a tuple or a list. Python makes
assignment of the contained values possible by having a feature
called Tuple Unpacking - which is what makes both:

tuple = 1, 2
x, y = tuple

possible. Python ALSO has List Unpacking ( not as
ubiquitous or frequently used ):

>>> range( 3 )
[0, 1, 2]
>>> [a,b,c] = range(3)
>>> a
0
>>> b
1
>>> c
2


But this unpacking of a container is NOT quite the same thing that is
meant by multiple-value returns in Lisp (in John Nagle's quote above).
Lisp can also return lists of values, but multiple return values are
a different animal. Two or more values are returned and all but the
first one are optional. (And the painful syntax alluded to above is
the method used to tell it that you want those other optional values.)

Python's divmod(), which returns a tuple pair of values is no
different from a Lisp function that returns a pair of values
as a list or a dotted pair. If you only want the first value
of the pair, you have to select it in some way. ( But I do
like the Python syntax for selecting it -
Python: a = f(x)[0] Lisp: ( setq a ( car ( f x )))
Both of which would generate an error if f(x) returned the
value "1" rather than a pair of values. )

But a Lisp function that returns multiple values only returns a
single value, UNLESS you use the special syntax to capture the
optional values.

Another related, but again somewhat different, concept/feature
is multiple return values Icon style - i.e. coroutine generators
that can supply a series of values and return a different one
each time the generator function is called. Icon has an 'every'
construct to force the generator to yield all of its values.


I don't mean to pick on you in particular: other comments in this
thread probably also missed the distinction, but since your
example was Python, it caught my attention more than the others.

- Steve Majewski (804-982-0831) <sd...@Virginia.EDU>
- UVA Department of Molecular Physiology and Biological Physics

Brent S Noorda

unread,
Apr 4, 1994, 11:00:36 AM4/4/94
to
gs...@pyratl.ga.pyramid.com (George M. Sipe) writes:

>Unlike John Davis' S-Lang (to which this is a follow-up) and John
>Ousterhout's very popular TCL, your software is license encumbered.
>Offering this as shareware is your right, but will decrease interest in
>it by several orders of magnitude. If you find that there is not
>strong commercial demand for your software, please consider offering it
>unencumbered similar to S-Lang, TCL, and other excellent embedded
>command langauges which are freely available *and* may be freely used
>(i.e. also not encumbered by the GPL). If Cmm does not reward you with
>$, perhaps it could reward you with fame.

I am claiming no copyrights, trademarks, or other licensing requirements
on the Cmm language. I am looking forward to the fame that Cmm will
bring to me (accolades, children named after me, beach babes wanting
to please me, etc...) and so I make no monetary claims
on that language. First, I don't feel right about making claims
to a language. Second, I think that Cmm is a natural extension
to the C/C++ language family, and so claiming the language would
be like stealing someone else's natural child.

What I am selling as a shareware program is the Cmm interpreter, called
CEnvi, that I have slaved over for the past couple of years. If you
find CEnvi to be useful for getting work done or in any way making
your life just a little bit better, then I expect that you'll fork
out the $38 so I can feed my wife and kids, dogs, and mouse (you
should see there poor hungry faces crying out for a scrap of
bread). CEnvi is sharewhare now (and will remain so, in one form
or another, although it will be sold commercially very soon)
partially because this allows those of you who are interested
in languages to try out Cmm quickly and for free. I could just define the
Cmm language and fellow computer-scientists such as yourself could look
at the specification and say "this is good" or "this is bad"
but you wouldn't really know the utility of the language
without a working example. CEnvi is a working example. Give
it a try and, whether you pay me any money or not, tell me
what you think of Cmm, how it can be improved, and what it
is good for. (available via anonymous ftp from world.std.com or
ftp.std.com in the pub directory, cenvi?.*, version 1.009).

In other words: Nobody is selling C, but they do sell C
compilers. Similarly, I am not selling Cmm but a
particular implementation of Cmm--and because it is so
far the only implementation I am letting you try it for free.

If CEnvi and/or Cmm is REALLY useful, then I'm in big trouble,
because then Microsoft will soon start selling their own
Cmm interpreter (except they'll be calling it BASIC and
charging a few hundred dollars) and then they'll claim that
they invented the language and they'll sue me and my wife,
kids, dogs, and rat will have to go begging in the streets
to get money to pay the lawyers.

Thanks for the interest.

Brent
b...@world.std.com

Larry Wall

unread,
Apr 4, 1994, 12:54:43 PM4/4/94
to
In article <id.7I7...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
: The people who think Perl is a better language than TCL really mystify me.

: Perl is *truly* baroque and awful... all the worst aspects of the ever more
: complex UNIX shells without the dataflow model.

Tsk, tsk, Peter. Still upset because Perl wouldn't run on your 286 box? :-)

Larry

ozan s. yigit

unread,
Apr 4, 1994, 1:39:18 PM4/4/94
to
Marcus Daniels:

TCL and TkPerl.

Civilization is indeed coming to an end.

no no, it is just evolution doing its babbling, generating its negative
gradients...

oz
---
ps: golem's inaugural lecture, stanislaw lem.

ozan s. yigit

unread,
Apr 4, 1994, 2:24:55 PM4/4/94
to
has anyone looked at the extension language for bbn slate? [if anyone
from BBN or elsewhere could supply a complete grammar for it, i would
appreciate it.]

example from slate:

define private array path-sub-parts(path)
{
local result;
local a;
local i;

result = array-new(5);
result[0] = path;
i = 1;
a = string-match(path, "(.*)/.*");
while (array-size(a) > 0)
{
path = a[1];
result[i] = path;
a = string-match(path, "(.*)/.*");
i = i + 1;
}
return (result);
}

... oz

David Boyd

unread,
Apr 4, 1994, 5:28:01 PM4/4/94
to
In article <OZ.94Ap...@ursa.sis.yorku.ca>,
ozan s. yigit <o...@ursa.sis.yorku.ca> wrote:
>has anyone looked at the extension language for bbn slate? [if anyone
>from BBN or elsewhere could supply a complete grammar for it, i would
>appreciate it.]
>
I have used BBN Slates extension language for several projects
and found it a useful and novel tool. The concept of supporting direct
interfaces to and customization of an office automation/wp/spreadsheet
tool is extremely usefull. I is especially nice for building a system
tailored to a user in a specific functional domain who is not comfortable
with computers and software. I wish more companies would do something
like this. I once built a good briefing support and presentation system
around slate and some motif code. The users loved it. The language itself
was straight forward and flexible to the point that you could override some
internal slate functions.

Now for the news. BBN no longer owns Slate. The software and
rights were sold to a company called Paragon Systems (sorry I don't
have a contact point).

--
David W. Boyd UUCP: uunet!sparky!dwb
Sterling Software ITD INTERNET: Dave...@Sterling.COM
1404 Ft. Crook Rd. South Phone: (402) 291-8300
Bellevue, NE. 68005-2969 FAX: (402) 291-4362
Reston Va Phone: (703)264-8008

Mandeep Dhami

unread,
Apr 4, 1994, 10:15:46 PM4/4/94
to
In article <id.7I7...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
: The people who think Perl is a better language than TCL really mystify me.
: Perl is *truly* baroque and awful... all the worst aspects of the ever more
: complex UNIX shells without the dataflow model.

I feel otherwise: perl "looks" baroque and awful ... but anyone, and I mean
ANYONE, who has written more than 1000 lines in perl will testify to its
amazing expressiveness. It is rare that you are unable to write in perl what
you want it to do (i.e. it does not get in your way, and you don't have to
write around it). There may be too many ways to do it, but that is the beauty!
In its domain of application, it is better than the best ... you just have to
get used to reading punctuation as names/functions ;)

Mandeep

Larry Wall

unread,
Apr 5, 1994, 4:19:30 PM4/5/94
to
In article <2nqhki$d...@sun11k.mdd.comm.mot.com> dh...@mdd.comm.mot.com (Mandeep Dhami) writes:
: I feel otherwise, perl "looks" baroque and awful ... but anyone, and I mean

: ANYONE, who has written more than 1000 lines in perl will testify to its
: amazing expressiveness. It is rare that you are unable to write in perl what
: you want it to do (i.e it does not get in your way, and you don't have to
: write 'around it). There may be too many ways to do it, but that is the
: beauty! In its domain of application, it is better than the best ... you
: just have to get used to reading punctuation as names/functions ;)

Indeed, many of the changes in Perl 5 are there specifically to let you
write programs with about 50% less punctuation. Magical variables
have importable human-readable names:

require English; import English;
$INPUT_RECORD_SEPARATOR = "\r\n";

There's no reason why we couldn't have a "require Swedish" as well. Of
course, then you'd have to put up with the extra punctuation *within*
the identifiers... :-)

Subroutines can now be called as if they were list operators, which
saves you 3 punctuation characters:

Instead of

&foo(1, 2, 3);

you can write

foo 1, 2, 3;

The "indirect object" syntax allows you to call object methods without
the -> symbol (though you can use -> notation if you want to):

$fido = new DOG
Ears => Short,
Tail => Long;

command $fido "heel", "sit", "play dead";
pet $fido;

(To you C++ folks: that "new" there is not a built-in. It's just an ordinary
class method that happens to be a constructor.)

Where you used to have to write

@foo = ("now", "is", "the", "time", "for", "all", "good", "men",
"to", "come", "to");

you can now write

@foo = qw(now is the time for all good men to come to);

And there are some human-readable logical operators that have very low
precedence so that instead of writing

foo(1,2,3) || (&complain, next);

you can write

foo 1,2,3 or complain, next;

I suspect that's an improvement.

(Of course, nobody can make regular expressions look pretty. At least,
not in 25 words or less...)

By the way. Being a musician, I take "baroque" to mean "like J. S. Bach".
And being a linguist, I take "awful" to mean "awe inspiring". :-)

Larry Wall
lw...@netlabs.com

Peter da Silva

Apr 5, 1994, 7:30:15 PM
In article <1994Apr6.1...@enterprise.rdd.lmsc.lockheed.com>,
Ray Johnson <rjoh...@titan.rdd.lmsc.lockheed.com> wrote:

> In article <id.OZB...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
> >> Tsk, tsk, Peter. Still upset because Perl wouldn't run on your 286 box? :-)
> >Hardly. Tcl beyond 4.0 won't either. Tk *certainly* won't.

> What is keeping Tcl from running on a 286?

Nothing, really. It would just take more effort than I can spare to port
it, given that we're *finally* dumping Xenix.

> I know very little about PCs

These are x86-based minis, actually, running Xenix-286.

The biggest problem is that the Xenix compiler puts the stack and BSS into
one segment, and even with TCL 4.0 BSS was big enough that I could only spare
about 8K of stack.

Andy Newman

Apr 6, 1994, 12:55:47 AM
b...@world.std.com (Brent S Noorda) writes:
>Anyone interested in C-like interpreted languages might find my
>Cmm language interesting.

People may also be interested in a language called ICI. It has
C's expression syntax but uses a different data model that is
more suited to the interpretive environment. It has full garbage
collection, proper strings, regular expressions, sets, associative arrays,
error handling (nearly exception handling but not quite) etc...
The interpreter is reasonably small and can be embedded in other
applications (the interpreter source is public domain).

For more information join the ICI mailing list by sending mail to
list...@research.canon.oz.au with the body (not subject) "sub ici".
You get the source for the interpreter through the list although a
couple of sites have it up for FTP.

We (well a few of us) are currently getting a new release of the language
together and would welcome more users.

--
Andy Newman (an...@research.canon.oz.au)

Peter da Silva

Apr 6, 1994, 9:25:17 AM
Your article is eloquent, but it's missing an important point. Computer
languages aren't really "languages" in the sense you're using. For example,
ambiguity in human languages is desirable, and so is redundancy. In fact, one
complements the other. In a "computer language", though, ambiguity and
redundancy lead to confusion. A clean core syntax and simple semantics
don't in any way reduce the capability of a language, and radically
improve overall usability.

A reference to "1984", with the implication that any more formal language
is somehow going to limit the ability for users to express thoughts is very
cute, but meaningless. A computer program isn't a thought, and translating
a program from one language to another... given the languages have similar
semantic levels (it costs more, for example, to convert Perl to C)... is
ample demonstration that the expressive power of the two languages is the
same.

Perl and TCL are very similar in semantic level. There are any number of
other languages that also fit into the same family, from Icon down to some
pretty convoluted codes like Postscript. All of them have a cleaner design
than Perl, and all are as expressive. There's no Orwellian conspiracy to
limit anyone's ability to express thoughts. Just an attempt to make the
code that's written as easy as possible to understand.

$from =~ s/.*<([^>]*)>.*/\1/;
from = subst(from, ".*<([^>]*)>.*", "\\1")
regsub {.*<([^>]*)>.*} $from {\1} from

The only difference in these is that the first can't be parsed in a context
independent way. You have to *know* that a regular expression is expected
after the "=~", so you don't try to divide "s" by whatever the rest of that
expression comes to. Would it have been so bad to force the expression to
be in a string?

> With earlier Perls, when people cried "Hodgepodge!", it was easy enough
> for me to laugh along with them. But with Perl 5, that's just a wee
> bit harder to take. No doubt it's a failure of humility on my part,
> but it galls me just a little to put that much effort (both personal
> and communal) into something, and see that effort rejected with
> fightin' words like "all the worst aspects of".

I certainly don't have as much tied up in TCL as you do in Perl, but I've
spent a fair amount of time working on it, and I'm responsible in some
way for a lot of the details of TCL as it stands now. The language is not
conventional, but it *does* have a remarkably clean syntax that still allows
for broad semantics, and like Lisp it treats programs and data as part of
the same structure (in this case formatted strings). When people refer to
it as being "messy" and in the same paragraph praise Perl, I do wonder
what they're thinking of, and if you thought my response a little excessive,
you ought to be glad I edited it down before posting.

> I understand that
> hyperbole is a useful rhetorical device, but it can hurt nonetheless.

When people toss hyperbole, it shouldn't be a shock when they get it back.

Russell Nelson

Apr 6, 1994, 10:07:56 AM

The worst thing about TCL is that you have to link it with other code
to get it to do anything real. The worst thing about Perl is that you
don't. :)

--
-russ <nel...@crynwr.com> ftp.msen.com:pub/vendor/crynwr/crynwr.wav
Crynwr Software | Crynwr Software sells packet driver support | ask4 PGP key
11 Grant St. | +1 315 268 1925 (9201 FAX) | Quakers do it in the light
Potsdam, NY 13676 | LPF member - ask me about the harm software patents do.

Ray Johnson

Apr 6, 1994, 1:01:26 PM
In article <id.OZB...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
>> Tsk, tsk, Peter. Still upset because Perl wouldn't run on your 286 box? :-)
>Hardly. Tcl beyond 4.0 won't either. Tk *certainly* won't.

What is keeping Tcl from running on a 286? I know very little about PCs
but I ported Tcl to the Mac and know the code pretty well. In the Unix
way, it assumes it can create strings of any size and has infinite stack
space. Therefore Tcl can potentially break on a Mac or a 486. While Tcl
would probably break sooner on a 286, it should be able to run...

(Of course, maybe no one thinks it is worth their while to do the port...)

Tk, of course, only runs on machines that run X (which is a big
performance hurdle to jump in the PC world).

Larry Wall

Apr 6, 1994, 2:44:19 PM
In article <id.OZB...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
: In article <1994Apr4.1...@netlabs.com>,
: Larry Wall <lw...@netlabs.com> wrote:
: > In article <id.7I7...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
: > : The people who think Perl is a better language than TCL really mystify me.
: > : Perl is *truly* baroque and awful... all the worst aspects of the ever more
: > : complex UNIX shells without the dataflow model.
:
: > Tsk, tsk, Peter. Still upset because Perl wouldn't run on your 286 box? :-)
:
: Hardly. Tcl beyond 4.0 won't either. Tk *certainly* won't.
:
: I appreciate that lots of people really like Perl because it's handy, and
: convenient, and so on. They undoubtedly get a lot done with it. It's still
: ugly and unreadable, and I'm speaking as someone who *likes* a lot of other
: languages people dismiss as ugly and unreadable (for example Lisp, C, Forth,
: Postscript, etc...).
:
: Come on, you're not going to tell me that Perl is elegant, readable, coherent,
: and so on? What does it stand for again?

You obviously believed that hype paragraph in the Perl man page... :-)

The truth is more complicated than that. Yes, Perl is Pathologically
Eclectic, and I do hold it to a lesser standard of simplicity at the
level that most computer scientists like to think they think. I'm not
a computer scientist in the traditional sense, though I've passed for
one often enough. I've often claimed to be a linguist, but I'm not a
traditional one of those either. In fact, I have the best of both
worlds: I take my cues from linguistics, but my queues from computer
science.

Well, enough clowning around. Perl is, in intent, a cleaned up and
summarized version of that wonderful semi-natural language known as
"Unix".

And the fact is, natural languages are "baroque and awful", and have a
disturbing tendency to borrow the "worst aspects" of the languages
about them. Nevertheless, people frequently write elegant, readable
and coherent prose in natural languages. And they *enjoy* doing so.

If certain computer scientists had their way, we'd all be programming in
Newspeak. But then only computer scientists would enjoy programming.

Yes, it's possible to think bad thoughts in Perl. It's possible to
write inelegant, unreadable and stupid code in Perl. And people
frequently do. It's one of the prices of freedom.

Anyway, enough of that diatribe. Perl 5 is, in fact, much more
coherent than Perl 4. For instance, the grammar for Perl 5 is
considerably smaller, while still managing to support 99% of old Perl
code (including a number of deprecated misfeatures), as well as adding
in the new object-oriented features. It runs faster, and the semantics
are much better defined. The new extensions are quite minimalistic,
because for the most part they're natural generalizations of ideas that
were already there before. I know there's something right in there,
because newbies keep asking if they can do such-and-so, and the answer
is typically, "That won't work in Perl 4, but it will in Perl 5."

With earlier Perls, when people cried "Hodgepodge!", it was easy enough
for me to laugh along with them. But with Perl 5, that's just a wee
bit harder to take. No doubt it's a failure of humility on my part,
but it galls me just a little to put that much effort (both personal
and communal) into something, and see that effort rejected with
fightin' words like "all the worst aspects of". I understand that
hyperbole is a useful rhetorical device, but it can hurt nonetheless.

Okay, I've said my say, here's my other cheek... :-)

Larry

William Chang in Marr Lab

Apr 6, 1994, 4:13:51 PM
In article <1994Apr6.1...@netlabs.com> lw...@netlabs.com (Larry Wall) writes:
>And the fact is, natural languages are "baroque and awful", and have a
>disturbing tendency to borrow the "worst aspects" of the languages
>about them. Nevertheless, people frequently write elegant, readable
>and coherent prose in natural languages. And they *enjoy* doing so.

Thank you.

>If certain computer scientists had their way, we'd all be programming in

strongly typed, compiled languages, aka Pascal with fixed-length strings!

-- Bill (wch...@cshl.org)

Wayne Throop

Apr 7, 1994, 10:58:50 AM
:: From: lw...@netlabs.com (Larry Wall)
:: If certain computer scientists had their way, we'd all be programming in

: From: wch...@phage.cshl.org (William Chang in Marr Lab)
: strongly typed, compiled languages, aka Pascal with fixed-length strings!

Foo. Foo, I say. Of course, by saying "certain computer scientists",
it may be trivially true, in that one such misguided so-called computer
scientist may be found. But the oldest, strongest common thread in
computer science, with the oldest, strongest formal background, the
strongest tie to mathematical analysis of computing, is (near as I can
tell) the lisp/scheme thread. That is, dynamically typed,
either-compiled-or-interpreted-at-whim, without fixed-length ANYthing.

In fact, it has often struck me that perl is an attempt to subsume
two rather incompatible cultures, the itsy-bitsy-text-tools culture
of Unix and the huge-monolithic-recursive-workspace culture of lisp.
In some ways, it isn't an *extension* of the unix philosophy, but
a *subversion* of it, because it attempts to be one tool that does
many things well, instead of a collection of small tools each of which
does one thing well.

Perl is "just" lisp with a set of (forgive me Larry)
line-noise-oriented reader macros.

Tcl is "just" lisp with a set of (forgive me John)
overly simplistic command-line oriented reader macros.

Perl is a swiss-army chain-saw.
Tcl is a powered hand tool with lots of custom grinders
and bits available separately.

Perl tends to trade orthogonality for short-term convenience. It can
get away with this because all the sub-tools are closely built in
and integrated (a la swiss army knife).

Tcl tends to trade short-term convenience for orthogonality.
It needs to do this, because of all the interchangeable parts
it needs to work with.

Both subvert the unix philosophy, and blend it with the lisp
philosophy, in the way I described above.

--
Wayne Throop throopw%sh...@concert.net
thr...@aur.alcatel.com

Jared Rhine

Apr 7, 1994, 5:40:52 PM
Wayne> [T]he oldest, strongest common thread in computer science, with the
Wayne> oldest, strongest formal background, the strongest tie to
Wayne> mathematical analysis of computing, is (near as I can tell) the
Wayne> lisp/scheme thread. That is, dynamically typed,
Wayne> either-compiled-or-interpreted-at-whim, without fixed-length
Wayne> ANYthing.

I would disagree with parts of this statement. By far, the oldest,
strongest "thread" in computer science is imperative programming, based on a
Von Neumann architecture. Currently, the most common example is C, which is
certainly not lisp. In contrast, the "thread" with the strongest formal
background and best mathematical foundations is strongly-typed functional
languages along the lines of ML, which is also not lisp. Dynamic typing
limits the theoretical foundation you can construct for a language. Most
modern languages that look interpreted are instead incrementally compiled.
And "fixed-length" is generally dependent on the data structures you use, not
the programming model.

--
Jared Rhine Jared...@hmc.edu
wibstr Harvey Mudd College
http://www.hmc.edu/www/people/jared/home.html

"Truth is an evaluation of a statement within a context."
-- attributed to David Butterfield

Marcus Daniels

Apr 8, 1994, 2:05:51 AM

>>>>> "Wayne" == Wayne Throop <throopw%sh...@concert.net> writes:
In article <2o173a$8...@aurns1.aur.alcatel.com> throopw%sh...@concert.net (Wayne Throop) writes:

Wayne> In fact, it has often struck me that perl is an attempt to
Wayne> subsume two rather incompatible cultures, the
Wayne> itsy-bitsy-text-tools culture of Unix and the
Wayne> huge-monolithic-recursive-workspace culture of lisp. In some
Wayne> ways, it isn't an *extension* of the unix philosophy, but a
Wayne> *subversion* of it, because it attempts to be one tool that
Wayne> does many things well, instead of a collection of small tools
Wayne> each of which does one thin well.
[]
Wayne> Both subvert the unix philosophy, and blend it with the lisp
Wayne> philosophy, in the way I described above.

Exactly.

Don't mean to split hairs here, but I don't think there is really an
intended unix philosophy. Because there was a lack of structure,
people used straightforward approaches to hooking simple tools
together. All out of necessity. The good that comes out of this is
that there is always a way to get `something' to work. The down side
is that there is never much effort put into making that `something'
work very well. After twenty-plus years some thought has finally gone
into standardized GUI APIs like CDE and NEXTSTEP. Crazy!

LISP is just the opposite. LISP encourages such uniformity and
integration that I'm not at all surprised that vendors put so little
effort into making it work with other environments. I'm sure it would
have seemed silly at the time to think about FFIs and such as
priorities, considering the alternatives.

I'm fine with the notion that Perl and TCL become embedded tools
that users and system administrators use to get work done. Perl is wonderful
for that. I've got a telecommunications app which uses TCL/expect for
scripts. Works great.

There may be `some computer scientists' who want you to use B&D languages
for checking your buddy's last login, but there also seem to be some
lumpenprogrammers/lazy management types who think airline landing
systems might as well be written in BASIC. Just so it gets done yesterday.

The popularity of Perl and TCL, I think, probably has more to do with
the educational background of their users. Programming is thought of
less and less as a scientific, mathematically oriented activity and
more as a mundane
`repeat-the-ritual-so-the-computer-can-kinda-make-it-do-the-same-thing'.

Marcus Daniels

Richard A. O'Keefe

Apr 8, 1994, 3:59:30 AM
h...@netcom.com (squeedy) writes:

>i remember also that MIT CLU language (B. Liskov) had features
>that supported multiple return values from a function. it's been
>well over 10 years since i used that language, so details escape me...

I just happen to have the CLU manual handy (CLU Reference Manual, Liskov,
Atkinson, Bloom, Moss, Schaffert, Scheifler, & Snyder, Springer Lecture
Notes in Computer Science 114).

foo = proc(a: int, b: bool) returns (bool, int)
    return (not b, a+1)
end foo

b, i := foo(42, false)

The grammar rules in question are
statement ::= ... | decl, ... := invocation;
                  | idn, ... := invocation;
                  | return [( expression, ... )];

It's set up so that a function call that returns multiple values cannot
be embedded in an expression, it can only be invoked as shown in these
rules. That means that
Id0 = proc(Id1: T1, ..., Idn: Tn) returns (R1, R2, ..., Rm)
    ... return (E1, E2, ..., Em) ...
end Id0
can be implemented like this:
void Id0(T1 Id1, ..., Tn Idn, R1 *hid1, R2 *hid2, ..., Rm *hidm) {
    ... { *hid1 = E1; *hid2 = E2; ...; *hidm = Em; return; } ...
}
and a statement like
V1, V2, ..., Vm := Id0(A1, ..., An)
can be implemented like this:
Id0(A1, ..., An, &V1, &V2, ..., &Vm)
There are other ways to implement it too.

Note that Fortran 90 SUBROUTINES can have parameters that are OPTIONAL
and INTENT(OUT). You can say things like
IF (PRESENT(FOO)) FOO = FUM
when FOO is such a parameter. That means that Fortran 90 subroutines
can return multiple values just like CLU procedures, and can even let
you omit results you're not interested in. The syntax is not as clear
as in CLU, but that's another topic.

--
Richard A. O'Keefe; o...@goanna.cs.rmit.oz.au; RMIT, Melbourne, Australia.
"C is quirky, flawed, and an enormous success." -- Dennis M. Ritchie.

d...@oea.hacktic.nl

Apr 8, 1994, 5:09:42 AM
George M. Sipe (gs...@pyratl.ga.pyramid.com) wrote:

: Stop right there.

: Shareware library. Two words that, used together, raise red flags for me.

: Unlike John Davis' S-Lang (to which this is a follow-up) and John
: Ousterhout's very popular TCL, your software is license encumbered.
: Offering this as shareware is your right, but will decrease interest in
: it by several orders of magnitude. If you find that there is not
: strong commercial demand for your software, please consider offering it
: unencumbered similar to S-Lang, TCL, and other excellent embedded
: command langauges which are freely available *and* may be freely used
: (i.e. also not encumbered by the GPL). If Cmm does not reward you with
: $, perhaps it could reward you with fame.

: --
: Manager, Strategic Services - (404) 728-8062 - Georg...@Pyramid.com

^^^^^^^^^^^^
Stop right there.
*.com. One domain that raises a red flag for me.

Unlike *.edu domains, dedicated to the free flow of ideas, your products
are made expecting a financial reward and are often proprietary. Please
consider changing your evil ways. I'm sure that interest in your products
will increase once you offer them freely. Fame and accolade will be heaped
upon you. Inner glow will follow.

--
|< Dan Naas d...@oea.hacktic.nl >|
+--------------------------------------+

Ds. Julian Birch

Apr 8, 1994, 7:26:46 AM
A minor, pedantic point:

In article <id.UNE...@nmti.com> you write:
>
> $from =~ s/.*<([^>]*)>.*/\1/;
> from = subst(from, ".*<([^>]*)>.*", "\\1")
> regsub {.*<([^>]*)>.*} $from {\1} from
>
>The only difference in these is that the first can't be parsed in a context
>independent way. You have to *know* that a regular expression is expected
>after the "=~"

I think you misunderstand what is meant by a context independent grammar.
Certainly, there are parts of perl that are nothing like LR, never mind
LR(1), but this isn't one of them. The problem is that it can only be
parsed with a non-trivial lexer.

>Would it have been so bad to force the expression to
>be in a string?

Actually, it probably would have been. The idea of Tcl is that everything is
a function; perl tries to do things straight off and efficiently. This is
reflected in the syntax for the language. Actually, I quite like Tcl, but
I find it painfully slow. Perl zips along quite happily for simple tasks,
although Tcl is more easily extensible. Perhaps I should go and learn S-Lang.
8-)

Julian.

PS I've removed comp.lang.c(++)? from the follow-ups. Their bandwidth is
far too high anyway, without the great perl vs tcl debate.
--
For my curriculum vitae, type finger jm...@hermes.cam.ac.uk

Peter da Silva

Apr 8, 1994, 12:14:44 PM
In article <2o3f1m$j...@lyra.csx.cam.ac.uk>,
Ds. Julian Birch <jm...@cus.cam.ac.uk> wrote:
> In article <id.UNE...@nmti.com> you write:
> > $from =~ s/.*<([^>]*)>.*/\1/;
> > from = subst(from, ".*<([^>]*)>.*", "\\1")
> > regsub {.*<([^>]*)>.*} $from {\1} from

> >The only difference in these is that the first can't be parsed in a context
> >independent way. You have to *know* that a regular expression is expected
> >after the "=~"

> I think you misunderstand what is meant by a context independent grammar.

No, I was just using sloppy language, and sort of unintentionally making my
point about computer "languages" not being languages in the same sense as
English and Farsi. What I meant is that "s/.*<([^>]*)>.*/\1/" can't be
understood by the human reader without surrounding context... people aren't
codewalkers, usually... they pick up bits of the program in isolation and
look at them. Perl seems to make that a lot harder than need be.

> Actually, it probably would have been. The idea of Tcl is that everything is
> a function,

Ah, no, in TCL everything is a string. A function is a string.

> perl tries to do things straight off and efficiently. This is
> reflected in the syntax for the language. Actually, I quite like Tcl, but
> I find it painfully slow.

This is an implementation detail. Really. You could cache pre-parsed strings
and speed up TCL immensely. For example, let's say you have a loop:

while {[foo]} {
    if {$baz} {
        bar
    } else {
        zot
    }
}

The interpreter can build a parse tree out of that as it goes:

while
    [
        foo
    if
        $
            baz
        bar
        zot

and from the second time through the loop onwards you'll get a lot more speed.
You could even get proc to build that tree. It hasn't been done, but there's
nothing in the language definition to prevent it.

Personally, I still like lisp best. Anyone got a TkScheme?

Leslie Mikesell

Apr 8, 1994, 1:22:42 PM
In article <id.UNE...@nmti.com>, Peter da Silva <pe...@nmti.com> wrote:
>Your article is eloquent, but it's missing an important point. Computer
>languages aren't really "languages" in the sense you're using. For example,
>ambiguity in human languages is desirable, so is redundancy. In fact, one
>complements the other. In a "computer language", though, ambiguity and
>redundancy lead to confusion. A clean core syntax and simple semantics
>doesn't in any way reduce the capability of a language, and radically
>improves overall usability.

No, there is the "human" side of the language and the "computer" side.
On the human side, similarity to something you already understand
is a big plus. Forth and postscript are nice and clean, but no one
thinks like that. Their capabilities are great; their usability isn't.

Perl does a good job of using something similar to the syntax that people
have used in other unix programs. Since this is horribly redundant
and inconsistant, perl is equally (and usefully) afflicted. Perhaps
you could write code in perl's lexer tokens if you want linguistic
purity but I don't think it would make it more capable or useful.

The one part of "unix" programming that I never felt was captured
properly in perl4 was the concept of piping through distinct
entities. I haven't looked at perl5 so perhaps this has been fixed.
What I mean by this is that you commonly attack problems at different
times in different ways (or perhaps different people do different
parts). Then you find that you can combine the solutions into
a pipeline of several operations at once without making any changes
to the individual programs. You can, of course, do this with perl
by continuing to run multiple processes with real pipes, but since
it can do everything else internally it just "feels" wrong. It
seems like there should be a way to simulate a pipeline of programs
from a set of separate scripts within a single process. You can,
of course, rewrite the program to pass $_ around to subroutines,
but that's not the way unix people think. The unix tradition is
to write each piece separately with dozens of command line switches
and expect it to run independently of anything else except its
i/o streams. Having to write subroutines and packages that allow
re-use but don't run independently doesn't quite fit.

Les Mikesell
l...@chinet.com

Henry G. Baker

Apr 8, 1994, 3:57:37 PM
In article <JARED.94A...@osiris.hmc.edu> ja...@HMC.Edu (Jared Rhine) writes:
>Wayne> [T]he oldest, strongest common thread in computer science, with the
>Wayne> oldest, strongest formal background, the strongest tie to
>Wayne> mathematical analysis of computing, is (near as I can tell) the
>Wayne> lisp/scheme thread. That is, dynamically typed,
>Wayne> either-compiled-or-interpreted-at-whim, without fixed-length
>Wayne> ANYthing.
>I would disagree with parts of this statement. By far, the oldest,
>strongest "thread" in computer science is imperative programming, based on a
>Von Neumann architecture. Currently, the most common example is C, which is
>certainly not lisp. In contrast, the "thread" with the strongest formal
>background and best mathematical foundations are strongly-typed functional
>languages along the lines of ML, which is also not lisp.

>Dynamic typing
>limits the theoretical foundation you can construct for a language.

Excuse me? The full lambda calculus cannot be statically typed, yet
it has as clean a theoretical foundation as you are going to get.

Static typing in and of itself is no great prize. Turned around,
dynamic typing in and of itself is not either good or bad. I think
that the dynamic _variables_ of non-lexically-scoped Lisp were a much
greater problem than its dynamic _typing_.

Larry Wall

Apr 8, 1994, 4:43:37 PM
In article <id.UNE...@nmti.com> pe...@nmti.com (Peter da Silva) writes:
: Your article is eloquent, but it's missing an important point. Computer
: languages aren't really "languages" in the sense you're using. For example,
: ambiguity in human languages is desirable, and so is redundancy. In fact, one
: complements the other. In a "computer language", though, ambiguity and
: redundancy lead to confusion. A clean core syntax and simple semantics
: don't in any way reduce the capability of a language, and radically
: improve overall usability.

In computer science, ambiguity is known as "overloading", and
redundancy is known as "declaration". Changing the words doesn't
change the concepts. What we really need to deal with is the
meaning of "capability" and "usability".

I do know that computer languages are not really "languages" in the
sense I'm using. That's what I'm trying (feebly) to change. I think
that we need to stop thinking of them as computer languages and start
thinking of them as human languages.

: A reference to "1984", with the implication that any more formal language
: is somehow going to limit the ability for users to express thoughts is very
: cute, but meaningless. A computer program isn't a thought, and translating
: a program from one language to another... given the languages have similar
: semantic levels (it costs more, for example, to convert Perl to C)... is
: ample demonstration that the expressive power of the two languages is the
: same.

Well, okay, the analogy fails at the level of what's *possible* to
express, because Orwell probably didn't know or care that all human
languages are Turing complete. But practically speaking there's little
difference between the totally impossible and merely difficult. It's
possible to write a Fortran compiler in Teco (in fact, it's been done),
but that's not the kind of expressiveness I'm talking about.

It is possible to express this sentence in Pidgin English, but it may
not be possible to express it in 25 words or less.

And maybe a computer program isn't a thought, but I think it should
be closer to a thought than to a sequence of transistor states.

: Perl and TCL are very similar in semantic level. There are any number of
: other languages that also fit into the same family, from Icon down to some
: pretty convoluted codes like Postscript. All of them have a cleaner design
: than Perl, and all are as expressive. There's no Orwellian conspiracy to
: limit anyone's ability to express thoughts. Just an attempt to make the
: code that's written as easy as possible to understand.

We just have different ideas of what's easy, I suppose. There's more
than one direction to crawl out of the Turing Tarpit. Yes, Postscript is
expressive in one sense, but it's also one of the most user-hostile
languages known to man, all in the name of making the computer's job a
little easier. (Not that regular expressions are all that user
friendly... :-)

: $from =~ s/.*<([^>]*)>.*/\1/;
: from = subst(from, ".*<([^>]*)>.*", "\\1")
: regsub {.*<([^>]*)>.*} $from {\1} from
:
: The only difference in these is that the first can't be parsed in a context
: independent way.

People thrive on context-dependencies, and computers can be taught to
let people thrive.

: You have to *know* that a regular expression is expected
: after the "=~", so you don't try to divide "s" by whatever the rest of that
: expression comes to. Would it have been so bad to force the expression to
: be in a string?

Well, actually, you *can* have a string after the =~ if you want (but
then it makes it a runtime pattern rather than a compile-time
pattern).

The interpretation of / as a quoting character is in fact controlled by
the s. One of the things about Perl that I think is user friendly is
that the user gets to pick the quotes. This is where we get back to
the freedom-of-expression issue (in my sense). People can use this
feature either to increase or decrease readability. By and large
they'll choose to increase readability. Freedom of the press means
that people can print lies. But by and large, most newspapers in this
country try to print some version of the truth most of the time. I
think it's a net win.

: I certainly don't have as much tied up in TCL as you do in Perl, but I've
: spent a fair amount of time working on it, and I'm responsible in some
: way for a lot of the details of TCL as it stands now. The language is not
: conventional, but it *does* have a remarkably clean syntax that still allows
: for broad semantics, and like Lisp it treats programs and data as part of
: the same structure (in this case formatted strings). When people refer to
: it as being "messy" and in the same paragraph praise Perl, I do wonder
: what they're thinking of, and if you thought my response a little excessive,
: you ought to be glad I edited it down before posting.

Okay, I'm glad. :-)

I don't know for sure, but I *think* that when people complain about
TCL looking messy, it's because of the forced use of bracketing
characters all over the place. (I suspect these people have a similar
revulsion to LISP.) Now you and I know that the brackets and parens
are just characters on the screen, and that they're there for a very
good reason, and that languages ought not be judged merely by surface
appearances. I can only offer you the comfort that the same thing
happens with Perl. People see $ and @ and have an immediate
immunological reaction. (Fortunately, it's not life-threatening.)

That's not to say that there aren't some things in Perl that could
use to be improved. There are some places where the design can be
cleaned up, and that's just what I'm trying to do with Perl 5, to
the extent that's possible without changing the things people *like*
about Perl.

: > I understand that
: > hyperbole is a useful rhetorical device, but it can hurt nonetheless.
:
: When people toss hyperbole, it shouldn't be a shock when they get it back.

The shock has overthrown my wit, which is regrettable.

Larry

ozan s. yigit
Apr 8, 1994, 4:56:32 PM

Peter da Silva:

Personally, I still like lisp best. Anyone got a TkScheme?

poof! done. :-)

see STk at cs.indiana.edu:/pub/scheme-repository/new.

an extract from STk reference manual, STk overview:

Tk is a powerful X11 graphical toolkit developed at the
University of California at Berkeley by John Ousterhout. This
toolkit gives the user high-level widgets such as buttons or
menus and is easily programmable. In particular, only a little
knowledge of X fundamentals is needed to build an application
with it. The Tk package relies on an interpreted language named
Tcl.

STk is another implementation of the Scheme programming language.
The main interest of STk is to provide a full integration of the
Tk toolkit in Scheme. In this implementation, Scheme establishes
the link between the user and the Tk toolkit, since it replaces
the Tcl language.

...

STk uses siod, but should probably use vscm or scm.

oz
---
In der Kunst ist es schwer etwas zu sagen, | electric: o...@sis.yorku.ca
was so gut ist wie: nichts zu sagen. - LW | or [416] 736 2100 x 33976

Peter da Silva
Apr 8, 1994, 6:56:26 PM

In article <DAVIS.94A...@pacific.mps.ohio-state.edu>,
John E. Davis <da...@amy.tch.harvard.edu> wrote:
> If you want BOTH speed and a pleasing syntax, S-Lang should interest you. In
> S-Lang, one can write the above fragment as:

> while (foo) {
>     if (baz) {
>         bar;
>     } else {
>         zot;
>     }
> }

Sorry, I actually prefer the TCL version. I like code being interchangeable
with data.

> From what I have seen, TCL's indenting style is very inflexible and one
> cannot use styles like those illustrated above.

while {[foo]} if $baz bar else zot

is legal TCL.
--
Peter da Silva. <pe...@sugar.neosoft.com>.
`-_-' Ja' abracas-te o teu lobo, hoje?
'U`
Looks like UNIX, Feels like UNIX, works like MVS -- IBM advertisement.

Peter da Silva
Apr 8, 1994, 7:02:48 PM

In article <CnyAx...@chinet.chinet.com>,
Leslie Mikesell <l...@chinet.chinet.com> wrote:
> No, there is the "human" side of the language and the "computer" side.
> On the human side, similarity to something you already understand
> is a big plus. Forth and postscript are nice and clean, but no one
> thinks like that.

I do. Seriously. When working in Postscript am I you I from Germany
just come have think would.

> Perl does a good job of using something similar to the syntax that people
> have used in other unix programs.

It doesn't really give me access to the semantics I expect in the shell,
though, and it does it by merging the syntax of dozens of different programs.
It then fails to communicate with me. For example:

> The one part of "unix" programming that I never felt was captured
> properly in perl4 was the concept of piping through distinct
> entities.

Exactly. This more than anything else is the very core of UNIX shell
programming. TCL lets me do this in a clean and intuitive way:

set result [exec prog | prog]

This is more useful to me than being able to say 's/foo/bar/g' at the
language level... and since Perl *is* a parsed language I don't naturally
expect to find scraps of sed littered through it.

Peter da Silva
Apr 8, 1994, 7:14:20 PM

In article <1994Apr8.2...@netlabs.com>,
Larry Wall <lw...@netlabs.com> wrote:
> In computer science, ambiguity is known as "overloading", and
> redundancy is known as "declaration".

Oh god, talk about your tarpit. I'm not going to get into a dictionary war
with a linguist. You can call "a nice knockdown argument" "glory" for all
I care. I don't agree with these redefinitions but we can spend the whole
summer tossing them around if I let it happen, so I won't.

> We just have different ideas of what's easy, I suppose.

Well, obviously we do. You produced Perl. I find it gives me a headache...
and as I've mentioned before I eat user-hostile programming languages for
breakfast. I can't deal with a language where I can pick up a code fragment
and can't even tell what the operators are without the surrounding context.
I guess it's a character flaw.

Larry Wall
Apr 9, 1994, 2:34:40 PM

In article <2o4nqo$a...@sugar.NeoSoft.COM> pe...@sugar.NeoSoft.COM (Peter da Silva) writes:
: In article <CnyAx...@chinet.chinet.com>,
: Leslie Mikesell <l...@chinet.chinet.com> wrote:
: > No, there is the "human" side of the language and the "computer" side.
: > On the human side, similarity to something you already understand
: > is a big plus. Forth and postscript are nice and clean, but no one
: > thinks like that.
:
: I do. Seriously. When working in Postscript am I you I from Germany
: just come have think would.

:-)

: > The one part of "unix" programming that I never felt was captured
: > properly in perl4 was the concept of piping through distinct
: > entities.
:
: Exactly. This more than anything else is the very core of UNIX shell
: programming.

I believe in letting the shells be good at what they're good at. Perl
makes it pretty easy to get to and from the shell.

: TCL lets me do this in a clean and intuitive way:
:
: set result [exec prog | prog]

That's not so very different from

$result = `prog | prog`;

or if you're partial to square brackets,

$result = qx[ prog | prog ];

or if you're partial to strings:

$result = myexec "prog | prog";

You can also explicitly open pipes to/from shell commands:

open(PIPE, "prog | prog|") || die "Oops";
$result = <PIPE>;
close PIPE;

You can also explicitly open pipes to/from a forked copy of yourself, which
seems pretty darn close to what Leslie was asking for.

if (open(PIPE, "-|")) {
    $result = <PIPE>;
}
else {
    print "This is a result";
    exit;
}

And if you're really wacko, you can do all the pipes and forks and execs
yourself, just like in C.

In short, there's more than one way to do it.

What Perl doesn't give you is grammatically based implicit forking.
That's what shells are for.

: This is more useful to me than being able to say 's/foo/bar/g' at the
: language level... and since Perl *is* a parsed language I don't naturally
: expect to find scraps of sed littered through it.

That's why the man page mentions it in the first paragraph... :-)

Seriously, almost every language has scraps of other languages in it.
Perl is just a scrappier language than most.

Larry

Joe Moss
Apr 9, 1994, 5:37:46 PM

pe...@sugar.NeoSoft.COM (Peter da Silva) writes:


> while {[foo]} if $baz bar else zot

>is legal TCL.

Peter, you're slipping. That should be:

while {[foo]} {if $baz bar else zot}

At least with the way the while command is implemented in
John's version 7.3 release. You could, of course, reimplement
while.... Or was that your point?

--
Joe V. Moss | j...@morton.rain.com
Morton & Associates | jo...@m2xenix.psg.com
7478 S.W. Coho Ct. | - or -
Tualatin, OR 97062-9277 | uunet!m2xenix!morton!joe

Leslie Mikesell
Apr 9, 1994, 7:26:45 PM

In article <1994Apr9.1...@netlabs.com>,
Larry Wall <lw...@netlabs.com> wrote:

>What Perl doesn't give you is grammatically based implicit forking.
>That's what shells are for.

No, I don't want "real" forks. What I want is for perl to let me
pretend to pipe data through otherwise isolated program sections
which can also be run and tested as separate entities. If I use
real forks I might as well keep running smaller programs like
sed and awk.

The whole reason for switching to perl from a shell script is that
perl can do it all in one process. However, the way you develop
the shell script is to write all the sections that need sed, awk,
tr, etc. that are complex enough to need testing in separate pieces.
You may (or may not) end up merging the text of these programs into
the body of the shell script but they still run as independent units.
That is, nothing you change in one place will affect how another part
works unless you modify a shell variable that the other one expands and
you can run it out of context with test data from a file or other source.

I just haven't been able to get the same effect in perl while keeping
the ability to put it all in one process. Maybe I'm not using packages
or subroutines right but I just haven't been able to convert several
perl programs into a single one that acts like each piece sees the output
data stream of the previous piece without rewriting the whole mess.
What's worse is that somehow each piece seems to end up needing to know
something about the state of all the others when combined.

Les Mikesell
l...@chinet.com

Peter da Silva
Apr 9, 1994, 10:38:56 PM

In article <1994Apr9.1...@netlabs.com>,
Larry Wall <lw...@netlabs.com> wrote:
> : TCL lets me do this in a clean and intuitive way:
> :
> : set result [exec prog | prog]

> That's not so very different from

> $result = `prog | prog`;

Well, the main difference is that the parsing into tokens is done in terms
of TCL lists, rather than going through a step where tokens are converted
to a string and reparsed. It's like the difference between:

for i in $*

and

for i in ${1+"$@"}

which is a better way to do things. In the TCL case, and the second Bourne
shell case, once it's decided that 'funny file"name\withjunk in.it' is a
file name, it stays that way. This is no minor advantage if you're trying
to make a script foolproof.

> Seriously, almost every language has scraps of other languages in it.
> Perl is just a scrappier language than most.

Can't argue with that, and I'll accept the play on words. Perl is the UNIX
equivalent of BASIC, with all the positive and negative consequences that
implies.

Peter da Silva
Apr 10, 1994, 1:10:55 PM

In article <1994Apr9.2...@morton.rain.com>,
Joe Moss <j...@morton.rain.com> wrote:
> pe...@sugar.NeoSoft.COM (Peter da Silva) writes:
> > while {[foo]} if $baz bar else zot

> >is legal TCL.

> Peter, you're slipping. That should be:

> while {[foo]} {if $baz bar else zot}

You're right. *sigh*.

Still, there's no rigid indentation style. I don't know where people
get stuff like that, unless they're thinking of Python (which does use
indentation to mark nesting levels).

Of course you could do:

rename while WHILE

proc while {cond args} {
    WHILE {[uplevel 1 $cond]} {[uplevel 1 $args]}
}

:->

Wayne Throop
Apr 10, 1994, 2:03:49 PM

: From: l...@chinet.chinet.com (Leslie Mikesell)
: The one part of "unix" programming that I never felt was captured
: properly in perl4 was the concept of piping through distinct
: entities. I haven't looked at perl5 so perhaps this has been fixed.

Well... IMHO, the object oriented features added to perl5 provide a
better decomposition framework than that of the stream-of-text pipe
metaphor, yet one that can be a direct heir to the unix notion of
little-tools-that-do-one-thing-well.

( Of course, tcl already has [incr tcl], and has for some time... This
rather illustrates a strength of tcl vs perl, that is, when one goes
to add an extension, one can concentrate on adding the semantics, but
to significantly change perl one needs to tinker with its lexical and
grammatical structure. (Tcl pays for this advantage, of course.) )

Wayne Throop
Apr 10, 1994, 2:09:43 PM

: From: da...@pacific.mps.ohio-state.edu ("John E. Davis")
: Regarding speed, S-Lang has string, integer, floating point (float or
: double), and array types. For operations involving non-string types, my
: own benchmarks indicate that S-Lang is about an order of magnitude
: faster than TCL. This is due to the fact that TCL has only string
: types.

I strongly doubt this is the case. The order-of-magnitude advantage
is *very* unlikely to be because of the primitive types, but because
tcl reparses the text of procedure and nested statement bodies every
time they are encountered, and doesn't encache any internal form of
executable statement. For example, taking five interpreters I happen
to have handy on my linux system here, namely tcl, perl, clisp, gawk,
and (of my own invention) oosp, I get these times on a trivial recursion
and simple arithmetic benchmark (code appended):

                 cpu seconds    interpreter features
interpreter      for fib(20)    save parse   arithmetic types
-----------      -----------    ----------   ----------------
tcl                  22.82          n               n
perl4                 7.74          y               y
clisp                 3.25          y               y
gawk                  1.62          y               y
oosp                  1.17          y               n

Perl, clisp and gawk all have arithmetic internal types,
and all encache parse trees. Tcl has no internal arithmetic types, and
doesn't encache parse trees. Oosp has no internal arithmetic types,
but *does* encache parse trees.

I think it is clear (at least, it is to me) that the overhead of
reparsing is a far more significant factor than the overhead of
converting to and from strings for lack of arithmetic types. Keeping
internal arithmetic types needn't dominate the performance of an
interpreter.

In fact, I estimate that tcl could be sped up by a factor of perhaps 5,
without changing its fundamental architecture or user interface, or
even programmatic interface, *at* *all*.

-- code in tcl

proc fib {n} {
    if {$n<2} {
        return $n
    } {
        return [expr {[fib [expr $n-2]]+[fib [expr $n-1]]}]
    }
}

-- code in perl

sub fib {
    local($n) = @_;
    if ($n < 2) {
        return $n;
    } else {
        return &fib($n-2) + &fib($n-1);
    }
}

-- code in clisp

(defun fib (n) (if (< n 2) n (+ (fib (- n 2)) (fib (- n 1)))))

-- code in gawk

function fib(n) {
    if (n < 2) return n; else return fib(n-2) + fib(n-1)
}

-- code in oosp

void declare'fib'1
void define'fib with'n arg'1
if int< val'n'2
val'n
+ fib - val'n'2 fib - val'n'1

Wayne Throop
Apr 10, 1994, 5:19:15 PM

::: From: pe...@nmti.com (Peter da Silva)
::: TCL lets me do this in a clean and intuitive way:

::: set result [exec prog | prog]

:: From: lw...@netlabs.com (Larry Wall)
:: That's not so very different from
:: $result = `prog | prog`;

: From: pe...@nmti.com (Peter da Silva)
: Well, the main difference is that the parsing into tokens is done in
: terms of TCL lists, rather than going through a step where tokens are
: converted to a string and reparsed.

Um... no, Larry is right. There's no reparsing involved in the
perl expression he gives (though there often would be in similar
expressions in unix shells). The main difference is that perl devotes
a bit of syntax to recognizing backticks tied specifically to this little
bit of semantics, while tcl separates the issue of lexical appearance
from semantic consequences. This is exactly what makes tcl easier
to add functionality to: it separates concerns of functionality from
concerns of parsing. Perl deliberately does the opposite, and ends
up with more terse notations.

The same difference in philosophy appears in, say, "here files"
for various purposes. Perl imported several notations and methods of
quoting, cleaning up only minimally the maze of twisty evaluations,
all different, that the shells are heir to. In perl, then, one can
specify strings via "" quotation, '' quotation, and "here" files
via bareword delimiter, "" delimiter, and even `` delimiter. In
tcl, the two pre-existing methods of quoting with {} and "" are simply
combined with redirection operators to allow any sorts of here files
you want. In tcl you end up with lots of ways of specifying here
files, but they are combinations of a much smaller set of well-defined
operations. In perl, each kind of quoting is specifically, syntactically
married to a convention to make up a here file method.
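For concreteness, here is a minimal sketch (mine, not from any of the posts above) of three of the Perl here-file flavors being discussed; the `` flavor, which runs the body as a shell command, is left out.

```perl
#!/usr/bin/perl
# Three Perl here-file flavors; the variable names are my own.
$who = "world";

$bare = <<END;           # bareword delimiter: interpolates like ""
Hello $who
END

$single = <<'END';       # '' delimiter: no interpolation at all
Hello $who
END

$double = <<"END";       # "" delimiter: interpolation, spelled out
Hello $who
END

print $bare, $single, $double;
```

Each is the same primitive (a here file) combined with one of the existing quoting conventions, which is the "marriage" of syntax to semantics being described.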

In some ways, it's like the classic problem of whether you write
a single compiler front end for N languages, another back end for M
targets, and get N*M compilers (the tcl way), or whether you custom
craft N*M compilers for each language/target pair, and end up with
faster, more integrated (but more idiosyncratic) compilers (the perl way).

Tcl divorces syntax from semantics, in order to exploit regularities.
Perl marries syntax to corresponding semantic elements, in order to
exploit irregularities.

Wayne Throop
Apr 10, 1994, 6:21:29 PM

:::: From: pe...@nmti.com (Peter da Silva)
:::: set result [exec prog | prog]
::: From: lw...@netlabs.com (Larry Wall)
::: $result = `prog | prog`;

:: From: pe...@nmti.com (Peter da Silva)
:: Well, the main difference is that the parsing into tokens is done in
:: terms of TCL lists, rather than going through a step where tokens are
:: converted to a string and reparsed.

: From: throopw%sh...@concert.net (Wayne Throop)
: Um... no, Larry is right. There's no reparsing involved in the
: perl expression he gives

No, you dummy, Peter is right! Sure, there's no reparsing
of the *output* of the backtick construction (as there is
in shells), but the content of the backticks is fed to a shell
for reparsing on the way *into* exec-ing the two programs.

( I just caught on to what Peter was pointing out there, after
I'd already posted. Yes, tcl removes a level of evaluation
in setting up pipelines. Obviously, perl lists can be used
along with fork and exec system calls to do the same thing, but
tcl comes with it already packaged conveniently, and the regular
syntax helps tcl in this regard. )

Ds. Julian Birch
Apr 11, 1994, 7:58:31 AM

In article <7660...@sheol.uucp>,
Wayne Throop <throopw%sh...@concert.net> wrote:
>: From: l...@chinet.chinet.com (Leslie Mikesell)
>: The one part of "unix" programming that I never felt was captured
>: properly in perl4 was the concept of piping through distinct
>: entities. I haven't looked at perl5 so perhaps this has been fixed.
>
>Well... IMHO, the object oriented features added to perl5 provide a
>better decomposition framework than that of the stream-of-text pipe
>metaphor, yet one that can be a direct heir to the unix notion of
>little-tools-that-do-one-thing-well.

I'm unconvinced by your claim that OOP is 'better' than pipes - I would
claim both were useful for different things. Notably, if you're going
to use unix, the whole system is set up to make pipes and processes the
easiest way of doing things. In addition, working out how to use a
small program in a shell script is always, imho, going to be easier
than working out how to use a class library.

Disclaimer: I haven't played with perl5.

Julian.

Peter da Silva
Apr 11, 1994, 8:47:55 AM

Geeze. I didn't mean to start a Tcl-Perl flamewar. Enough is enough, 'k?

Larry Wall
Apr 11, 1994, 3:00:00 PM

In article <2obe17$b...@lyra.csx.cam.ac.uk> jm...@cus.cam.ac.uk (Ds. Julian Birch) writes:
: In article <7660...@sheol.uucp>,
: Wayne Throop <throopw%sh...@concert.net> wrote:
: >: From: l...@chinet.chinet.com (Leslie Mikesell)
: >: The one part of "unix" programming that I never felt was captured
: >: properly in perl4 was the concept of piping through distinct
: >: entities. I haven't looked at perl5 so perhaps this has been fixed.
: >
: >Well... IMHO, the object oriented features added to perl5 provide a
: >better decomposition framework than that of the stream-of-text pipe
: >metaphor, yet one that can be a direct heir to the unix notion of
: >little-tools-that-do-one-thing-well.
: I'm unconvinced by your claim that OOP is 'better' than pipes - I would
: claim both were useful for different things.

Precisely. Perl happily reinvents a number of wheels, but it would be
silly to reinvent pipes, at least on Unix. Perl tries Really Hard to
fit humbly back into the Unix toolbox. Though, of course, any time you
try to include an elephant in a flea circus, some fleas are bound to
get squashed... :-)

Larry

Wayne Throop
Apr 12, 1994, 9:53:06 AM

:::: From: l...@chinet.chinet.com (Leslie Mikesell)
:::: The one part of "unix" programming that I never felt was captured
:::: properly in perl4 was the concept of piping through distinct
:::: entities. I haven't looked at perl5 so perhaps this has been fixed.
::: Wayne Throop <throopw%sh...@concert.net>
::: Well... IMHO, the object oriented features added to perl5 provide a
::: better decomposition framework than that of the stream-of-text pipe
::: metaphor, yet one that can be a direct heir to the unix notion of
::: little-tools-that-do-one-thing-well.
:: From: jm...@cus.cam.ac.uk (Ds. Julian Birch)
:: I'm unconvinced by your claim that OOP is 'better' than pipes - I
:: would claim both were useful for different things.

But that's not what I claimed. What I said was that it was a better
decomposition framework, not just "better", period. Clearly, there
are many tasks that pipes are better at than objects.

: From: lw...@netlabs.com (Larry Wall)
: Precisely. Perl happily reinvents a number of wheels, but it would be
: silly to reinvent pipes, at least on Unix.

Agreed. No need to reinvent them. There they already are, and should remain.

I suppose where I was unclear was in saying that OO in scripting
languages could be an "heir" to pipes in scripting languages. The
implication being the passing of a legacy upon the death of the
predecessor. That implication wasn't intended. The implication I
intended was more that there is a large overlap in what the two are good
for, and OO has an interesting strength to bring to bear that fits right
in with Unix's small-is-beautiful philosophy.

I suppose that's more like a business partner than an heir... or something.

Richard Kooijman
Apr 12, 1994, 11:22:19 AM

da...@pacific.mps.ohio-state.edu ("John E. Davis") writes:

>In article <1994Apr11....@netlabs.com> lw...@netlabs.com (Larry Wall)
>writes:
>> In article <7660...@sheol.UUCP> throopw%sh...@concert.net (Wayne Throop) writes:
>> :                  cpu seconds    interpreter features
>> : interpreter      for fib(20)    save parse   arithmetic types
>> : -----------      -----------    ----------   ----------------
>> : tcl                  22.82          n               n
>> : perl4                 7.74          y               y
>> : clisp                 3.25          y               y
>> : gawk                  1.62          y               y
>> : oosp                  1.17          y               n
>>

>> Another data point:
>>
>> perl5 2.50 y y


> What machine did you run this on? It is kind of meaningless unless these
>tests were performed on the same machine. Perhaps a ratio is a better
>measure.

My favorite utility awk performs like this using 3 popular incarnations
(run on a SPARCstation IPC):

gawk     1.13
mawk     0.120
awk(*)   0.110

*) not the system's awk (which doesn't understand functions), but
a newer version.

A couple of conclusions:

- my IPC is faster than the Linux station that was used
- smart implementations just perform (a lot) better
- this has nothing much to do with TCL's execution speed
- for speed awk is to be preferred (in this particular
situation, because I know that Perl will be faster in
a lot of other situations)


Richard.

Larry Wall
Apr 11, 1994, 3:28:35 PM

In article <7660...@sheol.UUCP> throopw%sh...@concert.net (Wayne Throop) writes:
:                  cpu seconds    interpreter features
: interpreter      for fib(20)    save parse   arithmetic types
: -----------      -----------    ----------   ----------------
: tcl                  22.82          n               n
: perl4                 7.74          y               y
: clisp                 3.25          y               y
: gawk                  1.62          y               y
: oosp                  1.17          y               n

Another data point:

perl5                 2.50          y               y

And most of that is still subroutine call overhead--there's little
string conversion or floating-point overhead.

Larry

Larry Wall
Apr 12, 1994, 3:56:12 PM

In article <DAVIS.94A...@pacific.mps.ohio-state.edu> da...@amy.tch.harvard.edu (John E. Davis) writes:
: In article <1994Apr11....@netlabs.com> lw...@netlabs.com (Larry Wall)
: writes:
: > Another data point:
: >
: > perl5 2.50 y y
:
:
: What machine did you run this on? It is kind of meaningless unless these
: tests were performed on the same machine. Perhaps a ratio is a better
: measure.

I should have mentioned that I already normalized it. Both the Perl4
and Perl5 times were slightly faster than that on my machine, which is
some kind of Sparc or other. Admittedly there are many possible
sources of error in comparative benchmarks, but the basic fact is that
Perl 5 can run a Fibonacci about three times faster than Perl 4.

Larry

John Macdonald
Apr 12, 1994, 5:02:59 PM

In article <Co0MG...@chinet.chinet.com> l...@chinet.chinet.com (Leslie Mikesell) writes:
|
|No, I don't want "real" forks. What I want is for perl to let me
|pretend to pipe data through otherwise isolated program sections
|which can also be run and tested as separate entities. If I use
|real forks I might as well keep running smaller programs like
|sed and awk.
|
|The whole reason for switching to perl from a shell script is that
|perl can do it all in one process. However, the way you develop
|the shell script is to write all the sections that need sed, awk,
|tr, etc. that are complex enough to need testing in separate pieces.
|You may (or may not) end up merging the text of these programs into
|the body of the shell script but they still run as independent units.
|That is, nothing you change in one place will affect how another part
|works unless you modify a shell variable that the other one expands and
|you can run it out of context with test data from a file or other source.
|
|I just haven't been able to get the same effect in perl while keeping
|the ability to put it all in one process. Maybe I'm not using packages
|or subroutines right but I just haven't been able to convert several
|perl programs into a single one that acts like each piece sees the output
|data stream of the previous piece without rewriting the whole mess.
|What's worse is that somehow each piece seems to end up needing to know
|something about the state of all the others when combined.

To a certain extent, Leslie is right here.

Perl4 (and Perl5 in its current state, more below) do not provide
this sort of pipeline dataflow concept.

And yet...

Most of the time, if you are trying to do what would have been a
pipeline in a shell script, it comes out easily anyhow.

The "cat file1 file2 ..." at the beginning of the pipeline is
the (possibly automatically provided) enclosing loop:

while (<>) {
    ...
}

A "tr" in the pipeline is just a single statement within the
body of the loop. So, in many cases is a "sed". A "grep"
is just a conditional next statement (that would have been a
continue statement in C). So is "uniq".

There are a lot of the small Unix utilities that just operate
on the stream of lines without affecting the number of lines,
and these just flow right into the straight-line code without
any special need for pipeline dataflow simulation.

Another one that comes out easily but in a different way is
"sort". It gets turned into: "first loop to read the portion
of the pipeline that precedes the sort, putting the info into
an array", a sort function invocation, and "second loop taking
the sorted array and passing it through the rest of the
pipeline".
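As a concrete illustration of that mapping, here is a small sketch (the function name and sample pattern are mine, not John's): the shell pipeline `grep pat | tr a-z A-Z | sort` collapsed into one Perl process.

```perl
#!/usr/bin/perl
# One process, one pass: "grep" becomes a conditional next, "tr" a
# single statement, and "sort" a buffering stage that drains at the end.
sub pipelineish {
    my @buffer;
    for (@_) {                           # the enclosing "cat" loop
        next unless /pat/;               # "grep": a conditional next
        (my $line = $_) =~ tr/a-z/A-Z/;  # "tr": one statement
        push @buffer, $line;             # collect for the "sort" stage
    }
    return sort @buffer;                 # "sort": drain the buffer in order
}

print pipelineish("one pat\n", "two\n");   # prints "ONE PAT"
```

The point is the one John makes above: the line-by-line stages flow into straight-line code, and only the buffering stage (sort) forces the loop to be split.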

The place where some sort of pipeline dataflow idiom within
Perl would be useful occurs when you get to a larger sort of
a problem, perhaps one involving multiple "awk" scripts in
a pipeline. There it would be nice to be able to run a2p on
the awk scripts, and then just join together the results in
a single Perl program. But, by the time your program is
getting to be that large, the cost of having two processes
connected by a pipeline is probably not a huge problem if
neither of the two can be reduced to a simple straight line
sequence that could be treated as an input or an output
subroutine by the other, and it is pretty easy to get a
Perl program to fork into two pipeline connected processes.

So, while pipeline dataflow is a very useful concept, it is
far more important to shells than it is to Perl. (It would
be useful to Perl, but it is essential to shells [unless you
are happy with JCL or some such :-].)

In the further future, there *will* probably be some sort of
better explicit support for the pipeline dataflow concept.
There have been a number of proposals of additions to Perl
that would provide this.

Perl5 already has the ability to run multiple scripts at the
same time from within the same single C mainline program.
This could be manipulated to do pipeline simulation. It
might be possible to wrap it in a Perl library package that
did it fairly invisibly.

Larry has also talked about the possibility of supporting
some sort of light-weight processes or co-routines within
Perl5 - but not in the initial release - based upon a top
level for the Perl interpreter that used that same multiple
script capability. That could be easily built into providing
the pipeline idiom in some easy to use fashion.

There has also been talk of having handle objects, which
would allow reading and/or writing to the object instead of a
file - which would again be a solid basis for providing
pipeline dataflow.
--
That is 27 years ago, or about half an eternity in | John Macdonald
computer years. - Alan Tibbetts | j...@Elegant.COM

Sean Casey
Apr 12, 1994, 5:32:50 PM

pe...@nmti.com (Peter da Silva) writes:

> $from =~ s/.*<([^>]*)>.*/\1/;
> from = subst(from, ".*<([^>]*)>.*", "\\1")
> regsub {.*<([^>]*)>.*} $from {\1} from

>The only difference in these is that the first can't be parsed in a context
>independent way.

The only semantic difference. The important difference as far as I'm
concerned is the first is a lot easier to code and read later. The
best language is the one you wanna use.
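For reference, the Perl form quoted above does run essentially as written. Here it is as a self-contained check, using $1 in place of the older \1 spelling in the replacement, with a sample address taken from the thread's own headers.

```perl
#!/usr/bin/perl
# The substitution quoted above, applied to a sample From: header.
$from = 'Larry Wall <lwall@netlabs.com>';
$from =~ s/.*<([^>]*)>.*/$1/;    # keep only what's inside <...>
print "$from\n";                 # prints "lwall@netlabs.com"
```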

Sean
--
``Wind, waves, etc. are breakdowns in the face of the commitment to
getting from here to there. But they are conditions for sailing -- not

Lloyd Allison
Apr 12, 1994, 8:14:27 PM

da...@pacific.mps.ohio-state.edu ("John E. Davis") writes:

>In article <nagleCn...@netcom.com> na...@netcom.com (John Nagle) writes:
> I took a brief look at this, and it's an interesting little system,

>This is true but in practice it is not as bad as it sounds. For example, this
>morning I needed to calculate a chi-square of some data so I threw together
>the following S-Lang routine:

...

Does this have any relationship to the statistical language S or S-Plus?

Lloyd ALLISON
Central Inductive Agency,
Dept. of Computer Science, Monash University, Clayton, Victoria 3168, AUSTRALIA
tel: 61 3 905 5205 fax: 61 3 905 5146 email: ll...@cs.monash.edu.au
ftp: bruce.cs.monash.edu.au see ~ftp/pub/lloyd/....

ozan s. yigit
Apr 13, 1994, 11:04:58 AM

Andy Newman:

People may also be interested in a language called ICI. It has
C's expression syntax but uses a different data model that is
more suited to the interpretive environment. It has full garbage
collection, proper strings, regular expressions, sets, associative arrays,
error handling (nearly exception handling but not quite) etc...

i second this recommendation. ICI is very straightforward, cleanly
implemented, and could be good extension language, or a base to implement
others.

oz
---
@#?!@**%$&!! is a reserved word | electric: o...@sis.yorku.ca
in cplusplus... -- anon | or [416] 736 2100 x 33976

Leslie Mikesell
Apr 14, 1994, 2:42:48 PM

In article <Co5zs...@elegant.com>, John Macdonald <j...@elegant.com> wrote:
>
>So, while pipeline dataflow is a very useful concept, it is
>far more important to shells than it is to Perl. (It would
>be useful to Perl, but it is essential to shells [unless you
>are happy with JCL or some such :-].)

It would be useful to would-be perl programmers who like to test
things in pieces.

>Perl5 already has the ability to run multiple scripts at the
>same time from within the same single C mainline program.
>This could be manipulated to do pipeline simulation. It
>might be possible to wrap it in a Perl library package that
>did it fairly invisibly.

Aside from the syntactic sugar of making the variable names unique
(ideally with a package-like flavor so you can cheat if you want),
all it needs is more of a wrapper around stdio.

>Larry has also talked about the possibility of supporting
>some sort of light-weight processes or co-routines within
>Perl5 - but not in the initial release - based upon a top
>level for the Perl interpreter that used that same multiple
>script capability. That could be easily built into providing
>the pipeline idiom in some easy to use fashion.

It doesn't need to be all that complex - it would just have to
schedule running the parts according to the pseudo-stdio accesses.
If you read() and the buffer needs to be filled you run the
previous chunk - when it writes, you come back. I think parsing the
new syntax needed to describe the connections would be harder than
threading the runtime.

>There has also been talk of having handle objects, which
>would allow reading and/or writing to the object instead of a
>file - which would again be a solid basis for providing
>pipeline dataflow.

That's probably even better, as long as you can build an object with
read/write to stdin/stdout for standalone use for testing, then
combine them.

Les Mikesell
l...@chinet.com

John Alsop

Apr 15, 1994, 3:42:36 PM
In article <1994Apr8.2...@netlabs.com> lw...@netlabs.com (Larry Wall) writes:

>We just have different ideas of what's easy, I suppose. There's more
>than one direction to crawl out of the Turing Tarpit. Yes, Postscript is
>expressive in one sense, but it's also one of the most user-hostile
>languages known to man, all in the name of making the computer's job a
>little easier.

Not to throw more gas on the fire, but I'd rather write in Postscript than
Perl any day of the week (and I've used both a lot).


--
John Alsop

Sea Change Corporation
6695 Millcreek Drive, Unit 8
Mississauga, Ontario, Canada L5N 5R8
Tel: 905-542-9484 Fax: 905-542-9479
UUCP: ...!uunet!uunet.ca!seachg!jalsop jal...@seachg.com

John Macdonald

Apr 15, 1994, 11:39:35 AM
In article <Co9In...@chinet.chinet.com>,

Leslie Mikesell <l...@chinet.chinet.com> wrote:
|In article <Co5zs...@elegant.com>, John Macdonald <j...@elegant.com> wrote:
|>
|>So, while pipeline dataflow is a very useful concept; it is
|>far more important to shells than it is to Perl. (It would
|>be useful to Perl, but it is essential to shells [unless you
|>are happy with JCL or some such :-].)
|
|It would be useful to would-be perl programmers who like to test
|things in pieces.

I test things in pieces all the time. I write the "pipeline"
left-to-right and stop and execute it at each stage. It is
easy to comment out large chunks for testing. Another simple
sequence is:

# during initialization...
open(TEE1,">/tmp/testit.1");
open(TEE2,">/tmp/testit.2");
open(TEE3,">/tmp/testit.3");
open(TEE4,">/tmp/testit.4");

# and then at various places in the "pipeline"
print TEE1 $_;

which is equivalent to inserting "| tee /tmp/testit.1" into
the shell pipeline that would correspond to what is being done
in the Perl code.

Having handle objects and co-routines should make it very
simple to build a library for doing exactly that. A handle
object's read/write routine would be set up as a simple subroutine
in many cases - whenever it does not need to keep much extra
state or nested functional state hanging around - but would be
set up as a co-routine in the more complicated variations.

Leslie Mikesell

Apr 16, 1994, 1:25:10 PM
In article <CoB4t...@jmm.pickering.elegant.com>,
John Macdonald <j...@jmm.pickering.elegant.com> wrote:

>I test things in pieces all the time. I write the "pipeline"
>left-to-right and stop and execute it at each stage.

The problem is that the sorts of things I do tend to operate
on different sized "chunks" of data, equivalent to pipelines
of:
sed -n "/pattern1/,/pattern2/p" |
sed -e "/pattern3/,/pattern4/s/something/somethingelse/g"
which isn't too bad to emulate in perl until you get 3 or 4
deep, at which point you have state variables all over the place
to keep track of your positions in the address selection matches,
and your conditionals begin to need to know about all of the others.
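For concreteness, the two quoted sed stages can be run over throwaway
input to see the range selection and in-range substitution at work
(the patterns and input lines here are just placeholders):

```shell
# Stage 1 keeps only the pattern1..pattern2 range of lines; stage 2
# substitutes only inside the pattern3..pattern4 range of what survives.
printf 'x\npattern1\npattern3\nsomething\npattern4\npattern2\ny\n' |
sed -n "/pattern1/,/pattern2/p" |
sed -e "/pattern3/,/pattern4/s/something/somethingelse/g"
# prints: pattern1, pattern3, somethingelse, pattern4, pattern2
```

Each stage keeps its own addr1,addr2 state internally, which is exactly
the state that multiplies when the stages are folded into one perl loop.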

>It is
>easy to comment out large chunks for testing.

But it isn't easy when you aren't working on a line at a time. Besides
I don't want to have to work left to right in the program flow. I want
to be able to test things in "conceptual" steps. Maybe there are a
dozen people in the world that can write regexp substitutions and get
them right without testing. I'm not one of them. I want to be able
to redirect a file of test data into the program section that does
each operation to be sure I know what is going on.

> Another simple sequence is:
> # during initialization...
> open(TEE1,">/tmp/testit.1");
> open(TEE2,">/tmp/testit.2");

I suppose I could make each section use explicit tmp files, but
that's so un-unixlike. Since perl seems oriented toward unix-like
programming and combining operations that would otherwise take
several processes it seems strange that it omits the concept of passing
a data stream from one place to another.

Les Mikesell
l...@chinet.com

Leslie Mikesell

Apr 18, 1994, 11:20:56 AM
In article <CoFqJ...@jmm.pickering.elegant.com>,

John Macdonald <j...@jmm.pickering.elegant.com> wrote:
>|The problem is that the sorts of things I do tend to operate
>|on different sized "chunks" of data, equivalent to pipelines
>|of:
>| sed -n "/pattern1/,/pattern2/p" |
>| sed -e "/pattern3/,/pattern4/s/something/somethingelse/g"
>|which isn't too bad to emulate in perl until you get 3 or 4
>|deep, at which point you have state variables all over the place
>|to keep track of your positions in the address selection matches,
>|and your conditionals begin to need to know about all of the others.

>while(<>) {
> next unless $InRange1 = $InRange1 || /pattern1/;
>    $InRange1 = 0 if /pattern2/;
>
> # pipeline point between the two sed's
>
> if( $InRange2 || /pattern3/ ) {
> s/something/somethingelse/g;
> $InRange2 = ! /pattern4/;
> }
>
> # pipeline point after the second sed
>}

I meant pipelines equivalent to several of the above pairs of lines
with start,end ranges that may overlap and patterns that may be
affected by earlier transformations.

>Sed commands, as I said earlier, fit quite well into the mode of
>a series of sequential lines of code inside the loop that scans
>the original input and transforms it into the right set of data
>for the next series of lines that correspond to a pipeline stage
>in the original shell script.

Yes, if you stick to the "easy" sed commands that work on a line
at a time. Do something that uses the holding space, or an
awk program that rearranges things, or pipe through fmt and the
idea of passing $_ through a bunch of perl code doesn't fit
any more. I know perl can do all of the same things, it just
doesn't map into the conceptual operations on a data stream once
you get out of line-at-a-time mode. You have to get used to
the idea of the output side of an operation gathering up logical
units for the next step instead of being able to spew out bits
and pieces and letting the input side of the next operation collect
them.

>The tee's weren't for making the body of the code work on files.
>They were to be used the same way you would insert tee commands
>into a shell pipeline that you were testing, to make a copy of
>the data as it flows past a point in the pipeline to be able to
>check that each stage did what you expected.

Oh. I've never been able to organize things well enough to make this
useful. Besides, I usually want to feed test data into a particular
chunk as well as watching what comes out. Of course perl -d is so
much better than shell style debugging that the only parts that really
need this kind of testing are the regexp transformations.

Maybe it would be possible to do what I want with some wrapper code
in perl itself. The idea would be to make a program that would
run stand-alone but could also be "required" in a larger program
inside some kind of framework that would arrange to pass the data
back and forth to several packages.

Les Mikesell
les@chinet.com

John Macdonald

Apr 17, 1994, 11:18:36 PM
In article <CoD4D...@chinet.chinet.com>,

Leslie Mikesell <l...@chinet.chinet.com> wrote:
|In article <CoB4t...@jmm.pickering.elegant.com>,
|John Macdonald <j...@jmm.pickering.elegant.com> wrote:
|
|>I test things in pieces all the time. I write the "pipeline"
|>left-to-right and stop and execute it at each stage.
|
|The problem is that the sorts of things I do tend to operate
|on different sized "chunks" of data, equivalent to pipelines
|of:
| sed -n "/pattern1/,/pattern2/p" |
| sed -e "/pattern3/,/pattern4/s/something/somethingelse/g"
|which isn't too bad to emulate in perl until you get 3 or 4
|deep, at which point you have state variables all over the place
|to keep track of your positions in the address selection matches,
|and your conditionals begin to need to know about all of the others.

while(<>) {
    next unless $InRange1 = $InRange1 || /pattern1/;
    $InRange1 = 0 if /pattern2/;

    # pipeline point between the two sed's

    if( $InRange2 || /pattern3/ ) {
        s/something/somethingelse/g;
        $InRange2 = ! /pattern4/;
    }

    # pipeline point after the second sed
}

|>It is easy to comment out large chunks for testing.
|
|But it isn't easy when you aren't working on a line at a time. Besides
|I don't want to have to work left to right in the program flow. I want
|to be able to test things in "conceptual" steps. Maybe there are a
|dozen people in the world that can write regexp substitutions and get
|them right without testing. I'm not one of them. I want to be able
|to redirect a file of test data into the program section that does
|each operation to be sure I know what is going on.

Yep. So, after writing and testing the first "sed1-equivalent",
you just comment out those two lines, write the "sed2-equivalent"
and run the program with your test data. Then uncomment the sed1
lines, and run the combination with the test data. And then
write the next stage of the pipeline, possibly commenting out all
of these so far as the first step in testing the next stage...

(By the way, instead of inserting # in each line to comment stuff
out, you can use the lines:

<<'IgnoreForTesting';

IgnoreForTesting

wrapped around the entire section to be commented out temporarily.)

Sed commands, as I said earlier, fit quite well into the mode of
a series of sequential lines of code inside the loop that scans
the original input and transforms it into the right set of data
for the next series of lines that correspond to a pipeline stage
in the original shell script.

|> Another simple sequence is:
|> # during initialization...
|> open(TEE1,">/tmp/testit.1");
|> open(TEE2,">/tmp/testit.2");
|
|I suppose I could make each section use explicit tmp files, but
|that's so un-unixlike. Since perl seems oriented toward unix-like
|programming and combining operations that would otherwise take
|several processes it seems strange that it omits the concept of passing
|a data stream from one place to another.

The tee's weren't for making the body of the code work on files.
They were to be used the same way you would insert tee commands
into a shell pipeline that you were testing, to make a copy of
the data as it flows past a point in the pipeline to be able to
check that each stage did what you expected. This method of
writing and testing shell scripts seems pretty Unix-like to me,
and the corresponding Perl mechanism seems like doing much the
same thing. However, I must say that I've never actually had a
pipeline complicated enough that I needed to make more than two
of these tee's of the data flow through internal points in the
program pipeline.
