I like C++ for a number of reasons, one of which is its multiparadigm
capabilities. It is at heart an OO language, but it is capable of
procedural and generic programming idioms, and there is nothing preventing
you from practicing most of the functional paradigm as well, although
higher-order functions are a bit of a stretch, I believe. I don't like C++
because
it's too easy to shoot yourself in the foot (although a number of tools help
in eliminating this).
I have been using Haskell and Clean lately and find myself liking many of
the functional idioms such as referential transparency. It eliminates many
bug-inducing practices, even if it means you have to structure your code a
little differently. I also like the higher order functions and the pattern
matching. It eliminates so many if statements. The mathematical notation
is aesthetically pleasing as well, but that's purely subjective.
I know LISP supports many of these idioms, but it also supports many of the
imperative idioms as well, which strikes me as being too easy to fall back
on. I like GC, but it's available in many other languages as well.
I often hear (in one of the current threads it was reiterated) that there
are some ideas of expression that are not possible in other languages,
particularly C++, that can be done easily in LISP. What are they and can I
have an example or two?
The *most* advantageous thing I see so far is the REPL method of coding
rather than ECD method (Edit -> Compile -> Debug) which can take so much
time and cause a train of thought to be lost. REPL is so fast to try out
ideas. But I would hardly call this a language feature. It seems more of
an environment feature.
The one thing I do miss is strong typing. In python I found myself catching
run time errors that were type errors (not typing errors :))
Thanks for your kind opinions. I'm really not looking to troll but to
learn. I have a raytracer that I have about 50% done in Clean and I'm
thinking I could convert it to LISP to see what everyone is going on about.
--
Icos
d
two
zero at
icosahedron
dot
org
jag
"Icosahedron" <no...@nowhere.com> wrote in message
news:JYK19.12172$pg2.9...@bgtnsc05-news.ops.worldnet.att.net...
> The one thing I do miss is strong typing. In python I found myself catching
> run time errors that were type errors (not typing errors :))
In CMUCL (and maybe the commercial Lisp implementations as well), the
compiler knows quite a lot about how types propagate. If you declare
some types (e.g. structure slots), it will catch some errors at
compile time, which is quite nice.
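For instance (a small sketch; the POINT structure is invented here),
giving structure slots types lets the compiler complain about a bad
call before you ever run it:

(defstruct point
  (x 0.0 :type single-float)
  (y 0.0 :type single-float))

(defun broken ()
  (make-point :x "one"))   ; CMUCL warns at compile time:
                           ; "one" is not a SINGLE-FLOAT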
Of course, more errors are detected in strongly typed languages. But
I have the impression that most of those come into existence by having
to declare types:-)
Nicolas.
> I often hear (in one of the current threads it was reiterated) that there
> are some ideas of expression that are not possible in other languages,
> particularly C++, that can be done easily in LISP. What are they and can I
> have an example or two?
For example: Compiling custom-generated code at runtime (using
information available only at runtime) can lead to major performance
improvements compared with statically compiled code.
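A small sketch of the idea (MAKE-POLY-EVALUATOR is a name made up for
this post): build a function from coefficients known only at run time,
then hand the lambda to COMPILE:

(defun make-poly-evaluator (coeffs)
  ;; COEFFS lists coefficients from highest degree down to the
  ;; constant term; we build a Horner-form lambda and compile it.
  (let ((form 0))
    (dolist (c coeffs)
      (setq form `(+ ,c (* x ,form))))
    (compile nil `(lambda (x) ,form))))

(funcall (make-poly-evaluator '(1 2 3)) 10)  ; x^2+2x+3 at 10 => 123

The compiled function contains no loop and no coefficient lookups at
all, which is where the speed comes from.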
> Thanks for your kind opinions. I'm really not looking to troll but to
> learn. I have a raytracer that I have about 50% done in Clean and I'm
> thinking I could convert it to LISP to see what everyone is going on about.
An example for a very simple raytracer may be found in Graham's ANSI
CL book.
Nicolas.
P.S.: Of course, your questions have been discussed intensively in
c.l.l in the last month(s). See the archives.
Remember! The more you declare, the more errors the compiler flags
up! Declare Declare Declare! Declare now and get a free asterisk!
--
Have you stopped beating your wife yet?
You can certainly use a C/C++ style when you code in Lisp, but it's
somewhat laborious, e.g. adapting for() and while() idioms to (do ...)
instead of using (loop ...), or not taking advantage of subtle things
like closures and macros to reduce work and/or ease syntax.
From a C/C++ perspective, Lisp's OO seems screwy to say the least-
almost seeming like it's not in fact "real" object orientation. Once
you start to use it, you'll really appreciate how it works without all
the melodrama of header files, member declarations and all the
byzantine rules that go into class hierarchies.
That's not to say I hate C/C++ either. I use both and they suit many
problems- but it helps to be aware of the things they don't do quite
so well.
> I know LISP supports many of these idioms, but it also supports many of the
> imperative idioms as well, which strikes me as being too easy to fall back
> on. I like GC, but it's available in many other languages as well.
I think the idea is to choose whichever approach suits you and the
problem best. Neither approach is automatically better.
> I often hear (in one of the current threads it was reiterated) that there
> are some ideas of expression that are not possible in other languages,
> particularly C++, that can be done easily in LISP. What are they and can I
> have an example or two?
- Closures (I think this is a <huge> feature of CL)
- Adding methods to classes without access to source or recompiling
everything that depends upon the class's declaration. (Which means
no full-system recompile when you change some little thing in the
penultimate base class)
- Rational numeric type
- Integer types that size themselves automatically (without finding &
fixing all instances of the other type, recompiling, then finding
and fixing all the instances you missed the first time- you get the
idea...)
(not that other languages don't implement some of these- they're some
features I've especially appreciated in CL).
> The *most* advantageous thing I see so far is the REPL method of coding
> rather than ECD method (Edit -> Compile -> Debug) which can take so much
> time and cause a train of thought to be lost. REPL is so fast to try out
> ideas. But I would hardly call this a language feature. It seems more of
> an environment feature.
I suppose it is an environment feature- so I imagine it's more
associated with a particular implementation. Even so, if it saves
time & headache, then it's a win regardless of what the feature is a
consequence of.
> The one thing I do miss is strong typing. In python I found myself catching
> run time errors that were type errors (not typing errors :))
If you're sure of types then by all means declare them, you'll then
get what you expect. Lisp is nice because you don't have to declare
everything in advance- which can also save you when suddenly your ints
exceed their declared size or become floats.
For the last 2 years or so I've been re-applying myself to CL after 10
years of C, C++ and Assembly, and I've not found type errors
particularly troublesome. They do show up, but tend to manifest
themselves pretty fast & they're quick to fix. For me, type errors
tend to be along the lines of incorrectly assembling a cons or
supplying a list where something else is needed (or vice versa).
> learn. I have a raytracer that I have about 50% done in Clean and I'm
> thinking I could convert it to LISP to see what everyone is going on
> about.
Its a good idea. If you do, please try to approach Common Lisp on its
own and not force idioms from another language onto it- not to say
that all such idioms are bad, just be thoughtful. The learning curve
is longer that way, but as a consequence I think you'll find the
language easier to use. Once you get it ported, the next hurdle will
be optimization.
Enjoy!
Gregm
> As compared to other languages, functional in particular, I would say
> that the code-as-data property of Lisp is the biggest distinguishing
> feature. It allows easily definable and rich macros which leads to
> a style of programming in which new languages are implemented
> on top of Lisp to solve problems. Ironically, one of the most often
> heard criticisms of Lisp is its syntax, e.g., Lots of Insipid Silly
> Parentheses. Can't please everyone!
>
yes, really I am fine with the syntax for myself, but often I wonder if it
scares off others.
I had before thought up ways to "fudge" the syntax to look more normal, or
at least easier to read. in my mind this would both look better and still
keep the way the syntax works (underlying the reader it is still lists).
my idea is that in cases of printing code it could be possible to have
closures flag how they want to be "displayed", or maybe a special reader
could pick up on the syntax and include this when defining the function.
I just have not implemented this as I lack much motivation for this right
now.
actually there are many things I like in cl better than scheme, so it ends
up being a process of grafting some cl features onto a scheme core.
it is cheesy really, maybe I should learn cl better, though I am not really
up to directly rewriting my scheme system to being a cl one.
actually one thing I sort of dislike about scheme is the hypocrisy of not
defining it as having some features but defining some that would depend on
the others. them being more upfront about some design issues would have
eased writing the interpreter a bit.
--
<cr88192[at]hotmail[dot]com>
<http://bgb1.hypermart.net/>
I'm not sure what a closure is I guess. I thought a closure was a level of
scope. I guess I'm not sure what the difference is between say a LISP
closure and say, a try/catch block or function definition in C++. Please
explain.
> - Adding methods to classes without access to source or recompiling
> everything that depends upon the class's declaration. (Which means
> no full-system recompile when you change some little thing in the
> penultimate base class)
Now this is nice.
> - Rational numeric type
Hm. I've never used them so I don't know if I would need them, but I guess
it's good to have them.
> - Integer types that size themselves automatically (without finding &
> fixing all instances of the other type, recompiling, then finding
> and fixing all the instances you missed the first time- you get the
> idea...)
Yes, I could see how this would be a big benefit.
> (not that other languages don't implement some of these- they're some
> features I've especially appreciated in CL).
> > The *most* advantageous thing I see so far is the REPL method of coding
> > rather than ECD method (Edit -> Compile -> Debug) which can take so much
> > time and cause a train of thought to be lost. REPL is so fast to try out
> > ideas. But I would hardly call this a language feature. It seems more of
> > an environment feature.
>
> I suppose it is an environment feature- so I imagine it's more
> associated with a particular implementation. Even so, if it saves
> time & headache, then it's a win regardless of what the feature is a
> consequence of.
True.
> > The one thing I do miss is strong typing. In python I found myself
> > catching run time errors that were type errors (not typing errors :))
> If you're sure of types then by all means declare them, you'll then
> get what you expect. Lisp is nice because you don't have to declare
> everything in advance- which can also save you when suddenly your ints
> exceed their declared size or become floats.
Is it possible to declare types of parameters to functions or in lets? I
know that generic functions use types to dispatch to the appropriate method,
but that's the only place I've seen them used. I'll have to look again.
> For the last 2 years or so I've been re-applying myself to CL after 10
> years of C, C++ and Assembly, and I've not found type errors
> particularly troublesome. They do show up, but tend to manifest
> themselves pretty fast & they're quick to fix. For me, type errors
> tend to be along the lines of incorrectly assembling a cons or
> supplying a list where something else is needed (or vice versa).
This was my experience with Python, although after a while it was "not
again..." One of the biggest headaches could be that Python doesn't require
declarations as LISP does, so you could misspell a variable and get a new
variable instead. I thought it was cool at first, but after a few mistakes,
I thought it was a bigger headache. At least LISP has declarations, even if
they are untyped.
> > learn. I have a raytracer that I have about 50% done in Clean and I'm
> > thinking I could convert it to LISP to see what everyone is going on
> > about.
>
> Its a good idea. If you do, please try to approach Common Lisp on its
> own and not force idioms from another language onto it- not to say
> that all such idioms are bad, just be thoughtful. The learning curve
> is longer that way, but as a consequence I think you'll find the
> language easier to use. Once you get it ported, the next hurdle will
> be optimization.
>
That's partially what this thread is for: to help me realise what those
idioms are. I don't want to fall back. That was one reason I used a purely
functional language to start with since it forced me to adapt.
Thanks for the input.
Icos
cr88192> John Gilson wrote:
>> As compared to other languages, functional in particular, I would say
>> that the code-as-data property of Lisp is the biggest distinguishing
>> feature. It allows easily definable and rich macros which leads to
>> a style of programming in which new languages are implemented
>> on top of Lisp to solve problems. Ironically, one of the most often
>> heard criticisms of Lisp is its syntax, e.g., Lots of Insipid Silly
>> Parentheses. Can't please everyone!
>>
cr88192> yes, really I am fine with the syntax for myself, but often I
cr88192> wonder if it scares off others.
I agree. I believe that the single biggest mental block other
programmers face when considering lisp is its unconventional syntax.
Unfortunately, the converse does not hold. If one implemented a more
conventional syntax, it would not in itself make programmers flock
around lisp. Dylan was an attempt to take the best of lisp but to set
it in a more conventional style, and they failed. It did not get
very many new programmers and it lost contact with the existing lisp
community.
Some of the frequent discussions here on c.l.l. kind of presume that
choice of programming language is a rational one, but I hold that it
is not.
An issue can drive a programmer away, but the absence of the same
issue is not enough to make that programmer stay on the track.
Not that it is very fair.
------------------------+-----------------------------------------------------
Christian Lynbech | Ericsson Telebit, Skanderborgvej 232, DK-8260 Viby J
Phone: +45 8938 5244 | email: christia...@ted.ericsson.se
Fax: +45 8938 5101 | web: www.ericsson.com
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- pet...@hal.com (Michael A. Petonic)
> The *most* advantageous thing I see so far is the REPL method of coding
> rather than ECD method (Edit -> Compile -> Debug) which can take so much
> time and cause a train of thought to be lost. REPL is so fast to try out
> ideas. But I would hardly call this a language feature. It seems more of
> an environment feature.
Certainly the repl itself is an environment feature, but the language
has to be (and has been) designed with it in mind to make it useful.
For example, suppose the language did not allow functions to be
redefined. What use would a repl be without defun? Suppose the
language allowed functions to be redefined, but all existing
references to the function continued to use the old definition?
If you realised there was a bug in one of your low-level utility
functions, you'd have to re-evaluate everything in all the upper layers
after correcting it.
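A tiny sketch of what that buys you (the function names are invented):

(defun taxed (price) (* price 21/20))   ; 5% tax, exact rational
(defun total (prices) (reduce #'+ (mapcar #'taxed prices)))

(total '(100 200))                      ; => 315
;; oops, the rate should have been 8%; fix one function at the repl:
(defun taxed (price) (* price 27/25))
(total '(100 200))                      ; => 324, TOTAL picks up the
                                        ;    new TAXED automatically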
It's not just functions, either, it's data as well. You don't have to
declare all the methods of a class when you define the class, so when
you need a new protocol or to extend an existing protocol you can just
type in the new methods. If at some time you _do_ need to redefine
the class (e.g. to add or remove a slot) all your existing instances
update automatically.
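For example (a schematic sketch, standard CLOS behavior):

(defclass particle () ((x :initarg :x)))
(defparameter *p* (make-instance 'particle :x 1))

;; later: add a slot; live instances update on next access
(defclass particle () ((x :initarg :x) (vx :initform 0)))
(slot-value *p* 'vx)   ; => 0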
gdb in some sense could be described as a repl for C (^ being XOR in C,
not exponentiation):
(gdb) print 2^3
$1 = 1
It's the absence of these language features, not any deficiency in the
gdb environment, that make it less useful than a real toplevel.
-dan
--
http://ww.telent.net/cliki/ - Link farm for free CL-on-Unix resources
I'm sure others can explain this better, but I'll give it a try.
Closures are not related to try/catch- look for handler-bind among
others for that sort of thing.
(let ((x 1)
      (y 2))
  (defun foo ()
    (* x y)))

... and later in some code somewhere ...

(foo) ;; returns 2

What's happening is that the function foo forms a closure around its
lexical environment at the time of its definition. Here, its lexical
environment is the let, so it drags along x and y, even though their
lexical definitions go out of scope immediately after foo is defined.
The closure will also grab the variables from other surrounding
lexical units too. It's somewhat analogous to a C static variable in
this case. Closures happen with lots of things- and they can extend
from load to runtime, not only runtime itself.
I saw a better example somewhere that used closures for a password
authentication system. In that case, closures were used to hold the
password table, making it inaccessible except to the authentication
test function which had closed over it.
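Roughly like this, I think (a reconstruction from memory, with
invented names):

(let ((passwords (make-hash-table :test #'equal)))
  ;; PASSWORDS is reachable only through these two functions.
  (defun add-user (name password)
    (setf (gethash name passwords) password))
  (defun authenticate (name password)
    (equal (gethash name passwords) password)))

(add-user "icos" "secret")
(authenticate "icos" "secret")   ; => T
(authenticate "icos" "guess")    ; => NIL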
>
> > - Adding methods to classes without access to source or recompiling
> > everything that depends upon the class's declaration. (Which means
> > no full-system recompile when you change some little thing in the
> > penultimate base class)
>
> Now this is nice.
I find it's a feature that doesn't impress much until you run into it.
> > - Rational numeric type
>
> Hm. I've never used them so I don't know if I would need them, but I guess
> it's good to have them.
It's handy for maintaining accuracy in calculations using fractions- no
more floating point rounding problems.
A hidden benefit of Lisp's numeric type regime is that the language
switches types for you when the magnitude of a number exceeds the size
of its type. Thus, integers will naturally use a machine word sized type
for efficiency until values get too large, then Lisp will automatically
switch to a bignum. In C/C++, you'll have to realize the type is
going to be exceeded, then change the involved ints to long long or
whatever, and accept the performance hit for arithmetic not exceeding
the machine word size. In contrast, the arithmetic in Lisp will be
efficient until it's forced to be less efficient.
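Both behaviors in a nutshell (standard CL; TYPE-OF's exact output
varies by implementation):

(+ 1/3 1/6)                           ; => 1/2, exact, no rounding
(type-of most-positive-fixnum)        ; => FIXNUM (machine word sized)
(type-of (1+ most-positive-fixnum))   ; => BIGNUM, promoted silently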
> > > The one thing I do miss is strong typing. In python I found myself
> > > catching run time errors that were type errors (not typing errors :))
>
> > If you're sure of types then by all means declare them, you'll then
> > get what you expect. Lisp is nice because you don't have to declare
> > everything in advance- which can also save you when suddenly your ints
> > exceed their declared size or become floats.
>
> Is it possible to declare types of parameters to functions or in lets? I
> know that generic functions use types to dispatch to the appropriate method,
> but that's the only place I've seen them used. I'll have to look
> again.
Yes, declare works for variables in defun parameters, lets, etc. If
I recall properly, a defmethod specialization ends up acting as a
declare from a compiler standpoint, so you don't have to be redundant
there.
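For instance (a small sketch):

;; declarations on parameters...
(defun dot2 (ax ay bx by)
  (declare (type single-float ax ay bx by))
  (+ (* ax bx) (* ay by)))

;; ...and on let bindings:
(let ((n 1000))
  (declare (type fixnum n))
  (* n n))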
> > For the last 2 years or so I've been re-applying myself to CL after 10
> > years of C, C++ and Assembly, and I've not found type errors
> > particularly troublesome. They do show up, but tend to manifest
> > themselves pretty fast & they're quick to fix. For me, type errors
> > tend to be along the lines of incorrectly assembling a cons or
> > supplying a list where something else is needed (or vice versa).
>
> This was my experience with Python, although after a while it was "not
> again..." One of the biggest headaches could be that Python doesn't require
> declarations as LISP does so you could mispell a variable and get a new
> variable instead. I thought it was cool at first, but after a few mistakes,
> I thought it was a bigger headache. At least LISP has declarations, even if
> they are untyped.
Typos are certainly always going to be problematic. Generally, if you
mistype a variable such that it does not match one that's already
defined, Lisp will create it as a global and warn you. Using an
environment with a compiler is helpful, as it will give you the
warning. (I forget if clisp's byte compiler will do this but I
imagine it will...)
You can certainly assign a type in a CL declaration. I tend to not
use declares often, but I think you'll end up having to wait till
runtime to find the type mismatches.
> > Its a good idea. If you do, please try to approach Common Lisp on its
> > own and not force idioms from another language onto it- not to say
> > that all such idioms are bad, just be thoughtful. The learning curve
> > is longer that way, but as a consequence I think you'll find the
> > language easier to use. Once you get it ported, the next hurdle will
> > be optimization.
> >
> That's partially what this thread is for: to help me realise what those
> idioms are. I don't want to fall back. That was one reason I used a purely
> functional language to start with since it forced me to adapt.
Adapting is good... A purely functional approach is no doubt
instructive, but it's only one of the different ways you can use Lisp.
Gregm
Excerpts from real C header files:
#define FromHex(n) (((n) >= 'A') ? ((n) + 10 - 'A') : ((n) - '0'))
#define StreamFromFOURCC(fcc) ((WORD) ((FromHex(LOBYTE(LOWORD(fcc))) << 4) + (FromHex(HIBYTE(LOWORD(fcc))))))
#define ToHex(n) ((BYTE) (((n) > 9) ? ((n) - 10 + 'A') : ((n) + '0')))
#define MAKEAVICKID(tcc,stream) MAKELONG((ToHex((stream) & 0x0f) << 8) | (ToHex(((stream) & 0xf0) >> 4)),tcc)
#define va_arg(AP, TYPE) \
(AP = (__gnuc_va_list) ((char *) (AP) + __va_rounded_size (TYPE)), \
*((TYPE *) (void *) ((char *) (AP) \
- ((sizeof (TYPE) < __va_rounded_size (char) \
? sizeof (TYPE) : __va_rounded_size (TYPE))))))
#define PLAUSIBLE_BLOCK_START_P(addr, offset) \
((*((format_word *) \
(((char *) (addr)) + ((offset) - (sizeof (format_word)))))) == \
((BYTE_OFFSET_TO_OFFSET_WORD(offset))))
jag
"Joe Marshall" <prunes...@attbi.com> wrote in message
news:Wqb29.248019$uw.1...@rwcrnsc51.ops.asp.att.net...
Check out pp. 151-158 of Paul Graham's "ANSI Common Lisp" for concise,
efficient, and well-described Common Lisp code for a ray tracer. If
you lack access to the book, http://www.paulgraham.com/lib/acl2.lisp
has the unadorned code; see the functions SQ through RAY-TEST.
>
>> > - Closures (I think this is a <huge> feature of CL)
>>
>> I'm not sure what a closure is I guess. I thought a closure was a level
>> of
>> scope. I guess I'm not sure what the difference is between say a LISP
>> closure and say, a try/catch block or function definition in C++. Please
>> explain.
>
> I'm sure others can explain this better, but I'll give it a try.
>
> Closures are not related to try/catch- look for handler-bind among
> others for that sort of thing.
>
> (let ((x 1)
>       (y 2))
>
>   (defun foo ()
>     (* x y)))
>
>
> ... and later in some code somewhere ..
>
> (foo) ;; returns 2
>
> What's happening is that the function foo forms a closure around its
> lexical environment at the time of its definition. Here, its lexical
> environment is the let, so it drags along x and y, even though their
> lexical definitions go out of scope immediately after foo is defined.
> The closure will also grab the variables from other surrounding
> lexical units too. It's somewhat analogous to a C static variable in
> this case. Closures happen with lots of things- and they can extend
> from load to runtime, not only runtime itself.
Just as a sidenote - it would not be unusual that the exact above code
would not compile to a lexical closure but to a function that simply
returns the constant-folded result "2". Semantically there is no difference
between maintaining a lexical environment here or doing a constant folding.
(let ((x 1)
      (y 2))
  (defun set-y (a) (setf y a))
  (defun foo ()
    (* x y)))
In this definition we access the binding of y in the function set-y,
therefore it is not possible to do the above optimization.
Now (foo) still returns 2 until we change the binding of y to another value:
(set-y 7)
(foo) => 7
The compiler is still allowed to optimize the x and the multiplication away
so that foo is simply a function that returns the actual value that is
bound to the variable y.
These are actually implementation issues that are not necessary to understand
the semantics - but I think it is often good to know how certain constructs
are realized internally.
ciao,
Jochen
--
Biep
Reply via any name whatsoever at the main domain corresponding to
http://www.biep.org
"Joe Marshall" <prunes...@attbi.com> wrote in message
news:Wqb29.248019$uw.1...@rwcrnsc51.ops.asp.att.net...
>
* Joe Marshall
| Excerpts from real C header files:
These phenomena are actually connected. In the Algol family, parentheses
signal pain. In the Lisp family, they signal comfort. Since most people are
highly emotional believers, even programmers, it is very hard for them to
relinquish their beliefs in their associations of parentheses with pain and
suffering. This has nothing to do with aesthetics, design rationales, ease
of use, the value of the code-as-data paradigm, etc. This has everything to
do with the deeply ingrained "knowledge" that you do not need parentheses
unless you want to transcend the (overly simple) rules of the language. The
psychology of programming has taught programmers in the Algol family that
parentheses are to be minimized and that any large number of parentheses is a
good sign that the code, or worse, the thinking behind it, is too complex.
Because of such fundamentally ridiculous syntactic optimizations as the
associativity and precedence of infix operators, which favors the writing of
simplistic code and expressions so much that normal complexity is wildly
exaggerated, such that out-of-the-ordinary complexity becomes intractable, we
have trained a large number of programmers to abhor complex expressions that
would be within reach if they had used parentheses with every operator. The
result is that when they have to do what should have been normal, they
associate it with the painful experience of getting it right and remembering
when they have to do it because the language does not do what they want.
Lisp is like telling an Algol-family programmer to write (x + (y * (z ^ t)))
for x+y*z^t, so it would not be different from (((x + y) * z) ^ t). Chances
are an Algol-family programmer would not only balk at this because it goes
against the grain of the language, he would get it wrong because he does not
actually know the rules of his language. Back in the Dark Ages, when I gave
courses in C, one of the most common problems my students experienced was
getting the operator precedences right. I did not have the presence of mind
to teach them never to rely on the stupid rules of the language but instead
write it out explicitly and acquire that level of control that C programmers
are so fond of at the machine level, but apparently not at the language
level. For some reason that I cannot recount, I thought these stupid rules
were worth memorizing and I still help people with them without looking it
up. Today, if anybody let me near a school of newbies¹ who wanted to learn
C, I would tell them to save themselves a lot of pain by thinking about what
_they_ want and not about how to "optimize" their code to reduce the number
of parentheses.
Showing people tragically painful C code and arguing that there are just as
many parentheses as in Lisp is the wrong psychological move, because you do
_not_ want to carry the "tragically painful" part over from C. The key is
to make C programmers understand that the ability to _omit_ some parentheses
is the root cause of their painful experiences with the remaining parentheses
and that the right way to write safe C is never to omit any parentheses.
Then you will see just how many parentheses C actually _infers_ for you and
you may get to appreciate the _smaller_ number of parentheses in Lisp.
-------
¹ That really should be the collective form for "newbie".
--
Erik Naggum, Oslo, Norway ***** One manslaughter is another man's laughter.
Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
As I program in CL and now have occasion to try to understand and utilize
some code written in C++, I have come to the simple conclusion that it is far
easier to READ what a programmer has written in Lisp. Not just what has been
written, but what is intended and how it is used.
I think there are a few reasons for this:
Lisp has little syntactic sugar and few type declarations, hence its
meaning-to-verbosity ratio is much higher.
In a language like C++ the programmers are seemingly compelled to implement OO; this seems
to have the effect that the real code doing the work (tying the framework together) is
behind the scenes and is not readily apparent. This is opposed to CLOS where the emphasis
is more on methods, making the code doing the work the center of focus.
I think it is very important to be able to read a program one has not written
and quickly understand how it works, how to use it and how to extend it. Lisp
wins out in this area hands down.
Wade
I'd add the additional comment that merely requiring full parentheses
wouldn't help. It's also essential to punt infix notation. Consider
Erik's example, "x+y*z^t". Read with the usual mathematical precedence
(^ as exponentiation, as Erik intended), this is (x + (y * (z ^ t))).
But a very key way to help parse complex expressions in Lisp is by
helpful indenting. Suppose we make Erik's sample a little more
complex:
(((x^w) * p) + (a ^ b))
Now things start to get hairy. How should this be indented for an
Algol like language? Usually something like:
(((x^w) * p)
 + (a^b))
Now see what I've done: I've used spaces to help within a line. But
if things get much hairier than this, spaces will be of no avail. And
infix will just keep biting--so I'll drop the "helpful" spaces within
lines. What if things get more complex now?
(((x^w)*(p^(t+2)))
 +((a+x)^(b+x)))
Uh oh. Need another level of spacing. Indeed, the expression before
the principal + should get broken apart. Eek!
(((x^w)
  *(p^(t+2)))
 +((a+x)^(b+x)))
I start finding this *much* harder to read. How about prefix?
(+ (* (^ x w) (^ p (+ t 2)))
   (^ (+ a x) (+ b x)))
Oh, let's be more helpful:
(+ (* (^ x w)
      (^ p (+ t 2)))
   (^ (+ a x) (+ b x)))
Now *that's* easy to read.
Lisp syntax is a historical accident, caused by someone realizing that
they could get an interpreter for (almost) free by implementing eval and
apply, and using the reader, designed for lists, on programs represented
as lists.
Attempts have been made to give Lisp a more Algol-like syntax in the
past, at least three times (Lisp 2.0, CGOL, Dylan). None of these has
exactly caught on.
It's almost as if Lisp has made up its mind about this, and is telling
us that it doesn't like Algol syntax.
Le Hibou
--
Dalinian: Lisp. Java. Which one sounds sexier?
RevAaron: Definitely Lisp. Lisp conjures up images of hippy coders, drugs,
sex, and rock & roll. Late nights at Berkeley, coding in Lisp fueled by LSD.
Java evokes a vision of a stereotypical nerd, with no life or social skills.
> Lisp syntax is a historical accident, caused by someone realizing
> that they could get an interpreter for (almost) free by implementing
> eval and apply, and using the reader, designed for lists, on
> programs represented as lists.
>
> Attempts have been made to give Lisp a more Algol-like syntax in the
> past, at least three times (Lisp 2.0, CGOL, Dylan). None of these
> has exactly caught on.
>
> It's almost as if Lisp has made up its mind about this, and is
> telling us that it doesn't like Algol syntax.
Actually, similar statements might be made about other syntactic aspects
of Lisp, too, not just the much-discussed parentheses.
Somewhere back along the way, it was beaten into me to use the full names
of variables and functions. That is, to make functions called things like
make-directory, not mk-drctry or mkdir. The rule of thumb was expressed as
"There are many abbreviations, but only one expanded form."
One might almost describe Lisp's syntax as a rejection of the arbitrary.
That is, abbreviating make-directory as mkdir is arbitrary. Surely mkdir
is not uniquely determined. It might have just as well been makdir or
makedir. How on earth could anyone rationally argue that mkdir was a good
core around which to rally passion? And yet make-directory could be,
because make-directory is a celebration of intelligibility.
Webster, in creation of the dictionary, realized that what made proper
spelling "good" was not the choice of spelling but the decision to make a
choice in spelling. You see this decision reiterated in modern spell
checkers which often reject alternate spellings of words not because the
makers don't know there are alternate spellings but because if you accept
both honor and honour, you allow for the possibility that the user will
inadvertently use both in the same document, and will be internally
inconsistent, and in turn will make things like readability and search
hard.
There is likewise a case to be made in bracketing. People who don't
think hard about it complain about parens. But they are partly saying
that they like to create parenthetical firewalls, almost like the
Interlisp super-parens which were the [..] characters. When one does
{ ... ( ... } one can tell not only that a paren is missing but one
has SOME information about whether it is the inner one or outer one.
I think that's what people are really saying when they say they like
Algol syntax. Nothing really wrong with this, but it works against
Lisp's language goals of program being data, and of having simple
traversal operations for treewalking code. Other languages don't even
have such goals, so don't see syntax that interferes with this as an
impediment. If they did, there might be bigger fights not over
whether to have {...} or () but about which constructs called for
which notation. Right now the decision is intensely arbitrary. If
you compare one "algol-like" language with another, you see none of
them agree on where these bounds are drawn, just as none agree on
which vowels or consonants to omit to make a good function or variable
name. I don't know Lisp 2.0, but certainly CGOL and Dylan did not
agree on what bracketing constructs are used where.
So again, one MIGHT say that Lisp is rejecting the arbitrary.
But I don't want to put it quite that way.
Lisp is also arbitrary. The choice of hyphen instead of underscore or
space to connect multiple words is arbitrary. Arbitrary, yes, but
consequential. There are issues of token breaks to consider, which is
why we don't use space, and issues of keyboard shifting, which is why
we prefer hyphen. (Historically, _ was really badly placed on some
consoles, and even today it's harder to get to.)
So rather than a rejection of the arbitrary, I want to say a rejection
of the gratuitous. That is, one DOES need to decide between
make-directory and make_directory, and one DOES need to decide between
() and [] and {}. Lisp routinely makes arbitrary choices where arbitrary
choices are forced, but Lisp does NOT make gratuitous arbitrary choices
if it can help it.
Lisp does not leap to make parsed constructs where non-parsed constructs
are enough. This keeps the language simple without losing power.
This even explains LOOP and FORMAT. LOOP and FORMAT are arbitrary syntaxes
that have been created after extensive work has shown that the simpler,
more regular formats are so over-verbose that it's worth a little parser
in order to keep these specific facilities readable. The more regular
constructs like DO and PRINT did not generalize in a way that stood the
test of time. So Lisp grudgingly gives ground where it has to. In other
words, it does not see these choices as "gratuitous". They were each hard
fought and the advantages outweighed the disadvantages in the eyes of the
designers of numerous implementations, eventually resulting in a standard.
There is a sometimes-fight over infix numerics. That is, over allowing
a+b*c instead of (+ a (* b c)). If Lisp were used more intensely for numerics,
I suspect it's possible some escape notation to infix would be more likely,
as happened in Interlisp (through DWIM) and LispM through some # extension
that slips my mind, maybe #<Escape>...<Escape> or some such. But the point
is that again this was/is an accommodation to practicality.
Contrast this with the decision of other languages to DEFAULT to using
begin x; y; end
or
begin x; y end
or
{ x; y; }
or whatever even though
(begin x y)
is more concise.
We have, in the Lisp family, stuck to some words (progn instead of
begin, cdr instead of pair-left, etc.) out of tradition. This, too,
is a different rejection of the gratuitous: a rejection of gratuitous
change. An accommodation of history. One might criticize Lisp for
having cdr and say it was two-faced to complain about mkdir. But the
point is that cdr is not a naming convention that we make future
functions from. We simply retain cdr out of respect for long-standing
written works and old code. We don't retain the notion that functions
should be named after assembly code names from old operating systems,
nor that function names should be hard to read.
I guess all of the above was really to both agree and disagree with
Donald's conclusion. To the extent he's saying "maybe there's a
reason" for what we do, I think he's right. To the extent he's aaying
we as a community have rejected the other approach, I think he's
wrong. I think we have not so much rejected those approaches as have
accepted the one we have. That is, I would describe the choice of
long names like make-directory as "convergent" and the choice of (f x
y) as "convergent" because they are natural results of a proper
solution to the prisoner's dilemma. I would describe results like
mkdir and { x; y; } as "divergent" because no rational person I know,
trying to solve the prisoner's dilemma in the absence of favoritism
for a specific dialect, would pick such a syntax. We don't do what we
do because we don't like those other syntaxes, but because we find those
other syntaxes arbitrary and we would
have battles forever about whether the choices required to make those other
syntaxes were good ones; rather, we do what we do because we do like the
simplicity of design involved in our choices, and the lack of internal
bickering that occurs in a community that understands that these decisions
were made for good reason. The fact that a community on the outside hasn't
taken the trouble to understand our reasons is not really relevant to us.
That's my take anyway, for what it's worth.
> I am certainly not a troll. I have just started (re) learning LISP and
> wonder why it is a better programming language?
>
> I like C++ for a number of reasons, one of which is its multiparadigm
> capabilities. It is at heart an OO language, but it is capable of
Lisp is also multiparadigm.
> The one thing I do miss is strong typing. In python I found myself catching
See the paper "Accelerating Hindsight", which is part of Kent Pitman's
column "Parenthetically Speaking", for a discussion of why dynamic typing
may be useful. Kent's site is:
http://www.world.std.com/~pitman
Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
> On Wed, 31 Jul 2002 05:54:49 GMT, "Icosahedron" <no...@nowhere.com> wrote:
>
> > The one thing I do miss is strong typing. In python I found
> > myself catching
>
> See the paper "Accelerating Hindsight", which is part of Kent Pitman's
> column "Parenthetically Speaking", for a discussion of why dynamic typing
> may be useful. Kent's site is:
>
> http://www.world.std.com/~pitman
Please use this URL instead:
I don't know what www.world.std.com is, but the "www." is unnecessary AND:
The real problem with the URL cite, though, is this (which comment is
relevant to more than just my own personal site and should be taken to heart
for anyone who cites ~foo rather than ~foo/ as a URL reference):
~ has no special status in a URL. merging ~pitman with foo/ should result in
foo/ not ~pitman/foo/ ... My pages DO merge against the base URL and so rely
on the thing you have visited to be ~pitman/ rather than ~pitman.
Many (most?) HTTP servers, if you go to ~foo, will take you to ~foo/ and
lately world.std.com seems to have joined that bandwagon of those that not
only do this but do it in a way that rewrites the URL to have the trailing
slash so merging works. In the past, it didn't, and so I'd get bug reports
from people who said their browser would request subdir urls and fail because
the proper rules of merging led to lossage if you started with ~pitman.
And even though the present behavior (rewriting ~pitman to ~pitman/ in the
HTTP server) causes the least breakage, there might be someone some day
who comes along and says it is not strictly correct behavior, and wants
to take out the
rewrite.
Often I refer people to:
http://www.world.std.com/~pitman/pitman.html
which is a link to the same page, but which has the nice property that its last
few characters don't look omittable in the same way as something ending in
/index.html might look. Heh.
> I don't know Lisp 2.0, but certainly CGOL and Dylan did not
> agree on what bracketing constructs are used where.
Lisp 2 was a project started at the Systems Development Corporation in
the latter half of the 1960s. Among its many novel features was the
ability to enter expressions in the M-language.
for example (+ 2 3) could be:
(+ 2 3)
+ 2 3
2 + 3
+(2 3)
+(2, 3)
...
the basic rule was that lists were lists, either defined with parenthesis
or on a new line without parenthesis, commas were optional, functions
appeared in either the first or second spot, and new lines with a larger
indentation were considered as part of the parent list.
one could type (begin (print "input: ") (read))
as:
begin
  print "input: "
  read
it is a little loose structured but I figured it would work well.
underlying it all it is just read as normal lists.
exact indentation was unimportant, only that for sublists it is more than
for the parent.
a little bit of a hack though would be that lone objects in functional
position are considered as evaluating to themselves, as such:
1 and (1) are equivalent, and functions would be detected by the reader as
different from other symbols (or eval could recognize to swap the first and
second arguments). also that it would have to recognize cases like (+ (2
3)), and correct appropriatly.
defun fact(x)
  if (zero? x) 1
    x * fact(x - 1)
all this in the name of readability...
i thought it was at stanford under paul abrahams (vague memories from
jean sammet's book on programming languages)
hs
--
don't use malice as an explanation when stupidity suffices
this is a (slightly) unfair set of examples. the need for most of
them comes from the fact that macros are strictly text replacement.
the rest are due to the fact that many coders seem to lack the
confidence of keeping operator precedence right. the above example
could easily be written
# define fromhex(n) ((n)>='A' ? (n)+10-'A' : (n)-'0')
or better (and less problematic) with an inline function
inline int fromhex(char n) { return n>='A' ? n+10-'A' : n-'0'; }
(i think inline is part of C99). even with using your sample: when
using the macro, you really don't have to worry about parentheses
> ...
`Unfair' to what or whom?
> the need for nost of them come from the fact that macros
> are strictly text replacement. the rest are due to the
> fact that many coders seem to lack the confidence of keeping
> operator precedence right.
How odd! I wouldn't have thought that people who disliked
parentheses so much would immediately turn to a fully parenthesized
notation the instant that operator precedence rules stepped
beyond the simple `multiply and divide *before* add and subtract'.
> the above example could easily be written
>
> # define fromhex(n) ((n)>='A' ? (n)+10-'A' : (n)-'0')
Indeed. I wonder why it wasn't.
Because people who have experienced macros in C learn to parenthesize
defensively... they may hate it, but they know it's a good practice.
Look at how much of your check goes into your retirement plan.
You only have to be bitten once by the bug in
#define product(a,b) a*b
before you decide to put parens *everywhere*, whether you think you
need them or not.
I do think the guy in the example may have been having a fit of manic
hysteria when he parenthesized a few bits, though.
joelh
> As compared to other languages, functional in particular, I would say
> that the code-as-data property of Lisp is the biggest distinguishing
> feature.
Code is data in all languages. But there are some where you have the
compiler at runtime.
> It allows easily definable and rich macros which leads to
> a style of programming in which new languages are implemented
> on top of Lisp to solve problems. Ironically, one of the most often
> heard criticisms of Lisp is its syntax, e.g., Lots of Insipid Silly
> Parentheses. Can't please everyone!
Actually, one can have macros in infix syntax as well. See:
http://www.ai.mit.edu/~jrb/Projects/dexprs.htm
Disclaimer: I happen to like infix syntax. YMMV. The power of Lisp
is deeper than its syntax.
Andreas
--
"In my eyes it is never a crime to steal knowledge. It is a good
theft. The pirate of knowledge is a good pirate."
(Michel Serres)
Bzzzzt! Wrong.
--
Erik Naggum, Oslo, Norway
Act from reason, and failure makes you rethink and study harder.
> around lisp. Dylan was an attempt to take the best of lisp but to set
> it in a more conventional style, and they failed. It did not get
> very many new programmers and it lost contact with the existing lisp
> community.
I am one of the people who took over maintenance of the Gwydion Dylan
project after funding for CMU stopped. Back then, we were mainly
packaging the compiler for binary distribution. It took me some years
to figure out how compilers work, and it was only by this that I
noticed how close the relationship between Common Lisp and Dylan
really is. Nobody told me back then :).
And while the Dylan community is small, it is constantly growing, and
work on the compiler is constantly progressing. We're actually just
161 failed tests in the test suite away from our first stable release
in years.
And things will get more funky. We do realize that there are areas
where Common Lisp is still more powerful than our Dylan
implementation. These are mainly procedural macros, availability of
the compiler at run time and a MOP.
At the Dylan Hackers Conference last week we hacked an interpreter for
the low-level intermediate code of the compiler, giving us a Dylan
interpreter that uses the infrastructure of the compiler. This should
make writing procedural macro support a breeze. Compilation at
runtime can be done using the trick from Goo: just dlopen a shared
library. This can be replaced by something funkier once a native
backend is available.
I still need to read the MOP book.
We do have some interesting code available: thanks to Functional
Objects, the vendor of the commercial Dylan compiler, we have DUIM
(our CLIM equivalent) and Deuce (our Zwei equivalent); there's a web
server, an XML parser, a Baker-style parser macro system, and a couple
of other nifty libraries.
And we compete with C in terms of performance, beating CMUCL, at least
for some benchmarks.
I wouldn't call that a failure at all. Maybe a slow start.
> Some of the frequent discussions here on c.l.l. kind of presume that
> choice of programming language is a rational one, but I hold that it
> is not.
No it ain't. But what I like to see is peaceful coexistence on a
powerful platform, the same way Zetalisp and Common Lisp coexisted
on Genera.
> | Code is data in all languages.
> Bzzzzt! Wrong.
If the code is not data, then how do you feed it into the
compiler/interpreter?
> You only have to be bitten once by the bug in
> #define product(a,b) a*b
> before you decide to put parens everywhere, whether you think you
> need them or not.
And how is this not a flaw in the language design (bad enough that inline
directives were added to the latest standard)? More importantly, why does
one group think that using superfluous parens is just peachy-keeno in one
venue, but using them in a useful and beneficial way is irritating in
another?
The more I hear from people who don't like Common Lisp, the more irrational
they seem to me.
faa
[unnecessary parentheses]
> the rest are due to the fact that many coders seem to lack the
> confidence of keeping operator precedence right.
This is another argument for prefix syntax in my book. Not having to
remember that
flag & mask == 1
probably isn't what I meant is one more brain cell I can use for
something interesting.
Cheers,
M.
--
Premature optimization is the root of all evil.
-- Donald E. Knuth, Structured Programming with goto Statements
The term "code as data" means that the same structures used for creating
and manipulating data in a language's programs, in Lisp's case the list, are
also used to express code in the language. More generally, it's reifying
a language so that code can be programmatically examined or created
and executed using a defined interface, either declaratively (such as
pattern
matching and template substitution) or procedurally (such as arbitrary
computations to construct code).
Other languages, even "syntax rich" languages, can, and do, certainly
provide the same. Dylan's rule-based macro system comes to mind.
There has been work in this vein in Java and C++. Would I equate
them in expressive power, ease of use, coder popularity, and maturity
with Lisp macros? No.
>But there are some where you have the compiler at runtime.
Are we talking about generating code by creating it as strings,
let's say, writing it to a file, and then compiling and dynamically
linking it into a running process? This would be sort of a
poor man's macro system :)
The issue, at least for procedural macros, is having the evaluator
available at compile-time. Lisp macros are typically expanded at
compile-time. What you need, in general, to expand Lisp macros
is a complete runtime at compile-time for evaluation. After all
macroexpansion in Lisp is general function evaluation.
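A toy example of that: the macro body below is ordinary code run at
expansion time, so the sum happens before the compiler ever sees it.

(defmacro compile-time-sum (&rest numbers)
  (reduce #'+ numbers))        ; full language available here

(macroexpand '(compile-time-sum 1 2 3))   ; => 6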
Dylan macros are expanded at compile-time and do not require
runtime functionality because they do no evaluation.
Scheme also has a pattern-matching based macro system that
is expanded at compile-time and, I believe, requires no evaluation.
> Actually, one can have macros in infix syntax as well. See:
>
> http://www.ai.mit.edu/~jrb/Projects/dexprs.htm
>
> Disclaimer: I happen to like infix syntax. YMMV. The power of Lisp
> is deeper than it's syntax.
>
> Andreas
The power of Lisp is certainly deeper than its syntax. Note that
my words were "distinguishing feature." But other languages,
functional ones in particular, offer higher-order functions, lexical
closures, delayed evaluation, and currying. Other languages provide
support for object-oriented programming. Lisp macros are not truly
equalled elsewhere. Its simple, regular prefix syntax and code-as-data
approach is a big reason why.
Disclaimer: To paraphrase Alan Perlis from the foreword to SICP, "I toast
the Lisp programmer who pens his thoughts within nests of parentheses."
jag
> Erik Naggum <er...@naggum.no> writes:
> * Andreas Bogk
> > | Code is data in all languages.
> > Bzzzzt! Wrong.
>
> If the code is not data, then how do you feed it into the
> compiler/interpreter?
A file of C code is a bunch of characters with no obvious
syntactic structure. The code goes through a lexer and a parser,
generating the ``data'', the abstract syntax tree. A file of
Lisp code, on the other hand, contains the printed representation
of this syntax tree already (more or less). When you look at
Lisp code, you see the abstract syntax tree directly.
CL-USER 15 > (defparameter *code* (read-from-string "(mapcar #'1+ '(1 2))"))
*CODE*
CL-USER 16 > (pprint *code*)
(MAPCAR #'1+ '(1 2))
CL-USER 17 > (eval *code*)
(2 3)
CL-USER 18 > (type-of *code*)
CONS
The data structure contained in *code* /is/ the code. And you
can manipulate it directly.
CL-USER 19 > (eval (cons 'mapc (cdr *code*)))
(1 2)
Regards,
--
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.
PGP key ID #xC66D6E6F
A closure is a function-object (or ``functor'') generated on-the-fly.
It has a function, and associated data. That associated data consists of
lexical variables which are visible at the point where the closure is created,
and the value bindings which exist at the time of the closure's creation.
In Lisp, the (function ...) operator creates a closure out of a lambda
expression. The lambda expression becomes the function, and the
surrounding lexical scope provides the data.
(let ((counter 0))
  (function (lambda () (incf counter))))
Here, the functor gets a function that takes no arguments. It has
the side effect of incrementing a counter, and returns the new
value. The counter is simply the captured variable from the
surrounding let block. The shorthand for (function X) is #'X,
so you can just write #'(lambda () (incf counter)). And there
is a (lambda ...) macro which will write that for you, so you
can omit the #' notation:
(let ((counter 0))
  (lambda () (incf counter)))
The closest thing resembling a closure in C++ is a class object which overloads
operator (). But the data members of the class object are established through
an explicit class declaration, and acquire bindings through an explicit
constructor call. So much setup and bookkeeping is required to make them work,
that the whole idea of function objects is murdered and buried six feet under
ground.
In Lisp, it is commonplace to write macros which generate closures
behind the scenes. The user of the macro is not aware that a closure
is used as part of the implementation of the substrate.
For example, when you define a class, and specify an initialization
form for a slot, that initialization form is retained as a closure.
(let ((myclass-instance-count 0))
  (defclass myclass ()
    ((mynumber :initform (incf myclass-instance-count)))))
So here, each time you make an instance of the class by invoking
(make-instance 'myclass), a counter is incremented, and used to initialize
the ``mynumber'' slot. The defclass macro will take the argument of
:initform and spin it into a closure, which is how the
myclass-instance-count lexical variable gets captured.
This kind of use of closures is impractical if you don't have a decent macro
system, and a decent syntax for declaring closures. Think of what it would take
to do in C++ in the same way. You would have to write a whole other class to
hold the equivalent of myclass-instance-count. That class would have to have
a constructor which sets the count to zero. You would have to instantiate
that class and pass a reference to the constructor of myclass, which
would have to invoke the () operator to obtain the count.
class counter {
    int instance_count; // Oops, limited range! Overflows after INT_MAX.
public:
    counter() : instance_count (0) {}
    int operator ()() { return ++instance_count; }
};

class myclass {
    int my_number;
public:
    myclass(counter &c) : my_number(c()) {}
};
That's a whole lot of work, so of course, you would probably just revert to
making static class member which represents the count. And note how the
abstraction is completely broken. We have had to set up the mechanism
explicitly, and explicitly invoke the functor.
And this is an easy case, because the call c() only reaches one kind of
functor: it is a statically determined. But there often arises the need to
call different closures from the same point in the program. In Lisp, all you
need is for the target closure to have the right number of parameters and
handle the argument types.
In C++, the static typing gets in your way: you now have to define a base class
of functors, and use a virtual () operator which is called through the base
class reference. That call can only be made to objects which derive from
the base class. In Lisp, the data portion of the closure is not
a distinct object, it has no programmer-visible type or class. Only the
function's parameter list determines the compatibility of a call.
Moreover, there is no difference between function calls and closure calls,
and for every function, you can obtain a function object:
;; add 1 to every element, using standard 1+ function.
(mapcar (function 1+) '(1 2 3))
;; add 2 to every element using closure.
(mapcar (lambda (x) (+ 2 x)) '(1 2 3))
So you really can't discuss features in isolation. Things work together
in Lisp.
You mean all code is made up of bits, and all data is made up of bits, so
therefore code is data? I regret that I took your bait. Bummer.
[...]
> there often arises the need to call different closures from the same
> point in the program. In Lisp, all you need is for the target
> closure to have the right number of parameters and handle the
> argument types.
In C++, if the functor type is handled as a template type in, say,
the context of a template function, the same freedom applies.
> In C++, the static typing gets in your way: you now have to define a
> base class of functors, and use a virtual () operator which is
> called through the base class reference. That call can only be made
> to objects which derive from the base class.
This derivation-from-a-root-type is not the idiomatic way of using
functors in C++. Given the comment above, again, templates remove the
need for a common base type. The type compatibility is still checked
statically at compile time, with no run-time dispatching necessary.
The problem is that one can't afford to write an entire non-trivial
C++ program made up of template functions all over the place. It takes
templates to get the flexibility you describe, so one loses that
flexibility where one can't use templates.
--
Steven E. Harris :: seha...@raytheon.com
Raytheon :: http://www.raytheon.com
I think you are just confusing "information" and "data". A file of C code
is information but not data.
--
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
Yes, that is a whole lot of work. That is why I would use a class variable
rather than an instance variable. To be honest, closures seem to be like
statics in C/C++ for most of the uses that I've seen. I'm not sure it buys
you much.
class myclass {
    static int counter; // this is guaranteed to be 0 by the compiler
    int my_number;
public:
    myclass(void) { my_number = counter++; }
    ~myclass(void) { counter--; }
};

int myclass::counter; // the static member still needs a definition
It isn't the parentheses. I find it elegantly simple that there isn't much
syntax to learn, though I do like Python's use of whitespace a little
better.
It isn't the closures unfortunately. They look much like static and class
variables in C++, or function pointers.
It isn't the compactness of representation. After looking at my Clean
version of the ray tracer and Paul Graham's ray tracer in ANSI Common Lisp,
I find that we have about the same amount of expressions for similar
features (mine has far more features at this point).
I like the idea behind special variables, I must admit.
In short, I'm sorry I wasted the bandwidth. I think I'm going to just
convert the ray tracer over to Scheme to see if I "get it". I'm going to
use the PLT scheme package since it 1) supports structures, and 2) I hate
the dual namespace for functions and variables of Lisp.
I do like the ideas behind functional programming. I like the "mathematical
function" notation of Haskell and Clean. I like that there is only one loop
construct (recursion) and I like that referential transparency prevents me
from having state hanging around that bites me later. I like higher order
functions that allow me to decompose and generalize my problems with
specifics parameterized.
I guess the thing I regret most is that I wanted to be part of the
"enlightened" club of people who "get it" as Eric S. Raymond said. I guess
I just don't. As Paul Graham said, "Lisp is for good programmers." I guess
I'm not good. :)
Thanks anyways,
Icos
> Because people who have experienced macros in C learn to parenthesize
> defensively... they may hate it, but they know it's a good practice.
I really like the way that everyone is stressing away about
parentheses but no one has spotted that at least one of these C macros
is *broken* (consider fromhex(i++), whose argument gets evaluated more
than once). This says something really wonderful about the state of
computer science.
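The same hazard exists for Lisp macros, incidentally; the standard
defense there is to bind the argument once via a gensym. A sketch:

(defmacro square-bad (n)
  `(* ,n ,n))              ; (square-bad (incf i)) bumps i twice

(defmacro square (n)
  (let ((tmp (gensym)))
    `(let ((,tmp ,n))      ; argument evaluated exactly once
       (* ,tmp ,tmp))))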
--tim
Oops. I don't know why I put the counter-- in the destructor. But still, I
achieve the same effect with a class variable that you had in all the reams
of code above. I was thinking of a programmable rendering pipeline using
curried functions, but the jury is still out on the utility of that one. It
might be too constrained.
Icos