* dyo...@tazent.com (Daniel J. Yoder) | Can't quite resist this one, coming from C++ land as I do. Don't worry, | I'm not going to try and defend C++. I am quite enjoying working in Lisp | after years of working C++.
well, it was six months of C++ that made me determined to find something else to do with my life. I was a free-lance journalist for a while since I just couldn't stomach programming anymore. I know where you come from.
| That said, strongly typed languages aren't a bad idea by themselves.
I try to separate "strongly typed" from "explicitly typed". I have no qualms whatsoever with strong typing through type inference, à la ML, it's the stupid need to write these things out explicitly that I find so nauseating because the only thing the compiler does with it is bark at me for not agreeing with it. how obnoxious. if it already knows what it should have been, it should shut the <beep> up, and just do its job.
| There are quite a few kinds of applications where, however unlikely you | might think it is, you don't want to find out at runtime that there is | a type mismatch.
my current application is mission-critical and such errors would be very annoying, but of all the silly mistakes I have made, a type mismatch has occurred only once. (ok, so there have been 49 programming errors caught by the debugger in 6 months of operation, dealing with such things as calling VECTOR-PUSH-EXTEND on a vector that wasn't actually adjustable -- I'm not sure how much extra stuff I'd have to write to make that error detectable at compile-time, considering that the array is created by a wrapper that adjusts the argument list a little and applies MAKE-ARRAY to it, and the wrapper is passed its arguments from a configuration file...)
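here's a minimal sketch of the kind of wrapper I mean -- MAKE-CONFIGURED-ARRAY and its option keywords are made up for illustration -- the point is simply that whether the vector ends up adjustable depends on run-time configuration data, which no compile-time type checker gets to see:

  ;; OPTIONS is a plist read from a configuration file, e.g.
  ;; (:dimensions 16 :element-type double-float :adjustable t :fill-pointer 0)
  (defun make-configured-array (options)
    (make-array (getf options :dimensions)
                :element-type (getf options :element-type t)
                :adjustable   (getf options :adjustable)
                :fill-pointer (getf options :fill-pointer)))

  ;; fine when the configuration asked for an adjustable vector with a
  ;; fill pointer; otherwise this signals an error at run-time.
  (vector-push-extend 42 (make-configured-array
                          '(:dimensions 4 :adjustable t :fill-pointer 0)))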
| It is really nice to work in Lisp where I don't have to have the | design completely correct before I can start development.
oh, yeah. I used to feel as if I had to have The Perfect Class Hierarchy before I could sit down with C++; the cost of redesign was just too high to stomach. there's hardly a week that I don't rely on CLOS updating the instances after a class has been changed in a running system, these days.
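for the curious, this is just the standard CLOS machinery at work, nothing exotic (a small sketch, not code from my application):

  (defclass point ()
    ((x :initarg :x :accessor point-x)
     (y :initarg :y :accessor point-y)))

  (defvar *p* (make-instance 'point :x 1 :y 2))

  ;; later, in the same running image: redefine the class with an extra
  ;; slot.  existing instances are updated lazily the next time they are
  ;; touched; UPDATE-INSTANCE-FOR-REDEFINED-CLASS can be specialized if
  ;; the migration needs to be smarter than an initform.
  (defclass point ()
    ((x :initarg :x :accessor point-x)
     (y :initarg :y :accessor point-y)
     (z :initarg :z :accessor point-z :initform 0)))

  (point-z *p*)                         ; => 0, no restart, no recompiling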
| Or has it? =)
not that I know. I would like to have code analysis tools that reported on calling patterns in my code, and with deep nesting of functions it is sometimes useful to see where a variable is actually used, despite being passed in to a very high-level function. and lots of other stuff that needs intimate coupling with the compiler.
* Brian Denheyer <bri...@deldotd.com> | Along these lines, it's interesting that I haven't really seen any | good guidelines for coding for speed in lisp or scheme. Anybody have | some good pointers to books or webpages ?
In addition to the books and other good advice, you might want to check out past articles concerning efficiency in this newsgroup. I remember two instances where a bunch of people improved/rewrote pieces of code for efficiency. Here are the dejanews pointers to the starting articles:
In article <3127866695326...@naggum.no>, Erik Naggum <e...@naggum.no> writes:
EN> the whole parameterized type nonsense is a testament to bad language EN> design. it is completely devoid of meaning.
I beg to disagree.
Unless you really care for efficiency, the main reason to have types in a language is that they allow you to reason about your code. They allow you to express simple invariants of the form
x always holds an integer; y always holds an object on which the generic function `foo' has an applicable method.
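For concreteness, a sketch of how such invariants might be spelled in Common Lisp (FOO, X and Y are just the names above; a SATISFIES type is checked dynamically rather than exploited by the compiler):

  ;; "x always holds an integer"
  (declaim (type integer x))

  ;; "y always holds an object on which the generic function FOO has an
  ;; applicable method" -- no built-in type says that, but a SATISFIES
  ;; type approximates it:
  (defgeneric foo (thing))

  (defun foo-applicable-p (object)
    (and (compute-applicable-methods #'foo (list object)) t))

  (deftype foo-able () '(satisfies foo-applicable-p))
  (declaim (type foo-able y))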
Cardelli (?) coined the term `typeful programming' for this approach.
Whether type declarations are compulsory in the code, optional, or not written at all, and whether they are statically or dynamically checked is more or less irrelevant to this discussion.
In this view, being able to express notions such as `list of integers' is a perfectly natural request. A number of people (basically the dependent types crowd) are even working on calculi in which you have the full power of the lambda-calculus at the level of types -- very natural if you've been brought up on Lisp, and like to have full access to the language at compile-time. You may then express types such as `vector of int of length 42' or `cons is for any n:int a function from vectors of length n to vectors of length n+1'. (I don't think the latter can be expressed in Common Lisp, not even with SATISFIES.)
(No, those calculi are not ready for general consumption by programmers, although they've been remarkably successful in theorem proving.)
P.S. Of course, this has nothing to do with classes and generic function dispatch.
* Erik Naggum <e...@naggum.no> writes: | the whole parameterized type nonsense is a testament to bad language | design. it is completely devoid of meaning.
* Juliusz Chroboczek <j...@dcs.ed.ac.uk> | I beg to disagree.
um, what part of it do you disagree with?
in case it hasn't been clear, I have been trying to show that Common Lisp _has_ what C++ calls parameterized types, and has had it since way before C++ was conceived. the overall context here is "why reject Common Lisp?" and one argument was: "it doesn't have parameterized types", where "parameterized type" is a language-specific feature in C++, much hyped because the language lacks everything that would make this a _natural_ thing in C++. the whole idea of adding parameterized types to a language that doesn't have a useful way of representing type information at run-time is just silly.
so, I'm not arguing against _types_. criminy. I'm not arguing against the ability to reason about code. how could I be strongly in favor of type inference and the author of code analysis tools if I were? I'm not arguing about the usefulness of type calculus. geez, my University education was all about static typing and verification and all that stuff that has yet to prove itself because it's just too hard on programmers to tell these things all they need to know.
I'm arguing against the stupid disdain for dynamic typing and run-time type information (in C++ terms) except as an inefficient last resort when it's too hard to do it statically.
| Whether type declarations are compulsory in the code, optional, or not | written at all, and whether they are statically or dynamically checked | is more or less irrelevant to this discussion.
I think they are irrelevant only to your own discussion, which doesn't address anything people here have actually argued against. my argument is that you cannot both wish to retain type information _and_ drop it. if your language can't speak about its types, any benefit of this type business is in efficiency in the compiler. if you want to reason about types, you just can't do it in C++, and it is entirely irrelevant to any abstract meaning of "parameterized type" -- C++ does not instantiate it in a useful way, while Common Lisp does.
| In this view, being able to express notions such as `list of integers' is | a perfectly natural request.
there is a point at which you have to ask of any theory what it will change for the better if people adopted it. theories that are very nice and clean and powerful which change nothing in the ways people think or act in practice are ineffectual in the extreme. I want theories that are so powerful that people change their ways _completely_, because what people do today is mostly stupid, especially in the language design area.
do we have the concept of a "list of integers" in Common Lisp? yes. is it different from any other list, operationally and notationally? no. did C++ have the concept of a "list of integers"? _no_. it then chose to add a list implementation that _had_ to be specialized on the type. of course, this is a pain. so naturally the non-solution "parameterized types" had to be invented. this is not because of any lack of usefulness of typeful languages, not because people do not want "list of integers", but because C++ has no concept of a _type_hierarchy_ that would allow "list of any-type" to be expressed usefully. all C++ can do is "list of some-type", and since its types do not form a hierarchy but are disjoint (even the classes: there is no "universal superclass"), instantiation of the type becomes a _necessity_. the concept of "parameterized types" was born out of this necessity.
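to make that concrete, here is one way to give "list of integers" a name in Common Lisp (a sketch; a SATISFIES type is checkable anywhere a type specifier goes, though it buys you neither dispatch nor much compiler help):

  (defun list-of-integers-p (x)
    (and (listp x) (every #'integerp x)))

  (deftype list-of-integers ()
    '(and list (satisfies list-of-integers-p)))

  (typep '(1 2 3) 'list-of-integers)    ; => T
  (typep '(1 :a 3) 'list-of-integers)   ; => NIL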
now, in a type calculus, you don't talk about types as any special kind of parameter: it's your _favorite_ kind of parameter. you compute types, and of course the computation takes types as arguments. making up "parameterized type" as a phrase means you _don't_ usually compute or specify types abstractly. in other words, you need the terminology because it is something people need to be aware that they can do, and there is no space for it in their existing vocabulary.
| A number of people ... are even working on calculi in which you have the | full power of the lambda-calculus at the level of types -- very natural | if you've been brought up on Lisp, and like to have full access to the | language at compile-time.
well, here's my biggest gripe with the explicit-typing crowd: they have a notion of a dichotomy between run-time and compile-time, and it's a moral dichotomy: "what's good at compile-time is bad at run-time, and what's good at run-time is bad at compile-time". macros in Lisp aren't different from any other function, they are just called under slightly different conditions. the _same_ language is used. why did they need a _new_ language to get a lambda-calculus for types? well, I think it's because the language in question is so badly designed that they had to.
in all the literature I have read about types, and it's considerable, this notion is hidden under a lot of consequences of holding it, but never made explicit. I think the reason it is not explicit is that it is genuinely stupid, and like so many other things in our cultures and traditions that are stupid, smart people find smart ways around them, and it's fundamentally risky to stand up and say "this is stupid" without having a much better idea, because millions of people will flock to the defense of the stupid old traditions without even considering the other idea unless it's so intuitively superior that people go "ah, of course", instead. so real progress happens only when some genius is willing to ignore the stupid ways and do something that opens up entirely new paths. in this case, inventing parameterized types has done nothing to help us get out of the stupid ways. if people were somewhat less insistent on obeying the stupid ways of their ancestors, maybe we also could have more respect for their _smart_ ways. but I digress. (and don't get me started on foreign aid to countries where a family's prestige is determined by how many male children they have.)
| You may then express types such as `vector of int of length 42'
(vector integer 42)
| or `cons is for any n:int a function from vectors of length n to vectors | of length n+1'. (I don't think the latter can be expressed in Common | Lisp, not even with SATISFIES.)
I assume some function like ADJUST-ARRAY was intended, not CONS, since CONS doesn't do that, and it's good that it cannot be expressed in Common Lisp. (why do people who want to make theoretical points so often miss the details in ways that show they have genuine disdain for them?)
recently, an implementation of core concepts in programming by contract was posted here, in Common Lisp. you can certainly express the above in such a system. (I know the argument to follow: "but does it help the compiler produce better code?" -- more on that just below.)
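a rough sketch of what I mean, with a plain assertion standing in for that contract system and a made-up name (VECTOR-SNOC), since CONS obviously isn't the function intended:

  ;; takes a vector of length N and an element, returns a vector of
  ;; length N+1; the length relation is stated as a run-time postcondition.
  (defun vector-snoc (element vector)
    (let* ((n (length vector))
           (result (concatenate 'vector vector (vector element))))
      (assert (= (length result) (1+ n)) ()
              "postcondition violated: expected length ~D, got ~D"
              (1+ n) (length result))
      result))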
| (No, those calculi are not ready for general consumption by programmers, | although they've been remarkably successful in theorem proving.)
I think you underestimate practitioners, but hey, that's the norm when people invent theories that have _zero_ effect on the way they work.
here's my view on this type business: it makes no difference whatsoever whether you compute types at run-time or at compile-time, you can reason about the code just the same. (people who scream and shout that they can't have yet to discover the simple concept of a type hierarchy with embracive, abstract types like NUMBER instead of "integer of 16 bits" being disjoint from "integer of 32 bits".) it makes no difference to anything except performance whether the system has to figure out the type of an object at run-time or at compile-time, and since we have a type hierarchy for everything, the way to move something into the compile-time arena is either by type inference or hints from the programmer. and if the programmer wants to ensure that he does not see type mismatches, he can either write code for it or declarations for it, which are identical in conceptual terms. (in particular, if you check for a type, you know that whatever was checked has that type afterwards, just like you would if you had a declaration -- so why do the strong-typing crowd get so anal about this?)
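concretely, the two spellings of the same intent (FROB is made up, of course):

  ;; as a declaration: a promise the compiler may verify, exploit, or
  ;; silently trust, depending on the implementation and the SAFETY setting.
  (defun frob-declared (x)
    (declare (type (integer 0 *) x))
    (* x x))

  ;; as a check: an explicit run-time test, after which the type is known
  ;; exactly as well as with the declaration above.
  (defun frob-checked (x)
    (check-type x (integer 0 *))
    (* x x))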
despite all the hype and noise about parameterized types in C++, you still can't use it any useful way at run-time. all you can hope for is that the compiler did all the useful things by the time the code was cast in stone, and hopefully didn't make any mistakes (which the C++ template cruft has been suffering from in all implementations for years). in my view, this restriction on type calculus is phenomenally stupid, and anyone who works with strong typing after accepting the dichotomy between run-time and compile-time is unlikely to contribute anything of serious value, simply because the mind-set is too narrow. in particular, if it is necessary to study how something behaves in a run-time situation in order to come up with a breakthrough idea, the person who believes in compile-time will have to invent his own _language_ and write a compiler for it, which is just so much work that no sane person would want to do it just to prove a simple point, and if someone did it anyway, you would get a new language with a number of features that couldn't be used anywhere else, and nothing would come of it until C++ incorporated a braindamaged version of it so people could argue against Common Lisp not adopting "modern trends in object-orientedness" or whatever their argument was.
in brief, the strong typing community are solving the wrong problems.
Erik Naggum <e...@naggum.no> writes: > however, different things require different forms of attention in > different implementations of Common Lisp, so you really need to profile > your code to see where the hot spots are.
And, actually, isn't that the only really useful advice on writing fast code in CL? Profile and then fix the bottlenecks.
jos...@lavielle.com (Rainer Joswig) wrote: >In article <8790e57nju....@soggy.deldotd.com>, Brian Denheyer <bri...@deldotd.com> wrote: >> When it comes to OO sorts of things, I CAN see that CLOS introduces a >> nasty amount of overhead as opposed to C++ where you just toss >> function pointers around.
>> Any reasonable, insightful, comments ?
>Garnet did not really use CLOS. They had their own object system >(prototype based, IIRC). I checked it out in CMU CL years ago - >it was slow also because CMU CL has/had a weak GC implementation.
hmm, garnet used kr because of its simplicity and better performance compared to CLOS in those days. I personally really enjoy kr and favor it over CLOS, but unfortunately kr's lifetime is over. it reminds me very much of how autocad handles objects; everything's event-based.
"KR: Constraint-Based Knowledge Representation Dario Giuse October 1993 Abstract KR is a very efficient knowledge representation language implemented in Common Lisp. It provides powerful frame-based knowledge representation with user-defined inheritance and relations, and an integrated object-oriented programming system. In addition, the system supports a constraint maintenance mechanism which allows any value to be computed from a combination of other values. KR is simple and compact and does not include some of the more complex functionality often found in other knowledge representation systems. Because of its simplicity, however, it is highly optimized and offers good performance. These qualities make it suitable for many applications that require a mixture of good performance and flexible knowledge representation." -- Reini
> Brian Denheyer <bri...@deldotd.com> writes: > > Along these lines, it's interesting that I haven't really seen any > > good guidelines for coding for speed in lisp or scheme. Anybody > > have some good pointers to books or webpages ? John Michael Miller wrote:
check out past articles concerning efficiency in this newsgroup. I remember two instances where a bunch of people improved/rewrote pieces of code for efficiency. Here are the dejanews pointers to the starting articles:
Thanks very much to Luca Pisati, Erik Naggum, Rainer Joswig & others who made suggestions about optimizing the function I described. I thought I would summarize my results in case others find them useful.
By far, the most important thing was Luca's suggestion to change MISSING-VALUE-P from a function to a macro. Nearly all of the consing was because the floats were being boxed for that function call, which was made very many times. That change reduced the amount of consing by 3 orders of magnitude, and the consequent speedup was nearly 2 orders of magnitude (counting reductions in GC).
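The actual definition isn't reproduced here, but the change was essentially of this shape (the sentinel constant and the body are hypothetical; the point is only that a macro expands in place, so the caller's double-float never has to be boxed for a function call):

  ;; hypothetical sentinel for "no data here"
  (defconstant +missing-value+ most-negative-double-float)

  ;; as a function, every call boxed its DOUBLE-FLOAT argument:
  ;;   (defun missing-value-p (x) (= x +missing-value+))
  ;; as a macro, the comparison is expanded in place and the float can
  ;; stay unboxed inside the caller's already-typed arithmetic:
  (defmacro missing-value-p (x)
    `(= (the double-float ,x) +missing-value+))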
The next most important suggestion was to break up the calculation so that I could declare the types in detail. This led to about a 30% speedup.
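Roughly the shape of that change, as a made-up fragment rather than the real function:

  ;; pull the inner arithmetic into a piece where every operand has a
  ;; declared type, so the compiler can use unboxed float operations.
  (defun weighted-delta (x y w)
    (declare (double-float x y w)
             (optimize (speed 3) (safety 0)))
    (the double-float (* w (abs (- x y)))))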
The final change that made any difference was to move the range calculation out of the function to avoid redundancy; that shaved off an additional 10% or so.
None of the other suggested changes (using NaN to represent missing values, using a bit vector mask instead of a list of ignore positions, getting rid of the ABS call and testing for negatives explicitly, eliminating the instances when 0 is added to the result, etc.) made any significant difference. Some actually slowed things down!
Thanks to your help, the time it takes to calculate the roughly 30 million applications of this function has gone from about 4 days to under an hour.
The final version of the function is at the end of this message.
My old professor Alan Perlis was not entirely right when he said that lisp programmers know the value of everything and the cost of nothing; some of you really do know where the costs are hiding....
jos...@lavielle.com (Rainer Joswig) writes: > The LOOM project, btw., makes a similar statement about > their new "language" Stella, which can be compiled to > Lisp or C++. The C++ code runs ten times faster, they claim:
Since that was written, the Lisp code has been improved, but we still see a performance difference of roughly 3:1 in favor of C++. Our testing indicates that this difference is largely caused by the extra overhead of CLOS method dispatch versus C++ method dispatch (and especially slot value accesses). In that sense there isn't too much that can be done about it, since a large part of the method dispatch overhead in CLOS is an inherent part of the need to be able to dynamically redefine interfaces.
-- Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu
In article <ymik8xi9oa0....@sevak.isi.edu>, Thomas A. Russ <t...@sevak.isi.edu> wrote:
>Since that was written, the Lisp code has been improved, but we still >see a performance difference of roughly 3:1 in favor of C++. Our >testing indicates that this difference is largely caused by the extra >overhead of CLOS method dispatch versus C++ method dispatch (and >especially slot value accesses). In that sense there isn't too much >that can be done about it, since a large part of the method dispatch >overhead in CLOS is an inherent part of the need to be able to >dynamically redefine interfaces.
Is that really the explanation for the perceived speed difference? I thought Stella->Lisp used some low-level vector-based representation instead of full-fledged CLOS.
Bernhard (who'd still like to know the real reason) -- -------------------------------------------------------------------------- Bernhard Pfahringer Austrian Research Institute for http://www.ai.univie.ac.at/~bernhard/ Artificial Intelligence bernh...@ai.univie.ac.at
> Since that was written, the Lisp code has been improved, but we still > see a performance difference of roughly 3:1 in favor of C++. Our > testing indicates that this difference is largely caused by the extra > overhead of CLOS method dispatch versus C++ method dispatch (and > especially slot value accesses). In that sense there isn't too much > that can be done about it, since a large part of the method dispatch > overhead in CLOS is an inherent part of the need to be able to > dynamically redefine interfaces.
Do you have profile output or whatever that shows this? I'm not doubting, but I'd be interested to see it, as I'm kind of interested in CLOS performance. What implementation was it for? (performance seems to vary a huge amount, from awful for PCL to really good for some commercial ones, especially for slot access, which some people seem to be able to optimise into the ground).
In article <nkj90dxv0zq....@tfeb.org>, Tim Bradshaw <t...@tfeb.org> wrote: > t...@sevak.isi.edu (Thomas A. Russ) writes:
> > Since that was written, the Lisp code has been improved, but we still > > see a performance difference of roughly 3:1 in favor of C++. Our > > testing indicates that this difference is largely caused by the extra > > overhead of CLOS method dispatch versus C++ method dispatch (and > > especially slot value accesses). In that sense there isn't too much > > that can be done about it, since a large part of the method dispatch > > overhead in CLOS is an inherent part of the need to be able to > > dynamically redefine interfaces.
> Do you have profile output or whatever that shows this?
MCL, for example, allows you to get better slot-access speed by restricting classes to a kind of single inheritance (primary classes). Other vendors may also offer certain speed improvements...
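Portable things worth trying regardless of the vendor-specific knobs, as a rough sketch (results vary enormously between PCL-derived and commercial CLOS implementations):

  (defclass node ()
    ((weight :initarg :weight :accessor node-weight
             :type double-float)))

  (defun total-weight (nodes)
    (declare (optimize (speed 3)))
    (let ((sum 0.0d0))
      (declare (double-float sum))
      (dolist (n nodes sum)
        ;; a named accessor instead of SLOT-VALUE with a computed slot
        ;; name; many implementations cache or inline accessor calls when
        ;; the class of N is stable.
        (incf sum (node-weight n)))))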