
Polymorphism in Common Lisp


Software Scavenger

Aug 17, 2001, 11:41:17 PM
In comparing Common Lisp with functional languages such as Haskell, I
saw a simple qsort implementation in Haskell, and wanted to do the
same thing in CL, to see if I could make it equally short and simple.
It was something like this:

qsort [] = []
qsort (x:xs) = qsort lt ++ [x] ++ qsort ge
    where lt = [y | y <- xs, y < x]
          ge = [y | y <- xs, y >= x]

But when I try to do the same thing in CL, such that it would work for
character strings, vectors, lists, etc., I run into the problem that <
is not polymorphic. E.g. string< char< etc. What's the best way to
make a polymorphic version of it, and what's the best way to make this
qsort equally short and simple in CL as in Haskell?

I also have the same problem with sequences. If the sequence is a
list I can use car and cdr, but those don't work if the sequence is a
vector or string. That makes it harder to do the x:xs part of the
Haskell qsort while making it as generic as the Haskell version.

Kent M Pitman

Aug 17, 2001, 11:53:00 PM
cubic...@mailandnews.com (Software Scavenger) writes:

> In comparing Common Lisp with functional languages such as Haskell,

I'm glad you didn't say "other" functional languages.

But your coding style below is not "functional" but rather pattern-matching.
CL has good support for functional programming, but does not require static
type analysis nor does it have a primitive built-in pattern matching facility
that is integrated with the type system as Haskell's is. Consequently,
what you are trying to do is going to yield poor results without your doing
a lot more work (which could be done, but is not trivial) to make it work.

> I saw a simple qsort implementation in Haskell, and wanted to do the
> same thing in CL, to see if I could make it equally short and simple.
> It was something like this:
>
> qsort [] = []
> qsort (x:xs) = qsort lt ++ [x] ++ qsort ge
>     where lt = [y | y <- xs, y < x]
>           ge = [y | y <- xs, y >= x]

I can't read several critical notations in this so can't suggest
how to rewrite it.

> But when I try to do the same thing in CL, such that it would work for
> character strings, vectors, lists, etc., I run into the problem that <
> is not polymorphic. E.g. string< char< etc. What's the best way to
> make a polymorphic version of it, and what's the best way to make this
> qsort equally short and simple in CL as in Haskell?

CL is not about static dispatch, so you may not want to do this. You'll
just be slowing your program down. But you can either shadow "<" and friends
and do

(defmethod < (x y) (cl:< x y))

or else you can just make a different name

(defmethod lessp (x y) (< x y))
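
For instance, a minimal sketch of the shadowing route (the package name
and the particular method specializers are just for illustration):

(defpackage :my-app
  (:use :cl)
  (:shadow #:<))

(in-package :my-app)

;; A binary generic <; note that CL:< is n-ary, which this sketch ignores.
(defgeneric < (x y))

(defmethod < ((x number) (y number)) (cl:< x y))
(defmethod < ((x string) (y string)) (and (string< x y) t))
(defmethod < ((x character) (y character)) (char< x y))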

IMO, though, your time would be better spent learning how people write CL
programs rather than trying to cast Haskell programming styles into CL.

> I also have the same problem with sequences. If the sequence is a
> list I can use car and cdr, but those don't work if the sequence is a
> vector or string. That makes it harder to do the x:xs part of the
> Haskell qsort while making it as generic as the Haskell version.

Same comment.

Juliusz Chroboczek

Aug 18, 2001, 4:58:28 AM
SS> qsort [] = []
SS> qsort (x:xs) = qsort lt ++ [x] ++ qsort ge
SS>     where lt = [y | y <- xs, y < x]
SS>           ge = [y | y <- xs, y >= x]

Common Lisp doesn't use pattern matching (at least not for function
argument binding) or list comprehensions, so you need to rewrite this
code to destructure and construct by hand:

(defun split (x list predicate)
  (let ((lt '()) (gt '()))
    (loop for elt in list
          do (if (funcall predicate elt x)
                 (push elt lt)
                 (push elt gt)))
    (values lt gt)))

(defun qsort (list)
  (if (null list) '()
      (multiple-value-bind (lt gt)
          (split (car list) (cdr list) #'<=)
        (nconc (qsort lt) (list (car list)) (qsort gt)))))

Unlike the Haskell code, this scans the input list only once for each
recursive step (although a Sufficiently Smart Compiler could, in
principle... never mind). A solution analogous to the Haskell one
could be written using REMOVE-IF, and would not be significantly
longer than the Haskell version.
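
Something like this, say (an untested sketch, specialised to numbers):

(defun qsort (list)
  (if (null list) '()
      (let ((x (car list)) (xs (cdr list)))
        (append (qsort (remove-if-not (lambda (y) (< y x)) xs))
                (list x)
                (qsort (remove-if (lambda (y) (< y x)) xs))))))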

(There's no denying, though, that list comprehension is a neat
notation, and something that has no equivalent in Common Lisp. I'm
less enthusiastic about pattern matching, especially eager pattern
matching in a lazy language.)

The interesting question is how you make this polymorphic. The
simplest solution is to defer all important decisions to the caller:

(defun qsort (list predicate)
  (if (null list) '()
      (multiple-value-bind (lt gt)
          (split (car list) (cdr list) predicate)
        (nconc (qsort lt) (list (car list)) (qsort gt)))))

Another solution is to do some simple dynamic dispatch using Lisp's
type system:

(defun my<= (x y)
  (etypecase x
    (real (<= x y))
    (string (string<= x y))
    (foo (foo<= x y))))

(defun qsort (list)
  ...
  (split (car list) (cdr list) #'my<=)
  ...)

Finally, you could do the fancy thing, and make MY<= into a generic
function. Not something I'd do, but some people have different tastes.

(defmethod my<= ((x real) (y real))
  (<= x y))

(defmethod my<= ((x string) (y string))
  ...)

SS> If the sequence is a list I can use car and cdr, but those don't
SS> work if the sequence is a vector or string.

If I really wanted to make quicksort polymorphic in the data
structure, I'd use CLOS for that. But quicksort on vectors and
quicksort on lists are really two rather different algorithms -- and a
fully generic single quicksort would no longer be quick.
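
For what it's worth, the vector variant might look roughly like this
(an in-place, Lomuto-partition sketch, assuming numbers and CL:<;
untested):

(defun qsort-vector (vec &optional (lo 0) (hi (1- (length vec))))
  (when (< lo hi)
    (let ((pivot (aref vec hi))
          (i lo))
      ;; Move everything smaller than the pivot to the front.
      (loop for j from lo below hi
            when (< (aref vec j) pivot)
              do (rotatef (aref vec i) (aref vec j))
                 (incf i))
      ;; Put the pivot between the two partitions, then recurse.
      (rotatef (aref vec i) (aref vec hi))
      (qsort-vector vec lo (1- i))
      (qsort-vector vec (1+ i) hi)))
  vec)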

Juliusz

Software Scavenger

Aug 18, 2001, 6:53:10 AM
Kent M Pitman <pit...@world.std.com> wrote in message news:<sfw3d6q...@world.std.com>...

> I can't read several critical notations in this so can't suggest
> how to rewrite it.

Here is a CL version of approximately the same thing for lists of
numbers:

(defun qsort (a)
  (if (< (length a) 2) a
      (let* ((x (car a))
             (xs (cdr a))
             (lt (remove-if (lambda (y) (>= y x)) xs))
             (ge (remove-if (lambda (y) (< y x)) xs)))
        (append (qsort lt) (list x) (qsort ge)))))

What I want to do is make it as short and clear as possible to
contradict the claim that Haskell programs are shorter and clearer
than Lisp programs. It seems like I could use general purpose
functions and macros, as long as they're general-purpose enough to
make it reasonable to have them handy and not just ad-hoc. For
example, I could define len01 such that (len01 a) means (< (length a) 2),
because that would be generally useful, as the degenerate case for
a lot of recursive functions. And I could define umlet, to define
subsets of a sequence, using predicates for each subset. ("um" for
"un-merge"). Because that too would be generally useful. Something
like (umlet (var pred var pred ... optional-var) sequence forms). The
optional-var would get the residue after using the predicates to
distribute the sequence into the vars. Or does CL already have such a
macro or function?
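
A rough, untested sketch of the umlet I have in mind (it pushes, so each
subset comes out in reverse order, which doesn't matter for qsort):

(defmacro umlet ((&rest clauses) sequence &body forms)
  ;; CLAUSES is var pred var pred ..., with an optional trailing var
  ;; that collects the residue.
  (let* ((pairs (loop for (var pred) on clauses by #'cddr
                      while pred
                      collect (list var pred)))
         (rest-var (when (oddp (length clauses))
                     (car (last clauses))))
         (e (gensym "ELT")))
    `(let (,@(mapcar #'first pairs)
           ,@(when rest-var (list rest-var)))
       (map nil (lambda (,e)
                  (cond ,@(mapcar (lambda (pair)
                                    `((funcall ,(second pair) ,e)
                                      (push ,e ,(first pair))))
                                  pairs)
                        ,@(when rest-var
                            `((t (push ,e ,rest-var))))))
            ,sequence)
       ,@forms)))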

It seems like I could shorten qsort to (untested):

(defun qsort (a)
  (if (len01 a) a
      (umlet (lt (lambda (y) (< y (car a))) ge) (cdr a)
        (append (qsort lt) (subseq a 0 1) (qsort ge)))))

But is that the best I could do? The Haskell version still seems
slightly shorter and clearer.

Pierre R. Mai

Aug 18, 2001, 6:44:56 AM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> The interesting question is how you make this polymorphic. The
> simplest solution is to defer all important decisions to the caller:
>
> (defun qsort (list predicate)
>   (if (null list) '()
>       (multiple-value-bind (lt gt)
>           (split (car list) (cdr list) predicate)
>         (nconc (qsort lt) (list (car list)) (qsort gt)))))

Which is IMHO a very good solution, because many types support more
than one (partial) order, and hence it should be up to the user to
supply the relevant ordering predicate.

> SS> If the sequence is a list I can use car and cdr, but those don't
> SS> work if the sequence is a vector or string.
>
> If I really wanted to make quicksort polymorphic in the data
> structure, I'd use CLOS for that. But quicksort on vectors and
> quicksort on lists are really two rather different algorithms -- and a
> fully generic single quicksort would no longer be quick.

Indeed. And that's why many interesting abstractions only provide a
clean interface, but a very intricate, highly optimized and
specialized implementation. That's why e.g. SORT, STABLE-SORT and
MERGE are part of the standard, and provide nice generic interfaces,
but take up more than 450 LoC in CMU CL's implementation for example.

If you try to make sorting simpler than that, you will end up with a
non-general solution, bad performance, or both. For every such
generic problem, there is a certain amount of intrinsic complexity,
which can't be reduced without increasing complexity in other parts of
the system.

Regs, Pierre.

--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein

Erik Naggum

Aug 18, 2001, 7:42:46 AM
* Software Scavenger

> What I want to do is make it as short and clear as possible to contradict
> the claim that Haskell programs are shorter and clearer than Lisp
> programs.

Clearly, Haskell must have been used for something that Common Lisp can
do better? Rather than let Haskell set the baseline, let Common Lisp set
the baseline and try to show what long and verbose code Haskell needs to
do the same thing. E.g., it, too, should do poorly for parsing XML. :)

> But is that the best I could do? The Haskell version still seems
> slightly shorter and clearer.

Every language has been optimized for _some_ forms of expression. That
you have found that Haskell is doing well in what it was clearly intended
to do is hardly surprising -- there is absolutely no reason to think the
Haskell people are idiots. The same goes for Perl. Of _course_ it wins
hands down in the area of economy of expression for its intended tasks.

The _real_ issue in these heavily optimized languages is whether there is
a huge disparity of compactness for problems of similar complexity of
abstraction and expression. This is where Common Lisp truly excels
beyond _any_ other language. You may, and some do, complain that there
is no super-compact expression of some particular problems, but what you
completely miss is that the _penalty_ for thinking "outside of the box"
or outside of the "intended" uses of the language is _also_ absent.

Optimization of design has _huge_ costs in terms of what you can _not_ do
efficiently. This factor is almost always ignored in language comparisons
because people are happy to switch languages and so enjoy creating _new_
ones that they can optimize for some miniscule little featurette, until,
of course, they run into the popularity problem and their languages grow
so large that the time a human mind needs to learn to use it efficiently
will have dwarfed any _real_ savings from economy of expression.

But by all means, write qsort in Haskell and hello, world in C. I am not
sure what you are going to _do_ with those two utilities, however. If
you have a language that is good at demonstrating such things, maybe it
has been optimized for demonstrations? At least one really huge software
company has made the bulk of its earnings on its ability to demonstrate
to people other than the real users what they could do.

From the Latin word "imponere", base of the obsolete English "impone" and
translated as "impress" in modern English, Nordic hackers have coined the
terms "imponator" (a device that does nothing but impress bystanders,
referred to as the "imponator effect") and "imponade" (that "goo" that
fills you as you get impressed with something -- from "marmelade", often
referred to as "full of imponade", always ironic).

I am not sure _exactly_ what about Haskell prompted me to tell you about
those two Nordic hacker jargon terms, however. :)

///

Knut Arild Erstad

Aug 18, 2001, 8:34:49 AM
[Software Scavenger]
: But when I try to do the same thing in CL, such that it would work for
: character strings, vectors, lists, etc., I run into the problem that <
: is not polymorphic. E.g. string< char< etc. What's the best way to
: make a polymorphic version of it, and what's the best way to make this
: qsort equally short and simple in CL as in Haskell?

I don't know Haskell, but the CL way of doing it would not be to make a
polymorphic "<" (although it is possible), but to supply a predicate to
the function. For instance:

(defun qsort (list &key (test #'<))
  (if (null list)
      ()
      (let* ((x (first list))
             (xs (rest list))
             (test< (lambda (y) (funcall test y x)))
             (lt (remove-if-not test< xs))
             (gt (remove-if test< xs)))
        (append (qsort lt :test test) (list x)
                (qsort gt :test test)))))

> (qsort '(1 2 5 3 9 6))
(1 2 3 5 6 9)
> (qsort '(1 2 5 3 9 6) :test #'>)
(9 6 5 3 2 1)

It should probably also accept a KEY parameter, like the standard function
SORT does:

> (sort '((1 4) (5 2) (3 7) (2 1)) #'< :key #'first)
((1 4) (2 1) (3 7) (5 2))
> (sort '((1 4) (5 2) (3 7) (2 1)) #'< :key #'second)
((2 1) (5 2) (1 4) (3 7))

(The KEY parameter is convenient, but you could do the same without it.)
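
An untested sketch of the toy qsort with KEY added:

(defun qsort (list &key (test #'<) (key #'identity))
  (if (null list)
      ()
      (let* ((x (first list))
             (xs (rest list))
             (kx (funcall key x))
             (test< (lambda (y) (funcall test (funcall key y) kx)))
             (lt (remove-if-not test< xs))
             (gt (remove-if test< xs)))
        (append (qsort lt :test test :key key) (list x)
                (qsort gt :test test :key key)))))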

: I also have the same problem with sequences. If the sequence is a
: list I can use car and cdr, but those don't work if the sequence is a
: vector or string. That makes it harder to do the x:xs part of the
: Haskell qsort while making it as generic as the Haskell version.

Well, you could use (elt seq 0) and (subseq seq 1), but that would be
slow. It is probably better to write an optimized algorithm for vectors.

--
Knut Arild Erstad

But if less is more, then just think how much more more will be.
-- from "Frasier"

Kent M Pitman

Aug 18, 2001, 11:56:23 AM
cubic...@mailandnews.com (Software Scavenger) writes:

> Kent M Pitman <pit...@world.std.com> wrote in message news:<sfw3d6q...@world.std.com>...
>
> > I can't read several critical notations in this so can't suggest
> > how to rewrite it.
>
> Here is a CL version of approximately the same thing for lists of
> numbers:
>
> (defun qsort (a)
>   (if (< (length a) 2) a
>       (let* ((x (car a))
>              (xs (cdr a))
>              (lt (remove-if (lambda (y) (>= y x)) xs))
>              (ge (remove-if (lambda (y) (< y x)) xs)))
>         (append (qsort lt) (list x) (qsort ge)))))
>
> What I want to do is make it as short and clear as possible to
> contradict the claim that Haskell programs are shorter and clearer
> than Lisp programs.

You first need to define shortness and clearness.

Then you need to decide whether the claim is true. Perhaps it's not.
Or perhaps, as is more likely, it's true for some programs and not others.

Especially since Haskell seems fixated on the idea of little mathy formulas,
I'd think you'd be more concerned with the question of whether, if you could
do this proof for qsort, that would form the necessary inductive basis so
that you could say "and it follows likewise for all other programs".

Forget short and clear, and forget Turing equivalence since I've never seen
it lead to anything other than idiotic conclusions. I doubt the two languages
even have a fully overlapping set of programs they can program, where by
"can program" I don't mean theoretically with infinite tape and all that
tommyrot, but instead I mean "if I assign you a project to be done by some
finite time that is commercially meaningful like day's end, month's end, or
year's end, and then I staff it with finite staff and finite resources and,
for that matter, people with finite imagination, then it will get done at all,
or perhaps also be readable (or even elegant), or be maintainable (or even
extensible)."

I don't think this is special to Haskell or any other language. It's
just true of the "expressivity" (rather than "computability") of languages.
Some make it easier to do some things, and some make it easier to do others.
But the making of something easy usually comes at the expense of something
else being hard. And in the end, the languages map out different spaces
of things you can and can't do. I find the consideration of the shape of
that space much more interesting than the question of whether some program
we all know so well that there's no point in writing it looks elegant. What
could that possibly prove?

> It seems like I could use general purpose
> functions and macros, as long as they're general-purpose enough to
> make it reasonable to have them handy and not just ad-hoc. For
> example, I could define len01 such that (len01 a) means (< (length a) 2),
> because that would be generally useful, as the degenerate case for
> a lot of recursive functions. And I could define umlet, to define
> subsets of a sequence, using predicates for each subset. ("um" for
> "un-merge"). Because that too would be generally useful. Something
> like (umlet (var pred var pred ... optional-var) sequence forms). The
> optional-var would get the residue after using the predicates to
> distribute the sequence into the vars. Or does CL already have such a
> macro or function?
>
> It seems like I could shorten qsort to (untested):
>
> (defun qsort (a)
>   (if (len01 a) a
>       (umlet (lt (lambda (y) (< y (car a))) ge) (cdr a)
>         (append (qsort lt) (subseq a 0 1) (qsort ge)))))
>
> But is that the best I could do? The Haskell version still seems
> slightly shorter and clearer.

So?

Does that make some statement about the language? How short is a Haskell
program to compute whether tomorrow is Wednesday, to print the number 19
in old-style roman numerals? If it was shorter in Lisp would that prove
anything? Better questions are how easy/fast/elegant is it in Haskell
to write a program that manages a commercial airline reservation system,
computes the next digit of pi, renders a commercial quality movie, diagnoses
a disease, controls a telephone switching system, performs a Taylor series
expansion, monitors the stock market, manages a major commercial web site,
etc. And supposing that even one of these, again, is better in either Lisp
or Haskell, what will this tell you about whether it's going to work
better in another language?

We're a community of practical people. We can live with Haskell
people making claims their programs are shorter and clearer than ours.
We can even live with them being right on some occasions. (Even in
the unlikely case that it's true on all occasions, we seem to be
getting by fine in spite of even that. :-)

CL is there for you to learn and use. Life is short. Spend your time
on something that matters.

Kent M Pitman

Aug 18, 2001, 12:30:19 PM
cubic...@mailandnews.com (Software Scavenger) writes:

> (defun qsort (a)
>   (if (< (length a) 2) a
>       (let* ((x (car a))
>              (xs (cdr a))
>              (lt (remove-if (lambda (y) (>= y x)) xs))
>              (ge (remove-if (lambda (y) (< y x)) xs)))
>         (append (qsort lt) (list x) (qsort ge)))))
>
> What I want to do is make it as short and clear as possible

Oh, and by the way, are short and clear always the same?
Statements like this are often made casually as if "short and clear"
moved in lockstep, but the pressure of your quest appears to be toward
short as if clear will follow. You nowhere define "clear".

To offer an extreme example of the importance of this in academic
conversations of this kind, I'll cite this:

I used to love to program in Teco, for example. And I know people who
liked APL for similar reasons. The programs in both tend to be very
short. But the shortest program may not be clearest. And it also may
not be the most efficient. In APL, absent clever compilation, it may
conjure all kinds of intermediate expressions that have no purpose and
need to be contracted out to keep things efficient, for example.
And many people complained that APL and Teco were not "clear" in any case,
to the point where, in spite of my appreciation for their having
a degree of special elegance, I still call them "line noise languages".

And suppose we took Teco and lengthened the token length and called it
PostScript. Would that make it better? Would it even make it "longer"?
Or is the length a token-length rather than a character length? Does
translating something to use longer human-language identifiers (as is my
sense on average for, say, German) make it more "long"? Less "clear"?

[I pick German here because I have a collection of various Agatha
Christie's Poirot books in a variety of languages (English, Spanish,
Portuguese, French, and German -- hardly the full set of languages
she's translated into; I think there are near a hundred, competitive
with the Bible in terms of "translated into the most languages" for
hardcopy books). The German translations are notable because they are
in a smaller font, and as nearly as I can tell this is to hold the
page count more or less constant in the face of word lengthening
everywhere. Not a very scientific sampling, obviously. Still, my
real point is not about this or that language, but to merely note that
as one moves from one human language to another, the words one uses
lengthen and shorten. So to say that letter count is a measure of
clarity seems dubious.]

Siegfried Gonzi

Aug 18, 2001, 5:03:25 PM
"Kent M Pitman" <pit...@world.std.com> schrieb im Newsbeitrag
news:sfwitfl...@world.std.com...

> CL is there for you to learn and use. Life is short. Spend your time
> on something that matters.

I really respect every Lisp programmer. They are in the right spot, because
they do not bother again and again with things like "malloc" or "int *p" and
other useless constructs. But the Lisp discussion follows a special scheme:
One points out a feature of another language and compares it to Lisp (for
reasons of its own). People in the Lisp community then acknowledge the
elegance of that specific feature, but then elaborate in long, disguised
posts about the benefits of Lisp and conclude: Lisp forever and Lisp
will also rule in the future.

Lisp is a powerful language and is far above C/C++. No question. But Lisp
is not alone on the field. Some claims about Lisp resemble saying:
Einstein's theory is great and of some benefit but as we know, Newton always
rules. Is this physics? No it is not. Sure, most problems are adequately
described by Newtonian mechanics, but does this mean that there is no
place for Einstein? You can even calculate the H-atom with "simple"
Bohr methods. But you can also consult the abstract Schroedinger equations.
And for the He-atom and above you always have to reach for "Psi".

Why are Lispers so sure that there doesn't exist a "pure" functional
language which is a notch above Lisp? The defenders of Lisp point out that
Lisp has so many big projects to show, which have been brought safely to
completion. But does this really hold? Functional programming languages
(which deserve attention) have been in the game only for 15 years or so
compared to 40 years of Lisp. Must we show a user system written in a
functional language to compete against Lisp? I say no. Why? Lispers should
know better, because there is more to it than shuffling around files. For
example, Emacs has been a great example of the usefulness of Lisp, but it is
only a tool and hasn't been responsible for the advance of mankind. Or does
someone believe that a Lisp user is better prepared to receive the Nobel
prize? I hope nobody will be surprised when I claim that even a C++ user can
get it. And it is even so that most Nobel prize winners are not capable of
writing a single line of code.

That said, I also jump on the language bandwagon and vote for Clean. But
everybody comfortable with Lisp should stay with Newtonian mechanics. This is
not a problem, as long as one discerns that there is "maybe" more.


S. Gonzi

Christopher Stacy

Aug 18, 2001, 6:06:20 PM
>>>>> On Sat, 18 Aug 2001 23:03:25 +0200, Siegfried Gonzi ("Siegfried") writes:
Siegfried> Why are Lispers so sure that there doesn't exist a "pure"
Siegfried> functional language which is a notch above Lisp?

Who made that claim?

Adam Sampson

Aug 18, 2001, 5:29:33 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> (There's no denying, though, that list comprehension is a neat
> notation, and something that has no equivalent in Common Lisp. [...]

I'd argue that LOOP is equivalent (or, at least, it can be used as an
equivalent way of doing something with every member of a
list). Haskell's "[x + 1 | x <- xs, x > 27]" is:

(loop for x in xs
      if (> x 27)
        collecting (+ x 1))

--
Adam Sampson <a...@gnu.org> <URL:http://azz.us-lot.org/>

Kaz Kylheku

Aug 18, 2001, 7:27:57 PM
In article <99816749...@hagakure.utanet.at>, Siegfried Gonzi wrote:
>reasons of its own). People in the Lisp community then acknowledge the
>elegance of that specific feature, but then elaborate in long, disguised
>posts about the benefits of Lisp and conclude: Lisp forever and Lisp
>will also rule in the future.

[ snip ]

>Why are Lispers so sure that there doesn't exist a "pure" functional
>language which is a notch above Lisp?

Scheme? ;)

Really, anything that is a notch above Lisp has to be Lisp-like, in
the sense that it has a simple read syntax that allows you to write
trees directly. Anything with a more complex read syntax is going to
present difficulties to someone who wants to create new sublanguages
within the language, so going to complex syntax is a mistake.

So right away, when something has some cute lexical sugar, you
know that it is beneath Lisp. That condensed notation corresponds to
some abstract syntax tree, and that syntax tree could be represented
directly in Lisp, and its semantics could be written as code which
processes that tree. In other words, cute notations are just a
compiler parlor trick, a mere exercise in parsing for those who
want to refresh their recollection of the Dragon Book. The compiler
or interpreter for the language with cute notation has to take
that notation and turn it into some other representation, such
as a parse tree. And then whatever the semantics of that parse tree are,
are carried out by an algorithm embedded in that compiler
or interpreter. What prevents you from implementing the same actions
in Lisp, such that they operate on a tree expressed in Lisp?

In Lisp, you can have sugar. Macros can create condensed notations,
allowing you to type something with a simple syntax, which translates
to something complex. The reader can be customized as well to some
extent to create terse notations.

So the advantage in some specialized languages rests entirely in their
convenient notational sugar, which comes with ready-made semantics.
Whereas to get that in Lisp, the user has to do work, possibly
quite difficult work. But then if the result is worthwhile, that
work can be shared with other Lisp users.

>The defenders of Lisp point out that Lisp
>has so many big projects to show, which have been brought safely to
>completion. But does this really hold? Functional programming
>languages (which deserve attention) have been in the game only for 15
>years or so compared to 40 years of Lisp.

Successful languages support multiple paradigms of programming. Lisp
supports functional programming, and also imperative and object-oriented
programming. It also allows you to write code to support new kinds of
programming, because no single programming paradigm is a panacea. We
are not finished discovering new ways of programming. There is a
reasonable confidence that Lisp will be able to support emerging new
ways of programming that its users care about.

>only a tool and hasn't been responsible for the advance of mankind. Or does
>someone believe that a Lisp user is better prepared to receive the
>Nobel prize? I hope nobody will be surprised when I claim that even a
>C++ user can get it.

Given that Nobel invented a way to stabilize an explosive substance,
so that it could be handled more safely while remaining explosive,
the awarding of a Nobel prize related to C++ would be poetically ironic. :)

Frode Vatvedt Fjeld

Aug 18, 2001, 7:25:24 PM
Adam Sampson <a...@gnu.org> writes:

> (loop for x in xs
>       if (> x 27)
>         collecting (+ x 1))

I'd rephrase this as

(loop for x in xs when (> x 27) collect (1+ x))

--
Frode Vatvedt Fjeld (nitpicker)

Kaz Kylheku

Aug 18, 2001, 7:34:57 PM
In article <87pu9tt...@cartman.azz.us-lot.org>, Adam Sampson wrote:
>Juliusz Chroboczek <j...@pps.jussieu.fr> writes:
>
>> (There's no denying, though, that list comprehension is a neat
>> notation, and something that has no equivalent in Common Lisp. [...]
>
>I'd argue that LOOP is equivalent (or, at least, it can be used as an
>equivalent way of doing something with every member of a
>list). Haskell's "[x + 1 | x <- xs, x > 27]" is:
>
>(loop for x in xs
>      if (> x 27)
>        collecting (+ x 1))

And of course, you could make the reader parse (for example) the notation:

#[+ x 1 ! (x xs) (> x 27)]

which would spit out the above loop.

But why bother with syntactic sugar, when you have syntactic protein. ;)
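
A sketch of how that notation might be wired up (untested; it assumes the
! marker is read into the current package, and that the comprehension has
exactly one binding and one filter):

(defun read-comprehension (stream subchar arg)
  (declare (ignore subchar arg))
  ;; Read everything up to the closing ], then split on the ! marker.
  (let* ((items (read-delimited-list #\] stream t))
         (pos (position '! items))
         (expr (subseq items 0 pos))      ; e.g. (+ x 1)
         (binding (nth (1+ pos) items))   ; e.g. (x xs)
         (test (nth (+ pos 2) items)))    ; e.g. (> x 27)
    `(loop for ,(first binding) in ,(second binding)
           when ,test
             collect ,expr)))

(set-macro-character #\] (get-macro-character #\)) nil)
(set-dispatch-macro-character #\# #\[ #'read-comprehension)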

Kent M Pitman

Aug 18, 2001, 7:43:27 PM
"Siegfried Gonzi" <siegfri...@kfunigraz.ac.at> writes:

> ... People in the Lisp community then acknowledge the
> elegance of that specific feature, but then elaborate in long, disguised
> posts about the benefits of Lisp and conclude: Lisp forever and Lisp
> will also rule in the future.

I don't even assert that Lisp rules now, much less in the future. I
just assert that I like it and that you are welcome to either like it
or not. If you're curious why I like it, I can tell you. But I
cannot tell you that you will share my values. I think most Lisp
programmers are the same, and I think the reason is that they have
been attracted to a multi-paradigm language--a language that has
within it the notion of live and let live. People who want only a
single point of view, IMO, tend to be attracted to other languages,
many of which take the position that there is only one legitimate point
of view.

> Lisp is a powerful language and is far above C/C++. No question. But Lisp
> is not alone on the field.

Did I suggest it was? I thought I said just the opposite upstream:

kmp> CL [...] does not require static type analysis nor does it have a
kmp> primitive built-in pattern matching facility that is integrated
kmp> with the type system as Haskell's is.

You may here have heard me saying that this made CL better, but I did not
say that. My remarks were value neutral, merely citing differences. If
you value static type analysis or primitive pattern matching, this was a
statement that Lisp is weaker than Haskell in that department. To admit
weakness in an area doesn't bother me. I'm happy that Lisp is strong in
enough areas to suit me that I can be honest about what it doesn't do well.

kmp> CL is not about static dispatch, so you may not want to do this.

This doesn't condemn static dispatch. It says CL is not based on it.

kmp> You'll just be slowing your program down.

This acknowledges that a program written in this style probably runs faster
in Haskell than in Lisp. (It doesn't get into the issue of whether people
do or don't want to write in this style, whether all programs are amenable
to this style, and other interesting questions. It merely acknowledges a
local fact--that people who want to program this way are probably better
off in a language designed for this kind of thing.)

The post you more directly quote from does not really contain any assertions
about Lisp at all. It just contains some observations about the way
discussions on program aesthetics typically go awry--i.e., for failure to
have defined the meaning of words like "clear" or "aesthetic" or even "short".

Perhaps you were just seeing in my reply what you expected to see and
not what I actually wrote.

> Some claims about Lisp resemble saying:
> Einstein's theory is great and of some benefit but as we know, Newton always
> rules. Is this physics? No it is not. Sure, most problems are adequately
> described by Newtonian mechanics, but does this mean that there is no
> place for Einstein? You can even calculate the H-atom with "simple"
> Bohr methods. But you can also consult the abstract Schroedinger equations.
> And for the He-atom and above you always have to reach for "Psi".

If I'm going to have to defend myself, I'd rather you quote what I
actually said than make something up.

> Why are Lispers so sure that there doesn't exist a "pure" functional
> language which is a notch above Lisp?

I don't even think Lisp IS a functional language, much less a pure
functional language. I made no assertion about Haskell being pure or good
or anything. I merely said that an attempt to use Haskell's programming
style in Lisp will result in confusion.

Marco Antoniotti

Aug 18, 2001, 7:45:15 PM

Adam Sampson <a...@gnu.org> writes:

> Juliusz Chroboczek <j...@pps.jussieu.fr> writes:
>
> > (There's no denying, though, that list comprehension is a neat
> > notation, and something that has no equivalent in Common Lisp. [...]
>
> I'd argue that LOOP is equivalent (or, at least, it can be used as an
> equivalent way of doing something with every member of a
> list). Haskell's "[x + 1 | x <- xs, x > 27]" is:
>
> (loop for x in xs
>       if (> x 27)
>         collecting (+ x 1))

Come on! You can do much better than that! :)

[(+ x 1) / x in xs / (> x 27)]

and

* (defun collecting (xs)
    [(+ x 1) / x in xs / (> x 27)])
COLLECTING

* (collecting [1 2 3 4 44 55 3 999 2 -9 125])
(45 56 1000 126)

or

* (defun primes (max)
    [n in (range 3 max 2)
       / (not (exist m in (range 3 (min (1- n) (+ 2 (sqrt n))) 2)
                     / (= (mod n m) 0)))])
PRIMES

* (primes 100)
(3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97)

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.

Siegfried Gonzi

Aug 18, 2001, 8:13:02 PM
----- Original Message -----
From: "Kaz Kylheku" <k...@ashi.footprints.net>

> So the advantage in some specialized languages rests entirely in their
> convenient notational sugar, which comes with ready-made semantics.
> Whereas to get that in Lisp, the user has to do work, possibly
> quite difficult work. But then if the result is worthwhile, that
> work can be shared with other Lisp users.

In the Forth community there is also the saying: Forth is a model for a
scalable language. Forth defines other languages.

Some digression:
I have to rant here, by the way, that Lisp programmers are so far, far,
far ahead of Forth programmers that no Lisp programmer is even aware of it.
Because Forth programmers only implement languages and do not program
problems. Therefore I often do not understand why people compare Forth to
Lisp and say they are similar. Forth has got all the nasty crap, like:
pointers, no memory management (no garbage collection) and so forth.
Lisp programmers code problems.

I am always a little bit sceptical about "defining" your language. Often one
has to make a demarcation line between planning/designing the code/project
and using existing libraries. Isn't it said that packages are Lisp's
greatest strength in industry?

On the other side, Clean also has "macros" as a feature; maybe they are not
as powerful as Lisp macros, but it is not established whether functional
programmers really need many macros.

The new Clean compiler will be written in Clean itself and will show some
new features:

-multi-parameter type constructor classes;
-a hybrid type system with both static as well as dynamic typing;
-an improved module system;
-storing and retrieving of expressions (data as well as code) with just one
function call;
-dynamic linking of code (plug ins) which will be type checked dynamically;
-just-in-time code generation.

Clean has the same advantage as Lisp: it is possible to compile (whatever
this means in the two paradigms). Scheme is much lousier concerning
compiling. Sure, there are Stalin and MacGambit and the like out there. But
they all go via fucking C. This would not be a problem if the C compiler
were hidden natively in the user's system (okay, in Unix one can assume this
as de facto).

S. Gonzi


Jochen Schmidt

Aug 19, 2001, 11:11:05 AM
Siegfried Gonzi wrote:

> Why are Lispers so sure that there doesn't exist a "pure" functional
> language which is a notch above Lisp?

As W. Gibson wrote in "The Difference Engine":

Mick laughed. "'Pure.' You know what that means, Sybil?
It means they can't get it to run." He rubbed his hands together, grinning.
"No one can get it to run."

ciao,
Jochen

--
http://www.dataheaven.de

Friedrich Dominicus

Aug 18, 2001, 10:38:51 PM

"Software Scavenger" <cubic...@mailandnews.com> schrieb im Newsbeitrag
news:a6789134.01081...@posting.google.com...

> In comparing Common Lisp with functional languages such as Haskell, I
> saw a simple qsort implementation in Haskell, and wanted to do the
> same thing in CL, to see if I could make it equally short and simple.
> It was something like this:
>
> qsort [] = []
> qsort (x:xs) = qsort lt ++ [x] ++ qsort ge
>     where lt = [y | y <- xs, y < x]
>           ge = [y | y <- xs, y >= x]
Most of the Haskell programs I've read use such dense code often.
Now, list comprehensions are very nice and pattern matching is too, but
please try this in Haskell:
try this in Haskell
(defun fact (n)
  (format t "n is now = ~A~%" n)
  (if (= n 0)
      1
      (* n (fact (1- n)))))

Or how about this?
(defclass foo ()
  ((foo1 :accessor foo1)))

Now how does Haskell do that?
Regards
Friedrich


Bijan Parsia

Aug 19, 2001, 12:17:12 PM
On Sat, 18 Aug 2001, Erik Naggum wrote:

> * Software Scavenger
> > What I want to do is make it as short and clear as possible to contradict
> > the claim that Haskell programs are shorter and clearer than Lisp
> > programs.
>
> Clearly, Haskell must have been used for something that Common Lisp can
> do better? Rather than let Haskell set the baseline, let Common Lisp set
> the baseline and try to show what long and verbose code Haskell needs to
> do the same thing. E.g., it, too, should do poorly for parsing XML. :)

[snip]

Actually, (and beside your point :)) there's some nice XML work in Haskell
that ends up being quite economical next to...oh...XSLT....but then again,
what *doesn't* :)

See my XML.com article:
http://www.xml.com/pub/a/2001/02/14/functional.html


I discuss the HaXML utilities (mostly the combinator lib) on page 2:
http://www.xml.com/pub/a/2001/02/14/functional.html?page=2


People who complain about CL's support of XML obviously have never faced
the brutality of the tools available for other languages :)

Cheers,
Bijan Parsia.

Siegfried Gonzi

Aug 19, 2001, 1:09:19 PM

"Friedrich Dominicus" <fr...@q-software-solutions.com> schrieb im
Newsbeitrag news:9lolj4$u3l$00$1...@news.t-online.com...

> Most of the Haskell programs I've read use such dense code often.
> Now, list comprehensions are very nice and pattern matching is too, but
> please try this in Haskell:
> (defun fact (n)
>   (format t "n is now = ~A~%" n)
>   (if (= n 0)
>       1
>       (* n (fact (1- n)))))

This is as if one would require: Show me multiple inheritance in C.

Clean has a feature to print intermediate computation steps. But I do not
use it often, hence I would have to dig up the manuals.

The fac(n) implementation is straightforward in Clean and Haskell, too. I
think this is not what you want?

S. Gonzi


Friedrich Dominicus

Aug 19, 2001, 2:36:56 PM

"Siegfried Gonzi" <siegfri...@kfunigraz.ac.at> schrieb im
Newsbeitrag news:99823983...@hagakure.utanet.at...

>
> "Friedrich Dominicus" <fr...@q-software-solutions.com> wrote in message
> news:9lolj4$u3l$00$1...@news.t-online.com...
>
> > Most of the Haskell programs I've read use such dense code often.
> > Now, list comprehensions are very nice and pattern matching is too,
> > but please try this in Haskell:
> > (defun fact (n)
> >   (format t "n is now = ~A~%" n)
> >   (if (= n 0)
> >       1
> >       (* n (fact (1- n)))))
>
> This is as if one would require: Show me multiple inheritance in C.
I disagree. The OP posted a Haskell code example which uses features of
Haskell; I showed another example that does output in an otherwise simple
function. Now it was about dense code: how do you write such a thing, e.g.,
in Haskell? Or how do you use the OO style of programming which is built
into Common Lisp? Now back to the sort: in Common Lisp I do not care about
the implementation of a sort algorithm; I would simply write (sort whatever
#'lessp) and be done with it. Hard to beat, isn't it?

It is a very dangerous game to pick out a special feature of a language
(foo) and ask in another group why that isn't so easy in xy. I think it's
fair to counter with other examples which show what is lacking in foo.

Regards
Friedrich


Software Scavenger

Aug 20, 2001, 4:52:22 AM
Adam Sampson <a...@gnu.org> wrote in message news:<87pu9tt...@cartman.azz.us-lot.org>...

> I'd argue that LOOP is equivalent (or, at least, it can be used as an
> equivalent way of doing something with every member of a
> list). Haskell's "[x + 1 | x <- xs, x > 27]" is:
>
> (loop for x in xs
>       if (> x 27)
>         collecting (+ x 1))

I've just started learning functional programming, mostly over the
weekend, but one big difference, it seems to me, is that the above
Lisp code requires memory for xs, while a lazy functional language
doesn't. For example, xs could be an infinite sequence of numbers,
calculated as needed.

To take another example, consider an algorithm for generating a
sequence of prime numbers, starting with 2, and including all primes,
terminating only when no more primes are needed. In a lazy functional
language, you could start with the infinite sequence of numbers 2, 3,
4, ... and use each prime to filter the rest of the numbers.

To do the same thing in Lisp, I might create a function named primes
which would take a sink function as its argument. It should invoke
the sink function with each prime, and should continue indefinitely
until the sink function no longer wants any. (Maybe the sink function
should return t or nil to indicate whether it wants more.) However,
that doesn't seem as elegant as the lazy functional way, because it
implies some kind of explicit state used by the sink function between
invocations to determine whether it has more work to do, and an
explicit list of filter primes or filter functions maintained by the
primes function.
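
Something like this rough, untested sketch (primes is a made-up name;
the sink returns nil when it wants no more):

(defun primes (sink)
  ;; FOUND holds the primes seen so far -- the explicit state
  ;; mentioned above.  Stop as soon as SINK returns nil.
  (loop with found = '()
        for n from 2
        when (notany (lambda (p) (zerop (mod n p))) found)
          do (push n found)
             (unless (funcall sink n)
               (return))))

;; E.g., to collect the primes below 100:
;; (let ((acc '()))
;;   (primes (lambda (p) (when (< p 100) (push p acc))))
;;   (nreverse acc))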

It seems to me that a lot of functional languages are very good at
this kind of thing. But I wonder how well they do at real-world
problems, and how I would find out.

Greg Menke

Aug 20, 2001, 8:11:02 AM

> It seems to me that a lot of functional languages are very good at
> this kind of thing. But I wonder how well they do at real-world
> problems, and how I would find out.

I can't speak to the subtler question, but Lisp at least does just
fine in real-world applications. I've not written big ones, but a
number of non-trivial, non-small ones, and as I've been able to root
out C/C++ tendencies, Lisp becomes a very nice language to work in.
"Nice" meaning a couple of things to me: it's easy to build, manipulate
and discard complex transient structure, without tripping over
pointers, additional block scopes, class assignment/copy issues. I
also find Lisp seems to naturally select for a modular design, though
as always, it's easy to make a mess. Lisp is also very extensive, so
you can often find high level functions to help keep a chunk of
algorithm simpler -- or at least more understandable.

As far as finding out, I'd suggest writing a few programs and see how
you find it. GUI programming is wonderfully straightforward in
CLIM/CAPI. If you've ever done any MFC GUI programming, you'll stand
up and cheer when you see how Lisp does it.

Gregm

Marco Antoniotti

Aug 20, 2001, 10:05:07 AM

cubic...@mailandnews.com (Software Scavenger) writes:

Or you could implement the sort of "limited" lazy evaluation for lists
(and/or sequences). Essentially you implement the 'streams' described
in SICP.
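
A minimal sketch of those streams (untested; CONS-STREAM must be a macro
so that the tail is delayed, and the delay memoises -- capture of the
FORCED/VALUE names is ignored here):

(defmacro cons-stream (head tail)
  `(cons ,head
         (let ((forced nil) (value nil))
           (lambda ()
             (unless forced
               (setf value ,tail forced t))
             value))))

(defun stream-car (s) (car s))
(defun stream-cdr (s) (funcall (cdr s)))

;; The classic infinite example:
(defun integers-from (n)
  (cons-stream n (integers-from (1+ n))))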

> It seems to me that a lot of functional languages are very good at
> this kind of thing. But I wonder how well they do at real-world
> problems, and how I would find out.

All in all, lazy evaluation *is* very nice, as is type inference. My
honest opinion is that languages like *ML and Haskell (and all the
"functional" ones) simply made the mistake of forgoing S-expressions as
their basic building block. All in all I care less for lazy
evaluation than for type inference. My perfect language would be

CL + type inference

(meaning: with type inference mandated by the standard).

Erik Haugan

Aug 20, 2001, 11:19:44 AM
* Juliusz Chroboczek <j...@pps.jussieu.fr>

> (defun split (x list predicate)
>   (let ((lt '()) (gt '()))
>     (loop for elt in list
>           do (if (funcall predicate elt x)
>                  (push elt lt)
>                  (push elt gt)))
>     (values lt gt)))
>
> (defun qsort (list)
>   (if (null list) '()
>       (multiple-value-bind (lt gt)
>           (split (car list) (cdr list) #'<=)
>         (nconc (qsort lt) (list (car list)) (qsort gt)))))
>
> Unlike the Haskell code, this scans the input list only once for each
> recursive step (although a Sufficiently Smart Compiler could, in
> principle... never mind).

And when sorting huge amounts of data, you don't want to cons up new lists
for every recursion. In Lisp you can easily make split nonconsing
(destructive); from what I know of Haskell, that is not possible, or at
least not in the spirit of the language.

(defun nsplit (x list test)
  (let ((true-list nil)
        (false-list nil))
    (loop with head
          while list
          do (setf head list)
             (setf list (cdr list))
             (setf (cdr head) nil)
             (cond
               ((funcall test (car head) x)
                (setf (cdr head) true-list)
                (setf true-list head))
               (t
                (setf (cdr head) false-list)
                (setf false-list head))))
    (values true-list false-list)))

Erik

Lieven Marchand

Aug 21, 2001, 4:40:03 AM
cubic...@mailandnews.com (Software Scavenger) writes:

> To do the same thing in Lisp, I might create a function named primes
> which would take a sink function as its argument. It should invoke
> the sink function with each prime, and should continue indefinitely
> until the sink function no longer wants any. (Maybe the sink function
> should return t or nil to indicate whether it wants more.) However,
> that doesn't seem as elegant as the lazy functional way, because it
> implies some kind of explicit state used by the sink function between
> invocations to determine whether it has more work to do, and an
> explicit list of filter primes or filter functions maintained by the
> primes function.

It was done 20 years ago in Lisp. Look at
http://series.sourceforge.net. It even has a lot of macrology that
will optimize the functional series statements to loops if
possible. It's an interesting example of the power of CL, but it's a
bit of a solution in search of a problem, IMHO.
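
For instance, the filtered comprehension from upthread comes out roughly
as (from memory, so the details may be off):

(collect (map-fn t #'1+ (choose-if (lambda (x) (> x 27)) (scan xs))))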

--
Whenever I use Emacs, I get the impression that loud organ music
should issue forth from my computer.

Juliusz Chroboczek

Aug 21, 2001, 1:11:37 PM
Software Scavenger:

SS> I've just started learning functional programming, mostly over the
SS> weekend, but one big difference, it seems to me, is that the above
SS> Lisp code requires memory for xs, while a lazy functional language
SS> doesn't. For example, xs could be an infinite sequence of numbers,
SS> calculated as needed.

There are two aspects of laziness: laziness of variable binding and
laziness of data structures. Haskell is lazy in both directions
(pattern-matching is eager, though), but the two are in principle
independent.

At the risk of showing lack of vision, I'd say that you just cannot
make variable binding lazy and still get a language that feels even
remotely like Lisp. (Steele makes a similar point about backtracking
in his HOPL paper.)

On the other hand, implementing lazy data structures in Lisp is easy,
and often useful. Most people have done that, although they sometimes
don't know it: the common technique of ``memoisation'' typically uses
lazy data structures. There's also the popular Series package, which
other posters have mentioned (nice design, but somewhat overkill in my
opinion). I also seem to recall an old paper called ``Cons should not
cons its arguments.'' I don't have my bibliography handy, perhaps
someone else can provide more details?
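
Memoisation in its simplest form is just a lazily filled cache; an
untested sketch:

(defun memoize (fn)
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      ;; Compute each argument list once, then replay the cached value.
      (multiple-value-bind (value present-p) (gethash args cache)
        (if present-p
            value
            (setf (gethash args cache) (apply fn args)))))))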

(This is not specific to Lisp, by the way: you'd be surprised at the
number of lazy data structures in the X server at which I'm typing
right now.)

The above is not meant in any way to disparage the undeniable
importance of the work of the functional languages community: they
have significantly improved our understanding of the issues linked to
evaluation strategy, and I feel that I wouldn't be able to confidently
manipulate laziness in eager languages without knowledge of Haskell
and friends.

Juliusz

Francois-Rene Rideau

Aug 22, 2001, 6:27:46 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:
> I also seem to recall an old paper called ``Cons should not
> cons its arguments.'' I don't have my bibliography handy, perhaps
> someone else can provide more details?

"CONS Should not CONS its Arguments, or, a Lazy Alloc is a Smart Alloc"
http://linux.rice.edu/~rahul/hbaker/LazyAlloc.html
is a paper by Henry G. Baker. Henry Baker's page on the Internet
disappeared when ftp.netcom.com was discontinued, and hbaker
seems not to have republished it anywhere since (does he still exist?),
but I had a mirror of it, and Rahul Jain put it online.
So you can browse it at
http://linux.rice.edu/~rahul/hbaker/
and you can download a compressed archive of it all at
ftp://Samaris.tunes.org/pub/food/papers/people/Henry.Baker/hbaker.tar.bz2

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
Great hackers don't die... they just get their homepage mirrored.

Ray Blaak

Aug 23, 2001, 1:21:04 PM
Francois-Rene Rideau <fare+...@tunes.org> writes:
> "CONS Should not CONS its Arguments, or, a Lazy Alloc is a Smart Alloc"
> http://linux.rice.edu/~rahul/hbaker/LazyAlloc.html
> is a paper by Henry G. Baker. Henry Baker's page on the Internet
> disappeared when ftp.netcom.com was discontinued, and hbaker
> seems not to have republished it anywhere since (does he still exist?),
> but I had a mirror of it, and Rahul Jain put it online.
> So you can browse it at
> http://linux.rice.edu/~rahul/hbaker/
> and you can download a compressed archive of it all at
> ftp://Samaris.tunes.org/pub/food/papers/people/Henry.Baker/hbaker.tar.bz2

You can also try:

http://www.pipeline.com/~hbaker1/

This seems to be his place, since it lists his email there as
hba...@pipeline.com

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@infomatch.com The Rhythm has my soul.

Johan Kullstam

Aug 25, 2001, 11:49:10 AM
cubic...@mailandnews.com (Software Scavenger) writes:

> In comparing Common Lisp with functional languages such as Haskell, I
> saw a simple qsort implementation in Haskell, and wanted to do the
> same thing in CL, to see if I could make it equally short and simple.
> It was something like this:
>
> qsort [] = []
> qsort (x:xs) = qsort lt ++ [x] ++ qsort ge
>     where lt = [y | y <- xs, y < x]
>           ge = [y | y <- xs, y >= x]
>

> But when I try to do the same thing in CL, such that it would work for
> character strings, vectors, lists, etc., I run into the problem that <
> is not polymorphic. E.g. string< char< etc. What's the best way to
> make a polymorphic version of it, and what's the best way to make this
> qsort equally short and simple in CL as in Haskell?

i would have my sorting routine take the comparison function as an
argument. for example

(defun qsort (seq &optional (cmp #'<))
  ;; insert qsort here
  )

use the function cmp not the hard coded <. thus instead of (< x y)
use (funcall cmp x y). i don't know haskell, but the above strikes me
as far more "functional" in that i am using and passing a _function_
to drive the sorting rule. since lisp does support first class
functions and has lambda for one-off constant functions, this works
very naturally in lisp.
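
for example, filling in the body (an untested sketch; despite the name
seq it handles lists only):

(defun qsort (seq &optional (cmp #'<))
  (if (null seq)
      '()
      (let* ((x (car seq))
             (xs (cdr seq))
             (lt (remove-if-not (lambda (y) (funcall cmp y x)) xs))
             (ge (remove-if (lambda (y) (funcall cmp y x)) xs)))
        (append (qsort lt cmp) (list x) (qsort ge cmp)))))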

the reason i don't like the polymorphic approach is that if you want
to use a different comparison you have to re-cast the type to a new
one with a different kind of <. this strikes me as not only a
roundabout way of doing things, it does violence to the idea of types
for no justifiable reason.

> I also have the same problem with sequences. If the sequence is a
> list I can use car and cdr, but those don't work if the sequence is a
> vector or string.

you can use ELT to access any sequence, list or vector. if you want
efficiency you could dispatch to a set of more specific functions.

> That makes it harder to do the x:xs part of the
> Haskell qsort while making it as generic as the Haskell version.

--
J o h a n K u l l s t a m
[kull...@ne.mediaone.net]
Don't Fear the Penguin!

Ralf Muschall

Aug 26, 2001, 7:07:12 PM
Marco Antoniotti <mar...@cs.nyu.edu> writes:

> [(+ x 1) / x in xs / (> x 27)]

This looks like a bunch of reader macros calling series stuff.
Is this available somewhere?

Ralf

--
GS d->? s:++>+++ a C++++ UL+++ UH++ P++ L++ E+++ W- N++ o-- K- w--- !O M- V-
PS+>++ PE Y+>++ PGP+ !t !5 !X !R !tv b+++ DI+++ D? G+ e++++ h+ r? y?

Enric Plaza

Aug 27, 2001, 10:04:05 AM
in article m2elq0d...@euler.axel.nom, Johan Kullstam at
kull...@ne.mediaone.net wrote on 8/25/01 5:49 PM:

> cubic...@mailandnews.com (Software Scavenger) writes:
>
>> In comparing Common Lisp with functional languages such as Haskell, I
>> saw a simple qsort implementation in Haskell, and wanted to do the
>> same thing in CL, to see if I could make it equally short and simple.
>> It was something like this:
>>
>> qsort [] = []
>> qsort (x:xs) = qsort lt ++ [x] ++ qsort ge
>>     where lt = [y | y <- xs, y < x]
>>           ge = [y | y <- xs, y >= x]
>>
>> But when I try to do the same thing in CL, such that it would work for
>> character strings, vectors, lists, etc., I run into the problem that <
>> is not polymorphic. E.g. string< char< etc. What's the best way to
>> make a polymorphic version of it, and what's the best way to make this
>> qsort equally short and simple in CL as in Haskell?

Polymorphism in CL is achieved with object orientation. You are right that
>= is not polymorphic. One way to get a polymorphic <= is to do it just
_inside_ qsort. That is, qsort calls a generic (polymorphic) function
LEQ defined over the class of sequences. Then you define a method LEQ for
each class of object you may want to compare (each method simply calls the
adequate Lisp function).
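
For instance (a rough sketch; the particular methods are just
illustrations):

(defgeneric leq (x y))

(defmethod leq ((x real) (y real)) (<= x y))
(defmethod leq ((x string) (y string)) (and (string<= x y) t))
(defmethod leq ((x character) (y character)) (char<= x y))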

What I do not like about the Haskell code above is that in theory you are
working on an abstract "sequence", but in fact the way you work with it is
as if all sequences were lists (see the x:xs code above). They are in fact
defining CAR and CDR (or Head and Tail) for _all_ sequences, including
vectors, etc.

-enric
