I also like the package system of Common Lisp. It makes it easy to
keep the code of a moderately large project organized neatly. And
CLOS is extremely powerful for GUI development.
OCaml, SML, Haskell, Clean, Scheme, and Common Lisp all seem wonderful
to me, and it's hard to choose between them. Or maybe there's
something even better than all of them, which I haven't discovered
yet. I would rather learn and use them all, but only have time to
focus on one or two.
Does anyone make extensive use of macros in functional programming
languages? Is it a lot of work to write good macros? For each
application program, I want to use macros to build a special purpose
programming language, to narrowly focus on the highest level
abstractions of that application, so there won't be much code getting
in the way of understanding and working with those abstractions.
If you are coming from the Lisp world, and are looking for a practical
functional language, then I'd recommend looking at OCaml. Here's my
bullet-point comparison; mileage may vary.
o Macros: No language has macros as good as Common Lisp does. This is
just a fact of life. Symbol macros, reader macros, compiler macros,
setfs, defmacro expansion: the designers of Common Lisp worked *hard*
at making every kind of compile-time computation possible, and even
harder at making it coherent.
Still, Camlp4 is really nice. It's not as cool as CL macros, but you
can still do lots of neat stuff with it. There's a bit of a paradigm
shift between Camlp4 and CL macros, though -- Camlp4 thinks of
itself as a grammar-extension system, rather than as a "macro".
There are some really pretty things you can do with it. Ocaml's
stream parsers -- a system for writing recursive descent parsers as
a set of pattern matching functions -- are written as a Camlp4
application.
If you aren't doing really hairy stuff -- things like Screamer,
say -- then Camlp4 is likely to be sufficient for your needs.
o Modules: Common Lisp's package system is okay, but nothing special.
ML, in contrast, has one of the coolest module systems ever designed,
and using it will make your brain reel from the possibilities -- it's
as close to making interchangeable parts feasible for software as
anything I've ever seen.
o Keyword arguments: Ocaml has them, and can check their usage
statically. I used them all the time in Lisp and Dylan, since
they increase readability so dramatically. For my part, it was
the addition of labelled function arguments in Ocaml 3.0 that
convinced me to start using it.
o CLOS: This is a really tough comparison, hard enough that I'm not
sure how to approach it. First, I think CLOS's big conceptual win
is multimethods: all the rest is icing on the cake. There isn't one
thing in Ocaml that does the same thing as multimethods in CLOS.
So I'll describe some common patterns of CLOS usage and describe how
they compare to the equivalent idiomatic bit of Ocaml.
One common pattern of using CLOS was to define an abstract class
(say an abstract syntax class), and then define a set of concrete
subclasses (if-expressions, let-bindings, function-calls, lambdas).
Then, you would define a method on the abstract class, specialized
for each of the concrete subclasses. This pattern is subsumed by
pattern-matching in Ocaml.
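To illustrate, here is a minimal OCaml sketch of that correspondence (the AST type and the `size` function are invented for this example):

```ocaml
(* Hypothetical AST type: each variant plays the role of a concrete
   CLOS subclass of the abstract syntax class. *)
type expr =
  | If of expr * expr * expr
  | Let of string * expr * expr
  | Call of expr * expr list
  | Lambda of string list * expr
  | Var of string

(* One function with a branch per variant replaces one method
   specialized on each concrete subclass. *)
let rec size = function
  | If (c, t, e) -> 1 + size c + size t + size e
  | Let (_, e, body) -> 1 + size e + size body
  | Call (f, args) -> 1 + size f + List.fold_left (fun n a -> n + size a) 0 args
  | Lambda (_, body) -> 1 + size body
  | Var _ -> 1
```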
A second common pattern is to write a function, and then let it
parameterize its behavior across types by making it call some
generic functions. So for each type that you want the function to
work on, you define the appropriate methods on the generic
functions. (Eg, you can write a sort function that uses a generic
compare routine.) The canonical way of handling this is to use the
ML module system -- implement the function in a functor (a
parameterized module) body, and then instantiate the functor with
different argument modules. Haskell supports this style directly with
its type classes, but Ocaml doesn't have that.
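For instance, the sort-via-functor idiom might look like this (a sketch; the `ORDERED` signature and `MakeSorter` are names I made up for illustration):

```ocaml
(* The comparison is supplied as a module parameter instead of via
   generic functions. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module MakeSorter (Ord : ORDERED) = struct
  let sort lst = List.sort Ord.compare lst
end

(* Instantiate the functor with a concrete ordering. *)
module Int = struct
  type t = int
  let compare = compare
end

module IntSorter = MakeSorter (Int)
```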
The third use of multimethods is to vary behavior dynamically
based on the types of the arguments, eg in a GUI application, where
you want each widget to do something different. Here, you typically
use either higher-order functions or Ocaml objects. (I prefer hofs
for simple uses and objects for complex uses, but this is a personal
quirk.)
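A rough sketch of both options in OCaml (the widget classes and `render` functions are purely illustrative):

```ocaml
(* Object version: dispatch happens through a virtual method. *)
class virtual widget = object
  method virtual draw : string
end

class button label = object
  inherit widget
  method draw = "button[" ^ label ^ "]"
end

class checkbox checked = object
  inherit widget
  method draw = if checked then "[x]" else "[ ]"
end

let render (ws : widget list) =
  String.concat " " (List.map (fun w -> w#draw) ws)

(* Higher-order-function version: a widget is just its draw closure. *)
let render' draws = String.concat " " (List.map (fun f -> f ()) draws)
```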
o The MOP: Ocaml intentionally has nothing like it. This is because
Ocaml is statically typed, and a meta-object protocol makes that
impossible. Likewise with EVAL. It's a tradeoff -- since I rarely
made use of the MOP, I am better off with a statically typed
language. But if your normal code makes extensive use of reflection
or the metaobject protocol, then you will find Ocaml a big change.
Neel
I don't think that CL's packages and OCaml's modules are solutions to the
same problems. Packages are meant only to resolve name conflicts.
Probably the different defsystems are more what one would call a
module-system in CL.
> o CLOS: This is a really tough comparison, hard enough that I'm not
> sure how to approach it. First, I think CLOS's big conceptual win
> is multimethods: all the rest is icing on the cake. There isn't one
> thing in Ocaml that does the same thing as multimethods in CLOS.
> So I'll describe some common patterns of CLOS usage and describe how
> they compare to the equivalent idiomatic bit of Ocaml.
The object system of OCaml was one of the main reasons holding me back
from using it. It seemed so incredibly less flexible compared to CLOS.
> One common pattern of using CLOS was to define an abstract class
> (say an abstract syntax class), and then define a set of concrete
> subclasses (if-expressions, let-bindings, function-calls, lambdas).
> Then, you would define a method on the abstract class, specialized
> for each of the concrete subclasses. This pattern is subsumed by
> pattern-matching in Ocaml.
You mean vanilla polymorphism?
> A second common pattern is to write a function, and then let it
> parameterize its behavior across types by making it call some
> generic functions. So for each type that you want the function to
> work on, you define the appropriate methods on the generic
> functions. (Eg, you can write a sort function that uses a generic
> compare routine.) The canonical way of handling this is to use the
> ML module system -- implement the function in a functor (a
> parameterized module) body, and then instantiate the functor with
> different argument modules. Haskell supports this style directly with
> its type classes, but Ocaml doesn't have that.
In CL, a function like sort would use (and actually does use) higher-order
functions to get parameters like a comparison function.
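The same higher-order style works in OCaml's standard library, for comparison:

```ocaml
(* List.sort takes the comparison as an ordinary function argument,
   just as CL's sort takes a predicate. *)
let ascending = List.sort compare [3; 1; 2]
let descending = List.sort (fun a b -> compare b a) [3; 1; 2]
```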
> The third use of multimethods is to vary behavior dynamically
> based on the types of the arguments, eg in a GUI application, where
> you want each widget to do something different. Here, you typically
> use either higher-order functions or Ocaml objects. (I prefer hofs
> for simple uses and objects for complex uses, but this is a personal
> quirk.)
To emulate multimethods in OCaml you would implement the visitor-pattern as
in all OOP languages which do not support multimethods.
> o The MOP: Ocaml intentionally has nothing like it. This is because
> Ocaml is statically typed, and a meta-object protocol makes that
> impossible. Likewise with EVAL. It's a tradeoff -- since I rarely
> made use of the MOP, I am better off with a statically typed
> language. But if your normal code makes extensive use of reflection
> or the metaobject protocol, then you will find Ocaml a big change.
I made extensive use of the MOP and would not want to do without it. IMHO
object systems are all about flexible and dynamic behaviour; they begin to
be totally worthless if they are static.
A thing you missed is the common pattern of "Mixin-Style" OOP. CLOS, as a
multiple-inheritance language, supports modular cross-cutting aspects. The
lack of such support is one of the *major* drawbacks of OCaml's (and
Java's and C#'s...) object-systems.
Another thing I would miss are method-combinations (e.g. :after, :around,
:before methods). In CL you can even write your own method-combinations.
ciao,
Jochen
Can you explain what you mean by "modular cross-cutting aspects"?
I am fairly sure that OCaml does have multiple-inheritance though.
--
Aaron Denney
-><-
> On Sat, 02 Mar 2002 18:03:13 +0100, Jochen Schmidt <j...@dataheaven.de>
> wrote:
>> A thing you missed is the common pattern of "Mixin-Style" OOP. CLOS as a
>> multiple-inheritance language supports modular cross-cutting aspects.
>> This is one of the *major* drawbacks in OCaml's (and Java's and C#'s...)
>> object-systems.
>
> Can you explain what you mean by "modular cross-cutting aspects"?
The term comes from aspect oriented programming. An "aspect" is some kind
of behaviour. Some aspects are interesting for many different classes in a
class hierarchy. Examples are logging capabilities or transaction handling.
If you have multiple inheritance then you can implement these aspects in a
so-called "mixin-class" (also called an abstract subclass). Later you can
mix this aspect into all classes that need that behaviour.
An example would be a class "Stack" with methods like push, pop and count.
You could then inherit an ArrayStack and a ListStack from Stack and
implement one with an array plus fill pointer and the other with a list.
Later we want to add the behaviour that the stack checks on push whether
the number of elements in it has reached a certain limit and, if so,
throws an exception.
We would implement a StackSizeLimitMixin as follows (pseudocode):

class StackSizeLimitMixin {
  Integer limit = 4096;

  push(Item item) {
    if (self.count() >= limit)
      error("Stack is full");
    else
      super.push(item);
  }
}
In CLOS we could do this even more nicely by defining a :before method.
(defclass stack-size-limit-mixin ()
  ((limit :initform 4096 :accessor stack-size-limit)))

(defmethod stack-push :before ((stack stack-size-limit-mixin) item)
  (when (>= (stack-count stack) (stack-size-limit stack))
    (error "Stack is full")))
Now we can implement a LimitedArrayStack and a LimitedListStack by mixing
in the mixin-class:
class LimitedArrayStack : StackSizeLimitMixin, ArrayStack {}
class LimitedListStack : StackSizeLimitMixin, ListStack {}
So we were able to reuse this behaviour successfully.
I think in OCaml the mixin could probably be defined as follows (but I
don't know OCaml well enough):
class stack_size_limit_mixin =
  object (self)
    val mutable limit = 4096
    method push item =
      if self#count >= limit then raise Stack_full
      else super#push item
  end;;
There are certainly syntax errors in this code, but what interests me most
is whether "super" needs to be bound by an "inherit someclass as super" or
whether it works this way too. If this is not possible, then mixins are
not really easy to create. C++ has a similar problem, but it can be solved
by using templates there. I have not read much about OCaml's parameterized
classes and whether they would help.
> I am fairly sure that OCaml does have multiple-inheritance though.
You are right - sorry for that. I'm not sure how I came to the conclusion
that OCaml does not support MI (probably because I may have read only the
chapter "inheritance" and missed the chapter "multiple inheritance" - I
don't know). It is good to see that there are a few language designers out
there who do not think in as closed-minded a way as the guys who designed
Java or C#.
What is super bound to here? There is no parent class that I see.
> now we can implement a LimitedArrayStack and a LimitedListStack by mixing
> in the mixin-class:
>
> class LimitedArrayStack : StackSizeLimitMixin, ArrayStack {}
> class LimitedListStack : StackSizeLimitMixin, ListStack {}
>
> So we were able to reuse this behaviour successfully.
But ... how should StackSizeLimitMixin find the ArrayStack or ListStack?
How do LimitedStacks pick the StackSizeLimitMixin push method
ahead of the base Stack push?
> I think in OCaml the mixin could probably be defined as follows: (but I
> don't know OCaml good enough)
>
> class stack_size_limit_mixin =
>   object (self)
>     val mutable limit = 4096
>     method push item =
>       if self#count >= limit then raise Stack_full
>       else super#push item
>   end;;
>
> There are certainly syntax errors in this code but what interests me most
> is if "super" needs to be bound by an "inherit someclass as super" or if it
> works this way too. If this is not possible then mixins are not really easy
> to create. C++ has a similar problem but it can be solved by using
> templates there. I have not read much about OCamls parameterized classes
> and if they would help.
Aha.
I think something like the following would do what you want.
class virtual ['a] stack =
  object
    method virtual count : int
    method virtual push : 'a -> unit
    method virtual pop : 'a
  end;;

class ['a] list_stack =
  object (self)
    inherit ['a] stack
    val mutable head = []
    method count = List.length head
    method push x = head <- x :: head
    method pop = let x = List.hd head in head <- List.tl head; x
  end;;

class ['a] array_stack =
  object (self)
    inherit ['a] stack
    (*
      ...
    *)
  end;;

class ['b stack as 'a] limited_stack (init_limit) =
  object (self)
    inherit 'a as super
    val limit = init_limit
    method push x =
      if self#count >= limit then raise Stack_full
      else super#push x
  end;;
It doesn't actually work, though. Perhaps someone who really knows
the language could take a shot.
Maybe functorizing via the module system is what should really be done.
It seems like the class system alone should be enough to do this, though.
> On Sun, 03 Mar 2002 05:54:57 +0100, Jochen Schmidt <j...@dataheaven.de>
> wrote:
>> We would implement a StackSizeLimitMixin as follows (pseudocode):
>>
>> class StackSizeLimitMixin {
>>   Integer limit = 4096;
>>
>>   push(Item item) {
>>     if (self.count() >= limit)
>>       error("Stack is full");
>>     else
>>       super.push(item);
>>   }
>> }
>
> what is super bound to here? There is no parent class that I see.
That is why mixins are often called "Abstract subclasses". The superclass
depends on where you mix the mixin in. CLOS calculates for each class a
class-precedence-list by a topological sort of the direct superclasses of
each class.
LimitedArrayStack of our example would have the following
class-precedence-list:
LimitedArrayStack->StackSizeLimitMixin->ArrayStack->Stack->StandardObject->T
So, as you can see, the superclass of StackSizeLimitMixin in a
LimitedArrayStack is ArrayStack, and super.push() therefore calls
ArrayStack.push().
Yes - this was what I meant by using parameterized classes to solve the
problem with the anonymous superclass. It looks a bit like how one would
do it in C++ - it's ugly, but it would work...
> Maybe functorizing via the module system is what should really be done.
> It seems like the class system alone should be enough to do this
> though.
I do not know the module system or functors well enough to imagine how to
do something like the above with them. Probably someone more knowledgeable
than me could show how one would do it.
[snip stack code]
> Maybe functorizing via the module system is what should really be done.
> It seems like the class system alone should be enough to do this though.
Being an advocate of module systems as opposed to objects, this would
indeed be a nicer solution in my opinion. First we define a signature for
(imperative) stacks:
module type STACK = sig
  type 'el stack
  val empty : unit -> 'el stack
  val count : 'el stack -> int
  val push : 'el stack -> 'el -> unit
  val pop : 'el stack -> 'el
end
Then we give an example implementation of such a stack using lists:
module ListStack : STACK = struct
  type 'el stack = 'el list ref
  let empty () = ref []
  let count s = List.length !s
  let push s el = s := el :: !s
  let pop s =
    match !s with
    | h :: t -> s := t; h
    | [] -> failwith "ListStack.pop: stack empty"
end
Now we implement a functor (= higher-order module) that takes a stack
module as input and returns a stack module again, the difference being
that pushing things on the stack is only allowed up to a limit:
module MakeLimitedStack (Stack : STACK) : STACK = struct
  include Stack
  let limit = 4096
  let push s el =
    if count s >= limit then failwith "LimitedStack.push: limit reached"
    else Stack.push s el
end
Now we can create limited stacks of any sort by just applying this
functor to some module that conforms to the stack signature:
module LimitedListStack = MakeLimitedStack (ListStack)
As you can see, this is very convenient (and safe!) to do. Some people
might even want to let the functor take two arguments, one for a
specification that contains e.g. the limit or other parameters that
may be relevant, and the stack module, of course. This way you can
construct very complex modules from parts by just assembling them in
functor applications.
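The two-argument variant mentioned above might be sketched like this (I restate a minimal STACK signature and ListStack so the fragment stands alone; the SPEC signature and the module names are my own invention):

```ocaml
module type STACK = sig
  type 'el stack
  val empty : unit -> 'el stack
  val count : 'el stack -> int
  val push : 'el stack -> 'el -> unit
  val pop : 'el stack -> 'el
end

module ListStack : STACK = struct
  type 'el stack = 'el list ref
  let empty () = ref []
  let count s = List.length !s
  let push s el = s := el :: !s
  let pop s =
    match !s with
    | h :: t -> s := t; h
    | [] -> failwith "ListStack.pop: stack empty"
end

(* The specification module carries the parameters (here just the limit). *)
module type SPEC = sig val limit : int end

module MakeLimitedStack2 (Spec : SPEC) (Stack : STACK) : STACK = struct
  include Stack
  let push s el =
    if count s >= Spec.limit then failwith "LimitedStack.push: limit reached"
    else Stack.push s el
end

(* A stack that refuses to grow beyond two elements. *)
module TinyStack = MakeLimitedStack2 (struct let limit = 2 end) (ListStack)
```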
Regards,
Markus Mottl
--
Markus Mottl mar...@oefai.at
Austrian Research Institute
for Artificial Intelligence http://www.oefai.at/~markus
> Aaron Denney <wno...@ugcs.caltech.edu> wrote:
>> I think something like the following would do what you want.
>
> [snip stack code]
>
>> Maybe functorizing via the module system is what should really be done.
>> It seems like the class system alone should be enough to do this though.
>
> Being an advocate of module systems as opposed to objects, this would
> indeed be a nicer solution in my opinion. First we define a signature for
> (imperative) stacks:
[snipped module example]
> As you can see, this is very convenient (and safe!) to do. Some people
> might even want to let the functor take two arguments, one for a
> specification that contains e.g. the limit or other parameters that
> may be relevant, and the stack module, of course. This way you can
> construct very complex modules from parts by just assembling them in
> functor applications.
It is definitely a nice example of the usefulness of higher-order modules.
I often heard about them but never found the time to try them out.
As far as I can tell from this example, they are nothing really different
from mixins in other systems (like CLOS). The module is some kind of class
that implements a particular interface (signature). You can inherit a
module from another module by applying a functor to it. To me the
difference seems to be mainly syntax and names. Adding a parameter for
the limit would be equally easy in the mixin system. The longer I look at
it, the more it looks like a renamed implementation of a mixin-oriented
OOP system.
You emphasise particularly that this module system is "safe" - I trust
your words here, of course - but I do not agree if you imply that the
mixin mechanism used in CLOS is unsafe.
Modules are indeed quite different from classes, both from a theoretical
and a practical point of view. It's quite difficult to see the difference
at first sight if one comes from an OO-point of view, as I also did a
couple of years ago, because one got indoctrinated by the unjustified
"everything is an object"-mentality.
It's usually a mistake trying to map terminology associated with one
approach to the other one (this is not just valid for debates on modules
vs. objects alone, of course), e.g.:
> You can inherit a module from another module by applying a functor
> to it.
There is no relation between inheritance and functors.
Functors allow you to parameterize one module by another, but this does
not mean in any way that the result ("output") of a functor application
needs to be a module that has the same signature as the input module.
It also doesn't mean that the resulting module shares its implementation
with the input module(s). In fact, if the input module hides
implementation details of types (i.e. only provides abstract types), the
functor body doesn't have any means to manipulate those types other than
by the functions that are present in the signature of the input modules.
Furthermore, both the arguments and the result of functors can be
functors themselves again, allowing one to move to even more abstract
levels of specification.
> To me the difference seems to be mainly syntax and names.
This is the impression most OO-people who are new to (higher-order)
modules have. It's wrong: there are significant semantic differences.
> You emphasise particularly that this module system is "safe" - I trust
> your words here, of course - but I do not agree if you imply that the
> mixin mechanism used in CLOS is unsafe.
I don't think that CLOS provides static verification that the way its
entities (here: classes and objects) interact is consistent. All conflicts
between specifications in ML-like module- and type systems can be detected
at compile time, which is tremendously useful for large systems.
In any case, my main argument here is not that "X is better than Y"
but that "X is not Y". The latter is a necessary prerequisite before
one can give answers to the first.
Classes and modules do both have their advantages and disadvantages from
a practical point of view: it's easier to provide additional values that
share similar functionality in OO-systems whereas it is easier to add
additional functionality for a certain set of values when using modules.
However, it is consistent with the experience of many people who have
learnt both approaches that writing more abstract code makes classes
less and modules more useful, namely when it becomes more important how
values behave rather than how they are represented.
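This tradeoff can be sketched with an ordinary variant type (a toy example of my own): adding a new function over existing values is one new definition, while adding a new kind of value would force edits to every existing match.

```ocaml
type shape = Circle of float | Square of float

(* Adding a new operation is easy: just write another function. *)
let area = function
  | Circle r -> 3.14159 *. r *. r
  | Square s -> s *. s

let perimeter = function
  | Circle r -> 2.0 *. 3.14159 *. r
  | Square s -> 4.0 *. s

(* Adding a Triangle case, by contrast, would require changing
   every one of these pattern matches. *)
```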
> I think something like the following would do what you want.
>
[...]
> class ['b stack as 'a] limited_stack (init_limit) =
>   object (self)
>     inherit 'a as super
>     val limit = init_limit
>     method push x =
>       if self#count >= limit then raise Stack_full
>       else super#push x
>   end;;
Markus Mottl showed us how to do it with modules, but in fact you can
mix modules and classes:
exception Stack_full
module type OBJECTSTACK =
sig
  class ['a] c :
  object
    method count : int
    method push : 'a -> unit
    method pop : 'a
  end
end

module MakeLimited (M : OBJECTSTACK) =
struct
  class ['a] c init_limit =
    object (self)
      inherit ['a] M.c as super
      val limit = init_limit
      method push item =
        if self#count >= limit
        then raise Stack_full
        else super#push item
    end
end
then you can do things like:

module ListLimitedStack = MakeLimited (
  struct
    class ['a] c = ['a] list_stack
  end)

class ['a] list_limited_stack = ['a] ListLimitedStack.c
--
Rémi Vanicat
van...@labri.u-bordeaux.fr
http://dept-info.labri.u-bordeaux.fr/~vanicat
Ugh. I would much rather directly specify it. To me, this mix-in method
seems like a workaround for not having parametrized classes or modules.
The parametrized solutions show exactly what is going on.
> LimitedArrayStack of our example would have the following
> class-precedence-list:
>
> LimitedArrayStack->StackSizeLimitMixin->ArrayStack->Stack->StandardObject->T
>
> So as you can see the Superclass of the push() method in StackSizeLimitMixin
> is ArrayStack.push() for a LimitedArrayStack.
So, if you switched the order, you could end up with:
/--> StackSizeLimitMixin --\
LimitedArrayStack-< >--> Stack?
\--> ArrayStack --/
And how would you disambiguate order in more complex cases?
(Oh, and thanks to Markus and Remi for the nice and clear module example.
The OCaml manual is great for reference, but a bit harder to use for
learning.)
I do wish the class system could be a bit more powerful though.
Modules are a perfect match when closed recursion is what you want,
but sometimes open recursion is a much nicer fit for the problem at hand.
And something like Haskell's type classes would be really nifty.
(Though the only open recursion there is from the defaults, which are
the only inheritance, and I do hope multi-parameter type classes make
it into the next standard, or at least will be more widely supported.)
Sure, this would be a clean solution for people who really prefer using
objects.
It is certainly also worth mentioning that even though the module system
of OCaml already has impressive expressiveness, one could still think of
even more powerful module calculi. E.g., Moscow ML supports first-class
and recursive modules. I hope these extensions could make it into OCaml
some time in the future, which would let modules compete even more
strongly against classes/objects...
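(For the curious: OCaml did eventually gain first-class modules, in version 3.12. A minimal sketch of what they look like; `COUNTER` and `make_counter` are invented names:)

```ocaml
module type COUNTER = sig
  val next : unit -> int
end

(* A function that builds and returns a module as an ordinary value. *)
let make_counter start =
  let r = ref start in
  (module struct
    let next () = incr r; !r
  end : COUNTER)

(* Unpack the module value to use it. *)
let demo () =
  let module C = (val make_counter 10) in
  C.next ()
```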
> Jochen Schmidt <j...@dataheaven.de> wrote:
>> As far as I can tell from this example they are nothing really different
>> from mixins in other systems (like CLOS). The module is some kind of
>> class that implements a particular interface (signature).
>
> Modules are indeed quite different from classes, both from a theoretical
> and a practical point of view. It's quite difficult to see the difference
> at first sight if one comes from an OO-point of view, as I also did a
> couple of years ago, because one got indoctrinated by the unjustified
> "everything is an object"-mentality.
Note that I talked about "mixins", not about objects. Mixins allow a
completely different programming style compared to Java-style OOP.
Mixins do not even have to be realized with inheritance!
> It's usually a mistake trying to map terminology associated with one
> approach to the other one (this is not just valid for debates on modules
> vs. objects alone, of course), e.g.:
This is what you actually do too - you map terminology you know from
Java-like systems to CLOS. Even people who have known CL and CLOS for a
significant time do not really understand how different mixins are. They
use CLOS the same way they used Java or Smalltalk.
>> You can inherit a module from another module by applying a functor
>> to it.
>
> There is no relation between inheritance and functors.
The process of inheritance can produce a class or another mixin.
> Functors allow you to parameterize one module by another, but this does
> not mean in any way that the result ("output") of a functor application
> needs to be a module that has the same signature as the input module.
>
> It also doesn't mean that the resulting module shares its implementation
> with the input module(s). In fact, if the input module hides
> implementation details of types (i.e. only provides abstract types), the
> functor body doesn't have any means to manipulate those types other than
> by the functions that are present in the signature of the input modules.
In this sense mixins are also higher-order because you can create a mixin
by mixing different mixins together.
> Furthermore, both the arguments and the result of functors can be
> functors themselves again, allowing one to move to even more abstract
> levels of specification.
>
>> Too me the difference seems mainly be syntax and names.
>
> This is the impression most OO-people have that are new to (higher-order)
> modules. It's wrong: there are significant semantic differences.
>
>> You emphasise particularily that this module system is "safe" - I trust
>> your words here of course - but I do not agree if you imply that the
>> mixin mechanism that is used in CLOS is unsafe.
>
> I don't think that CLOS provides static verification that the way its
> entities (here: classes and objects) interact is consistent. All conflicts
> between specifications in ML-like module- and type systems can be detected
> at compile time, which is tremendously useful for large systems.
You know that there are some incredibly big systems written in CL and CLOS
out there? Even whole operating systems right down to the raw iron (more
than 50 million lines of code). What you proclaimed is a long and heated
point of debate between static and dynamic programming systems.
One could argue, for example, that the whole strictness and the fact that
everything needs to get resolved at compile time get more and more in the
way the larger your system is - that the language is not flexible enough
to adapt to bigger systems. I'm sure you will complain that this is not
the case, but it at least shows that the question is not as easy to decide
as it looked at first.
> In any case, my main argument here is not that "X is better than Y"
> but that "X is not Y". The latter is a necessary prerequisite before
> one can give answers to the first.
You mean "modules are not classes", therefore "modules are better than
classes"? (I cannot imagine you meant that...). But to say it again -
don't mix Java-OO with mixins (pardon the pun ;-) )
> Classes and modules do both have their advantages and disadvantages from
> a practical point of view: it's easier to provide additional values that
> share similar functionality in OO-systems whereas it is easier to add
> additional functionality for a certain set of values when using modules.
>
> However, it is consistent with the experience of many people who had
> learnt both approaches that writing more abstract code makes classes
> less and modules more useful, namely when it becomes more important how
> values behave rather than how they are represented.
Mixins are mostly useful because they make it easy to reuse behaviour.
It seems to be the case that OCaml's classes are unable to be used for
mixin-style programming, but you can get the same thing by using modules.
The fact that OCaml's classes are useless for this particular task does in
no way mean that classes in _all_ systems are unable to support this style.
> On Sun, 03 Mar 2002 08:41:02 +0100, Jochen Schmidt <j...@dataheaven.de>
> wrote:
>> Aaron Denney wrote:
>>
>> > On Sun, 03 Mar 2002 05:54:57 +0100, Jochen Schmidt <j...@dataheaven.de>
>> > wrote:
>> >> We would implement a StackSizeLimitMixin as follows (pseudocode):
>> >>
>> >> class StackSizeLimitMixin {
>> >>   Integer limit = 4096;
>> >>
>> >>   push(Item item) {
>> >>     if (self.count() >= limit)
>> >>       error("Stack is full");
>> >>     else
>> >>       super.push(item);
>> >>   }
>> >> }
>> >
>> > what is super bound to here? There is no parent class that I see.
>>
>> That is why mixins are often called "Abstract subclasses". The superclass
>> depends on where you mix the mixin in. CLOS calculates for each class a
>> class-precedence-list by a topological sort of the direct superclasses of
>> each class.
>
> Ugh. I would much rather directly specify it. To me, this mix-in method
> seems like a workaround for not having parametrized classes or modules.
> The parametrized solutions show exactly what is going on.
You actually _do_ directly specify it. The rules are easy enough.
>> LimitedArrayStack of our example would have the following
>> class-precedence-list:
>>
>> LimitedArrayStack->StackSizeLimitMixin->ArrayStack->Stack->StandardObject->T
>>
>> So as you can see the Superclass of the push() method in
>> StackSizeLimitMixin is ArrayStack.push() for a LimitedArrayStack.
>
> So, if you switched the order, you could end up with:
>
> /--> StackSizeLimitMixin --\
> LimitedArrayStack-< >--> Stack?
> \--> ArrayStack --/
What do you mean? This is not a precedence-list but a DAG. The result
depends on how you define the order of LimitedArrayStack's superclasses in
your diagram. I suppose you meant what would happen if we switched the
order of the superclasses of LimitedArrayStack like this:
class LimitedArrayStack : ArrayStack, StackSizeLimitMixin {}
or in CLOS Syntax
(defclass limited-array-stack (array-stack stack-size-limit-mixin) ())
If you do this, the methods specialized on array-stack are specified to be
more important for limited-array-stack than the ones of
stack-size-limit-mixin. But I do not see your point - do you mean
something like that happens by accident?
> And how would you disambiguate order in more complex cases?
In what way is it ambiguous?
Orthogonality is a virtue, but it's not the only virtue. There is
something to be said for making a lot of powerful concepts -- higher
order functions, objects, modules, type classes -- all available to
the programmer. Just because you can encode one concept in another
does not mean they are all equally convenient to use. I mean, I
wouldn't want to give up higher-order functions just because I had
functors, and vice-versa.
You can sometimes unify concepts -- eg, multimethods let you unify
objects and type classes -- but when you don't know how to do that it
makes sense to offer the full kit to the programmer.
Neel
And you are assuming that I speak about Java - though I have hardly ever
used this language... ;)
>> There is no relation between inheritance and functors.
> The process of inheritance can produce a class or another mixin.
This is a red herring. Functors still don't have anything to do with
inheritance. Your argument does not yield relevant information to counter
my claim. Modules and functors are about abstraction, about factoring out
program dependencies, not about code reuse for the purpose of extending
implementations (even if one can abuse them for this at times).
>> It also doesn't mean that the resulting module shares its implementation
>> with the input module(s). In fact, if the input module hides
>> implementation details of types (i.e. only provides abstract types), the
>> functor body doesn't have any means to manipulate those types other than
>> by the functions that are present in the signature of the input modules.
> In this sense mixins are also higher-order because you can create a mixin
> by mixing different mixins together.
But mixins are still about enriching an implementation rather than about
specification and encapsulation.
>> I don't think that CLOS provides static verification that the way its
>> entities (here: classes and objects) interact is consistent. All conflicts
>> between specifications in ML-like module- and type systems can be detected
>> at compile time, which is tremendously useful for large systems.
> You know that there are some incredibly big systems written in CL and
> CLOS out there? Even whole operating systems right down to the raw iron
> (more than 50 million lines of code). What you proclaimed is a long and
> heated point of debate between static and dynamic programming systems.
Yes, this is a heated point, and I actually don't want to raise it here
(again). Let's leave it with the counter-argument that the existence of
"incredibly big systems" says nothing about the ability of languages to
scale, given the evidence that people have written even larger systems
in less expressive, often even abysmally less expressive languages.
> One could argue for example that the whole strictness and the fact
> that all needs to get resolved at compile time gets more and more in
> the way the larger your system is - that the language is not flexible
> enough to adapt to bigger systems.
The question is: would you be able to provide evidence for this?
> I'm sure you will complain that this is not the case,
I don't have to: it's up to the one who claims something to support
his views. Read my statement again above: I did not even put CLOS on a
comparative scale, I just said that it doesn't have static verification
(it hasn't), that conflicting specifications can be perfectly detected at
compile time with this, and that this is useful for large systems. It's
you who interpreted this as an attack against CLOS. The latter might
(or might not) have other features that make it a good choice for large
scale programming.
>> In any case, my main argument here is not that "X is better than Y"
>> but that "X is not Y". The latter is a necessary prerequisite before
>> one can give answers to the first.
> You mean "modules are not classes" therefore "modules are better than
> classes"? (I cannot imagine you meant that...).
My argument above is precise: I said that this is a _necessary_
prerequisite, not a _sufficient_ one. If you wanted to show that one
approach is better than another but you failed to even demonstrate that
it actually differs, all further arguments would be void. That's why
pointing out the differences is so important.
> But to say it again - don't mix Java-OO with mixins (pardon the pun ;-) )
No need to say this, because I have never done so: neither did I mention
Java at all nor would I use an inferior OO-language as an example when
comparing paradigms.
> Mixins are mostly useful because they make it easy to reuse behaviour.
Hm, I'd have thought that it is more about extension and alteration of
behaviour, but this may depend on your point of view.
>> It seems to be the case that OCaml's classes are unable to be used for
>> mixin-style programming but you can get the same thing by using modules.
>> The fact that OCaml's classes are useless for this particular task does
> in no way mean that classes in _all_ systems are unable to support
> this style.
It's unfair to compare much more complex systems to a sub-feature of
a language. As you have noticed yourself, OCaml modules remedy, to a great
extent, the shortcomings in expressiveness of OCaml classes. There
is no need to introduce extra functionality to the OO-part if other
language features already cover it.
Agreed.
> There is something to be said for making a lot of powerful concepts --
> higher order functions, objects, modules, type classes -- all available
> to the programmer.
Sure. There are always tradeoffs, and sometimes it's better to add
something even if it hurts orthogonality.
> Just because you can encode one concept in another does not mean they
> are all equally convenient to use. I mean, I wouldn't want to give up
> higher-order functions just because I had functors, and vice-versa.
I don't see how functors replace higher-order functions or vice-versa.
They work on completely different levels of the language.
> You can sometimes unify concepts -- eg, multimethods let you unify
> objects and type classes -- but when you don't know how to do that it
> makes sense to offer the full kit to the programmer.
And sometimes it's better to think longer until a clean solution is
found. We live in a complex world... ;)
> Jochen Schmidt <j...@dataheaven.de> wrote:
>> Note that I talked about "mixins" not about objects. Mixins allow a
>> completely different programming style compared to Java-Style OOP.
>> This is what you actually do too - you map terminology you know from
>> Java like systems to CLOS.
>
> And you are assuming that I speak about Java - though I have hardly ever
> used this language... ;)
Hehe - I chose Java as a particular example of a language whose object
system is not designed to make use of mixins. There have not been many new
mainstream languages which incorporated the idea of mixins. Objective-C
supports them with "Categories", Ruby with "mixin modules". My point simply
was that - contrary to my former belief - OCaml supports mixin-style
programming by using higher-order modules.
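To illustrate what I mean (a hedged sketch; STACK, ListStack and the limit of 3 are all invented for this example), a functor can play the role of the StackSizeLimitMixin from earlier in the thread:

```ocaml
module type STACK = sig
  type 'a t
  val empty : 'a t
  val push : 'a -> 'a t -> 'a t
  val size : 'a t -> int
end

(* A plain list-based stack. *)
module ListStack : STACK = struct
  type 'a t = 'a list
  let empty = []
  let push x s = x :: s
  let size = List.length
end

(* The "mixin": a functor that takes any stack and returns an
   enriched one whose push enforces a size limit. *)
module LimitMixin (S : STACK) = struct
  include S
  let limit = 3
  let push x s =
    if S.size s >= limit then failwith "stack overflow"
    else S.push x s
end

module LimitedListStack = LimitMixin (ListStack)
```

The `include S` plus the shadowing definition of `push` is the moral equivalent of a mixin's method override; the functor can be applied to any module matching STACK.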
>>> There is no relation between inheritance and functors.
>
>> The process of inheritance can produce a class or another mixin.
>
> This is a red herring. Functors still don't have anything to do with
> inheritance. Your argument does not yield relevant information to counter
> my claim. Modules and functors are about abstraction, about factoring out
> program dependencies, not about code reuse for the purpose of extending
> implementations (even if one can abuse them for this at times).
Mixins are meant to solve the same problem as far as I understand it. They
enable you to factor out program dependencies as you say.
>>> It also doesn't mean that the resulting module shares its implementation
>>> with the input module(s). In fact, if the input module hides
>>> implementation details of types (i.e. only provides abstract types), the
>>> functor body doesn't have any means to manipulate those types other than
>>> by the functions that are present in the signature of the input modules.
>
>> In this sense mixins are also higher-order because you can create a mixin
>> by mixing different mixins together.
>
> But mixins are still about enriching an implementation rather than about
> specification and encapsulation.
They are not about specification that is true - but they are certainly
about encapsulation. This is particularly true if one uses declarative
method-combinations like :before, :after and :around methods or even
defines a specialized method-combination scheme.
>> You know that there are some incredibly big systems written in CL and
>> CLOS out there? Even whole operating systems right down to the raw iron
>> (more than 50 million lines of code). What you proclaimed is a long and
>> heated point of debate between static and dynamic programming systems.
>
> Yes, this is a heated point, and I actually don't want to raise it here
> (again). Let's leave it with the counter-argument that the existence of
> "incredibly big systems" says nothing about the ability of languages to
> scale, given the evidence that people have written even larger systems
> in less expressive, often even abysmally less expressive languages.
I would not say it says nothing - the so called abysmally less expressive
languages were most of the time static languages too (like C or C++). What
I meant to counter is the often heard argument that you need a static
language for really big systems and that dynamic languages are
"error-prone" or not suited for big systems. It is not really possible to
design a system of that size if there is something _inherently_ wrong with
an approach.
>> One could argue for example that the whole strictness and the fact
>> that all needs to get resolved at compile time gets more and more in
>> the way the larger your system is - that the language is not flexible
>> enough to adapt to bigger systems.
>
> The question is: would you be able to provide evidence for this?
Yes - this is exactly the question. It is the same question that arises
when you claim that static languages particularly help when designing
bigger systems. Neither of us can provide anything better than a few
experiences. I believe you that some features of languages like OCaml
really help when programming with big systems - but since the development
process in dynamic languages is so different the same features would not
really help in their situation. The thing I claim is that there is no
evidence that either of the two approaches is inherently better than the
other. They are only different.
>> I'm sure you will complain that this is not the case,
>
> I don't have to: it's up to the one who claims something to support
> his views. Read my statement again above: I did not even put CLOS on a
> comparative scale, I just said that it doesn't have static verification
> (it hasn't), that conflicting specifications can be perfectly detected at
> compile time with this, and that this is useful for large systems. It's
> you who interpreted this as an attack against CLOS. The latter might
> (or might not) have other features that make it a good choice for large
> scale programming.
I think we both did not really grok what the other meant.
You seem to think I wanted to show that the mixin approach would be in any
way better than the module approach - this is not the case. I was glad to
see that OCaml seems to have an equally expressive approach in comparison
to what I know from CLOS.
>>> In any case, my main argument here is not that "X is better than Y"
>>> but that "X is not Y". The latter is a necessary prerequisite before
>>> one can give answers to the first.
>
>> You mean "modules are not classes" therefore "modules are better than
>> classes"? (I cannot imagine you meant that...).
>
> My argument above is precise: I said that this is a _necessary_
> prerequisite, not a _sufficient_ one. If you wanted to show that one
> approach is better than another but you failed to even demonstrate that
> it actually differs, all further arguments would be void. That's why
> pointing out the differences is so important.
No, it is not - since I did not even see a need to answer the first question
("X is better than Y")! I see no significant difference between the two
approaches; therefore I do not think that one or the other is better.
I think you misunderstood my intention...
>> But to say it again - don't mix Java-OO with mixins (pardon the pun ;-) )
>
> No need to say this, because I have never done so: neither did I mention
> Java at all nor would I use an inferior OO-language as an example when
> comparing paradigms.
I meant Java as a particular example of non-mixin object systems. There are
many languages with classes and inheritance - most of them do not really
support mixins.
>> Mixins are mostly useful because they make it easy to reuse behaviour.
>
> Hm, I'd have thought that it is more about extension and alteration of
> behaviour, but this may depend on your point of view.
Mixins are about making "behaviour" modules that you can use multiple times.
If a particular behaviour appeared only once, there would be no need
to factor it out for reuse. Mixins can be used to extend or alter existing
behaviour or to add completely new behaviours. This is why I saw them as
being very similar to the modules in OCaml.
>> It seems to be the case that OCaml's classes are unable to be used for
>> mixin-style programming but you can get the same thing by using modules.
>> The fact that OCaml's classes are useless for this particular task does
>> in no way mean that classes in _all_ systems are unable to support
>> this style.
>
> It's unfair to compare much more complex systems to a sub-feature of
> a language. As you have noticed yourself, OCaml modules remedy, to a great
> extent, the shortcomings in expressiveness of OCaml classes. There
> is no need to introduce extra functionality to the OO-part if other
> language features already cover it.
I did not claim that OCaml has any shortcoming in that regard! I said
explicitly that you can use modules (combined with classes if one wants)
to get the same thing. I did not claim that the class-mechanism has to be
extended to support this feature on its own - this would not make much
sense.
I still think that we simply misunderstood each other.
Complicated until you learn them.
> > So, if you switched the order, you could end up with:
> >
> > /--> StackSizeLimitMixin --\
> > LimitedArrayStack-< >--> Stack?
> > \--> ArrayStack --/
>
> What do you mean? This is not a precedence-list but a DAG.
Sure, but with MI you do have DAGs.
> The result
> depends on how you define the order of LimitedArrayStack's superclasses in
> your diagram. I suppose you meant what would happen if we switched the
> order of the superclasses of LimitedArrayStack like this:
Exactly.
> class LimitedArrayStack : ArrayStack, StackSizeLimitMixin {}
>
> or in CLOS Syntax
>
> (defclass limited-array-stack (array-stack stack-size-limit-mixin) ())
>
> If you do this the methods specialized on array-stack are specified to be
> more important for limited-array-stack than the ones of
> stack-size-limit-mixin. But I do not see your point - do you mean something
> like that happens by accident?
Yes. In the class a : b, c syntax I can easily see that happening.
I certainly don't expect the semantics of a class to depend on the
order in which I inherit. I'm less sure it would be problematic
with the CLOS syntax. Frankly, I haven't studied CLOS as much
as I would have liked.
> > And how would you disambiguate order in more complex cases?
>
> In what way is it ambiguous?
MI, with two super classes both implementing the same interface, and
a mixin interposing on one.
> On Sun, 03 Mar 2002 23:30:23 +0100, Jochen Schmidt <j...@dataheaven.de>
> wrote:
>> Aaron Denney wrote:
>> > Ugh. I would much rather directly specify it. To me, this mix-in
>> > method seems like a workaround for not having parametrized classes or
>> > modules. The parametrized solutions show exactly what is going on.
>>
>> You actually _do_ directly specify it. The rules are easy enough.
>
> Complicated until you learn them.
It is not more complicated than higher order modules...
>> > So, if you switched the order, you could end up with:
>> >
>> > /--> StackSizeLimitMixin --\
>> > LimitedArrayStack-< >--> Stack?
>> > \--> ArrayStack --/
>>
>> What do you mean? This is not a precedence-list but a DAG.
>
> Sure, but with MI you do have DAGs.
Yes, you specify the class hierarchy as a DAG - but CLOS calculates the
precedence-lists for each class at compile time. So it is impossible to
have an inconsistent class hierarchy at runtime.
>> The result
>> depends on how you define the order of LimitedArrayStack's superclasses
>> in your diagram. I suppose you meant what would happen if we switched the
>> order of the superclasses of LimitedArrayStack like this:
>
> Exactly.
>
>> class LimitedArrayStack : ArrayStack, StackSizeLimitMixin {}
>>
>> or in CLOS Syntax
>>
>> (defclass limited-array-stack (array-stack stack-size-limit-mixin) ())
>>
>> If you do this the methods specialized on array-stack are specified to be
>> more important for limited-array-stack than the ones of
>> stack-size-limit-mixin. But I do not see your point - do you mean
>> something like that happens by accident?
>
> Yes. In the class a : b, c syntax I can easily see that happening.
> I certainly don't expect the semantics of a class to depend on the
> order in which I inherit. I'm less sure it would be problematic
> with the CLOS syntax. Frankly, I haven't studied CLOS as much
> as I would have liked.
The class a : b, c syntax is completely hypothetical. I used it as
pseudo-code for our paren-phobic readers. If you write CLOS code you are
absolutely aware of the fact that the order of the superclasses is
important. Programming in CLOS while applying C++ semantics does not work
very well (as with any other language). Switching the superclass order by
accident does not happen in reality (in my experience).
>> > And how would you disambiguate order in more complex cases?
>>
>> In what way is it ambiguous?
>
> MI, with two super classes both implementing the same interface, and
> a mixin interposing on one.
There is no ambiguity in CLOS - all ambiguities are resolved at
compile time (or better, class-definition time). By using the MOP you can
even change the way CLOS orders the classes by defining your own mechanism
for that.
> For each
> application program, I want to use macros to build a special purpose
> programming language, to narrowly focus on the highest level
> abstractions of that application, so there won't be much code getting
> in the way of understanding and working with those abstractions.
That is a worthy and commendable approach. However, you haven't explained why
you plan on using macros, that is, why you need compile-time computation. The
usual approach, for a functional programmer, would be to define a set of
abstract data types, equipped with appropriate operations. Given the
availability of higher order functions, this is, in most cases, just as
powerful/convenient as using macros.
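As a rough sketch of that usual approach in OCaml (the tiny expression language and all names here are invented): an abstract data type plus a few combinators often yields the domain-specific notation with no compile-time machinery at all.

```ocaml
(* A tiny embedded "language" built from an ADT and combinators
   instead of macros; all names are illustrative. *)
type expr =
  | Num of int
  | Add of expr * expr
  | Mul of expr * expr

(* Combinators acting as the surface syntax of the mini-language;
   +: and *: inherit the precedence of + and * respectively. *)
let num n = Num n
let ( +: ) a b = Add (a, b)
let ( *: ) a b = Mul (a, b)

(* One "interpretation" of the language; others (pretty-printing,
   compilation) could be added without touching the syntax. *)
let rec eval = function
  | Num n -> n
  | Add (a, b) -> eval a + eval b
  | Mul (a, b) -> eval a * eval b
```

With this, `num 2 +: num 3 *: num 4` reads almost like the special-purpose notation one would otherwise build with macros.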
Cheers,
--
François Pottier
Francois...@inria.fr
http://pauillac.inria.fr/~fpottier/
Could you give us - in a nutshell - the exact differences between a CLOS
mixin and a mixin class? (Not that mixin classes are very useful in a
functional context, but it might help us understand the issues.)
Regards,
Joachim
--
This is not an official statement from my employer.
What's your definition of a functor?
(One of the definitions of "functor" that I remember said it's simply a
higher-order function, so you obviously have a different definition.)
Regards
Joachim
--
> Jochen Schmidt wrote:
>> Mixins allow a
>> completely different programming style compared to Java-Style OOP.
>>
>> Mixins do not even need to be realized with inheritance!
>
> Could you give us - in a nutshell - the exact differences between a CLOS
> mixin and a mixin class? (Not that mixin classes are very useful in a
> functional context, but it might help us understand the issues.)
I'm not sure if I understand your question...
In CLOS mixins are realized by using classes and multiple inheritance. The
MI mechanism of CLOS is flexible enough that this is all you need to make
mixins possible. C++, for example, though it has classes and MI, also needs
templates to make mixins at least possible.
Ruby has a special mixin construct called "Mixin modules".
Objective-C has a special mixin construct called "Categories". I did not
check whether Ruby's and Objective-C's mixin constructs are comparable to
mixins in CLOS.
In CLOS the usefulness of mixins is particularly emphasized because of
its rich declarative method-combination facilities. You can define so
called "before", "after" or "around" methods. So if you want to mixin some
testing behaviour before your primary method you create a mixin with a
"before" method that is then automatically called before the primary method.
CLOS even allows you to define your own method-combination protocols
besides the standard ones (after, before, around, primary). Using that
facility you can easily provide a mechanism like Eiffel's "Design by
Contract".
It is especially this method-combination facility that makes mixins so
flexible in CLOS.
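For readers who don't know CLOS, here is a loose OCaml analogue (this is not CLOS, just an illustration with invented names): a "before" and an "after" method behave roughly like function wrappers composed around the primary method, which standard method combination automates for you.

```ocaml
(* Crude sketch of :before/:after combination as plain wrappers. *)
let with_before before f x = before x; f x
let with_after after f x = let r = f x in after r; r

(* A "primary method" and a log to observe the combination order. *)
let log = ref []
let primary x = x * 2

(* "Mix in" logging before and after the primary method. *)
let combined =
  with_before (fun x -> log := ("before " ^ string_of_int x) :: !log)
    (with_after (fun r -> log := ("after " ^ string_of_int r) :: !log)
       primary)
```

Calling `combined 5` runs the before-wrapper, then the primary method, then the after-wrapper; CLOS derives this ordering automatically from the class precedence list instead of requiring explicit composition.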
> I would not say it says nothing - the so called abysmally less
> expressive languages were most of the time static languages too (like
> C or C++).
Not only are they abysmally less expressive (which is a major aspect), but
calling languages that allow (not to say "encourage") unsound programming
"static" surely misses the point.
> What I meant to counter is the often heard argument that you need a
> static language for really big systems and that dynamic languages are
> "error-prone" or not suited for big systems.
I have not said that they are not suited for big systems. There are
other points about these languages that allow them to scale better than
mainstream languages. I still hold my view that languages in general
scale better when more static verification is possible, which is a
consequence of my deep distrust in the abilities of humans (due to
experience including mine, of course).
> It is not really possible to design a system of that size if there is
> something _inherently_ wrong with an approach.
I am afraid it is. People even wrote large systems in assembler.
> I think we both did not really grok what the other meant.
> You seem to think I wanted to show that the mixin approach would be in any
> way better than the module approach - this is not the case.
No, I said this nowhere. I was surprised that you took this as a
competitive comparison although I was just pointing out that module
systems are something completely different, especially because they not
only allow static verification, they _enforce_ it. This requires a high
degree of formalization.
> I still think that we simply misunderstood each other.
The usual Usenet-illness... ;)
> What's your definition of a functor?
> (One of the definitions of "functor" that I remember said it's simply a
> higher-order function, so you obviously have a different definition.)
There are several definitions, including yours. A functor may also be,
for example, a (structure-preserving) morphism between categories in
category theory. In this context it was obvious that functor was meant to
be a higher-order module, i.e. a mapping from some algebraic signature(s)
to another signature, or possibly even a higher-order one.
Clearly Neel has in mind a highly orthogonal language where modules and
records are unified (confused?) and so functors are just functions :-)
More seriously, I'd also like an enhanced module system, but the class
system of OCaml isn't going away. If anything, it will get more powerful
soon (polymorphic methods), and some of those comparisons between classes
and modules will then show that classes can handle more than before. I
still see them as very different, though non-orthogonal, features.
> > You can sometimes unify concepts -- eg, multimethods let you unify
> > objects and type classes -- but when you don't know how to do that it
> > makes sense to offer the full kit to the programmer.
>
> And sometimes it's better to think longer until a clean solution is
> found.
Sometimes the best is the enemy of the good enough. And language rants are
always more fun than endless anti-formality rants crossposted to
comp.object (ducking and running for cover ;)
-- Brian
Maybe some language that builds on pure category theory? ;)
> More seriously, I'd also like an enhanced module system, but the class
> system of OCaml isn't going away. If anything, it will get more powerful
> soon (polymorphic methods) and some of those comparisons between classes
> and modules will then show that classes can handle more than before. I
> still see them as very different, though non-orthogonal, features.
That's ok for me. Unless I get forced to use objects in standard
libraries, I can well live with this non-orthogonal extension.
>> And sometimes it's better to think longer until a clean solution is
>> found.
> Sometimes the best is the enemy of the good enough. And language rants
> are always more fun than endless anti-formality rants crossposted to
> comp.object (ducking and running for cover ;)
I don't suppose Mr. Wissler is reading this thread... ;)
Nothing nearly so clever. Just the observation that a function like
val sort : 'a list -> ('a -> 'a -> bool) -> 'a list
which uses hofs can frequently be replaced with a module with a
signature like:
module type FOO =
functor (Bar : sig type 'a t val (<) : 'a t -> 'a t -> bool end) ->
sig
val sort : 'a Bar.t list -> 'a Bar.t list
end
My practice is to use functors whenever I have more than 2 or 3 higher
order functions I need to parameterize a function with.
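For instance, a hypothetical instantiation of the FOO shape above (Sorter, ORD and IntOrd are invented names; internally it just delegates to List.sort):

```ocaml
(* The parameter signature: an ordered type, as in the FOO example. *)
module type ORD = sig
  type 'a t
  val ( < ) : 'a t -> 'a t -> bool
end

(* A hypothetical functor matching the FOO shape: sort is
   parameterized by the module rather than by a hof argument. *)
module Sorter (Bar : ORD) = struct
  let sort l =
    List.sort
      (fun a b ->
         if Bar.(a < b) then -1 else if Bar.(b < a) then 1 else 0)
      l
end

(* Plain ints, with a phantom parameter to fit the signature. *)
module IntOrd = struct
  type 'a t = int
  let ( < ) = Stdlib.( < )
end

module IntSorter = Sorter (IntOrd)
```

Compared with passing the comparison as a hof on every call, the ordering is fixed once at functor-application time.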
> > > You can sometimes unify concepts -- eg, multimethods let you unify
> > > objects and type classes -- but when you don't know how to do that it
> > > makes sense to offer the full kit to the programmer.
> >
> > And sometimes it's better to think longer until a clean solution is
> > found.
>
> Sometimes the best is the enemy of the good enough.
Also, it's often impossible to find the best without actual experience
in a not-quite-perfect language. Language design is not like finding
a mathematical proof. At that, finding a mathematical proof is not
often like the idealized vision of finding a mathematical proof. :)
Neel
> Also, it's often impossible to find the best without actual experience
> in a not-quite-perfect language. Language design is not like finding
> a mathematical proof. At that, finding a mathematical proof is not
> often like the idealized vision of finding a mathematical proof. :)
Applying the Curry-Howard isomorphism:
Language design is not like finding a mathematical proof system. At
that, finding a mathematical proof system is not often like the
idealized vision of finding a mathematical proof system - otherwise
we would be contented with having just one of them ... 8^)
While I'd agree that hofs are as *powerful* as macros, they are not
nearly as *convenient*. Here's an example illustrating why. Suppose
we have a language with a try/catch exception system, and we want to
extend it with a try/finally statement. So we write a higher-order
function (in Ocaml):
# let finally body cleanup =
let cleaned_up = ref false in
try
let v = body() in
begin
cleaned_up := true;
cleanup();
v
end
with
exc ->
begin
if not !cleaned_up then cleanup();
raise exc
end;;
- : (unit -> 'a) -> (unit -> unit) -> 'a = <fun>
Fair enough so far. Now, let's try using it. Suppose we want to
process some data from a file, but make sure that we close the file
even if an exception is thrown. Using the finally hof, we would need
to write an expression like:
finally
(fun () ->
process file)
(fun () ->
close file)
As you can see, there's a lot of syntactic noise from wrapping
everything in thunks, and having an actual try/finally syntactic
construct would make things a lot easier to read. When I did Scheme or
Dylan programming, my preferred style was to write a bunch of
higher-order functions to provide the functionality, and then write
macros to hide the thunks and make their invocation cleaner and easier
on the eyes.
If I couldn't use the macros, I wouldn't use the combinators either --
aesthetics count!
Neel
> Francois Pottier <fpot...@rivesaltes.inria.fr> wrote:
>> In article <a6789134.02030...@posting.google.com>,
>> Software Scavenger <cubic...@mailandnews.com> wrote:
>>
>> > For each
>> > application program, I want to use macros to build a special purpose
>> > programming language, to narrowly focus on the highest level
>> > abstractions of that application, so there won't be much code getting
>> > in the way of understanding and working with those abstractions.
>>
>> That is a worthy and commendable approach. However, you haven't
>> explained why you plan on using macros, that is, why you need
>> compile-time computation. The usual approach, for a functional
>> programmer, would be to define a set of abstract data types,
>> equipped with appropriate operations. Given the availability of
>> higher order functions, this is, in most cases, just as
>> powerful/convenient as using macros.
>
> While I'd agree that hofs are as *powerful* as macros, they are not
> nearly as *convenient*. Here's an example illustrating why. Suppose
> we have a language with a try/catch exception system, and we want to
> extend it with a try/finally statement. So we write a higher-order
> function (in Ocaml):
I agree. And you outlined cases where even the macro solutions are trivial.
The difference is much bigger if one uses the macro system for non-trivial
tasks. Sometimes there are special constructs in other languages that allow
similar things - but there is nothing like having a general code generation
system.
There's another big issue: If you use HOFs to "assemble" a program from
parts, the assembling gets done at run-time (unless you use partial
evaluation). If you do it with macros, it gets done at macro-expansion
time. In some cases, this can be a big efficiency advantage for macros.
David Feuer
You're right, this is a much nicer definition. The other one is
the first thing that came to mind. Personally, I'm happy I gave an
ugly definition, rather than an incorrect one. :)
> I use something very similar for safe exception cleanup in OCaml:
>
> let safely setup cleanup subject f =
> let x = setup subject in
> let result = try f x with e -> cleanup x; raise e in
> cleanup x;
> result
>
> Here's how I use "safely" for file processing:
>
> safely open_in close_in "myfile.dat" process_file
>
> Maybe this solution isn't as general as what you like, but I found
> it matches a useful pattern in lots of places in my code. [...]
I'll try out your safely function in my own code and see how it goes.
> Well maybe in some cases, the wrapping in thunks isn't necessary if
> you are willing to thread a value through your finalizer. (Is there
> a word for that?).
I think you just used it.
> Finally (heh), you can write a more general "finally clause" in camlp4.
> I gave a try at that, just to learn a little camlp4:
>
> http://www.bagley.org/~doug/ocaml/always/
Nice!
Neel
NK> Software Scavenger <cubic...@mailandnews.com> wrote:
>>
>> I've heard that Camlp4 is almost as good as Common Lisp macros. I
>> haven't actually used it nor learned OCaml yet, but am considering
>> going in that direction, instead of spending all my time on Common
>> Lisp. What I like most about Common Lisp is the way the
>> code-as-data paradigm makes writing sophisticated macros trivial.
>> Do several functional programming languages have macro paradigms
>> almost as powerful as that?
NK> If you are coming from the Lisp world, and are looking for a practical
NK> functional language, then I'd recommend looking at OCaml. Here's my
NK> bullet-point comparison; mileage may vary.
NK> o Macros: No language has macros as good as Common Lisp does. This is
NK> just a fact of life. Symbol macros, reader macros, compiler macros,
NK> setfs, defmacro expansion: the designers of Common Lisp worked *hard*
NK> at making every kind of compile-time computation possible, and even
NK> harder at making it coherent.
Except Common Lisp doesn't address the hygiene issue. Scheme does.
--
Cheers =8-} Mike
Peace, international understanding, and all that blabla
> Except Common Lisp doesn't address the hygiene issue. Scheme does.
And loses much of the convenience and power of macros while doing so...
I'd agree about the standard R5RS macro system, but syntax-case (built
into Chez Scheme and DrScheme, and also available in SLIB) offers
all the capability of defmacro but is also hygienic. For details,
see:
http://www.cs.indiana.edu/chezscheme/syntax-case/
It's a solid improvement over defmacro. However, Common Lisp has
many other macrological features, such as reader and symbol macros,
that Schemes normally don't offer.
Neel
> Jochen Schmidt <j...@dataheaven.de> wrote:
>> Michael Sperber [Mr. Preprocessor] wrote:
>>
>> > Except Common Lisp doesn't address the hygiene issue. Scheme does.
>>
>> And loses much of the convenience and power of macros while doing so...
>
> I'd agree about the standard R5RS macro system, but syntax-case (built
> into Chez Scheme and DRScheme, and is also available in SLIB) offers
> all the capability of defmacro but is also hygienic. For details,
> see:
>
> http://www.cs.indiana.edu/chezscheme/syntax-case/
Nothing speaks against adding a non-standard macro system to Common
Lisp too - it's just that there was never a real need for a hygienic
macro system in Common Lisp. Many argue that Scheme, being a Lisp-1
without any standard package system, has a much greater need for a
_hygienic_ macro facility.
> It's a solid improvement over defmacro. However, Common Lisp has
> many other macrological features, such as reader and symbol macros,
> that Schemes normally don't offer.
And not to forget compiler macros.
What I wanted to say is that it is nonsense to proclaim an actual lack in
Common Lisp because its macro system is not hygienic. The whole hygienic
macro business has always been a Scheme thing, heavily biased by the
design of Scheme as a language.
But I don't think that this is really on-topic here in c.l.f...
Perhaps, but isn't another solution to trim down the syntactic fat
around a closure definition? For example, what if you could write:
finally
[ process file ]
[ close file ]
where [ ] delimits a closure. If [ ] is taken, pick another pair of
brackets, or throw in some other characters, e.g.:
finally
[| process file |]
[| close file |]
Yes, I think this would make an excellent macro. :)
Macros let you abstract out syntactic noise, and write programs in a
more visually obvious style. For example, Haskell has very lightweight
function definitions, but to support monadic programming they added
the do-notation.
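To make the same point in OCaml terms (my own illustration, not from the
thread): an infix bind for the option type plays roughly the role that
Haskell's do-notation plays, flattening nested matches into a linear
pipeline. The `lookup_both` helper is a made-up example.

```ocaml
(* A hypothetical illustration: an infix bind for the option type
   flattens nested pattern matches, much as do-notation does. *)
let ( >>= ) opt f =
  match opt with
  | Some x -> f x
  | None -> None

(* Without the infix, each lookup would need its own nested match. *)
let lookup_both tbl k1 k2 =
  List.assoc_opt k1 tbl >>= fun v1 ->
  List.assoc_opt k2 tbl >>= fun v2 ->
  Some (v1 + v2)
```

The syntactic noise of the two matches disappears; only the data flow
remains visible.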
Here's another example of why syntactic flexibility is good: this
time, I'll use ML infixes to illustrate. I'm writing a string format
(ala C's printf) library in Ocaml, based on Olivier Danvy's
"Functional Unparsing" paper. So I can write a format like this:
# let x = format(str $ list int $ lit " ha ha ha.");;
- : string -> int list -> string = <fun>
# x "foo" [1; 2; 3];;
- : string = "foo [1; 2; 3] ha ha ha."
Here I use the flexibility infixes afford to make the format
constructor's structure (the 'str $ list int $ lit " ha ha ha."'
expression) resemble the output it generates.
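For readers who haven't seen Danvy's paper, the core trick can be sketched
in a few lines. This is my simplified reconstruction, not Danvy's or the
poster's actual code; each directive is a CPS combinator over an
accumulator string, and the type of a format is computed from its
directives.

```ocaml
(* Simplified sketch of Danvy-style functional unparsing. *)
let lit s k acc = k (acc ^ s)                       (* literal text *)
let str k acc = fun s -> k (acc ^ s)                (* %s directive *)
let int k acc = fun i -> k (acc ^ string_of_int i)  (* %d directive *)
let ( $ ) f g k = f (g k)                           (* compose directives *)
let format fmt = fmt (fun acc -> acc) ""            (* run with empty accumulator *)
```

With these definitions, `format (str $ lit " has " $ int $ lit " items")`
has type `string -> int -> string`: the format expression itself determines
how many arguments the resulting function takes, and of which types.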
Macros permit a programmer to add this kind of domain-specific
notation more generally. A really nice example is the DUIM UI
library for Dylan, which provides a set of macros to define
windows. Here's one that defines a pull-down menu:
define command-table *file-command-table* (*global-command-table*)
menu-item "Open" = open-file,
accelerator: make-keyboard-gesture(#"o", #"control"),
documentation: "Opens an existing file.";
menu-item "Save" = save-file,
accelerator: make-keyboard-gesture(#"s", #"control"),
documentation: "Saves the current file to disk.";
menu-item "Save As..." = save-as-file,
documentation: "Saves the current file with a new name.";
menu-item "Exit" = exit-task,
accelerator: make-keyboard-gesture(#"f4", #"alt"),
documentation: "Exits the application.";
end command-table *file-command-table*;
The macro permits the structure of the code to directly match the
structure of the UI, which makes the UI code much easier to read and
program. It's the same principle as my format function writ large,
both in scale and utility.
Another reason I like macros is because I am a big fan of domain
specific languages, *and* I'm also a big fan of safety guarantees. If
I write an interpreter for a DSL, then I have to redo all the work to
get guarantees like tail-call merging, type-safety, etc. But if I
define my DSL as a transformation into (say) ML, then I get all those
guarantees for free. Any legal program in my DSL is translated into
ML, and so my DSL inherits all the guarantees ML makes without my
having to redo any work. This is an important complexity-management
tool for computer science papers; I see no reason why it shouldn't be
a complexity-management tool for actual programs too. The only
difference is we expand terms into real ML rather than mini-ML. :)
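A toy version of this idea in OCaml (my own sketch, not from the post):
represent DSL terms directly as host-language combinators, so that an
ill-typed DSL program is rejected by OCaml's type checker with no extra
work on the DSL author's part.

```ocaml
(* A toy DSL of thunked expressions: each combinator expands into
   ordinary OCaml, so the host type checker polices the DSL. *)
let lit v = fun () -> v                        (* constant *)
let add a b = fun () -> a () + b ()            (* integer addition *)
let if_ c t e = fun () -> if c () then t () else e ()

(* A well-typed DSL program: *)
let prog = if_ (lit true) (add (lit 1) (lit 2)) (lit 0)
(* if_ (lit 1) ... would be a compile-time type error: int vs. bool. *)
```

The guarantee is inherited for free: there is no way to build a DSL term
that adds a boolean to an integer, because OCaml refuses to type it.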
Neel
Why is that a big issue?
Good compilers for languages with HOFs will do partial evaluation.
--
Fergus Henderson <f...@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne          |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>   |  -- the last words of T. S. Garp.
>Another reason I like macros is because I am a big fan of domain
>specific languages, *and* I'm also a big fan of safety guarantees. If
>I write an interpreter for a DSL, then I have to redo all the work to
>get guarantees like tail-call merging, type-safety, etc. But if I
>define my DSL as a transformation into (say) ML, then I get all those
>guarantees for free.
Not true!
>Any legal program in my DSL is translated into
>ML, and so my DSL inherits all the guarantees ML makes without my
>having to redo any work.
No; the guarantees will apply only to the generated ML code.
You won't necessarily get corresponding guarantees for the DSL
source code.
For example, just because you compile into ML doesn't mean that
your DSL will support proper tail call optimization; that only
holds if tail calls in your DSL get translated into tail calls
in the generated ML, which might not be the case.
As far as safety guarantees go, there's not much difference between
writing an interpreter in ML and writing a compiler to ML.
I certainly don't see any safety advantage for macros over higher-order
functions.
I don't know if this is relevant to the original topic, but macros can
potentially have a safety advantage over higher-order functions: some
errors may be caught at macro-expansion time that would otherwise be
caught at runtime. I don't know how significant this is for real
systems.
David
Could you give an example for such a situation?
Matthias
The following paper introduces a macro-system that guarantees that the
result of a macro-expansion is well-formed and *well-typed*.
Macros as Multi-Stage Computations: Type-Safe, Generative,
Binding Macros in MacroML
Steve Ganz, Amr Sabry, Walid Taha
ICFP'01
http://citeseer.nj.nec.com/467071.html
Re: Can higher-order functions supplant macros?
No. Every language with more syntax than lambda-calculus includes
syntactic forms that are not expressions (or values). Examples of such
second-class forms are: type, module, fixity and other declarations;
binding constructs; statements. Only macros can expand into a
second-class object. The result of a function is limited to an
expression or a value.
Do we really want to manipulate second-class constructs? The
developers of GHC seem to think so. GHC includes a notable number of
CPP macros. The paper
Keith Wansbrough (1999). Macros and Preprocessing in Haskell. Unpublished.
http://www.cl.cam.ac.uk/~kw217/research/misc/hspp-hw99.ps.gz
documents all such uses, and argues that macros should be included
into language standards.
No, but I can give a general description: Both macros and HOFs can be
used to assemble programs from separate pieces. If the pieces don't
match, the macro processor can give you an error. The HOF won't be
called until runtime, and (in a lazy language in particular) may not
even cause an error until somewhere deep in the execution of the
program. Some of these problems can be solved with proper types (which
the macros can't do), others cannot.
David
Ok, so at least give me a challenge: Show me a macro that you *think*
has the property that it can't be done with HOFs and types.
Matthias
> Ok, so at least give me a challenge: Show me a macro that you *think*
> has the property that it can't be done with HOFs and types.
It does not even have to be complicated:
(defmacro fast (&body forms)
  `(locally (declare (optimize (speed 3) (safety 0)))
     ,@forms))
Q.E.D.
> Another reason I like macros is because I am a big fan of domain
> specific languages, *and* I'm also a big fan of safety guarantees. If
> I write an interpreter for a DSL, then I have to redo all the work to
> get guarantees like tail-call merging, type-safety, etc. But if I
> define my DSL as a transformation into (say) ML, then I get all those
> guarantees for free.
But then why isn't ML itself a macro? If the macro system is
expressive enough for domain specific languages, you should be
able to write the ML->assembler (or whatever target) translation
as a macro. After all ML is nothing more than a domain specific
language for writing domain specific languages. And you get the
type checking for free from the type system of assembly language.
> Both macros and HOFs can be used to assemble programs from separate
> pieces. If the pieces don't match, the macro processor can give you
> an error. The HOF won't be called until runtime,
As Fergus has already pointed out, this is not true: A good optimizing
compiler will evaluate the HOF at compile time, and catch the error.
> and (in a lazy language in particular) may not even cause an error
> until somewhere deep in the execution of the program.
One good thing about HOFs is that in a statically typed language (if
you don't use extensions like 'error', and let the compiler check
for complete constructor matching) no runtime error can occur.
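A small illustration of that claim (my own example, not Dirk's): with
variant types and compiler-checked exhaustive matching, a program
assembled from HOFs cannot hit an unhandled case at runtime.

```ocaml
(* With exhaustive matching, the compiler proves every constructor
   is handled, so no match failure can occur at runtime. *)
type shape = Circle of float | Square of float

let area = function
  | Circle r -> 3.14159 *. r *. r
  | Square s -> s *. s
(* Dropping a branch triggers a compile-time inexhaustiveness warning. *)

(* A HOF-assembled program built from area: *)
let total shapes = List.fold_left (fun acc s -> acc +. area s) 0.0 shapes
```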
> Some of these problems can be solved with proper types
Yes. See above.
> (which the macros can't do), others cannot.
Which ones?
IMHO, there are two advantages of macros vs. HOFs with an optimizing
compiler:
* You can use arbitrary syntactic sugar, which sometimes makes the source
much more readable (e.g. Haskell list comprehension).
* You have exact control over what part gets evaluated at runtime, and
what part at compile time.
The disadvantage is that you need a separate macro system (and maybe
a separate macro type system). With HOFs, you get both for the price of
one. And the optimizing compiler might even evaluate some functions
at compile time which you wouldn't bother to write macros for.
- Dirk
>> Ok, so at least give me a challenge: Show me a macro that you *think*
>> has the property that it can't be done with HOFs and types.
>
> It does not even have to be complicated:
>
> (defmacro fast (&body forms)
> `(locally (declare (optimize (speed 3) (safety 0)))
> ,@forms))
>
Please recall that the question was to find a macro solution where the
macro expander does some compile-time *checking* that cannot be emulated
by type checking in the HOF case and, thus, would have to be done at
runtime when the HOF gets called. The above code merely annotates
its arguments with optimizer settings. However, if we widen the scope of
the problem then this does, indeed, show something that HOFs can't do. It
is somewhat Lisp-specific and does not apply to languages where forms
akin to "(declare (optimize ...))" do not exist. However, as
someone else has already pointed out, there are always going to be things
that HOFs inherently can't do as long as macros are permitted to operate
at, for example, the module- or type-level, i.e., in areas that mere values
have no access to.
(The quoted "challenge" sentence alone does not give enough context since the
discussion leading up to it was snipped.)
Anyway, for the record: I already discussed the original problem with David
Feuer in e-mail. He did, indeed, have a good example to support his case.
Matthias
GHC does indeed use quite a few macros. However, from my casual
glance at the source it seems that in many cases this is a stylistic
choice rather than because of necessity.
Macros are very common in C but the following notes that there are
(better) alternatives for some (but not all) common uses of macros :-
#ifdef Considered Harmful, or Portability Experience With C News
Henry Spencer and Geoff Collyer
Proceedings of the 1992 Summer USENIX Conference
http://citeseer.nj.nec.com/928.html
Also a quick look over the source to SML/NJ didn't reveal any macros
in the SML code and my fading memory of the MLWorks source code is
that it didn't have any macros in the SML code either. Whether macros
would have improved/simplified/clarified the compilers is open to
debate -- at least in the case of SML/NJ since anyone can look at the
source and suggest where macros would help.
These are only examples that already exist. I could invent many more.
Actually, many of these above examples as well as examples I'd fancy
do or would actively use HOFs and types, too - so macros are not in
opposition to HOFs and types.
Actually, it depends on what you call "done with HOFs and types".
If you know *in advance* the kind of treatment that you will do
to your code, you can always re-engineer the code into HOFs (and
their HO combinators) that instead of providing and consuming
one entry point (execute), have multiple entry points
(execute, execute with backtracking, documentation, etc.).
The advantage of macros here is that you don't need to do it in advance.
You can add features to existing programs and languages,
whereas the HOF approach would require a complete rewrite
every time you think of a new feature.
You can also tailor the syntax to the end-users' needs, which lets
more of a program fit in a screenful, and hence makes a wider
range of problems tractable.
A good read about macros, now online: Paul Graham's "On Lisp".
http://www.paulgraham.com/onlisp.html
[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
In the beginning there was nothing. And the Lord said "Let There Be Light!"
And still there was nothing, but at least now you could see it.