
Design Patterns and Functional programming


Jon Harrop

Jul 6, 2007, 11:48:08 AM

Is there any literature describing how object-oriented design patterns
reinvent existing techniques like functional programming?

--
Dr Jon D Harrop, Flying Frog Consultancy
The OCaml Journal
http://www.ffconsultancy.com/products/ocaml_journal/?usenet

H. S. Lahman

Jul 7, 2007, 11:00:46 AM
Responding to Harrop...

> Is there any literature describing how object-oriented design patterns
> reinvent existing techniques like functional programming?

They don't. Design patterns exist outside both functional and OO
development. The OO paradigm just makes expressing design patterns
easier because it already emphasizes problem space abstraction.

Since the OO and functional paradigms are fundamentally incompatible,
one would need a different representation of design to express design
patterns in a form that is unambiguously mappable to FPLs. At present
there does not seem to be a body of literature for functional
programming that provides the sort of design methodologies that the
OOA/D literature provides. Without an appropriate notation for design
consistent with the paradigm it becomes difficult to express notions
like design patterns.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
h...@pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development". Email
in...@pathfindermda.com for your copy.
Pathfinder is hiring:
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH

gpuchtel

Jul 7, 2007, 11:47:56 AM
H.S., brace yourself. I was going to respond earlier as well; however,
the wording of Jon Harrop's question prompted me to view his history
of postings. Afterwards I decided against it because of what I found.
I think you may have just opened Pandora's box. This should be
interesting. :-)

Jon Harrop

Jul 7, 2007, 6:16:03 PM
H. S. Lahman wrote:
> Responding to Harrop...
>> Is there any literature describing how object-oriented design patterns
>> reinvent existing techniques like functional programming?
>
> They don't.

Here are some examples:

Function object => Closure
Factory => Curried function
Singleton => Value
Command => Variant
Iterator => Fold/Zipper
Chain of Responsibility => Event
Inversion of Control => Continuation passing style

Let me give you a concrete example as well. Here is a simple curried
higher-order function that computes the numerical derivative of "f" at "x"
written first in a functional style (OCaml code):

# let d f x =
    let dx = sqrt epsilon_float in
    (f(x +. dx) -. f(x -. dx)) /. (2. *. dx);;
val d : (float -> float) -> float -> float = <fun>

Given the function f(x) = x^3-x-1:

# let f x = x**3. -. x -. 1.;;
val f : float -> float = <fun>

The derivative function f'(x) is given by:

# let f' = d f;;
val f' : float -> float = <fun>

Its value f'(3) at x=3 is:

# f' 3.;;
- : float = 26.

and in object oriented style (C++ code):

#include <iostream>
#include <limits>
#include <cmath>

using namespace std;

numeric_limits<double> real;

struct F { virtual double operator()(double x) = 0; };

struct D : F {
  F *f;
  double dx;
  D(F *g) : f(g), dx(sqrt(real.epsilon())) {}
  virtual double operator()(double x) {
    return ((*f)(x + dx) - (*f)(x - dx)) / (2 * dx);
  }
};

struct F1 : F {
  virtual double operator()(double x) { return x*x*x - x - 1; }
};

int main() {
  F1 f1;
  cout << "d f 3 = " << D(&f1)(3) << endl;
}

The class "D" represents the function "d". Higher-order translates into
parameterizing D over F. Currying translates into deriving D from F. The
result is a factory for creating derivative functions.

> Design patterns exist outside both functional and OO

> The OO paradigm just makes expressing design patterns easier because it
> already emphasizes problem space abstraction.

You mean design patterns try to solve the problems that object orientation
has with abstraction?

> Since the OO and functional paradigms are fundamentally incompatible,

Given that some languages provide both OO and FP paradigms and several
languages even compile functional programs into object oriented
intermediate representations (F# targeting .NET and Scala targeting the
JVM, for example), in what sense are these paradigms "incompatible"? Do you
mean "complementary"?

If you look closer at OCaml, for example, you'll see that its OO features
are largely obsoleted by its functional features. Rewriting the above
example in an object oriented style in OCaml:

# class type f = object
method f : float -> float
end;;
class type f = object method f : float -> float end
# class f1 : f = object
method f x = x**3. -. x -. 1.
end;;
class f1 : f
# class d(f : f) : f = object
val dx = sqrt epsilon_float
method f x = (f#f(x +. dx) -. f#f(x -. dx)) /. (2. *. dx)
end;;
class d : f -> f
# let f1' = new d(new f1);;
val f1' : d = <obj>
# f1'#f 3.;;
- : float = 26.

You can see that the object oriented approach requires twice as much code.
Although the design patterns codify the workarounds for object
orientation's problems with abstraction, they clearly do not compete with
the existing functional solution in this case (and most others). When given
the choice, you can understand why programmers choose not to use OOP and
design patterns here.

> one would need a different representation of design to express design
> patterns in a form that is unambiguously mappable to FPLs. At present
> there does not seem to be a body of literature for functional
> programming that provides the sort of design methodologies that the
> OOA/D literature provides.

Yes, there are certainly a lot more books discussing problems with
abstraction in the context of object orientation.

Daniel T.

Jul 7, 2007, 8:47:06 PM
Jon Harrop <j...@ffconsultancy.com> wrote:
> H. S. Lahman wrote:
> > Jon Harrop <j...@ffconsultancy.com> wrote:

> > > Is there any literature describing how object-oriented design patterns
> > > reinvent existing techniques like functional programming?
> >
> > They don't.
>

> [example snipped]


>
> You can see that the object oriented approach requires twice as much code.

I'm intrigued. Now show an OO problem written in both paradigms. Say two
classes that satisfy LSP and use the two state charts below:

f()
+->[A]---------->[B]
| |
| x() |
+-----------------+


f() f()
+->[A]---------->[B]---------->[C]
| | |
| x() | |
+-----------------+-------------+


Say in C++ (since you seem to know it:)

class Interface {
public:
    virtual void f() = 0;
    virtual void x() = 0;
};

class ObjA : public Interface {
    enum State { A, B } state;
public:
    virtual void f() {
        state = B;
    }
    virtual void x() {
        state = A;
    }
};

class ObjB : public Interface {
    enum State { A, B, C } state;
public:
    virtual void f() {
        state = state == A ? B : C;
    }
    virtual void x() {
        state = A;
    }
};

One last comment:

>
> #include <iostream>
> #include <limits>
> #include <cmath>
>
> using namespace std;
>
> numeric_limits<double> real;
>
> struct F { virtual double operator()(double x) = 0; };
>
> struct D : F {
> F *f;
> double dx;
> D(F *g) : f(g), dx(sqrt(real.epsilon())) {}
> virtual double operator()(double x) {
> return ((*f)(x + dx) - (*f)(x - dx)) / (2 * dx);
> }
> };
>
> struct F1 : F {
> virtual double operator()(double x) { return x*x*x - x - 1; }
> };
>
> int main() {
> F1 f1;
> cout << "d f 3 = " << D(&f1)(3) << endl;
> }

Why not just write:

int main() {
cout << "d f 3 = 26";
}

It does the same thing after all and much faster.

Jon Harrop

Jul 7, 2007, 11:28:17 PM
Daniel T. wrote:
> class Interface {
> public:
> virtual void f() = 0;
> virtual void x() = 0;
> };
>
> class ObjA : public Interface {
> enum State { A, B } state;
> public:
> virtual void f() {
> state = B;
> }
> virtual void x() {
> state = A;
> }
> };
>
> class ObjB : public Interface {
> enum State { A, B, C } state;
> public:
> virtual void f() {
> state = state == A ? B : C;
> }
> virtual void x() {
> state = A;
> }
> };

A simple functional equivalent in OCaml is ~9 lines long:

# type t =
| ObjA of [`A | `B]
| ObjB of [`A | `B | `C];;
type t = ObjA of [ `A | `B ] | ObjB of [ `A | `B | `C ]
# let f = function
| ObjA (`A | `B) -> ObjA `B
| ObjB `A -> ObjB `B
| ObjB (`B | `C) -> ObjB `C;;
val f : t -> t = <fun>
# let x = function
| ObjA (`A | `B) -> ObjA `A
| ObjB (`A | `B | `C) -> ObjB `A;;
val x : t -> t = <fun>

You only need to extend this example slightly to glimpse more of the power
of the functional approach. Consider an abstract syntax tree:

struct Expr {
};

struct Int : public Expr {
  int n;
  Int(int m) : n(m) {}
};

struct Add : public Expr {
  Expr *f, *g;
  Add(Expr *f2, Expr *g2) : f(f2), g(g2) {}
};

struct Mul : public Expr {
  Expr *f, *g;
  Mul(Expr *f2, Expr *g2) : f(f2), g(g2) {}
};

This is equivalent to:

# type expr =
| Int of int
| Add of expr * expr
| Mul of expr * expr;;
type expr = Int of int | Add of expr * expr | Mul of expr * expr

Now consider adding infix operators to simplify symbolic expressions as they
are constructed. In the functional style:

# let rec ( +: ) f g = match f, g with
| Int n, Int m -> Int (n + m)
| Int 0, e | e, Int 0 -> e
| f, g -> Add(f, g);;
val ( +: ) : expr -> expr -> expr = <fun>
# let rec ( *: ) f g = match f, g with
| Int n, Int m -> Int (n * m)
| Int 0, e | e, Int 0 -> Int 0
| Int 1, e | e, Int 1 -> e
| f, g -> Mul(f, g);;
val ( *: ) : expr -> expr -> expr = <fun>

How do you add that to the object oriented design?

Dmitry A. Kazakov

Jul 8, 2007, 4:18:44 AM

No problem actually. You first derive an abstract infix operation (that
is, two arguments, one result) from the abstract expression, and then
specialize it into concrete operations.

BTW, in a real compiler you won't have them in the form you have proposed.
Think about optimizations made later during code generation. You would
really wish to factorize infix commutative operators like a+b+c+d out into
Sum(a,b,c,d). The number of arguments becomes variable.

How do you add precedence to your design? Association order? Association
checks? [this is when x and y or z is illegal without brackets] What about
types and types matching? Reasonable error diagnostic and recovery? Array
aggregates? Literals? Function calls? Keyed parameter association?
Defaulted parameters? Extension aggregates?

Do you honestly believe that a syntax + semantic analyzer can be 100 lines
long? If not, then what about readability, maintainability, reuse etc?

Object-oriented only means that the tree and its nodes are considered
objects of some types.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

Daniel T.

Jul 8, 2007, 9:43:04 AM
Jon Harrop <j...@ffconsultancy.com> wrote:
> Daniel T. wrote:

> > class Interface {
> > public:
> > virtual void f() = 0;
> > virtual void x() = 0;
> > };
> >
> > class ObjA : public Interface {
> > enum State { A, B } state;
> > public:
> > virtual void f() {
> > state = B;
> > }
> > virtual void x() {
> > state = A;
> > }
> > };
> >
> > class ObjB : public Interface {
> > enum State { A, B, C } state;
> > public:
> > virtual void f() {
> > state = state == A ? B : C;
> > }
> > virtual void x() {
> > state = A;
> > }
> > };
>
> A simple functional equivalent in OCaml is ~9 lines long:

I'm not sure what you are calling "lines" here to say the example below
is "~9 lines long". The C++ version has only 4 executable lines... What
is the functional equivalent in Scheme?

I have no clue what the above does, so I can't say. I have been
intrigued by functional languages for many years, but I have been unable
to wrap my head around their apparently stateless nature. My programming
problems have always involved state machines reacting to each other and
the user through time.

Can you recommend some good tutorials? I would be interested in, for
example, how one would program pong in OCaml or any functional language.
What language do you recommend I start with?

S Perryman

Jul 8, 2007, 9:53:06 AM
Jon Harrop wrote:

> H. S. Lahman wrote:

>>Responding to Harrop...

JH>Is there any literature describing how object-oriented design patterns
JH>reinvent existing techniques like functional programming?

>>They don't.

> Here are some examples:

> Function object => Closure

Not quite.

Often in OOP, the "function" objects (orderings etc) take all
parameters as arguments (ie some of the parameters are not already
bound to the function) .


> Factory => Curried function

Quite a stretch to make this statement fit the facts.

Factory objects are not constrained to have factory ops taking only
one input parameter (a requirement for "curried" ops) .

And to be honest, attempting to emulate an N parameter function in
curried form with a chain of factory object invocations makes the
brain bleed just thinking about it.


> Singleton => Value

Yeah, you can use Singleton to emulate the latter.
But the Singleton pattern did not emerge to solve this particular
problem.


> Command => Variant

Actually the Command pattern IMHO has a stronger relationship to
closures (specifically, the attempt to provide a homogeneous interface
to a set of behaviours that often have different input parameters) .


> Iterator => Fold/Zipper

The typical iterator used in OOP is state-based (advance, more etc) .

An OOP collection framework that provides ops that do 'something' to each
element in a collection etc are more akin to the classic map/reduce of FP
(and applicative programming) .


The summary is that certain OO patterns can be used to provide equivalents
of what would be done in FP by particular semantic constructs. But this is
not really a case of OOP trying to "re-invent" what has been established in
FP for some time.


Regards,
Steven Perryman

Jon Harrop

Jul 8, 2007, 10:21:38 AM
S Perryman wrote:

> Jon Harrop wrote:
>> Function object => Closure
>
> Not quite.
>
> Often in OOP, the "function" objects (orderings etc) take all
> parameters as arguments (ie some of the parameters are not already
> bound to the function) .

With a closure, the parameters are the free variables and these are captured
automatically for you.

>> Factory => Curried function
>
> Quite a stretch to make this statement fit the facts.
>
> Factory objects are not constrained to have factory ops taking only
> one input parameter (a requirement for "curried" ops) .

Neither is a curried function.

> And to be honest, attempting to emulate an N parameter function in
> curried form with a chain of factory object invocations makes the
> brain bleed just thinking about it.

This is actually the convention in OCaml.

>> Command => Variant
>
> Actually the Command pattern IMHO has a stronger relationship to
> closures (specifically, the attempt to provide a homogeneous interface
> to a set of behaviours that often have different input parameters) .

Ok.

>> Iterator => Fold/Zipper
>
> The typical iterator used in OOP is state-based (advance, more etc) .

That is equivalent.

> An OOP collection framework that provides ops that do 'something' to each
> element in a collection etc are more akin to the classic map/reduce of FP
> (and applicative programming) .

Yes.

> The summary is that certain OO patterns can be used to provide equivalents
> of what would be done in FP by particular semantic constructs. But this is
> not really a case of OOP trying to "re-invent" what has been established
> in FP for some time.

It certainly seems this way to me.

S Perryman

Jul 8, 2007, 10:37:44 AM
Daniel T. wrote:

> Jon Harrop <j...@ffconsultancy.com> wrote:

>> [ FP example snipped ... ]

> I have no clue what the above does, so I can't say. I have been
> intrigued by functional languages for many years, but I have been unable
> to wrap my head around their apparently stateless nature. My programming
> problems have always involved state machines reacting to each other and
> the user through time.

> Can you recommend some good tutorials? I would be interested in, for
> example, how one would program pong in OCaml or any functional language.
> What language do you recommend I start with?

How about we start with something simple like writing a program to do stuff
with stacks (read numbers from IO, push onto the stack, write top to IO and
pop until stack empty etc) ??

You know how to do this with an imperative prog lang.
Which means you have terms of reference if/when the FP form starts to give
you conceptual problems. You can bridge to FP by using an intermediate form
(applicative programming) which is written in imperative prog langs almost
identically to FP.


Regards,
Steven Perryman

Jon Harrop

Jul 8, 2007, 10:57:07 AM
Daniel T. wrote:
> I'm not sure what you are calling "lines" here to say the example below
> is "~9 lines long".

I'm writing in the style that the interactive top-level shows. So you write
an expression or definition after the "#" prompt and it prints the
definition or value and its type in response. For example:

# 1 + 2*3;;
- : int = 7

# let sqr x = x*x;;
val sqr : int -> int = <fun>

If you were writing the code I gave in a source code file, it would just be:

type t =
  | ObjA of [`A | `B]
  | ObjB of [`A | `B | `C]

let f = function
  | ObjA (`A | `B) -> ObjA `B
  | ObjB `A -> ObjB `B
  | ObjB (`B | `C) -> ObjB `C

let x = function
  | ObjA (`A | `B) -> ObjA `A
  | ObjB (`A | `B | `C) -> ObjB `A

> The C++ version has only 4 executable lines...

I'm including the definitions. Also, I only defined the type "t" for
clarity: it can be completely inferred:

let f = function
  | `ObjA (`A | `B) -> `ObjA `B
  | `ObjB `A -> `ObjB `B
  | `ObjB (`B | `C) -> `ObjB `C

let x = function
  | `ObjA (`A | `B) -> `ObjA `A
  | `ObjB (`A | `B | `C) -> `ObjB `A

> What is the functional equivalent in Scheme?

Good question. I've no idea but I'll have a play with it. :-)

>> # let rec ( +: ) f g = match f, g with
>> | Int n, Int m -> Int (n + m)
>> | Int 0, e | e, Int 0 -> e
>> | f, g -> Add(f, g);;
>> val ( +: ) : expr -> expr -> expr = <fun>
>> # let rec ( *: ) f g = match f, g with
>> | Int n, Int m -> Int (n * m)
>> | Int 0, e | e, Int 0 -> Int 0
>> | Int 1, e | e, Int 1 -> e
>> | f, g -> Mul(f, g);;
>> val ( *: ) : expr -> expr -> expr = <fun>
>>
>> How do you add that to the object oriented design?
>
> I have no clue what the above does, so I can't say.

You use the infix operators +: and *: to construct symbolic expressions:

a +: b
Int 3 *: c

but these operators simplify the expressions as they construct them:

Int 3 +: Int 4 -> Int 7

> I have been
> intrigued by functional languages for many years, but I have been unable
> to wrap my head around their apparently stateless nature. My programming
> problems have always involved state machines reacting to each other and
> the user through time.

Stateless programming is really very simple. Indeed, simplicity is its main
advantage. Stateless programming can also be very useful in OO languages
(i.e. const) although it is less common.

Whenever you update an immutable data structure you simply return a new data
structure. However, this does not require the whole data structure to be
copied. When you create the new data structure you can refer back to parts
of the previous data structure because you know they cannot change (they
are immutable).

For example, consider a type representing a binary tree:

type 'a t =
  | Empty
  | Node of 'a t * 'a * 'a t

You can write a function to insert an element into an ordered tree like
this:

let rec insert x = function
| Empty -> Node(Empty, x, Empty)
| Node(l, v, r) when x<v -> Node(insert x l, v, r)
| Node(l, v, r) when x=v -> Node(l, x, r)
| Node(l, v, r) -> Node(l, v, insert x r)

Note that inserting an element into the left subtree creates a new
node "Node(insert x l, v, r)" that refers to the whole of the old right
subtree. Consequently, the right subtree is not copied (you might call it a
shallow copy).

> Can you recommend some good tutorials?

http://www.ffconsultancy.com/products/ocaml_journal/free/introduction.html
http://www.ffconsultancy.com/ocaml/benefits
http://www.ffconsultancy.com/ocaml

> I would be interested in, for example, how one would program pong in OCaml
> or any functional language.

The above pages describe a variety of different uses, from Sudoku Solving to
real-time visualization using OpenGL.

> What language do you recommend I start with?

If you're using Linux or Mac OS X, I recommend OCaml:

http://caml.inria.fr

If you're using Windows, I recommend F#:

http://research.microsoft.com/fsharp

We also have lots of F# tutorials and articles:

http://www.ffconsultancy.com/products/fsharp_journal/free/introduction.html
http://www.ffconsultancy.com/dotnet/fsharp

Daniel T.

Jul 8, 2007, 11:31:58 AM
S Perryman <q...@q.net> wrote:
> Daniel T. wrote:
> > Jon Harrop <j...@ffconsultancy.com> wrote:

> > Can you recommend some good tutorials? I would be interested in, for
> > example, how one would program pong in OCaml or any functional language.
> > What language do you recommend I start with?
>
> How about we start with something simple like writing a program to do stuff
> with stacks (read numbers from IO, push onto the stack, write top to IO and
> pop until stack empty etc) ?

That isn't my world. In my world, things react when they detect changes
in their environment. At its most fundamental, I see state changing over
time. How is that handled in FP languages?

> You know how to do this with an imperative prog lang.

I know how to program Pong in an imperative programming language as
well. Pong is simple (at least in imperative programming languages); is
there a reason that it is too complex an example in an FP language?

Here's something simpler than Pong, a ball bouncing. The user can hit a
key and if he hits the key just as the ball hits the ground, the
magnitude of the ball's vector increases.

H. S. Lahman

Jul 8, 2007, 11:39:05 AM
Responding to Harrop...

>>>Is there any literature describing how object-oriented design patterns
>>>reinvent existing techniques like functional programming?
>>
>>They don't.
>
>
> Here are some examples:
>
> Function object => Closure
> Factory => Curried function
> Singleton => Value
> Command => Variant
> Iterator => Fold/Zipper
> Chain of Responsibility => Event
> Inversion of Control => Continuation passing style

As Perryman's response (and yours to him!) indicates, FPLs and OOPLs both
provide syntax that enables the implementation of design patterns. My
point was that the design patterns exist independently of the solution
paradigm. The mapping between OOPLs and, say, GoF design patterns is
just more direct because an OOA/D notation was used to express the
design patterns themselves.

>>Design patterns exist outside both functional and OO
>>The OO paradigm just makes expressing design patterns easier because it
>>already emphasizes problem space abstraction.
>
>
> You mean design patterns try to solve the problems that object orientation
> has with abstraction?

I mean that one needs to represent design patterns to communicate them.
Since patterns, by their nature, are about abstraction and
generalization, an OOA/D notation is naturally suited to representing
them _as concepts_.

OTOH, expressing design patterns in an OOA/D notation creates problems
when one tries to map them into a non-OO development context. So if one
wants to map design patterns into FPL syntax rather than OOPL syntax,
one would be better expressing the design patterns in a different notation.

>>Since the OO and functional paradigms are fundamentally incompatible,
>
>
> Given that some languages provide both OO and FP paradigms and several
> languages even compile functional programs into object oriented
> intermediate representations (F# targetting .NET and Scala targetting the
> JVM, for example), in what sense are these paradigms "incompatible"? Do you
> mean "complementary"?

The issue is methodological (i.e., the way that applications are
constructed). For example, collaboration in functional programming is
inherently hierarchical while in OO development it is peer-to-peer. (One
can make a strong case that the primary goal of the OO paradigm is to
eliminate the hierarchical structure inherent in functional
decomposition and procedural message passing.) Thus OOPL procedures will
be designed differently than FPL procedures.

[Providing mapping between representations is an issue for MDA. In
theory any representation can be mapped into any other provided both
representations have a rigorous semantic model and the subject matter is
the same.]

>
> <snip example>


>
> You can see that the object oriented approach requires twice as much code.

As a general rule OO applications will be more verbose than either
functional or procedural applications. There are trade-offs in every
aspect of software development and that applies to overall paradigms as
well. Size and performance are trade-offs that the OO paradigm chooses
to make versus maintainability.

>>one would need a different representation of design to express design
>>patterns in a form that is unambiguously mappable to FPLs. At present
>>there does not seem to be a body of literature for functional
>>programming that provides the sort of design methodologies that the
>>OOA/D literature provides.
>
>
> Yes, there are certainly a lot more books discussing problems with
> abstraction in the context of object orientation.

Ah, I see where this is going. I am not going to get into a religious
discussion of OOPLs vs. FPLs. Ta-ta.

Daniel T.

Jul 8, 2007, 11:58:29 AM
Jon Harrop <j...@ffconsultancy.com> wrote:

> > Can you recommend some good tutorials?
>
> http://www.ffconsultancy.com/products/ocaml_journal/free/introduction.html
> http://www.ffconsultancy.com/ocaml/benefits
> http://www.ffconsultancy.com/ocaml
>
> > I would be interested in, for example, how one would program pong in OCaml
> > or any functional language.
>
> The above pages describe a variety of different uses, from Sudoku Solving to
> real-time visualization using OpenGL.

I'm downloading and building OCaml now, in the mean time...
How about an alarm clock example? Or simply a program that prints a '.'
on the screen every second?

How are such things done in a stateless environment?

gpuchtel

Jul 8, 2007, 1:02:31 PM

In 'Gulliver's Travels', "that all true believers break their eggs at
the convenient end." -- Jonathan Swift


Jon Harrop

Jul 8, 2007, 1:35:51 PM
Dmitry A. Kazakov wrote:
>> How do you add that to the object oriented design?
>
> No problem actually. You first derive from abstract expression an abstract
> infix operation (that is - two arguments one result) and then specialize
> it to concrete operation.

Could you post working code for that design, please?

> BTW, in a real compiler you won't have them in the form you have proposed.

Most compilers written in Standard ML, Haskell or OCaml use this form.
Indeed, this carries through to machine code generation as the machine
itself doesn't implement n-ary addition.

> Think about optimizations made later during code generation. You would
> really wish to factorize infix commutative operators like a+b+c+d out into
> Sum(a,b,c,d). The number of arguments becomes variable.

If you wanted to do that then it is easy enough. Replace the binary Add
constructor with:

| Sum of expr * expr list

I don't think there is any point though.

> How do you add precedence to your design?

Precedence is implemented the same way in the functional and OO styles. For
programmatic construction, it is inherited from the use of infix operators.
For parsing, use yacc and specify the operator precedences.

> Association order?

Add clauses to rotate the abstract syntax tree as it is constructed:

| Add(f, Add(g, h)) -> f +: g +: h
| Mul(f, Mul(g, h)) -> f *: g *: h

> Association checks? [this is when x and y or z is illegal without
> brackets]

That is a check in the parser. Nothing to do with functional vs OO designs.

> What about types and types matching?

Type constructors handle run-time types. The pattern matches handle
deconstruction of boxed values (type matching). If you want static typing
then you must design and implement a static type checker in OCaml/SML/F#.
In Haskell, you can embed the static type system of a domain specific
language in Haskell's own type system and get it to do the static type
checking for you (Google for "GADTs"). The same can be done in C++ using
template metaprogramming. This is not related to functional vs OO style.

> Reasonable error diagnostic and recovery?

This is mostly automated by the "scrap your boilerplate" techniques in
Haskell and OCaml. This is not related to functional vs OO style.

> Array aggregates?

Add a constructor to the expr type:

# type expr =
    | Int of int
    | Array of expr array
    | Add of expr * expr
    | Mul of expr * expr;;

> Literals?

Add a rule to the grammar with an action that uses the Array constructor.

> Function calls?

See the complete interpreter in this post for a working example.

> Keyed parameter association? Defaulted parameters?

Implementation dependent: what semantics would you like them to support?

> Do you honestly believe that a syntax + semantic analyzer can be 100 lines
> long?

I've attached a 49-line interpreter for a dynamically-typed purely
functional programming language.

> If not, then what about readability, maintainability, reuse etc?

In this context, all much better with a functional style. This family of
languages was designed for compiler writing...

> Object-oriented only means that the tree and its nodes are considered
> objects of some types.

Yes. An object oriented approach can be quite devastating to readability and
maintainability, as I'm sure you'll agree if you try implementing this
simple functionality in an entirely OO style.

The following is a complete term-level interpreter for a dynamically-typed
functional programming language that supports higher-order functions and
currying:

open Camlp4.PreCast;;

let expr = Gram.Entry.mk "expr";;
EXTEND Gram
expr:
[ "prog" NONA
[ "if"; p = expr; "then"; t = expr; "else"; f = expr -> `If(p, t, f)
| "let"; "rec"; f = LIDENT; x = LIDENT; "="; body = expr; "in"; rest =
expr ->
`LetRec(f, x, body, rest) ]
| "equal" LEFTA
[ e1 = expr; "="; e2 = expr -> `Equal(e1, e2) ]
| "sum" LEFTA
[ e1 = expr; "+"; e2 = expr -> `Add(e1, e2)
| e1 = expr; "-"; e2 = expr -> `Add(e1, `Mul(`Int(-1), e2)) ]
| "product" LEFTA
[ e1 = expr; "*"; e2 = expr -> `Mul(e1, e2) ]
| "apply" LEFTA
[ e1 = expr; e2 = expr -> `Apply(e1, e2) ]
| "simple" NONA
[ v = LIDENT -> `Var(v)
| n = INT -> `Int(int_of_string n)
| "("; e = expr; ")" -> e ] ];
END;;

(* Evaluator *)
let int = function `Int n -> n | _ -> invalid_arg "int";;
let bool = function `Bool b -> b | _ -> invalid_arg "bool";;

let rec eval vars = function
| `Apply(func, arg) -> apply (eval vars func) (eval vars arg)
| `Add(e1, e2) -> `Int (int(eval vars e1) + int(eval vars e2))
| `Mul(e1, e2) -> `Int (int(eval vars e1) * int(eval vars e2))
| `Equal(e1, e2) -> `Bool (eval vars e1 = eval vars e2)
| `If(p, t, f) -> eval vars (if bool (eval vars p) then t else f)
| `Int i -> `Int i
| `LetRec(var, arg, body, rest) ->
let rec vars' = (var, `Closure(arg, vars', body)) :: vars in
eval vars' rest
| `Var s -> List.assoc s vars
and apply func arg = match func with
| `Closure(var, vars, body) -> eval ((var, arg) :: vars) body
| _ -> invalid_arg "Attempt to apply a non-function value";;

(* Top level *)
let string_of_value = function
| `Int n -> string_of_int n
| `Bool true -> "true"
| `Bool false -> "false"
| `Closure _ -> "<fun>";;

let () =
  let program = Gram.parse expr Loc.ghost (Stream.of_channel stdin) in
  let vars = ["true", `Bool true; "false", `Bool false] in
  print_endline(string_of_value(eval vars program));;

This can interpret the following program, for example:

let rec fib n =
if n=0 then 0 else
if n=1 then 1 else
fib(n - 1) + fib(n - 2) in
fib 35

What would the equivalent be in an object oriented design?

Patrick May

Jul 6, 2007, 11:48:08 AM
"H. S. Lahman" <h.la...@verizon.net> writes:
> Responding to Harrop...

>> Function object => Closure
>> Factory => Curried function
>> Singleton => Value
>> Command => Variant
>> Iterator => Fold/Zipper
>> Chain of Responsibility => Event
>> Inversion of Control => Continuation passing style
>
> As Perryman's response (and yours to him!) indicate, FPLs and OOPLs
> both provide syntax that enable the implementation of design
> patterns. My point was that the design patterns exist independently
> of the solution paradigm.

Peter Norvig noted almost 10 years ago that 16 of the 23 patterns
in the GoF book are "invisible or simpler" in Lisp
(http://norvig.com/design-patterns). This suggests that, rather than
existing independently of the solution domain, many design patterns
are actually standardized workarounds for limitations in the language
used for implementation.

Paul Graham touches on this in his essay "Revenge of the Nerds":

"When I see patterns in my programs, I consider it a sign of
trouble. The shape of a program should reflect only the problem
it needs to solve. Any other regularity in the code is a sign,
to me at least, that I'm using abstractions that aren't powerful
enough -- often that I'm generating by hand the expansions of
some macro that I need to write."

(http://www.paulgraham.com/icad.html)

I find this argument compelling. Many patterns are symptoms of a
problem, not aspects of a solution.

Regards,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc. | Large scale, mission-critical, distributed OO
| systems design and implementation.
p...@spe.com | (C++, Java, Common Lisp, Jini, middleware, SOA)

Jon Harrop

Jul 8, 2007, 2:26:32 PM
Daniel T. wrote:
> That isn't my world. In my world, things react when they detect changes
> in their environment. At its most fundamental, I see state changing over
> time. How is that handled in FP languages?

Consider integrating the dynamics of accelerating particles over time. In an
imperative style, you define the position "x" and velocity "dx" to be
mutable and then alter them in-place when updating a particle:

# type particle = {mutable x: float; mutable dx: float};;
# let update g dt p =
    p.x <- p.x +. dt *. p.dx;
    p.dx <- p.dx +. dt *. g;;
val update : float -> float -> particle -> unit = <fun>

For example, updating an array of particles in place:

# let state = [|{x=0.; dx=1.}; {x=0.; dx=2.}|];;
val state : particle array = [|{x = 0.; dx = 1.}; {x = 0.; dx = 2.}|]
# Array.iter (update 0.1 0.001) state;;
- : unit = ()
# state;;
- : particle array = [|{x = 0.001; dx = 1.0001}; {x = 0.002; dx = 2.0001}|]

To avoid mutation, you simply return a new state rather than mutating the
existing state:

# type particle = {x: float; dx: float};;
type particle = { x : float; dx : float; }
# let update g dt p =
    {x = p.x +. dt *. p.dx;
     dx = p.dx +. dt *. g};;
val update : float -> float -> particle -> particle = <fun>

Now you "map" the update function over an array of particles to create a new
array of particles:

# let state = [|{x=0.; dx=1.}; {x=0.; dx=2.}|];;
val state : particle array = [|{x = 0.; dx = 1.}; {x = 0.; dx = 2.}|]
# Array.map (update 0.1 0.001) state;;
- : particle array = [|{x = 0.001; dx = 1.0001}; {x = 0.002; dx = 2.0001}|]

> I know how to program Pong in an imperative programming language as
> well. Pong is simple (at least in imperative programming languages,) is
> there a reason that it is too complex an example in an FP language?

Pong is probably best implemented in an imperative style (note that OCaml is
also an imperative language, as the above example showed). For slightly
more complicated examples (even a Sudoku solver) the back end of a GUI
application can be written very elegantly in a purely functional style
(without mutation). I would still use mutation in the front-end though.

> Here's something simpler than Pong, a ball bouncing. The user can hit a
> key and if he hits the key just as the ball hits the ground, the
> magnitude of the ball's vector increases.

Here is an OCaml program that simulates the dynamics of dozens of rigid
balls in real time and visualizes the result using OpenGL, in <400 lines of
code:

http://www.ffconsultancy.com/ocaml/balls?co

I just wrote a complete GUI Sudoku Solver in ~100 lines of code for an OCaml
Journal article...

Jon Harrop

Jul 8, 2007, 2:37:53 PM
Daniel T. wrote:
> I'm downloading and building OCaml now, in the meantime...
> How about an alarm clock example? Or simply a program that prints a '.'
> on the screen every second?
>
> How are such things done in a stateless environment?

Take your last example. You can implement it statefully in OCaml using a
mutable reference:

# let clock () =
    let ot = ref (int_of_float(Unix.gettimeofday())) in
    while true do
      let t = int_of_float(Unix.gettimeofday()) in
      Printf.printf "%s%!" (String.make (t - !ot) '.');
      ot := t;
    done;;
# clock ();;
......

or statelessly as a recursive function that passes state to itself:

# let rec clock ot =
    let t = int_of_float(Unix.gettimeofday()) in
    Printf.printf "%s%!" (String.make (t - ot) '.');
    clock t;;
val clock : int -> 'a = <fun>
# clock (int_of_float(Unix.gettimeofday()));;
......

This recursive "clock" function accepts the last time it was invoked (as an
int "ot") and prints t-ot dots before recursing with "t" as the last time
it was invoked. The "String.make n c" makes a string of "n" characters "c".
The "%!" in the call to printf is a flush.

I find graphical programs very gratifying to write and great for teaching.
Have a look at our Smoke vector graphics library (there's a free edition):

http://www.ffconsultancy.com/products/smoke_vector_graphics?co

It uses a purely functional scene graph so it is much easier to use than
alternatives like Windows Presentation Foundation. Moreover, it is vastly
faster (and written entirely in OCaml). :-)

S Perryman

Jul 8, 2007, 2:52:54 PM
Patrick May wrote:

> Peter Norvig noted almost 10 years ago that 16 of the 23 patterns
> in the GoF book are "invisible or simpler" in Lisp
> (http://norvig.com/design-patterns).

True, but irrelevant to the topic at hand.


> This suggests that, rather than
> existing independently of the solution domain, many design patterns
> are actually standardized workarounds for limitations in the language
> used for implementation.

True, but irrelevant to the topic at hand.


> Paul Graham touches on this in his essay "Revenge of the Nerds":

> "When I see patterns in my programs, I consider it a sign of
> trouble. The shape of a program should reflect only the problem
> it needs to solve. Any other regularity in the code is a sign,
> to me at least, that I'm using abstractions that aren't powerful
> enough -- often that I'm generating by hand the expansions of
> some macro that I need to write."

> (http://www.paulgraham.com/icad.html)

> I find this argument compelling. Many patterns are symptoms of a
> problem, not aspects of a solution.

True, but not relevant to the topic at hand.


Do you see a pattern emerging (and what that means by your statement
immediately above) ... :-)

The fact that for many of these "problems" , prevailing FP langs provide
better means (by several measures) to solve the problem than prevailing
OOP langs, does not mean that all OOP does here is "re-invent" what FP
does.

Each paradigm has material constraints within, which makes solution to one
problem worse than in another. That is why for all the significant
paradigms (FP, OO, logic, procedural etc) , not one has become the
all-conquering paradigm( *** ) . Quite right too.


Regards,
Steven Perryman

*** although I do believe that FP + type substitutability is close to
being that paradigm.



Dmitry A. Kazakov

Jul 8, 2007, 2:58:52 PM
On Sun, 08 Jul 2007 18:35:51 +0100, Jon Harrop wrote:

> Dmitry A. Kazakov wrote:
>>> How do you add that to the object oriented design?
>>
>> No problem actually. You first derive from abstract expression an abstract
>> infix operation (that is - two arguments one result) and then specialize
>> it to concrete operation.
>
> Could you post working code for that design, please?

http://www.dmitry-kazakov.de/ada/components.htm#Parsers_etc

It contains a complete sample for building parsing trees of Ada 95 infix
expressions with all bells and whistles.

>> BTW, in a real compiler you won't have them in the form you have proposed.
>
> Most compilers written in Standard ML, Haskell or OCaml use this form.
> Indeed, this carries through to machine code generation as the machine
> itself doesn't implement n-ary addition.

Yes, but you definitely wanted to fold constants in expressions like 1+a+2.

>> How do you add precedence to your design?
>
> Precedence is implemented the same way in the functional and OO styles. For
> programmatic construction, it is inherited from the use of infix operators.

But your idea was to map that onto some [functional] language constructs,
which apparently do not take care of that.

[...]



> What would the equivalent be in an object oriented design?

Follow the link I have provided. The focus of the design was to avoid
hard-coded grammar. Therefore decomposition goes along different lines. It
is operations, arguments, sources and parsers. The approach is
table-driven, that is - lexical elements, like infix operations, their
precedence are defined dynamically. Static are only the classes of, like
operator, bracket, comma, ligature. This gives you the operation and
argument stack objects in a fully generic way.

Jon Harrop

Jul 8, 2007, 2:52:46 PM
Patrick May wrote:
> Many patterns are symptoms of a problem, not aspects of a solution.

Exactly. Perhaps design patterns were supposed to reflect language-agnostic
repetitions in software design, but most things that are currently touted as
design patterns simply reflect shortcomings in object-oriented
programming.

The Model-View-Controller design pattern is one of the more language-
agnostic patterns but it is still inherently imperative so it does not
generalize to functional reactive GUI programming, for example.

Jon Harrop

Jul 8, 2007, 3:07:55 PM
S Perryman wrote:
> Patrick May wrote:
>> Peter Norvig noted almost 10 years ago that 16 of the 23 patterns
>> in the GoF book are "invisible or simpler" in Lisp
>> (http://norvig.com/design-patterns).
>
> True, but irrelevant to the topic at hand.

On the contrary, Patrick's statements hit the nail on the head.

> Each paradigm has material constraints within, which makes solution to one
> problem worse than in another. That is why for all the significant
> paradigms (FP, OO, logic, procedural etc) , not one has become the
> all-conquering paradigm( *** ) . Quite right too.

OCaml programmers are free to choose between many different paradigms
including OOP and FP and they almost always choose FP over OOP.

Even in F#, where every advantage is given to OOP as the underlying
implementation (.NET) is fundamentally object oriented and optimized for
that style of programming, functional style often wins through.

S Perryman

Jul 8, 2007, 3:35:24 PM
Jon Harrop wrote:

> S Perryman wrote:

>>Patrick May wrote:

>>>Peter Norvig noted almost 10 years ago that 16 of the 23 patterns
>>>in the GoF book are "invisible or simpler" in Lisp
>>>(http://norvig.com/design-patterns).

>>True, but irrelevant to the topic at hand.

> On the contrary, Patrick's statements hit the nail on the head.

No, he hit *a* nail. Not *the* nail. Which is :

<quote>

Is there any literature describing how object-oriented design patterns
reinvent existing techniques like functional programming?

</quote>


>>Each paradigm has material constraints within, which makes solution to one
>>problem worse than in another. That is why for all the significant
>>paradigms (FP, OO, logic, procedural etc) , not one has become the
>>all-conquering paradigm( *** ) . Quite right too.

> OCaml programmers are free to choose between many different paradigms
> including OOP and FP and they almost always choose FP over OOP.

If you wish to troll about FP/OCaml vs OOP, feel free to do so.

If you wish (as per your original posting) to find literature describing
what you seek, no one here appears to be able to help you. I suspect that
no such literature exists (I could be wrong) .


Regards,
Steven Perryman

Jon Harrop

Jul 8, 2007, 3:55:13 PM
Dmitry A. Kazakov wrote:
> On Sun, 08 Jul 2007 18:35:51 +0100, Jon Harrop wrote:
>> Dmitry A. Kazakov wrote:
>>>> How do you add that to the object oriented design?
>>>
>>> No problem actually. You first derive from abstract expression an
>>> abstract infix operation (that is - two arguments one result) and then
>>> specialize it to concrete operation.
>>
>> Could you post working code for that design, please?
>
> http://www.dmitry-kazakov.de/ada/components.htm#Parsers_etc
>
> It contains a complete sample for building parsing trees of Ada 95 infix
> expressions with all bells and whistles.

Note that my post that you were replying to was about a term rewriter rather
than a parser. Regardless, parsing is another interesting example. I
believe your 239-line Ada program is equivalent to the following 23-line
OCaml program:

open Camlp4.PreCast;;

let expr = Gram.Entry.mk "expr";;

EXTEND Gram
  expr:
    [ "sum" LEFTA
        [ e1 = expr; "+"; e2 = expr -> e1 +. e2
        | e1 = expr; "-"; e2 = expr -> e1 -. e2 ]
    | "product" LEFTA
        [ e1 = expr; "*"; e2 = expr -> e1 *. e2
        | e1 = expr; "/"; e2 = expr -> e1 /. e2 ]
    | "power" LEFTA
        [ e1 = expr; "**"; e2 = expr -> e1 ** e2 ]
    | "simple" NONA
        [ n = INT -> float_of_string n
        | x = FLOAT -> float_of_string x
        | "("; e = expr; ")" -> e ] ];
END;;

let rec loop() =
  Printf.printf "> %!";
  let x = Gram.parse expr Loc.ghost (Stream.of_string (input_line stdin)) in
  Printf.printf "= %f\n%!" x;
  loop();;

let () = loop()

>>> BTW, in a real compiler you won't have them in the form you have
>>> proposed.
>>
>> Most compilers written in Standard ML, Haskell or OCaml use this form.
>> Indeed, this carries through to machine code generation as the machine
>> itself doesn't implement n-ary addition.
>
> Yes, but you definitely wanted to fold constants in expressions like
> 1+a+2.

You might want to do that. If you do, just sort by commuting and the
existing reduction rules will take care of constant folding. Something like
this:

| Add(Add(f, g), h) when g>h -> f +: h +: g
| Add(f, g) when f>g -> g +: f

>>> How do you add precedence to your design?
>>
>> Precedence is implemented the same way in the functional and OO styles.
>> For programmatic construction, it is inherited from the use of infix
>> operators.
>
> But your idea was to map that onto some [functional] language constructs,
> which apparently do not take care of.
>
> [...]

If we're talking about parsers then the handling of precedence levels should
be clear from the above example.

>> What would the equivalent be in an object oriented design?
>
> Follow the link I have provided. The focus of the design was to avoid
> hard-coded grammar. Therefore decomposition goes along different lines. It
> is operations, arguments, sources and parsers. The approach is
> table-driven, that is - lexical elements, like infix operations, their
> precedence are defined dynamically. Static are only the classes of, like
> operator, bracket, comma, ligature. This gives you the operation and
> argument stack objects in a fully generic way.
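For readers who don't follow the link, the table-driven two-stack idea can be sketched in a few lines of OCaml (my illustration, far simpler than the Ada components it stands in for; the token type and precedence table are assumptions): operators and their precedences live in a runtime table, and evaluation uses explicit operation and argument stacks.

```ocaml
(* Hedged sketch of a table-driven operator-precedence evaluator. *)
let prec = [ "+", 1; "-", 1; "*", 2; "/", 2 ]   (* runtime precedence table *)

let apply op a b = match op with
  | "+" -> a + b | "-" -> a - b
  | "*" -> a * b | "/" -> a / b
  | _ -> invalid_arg "unknown operator"

type token = Num of int | Op of string

let eval tokens =
  (* Pop one operator and its two arguments, push the result. *)
  let reduce ops args = match ops, args with
    | op :: ops, b :: a :: args -> ops, apply op a b :: args
    | _ -> invalid_arg "malformed expression" in
  let rec go ops args = function
    | Num n :: rest -> go ops (n :: args) rest
    | Op o :: rest ->
        (* Reduce while the stacked operator binds at least as tightly. *)
        let rec pop ops args = match ops with
          | top :: _ when List.assoc top prec >= List.assoc o prec ->
              let ops, args = reduce ops args in pop ops args
          | _ -> ops, args in
        let ops, args = pop ops args in
        go (o :: ops) args rest
    | [] ->
        let rec drain ops args = match ops with
          | [] -> (match args with [x] -> x | _ -> invalid_arg "malformed")
          | _ -> let ops, args = reduce ops args in drain ops args in
        drain ops args in
  go [] [] tokens

let () =
  assert (eval [Num 1; Op "+"; Num 2; Op "*"; Num 3; Op "-"; Num 4] = 3)
```

Adding a new infix operator is a matter of extending the `prec` table and `apply`, with no change to the parsing machinery.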

Does that design have any advantages over the functional solution above?

I would still be interested to see an object oriented equivalent to the term
rewriter. :-)

Jon Harrop

Jul 8, 2007, 4:00:23 PM
S Perryman wrote:
> Jon Harrop wrote:
>> OCaml programmers are free to choose between many different paradigms
>> including OOP and FP and they almost always choose FP over OOP.
>
> If you wish to troll about FP/OCaml vs OOP, feel free to do so.

How can a statement about OOP in OCaml be construed as OCaml vs OOP?!

> If you wish (as per your original posting) to find literature describing
> what you seek, no one here appears to be able to help you. I suspect that
> no such literature exists (I could be wrong).

Peter Norvig's article that Patrick cited is an excellent example. Peter
discussed only dynamically typed functional languages like Lisp and I think
a lot more can be said in the context of statically typed functional
languages as they provide many (all?) of the type checking benefits of
statically typed OO languages.

Patrick May

Jul 8, 2007, 4:13:10 PM
S Perryman <q...@q.net> writes:
[ . . . ]

> True, but not relevant to the topic at hand.
[ . . . ]

> True, but not relevant to the topic at hand.
[ . . . ]

> True, but not relevant to the topic at hand.
>
> Do you see a pattern emerging (and what that means by your statement
> immediately above) ... :-)

Yes, but it's not relevant to the topic at hand. ;-)

> The fact that for many of these "problems" , prevailing FP langs
> provide better means (by several measures) to solve the problem than
> prevailing OOP langs, does not mean that all OOP does here is
> "re-invent" what FP does.

I agree. I was just replying to Mr. Lahman's post regarding
patterns being part of the solution domain rather than the problem
domain.

Rainer Joswig

Jul 8, 2007, 5:38:48 PM
In article <m2vecue...@spe.com>, Patrick May <p...@spe.com> wrote:

> "H. S. Lahman" <h.la...@verizon.net> writes:
> > Responding to Harrop...
> >> Function object => Closure
> >> Factory => Curried function
> >> Singleton => Value
> >> Command => Variant
> >> Iterator => Fold/Zipper
> >> Chain of Responsibility => Event
> >> Inversion of Control => Continuation passing style
> >
> > As Perryman's response (and yours to him!) indicate, FPLs and OOPLs
> > both provide syntax that enable the implementation of design
> > patterns. My point was that the design patterns exist independently
> > of the solution paradigm.
>
> Peter Norvig noted almost 10 years ago that 16 of the 23 patterns
> in the GoF book are "invisible or simpler" in Lisp
> (http://norvig.com/design-patterns). This suggests that, rather than
> existing independently of the solution domain, many design patterns
> are actually standardized workarounds for limitations in the language
> used for implementation.

Actually the Lisp and Dylan he has used both have strong OO features
that mix functional and object-oriented programming.
There is some tradition in Lisp for 'Knowledge Representation',
so lots of different patterns have been expressed in Lisp
and libraries/languages on top. It has features that makes
some patterns easy to express or to extend the vocabulary
so that patterns can be expressed.

BUT!

The patterns in the GoF book were just a small beginning.
There are hundreds of patterns (-> Booch) and many are not
that easy to express as language construct in most languages.

Patterns that are easily expressible in a functional language
are quite different from those that are easily expressible
in an object-oriented language.

A lot of architectural thinking uses ideas from the world
we see. Object-oriented architecture has some strong relationship
to our ideas of the world. We see objects. We see that
objects have an identity. There is communication between
objects. We have categories for objects (chair, furniture,
four-legged-thing, ...). Patterns around these ideas
are more easily expressed in object-oriented languages.
The first object-oriented language was a simulation language
(Simula).
That a few of these patterns are somehow trivial to express
in some languages is not that important. There are lots of patterns that
are not that easy to express by built-in language features.
We need to extend the language to express those or we can
try to replicate these patterns by more complex use of
built-in language features.

A mathematician may have other ways to model things
she sees. She has a very different vocabulary. She may
use stochastic models, markov chains, functional
process models, algebras, theorems, ... This
vocabulary is not easily expressible in an object-oriented
language (there have been some attempts). Patterns
around these can be expressed, but it is not trivial.
It is a different way of thinking.

The pattern community has a strong relation to object-oriented
design. But I'd say the functional way of thinking has also
lots of different patterns - patterns that you can
find in mathematics or physics books for example.

>
> Paul Graham touches on this in his essay "Revenge of the Nerds":
>
> "When I see patterns in my programs, I consider it a sign of
> trouble. The shape of a program should reflect only the problem
> it needs to solve. Any other regularity in the code is a sign,
> to me at least, that I'm using abstractions that aren't powerful
> enough -- often that I'm generating by hand the expansions of
> some macro that I need to write."
>
> (http://www.paulgraham.com/icad.html)
>
> I find this argument compelling. Many patterns are symptoms of a
> problem, not aspects of a solution.
>
> Regards,
>
> Patrick
>
> ------------------------------------------------------------------------
> S P Engineering, Inc. | Large scale, mission-critical, distributed OO
> | systems design and implementation.
> p...@spe.com | (C++, Java, Common Lisp, Jini, middleware, SOA)

--
http://lispm.dyndns.org

Patrick May

Jul 8, 2007, 6:05:57 PM
Rainer Joswig <jos...@lisp.de> writes:
> In article <m2vecue...@spe.com>, Patrick May <p...@spe.com> wrote:
>> Peter Norvig noted almost 10 years ago that 16 of the 23
>> patterns in the GoF book are "invisible or simpler" in Lisp
>> (http://norvig.com/design-patterns). This suggests that, rather
>> than existing independently of the solution domain, many design
>> patterns are actually standardized workarounds for limitations in
>> the language used for implementation.
[ . . . ]

> The patterns in the GoF book were just a small beginning. There are
> hundreds of patterns (-> Booch) and many are not that easy to
> express as language construct in most languages.

In most languages, certainly. In a language like Common Lisp
that supports multiple programming styles natively and that is highly
extensible, it is possible to raise the level of abstraction that can
be managed to an extent that the implementations of many patterns
become equivalent to core language features.

> Patterns that are easily expressible in a functional language are
> quite different from those that are easily expressible in an
> object-oriented language.

True, which is why I like multi-style languages.

[ . . . ]


> That a few of these patterns are somehow trivial to express in some
> languages is not that important. There are lots of patterns that are
> not that easy to express by built-in language features. We need to
> extend the language to express those or we can try to replicate
> these patterns by more complex use of built-in language features.

I agree with the final sentiment, but that leads me to disagree
with your first sentence. There is significant benefit to be gained
from languages that support that level of extensibility.

Daniel T.

Jul 8, 2007, 6:12:03 PM
Jon Harrop <j...@ffconsultancy.com> wrote:
> Daniel T. wrote:

> > I'm downloading and building OCaml now, in the meantime...
> > How about an alarm clock example? Or simply a program that prints a '.'
> > on the screen every second?
> >
> > How are such things done in a stateless environment?
>
> Take your last example. You can implement it statefully in OCaml using a
> mutable reference:
>
> # let clock () =
> let ot = ref (int_of_float(Unix.gettimeofday())) in
> while true do
> let t = int_of_float(Unix.gettimeofday()) in
> Printf.printf "%s%!" (String.make (t - !ot) '.');
> ot := t;
> done;;
> # clock ();;
> ......

Mutable reference? That doesn't sound like it satisfies the functional
paradigm.

> or statelessly as a recursive function that passes state to itself:
>
> # let rec clock ot =
> let t = int_of_float(Unix.gettimeofday()) in
> Printf.printf "%s%!" (String.make (t - ot) '.');
> clock t;;
> val clock : int -> 'a = <fun>
> # clock (int_of_float(Unix.gettimeofday()));;
> ......
>
> This recursive "clock" function accepts the last time it was invoked (as an
> int "ot") and prints t-ot dots before recursing with "t" as the last time
> it was invoked. The "String.make n c" makes a string of "n" characters "c".
> The "%!" in the call to printf is a flush.

Well, I obviously don't know OCaml.

What does Unix.gettimeofday() do exactly? As I understand the functional
> paradigm, it must return the same value every time it is called, but to
solve the problem, it can't do that. Also, to solve the problem, it
can't return immediately but must wait a second before returning...

Rainer Joswig

Jul 8, 2007, 6:57:09 PM
In article <m2myy6e...@spe.com>, Patrick May <p...@spe.com> wrote:

> Rainer Joswig <jos...@lisp.de> writes:
> > In article <m2vecue...@spe.com>, Patrick May <p...@spe.com> wrote:
> >> Peter Norvig noted almost 10 years ago that 16 of the 23
> >> patterns in the GoF book are "invisible or simpler" in Lisp
> >> (http://norvig.com/design-patterns). This suggests that, rather
> >> than existing independently of the solution domain, many design
> >> patterns are actually standardized workarounds for limitations in
> >> the language used for implementation.
> [ . . . ]
> > The patterns in the GoF book were just a small beginning. There are
> > hundreds of patterns (-> Booch) and many are not that easy to
> > express as language construct in most languages.
>
> In most languages, certainly. In a language like Common Lisp
> that supports multiple programming styles natively and that is highly
> extensible, it is possible to raise the level of abstraction that can
> be managed to an extent that the implemenations of many patterns
> become equivalent to core language features.

That is not the point. The point is that you need to extend
the initial language to create such a language. Implementing the
language can be easier or more elegant. Still the work may be significant.
See below for an example.

> > Patterns that are easily expressible in a functional language are
> > quite different from those that are easily expressible in an
> > object-oriented language.
>
> True, which is why I like multi-style languages.
>
> [ . . . ]
> > That a few of these patterns are somehow trivial to express in some
> > languages is not that important. There are lots of patterns that are
> > not that easy to express by built-in language features. We need to
> > extend the language to express those or we can try to replicate
> > these patterns by more complex use of built-in language features.
>
> I agree with the final sentiment, but that leads me to disagree
> with your first sentence. There is significant benefit to be gained
> from languages that support that level of extensibility.

Still it is far from trivial.

An example problem: Speech recognition. Signal comes in.
Many knowledge sources on various levels are needed to
process and interpret the signal. Feedback
from all levels to other levels is necessary.

How can we organize an architecture of these knowledge sources?

One approach is a blackboard architecture
(http://en.wikipedia.org/wiki/Blackboard_system).

A blackboard architecture is far from trivial in most
languages regardless of their extensibility.
A C++ implementation 'could' be less elegant, but the
complexity of the implementation would be similar to
one in Java or Lisp. All implementations likely
would use some OO features.

For a purely functional modelling language, the blackboard would
probably be less attractive as an architecture.
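As a rough illustration of the idea (mine, and far simpler than any real speech system; the `fact` encoding and the two knowledge sources are invented): knowledge sources are functions that inspect the shared blackboard and may post new facts, and a controller loops until no source contributes anything new.

```ocaml
(* Minimal blackboard sketch: facts are strings, sources are functions. *)
type fact = string

let run (sources : (fact list -> fact list) list) (initial : fact list) =
  let board = ref initial in
  let changed = ref true in
  while !changed do
    changed := false;
    List.iter
      (fun source ->
        List.iter
          (fun f ->
            if not (List.mem f !board) then begin
              board := f :: !board;   (* post a new fact *)
              changed := true
            end)
          (source !board))
      sources
  done;
  !board

(* Two toy knowledge sources feeding each other through the board. *)
let phonemes board = if List.mem "signal" board then ["phoneme:k"] else []
let words board = if List.mem "phoneme:k" board then ["word:cat"] else []

let () =
  let board = run [phonemes; words] ["signal"] in
  assert (List.mem "word:cat" board)
```

Note how the feedback between levels falls out of the loop: `words` fires only after `phonemes` has posted, with neither source knowing about the other.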

>
> Regards,
>
> Patrick
>
> ------------------------------------------------------------------------
> S P Engineering, Inc. | Large scale, mission-critical, distributed OO
> | systems design and implementation.
> p...@spe.com | (C++, Java, Common Lisp, Jini, middleware, SOA)

--
http://lispm.dyndns.org

Daniel T.

Jul 8, 2007, 7:12:08 PM
Jon Harrop <j...@ffconsultancy.com> wrote:
> Daniel T. wrote:

> > I know how to program Pong in an imperative programming language as
> > well. Pong is simple (at least in imperative programming languages,) is
> > there a reason that it is too complex an example in an FP language?
>
> Pong is probably best implemented in an imperative style (note that OCaml is
> also an imperative language, as the above example showed). For slightly
> more complicated examples (even a Sudoku solver) the back end of a GUI
> application can be written very elegantly in a purely functional style
> (without mutation). I would still use mutation in the front-end though.

I get the impression that any problem that deals with change over time
works best written in an imperative style.

The computer itself is a state machine that contains information that
mutates over time.

Jon Harrop

Jul 8, 2007, 7:12:24 PM
Daniel T. wrote:
> Mutable reference? That doesn't sound like it satisfies the functional
> paradigm.

Functional programming languages are divided into purely and impurely
functional. Only purely functional languages (e.g. Haskell) prohibit
mutation.

You can mutate state all you want in OCaml and other impure functional
languages. Indeed, OCaml provides many mutable data structures (e.g. arrays
and hash tables) as well as immutable ones (e.g. lists, sets and maps).
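A small illustration of that mix (mine, not from the thread): arrays are updated in place, while "updating" a list builds a new value and leaves the original untouched.

```ocaml
(* Mutable: arrays support in-place assignment. *)
let a = [| 1; 2; 3 |]
let () = a.(0) <- 10

(* Immutable: mapping over a list yields a new list; the old one survives. *)
let xs = [ 1; 2; 3 ]
let ys = List.map (fun x -> x * 10) xs

let () =
  assert (a.(0) = 10);
  assert (xs = [ 1; 2; 3 ]);
  assert (ys = [ 10; 20; 30 ])
```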

> What does Unix.gettimeofday() do exactly?

Returns the time in seconds since the Unix epoch:

http://www.scit.wlv.ac.uk/cgi-bin/mansec?3C+gettimeofday

> As I understand the functional
> paradigm, it must return the same value every time it is called, but to
> solve the problem, it can't do that. Also, to solve the problem, it
> can't return immediately but must wait a second before returning...

Those are the requirements of a purely functional programming language like
Haskell. I'm only just learning about them myself but the basic idea is
that you hide anything that isn't pure in a "monad". In practice, the
source code looks almost identical and most monads are hidden from you.

For example, you can get the current time in Haskell with Time.getClockTime:

Prelude> Time.getClockTime
Mon Jul 9 00:06:28 BST 2007

but if you try to make a pair of two times then Haskell complains,
essentially because the order of evaluation is unspecified:

Prelude> (Time.getClockTime, Time.getClockTime)

<interactive>:1:0:
No instance for (Show (IO System.Time.ClockTime))
arising from use of `print' at <interactive>:1:0-37
Possible fix:
add an instance declaration for (Show (IO System.Time.ClockTime))
In the expression: print it
In a 'do' expression: print it

Many people think that purely functional programming is impractical but it
has recently grown considerably in popularity, with many general interest
books on Haskell being published.

Jon Harrop

Jul 8, 2007, 7:48:06 PM
Daniel T. wrote:
> I get the impression that any problem that deals with change over time,
> works best written in an imperative style.

I usually choose an impure functional style. For example, mutation seems
like a good choice when you're handling user input in a GUI application.

But a functional style is also hugely beneficial in that setting because you
can register callbacks so much more easily when functions are first class
values in the language (compared to deriving a class and overriding
methods, remembering to call the handlers in the parent class etc.).
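To make that concrete, here is a minimal sketch of callback registration with first-class functions (illustrative Python with invented names, not any real toolkit's API):

```python
# Illustrative sketch: registering a closure as a callback, versus the
# OO route of deriving a widget class and overriding a handler method.
# Button and on_click are invented for the example.

class Button:
    def __init__(self):
        self._handlers = []

    def on_click(self, handler):
        # Any callable can be registered; no subclass needed.
        self._handlers.append(handler)

    def click(self):
        for handler in self._handlers:
            handler()

clicks = []
button = Button()
button.on_click(lambda: clicks.append("clicked"))  # closure as callback
button.click()
print(clicks)  # ['clicked']
```

The closure captures `clicks` directly, which is the convenience being described: no parent-class handler chain to remember.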

> The computer itself is a state machine that contains information that
> mutates over time.

Yes. It is the job of the compiler to translate a high-level representation
of a solution into a program for such a machine. Excel and Haskell are
beautiful examples of purely functional languages.

Check out this Haskell implementation of the Burrows-Wheeler block sorting
data compression algorithm:

module BWT where

import Data.List

rotate1 :: [x] -> [x]
rotate1 [] = []
rotate1 xs = last xs : init xs

rotate :: [x] -> [[x]]
rotate xs = take (length xs) (iterate rotate1 xs)

bwt :: (Ord x) => [x] -> [x]
bwt = map last . sort . rotate

and that was a first attempt posted by a newbie! =8-)

I haven't quite worked it out myself yet, but the "iterate rotate1 xs"
generates an infinite list...
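For readers following along, the same naive block sort can be transliterated with an explicit loop standing in for the truncated infinite `iterate` (an illustrative Python translation, not the original poster's code):

```python
# Illustrative Python translation of the naive Haskell BWT above.
# `rotations` mirrors `take (length xs) (iterate rotate1 xs)`: the
# iteration is conceptually infinite; we simply stop after len(xs) steps.

def rotate1(xs):
    # Move the last element to the front (Haskell: last xs : init xs).
    return xs[-1:] + xs[:-1] if xs else xs

def rotations(xs):
    out = []
    for _ in range(len(xs)):
        out.append(xs)
        xs = rotate1(xs)
    return out

def bwt(xs):
    # map last . sort . rotate
    return "".join(row[-1] for row in sorted(rotations(xs)))

print(bwt("banana"))  # nnbaaa
```

Laziness is what makes the Haskell version tidy: `iterate` really does describe an infinite list, and `take` consumes only the prefix it needs.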

Patrick May

unread,
Jul 8, 2007, 7:56:42 PM
to
Rainer Joswig <jos...@lisp.de> writes:
>> > That a few of these patterns are somehow trivial to express in
>> > some languages is not that important. There are lots of patterns
>> > that are not that easy to express by built-in language features.
>> > We need to extend the language to express those or we can try to
>> > replicate these patterns by more complex use of built-in language
>> > features.
>>
>> I agree with the final sentiment, but that leads me to
>> disagree with your first sentence. There is significant benefit to
>> be gained from languages that support that level of extensibility.
>
> Still it is far from trivial.

Implementation can be far from trivial, but the resulting
abstraction should be straightforward to use. Otherwise, what's the
point?

> An example problem: Speech recognition. Signal comes in. Many
> knowledge sources on various levels are needed to process and
> interpret the signal. Feedback from all levels to other levels is
> necessary.
>
> How can we organize an architecture of these knowledge sources?
>
> One approach is a blackboard architecture

I think we may have different definitions of "pattern". I
consider something on the scale of a blackboard system to be far
larger than a pattern. When I speak of patterns I'm referring to
something closer to an idiom or technique.

That being said, I can see the argument that the term can be used
to describe larger constructs. There does appear to be a qualitative
difference between the two types, though.

Daniel T.

unread,
Jul 8, 2007, 9:39:01 PM
to
Jon Harrop <j...@ffconsultancy.com> wrote:
> Daniel T. wrote:

> > I get the impression that any problem that deals with change over time,
> > works best written in an imperative style.
>
> I usually choose an impure functional style. For example, mutation seems
> like a good choice when you're handling user input in a GUI application.
>
> But a functional style is also hugely beneficial in that setting because you
> can register callbacks so much more easily when functions are first class
> values in the language (compared to deriving a class and overriding
> methods, remembering to call the handlers in the parent class etc.).

OK, so you have established that for some problems an imperative style
is better than a functional style of programming, and vice-versa. What
is it exactly that you are trying to show then?

> > The computer itself is a state machine that contains information that
> > mutates over time.
>
> Yes.

Living organisms are state machines that contain information that
mutates over time.

topmind

unread,
Jul 8, 2007, 11:56:03 PM
to
On Jul 6, 8:48 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Is there any literature describing how object-oriented design patterns
> reinvent existing techniques like functional programming?
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy

Paul Graham, a Lisp proponent, has suggested that most of the
documented design patterns are indicators of missing language
features. Comments from C2.com:

http://c2.com/cgi/wiki?AreDesignPatternsMissingLanguageFeatures

-T-

topmind

unread,
Jul 9, 2007, 12:01:51 AM
to

Does this allegedly extend to all domains, or only physical modeling
domains?

(snip)

> > Regards,
> >
> > Patrick
> >
> > ------------------------------------------------------------------------
> > S P Engineering, Inc. | Large scale, mission-critical, distributed OO
> > | systems design and implementation.
> > p...@spe.com | (C++, Java, Common Lisp, Jini, middleware, SOA)
>
> --
> http://lispm.dyndns.org

-T-
oop.ismad.com

S Perryman

unread,
Jul 9, 2007, 3:59:10 AM
to
Jon Harrop wrote:

> S Perryman wrote:

>>Jon Harrop wrote:

JH>OCaml programmers are free to choose between many different paradigms
JH>including OOP and FP and they almost always choose FP over OOP.

>>If you wish to troll about FP/OCaml vs OOP, feel free to do so.

> How can a statement about OOP in OCaml be construed as OCaml vs OOP?!

This thread started with a specific request from you.
Now it merely appears to be a "soap box" for you to talk about the
deficiencies of OOP vs FP (a talk that tells someone like me nothing
that I don't already know) .

So, do you wish to change the topic of this thread from your original
request, to something like : FP solves particular problems in a
simpler/more consistent/more elegant manner than OOP design patterns ... ??


>>If you wish (as per your original posting) to find literature describing
>>what you seek, no one here appears to be able to help you. I suspect that
>>no such literature exists (I could be wrong).

> Peter Norvig's article that Patrick cited is an excellent example.

Not at all.
The paper states that FP langs such as Lisp have simpler constructs to
solve the problems that OOP uses design patterns to solve.

But (once again) , nothing stating that OO design patterns "re-invent
existing techniques" .


> Peter
> discussed only dynamically typed functional languages like Lisp and I think
> a lot more can be said in the context of statically typed functional
> languages as they provide many (all?) of the type checking benefits of
> statically typed OO languages.

True, but not relevant to the topic at hand (as you originally posted it) .


Regards,
Steven Perryman

Dmitry A. Kazakov

unread,
Jul 9, 2007, 4:36:33 AM
to
On Sun, 08 Jul 2007 20:55:13 +0100, Jon Harrop wrote:

> Dmitry A. Kazakov wrote:
>> On Sun, 08 Jul 2007 18:35:51 +0100, Jon Harrop wrote:
>>> Dmitry A. Kazakov wrote:
>>>>> How do you add that to the object oriented design?
>>>>
>>>> No problem actually. You first derive from abstract expression an
>>>> abstract infix operation (that is - two arguments one result) and then
>>>> specialize it to concrete operation.
>>>
>>> Could you post working code for that design, please?
>>
>> http://www.dmitry-kazakov.de/ada/components.htm#Parsers_etc
>>
>> It contains a complete sample for building parsing trees of Ada 95 infix
>> expressions with all bells and whistles.
>
> Note that my post that you were replying to was about a term rewriter rather
> than a parser.

That should not change much.

> Regardless, parsing is another interesting example. I
> believe your 239-line Ada program is equivalent to the following 23-line
> OCaml program:
>
> open Camlp4.PreCast;;
>
> let expr = Gram.Entry.mk "expr";;
>
> EXTEND Gram
> expr:
> [ "sum" LEFTA
> [ e1 = expr; "+"; e2 = expr -> e1 +. e2
> | e1 = expr; "-"; e2 = expr -> e1 -. e2 ]
> | "product" LEFTA
> [ e1 = expr; "*"; e2 = expr -> e1 *. e2
> | e1 = expr; "/"; e2 = expr -> e1 /. e2 ]
> | "power" LEFTA
> [ e1 = expr; "**"; e2 = expr -> e1 ** e2 ]
> | "simple" NONA
> [ n = INT -> float_of_string n
> | x = FLOAT -> float_of_string x
> | "("; e = expr; ")" -> e ] ];
> END;;

The above is not Ada's grammar, which is far more complex. See

http://www.cs.vu.nl/grammars/ada/#gdef:expression

The problem is in additional constraints, like a**-b (illegal). This is the
problem with grammars, which turn very verbose on real-life examples. You
cannot come away with just one "expr."

Further, diagnostics will be a mess, because in the above you have only
match or no match, while what is required is a lot of semi-match states
where you could hint to the user what was wrong. This is where internal states
shine.

>> Yes, but you definitely wanted to fold constants in expressions like
>> 1+a+2.
>
> You might want to do that. If you do, just sort by commuting and the
> existing reduction rules will take care of constant folding. Something like
> this:
>
> | Add(Add(f, g), h) when g>h -> f +: h +: g
> | Add(f, g) when f>g -> g +: f

What about 1+b+a+2? What about 1-b+a-2? The approach does not scale. Which
is the point. There are many languages in which you can solve some simple
task in a few lines. But this does not imply that you could get quality
production code in any reasonable time. You certainly know the 80/20 law.
Software design is not about that 80%, it is about what to do with these
damned 20% of the rest.

[Please, don't consider this as a language flame, I have nothing against
"oh, calm" (:-)) et al. Just, code snippets prove nothing.]

> If we're talking about parsers then the handling of precedence levels should
> be clear from the above example.

Yep, you can trash them into the grammar, and in fact, people are doing
this all the time. But the result will most likely be an unreadable mess,
which is again the point.

>>> What would the equivalent be in an object oriented design?
>>
>> Follow the link I have provided. The focus of the design was to avoid
>> hard-coded grammar. Therefore decomposition goes along different lines. It
>> is operations, arguments, sources and parsers. The approach is
>> table-driven, that is - lexical elements, like infix operations, their
>> precedence are defined dynamically. Static are only the classes of, like
>> operator, bracket, comma, ligature. This gives you the operation and
>> argument stack objects in a fully generic way.
>
> Does that design have any advantages over the functional solution above?

Yes, because of code reuse. It is modular, you don't need to rewrite (and
to test) all the code in order to parse, say C++. Other issues are
diagnostics, consider marking up the errors in the source. (BTW, what is a
functional equivalent to "marking in"? (:-)) Error recovery, what about
skipping to the end of the expression? You need two different grammars for
this and switching forth and back. To colorize the source there would be a
third grammar, and don't forget expansion of the methods when the cursor
hovers over... (Uhmm, "to hover" is it functional? (:-))

> I would still be interested to see an object oriented equivalent to the term
> rewriter. :-)

Rewriter is defined as a function, its OO equivalent would be again a
function. Don't expect me to reinvent wheels. (:-))

But, what about sets of rewriters?

1. How would you describe a class of rewriters with certain properties? How
could a user of the class design a program in terms of these properties?

2. How would you extend / modify a rewriter? When you have it as a
function, there is little you could do with it beyond construction of a
composition with some another function.

If you wanted to "programmatically" look into a rewriter, then it would
become interesting. You could decompose it functional, you could do it as a
state machine, you could use a stack automaton.

OO does not limit you here in any way. Probably differently to some people
here, I don't see any opposition between FP and OO. You don't like stateful
objects? I don't either. But I see reasons when an object should better
have a state.

JXStern

unread,
Jul 9, 2007, 5:16:57 AM
to
On Sun, 08 Jul 2007 20:56:03 -0700, topmind <top...@technologist.com>
wrote:

Excellent. I am very much in that camp.

J.

Dmitry A. Kazakov

unread,
Jul 9, 2007, 5:58:05 AM
to

Certainly. But note that once a set of patterns gets digested by languages
in the form of new or better features, soon a new set will become
available.

Patterns are a glimpse forward into the abyss of Goedel incompleteness...

S Perryman

unread,
Jul 9, 2007, 6:40:08 AM
to
Dmitry A. Kazakov wrote:

> On Mon, 09 Jul 2007 09:16:57 GMT, JXStern wrote:

TM>Paul Graham, a Lisp proponent, has suggested that most of the
TM>documented design patterns are indicators of missing language
TM>features. Comments from C2.com:

TM>http://c2.com/cgi/wiki?AreDesignPatternsMissingLanguageFeatures

>>Excellent. I am very much in that camp.

> Certainly. But note that once a set of patterns gets digested by languages
> in the form of new or better features, soon a new set will become available.

> Patterns are a glimpse forward into the abyss of Goedel incompleteness...

Not just patterns.
The quest for ever more powerful abstraction/semantic constructs to
develop systems will push prog lang development forwards.

However it is notable that the reality is a world of stagnation (30-40 yrs
old) , whatever programming paradigm (and its mainstream/predominant prog
langs) one scrutinises.


Regards,
Steven Perryman

Dmitry A. Kazakov

unread,
Jul 9, 2007, 8:23:57 AM
to
On Mon, 09 Jul 2007 11:40:08 +0100, S Perryman wrote:

> Dmitry A. Kazakov wrote:
>
>> On Mon, 09 Jul 2007 09:16:57 GMT, JXStern wrote:
>
> TM>Paul Graham, a Lisp proponent, has suggested that most of the
> TM>documented design patterns are indicators of missing language
> TM>features. Comments from C2.com:
>
> TM>http://c2.com/cgi/wiki?AreDesignPatternsMissingLanguageFeatures
>
>>>Excellent. I am very much in that camp.
>
>> Certainly. But note that once a set of patterns gets digested by languages
>> in the form of new or better features, soon a new set will become available.
>
>> Patterns are a glimpse forward into the abyss of Goedel incompleteness...
>
> Not just patterns.
> The quest for ever more powerful abstraction/semantic constructs to
> develop systems will push prog lang development forwards.

Sure. It would become interesting in some future. Clearly we cannot move
forward without leaving something behind. The language is a bubble raising
up abstraction levels, it cannot remain a shaft. At some point the bubble
will start to lose more and more substantial things. That will be the end
of manned software engineering.

> However it is notable that the reality is a world of stagnation (30-40 yrs
> old) , whatever programming paradigm (and its mainstream/predominant prog
> langs) one scrutinises.

Agree, though it is rather a universal phenomenon. We don't spend our
weekends on the moon. 40 years ago Summa Technologiae considered such
scenarios, when the technological progress [and the civilisation] would
implode at the point of "Mach=1", when there are no available resources left
to continue research on the whole front. It seems that we are reaching the
barrier much sooner than was expected...

gpuchtel

unread,
Jul 9, 2007, 8:57:11 AM
to
Perhaps the pattern here is the effect any dogma has on its disciples.
Sometimes this effect is so strong, it can compel those who have
achieved the esteemed title of Dr. to exhibit the most juvenile of
behaviors: "Mine is better than yours", "I'm right, you're wrong".
Boy, am I glad that I am not as self-righteous as those around me. ;-)

H. S. Lahman

unread,
Jul 9, 2007, 10:49:55 AM
to
Responding to May...

> Peter Norvig noted almost 10 years ago that 16 of the 23 patterns
> in the GoF book are "invisible or simpler" in Lisp
> (http://norvig.com/design-patterns). This suggests that, rather than
> existing independently of the solution domain, many design patterns
> are actually standardized workarounds for limitations in the language
> used for implementation.

Simpler than what? All Norvig is pointing out is that LISP has syntax to
support design patterns that is simpler to use than the OOPLs'. Nobody
is arguing that the OO paradigm doesn't lead to verbose 3GL code; that's
one of the fundamental trade-offs that the paradigm makes versus
maintainability.

The key question remains: how does the code author know to use that
particular syntax and how to use it properly to solve the problem in
hand? That insight is the design pattern.

>
> Paul Graham touches on this in his essay "Revenge of the Nerds":
>
> "When I see patterns in my programs, I consider it a sign of
> trouble. The shape of a program should reflect only the problem
> it needs to solve. Any other regularity in the code is a sign,
> to me at least, that I'm using abstractions that aren't powerful
> enough -- often that I'm generating by hand the expansions of
> some macro that I need to write."
>
> (http://www.paulgraham.com/icad.html)
>
> I find this argument compelling. Many patterns are symptoms of a
> problem, not aspects of a solution.

Alas, I think this misses the point. What is a design pattern? It is a
generalization of a solution that can be implemented in a variety of
similar but different problem contexts. IOW, a design pattern is
/always/ more abstract than the specific problem being solved; otherwise
it wouldn't be a pattern.

The corollary is that the design pattern always needs a specific
implementation for the problem in hand. That's why one can't implement
design patterns once and put them in libraries. IOW, if the pattern
could be implemented once and reused, it wouldn't be a design pattern;
it would just be implementation reuse.

BTW, if one looks at the GoF patterns almost all of them are designed to
solve a single class of problems: managing OO associations that require
context-based dynamic substitution that is too complex to express in
terms of a simple association. IOW, most of the GoF problems start off with:

[Client] ------------- [Context]

where the simple problem space association doesn't cut it. The various
patterns just represent several different contexts where that can be a
problem.

That general problem happens to be uniquely OO in nature because the OO
paradigm relies so heavily on relationships and their navigation when
solving problems. So if one is employing a design methodology that
relegates relationships to no role or a minor role, one would expect
such dynamic substitution to be managed in some other way. IOW, one
would need a /different/ set of design patterns to solve that class of
problem that didn't carnally involve relationships. More to the point,
if the methodology was sufficiently different, then dynamic substitution
itself might not even be an issue.

IOW, the comparison of the implementations of OO design patterns to
implementations in other paradigms is pretty much apples & oranges (at
least as far as most of the GoF patterns are concerned). Almost all of
the GoF patterns employ generalization structures and inclusion
polymorphism to reify simple associations. How many non-OO paradigms
rely on relationships in a fundamental way and make heavy use of
generalization and inclusion polymorphism?
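As an illustration of that reification (a hedged Python sketch with invented names; Strategy is the minimal GoF case of a [Client]---[Context] association made substitutable through generalization and inclusion polymorphism):

```python
# Illustrative sketch: a [Client] --- [Context] association reified as
# a generalization hierarchy, so the bound context can be substituted
# dynamically without the client changing. All names are invented.

from abc import ABC, abstractmethod

class Pricing(ABC):                 # abstract [Context]
    @abstractmethod
    def price(self, amount: float) -> float: ...

class Regular(Pricing):             # concrete, substitutable contexts
    def price(self, amount):
        return amount

class Discounted(Pricing):
    def price(self, amount):
        return amount - 10.0        # flat discount, for the example

class Checkout:                     # [Client] navigates the association
    def __init__(self, pricing: Pricing):
        self.pricing = pricing      # the reified association

    def total(self, amount):
        return self.pricing.price(amount)

# Dynamic substitution: the client code is identical either way.
print(Checkout(Regular()).total(100.0))     # 100.0
print(Checkout(Discounted()).total(100.0))  # 90.0
```

In a language with first-class functions the same substitution is just passing a different function for `pricing`, which is why such patterns shrink there.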


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
h...@pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development". Email
in...@pathfindermda.com for your copy.
Pathfinder is hiring:
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH

H. S. Lahman

unread,
Jul 9, 2007, 11:15:16 AM
to
Responding to Jacobs...

> Paul Graham, a Lisp proponent, has suggested that most of the
> documented design patterns are indicators of missing language
> features. Comments from C2.com:
>
> http://c2.com/cgi/wiki?AreDesignPatternsMissingLanguageFeatures

To some extent that is true. It is a matter of the level of abstraction.
3GL procedural block structuring, scope, and message passing are design
patterns for abstracting Turing hardware mechanics. The 3GLs exist to
implement those abstractions as language features. Similarly 4GL DSLs
exist to implement higher level abstractions as language features and
RAD IDEs exist to implement UI and RDB paradigms as language features.

To that extent Graham is right that we just don't have languages today
that directly capture the design abstractions that we routinely use
today. But as soon as we do we will just have a new set of more abstract
design patterns that aren't directly expressible in those languages that
will have to be used. IOW, design patterns are more abstract that the
languages that implement them by definition; they exist to solve design
problems in using the language.

IOW, one can push back the level of abstraction of design patterns by
upgrading language syntax to higher levels of abstraction. But whatever
the language level of abstraction, there will always be a next higher
level of design patterns that needs to be implemented with the
languages' lower level abstractions.

Unfortunately, as Kazakov points out, there are limits. By the time the
languages get sophisticated enough so that all one needs to do is encode
doIt() to solve any problem, the machines that execute those languages
will have replaced us long before then.

gpuchtel

unread,
Jul 9, 2007, 1:13:28 PM
to
I'd be curious to know how large and complex the largest (known) OCaml
project/solution/product is. I understand and can't dispute the claim
that OCaml requires less lines of code than an OOPL; however, just how
many lines of code are we talking in the largest known OCaml project?
Is it greater or less than 100 KLOC? Similarly, I'd be curious to know
how well an OCaml project is managed within a large established
organization of developers, say over 25.

Again, keeping "the right tool for the job" metaphor in mind, I
wouldn't necessarily employ extensive OO and design pattern idioms if
I was a team of one or two writing code for a thermostat. Is there any
literature describing how OCaml reinvents existing techniques for
handling complexity?


Daniel T.

unread,
Jul 9, 2007, 3:37:20 PM
to
gpuchtel <gpuc...@gmail.com> wrote:

> I'd be curious to know how large and complex the largest (known) OCaml
> project/solution/product is. I understand and can't dispute the claim
> that OCaml requires less lines of code than an OOPL; however, just how
> many lines of code are we talking in the largest known OCaml project?

Obviously, there are many factors that go into a language choice apart
from "lines of code". After all, a hundred lines of well understood code
is far better than 10 lines of apparently random characters. *Even if
they both cause the computer to do the exact same thing.*

Is this thread supposed to compare and contrast OCaml vs. C++? If so,
since the languages in question are both multi-paradigm languages, I
don't think comp.object is the correct place to discuss the issue. If
this discussion is supposed to be about which of the languages better
expresses OO concepts, then Mr. Harrop is presenting the wrong examples.

Or is it supposed to be about functional programming vs. OO programming?
If so, I suspect that any developer worth his salt knows the value of
the functional paradigm when solving static problems and every language
that I know of has at least some support for functional concepts. The
fact still remains; the underlying machine is an imperative device, the
programmer is an imperative being, doing an imperative job. The basic
ideas of imperative programming are both conceptually familiar and
directly embodied in the hardware.

Jon Harrop

unread,
Jul 9, 2007, 4:25:53 PM
to
H. S. Lahman wrote:
> How many non-OO paradigms rely on relationships in a fundamental way and
> make heavy use of generalization and inclusion polymorphism?

Most. Look at monads, continuation passing style and combinators from
functional languages, for example.

--
Dr Jon D Harrop, Flying Frog Consultancy

Dmitry A. Kazakov

unread,
Jul 9, 2007, 4:34:45 PM
to
On Mon, 09 Jul 2007 19:37:20 GMT, Daniel T. wrote:

> The
> fact still remains; the underlying machine is an imperative device, the
> programer is an imperative being, doing an imperative job. The basic
> ideas of imperative programming are both conceptually familiar and
> directly embodied in the hardware.

Analogue computers were declarative. More recent pipelined processors could
barely be considered imperative (in the sense of how much control the
programmer could have over the machine state). But one thing is clear: both are
much harder to handle.

Actually the distinction between imperative and declarative is largely
invented. It is rather "we understand the consequences" vs. "we don't." For
a car there is no difference whether you start the engine on or declare it
on. Just don't forget the key...

Jon Harrop

unread,
Jul 9, 2007, 4:53:26 PM
to
Dmitry A. Kazakov wrote:
> The above is not the Ada's grammar, which is far more complex.

Did your calculator example implement Ada's grammar?

> The problem is in additional constraints, like a**-b (illegal).

Depending upon what exactly is illegal (is a** -b illegal?), I would raise
an exception in the lexer or parser.

> Further, diagnostics will be a mess...

Why do you believe that?

>> You might want to do that. If you do, just sort by commuting and the
>> existing reduction rules will take care of constant folding. Something
>> like this:
>>
>> | Add(Add(f, g), h) when g>h -> f +: h +: g
>> | Add(f, g) when f>g -> g +: f
>
> What about 1+b+a+2?

Already handled => 3+a+b

> What about 1-b+a-2?

Already handled => -1+a-b
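To check those claims, here is a small sketch of the idea in Python (illustrative only: it canonicalizes a flattened +-chain, rather than reproducing the OCaml swap rules verbatim):

```python
# Illustrative sketch of how sorting by commutativity lets one
# constant-folding rule handle 1+b+a+2: flatten the +-chain, fold the
# constants, order the variables, rebuild left-associated.
# Terms are tuples: ("int", n), ("var", name), ("add", l, r).

def flatten(t):
    # Collect the leaves of a nested ("add", ...) chain.
    if t[0] == "add":
        return flatten(t[1]) + flatten(t[2])
    return [t]

def normalize(t):
    leaves = flatten(t)
    const = sum(n for tag, n in leaves if tag == "int")
    vars_ = sorted(name for tag, name in leaves if tag == "var")
    terms = [("int", const)] + [("var", v) for v in vars_]
    out = terms[0]
    for term in terms[1:]:      # rebuild as ((3+a)+b)
        out = ("add", out, term)
    return out

# 1+b+a+2, parsed left-associated: ((1+b)+a)+2
one_b_a_two = ("add", ("add", ("add", ("int", 1), ("var", "b")),
                       ("var", "a")), ("int", 2))
print(normalize(one_b_a_two))
# ('add', ('add', ('int', 3), ('var', 'a')), ('var', 'b'))  i.e. 3+a+b
```

Subtraction needs the leaves carried with signs, but the shape of the solution is the same.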

> The approach does not scale.

It seems to be scaling rather well but you need to make a comparison before
you can draw a justifiable conclusion.

> Which is the point.

Yes.

>> If we're talking about parsers then the handling of precedence levels
>> should be clear from the above example.
>
> Yep, you can trash them into the grammar, and in fact, people are doing
> this all the time. But the result will most likely be an unreadable mess,

You need to make a comparison before you can draw a justifiable conclusion.

> which is again the point.

Yes.

>> Does that design have any advantages over the functional solution above?
>
> Yes, because of code reuse. It is modular, you don't need to rewrite (and
> to test) all the code in order to parse, say C++.

Are they not both reusable and modular?

> Other issues are
> diagnostics, consider marking up the errors in the source. (BTW, what is a
> functional equivalent to "marking in"? (:-)) Error recovery, what about
> skipping to the end of the expression? You need two different grammars for
> this and switching forth and back. To colorize the source there would be a
> third grammar, and don't forget expansion of the methods when the cursor
> hovers over... (Uhmm, "to hover" is it functional? (:-))

I don't follow. I am implementing all of these things for an IDE and memory
profiler at the moment and they seem trivial with the above approach. Code
position is carried in the AST automatically and I am using it to highlight
heavily-allocating code.

>> I would still be interested to see an object oriented equivalent to the
>> term rewriter. :-)
>
> Rewriter is defined as a function,

A pattern match.

> its OO equivalent would be again a function.

There is no OO equivalent to the pattern match. You could reach for a design
pattern but it will not provide the same static checking or performance
without an enormous amount of effort (essentially writing a pattern
matcher).

> Don't expect me to reinvent wheels. (:-))

I would like to see an OO equivalent.
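Even a single rule suggests the shape such an equivalent would take: the pattern becomes type dispatch spread across a class hierarchy (an illustrative Python sketch, all names invented):

```python
# Illustrative sketch of an OO equivalent to one rewrite rule
# (x + 0 -> x): the pattern match becomes isinstance dispatch
# distributed across the expression classes.

class Expr:
    def simplify(self):
        return self              # default: nothing to rewrite

class Int(Expr):
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        return isinstance(other, Int) and self.n == other.n

class Var(Expr):
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, Var) and self.name == other.name

class Add(Expr):
    def __init__(self, l, r):
        self.l, self.r = l, r
    def simplify(self):
        l, r = self.l.simplify(), self.r.simplify()
        if isinstance(r, Int) and r.n == 0:   # the one "pattern"
            return l
        if isinstance(l, Int) and l.n == 0:
            return r
        return Add(l, r)

print(Add(Var("x"), Int(0)).simplify().name)  # x
```

Each new rule touches the class bodies rather than one rule table, and nothing checks that the cases are exhaustive; that is the verbosity and lost static checking being argued over.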

> But, what about sets of rewriters?
>
> 1. How would you describe a class of rewriters with certain properties?

A higher-order function parameterized over the properties.

> How a user of the class could design a program in terms of these
> properties?

Apply the appropriate arguments.

> 2. How would you extend / modify a rewriter?

Using a continuation.

> When you have it as a function, there is little you could do with it
> beyond construction of a composition with some another function.

Yes.

> If you wanted to "programmatically" look into a rewriter, then it would
> become interesting. You could decompose it functional, you could do it as
> a state machine, you could use a stack automaton.

Sounds like a task for monads.

> OO does not limit you here in any way.

With enough work you can solve the same problems with an OO design but
requiring 10x as much code to do the same thing is severely limiting in
practice.

> Probably differently to some people here, I don't see any opposition
> between FP and OO. You don't like stateful objects?

You are referring to a purely functional approach.

> I don't either. But I see reasons when an object should better have a
> state.

I agree. But why objects?

Jon Harrop

unread,
Jul 9, 2007, 4:38:01 PM
to
Daniel T. wrote:

> Jon Harrop <j...@ffconsultancy.com> wrote:
>> But a functional style is also hugely beneficial in that setting because
>> you can register callbacks so much more easily when functions are first
>> class values in the language (compared to deriving a class and overriding
>> methods, remembering to call the handlers in the parent class etc.).
>
> OK, so you have established that for some problems an imperative style
> is better than a functional style of programming, and vice-versa.

No, they are not mutually exclusive.

> What is it exactly that you are trying to show then?

You were asking what approach I would prefer for GUI programming. I prefer
an imperative functional style but I have not yet studied the purely
functional alternatives (functional reactive GUI programming).

>> > The computer itself is a state machine that contains information that
>> > mutates over time.
>>
>> Yes.
>
> Living organisms are state machines that contain information that
> mutates over time.

If you were implementing Conway's Game of Life, would you modify the states
in-place?
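For comparison, a purely functional step for the Game of Life computes each generation as a new value rather than mutating cells in place (an illustrative Python sketch):

```python
# Illustrative sketch: one Game of Life generation as a pure function.
# The board is an immutable frozenset of live (x, y) cells; stepping
# builds a new board instead of modifying the old one.

def neighbours(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def step(board):
    # Only live cells and their neighbours can be alive next turn.
    candidates = set(board)
    for cell in board:
        candidates.update(neighbours(cell))
    def alive(cell):
        n = sum(nb in board for nb in neighbours(cell))
        return n == 3 or (n == 2 and cell in board)
    return frozenset(c for c in candidates if alive(c))

blinker = frozenset({(0, 1), (1, 1), (2, 1)})   # horizontal bar
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

The old generation stays intact, so comparing, replaying, or parallelizing generations is trivial; whether that outweighs in-place update is exactly the trade-off under discussion.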

Jon Harrop

unread,
Jul 9, 2007, 5:15:15 PM
to
gpuchtel wrote:
> I'd be curious to know how large and complex the largest (known) OCaml
> project/solution/product is. I understand and can't dispute the claim
> that OCaml requires less lines of code than an OOPL; however, just how
> many lines of code are we talking in the largest known OCaml project?
> Is it greater or less than 100 KLOC? Similarly, I'd be curious to know
> how well an OCaml project is managed within a large established
> organization of developers, say over 25.

There are several such organizations such as Jane St. Capital and Microsoft
who probably maintain millions of lines of OCaml code. We're maintaining
around 250kLOC of OCaml code here, and a growing amount of F# code. Moving
from C++, code density improved ~4x and worst-case performance improved 5x.

> Again, keeping "the right tool for the job" metaphor in mind, I
> wouldn't necessarily employ extensive OO and design pattern idioms if
> I was a team of one or two writing code for a thermostat. Is there any
> literature describing how OCaml reinvents existing techniques for
> handling complexity?

That's just it, there is surprisingly little information about such things.
You get the odd article (e.g. Norvig's) and there are several academic
papers describing design patterns in functional programming languages but
there are no books. Most people are moving on to multi-paradigm languages
these days and I think it is rather a sore omission that all solutions
currently point to OOP when it is clearly not the panacea that it was hyped
to be.

--
Dr Jon D Harrop, Flying Frog Consultancy

Daniel T.

unread,
Jul 9, 2007, 6:56:02 PM7/9/07
to
Jon Harrop <j...@ffconsultancy.com> wrote:
> Daniel T. wrote:

> > What is it exactly that you are trying to show then?
>
> You were asking what approach I would prefer for GUI programming. I
> prefer an imperative functional style but I have not yet studied the
> purely functional alternatives (functional reactive GUI programming).

Let me be more clear with my question: why is it you are posting? Are
you claiming that functional solutions are superior to OO solutions in
all cases? Are you claiming they are superior in cases in which they are
superior? Are you just touting the benefits of OCaml over Java and C++?
Or, are you claiming nothing at all?

Patrick May

unread,
Jul 9, 2007, 10:15:52 PM7/9/07
to
"H. S. Lahman" <h.la...@verizon.net> writes:
> Responding to May...
>> Peter Norvig noted almost 10 years ago that 16 of the 23
>> patterns in the GoF book are "invisible or simpler" in Lisp
>> (http://norvig.com/design-patterns). This suggests that, rather
>> than existing independently of the solution domain, many design
>> patterns are actually standardized workarounds for limitations in
>> the language used for implementation.
>
> Simpler than what? All Norvig is pointing out is that LISP has syntax
> to support design patterns that is simpler to use than the
> OOPLs'. Nobody is arguing that the OO paradigm doesn't lead to
> verbose 3GL code; that's one of the fundamental trade-offs that the
> paradigm makes versus maintainability.

I always enjoy your posts here, but I've got to call shenanigans
on that statement. First, any inherent maintainability of a
particular programming paradigm is deeply subordinate to the quality
of the code written in that paradigm. Second, if you're claiming
that OO languages result in inherently more maintainable code and,
further, that these languages were deliberately designed to be more
verbose, and, even further, that this verbosity increases
maintainability, well . . . Lucy, you got some 'splainin to do.

> The key question remains: how does the code author know to use that
> particular syntax and how to use it properly to solve the problem in
> hand? That insight is the design pattern.

I'd argue that the insight and corresponding pattern is only
required if the functionality of the pattern is not supported directly
by the language.

>> Paul Graham touches on this in his essay "Revenge of the
>> Nerds":
>>
>> "When I see patterns in my programs, I consider it a sign of
>> trouble. The shape of a program should reflect only the
>> problem it needs to solve. Any other regularity in the code
>> is a sign, to me at least, that I'm using abstractions that
>> aren't powerful enough -- often that I'm generating by hand
>> the expansions of some macro that I need to write."
>>
>> (http://www.paulgraham.com/icad.html)
>>
>> I find this argument compelling. Many patterns are symptoms
>> of a problem, not aspects of a solution.
>
> Alas, I think this misses the point. What is a design pattern? It is
> a generalization of a solution that can be implemented in a variety
> of similar but different problem contexts.

As demonstrated by Peter Norvig, many patterns are only
applicable within the context of a particular language. It doesn't
make sense to speak of a "while loop pattern" in 3GLs, but it
qualifies as such in assembler. The Visitor pattern is essentially
meaningless in languages that support multiple dispatch, but useful in
C++, for example.
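
To make the Visitor comparison concrete, here is a minimal OCaml sketch (the `expr` type and function names are illustrative, not taken from any post in this thread). The double dispatch that Visitor encodes in C++ collapses into ordinary pattern matching:

```ocaml
(* A small expression type; illustrative only. *)
type expr =
  | Int of int
  | Add of expr * expr
  | Mul of expr * expr

(* Where C++ Visitor needs an abstract visitor class with one virtual
   method per node type, each "visitor" here is just a recursive
   function over the variant. *)
let rec eval = function
  | Int n -> n
  | Add (a, b) -> eval a + eval b
  | Mul (a, b) -> eval a * eval b

let rec show = function
  | Int n -> string_of_int n
  | Add (a, b) -> "(" ^ show a ^ " + " ^ show b ^ ")"
  | Mul (a, b) -> "(" ^ show a ^ " * " ^ show b ^ ")"

let () =
  let e = Add (Int 1, Mul (Int 2, Int 3)) in
  Printf.printf "%s = %d\n" (show e) (eval e)  (* prints (1 + (2 * 3)) = 7 *)
```

Adding a new operation (another traversal) is one new function; no existing class hierarchy needs to change.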

> IOW, a design pattern is /always/ more abstract than the specific
> problem being solved; otherwise it wouldn't be a pattern.

You're making the implicit assumption that all patterns are
aspects of the problem domain. In fact, many patterns are aspects of
the solution domain and their presence indicates a lack of power or
expressiveness in the language of the solution.

> The corollary is that the design pattern always needs a specific
> implementation for the problem in hand. That's why one can't
> implement design patterns once and put them in libraries.

Some patterns may fit this description, but Norvig's work
demonstrates that many do not.

[ . . . ]


> That general problem happens to be uniquely OO in nature because the
> OO paradigm relies so heavily on relationships and their navigation
> when solving problems. So if one is employing a design methodology
> that relegates relationships to no role or a minor role, one would
> expect such dynamic substitution to be managed in some other
> way. IOW, one would need a /different/ set of design patterns to
> solve that class of problem that didn't carnally involve
> relationships. More to the point, if the methodology was
> sufficiently different, then dynamic substitution itself might not
> even be an issue.
>
> IOW, the comparison of the implementations of OO design patterns to
> implementations in other paradigms is pretty much apples & oranges
> (at least as far as most of the GoF patterns are concerned). Almost
> all of the GoF patterns employ generalization structures and
> inclusion polymorphism to reify simple associations. How many non-OO
> paradigms rely on relationships in a fundamental way and make heavy
> use of generalization and inclusion polymorphism?

Interesting points, but I'd suggest that the problem isn't heavy
use of relationship navigation in OO techniques, but, again,
insufficiently expressive languages. Common Lisp, the first ANSI
standard "OO" language, avoids the need for many of the GoF patterns
because of its powerful macro facility. Because it is a highly
extensible language, design patterns can be incorporated into Common
Lisp as first class concepts. Many can be made to effectively
disappear.

JXStern

unread,
Jul 9, 2007, 11:23:02 PM7/9/07
to
On Mon, 9 Jul 2007 11:58:05 +0200, "Dmitry A. Kazakov"
<mai...@dmitry-kazakov.de> wrote:

>>>Paul Graham, a Lisp proponent, has suggested that most of the
>>>documented design patterns are indicators of missing language
>>>features. Comments from C2.com:
>>>
>>>http://c2.com/cgi/wiki?AreDesignPatternsMissingLanguageFeatures
>>
>> Excellent. I am very much in that camp.
>
>Certainly. But note that once a set of patterns gets digested by languages
>in the form of new or better features, soon a new set will become
>available.
>
>Patterns are a glimpse forward into the abyss of Goedel incompleteness...

Or a glimpse upward into the mountain of Hegelian dialectic ...

:)

But I think the point here is that Lisp doesn't have these problems
today, didn't have them twenty or forty years ago, and the need isn't
so much for progress as for best practices. Will there ever come a
time in the future when we are not our own worst enemies?

:)

Anyway, why not pursue this "progress" in any case, it helps to pass
the time ...

J.

Dmitry A. Kazakov

unread,
Jul 10, 2007, 5:45:04 AM7/10/07
to
On Mon, 09 Jul 2007 21:53:26 +0100, Jon Harrop wrote:

> Dmitry A. Kazakov wrote:
>> The above is not the Ada's grammar, which is far more complex.
>
> Did your calculator example implement Ada's grammar?

No, that is not Ada. The sample Ada 95 expression is at the end of
chapter 8:

http://www.dmitry-kazakov.de/ada/components.htm#8.9

>> The problem is in additional constraints, like a**-b (illegal).
>
> Depending upon what exactly is illegal (is a** -b illegal?),

Brackets are required:

2.0**-1 -- This is illegal
2.0**(-1) -- This is OK

> I would raise an exception in the lexer or parser.

How, non-functionally... (:-))



>> Further, diagnostics will be a mess...
>
> Why do you believe that?

Because diagnostics is a side-effect, while the result is a compiled
program. It does not fit into a purely functional model. You have to bend
your design or else face "states."

>>> If we're talking about parsers then the handling of precedence levels
>>> should be clear from the above example.
>>
>> Yep, you can trash them into the grammar, and in fact, people are doing
>> this all the time. But the result will most likely be an unreadable mess,
>
> You need to make a comparison before you can draw a justifiable conclusion.

Look at the grammar of Ada 95, I gave you the link. It does that thing and
also handles many constraints at the grammar level. If you consider it
readable, then well, I take my hat off to you.

>>> I would still be interested to see an object oriented equivalent to the
>>> term rewriter. :-)
>>
>> Rewriter is defined as a function,
>
> A pattern match.

No, a pattern match gives yes/no, rewriter translates from one
representation to another.


>> its OO equivalent would be again a function.
>
> There is no OO equivalent to the pattern match.

I am not sure what you mean. In the solution space, an OO equivalent is:

Source'Class -- The class of sources
Pattern'Class -- The class of patterns

function Match (X : in out Source; Y : Pattern) return Boolean;

If you mean pattern matching as a language feature, then you should
elaborate why do you believe it essential.

(I have some personal interest in pattern matching. There was a time I
worked much on it. If you are interested, see:

http://www.dmitry-kazakov.de/match/match.htm

At some point I came to the conclusion that pattern matching is not an
answer. IMO it brings more problems than it solves.)

> You could reach for a design
> pattern but it will not provide the same static checking or performance
> without an enormous amount of effort (essentially writing a pattern
> matcher).

Hmm, I don't see an obvious connection between "pattern matching" and
"design patterns." You could also add "pattern recognition" as known in AI
to the list. These three are way different things, as I understand them.

>> Don't expect me reinventing wheels. (:-))
>
> I would like to see an OO equivalent.

Functional decomposition is an integral part of OOP. [Many here would
probably disagree.]

>> But, what about sets of rewriters?
>>
>> 1. How would you describe a class of rewriters with certain properties?
>
> A higher-order function parameterized over the properties.

That means you need the properties to be evaluated to certain values. This
looks very constraining. Consider a property of being non-zero everywhere
except at a finite number of points.

>> How a user of the class could design a program in terms of these
>> properties?
>
> Apply the appropriate arguments.

That's too late. The whole idea is to know properties in advance during
design.

>> 2. How would you extend / modify a rewriter?
>
> Using a continuation.

That is the same as composition. BTW, this is a great problem of many OO
languages too. They also use extension = composition with some epilogues and
prologues, or overriding. That is too limiting in many cases. A typical
example is dispatching constructors / destructors.

>> OO does not limit you here in any way.
>
> With enough work you can solve the same problems with an OO design but
> requiring 10x as much code to do the same thing is severely limiting in
> practice.

I doubt it is a fixed ratio. It is rather a + b/x + b/x^2 + ... I would expect that
in a large project you will eventually have much *less* code in an OOP
variant than in a pure FP. If you give something away from FP and
OO-preachers do the same, then you will probably meet somewhere at the
optimum. Stateless design is a great advantage *when* appropriate.

>> I don't either. But I see reasons when an object should better have a
>> state.
>
> I agree. But why objects?

Because a value cannot have state. It is a state of something else.

Dmitry A. Kazakov

unread,
Jul 10, 2007, 5:47:21 AM7/10/07
to
On Tue, 10 Jul 2007 03:23:02 GMT, JXStern wrote:

> But I think the point here is that Lisp doesn't have these problems
> today,

Macro assembler didn't have them either. It depends on the abstraction level of
the language. Lisp is a machine language, level 0.

> didn't have them twenty or forty years ago, and the need isn't
> so much for progress as for best practices.

Practice is not binding, a technology is.

> Will there ever come a
> time in the future when we are not our own worst enemies?

Never, there is always a price to pay.

> Anyway, why not pursue this "progress" in any case, it helps to pass
> the time ...

Oh, yes, as Steve has pointed out, much time passed, little progress is in
sight. But the problem space does not wait for us. 40 years ago a text
editor was a large project. Today it is an air-traffic control system.
Would we commit our wealth and lives to good practice?

JXStern

unread,
Jul 10, 2007, 11:02:23 AM7/10/07
to
On Tue, 10 Jul 2007 11:47:21 +0200, "Dmitry A. Kazakov"
<mai...@dmitry-kazakov.de> wrote:

>On Tue, 10 Jul 2007 03:23:02 GMT, JXStern wrote:
>
>> But I think the point here is that Lisp doesn't have these problems
>> today,
>
>Macro assembler didn't have them either.

An interesting point!


>It depends on the abstraction level of
>the language. Lisp is a machine language, level 0.

Well, I don't much like Lisp, either.


>> Anyway, why not pursue this "progress" in any case, it helps to pass
>> the time ...
>
>Oh, yes, as Steve has pointed out, much time passed, little progress is in
>sight. But the problem space does not wait for us. 40 years ago a text
>editor was a large project. Today it is an air-traffic control system.
>Would we commit our wealth and lives to good practice?

In the US, aren't we still running a 1965 air traffic control system?
I'm not sure it wasn't built in macro assembler!

J.

Jon Harrop

unread,
Jul 9, 2007, 4:38:01 PM7/9/07
to
Daniel T. wrote:
> Jon Harrop <j...@ffconsultancy.com> wrote:
>> But a functional style is also hugely beneficial in that setting because
>> you can register callbacks so much more easily when functions are first
>> class values in the language (compared to deriving a class and overriding
>> methods, remembering to call the handlers in the parent class etc.).
>
> OK, so you have established that for some problems an imperative style
> is better than a functional style of programming, and vice-versa.

No, they are not mutually exclusive.

> What is it exactly that you are trying to show then?

You were asking what approach I would prefer for GUI programming. I prefer
an imperative functional style but I have not yet studied the purely
functional alternatives (functional reactive GUI programming).

>> > The computer itself is a state machine that contains information that
>> > mutates over time.
>>
>> Yes.
>
> Living organisms are state machines that contain information that
> mutates over time.

If you were implementing Conway's Game of Life, would you modify the states
in-place?
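
As a sketch of the immutable alternative implied here (the `bool array array` representation is an illustrative choice, not from this thread), one Life generation can be computed as a pure function that builds a fresh grid instead of mutating the old one:

```ocaml
(* One Life generation as a pure function: the old grid is only read,
   a new grid is returned, nothing is modified in place. *)
let step grid =
  let h = Array.length grid and w = Array.length grid.(0) in
  let alive x y =
    if x >= 0 && x < w && y >= 0 && y < h && grid.(y).(x) then 1 else 0
  in
  let neighbours x y =
    alive (x-1) (y-1) + alive x (y-1) + alive (x+1) (y-1)
    + alive (x-1) y                   + alive (x+1) y
    + alive (x-1) (y+1) + alive x (y+1) + alive (x+1) (y+1)
  in
  Array.init h (fun y ->
    Array.init w (fun x ->
      (* Standard rules: birth on 3 neighbours, survival on 2 or 3. *)
      match neighbours x y, grid.(y).(x) with
      | 3, _ | 2, true -> true
      | _ -> false))
```

Because `step` never writes to its argument, successive generations can be kept, compared, or computed in parallel without defensive copying.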

--

Jon Harrop

unread,
Jul 8, 2007, 1:04:48 PM7/8/07
to
Dmitry A. Kazakov wrote:
>> How do you add that to the object oriented design?
>
> No problem actually. You first derive from abstract expression an abstract
> infix operation (that is - two arguments one result) and then specialize
> it to concrete operation.

Could you post working code for that design, please?

> BTW, in a real compiler you won't have them in the form you have proposed.

Most compilers written in Standard ML, Haskell or OCaml use this form.
Indeed, this carries through to machine code generation as the machine
itself doesn't implement n-ary addition.

> Think about optimizations made later during code generation. You would
> really wish to factorize infix commutative operators like a+b+c+d out into
> Sum(a,b,c,d). The number of arguments becomes variable.

If you wanted to do that then it is easy enough. Replace the binary Add
constructor with:

| Sum of expr * expr list

I don't think there is any point though.
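
For completeness, a hedged sketch of the flattening Kazakov describes (the plain `Sum of expr list` form below is a simplification of the `Sum of expr * expr list` variant above, and the names are illustrative):

```ocaml
type expr =
  | Int of int
  | Sum of expr list      (* n-ary sum, e.g. a+b+c+d as Sum [a; b; c; d] *)
  | Mul of expr * expr

(* Smart constructor: flattens nested sums as terms are combined, so
   the factorized n-ary form is maintained automatically. *)
let add a b =
  match a, b with
  | Sum xs, Sum ys -> Sum (xs @ ys)
  | Sum xs, y      -> Sum (xs @ [y])
  | x, Sum ys      -> Sum (x :: ys)
  | x, y           -> Sum [x; y]
```

Building trees only through `add` keeps the invariant that a `Sum` never directly contains another `Sum`.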

> How do you add precedence to your design?

Precedence is implemented the same way in the functional and OO styles. For
programmatic construction, it is inherited from the use of infix operators.
For parsing, use yacc and specify the operator precedences.

> Association order?

Add clauses to rotate the abstract syntax tree as it is constructed:

| Add(f, Add(g, h)) -> f +: g +: h

| Mul(f, Mul(g, h)) -> f *: g *: h

> Association checks? [this is when x and y or z is illegal without
> brackets]

That is a check in the parser. Nothing to do with functional vs OO designs.

> What about types and types matching?

Type constructors handle run-time types. The pattern matches handle
deconstruction of boxed values (type matching). If you want static typing
then you must design and implement a static type checker in OCaml/SML/F#.
In Haskell, you can embed the static type system of a domain specific
language in Haskell's own type system and get it to do the static type
checking for you (Google for "GADTs"). The same can be done in C++ using
template metaprogramming. This is not related to functional vs OO style.

> Reasonable error diagnostic and recovery?

This is mostly automated by the "scrap your boilerplate" techniques in
Haskell and OCaml. This is not related to functional vs OO style.

> Array aggregates?

Add a constructor to the expr type:

# type expr =
| Int of int
| Array of expr array
| Add of expr * expr
| Mul of expr * expr;;

> Literals?

Add a rule to the grammar with an action that uses the Array constructor.

> Function calls?

See the complete interpreter in this post for a working example.

> Keyed parameter association? Defaulted parameters?

Implementation dependent: what semantics would you like them to support?

> Do you honestly believe that a syntax + semantic analyzer can be 100 lines
> long?

I've attached a 49-line interpreter for a dynamically-typed purely
functional programming language.

> If not, then what about readability, maintainability, reuse etc?

In this context, all of this is much better with a functional style. This
family of languages was designed for compiler writing...

> Object-oriented only means that the tree and its nodes are considered
> objects of some types.

Yes. An object oriented approach can be quite devastating to readability and
maintainability, as I'm sure you'll agree if you try implementing this
simple functionality in an entirely OO style.

The following is a complete term-level interpreter for a dynamically-typed
functional programming language that supports higher-order functions and
currying:

open Camlp4.PreCast;;

let expr = Gram.Entry.mk "expr";;
EXTEND Gram
  expr:
    [ "prog" NONA
        [ "if"; p = expr; "then"; t = expr; "else"; f = expr -> `If(p, t, f)
        | "let"; "rec"; f = LIDENT; x = LIDENT; "="; body = expr;
          "in"; rest = expr -> `LetRec(f, x, body, rest) ]
    | "equal" LEFTA
        [ e1 = expr; "="; e2 = expr -> `Equal(e1, e2) ]
    | "sum" LEFTA
        [ e1 = expr; "+"; e2 = expr -> `Add(e1, e2)
        | e1 = expr; "-"; e2 = expr -> `Add(e1, `Mul(`Int(-1), e2)) ]
    | "product" LEFTA
        [ e1 = expr; "*"; e2 = expr -> `Mul(e1, e2) ]
    | "apply" LEFTA
        [ e1 = expr; e2 = expr -> `Apply(e1, e2) ]
    | "simple" NONA
        [ v = LIDENT -> `Var(v)
        | n = INT -> `Int(int_of_string n)
        | "("; e = expr; ")" -> e ] ];
END;;

(* Evaluator *)
let int = function `Int n -> n | _ -> invalid_arg "int";;
let bool = function `Bool b -> b | _ -> invalid_arg "bool";;

let rec eval vars = function
  | `Apply(func, arg) -> apply (eval vars func) (eval vars arg)
  | `Add(e1, e2) -> `Int (int(eval vars e1) + int(eval vars e2))
  | `Mul(e1, e2) -> `Int (int(eval vars e1) * int(eval vars e2))
  | `Equal(e1, e2) -> `Bool (eval vars e1 = eval vars e2)
  | `If(p, t, f) -> eval vars (if bool (eval vars p) then t else f)
  | `Int i -> `Int i
  | `LetRec(var, arg, body, rest) ->
      let rec vars' = (var, `Closure(arg, vars', body)) :: vars in
      eval vars' rest
  | `Var s -> List.assoc s vars
and apply func arg = match func with
  | `Closure(var, vars, body) -> eval ((var, arg) :: vars) body
  | _ -> invalid_arg "Attempt to apply a non-function value";;

(* Top level *)
let string_of_value = function
  | `Int n -> string_of_int n
  | `Bool true -> "true"
  | `Bool false -> "false"
  | `Closure _ -> "<fun>";;

let () =
  let program = Gram.parse expr Loc.ghost (Stream.of_channel stdin) in
  let vars = ["true", `Bool true; "false", `Bool false] in
  print_endline(string_of_value(eval vars program));;

This can interpret the following program, for example:

let rec fib n =
  if n=0 then 0 else
  if n=1 then 1 else
  fib(n - 1) + fib(n - 2) in
fib 35

What would the equivalent be in an object oriented design?

--

Jon Harrop

unread,
Jul 10, 2007, 12:59:38 PM7/10/07
to
JXStern wrote:
> But I think the point here is that Lisp doesn't have these problems

The fact that monadic programming, continuation-passing style and even
non-trivial higher-order functions are absent from real code in dynamically
typed functional languages is strong evidence that these techniques are not
feasible without a static type checker as a safety net. So I would not say
that Lisp "doesn't have these problems" any more than the next language.
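
As an illustration of the kind of code being discussed (a sketch, not from this thread), here is a small continuation-passing-style fold in OCaml; the point is that the type checker verifies every continuation's argument at compile time, which is exactly where untyped CPS code tends to go wrong silently:

```ocaml
(* A fold written in continuation-passing style: instead of returning,
   each step passes its accumulator to a continuation k. *)
let rec fold_cps f acc xs k =
  match xs with
  | [] -> k acc
  | x :: rest -> f acc x (fun acc' -> fold_cps f acc' rest k)

let () =
  (* Sum 1..4; the final continuation receives the total. *)
  fold_cps (fun a x k -> k (a + x)) 0 [1; 2; 3; 4]
    (fun total -> Printf.printf "%d\n" total)   (* prints 10 *)
```

Passing a continuation of the wrong type anywhere in this chain is a compile-time error rather than a runtime surprise.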

--
Dr Jon D Harrop, Flying Frog Consultancy

Jon Harrop

unread,
Jul 10, 2007, 1:01:17 PM7/10/07
to
Patrick May wrote:
> You're making the implicit assumption that all patterns are
> aspects of the problem domain. In fact, many patterns are aspects of
> the solution domain and their presence indicates a lack of power or
> expressiveness in the language of the solution.

This is exactly what I've been trying to say but was too ineloquent to do
so. :-)

Dmitry A. Kazakov

unread,
Jul 10, 2007, 1:17:32 PM7/10/07
to
On Tue, 10 Jul 2007 15:02:23 GMT, JXStern wrote:

> In the US, aren't we still running a 1965 air traffic control system?

I don't know. In the EU a new one is under construction, AFAIK.

> I'm not sure it wasn't built in macro assembler!

One should compare the requirements. After the Denver Airport disaster they
sort luggage manually.

Would they have managed it if, instead of C++, they had tried to develop it
in assembler, OCaml, SQL, whatever? I doubt it. If you have a technology,
the language is no longer in question.

Jon Harrop

unread,
Jul 10, 2007, 1:17:01 PM7/10/07
to
Dmitry A. Kazakov wrote:
> On Mon, 09 Jul 2007 21:53:26 +0100, Jon Harrop wrote:
>> Did your calculator example implement Ada's grammar?
>
> No, that is not Ada. The sample Ada 95 expression is at the end of
> chapter 8:
>
> http://www.dmitry-kazakov.de/ada/components.htm#8.9

Ok, I hadn't read the whole page. :-)

>>> Further, diagnostics will be a mess...
>>
>> Why do you believe that?
>
> Because diagnostics is a side-effect,

You need to shed the idea that functional programming prohibits
side-effects. It does not.

> while the result is a compiled
> program. It does not fit into a purely functional model. You have to bend
> your design or else face "states."

Absolutely. This never had anything to do with a purely functional model.
All of the code I posted was written in an impure functional language that
allows side effects.

>>> Yep, you can trash them into the grammar, and in fact, people are doing
>>> this all the time. But the result will most likely be an unreadable
>>> mess,
>>
>> You need to make a comparison before you can draw a justifiable
>> conclusion.
>
> Look at the grammar of Ada 95, I gave you the link. It does that thing and
> also handles many constraints at the grammar level. If you consider it
> readable, then well, I take my hat off to you.

True.

>>> Rewriter is defined as a function,
>>
>> A pattern match.
>
> No, a pattern match gives yes/no

What do you mean by this?

>>> its OO equivalent would be again a function.
>>
>> There is no OO equivalent to the pattern match.
>
> I am not sure what you mean. In the solution space, an OO equivalent is:
>
> Source'Class -- The class of sources
> Pattern'Class -- The class of patterns
>
> function Match (X : in out Source; Y : Pattern) return Boolean;
>
> If you mean pattern matching as a language feature, then you should
> elaborate why do you believe it essential.

Integrating pattern matching into a statically typed language allows
programs to be statically checked more thoroughly and greatly improves
performance by allowing pattern matches to be compiled and optimized
statically.
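
A concrete instance of the static checking meant here (illustrative code, not from the thread) is exhaustiveness analysis: the OCaml compiler inspects every match at compile time and reports any case the programmer forgot.

```ocaml
type colour = Red | Green | Blue

(* This match omits Blue. The compiler flags it at compile time with
   "Warning 8: this pattern-matching is not exhaustive" and names the
   missing case (Blue) -- no test run is needed to find the hole. *)
let name = function
  | Red -> "red"
  | Green -> "green"
```

Adding a constructor to `colour` later makes every now-incomplete match in the program light up the same way, which is a large part of why rewriters over variant types stay maintainable.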

> (I have some personal interest in pattern matching. There was a time I
> worked much on it. If you are interested, see:
>
> http://www.dmitry-kazakov.de/match/match.htm
>
> At some point I came to conclusion that pattern matching is not an answer.
> IMO it brings more problems than solves.)

Have you studied Wadler's views? These are implemented as "active patterns"
in F# and are very useful under certain circumstances. For example, you can
write active patterns that deconstruct XML documents directly from the .NET
representation. Then you can use pattern matching to manipulate XML
documents quickly and easily from F# code.

>> You could reach for a design
>> pattern but it will not provide the same static checking or performance
>> without an enormous amount of effort (essentially writing a pattern
>> matcher).
>
> Hmm, I don't see an obvious connection between "pattern matching" and
> "design patterns."

I'm assuming there are design patterns representing special cases of pattern
matching, just as there are design patterns representing special cases of
first-class functions.
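
As an example of that second kind (a sketch, not quoted from anyone here): the GoF Strategy pattern, a full interface-plus-concrete-classes hierarchy in C++ or Java, reduces to an ordinary function argument when functions are first-class values:

```ocaml
(* "Strategy" is just a function value passed in: no Strategy
   interface, no concrete strategy classes, no wiring code. *)
let sort_by compare xs = List.sort compare xs

let ascending a b = Stdlib.compare a b
let descending a b = Stdlib.compare b a

let () =
  assert (sort_by ascending  [3; 1; 2] = [1; 2; 3]);
  assert (sort_by descending [3; 1; 2] = [3; 2; 1])
```

The "pattern" has dissolved into a language feature, which is precisely the phenomenon Norvig's slides describe.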

>> I would like to see an OO equivalent.
>
> Functional decomposition is an integral part of OOP. [Many here would
> probably disagree.]

Interesting.

>>> But, what about sets of rewriters?
>>>
>>> 1. How would you describe a class of rewriters with certain properties?
>>
>> A higher-order function parameterized over the properties.
>
> That means you need the properties to be evaluated to certain values. This
> looks very constraining. Consider a property of being non-zero everywhere
> except at a finite number of points.
>
>>> How a user of the class could design a program in terms of these
>>> properties?
>>
>> Apply the appropriate arguments.
>
> That's too late. The whole idea is to know properties in advance during
> design.

Both of these are addressed by parameterizing via a functor (a higher-order
module) rather than a higher-order function. A monadic style is another
alternative.

>>> 2. How would you extend / modify a rewriter?
>>
>> Using a continuation.
>
> That is same as composition. BTW, this is a great problem of many OO
> languages too. They also use extension = composition with some epilogue
> and prologues, or overriding. That is too limiting in many cases. A
> typical example is dispatching constructors / destructors.

I don't really understand this. Can you give an example?

>>> OO does not limit you here in any way.
>>
>> With enough work you can solve the same problems with an OO design but
>> requiring 10x as much code to do the same thing is severely limiting in
>> practice.
>
> I doubt it is a fixed ratio. It is rather a + b/x + b/x^2 + ... I would expect
> that in a large project you will eventually have much *less* code in an
> OOP variant than in a pure FP.

I think the only way we can justify a conclusion is by looking at some real
examples. Can you think of any equivalent compilers written in both
functional and OO styles? Prolog implementations, maybe.

> If you give something away from FP and
> OO-preachers do the same, then you will probably meet somewhere at the
> optimum. Stateless design is a great advantage *when* appropriate.

I'm not advocating statelessness though, only first-class functions.

>>> I don't either. But I see reasons when an object should better have a
>>> state.
>>
>> I agree. But why objects?
>
> Because a value cannot have state. It is a state of something else.

Well, an immutable value cannot have state. Mutable values do, of course.

H. S. Lahman

unread,
Jul 10, 2007, 4:36:13 PM7/10/07
to
Responding to May...

>>> Peter Norvig noted almost 10 years ago that 16 of the 23
>>>patterns in the GoF book are "invisible or simpler" in Lisp
>>>(http://norvig.com/design-patterns). This suggests that, rather
>>>than existing independently of the solution domain, many design
>>>patterns are actually standardized workarounds for limitations in
>>>the language used for implementation.
>>
>>Simpler than what? All Norvig is pointing out is that LISP has syntax
>>to support design patterns that is simpler to use than the
>>OOPLs'. Nobody is arguing that the OO paradigm doesn't lead to
>>verbose 3GL code; that's one of the fundamental trade-offs that the
>>paradigm makes versus maintainability.
>
>
> I always enjoy your posts here, but I've got to call shenanigans
> on that statement. First, any inherent maintainability of a
> particular programming paradigm is deeply subordinate to the quality
> of the code written in that paradigm. Second, if you're claiming
> that OO languages result in inherently more maintainable code and,
> further, that these languages were deliberately designed to be more
> verbose, and, even further, that this verbosity increases
> maintainability, well . . . Lucy, you got some 'splainin to do.

Relative to the first point there is an assumption that the application
is well-formed within the paradigm context.

My claim is that the OO /paradigm/ is designed to produce more
maintainable code. It does that by combining problem space abstraction,
eliminating hierarchical dependencies, and a host of other techniques
into a single, cohesive /methodological/ package.

The OOPLs provide a 3GL implementation that directly supports the OO
paradigm. To that extent well-formed OOPL code will be maintainable.
Whether it will be more maintainable than equivalent 3GL code in
non-OO languages is problematic. That's because one can write
maintainable code in non-OO languages; it is just much harder to do
properly and it requires using the language in counter-intuitive ways.
IOW, the discipline needed to produce maintainable code in non-OO
languages must be provided entirely by the developer while the OOA/D
methodologies built it into the construction techniques.

An interesting example is Michael Jackson's JSD methodology. It was
supposedly a Structured Programming design methodology. Yet programs
produced with JSD tended to look very different than all other programs
developed under the Structured umbrella and at the time no one knew
exactly why that was. As it turned out, the reason was that Jackson was
way ahead of his time and JSD was actually much more closely related to
modern OOA/D methodologies than with Structured methodologies. He was
essentially doing OOA/D using procedural notations.

As it happened one reason for JSD's brief popularity was that the
applications developed properly with JSD were more maintainable than the
Structured applications to which they were compared. Unfortunately for
Jackson the formal OOA/D methodologies and OOPLs that came out of the
late '70s eclipsed JSD so he never got the credit he deserved for being
an Early Adopter of a paradigm shift.

As far as OOPLs being designed to be verbose, that is not quite what I
said. OOPL verbosity is a side effect of militant use of static
structure to map the problem space. The verbosity comes with the
territory of declaring static structure at the 3GL level and providing
workarounds for the inherently procedural nature of 3GLs.

The last point segues to another issue. Both procedural and functional
approaches to software development are quite intuitive at the 3GL level
because they map very directly to the hardware computational models as
abstracted by 3GLs (procedural block structuring, scope, and message
passing combined with an inherently synchronous execution model). The OO
paradigm is not very intuitive at all at the 3GL level; it is designed
to map well into non-computational customer environments. So one can
also argue that much of the OOPL verbosity really reflects the need to
convert between two very different problem solving domains.

>>The key question remains: how does the code author know to use that
>>particular syntax and how to use it properly to solve the problem in
>>hand? That insight is the design pattern.
>
>
> I'd argue that the insight and corresponding pattern is only
> required if the functionality of the pattern is not supported directly
> by the language.

There's a reason why they are called DESIGN patterns rather than
SOFTWARE patterns. The language provides software patterns for
implementing designs. The developer has to design the software to solve
the customer's problem. Once the developer designs a solution, the
developer then maps that design into 3GL syntax. The design patterns are
relevant to the solution design stage, not the solution coding stage.

>>> Paul Graham touches on this in his essay "Revenge of the
>>> Nerds":
>>>
>>> "When I see patterns in my programs, I consider it a sign of
>>> trouble. The shape of a program should reflect only the
>>> problem it needs to solve. Any other regularity in the code
>>> is a sign, to me at least, that I'm using abstractions that
>>> aren't powerful enough -- often that I'm generating by hand
>>> the expansions of some macro that I need to write."
>>>
>>>(http://www.paulgraham.com/icad.html)
>>>
>>> I find this argument compelling. Many patterns are symptoms
>>>of a problem, not aspects of a solution.
>>
>>Alas, I think this misses the point. What is a design pattern? It is
>>a generalization of a solution that can be implemented in a variety
>>of similar but different problem contexts.
>
>
> As demonstrated by Peter Norvig, many patterns are only
> applicable within the context of a particular language. It doesn't
> make sense to speak of a "while loop pattern" in 3GLs, but it
> qualifies as such in assembler. The Visitor pattern is essentially
> meaningless in languages that support multiple dispatch, but useful in
> C++, for example.

I think you are just arguing the case that I already made elsewhere in
the thread that 3GLs are an example of implementing BAL design patterns
in language syntax. Pick a level of abstraction for your language syntax
and I will be able to define design patterns at a higher level of
abstraction that are not directly implemented in that language's syntax.

As it happens I am not a fan of Visitor and will go to substantial
lengths to avoid it. Nonetheless, when one is using a paradigm that
emphasizes generalization structures and relationships, one will
eventually end up in a situation where there are a combinatorial number
of relationships between members of different generalizations.

My assertion is that if one must solve that problem, then Visitor is a
more maintainable way to do it than multiple dispatch. [I grew up back
in the era when FORTRAN's assigned GOTO and COBOL's ALTER statements
were being hailed as a boon to programmers. A decade later using either
one was grounds for summary dismissal in some shops. Whether one is
dispatching off a funky label variable or a type, it opens a Pandora's
Box of foot-shooting opportunities during maintenance. IOW, there is a
reason why most OOPLs don't support multiple dispatch. But let's not go
there and get further side-tracked...]

>>IOW, a design pattern is /always/ more abstract than the specific
>>problem being solved; otherwise it wouldn't be a pattern.
>
>
> You're making the implicit assumption that all patterns are
> aspects of the problem domain. In fact, many patterns are aspects of
> the solution domain and their presence indicates a lack of power or
> expressiveness in the language of the solution.

It depends on the problem domain. As I pointed out elsewhere in the
thread almost all of the GoF patterns address a single general problem
that is quite specific to OO development: dynamic substitution that is
too complex to describe with a simple association. IOW, for most of the
GoF patterns the problem domain is the OO paradigm itself.

Wow. The mind boggles at the notion that any flavor of LISP could
possibly be construed as an OOPL. B-) The hierarchical structure of
every LISP I have ever seen would be sufficient to completely disqualify it.

BTW, I don't see expressiveness or extensibility as being very relevant
qualities here. Consider UML with behaviors described in a compliant
abstract action language. At that point one has a 4GL that can be
unambiguously transformed automatically into C++, LISP, or any other 3GL
one cares to target (though a brute force transformation to an FPL would
probably make FPers seriously ill). IOW, the features of the 3GL du jour
are essentially irrelevant so long as there is a rigorous semantic
meta-model of the language. Thus an OOA model in UML is expressed in
terms of the customer space and is completely independent of the
computing environment. That model is also executable and testable
outside the computing environment.

What that all means is that the UML OOA model is at such a high level of
abstraction that 3GL issues like closure and multiple dispatch aren't
relevant. IOW, the OOA solution must be general enough and rigorous
enough that it can be mapped into whatever target implementation
language the computing environment provides. So one can't count on a
specific suite of syntactic features being available.

OTOH, design patterns like the GoF's are applied at the OOA/D level
where they are so abstract that they can be implemented in any 3GL.

[FYI, it is routine for transformation engines to target straight C for
performance reasons. Since one does not maintain the C code in
translation environments, it doesn't matter whether one uses an OOPL or
not. And even if an OOPL were targeted the generated code would make a
dependency management guru throw up. The point being that the 3GL
implementation of the OOA solution is quite orthogonal to the OOness of
the solution design.]

JXStern

unread,
Jul 10, 2007, 8:31:56 PM7/10/07
to
On Tue, 10 Jul 2007 19:17:32 +0200, "Dmitry A. Kazakov"
<mai...@dmitry-kazakov.de> wrote:

>Would they manage it instead of C++ if they tried to develop it in
>assembler, OCaml, SQL, whatever? I doubt it. If you have a technology, the
>language is no more in question.

I assume most software project screwups are management-oriented and
have almost nothing to do with the technology platforms, as long as
they are any of the top ten common ones. Bad process, bad
requirements, incompetent managers, theft, abused developers, ... that
sort of thing. I've written some great software on some horrible
platforms, looked back on it a few years later and wondered how on
Earth I ever had the nerve to start, much less finish!

J.

JXStern

unread,
Jul 10, 2007, 8:33:27 PM7/10/07
to
On Tue, 10 Jul 2007 17:59:38 +0100, Jon Harrop <j...@ffconsultancy.com>
wrote:

>JXStern wrote:
>> But I think the point here is that Lisp doesn't have these problems
>
>The fact that monadic programming, continuation-passing style and even
>non-trivial higher-order functions are absent from real code in dynamically
>typed functional languages is strong evidence that these techniques are not
>feasible without a static type checker as a safety net. So I would not say
>that Lisp "doesn't have these problems" any more than the next language.

You have any evidence, even anecdotal, that projects failed due to
dynamic rather than static type checking?

J.

Jon Harrop

unread,
Jul 10, 2007, 8:42:53 PM7/10/07
to
H. S. Lahman wrote:
> My claim is that the OO /paradigm/ is designed to produce more
> maintainable code.

Compared to what?

> one can write maintainable code in non-OO languages; it is just much
> harder to do properly and it requires using the language in
> counter-intuitive ways.

Why would you think that?

> IOW, the discipline needed to produce maintainable code in non-OO
> languages must be provided entirely by the developer while the OOA/D
> methodologies built it into the construction techniques.

Are you implying that OOP is the only paradigm that provides static
interfaces between modules?

> As far as OOPLs being designed to be verbose, that is not quite what I
> said. OOPL verbosity is a side effect of militant use of static
> structure to map the problem space. The verbosity comes with the
> territory of declaring static structure at the 3GL level

Do other paradigms not also "declare the static structure at the 3GL level"
using less code?

> and providing workarounds for the inherently procedural nature of 3GLs.

Is Haskell a 3GL?

> The last point segues to another issue. Both procedural and functional
> approaches to software development as quite intuitive at the 3GL level
> because they map very directly to the hardware computational models as
> abstracted by 3GLs (procedural block structuring, scope, and message
> passing combined with an inherently synchronous execution model).

The whole point of purely functional languages is that they are
not "inherently synchronous".

> There's a reason why they are called DESIGN patterns rather than
> SOFTWARE patterns. The language provides software patterns for
> implementing designs. The developer has to design the software to solve
> the customer's problem. Once the developer designs a solution, the
> developer then maps that design into 3GL syntax.

What is 3GL syntax?

> The design patterns are relevant to the solution design stage, not the
> solution coding stage.

I'll refer back to this later.

>> As demonstrated by Peter Norvig, many patterns are only
>> applicable within the context of a particular language. It doesn't
>> make sense to speak of a "while loop pattern" in 3GLs, but it
>> qualifies as such in assembler. The Visitor pattern is essentially
>> meaningless in languages that support multiple dispatch, but useful in
>> C++, for example.
>
> I think you are just arguing the case that I already made elsewhere in
> the thread that 3GLs are an example of implementing BAL design patterns
> in language syntax.

What is a BAL design pattern?

> Pick a level of abstraction for your language syntax and I will be able to
> define design patterns at a higher level of abstraction that are not
> directly implemented in that language's syntax.

This is not a syntactic issue.

> My assertion is that if one must solve that problem, then Visitor is a
> more maintainable way to do it than multiple dispatch. [I grew up back
> in the era when FORTRAN's assigned GOTO and COBOL's ALTER statements
> were being hailed as a boon to programmers. A decade later using either
> one was grounds for summary dismissal in some shops. Whether one is
> dispatching off a funky label variable or a type, it opens a Pandora's
> Box of foot-shooting opportunities during maintenance. IOW, there is a
> reason why most OOPLs don't support multiple dispatch. But let's not go
> there and get further side-tracked...]

Wasn't that problem solved by pattern matching and variant types in
statically-typed functional languages about 30 years ago?
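For readers unfamiliar with the technique Harrop is referring to, here is a minimal OCaml sketch (the types and names are invented for illustration): a variant type enumerates the cases, a single match expression covers them, and the compiler warns about missing cases. This is the role a Visitor hierarchy plays in C++.

```ocaml
(* A closed set of shape cases, declared as a variant type. *)
type shape =
  | Circle of float          (* radius *)
  | Rect of float * float    (* width, height *)

(* One function covers every case; the compiler warns if a
   constructor is added to [shape] but not handled here. *)
let area = function
  | Circle r -> 3.14159265358979 *. r *. r
  | Rect (w, h) -> w *. h

(* "Double dispatch" over two shapes is just a match on a pair. *)
let overlap_kind a b =
  match a, b with
  | Circle _, Circle _ -> "circle/circle"
  | Circle _, Rect _ | Rect _, Circle _ -> "circle/rect"
  | Rect _, Rect _ -> "rect/rect"
```

Whether this counts as "solving" the Visitor problem or merely relocating it is, of course, exactly what the thread is arguing about.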

>> You're making the implicit assumption that all patterns are
>> aspects of the problem domain. In fact, many patterns are aspects of
>> the solution domain and their presence indicates a lack of power or
>> expressiveness in the language of the solution.
>
> It depends on the problem domain. As I pointed out elsewhere in the
> thread almost all of the GoF patterns address a single general problem
> that is quite specific to OO development: dynamic substitution that is
> too complex to describe with a simple association. IOW, for most of the
> GoF patterns the problem domain is the OO paradigm itself.

Your statement "the problem domain is the OO paradigm itself" is equivalent
to Patrick's statement "many patterns are aspects of the solution domain".

Moreover, with your earlier statement:

"The design patterns are relevant to the solution design stage, not the
solution coding stage."

This implies that that design and solution are the same thing.

> Wow. The mind boggles at the notion that any flavor of LISP could
> possibly be construed as an OOPL. B-) The hierarchical structure of
> every LISP I have ever seen would be sufficient to completely disqualify
> it.

What exactly do you mean by "hierarchical structure of Lisp"?

> What that all means is that the UML OOA model is at such a high level of
> abstraction that 3GL issues like closure and multiple dispatch aren't
> relevant.

The UML OOA model and closures being mutually irrelevant does not make
either "higher level".

> So one can't count on a specific suite of syntactic features being
> available.

Again, these are not syntactic differences.

> OTOH, design patterns like the GoF's are applied at the OOA/D level
> where they are so abstract that they can be implemented in any 3GL.

Of course, they are Turing complete. However, you would not use an OO design
if you were going to write a functional solution and most of the GoF's
design patterns are specific to OOP, so they would not be useful.

Daniel T.

unread,
Jul 10, 2007, 10:10:03 PM7/10/07
to
Jon Harrop <j...@ffconsultancy.com> wrote:

> Of course, they are Turing complete. However, you would not use an OO design
> if you were going to write a functional solution and most of the GoF's
> design patterns are specific to OOP, so they would not be useful.

And you wouldn't use functional decomposition if you were going to write
an OO design. I ask again, what is your point? Are you claiming that
functional solutions are superior to OO solutions in all cases? Are you
claiming they are superior in cases in which they are superior? What?

Dmitry A. Kazakov

unread,
Jul 11, 2007, 5:14:33 AM7/11/07
to
On Wed, 11 Jul 2007 00:31:56 GMT, JXStern wrote:

> On Tue, 10 Jul 2007 19:17:32 +0200, "Dmitry A. Kazakov"
> <mai...@dmitry-kazakov.de> wrote:
>
>>Would they manage it instead of C++ if they tried to develop it in
>>assembler, OCaml, SQL, whatever? I doubt it. If you have a technology, the
>>language is no more in question.
>
> I assume most software project screwups are management-oriented and
> have almost nothing to do with the technology platforms, as long as
> they are any of the top ten common ones. Bad process, bad
> requirements, incompetent managers, theft, abused developers, ... that
> sort of thing.

I meant technology in a wider sense. To have a technology implies an
ability to manage the design process, and the reverse.

Dmitry A. Kazakov

unread,
Jul 11, 2007, 6:36:49 AM7/11/07
to
On Tue, 10 Jul 2007 18:17:01 +0100, Jon Harrop wrote:

> Dmitry A. Kazakov wrote:

>> Because diagnostics is a side-effect,
>
> You need to shed the idea that functional programming prohibits
> side-effects. It does not.

? If side effect is functional, then I don't know what to say. At least it
implies that assignment is functional too.



> Absolutely. This never had anything to do with a purely functional model.

This raises questions. What is left? How much functional is impurely
functional? Why is impurity needed? How much impurity would one require
in an average project?

>>>> Rewriter is defined as a function,
>>>
>>> A pattern match.
>>
>> No, a pattern match gives yes/no
>
> What do you mean by this?

Pattern is a function which takes a chain of letters from some alphabet and
yields true or false. When the result is true, it is said that the pattern
matches the chain.



> Integrating pattern matching into a statically typed language allows
> programs to be statically checked more thoroughly and greatly improves
> performance by allowing pattern matches to be compiled and optimized
> statically.

Why? What is the subject of matching? Some strings (the program processes),
or the program's source code? In either case I see no relation to static
typing. Maybe you mean types matching?



> These are implemented as "active patterns"
> in F# and are very useful under certain circumstances. For example, you can
> write active patterns that deconstruct XML documents directly from the .NET
> representation. Then you can use pattern matching to manipulate XML
> documents quickly and easily from F# code.

Looks like fighting some architectural problems to me...

>>> You could reach for a design
>>> pattern but it will not provide the same static checking or performance
>>> without an enormous amount of effort (essentially writing a pattern
>>> matcher).
>>
>> Hmm, I don't see an obvious connection between "pattern matching" and
>> "design patterns."
>
> I'm assuming there are design patterns representing special cases of pattern
> matching, just as there are design patterns representing special cases of
> first-class functions.

It is too wide a stretch, and I think it is wrong. The whole idea of
pattern matching is to ignore the semantics of the thing being analysed. In
some cases it gives you an advantage of doing the job quickly without much
clutter. When we talk about design patterns, the idea is quite opposite.
Design patterns heavily rely on the semantics while trying to ignore any
syntactical differences.
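Harrop's claim just above, that some design patterns are "special cases of first-class functions", can be made concrete with a short OCaml sketch (the names here are invented for the example): the GoF Strategy pattern, which in Java or C++ requires an interface plus one class per algorithm, reduces to passing a function value as an ordinary argument.

```ocaml
(* Each "strategy" is just a function value of type int -> int -> int. *)
let cheapest a b = min a b
let fastest a b = max a b

(* The "context" takes the strategy as an ordinary parameter and
   folds it over a list of candidate costs. *)
let choose_route strategy costs =
  List.fold_left strategy (List.hd costs) (List.tl costs)

(* Swapping strategies needs no class hierarchy: *)
let c = choose_route cheapest [7; 3; 9]   (* 3 *)
let f = choose_route fastest  [7; 3; 9]   (* 9 *)
```

Whether that collapse counts as a language feature absorbing a pattern, or as a pattern so trivial it stops being visible, is the point under dispute in this thread.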

>>>> But, what about sets of rewriters?
>>>>
>>>> 1. How would you describe a class of, with certain properties?
>>>
>>> A higher-order function parameterized over the properties.
>>
>> That means you need the properties being evaluated to certain values. This
>> looks very constraining. Consider a property of being non-zero everywhere
>> except at a finite number of points.
>>
>>>> How a user of the class could design a program in terms of these
>>>> properties?
>>>
>>> Apply the appropriate arguments.
>>
>> That's too late. The whole idea is to know properties in advance during
>> design.
>
> Both of these are addressed by parameterizing via a functor (a higher-order
> module) rather than a higher-order function. A monadic style is another
> alternative.

I.e. by screwing the paradigm.

>>>> 2. How would you extend / modify a rewriter?
>>>
>>> Using a continuation.
>>
>> That is same as composition. BTW, this is a great problem of many OO
>> languages too. They also use extension = composition with some epilogue
>> and prologues, or overriding. That is too limiting in many cases. A
>> typical example is dispatching constructors / destructors.
>
> I don't really understand this. Can you give an example?

The problem is, to dispatch to some virtual method from the constructor. In
terms of functional decomposition you need to inject some code into the
body of another function, here a constructor. The problem is that at this
point the object is still under construction, so you cannot pass the object
to the target. It can only be solved by proper typing.

> I think the only way we can justify a conclusion is by looking at some real
> examples. Can you think of any equivalent compilers written in both
> functional and OO styles? Prolog implementations, maybe.

You need somebody to pay for such study. A typical software maintenance
period is 20+ years. Then you will know the answer, which probably nobody
will hear anyway.

Actually it is much simpler. Ask your senator to pass a law making
no-warranty licenses illegal. You will be amazed how quickly the number of
languages will reduce.

>> If you give something away from FP and
>> OO-preachers do the same, then you will probably meet somewhere at the
>> optimum. Stateless design is a great advantage *when* appropriate.
>
> I'm not advocating statelessness though, only first-class functions.

Oh, that is OK. Surely subroutines have to be ADTs. Why do you think that
this would be anti-OO?

>>>> I don't either. But I see reasons when an object should better have a
>>>> state.
>>>
>>> I agree. But why objects?
>>
>> Because a value cannot have state. It is a state of something else.
>
> Well, an immutable value cannot have state. Mutable values do, of course.

No, there is no such thing as a mutable value. Mutable can be a variable,
parameter, object. They are functions mapping execution states onto values.
Switching states just selects another value without changing them.

JXStern

unread,
Jul 11, 2007, 11:34:04 AM7/11/07
to
On Wed, 11 Jul 2007 11:14:33 +0200, "Dmitry A. Kazakov"
<mai...@dmitry-kazakov.de> wrote:

>>>Would they manage it instead of C++ if they tried to develop it in
>>>assembler, OCaml, SQL, whatever? I doubt it. If you have a technology, the
>>>language is no more in question.
>>
>> I assume most software project screwups are management-oriented and
>> have almost nothing to do with the technology platforms, as long as
>> they are any of the top ten common ones. Bad process, bad
>> requirements, incompetent managers, theft, abused developers, ... that
>> sort of thing.
>
>I meant technology in a wider sense. To have a technology implies an
>ability to manage the design process, and the reverse.

Oh, I must disagree. Technology can be misused. Give an idiot a
bulldozer, and an average guy a shovel, and see who can move a pile of
dirt better. IT management has traditionally been bad, and in my
opinion has gotten much worse over the past ten years. Projects that
three guys could finish in a year twenty guys fail at over two years,
given the same technology ... these numbers from an actual situation I
participated in. Couple of them, actually. Not to mention the
failure of huge government projects where the numbers are 100x bigger,
like just trying to get the FBI up on current email!

J.

Jon Harrop

unread,
Jul 11, 2007, 12:21:44 PM7/11/07
to
JXStern wrote:
> You have any evidence, even anecdotal, that projects failed due to
> dynamic rather than static type checking?

I certainly stopped using dynamically typed languages when I found them too
error-prone for significant projects. Since then I have learned techniques
to leverage static type systems that let me do things I could not possibly
have done in a dynamically typed language.

topmind

unread,
Jul 11, 2007, 12:30:16 PM7/11/07
to

H. S. Lahman wrote:
> Responding to Jacobs...

>
> > Paul Graham, a Lisp proponent, has suggested that most of the
> > documented design patterns are indicators of missing language
> > features. Comments from C2.com:
> >
> > http://c2.com/cgi/wiki?AreDesignPatternsMissingLanguageFeatures
>
> To some extent that is true. It is a matter of the level of abstraction.
> 3GL procedural block structuring, scope, and message passing are design
> patterns for abstracting Turing hardware mechanics. The 3GLs exist to
> implement those abstractions as language features. Similarly 4GL DSLs
> exist to implement higher level abstractions as language features and
> RAD IDEs exist to implement UI and RDB paradigms as language features.
>
> To that extent Graham is right that we just don't have languages today
> that directly capture the design abstractions that we routinely use
> today. But as soon as we do we will just have a new set of more abstract
> design patterns that aren't directly expressible in those languages that
> will have to be used.

I don't think it would be a good idea to build GOF patterns directly
into a language. I think one reason why they are trivial or non-
notable in Lisp is because Lisp blurs the distinction between data
structure and code. (I am generally for the same kind of thing, but
via relational instead of s-expressions.) Hard-wiring such
constructs into a language will not solve the weakness that created
them in the first place. Blurring the concept of collection handling
and code generally goes against OO encapsulation and object
independence ("self-handling noun"). Thus, I largely blame OO for the
proliferation of GOF-like patterns. ("Oh [bleep], Toppie is starting
another flamewar, that [bleeping] troll!")

> IOW, design patterns are more abstract that the
> languages that implement them by definition; they exist to solve design
> problems in using the language.
>
> IOW, one can push back the level of abstraction of design patterns by
> upgrading language syntax to higher levels of abstraction. But whatever
> the language level of abstraction, there will always be a next higher
> level of design patterns that needs to be implemented with the
> languages' lower level abstractions.
>
> Unfortunately, as Kazakov points out, there are limits. By the time the
> languages get sophisticated enough so that all one needs to do is encode
> doIt() to solve any problem, the machines that execute those languages
> will have replaced us long before then.


>
>
> *************
> There is nothing wrong with me that could
> not be cured by a capful of Drano.
>
> H. S. Lahman
> h...@pathfindermda.com
> Pathfinder Solutions
> http://www.pathfindermda.com
> blog: http://pathfinderpeople.blogs.com/hslahman
> "Model-Based Translation: The Next Step in Agile Development". Email
> in...@pathfindermda.com for your copy.
> Pathfinder is hiring:
> http://www.pathfindermda.com/about_us/careers_pos3.php.
> (888)OOA-PATH

-T-

Jon Harrop

unread,
Jul 11, 2007, 12:26:43 PM7/11/07
to
Daniel T. wrote:
> And you wouldn't use functional decomposition if you were going to write
> an OO design. I ask again, what is your point? Are you claiming that
> functional solutions are superior to OO solutions in all cases? Are you
> claiming they are superior in cases in which they are superior? What?

I noted that the implication here has been that OO is fundamentally better
and then disproved it by counter example with a term rewriter that requires
asymptotically more code in the OO paradigm.

The assertion about OO has been reduced to "scales better" which is an
untestable hypothesis, so it cannot be disproven.

topmind

unread,
Jul 11, 2007, 1:08:24 PM7/11/07
to

Jon Harrop wrote:
> Daniel T. wrote:
> > And you wouldn't use functional decomposition if you were going to write
> > an OO design. I ask again, what is your point? Are you claiming that
> > functional solutions are superior to OO solutions in all cases? Are you
> > claiming they are superior in cases in which they are superior? What?
>
> I noted that the implication here has been that OO is fundamentally better
> and then disproved it by counter example with a term rewriter that requires
> asymptotically more code in the OO paradigm.
>
> The assertion about OO has been reduced to "scales better" which is an
> untestable hypothesis, so it cannot be disproven.

The pot-of-gold at the end of the rainbow analogy comes to mind. OO's
primary claim keeps changing over the years as scrutiny eventually
dissolves the assembly line of newly-arriving claims. "Reuse" used to
be the calling card, but scrutiny pretty much popped that one.
Polymorphism reducing or simplifying decisions (IF and Case) has also
been pretty much popped (except for unrealistic toy examples).

The problem I see with the "scalability" claim is that OO itself seems
to make for big programs/apps, creating its own problem and then
allegedly getting credit for fixing a problem it creates. My favored
approach is to split big apps into multiple smaller apps and let the
RDBMS be the central chalk-board that allows sharing of info. I call
this the "villages on the bank of the Nile" approach. Then one does
not need to scale code/apps (turning it into a RDBMS scaling issue).
To some extent, OO can do this also, but nobody seems to be
associating it with OO, partly because it requires heavier reliance on
RDBMS, which OO'ers don't seem to like. Thus, the claim is reduced to
"OO scales better when you use OO instead of an RDBMS". Which is
something I tend to agree with, actually. Without the power of RDBMS,
procedural can be nasty.

>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> The OCaml Journal
> http://www.ffconsultancy.com/products/ocaml_journal/?usenet

-T-

JXStern

unread,
Jul 11, 2007, 1:20:52 PM7/11/07
to
On Wed, 11 Jul 2007 17:21:44 +0100, Jon Harrop <j...@ffconsultancy.com>
wrote:

>JXStern wrote:


>> You have any evidence, even anecdotal, that projects failed due to
>> dynamic rather than static type checking?
>
>I certainly stopped using dynamically typed languages when I found them too
>error-prone for significant projects. Since then I have learned techniques
>to leverage static type systems that let me do things I could not possibly
>have done in a dynamically typed language.

As long as the type system supports assertions, I much prefer dynamic.

Suspect you have other process, design, and coding issues if type
errors are a significant problem for you.

J.

Jon Harrop

unread,
Jul 11, 2007, 1:44:25 PM7/11/07
to
Dmitry A. Kazakov wrote:
> ? If side effect is functional, then I don't know what to say. At least it
> implies that assignment is functional too.

Exactly, yes.

>> Absolutely. This never had anything to do with a purely functional model.
>
> This raises questions. What is left?

Impure functional programming languages like Mathematica, Lisp, Scheme,
OCaml, Standard ML and Scala.

> How much functional is impurely functional?

Depends who you ask. :-)

> Why is impurity needed?

Impurity is not required, as purely functional languages do exist, but it
offers a different set of trade-offs that are preferable for certain
classes of user. In particular, impure functional languages are among the
fastest languages in existence whereas purely functional languages tend to
trail with dynamically typed languages as the purity requires laziness
(thunks) and graph traversal for evaluation.

> How much impurity would one require in an average project?

I use impurity quite a bit in OCaml and F#. In F#, it is essential for
interoperability with the rest of .NET, which imposes mutability everywhere
to horrible effect. In OCaml, impurity is simply easier under certain
circumstances.

As Daniel mentioned, anything handling user inputs can be written elegantly
(avoiding a rat's nest of dependencies) using event-based programming and
the model-view-controller design pattern. So I use impurity extensively in
GUI code.
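A minimal sketch of the event-based style described here, in plain OCaml rather than any real GUI toolkit (the event type and all names are invented for illustration): a mutable list of callbacks stands in for a toolkit's event source, with the impurity confined to the subscription list.

```ocaml
(* An "event" is a mutable list of handlers; firing it calls each one. *)
type 'a event = { mutable handlers : ('a -> unit) list }

let create () = { handlers = [] }
let subscribe ev f = ev.handlers <- f :: ev.handlers
let fire ev x = List.iter (fun f -> f x) ev.handlers

(* A view subscribes to a model's event and reacts -- MVC in miniature. *)
let clicked : string event = create ()
let log = ref []
let () = subscribe clicked (fun name -> log := name :: !log)
let () = fire clicked "ok-button"
```

The handlers themselves can still be written as pure functions over the event payload; only the wiring is stateful.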

>>>>> Rewriter is defined as a function,
>>>>
>>>> A pattern match.
>>>
>>> No, a pattern match gives yes/no
>>
>> What do you mean by this?
>
> Pattern is a function which takes a chain of letters from some alphabet
> and yields true or false. When the result is true, it is said that the
> pattern matches the chain.

Ok. In languages like OCaml, patterns can match trees as well as sequences.

>> Integrating pattern matching into a statically typed language allows
>> programs to be statically checked more thoroughly and greatly improves
>> performance by allowing pattern matches to be compiled and optimized
>> statically.
>
> Why? What is the subject of matching? Some strings (the program
> processes), or the program's source code? In either case I see no relation
> to static typing. Maybe you mean types matching?

In these languages, sum types (known as "variant" types) are a core part of
the type system and complement product types. A sum type presents
alternatives (like a tagged union in C). For example, a binary tree:

# type t =
| Empty
| Node of t * t;;
type t = Empty | Node of t * t

and values may be constructed of this type:

# Node(Node(Empty, Empty), Empty);;
- : t = Node (Node (Empty, Empty), Empty)

Note that the type of this value was automatically inferred to be the
type "t".

Pattern matching is the only way to deconstruct such data structures in
these languages. For example, a rewriter:

# let rec rewrite rule = function
| Empty as t -> rule t
| Node(l, r) -> rule(Node(rewrite rule l, rewrite rule r));;
val rewrite : (t -> t) -> t -> t = <fun>

Note that the type is automatically inferred to be a higher-order function
handling functions over "t" and a value of type "t". Moreover, the compiler
uses the definition of the type "t" to check that pattern matches over
values of the type "t" are both exhaustive (account for both Empty and Node
in this case) and not redundant (e.g. do not account for Empty twice).

This is very beneficial in practice as it catches a lot of errors. Moreover,
this form of static checking requires the pattern matcher to be integrated
with the type system.
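To sketch that check concretely, here is the tree type from above with an exhaustive match over it (the `size` function is invented for this example, not from the thread):

```ocaml
(* The tree type from the example above. *)
type t =
  | Empty
  | Node of t * t

(* Exhaustive and non-redundant: both constructors handled exactly once. *)
let rec size = function
  | Empty -> 0
  | Node (l, r) -> 1 + size l + size r

(* Omitting the Node case, e.g.
     let bad = function Empty -> 0
   draws a compiler warning: "this pattern-matching is not exhaustive.
   Here is an example of a case that is not matched: Node (_, _)" *)

let () = assert (size (Node (Node (Empty, Empty), Empty)) = 2)
```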

In the case of OCaml, you can even have the sum type inferred and statically
checked:

# let rec rewrite rule = function
| `E as t -> rule t
| `N(l, r) -> rule(`N(rewrite rule l, rewrite rule r));;
val rewrite :
([> `E | `N of 'a * 'a ] -> 'a) -> ([< `E | `N of 'b * 'b ] as 'b) -> 'a =
<fun>

A stage inside a compiler might alter an abstract syntax tree to remove
certain kinds of node. Using the above approach, the compiler can prove
that the functions that make up this stage remove the constructors and the
inferred sum type of the result will be a new abstract syntax tree that has
fewer kinds of node.

For example, the following function substitutes variables for their values
and, consequently, removes the need for the Var node in the resulting
abstract syntax tree:

# let rec subst vars = function
| `Int _ as e -> e
| `Var v -> List.assoc v vars
| `Add(f, g) -> `Add(subst vars f, subst vars g);;
val subst :
('a * ([> `Add of 'b * 'b | `Int of 'c ] as 'b)) list ->
([< `Add of 'd * 'd | `Int of 'c | `Var of 'a ] as 'd) -> 'b = <fun>

Note that the resulting tree has the inferred type "[> `Add of 'b * 'b |
`Int of 'c ] as 'b" without Var.
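As a usage sketch, the definition above can be exercised directly (the sample expression is made up for illustration):

```ocaml
(* The subst function from above, repeated so the snippet is self-contained. *)
let rec subst vars = function
  | `Int _ as e -> e
  | `Var v -> List.assoc v vars
  | `Add (f, g) -> `Add (subst vars f, subst vars g)

(* Substituting x := 1 in the expression x + 2 removes the `Var node. *)
let result = subst ["x", `Int 1] (`Add (`Var "x", `Int 2))

let () = assert (result = `Add (`Int 1, `Int 2))
```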

>> These are implemented as "active patterns"
>> in F# and are very useful under certain circumstances. For example, you
>> can write active patterns that deconstruct XML documents directly from
>> the .NET representation. Then you can use pattern matching to manipulate
>> XML documents quickly and easily from F# code.
>
> Looks like fighting some architectural problems to me...

I don't really see a better way around this. The conventional OO approach
used by the XML libraries of Java and .NET is certainly horrendous in
comparison.

>> I'm assuming there are design patterns representing special cases of
>> pattern matching, just as there are design patterns representing special
>> cases of first-class functions.
>
> It is too wide a stretch, and I think it is wrong. The whole idea of
> pattern matching is to ignore the semantics of the thing being analysed.
> In some cases it gives you an advantage of doing the job quickly without
> much clutter. When we talk about design patterns, the idea is quite
> opposite. Design patterns heavily rely on the semantics while trying to
> ignore any syntactical differences.

Well, there are certainly problems if you overuse pattern matching.
Particularly in languages that don't support views (as F# does).

I wrote a GUI editor for some technical presentation software and my
extensive use of pattern matching in its core ties it to the data structure
(lots of lists). If we want to handle a more asymptotically efficient data
structure then I would need to rewrite it.

>>>>> How a user of the class could design a program in terms of these
>>>>> properties?
>>>>
>>>> Apply the appropriate arguments.
>>>
>>> That's too late. The whole idea is to know properties in advance during
>>> design.
>>
>> Both of these are addressed by parameterizing via a functor (a
>> higher-order module) rather than a higher-order function. A monadic style
>> is another alternative.
>
> I.e. by screwing the paradigm.

Functors are a core component of the ML family of languages. I wouldn't say
that using them is "screwing the paradigm" in any way. Indeed, they are
just a static version of a higher-order function.

Hmm, I bet some OO design patterns correspond to functors. An ordinary
module is probably a Singleton pattern. So a functor might be an Abstract
Factory for making Singletons.
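A minimal sketch of that correspondence (the module names are invented for the example): an ordinary module is one fixed instance, while a functor stamps out new such modules on demand.

```ocaml
(* An ordinary module: one fixed instance, like a Singleton. *)
module Counter = struct
  let count = ref 0
  let next () = incr count; !count
end

(* A functor: a compile-time factory producing such "singletons". *)
module MakeCounter (Init : sig val start : int end) = struct
  let count = ref Init.start
  let next () = incr count; !count
end

module FromTen = MakeCounter (struct let start = 10 end)

let () =
  assert (Counter.next () = 1);
  assert (FromTen.next () = 11)
```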

>>>>> 2. How would you extend / modify a rewriter?
>>>>
>>>> Using a continuation.
>>>
>>> That is same as composition. BTW, this is a great problem of many OO
>>> languages too. They also use extension = composition with some epilogue
>>> and prologues, or overriding. That is too limiting in many cases. A
>>> typical example is dispatching constructors / destructors.
>>
>> I don't really understand this. Can you give an example?
>
> The problem is, to dispatch to some virtual method from the constructor.
> In terms of functional decomposition you need to inject some code into the
> body of another function, here a constructor. The problem is that at this
> point the object is still under construction, so you cannot pass the object
> to the target. It can only be solved by proper typing.

I see. Yes. I've certainly hit that problem a few times with OO. I think
higher-order objects are the solution: you just pass an object to the
constructor of another object. Of course, this leads you straight into
object-hell, where you start creating objects for everything unnecessary,
as Java and .NET do for any non-trivial sets of parameters.

>> I think the only way we can justify a conclusion is by looking at some
>> real examples. Can you think of any equivalent compilers written in both
>> functional and OO styles? Prolog implementations, maybe.
>
> You need somebody to pay for such study. A typical software maintenance
> period is 20+ years. Then you will know the answer, which probably nobody
> will hear anyway.

Yes.

> Actually it is much simpler. Ask your senator to pass a law making
> no-warranty licenses illegal. You will be amazed how quickly the number of
> languages will reduce.

:-)

>>> If you give something away from FP and
>>> OO-preachers do the same, then you will probably meet somewhere at the
>>> optimum. Stateless design is a great advantage *when* appropriate.
>>
>> I'm not advocating statelessness though, only first-class functions.
>
> Oh, that is OK. Surely subroutines have to be ADTs. Why do you think that
> this would be anti-OO?

I would call it an excellent complement to OO rather than anti-OO. Languages
like OCaml and F# demonstrate that these methods can usefully coexist. I'm
still trying to pin down the applications for which I would choose OO and
those for which I would choose a functional style though...

>> Well, an immutable value cannot have state. Mutable values do, of course.
>
> No, there is no such thing as a mutable value. Mutable can be a variable,
> parameter, object. They are functions mapping execution states onto
> values. Switching states just selects another value without changing them.

This is just nomenclature. In OCaml, a mutable reference is a type of value:

# let a = ref 4;;
val a : int ref = {contents = 4}
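Continuing that session as a sketch: a reference's contents are read with `!` and updated with `:=`.

```ocaml
let a = ref 4

let () =
  a := !a + 1;       (* update the contents in place *)
  assert (!a = 5)    (* dereference to read the current value *)
```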

Daniel T.

Jul 11, 2007, 2:34:27 PM
Jon Harrop <j...@ffconsultancy.com> wrote:
> Daniel T. wrote:

> > And you wouldn't use functional decomposition if you were going to write
> > an OO design. I ask again, what is your point? Are you claiming that
> > functional solutions are superior to OO solutions in all cases? Are you
> > claiming they are superior in cases in which they are superior? What?
>
> I noted that the implication here has been that OO is fundamentally better
> and then disproved it by counter example with a term rewriter that requires
> asymptotically more code in the OO paradigm.

Really? Let's look at your OP again...

> Is there any literature describing how object-oriented design patterns
> reinvent existing techniques like functional programming?

Hmm... Nothing in there about noting anything so ridiculous as "OO is
fundamentally better". Who made this claim you say you are "noting" and
"disproving"? Who even implied such a claim? Sounds like a straw man to
me.

Or are you claiming (or attempting to imply) that functional solutions
are "fundamentally better"?

Dmitry A. Kazakov

Jul 11, 2007, 3:08:37 PM
On Wed, 11 Jul 2007 15:34:04 GMT, JXStern wrote:

> On Wed, 11 Jul 2007 11:14:33 +0200, "Dmitry A. Kazakov"
> <mai...@dmitry-kazakov.de> wrote:
>
>>>>Would they manage it instead of C++ if they tried to develop it in
>>>>assembler, OCaml, SQL, whatever? I doubt it. If you have a technology, the
>>>>language is no more in question.
>>>
>>> I assume most software project screwups are management-oriented and
>>> have almost nothing to do with the technology platforms, as long as
>>> they are any of the top ten common ones. Bad process, bad
>>> requirements, incompetent managers, theft, abused developers, ... that
>>> sort of thing.
>>
>>I meant technology in a wider sense. To have a technology implies an
>>ability to manage the design process, and the reverse.
>
> Oh, I must disagree. Technology can be misused. Give an idiot a
> bulldozer, and an average guy a shovel, and see who can move a pile of
> dirt better.

Technology also necessarily includes mechanisms for fencing idiots out. One
needs a license to drive a bulldozer. You cannot sell a shovel with an
ergonomically carved handle as a new generation of bulldozers.

> IT management has traditionally been bad, and in my
> opinion has gotten much worse over the past ten years. Projects that
> three guys could finish in a year twenty guys fail at over two years,
> given the same technology ... these numbers from an actual situation I
> participated in. Couple of them, actually.

See, idiots are looking for the best nesting places, and IT positions are
ideal for laying their eggs. This is not because of technology, it is
because of the absence of any. The only technology there is one of fooling
some idiots by others and by themselves.

> Not to mention the
> failure of huge government projects where the numbers are 100x bigger,
> like just trying to get the FBI up on current email!

I would not blame the government for that. It gave up in the late 80's.
Before that time it tried to influence things, remember the Ada mandate.
Since then it just could not compete with the huge civilian software market.
Today governmental contracts have no economic weight in relation to the
money spent on other rubbish. So they started to play by the rules of the
others, the idiots. These others actually were much bigger and far more
evolutionarily advanced idiots.
The result for government is Windows, C++ and Java deployed for
mission-critical applications.

S Perryman

Jul 11, 2007, 3:45:58 PM
JXStern wrote:

> On Wed, 11 Jul 2007 17:21:44 +0100, Jon Harrop <j...@ffconsultancy.com>

>>I certainly stopped using dynamically typed languages when I found them too


>>error-prone for significant projects. Since then I have learned techniques
>>to leverage static type systems that let me do things I could not possibly
>>have done in a dynamically typed language.

> As long as the type system supports assertions, I much prefer dynamic.

> Suspect you have other process, design, and coding issues if type
> errors are a significant problem for you.

Weakly-typed prog langs cannot be used across the range that strongly-typed
prog langs are. No one builds large-scale safety-critical systems in a
prog lang that pops up with "message not understood" errors at runtime
when I request the nuclear reactor object to shut down to prevent
melt-down.

The only argument there has ever been in favour of weakly-typed prog langs
has been for meta-typing based problems or (similarly related), types for
which possession of properties may be conditional. No other forms of
problem appear to be easily solvable in weakly-typed prog langs but not
similarly so in strongly-typed prog langs.


Regards,
Steven Perryman

Dmitry A. Kazakov

Jul 11, 2007, 4:16:58 PM
On Wed, 11 Jul 2007 18:44:25 +0100, Jon Harrop wrote:

> Dmitry A. Kazakov wrote:
>> ? If side effect is functional, then I don't know what to say. At least it
>> implies that assignment is functional too.
>
> Exactly, yes.

Then where is any difference?

Note that if OO is understood as typed, as ADT, then that necessarily
includes functional decomposition, because one cannot define types without
operations on them.

>>> Absolutely. This never had anything to do with a purely functional model.
>>
>> This raises questions. What is left?
>
> Impure functional programming languages like Mathematica, Lisp, Scheme,
> OCaml, Standard ML and Scala.

These are languages. What is left of the model they allegedly embody? I
don't see any consistent way to keep it simultaneously "functional" and
typed. Once you have types, operations automatically become just a part of them.

>> Why impurity is needed?
>
> Impurity is not required, as purely functional languages do exist, but it
> offers a different set of trade-offs that are preferable for certain
> classes of user. In particular, impure functional languages are among the
> fastest languages in existence whereas purely functional languages tend to
> trail with dynamically typed languages as the purity requires laziness
> (thunks) and graph traversal for evaluation.

It seems that you say here that most (how many?) of known algorithms cannot
be implemented in a functional way achieving their theoretical complexity.
E.g. instead of O(log N) you get O(N). If it is indeed so, then, sorry, you
have to scrap your paradigm completely.

>>>>>> Rewriter is defined as a function,
>>>>>
>>>>> A pattern match.
>>>>
>>>> No, a pattern match gives yes/no
>>>
>>> What do you mean by this?
>>
>> Pattern is a function which takes a chain of letters from some alphabet
>> and yields true or false. When the result is true, it is said that the
>> pattern matches the chain.
>
> Ok. In languages like OCaml, patterns can match trees as well as sequences.

No, it is still the same class. A tree is equivalent to a sequence with
balanced brackets.
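That equivalence can be sketched with the binary-tree type from earlier in the thread, using the classic bijection between binary trees and balanced bracket strings (the `encode` function is invented for this example):

```ocaml
type t =
  | Empty
  | Node of t * t

(* Empty -> "", Node (l, r) -> "(" ^ encode l ^ ")" ^ encode r. *)
let rec encode = function
  | Empty -> ""
  | Node (l, r) -> "(" ^ encode l ^ ")" ^ encode r

let () = assert (encode (Node (Node (Empty, Empty), Empty)) = "(())")
```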

>>> Integrating pattern matching into a statically typed language allows
>>> programs to be statically checked more thoroughly and greatly improves
>>> performance by allowing pattern matches to be compiled and optimized
>>> statically.
>>
>> Why? What is the subject of matching? Some strings (the program
>> processes), or the program's source code? In either case I see no relation
>> to static typing. Maybe you mean types matching?
>
> In these languages, sum types (known as "variant" types) are a core part of
> the type system and complement product types. A sum type presents
> alternatives (like a tagged union in C). For example, a binary tree:

[...]

> This is very beneficial in practice as it catches a lot of errors. Moreover,
> this form of static checking requires the pattern matcher to be integrated
> with the type system.

I don't see why pattern matching is required here; whatever the compiler
uses to parse literals of the type t, that is up to the compiler. Why
should I care? Anyway, I would prefer a language where aggregates and
literals were abstract operations to override.

> In the case of OCaml, you can even have the sum type inferred and statically
> checked:

I don't like type inference and I don't count it for an advantage. The
language should clearly distinguish its static (types) and dynamic (values
of) parts.

> I don't really see a better way around this. The conventional OO approach
> used by the XML libraries of Java and .NET is certainly horrendous in
> comparison.

There is a much simpler approach - just don't use XML.

For somebody who cannot restrain his love of angle brackets, the trick is
this: write them on a sheet of paper, then write the data into
the file/wire etc. The paper can be then put on the wall for meditation
purposes and better digestion after lunch time. The file is used for work.

>> The problem is, to dispatch to some virtual method from the constructor.
>> In terms of functional decomposition you need to inject some code into the
>> body of another function, here a constructor. The problem is that at this
>> point the object is still under construction, so you cannot pass the object
>> to the target. It can only be solved by proper typing.
>
> I see. Yes. I've certainly hit that problem a few times with OO. I think
> higher-order objects are the solution: you just pass an object to the
> constructor of another object.

This breaks the abstraction, because you split one type into two. What is
worse, you have to derive from both, handling two hierarchies of types.

>>> Well, an immutable value cannot have state. Mutable values do, of course.
>>
>> No, there is no such thing as a mutable value. Mutable can be a variable,
>> parameter, object. They are functions mapping execution states onto
>> values. Switching states just selects another value without changing them.
>
> This is just nomenclature. In OCaml, a mutable reference is a type of value:

Hmm, reference itself should have a type and a value.

Jon Harrop

Jul 11, 2007, 5:59:04 PM
topmind wrote:
> The problem I see with the "scalability" claim is that OO itself seems
> to make for big programs/apps, creating its own problem and then
> allegedly getting credit for fixing a problem it creates.

Exactly. That's what most design patterns really are, IMHO.

Jon Harrop

Jul 11, 2007, 6:24:33 PM
Dmitry A. Kazakov wrote:
> On Wed, 11 Jul 2007 18:44:25 +0100, Jon Harrop wrote:
>> Dmitry A. Kazakov wrote:
>>> ? If side effect is functional, then I don't know what to say. At least it
>>> implies that assignment is functional too.
>>
>> Exactly, yes.
>
> Then where is any difference?

Elsewhere. :-)

OO and functional differ in their forms of abstraction: classes vs
functions/functors.

> Note that if OO is understood as typed, as ADT, then that necessarily
> includes functional decomposition, because one cannot define types without
> operations on them.

If functions are first-class values then values can represent objects, thus
including class decomposition. So the two are equivalent. However, this
says nothing of how much you can check at compile time or how easy it is to
extend OO or functional decompositions, which are the interesting
questions.

>> Impure functional programming languages like Mathematica, Lisp, Scheme,
>> OCaml, Standard ML and Scala.
>
> These are languages. What is left of the model they allegedly embody? I
> don't see any consistent way to keep it simultaneously "functional" and
> typed.

You just include a type equivalent to:

'a -> 'b

that represents function values. That's all you need: first-class functions.
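For instance, a higher-order function gets exactly such a type inferred in OCaml (a trivial sketch):

```ocaml
(* apply has the inferred type ('a -> 'b) -> 'a -> 'b. *)
let apply f x = f x

let () = assert (apply succ 41 = 42)
```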

> Once you have types, operations automatically become just a part of them.

There are many different type systems, ranging from completely dynamic in
Mathematica and Lisp to completely static in Standard ML.

>>> Why impurity is needed?
>>
>> Impurity is not required, as purely functional languages do exist, but it
>> offers a different set of trade-offs that are preferable for certain
>> classes of user. In particular, impure functional languages are among the
>> fastest languages in existence whereas purely functional languages tend
>> to trail with dynamically typed languages as the purity requires laziness
>> (thunks) and graph traversal for evaluation.
>
> It seems that you say here that most (how many?) of known algorithms
> cannot be implemented in a functional way achieving their theoretical
> complexity. E.g. instead of O(log N) you get O(N). If it is indeed so,
> then, sorry, you have to scrap your paradigm completely.

Provided you have laziness, purity only degrades the constant prefactor and
not the asymptotic complexity. Same for dynamic typing.

>> This is very beneficial in practice as it catches a lot of errors.
>> Moreover, this form of static checking requires the pattern matcher to be
>> integrated with the type system.
>
> I don't see why pattern matching is required here, whatever thing the
> compiler use to parse literals of the type t, that is up to the compiler.

Then the compiler leaves it up to the programmer. Consider a trivial pattern
match that rotates addition nodes to be left-associative:

a+(b+c) -> (a+b)+c

I'd rather write the pattern match than a hierarchy of explicitly-specified
classes laid out in a design pattern representing a manually-compiled
pattern match.
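For illustration, here is that rotation written as a pattern match (the `expr` type is a toy invented for this sketch, not anything from the thread):

```ocaml
(* A toy expression type for the sketch. *)
type expr = Int of int | Add of expr * expr

(* Rotate addition nodes until every Add is left-associative:
   a + (b + c)  ->  (a + b) + c, applied recursively. *)
let rec rotate = function
  | Add (a, Add (b, c)) -> rotate (Add (Add (a, b), c))
  | Add (a, b) -> Add (rotate a, rotate b)
  | Int _ as e -> e

let () =
  assert (rotate (Add (Int 1, Add (Int 2, Int 3)))
          = Add (Add (Int 1, Int 2), Int 3))
```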

> Why should I care. Anyway I would prefer a language where aggregates and
> literals where abstract operations to override.

They are with active patterns.

>> In the case of OCaml, you can even have the sum type inferred and
>> statically checked:
>
> I don't like type inference and I don't count it for an advantage. The
> language should clearly distinguish its static (types) and dynamic (values
> of) parts.

The static type information is still available, you just don't have to write
it down yourself (repeatedly). Note that OCaml inferred the equivalent of
an entire class hierarchy in the example I gave.

>> I don't really see a better way around this. The conventional OO approach
>> used by the XML libraries of Java and .NET is certainly horrendous in
>> comparison.
>
> There is a much simpler approach - just don't use XML.

:-)

>> I see. Yes. I've certainly hit that problem a few times with OO. I think
>> higher-order objects are the solution: you just pass an object to the
>> constructor of another object.
>
> This breaks the abstraction, because you split one type into two. What is
> worse you have to derive from both handling two hierarchies of types.

Just infer the types. ;-)

>>>> Well, an immutable value cannot have state. Mutable values do, of
>>>> course.
>>>
>>> No, there is no such thing as a mutable value. Mutable can be a
>>> variable, parameter, object. They are functions mapping execution states
>>> onto values. Switching states just selects another value without
>>> changing them.
>>
>> This is just nomenclature. In OCaml, a mutable reference is a type of
>> value:
>
> Hmm, reference itself should have a type and a value.

The type is denoted:

'a ref

the value is denoted:

ref x

Jon Harrop

Jul 11, 2007, 8:30:57 PM
gpuchtel wrote:
> however, just how many lines of code are we talking in the largest known
> OCaml project? Is it greater or less than 100 KLOC?

I just checked out of interest and the OCaml distribution is ~450kLOC of
OCaml.

Daniel T.

Jul 11, 2007, 9:56:06 PM
Jon Harrop <j...@ffconsultancy.com> wrote:
> topmind wrote:

> > The problem I see with the "scalability" claim is that OO itself seems
> > to make for big programs/apps, creating its own problem and then
> > allegedly getting credit for fixing a problem it creates.
>
> Exactly. That's what most design patterns really are, IMHO.

That makes it sound like you are implying that functional solutions
are "fundamentally better". Is that true?

gpuchtel

Jul 11, 2007, 11:58:49 PM
I get the impression that Dr. Harrop sees everything as a FOP nail and
anyone using anything else other than a FOP hammer must somehow be
blind. Moreover, I get the impression he is trying to make us see the
error of our ways; to save us from ourselves. This is unfortunate; I
would have thought anyone with a prestigious title of Dr. would
acknowledge there is more than one right way to do something; however,
this consistent _appearance_ of arrogance, insisting that his way is
always more right, discredits any strong arguments he may have made.
He doesn't seem satisfied that his way is arguably superior in some
cases; rather, the OO way is somehow evil in all cases and those who
practice it are themselves somehow evildoers.

For me, I'd prefer to work with a pile of objects that exhibit 'state'
and 'behavior', which are (hopefully) independent, than a pile of
functions that are (hopelessly) dependent. Yes, I understand that a
mess or elegance is achievable in either paradigm; however, objects
make sense to me, I 'get' them and I can associate them to real world
abstractions easier than I can functions. Isn't the 'real' measurement
of a successful paradigm how well an individual 'gets' it?

I don't profess to be a guru of OOP, but I am more productive using
its techniques. Yet I am not blind to the possibility that there may be another, possibly
even better way to do something in a given case. In that case I would
embrace it, not ridicule it, but I'd do it in a language I understood.
Maybe I'm subservient to one side of my brain more than the other, but
I relate to 'things' more than 'processes'. So, maybe this religious
war between FOP and OOP is really one between those who are left-side
dominant and those who are right-side dominant; Lilliputians and
Blefuscudians - "all true believers break their eggs at the convenient
end." -- Jonathan Swift

topmind

Jul 12, 2007, 12:14:35 AM

S Perryman wrote:
> JXStern wrote:
>
> > On Wed, 11 Jul 2007 17:21:44 +0100, Jon Harrop <j...@ffconsultancy.com>
>
> >>I certainly stopped using dynamically typed languages when I found them too
> >>error-prone for significant projects. Since then I have learned techniques
> >>to leverage static type systems that let me do things I could not possibly
> >>have done in a dynamically typed language.
>
> > As long as the type system supports assertions, I much prefer dynamic.
>
> > Suspect you have other process, design, and coding issues if type
> > errors are a significant problem for you.
>
> Weakly-typed prog langs cannot be used across the range that strongly-typed
> prog langs are. No one builds large-scale safety-critical systems in a
> prog lang that pops up with "message not understood" errors at runtime
> when I request the nuclear reactor object to shut down to prevent
> melt-down.

I agree that medical & emergency control systems are where strong-
typing shines. However, I think weakly typed languages are good when
*productivity* matters, that is when the biz wants lots of software on
a short budget, which is usually the case. I find such languages
easier to read, closer to pseudo-code, because you don't have to deal
with complex verbose declarations and chain-to-chain-to-chain of
typing-dependency logic.

>
> The only argument there has ever been in favour of weakly-typed prog langs
> has been for meta-typing based problems or (similarly related) , types for
> which possession of properties may be conditional. No other forms of
> problem appear to be easily solvable in weakly-typed prog langs but not
> similarly so in strongly-typed prog langs.
>
>
> Regards,
> Steven Perryman

-T-

S Perryman

Jul 12, 2007, 4:17:16 AM
topmind wrote:

> S Perryman wrote:

JH>As long as the type system supports assertions, I much prefer dynamic.

JX>Suspect you have other process, design, and coding issues if type
JX>errors are a significant problem for you.

>>Weakly-typed prog langs cannot be used across the range that strongly-typed
>>prog langs are. No one builds large-scale safety-critical systems in a
>>prog lang that pops up with "message not understood" errors at runtime
>>when I request the nuclear reactor object to shut down to prevent
>>melt-down.

> I agree that medical & emergency control systems are where strong-
> typing shines. However, I think weakly typed languages are good when
> *productivity* matters, that is when the biz wants lots of software on
> a short budget, which is usually the case. I find such languages
> easier to read, closer to pseudo-code, because you don't have to deal
> with complex verbose declarations and chain-to-chain-to-chain of
> typing-dependency logic.

A claim completely demolished by the range of strongly-typed prog langs
that currently exist, which provide:

- type inference on top of or instead of manifest typing (FP being
a mainstream example of this)

- an *environment* that enables the programmer to switch between a
weakly-typed env (Smalltalk etc) , to a strongly-typed one (type
checking, compilation etc) at the command line (just as I was doing when
developing with CLOS/Lisp *17 years ago* )


Regards,
Steven Perryman

Daniel T.

Jul 12, 2007, 10:15:48 AM
gpuchtel <gpuc...@gmail.com> wrote:

> For me, I'd prefer to work with a pile of objects that exhibit 'state'
> and 'behavior', which are (hopefully) independent than a pile of
> functions that are (hopelessly) dependent. Yes, I understand that a
> mess or elegance is achievable in either paradigm; however, objects
> make sense to me, I 'get' them and I can associate them to real world
> abstractions easier than I can functions. Isn't the 'real' measurement
> of a successful paradigm how well an individual 'gets' it?

I think the real measurement of a paradigm is how well it fits the
problem at hand. When one has a static problem (such as a sudoku solver
or the "whose fish" problem we tossed about back in May/June) where
there is only one solution to a given set of inputs and the problem
itself is not time dependent, functional approaches will probably
produce more efficient systems. I use side effect free functions all the
time when faced with these sorts of problems (even if I don't
necessarily use a pure functional language.)

But you are right of course, if the designer doesn't understand the
paradigm (or in many cases simply has limited experience with it,) they
will be unlikely to find that more efficient system.

There is no reason, however, to limit a complex problem to one paradigm.
There are plenty of multi-paradigm languages in existence and I see no
reason to refuse to use any of the constructs they provide simply
because the end solutions won't be "pure".

Jon Harrop

Jul 12, 2007, 11:05:56 AM

That statement is completely independent of other paradigms. Just an
observation about the state of design patterns in the literature.

topmind

Jul 12, 2007, 11:29:02 AM

S Perryman wrote:
> topmind wrote:
>
> > S Perryman wrote:
>
> JH>As long as the type system supports assertions, I much prefer dynamic.
>
> JX>Suspect you have other process, design, and coding issues if type
> JX>errors are a significant problem for you.
>
> >>Weakly-typed prog langs cannot be used across the range that strongly-typed
> >>prog langs are. No one builds large-scale safety-critical systems in a
> >>prog lang that pops up with "message not understood" errors at runtime
> >>when I request the nuclear reactor object to shut down to prevent
> >>melt-down.
>
> > I agree that medical & emergency control systems are where strong-
> > typing shines. However, I think weakly typed languages are good when
> > *productivity* matters, that is when the biz wants lots of software on
> > a short budget, which is usually the case. I find such languages
> > easier to read, closer to psuedo-code, because you don't have to deal
> > with complex verbose declarations and chain-to-chain-to-chain of
> > typing-dependency logic.
>
> A claim completely demolished by the range of strongly-typed prog langs
> that currently exist which provide :

The existence of a language does not prove the utility/benefits of
weak or strong typing. It only proves implementability.

>
> - type inference on top of or instead of manifest typing (FP being
> a mainstream example of this)
>
> - an *environment* that enables the programmer to switch between a
> weakly-typed env (Smalltalk etc) , to a strongly-typed one (type
> checking, compilation etc) at the command line (just as I was doing when
> developing with CLOS/Lisp *17 years ago* )
>
>
> Regards,
> Steven Perryman

-T-

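The "message not understood" failure mode debated above can be sketched in a few lines of Python (standing in here for a dynamically-typed language; the `Reactor` class and its methods are hypothetical, purely for illustration):

```python
# Dynamically-typed sketch: the type error surfaces only at runtime,
# at the moment the missing method is actually invoked.

class Reactor:
    """Hypothetical reactor control object (illustration only)."""
    def start(self):
        return "running"
    # Note: no shut_down method is defined.

reactor = Reactor()
print(reactor.start())  # works fine

try:
    reactor.shut_down()  # missing method: detected only now, at runtime
except AttributeError as e:
    print("runtime failure:", e)
```

A statically-checked language would reject the bad call before the program ever ran; here the error appears only when that line executes, which is exactly the safety-critical objection raised in the quoted post.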

Jon Harrop

unread,
Jul 12, 2007, 11:34:33 AM7/12/07
to
topmind wrote:
> The existence of a language does not prove the utility/benefits of
> weak or strong typing. It only proves implementability.

Have a look at statically-typed languages with type inference that Steven
referred to. They solved the problems you cited, combining the development
speed and brevity of dynamic typing with the assurance of static checking.
There is basically no need for dynamic typing now.
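The combination described here, dynamic-style brevity plus static checking, can be sketched in Python with an external checker such as mypy (a more limited form of inference than the ML-style systems under discussion; the function below is a made-up illustration):

```python
# With one annotation on the signature, a static checker (e.g. mypy)
# can verify every call site before the program runs; the body needs
# no further declarations because local types are inferred.

def total_length(words: list[str]) -> int:
    # len(w) is inferred as int, so the sum is inferred as int.
    return sum(len(w) for w in words)

print(total_length(["static", "typing"]))  # prints 12

# total_length(42)  # a checker rejects this line statically; without
#                   # checking it would fail only at runtime
```

The code stays nearly as terse as the untyped version, while the misuse commented out above is caught at check time rather than in production.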

S Perryman

unread,
Jul 12, 2007, 11:54:33 AM7/12/07
to
topmind wrote:

> S Perryman wrote:

TM>I agree that medical & emergency control systems are where strong
TM>typing shines. However, I think weakly typed languages are good when
TM>*productivity* matters, that is, when the business wants lots of software
TM>on a short budget, which is usually the case. I find such languages
TM>easier to read, closer to pseudo-code, because you don't have to deal
TM>with complex verbose declarations and long chains of type-dependency
TM>logic.

>>A claim completely demolished by the range of strongly-typed prog langs
>>that currently exist, which provide:

> The existence of a language does not prove the utility/benefits of
> weak or strong typing. It only proves implementability.

Such languages have existed in the field for a long time, and have proven to
be useful and beneficial.

Therefore your claim is a fallacy.


Regards,
Steven Perryman

Daniel T.

unread,
Jul 12, 2007, 2:37:24 PM7/12/07
to
On Jul 12, 11:05 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Daniel T. wrote:

> > That makes it sound like you are implying that functional solutions
> > are "fundamentally better". Is that true?
>
> That statement is completely independent of other paradigms. Just an
> observation about the state of design patterns in the literature.

I ask yet again: are you claiming that FP solutions are "fundamentally
better"? You keep avoiding the question.

If you aren't saying that they are fundamentally better, then all you
are saying is that when they are better they are better, and when they
are not, they are not; i.e., you are saying nothing at all.

I'm beginning to think that I should just put you in the same category
as topmind...
