Ants: Clojure vs. Common Lisp

Pascal Costanza

Jun 14, 2009, 3:30:48 PM
Hi,

I have reimplemented the Ants simulation example that comes with Clojure
in Common Lisp, more precisely in LispWorks, using its multiprocessing
and graphics capabilities.

Furthermore, it doesn't use STM as in Clojure, but rather a solution
based on locks, which doesn't seem to be much harder to me and required
only very little tweaking. Since there is no need to monitor read and
write accesses to shared mutable state in the lock-based version, it
should also be more efficient.

The original Clojure version can be found here:
http://clojure.googlegroups.com/web/ants.clj

The Common Lisp / LispWorks version can be found here:
http://paste.lisp.org/+1R5O

I have tested the latter both in LispWorks 5.1.2 and in the alpha
version of LispWorks 6.0 with multicore support (which is not publicly
available yet).


Greets,
Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Slobodan Blazeski

Jun 14, 2009, 4:30:54 PM
I'm switching to your accessor naming convention classname-slotname.
It's way better than my slotname-slot:
(defclass ant ()
  ((dir :initarg :dir :accessor ant-dir)
   ...))


Bobi

Pascal Costanza

Jun 14, 2009, 4:36:43 PM

It depends. If your class hierarchies become deep, this naming
convention starts to reveal too much about them. I use the
classname-slotname convention typically only in code where I want to
illustrate ideas. In "real" code, I try to avoid it, for better
information hiding...

D Herring

Jun 14, 2009, 9:40:32 PM
Pascal Costanza wrote:

> I have reimplemented the Ants simulation example that comes with Clojure
> in Common Lisp, more precisely in LispWorks, using its multiprocessing
> and graphics capabilities.
>
> Furthermore, it doesn't use STM as in Clojure, but rather a solution
> based on locks, which doesn't seem to be much harder to me and required
> only very little tweaking. Since there is no need to monitor read and
> write accesses to shared mutable state in the lock-based version, it
> should also be more efficient.

Careful. You're awfully close to trolling...

The ASM coder pulls out a toy C example; "I can do that in a few
hundred lines of ASM, and there's no need to pull in the huge libc.
No need for C."

The C coder examines a toy CL example; "I can do that easily without
recursion, closures, or garbage collection. No need for CL."

PC examines a toy Clojure example; "one lock isn't hard to use. No
need for STM."

- Daniel

Pascal Costanza

Jun 15, 2009, 2:20:47 AM

Where did I say "No need for STM."?

ACL

Jun 15, 2009, 8:22:25 AM

Probably inferring from this:

"...a solution based on locks, //which doesn't seem to be much harder
to me and required only very little tweaking.//"

I think he improperly extrapolated this to mean 'for every project
imaginable'.

D Herring

Jun 15, 2009, 9:29:16 AM
Pascal Costanza wrote:
> D Herring wrote:
>> Pascal Costanza wrote:
>>
...

>>> Furthermore, it doesn't use STM as in Clojure, but rather a solution
>>> based on locks, which doesn't seem to be much harder to me and
>>> required only very little tweaking. Since there is no need to monitor
>>> read and write accesses to shared mutable state in the lock-based
>>> version, it should also be more efficient.
...

>> PC examines a toy Clojure example; "one lock isn't hard to use. No
>> need for STM."
>
> Where did I say "No need for STM."?

I apologize for putting words in your mouth and then critiquing them.
I had overreacted to the assertions that one lock was easy to use
and more efficient; together these indicated to me that the
alternative (STM) was being degraded, with this example code as
supporting evidence.

Rereading your post, I see that this second paragraph was just a
remark on the present implementation rather than a comment on more
general principles.

- Daniel

Rich Hickey

Jun 15, 2009, 2:30:33 PM
On Jun 14, 3:30 pm, Pascal Costanza <p...@p-cos.net> wrote:
> Hi,
>
> I have reimplemented the Ants simulation example that comes with Clojure
> in Common Lisp, more precisely in LispWorks, using its multiprocessing
> and graphics capabilities.

No, you haven't. What you've implemented is a pale imitation of the
ants simulation that neglects its most important features.

>
> Furthermore, it doesn't use STM as in Clojure, but rather a solution
> based on locks, which doesn't seem to be much harder to me and required
> only very little tweaking. Since there is no need to monitor read and
> write accesses to shared mutable state in the lock-based version, it
> should also be more efficient.
>

It doesn't come close to doing the same job.

The ants simulation was designed to show how to do things that are
difficult to do without an STM. It is not simply a demo of how to use
multiple threads to make ants move around on screen.

STM lets you make decisions based upon consistent information. So,
e.g. in the Clojure version when the ant looks around and sees food on
an empty space, and makes a decision to pick it up, it will be able to
do so without finding it missing or the space occupied, i.e. it will
atomically be able to make a decision and act upon it, even one
involving multiple ad hoc elements. In your code you've made it so the
ants can't see each other, thus removing the coordination problem
rather than solving it. The Clojure code can handle arbitrary sets of
read/modify operations, on arbitrary sets of involved data,
consistently.
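
The atomic decide-and-act described here can be sketched with Clojure
refs. This is an illustrative sketch, not the actual ants.clj code; the
names cell, ant, and take-food are made up for the example:

```clojure
;; Illustrative sketch: a transaction makes the "see food on an empty
;; space, then pick it up" decision atomic.
(def cell (ref {:food 1 :ant nil}))
(def ant  (ref {:food 0}))

(defn take-food []
  (dosync                             ; decision and action commit together
    (when (and (pos? (:food @cell))   ; sees food...
               (nil? (:ant @cell)))   ; ...on an unoccupied cell
      (alter cell update-in [:food] dec)
      (alter ant  update-in [:food] inc))))
```

If another transaction touches either ref in the meantime, this one
retries, so the ant never acts on stale information.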

You use timeouts on your locks, and silently fail to do the requested
operation if the lock times out. But the caller of move expects it to
work - it's not called maybe-move-if-we-can-get-the-lock. What if
evaporate did something important? It wouldn't be ok for it to skip
some steps if it can't get the locks.

There's more. You hold locks on an ongoing basis, vs obtaining and
releasing in a scope - always tricky and dangerous. Do you know how
that will behave should those locks map to OS resources? How might non-
scoped locks thwart optimization? In move, you obtain the new lock, do
some work, then unlock the old, and right now that work might be
trivial. But what happens when someone comes along later and makes the
work something that can fail? You risk orphaning a lock on such
failure. You simply cannot use locks without proper error handling.
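
The scoping point can be sketched in Clojure (illustrative names; the
code under discussion is LispWorks, but the hazard is the same): if the
work between a manual lock and unlock can throw, the lock is orphaned
unless its release is tied to a scope.

```clojure
(import 'java.util.concurrent.locks.ReentrantLock)

(def cell-lock (ReentrantLock.))

;; Fragile: if do-work throws, .unlock is never reached and the lock is
;; orphaned for every other thread.
(defn move-fragile [do-work]
  (.lock cell-lock)
  (do-work)
  (.unlock cell-lock))

;; Scope-bound: try/finally releases the lock on every exit path,
;; including errors.
(defn move-scoped [do-work]
  (.lock cell-lock)
  (try
    (do-work)
    (finally (.unlock cell-lock))))
```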

The Clojure ants sim also tackled the 'reporting' problem. How can you
get a consistent view of a world with many concurrent updates without
stopping it? The Clojure render function works with a consistent
snapshot of the world. There's no chance of it drawing the same ant
twice, or two ants in the same place. You've taken all the challenge
out of it by removing correlated data. Still, your code can show a
view of the world that never existed. You might think it's no big deal
when you are just painting in a game-like sim, but the techniques in
the Clojure version work in serious situations when related things
need to add up, and yours simply won't.

In short, your version doesn't compare whatsoever to what the Clojure
version provides. By making the problem simpler you've destroyed its
purpose, and you've avoided the illustrative challenges.

The purpose of the ants sim was to demonstrate techniques for handling
multithreaded applications where units of work involve consistent,
arbitrary, overlapping subsets of information, and, reporting
consistently on correlated data subject to concurrent modifications.
The Clojure version demonstrates that and your version does not.

Rich

Xah Lee

Jun 15, 2009, 5:15:01 PM

Great info. Thanks.

PS: I'm spreading this piece of writing to comp.lang.scheme and
comp.lang.functional. I'm doing this of my own accord, because I
believe it is beneficial in several respects.

Xah
http://xahlee.org/


Jon Harrop

Jun 16, 2009, 12:07:51 AM
Xah Lee wrote:
> PS i'm spreading this piece of writing to comp.lang.scheme and
> comp.lang.functional. I'm doing this on my own accord, because i
> believe it is beneficial in several aspects.

Very interesting, thanks. This sounds like a great STM example...

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u

Message has been deleted

Pascal Costanza

Jun 18, 2009, 9:42:41 AM
D Herring wrote:
> Pascal Costanza wrote:
>> D Herring wrote:
>>> Pascal Costanza wrote:
>>>
> ...
>>>> Furthermore, it doesn't use STM as in Clojure, but rather a solution
>>>> based on locks, which doesn't seem to be much harder to me and
>>>> required only very little tweaking. Since there is no need to
>>>> monitor read and write accesses to shared mutable state in the
>>>> lock-based version, it should also be more efficient.
> ...
>>> PC examines a toy Clojure example; "one lock isn't hard to use. No
>>> need for STM."
>>
>> Where did I say "No need for STM."?
>
> I apologize for putting words in your mouth and then critiquing them. I
> had overreacted to the assertions that one lock was easy to use and more
> efficient; together these indicated to me that the alternative (STM) was
> being degraded, with this example code as supporting evidence.

Well, it is also known that I am skeptical of STM, which could also be a
reason.

It is important to keep in mind in such discussions that STM usually
incurs an overhead for every read and write operation on potentially
shared variables in the range of a factor of 10-20, which is very high.

> Rereading your post, I see that this second paragraph was just a remark
> on the present implementation rather than a comment on more general
> principles.

Thanks for your post.

Pascal Costanza

Jun 18, 2009, 9:57:53 AM
Rich Hickey wrote:
> On Jun 14, 3:30 pm, Pascal Costanza <p...@p-cos.net> wrote:
>> Hi,
>>
>> I have reimplemented the Ants simulation example that comes with Clojure
>> in Common Lisp, more precisely in LispWorks, using its multiprocessing
>> and graphics capabilities.
>
> No, you haven't. What you've implemented is a pale imitation of the
> ants simulation that neglects its most important features.

Maybe. I have only looked at the source code, and from the code alone,
it wasn't clear to me which requirements it was supposed to fulfill.

>> Furthermore, it doesn't use STM as in Clojure, but rather a solution
>> based on locks, which doesn't seem to be much harder to me and required
>> only very little tweaking. Since there is no need to monitor read and
>> write accesses to shared mutable state in the lock-based version, it
>> should also be more efficient.
>>
>
> It doesn't come close to doing the same job.
>
> The ants simulation was designed to show how to do things that are
> difficult to do without an STM. It is not simply a demo of how to use
> multiple threads to make ants move around on screen.

OK.

> STM lets you make decisions based upon consistent information. So,
> e.g. in the Clojure version when the ant looks around and sees food on
> an empty space, and makes a decision to pick it up, it will be able to
> do so without finding it missing or the space occupied, i.e. it will
> atomically be able to make a decision and act upon it, even one
> involving multiple ad hoc elements. In your code you've made it so the
> ants can't see each other, thus removing the coordination problem
> rather than solving it. The Clojure code can handle arbitrary sets of
> read/modify operations, on arbitrary sets of involved data,
> consistently.

That's incorrect: the ants can indeed "see" each other in my code as
well. I am just using locks instead of flags to indicate which cells
are occupied by ants.

> You use timeouts on your locks, and silently fail to do the requested
> operation if the lock times out. But the caller of move expects it to
> work - it's not called maybe-move-if-we-can-get-the-lock. What if
> evaporate did something important? It wouldn't be ok for it to skip
> some steps if it can't get the locks.

wrt move: In your code, you check whether a cell is occupied by another
ant before you move there. So you do something like this:

(if (not (:ant @ahead))
    (move loc)
    ... do something else ...)

In my code, the move function has different semantics, in that it
combines the check with the move. So the equivalent code in my version
is this:

(if (not (move loc))
    ... do something else ...)

Operationally, this is the same.

wrt evaporate: I was under the impression that the ants simulation
should reflect "reality" in the sense that evaporation doesn't take
place on a cell that is occupied by an ant, and in that regard my code
can be considered "correct." If you want to ensure that evaporation
always takes place, that's easy to fix. (See below.)

> There's more. You hold locks on an ongoing basis, vs obtaining and
> releasing in a scope - always tricky and dangerous. Do you know how
> that will behave should those locks map to OS resources? How might non-
> scoped locks thwart optimization? In move, you obtain the new lock, do
> some work, then unlock the old, and right now that work might be
> trivial. But what happens when someone comes along later and makes the
> work something that can fail? You risk orphaning a lock on such
> failure. You simply cannot use locks without proper error handling.

That's a lot of ifs and whens here. Yes, some language constructs are
"dangerous." But in this particular case, the solution seems to work
quite well.

> The Clojure ants sim also tackled the 'reporting' problem. How can you
> get a consistent view of a world with many concurrent updates without
> stopping it? The Clojure render function works with a consistent
> snapshot of the world. There's no chance of it drawing the same ant
> twice, or two ants in the same place. You've taken all the challenge
> out of it by removing correlated data. Still, your code can show a
> view of the world that never existed. You might think it's no big deal
> when you are just painting in a game-like sim, but the techniques in
> the Clojure version work in serious situations when related things
> need to add up, and yours simply won't.

Easy to fix. (See below.) In the new version, I get consistent views of
each ant, and consistent views of the whole evaporation state. Getting
consistent views of all ants at the same time is not so hard either. (I
had a version of that as well, but didn't want to show each and every
variation.)

> In short, your version doesn't compare whatsoever to what the Clojure
> version provides. By making the problem simpler you've destroyed its
> purpose, and you've avoided the illustrative challenges.
>
> The purpose of the ants sim was to demonstrate techniques for handling
> multithreaded applications where units of work involve consistent,
> arbitrary, overlapping subsets of information, and, reporting
> consistently on correlated data subject to concurrent modifications.
> The Clojure version demonstrates that and your version does not.

See above, it wasn't clear to me what requirements your code was
supposed to meet.


See the new version at http://paste.lisp.org/display/82056

Rich Hickey

Jun 18, 2009, 11:56:08 AM

For anyone that's interested in the concurrency challenges Clojure is
designed to meet, the facilities it provides, and a detailed
description of the ant program, and is willing to listen to me run on
for a couple of hours, here is a video of the talk I gave, for which
the ant demo was written.

http://blip.tv/file/812787

Slides and code links are there.

Rich

fft1976

Jun 18, 2009, 2:15:00 PM
What bothers me a little bit about Clojure's overall philosophy is
that it uses OOP (in one of the many senses of the term) like the
"sequence abstraction", etc., but at the same time, it discourages OOP
(such as creating your own abstractions).

It feels like biting the hand that feeds you.

jgrant27

Jun 18, 2009, 4:45:23 PM

Yes, the larger issues of Clojure's philosophy bother me too.

STM is cool, but it comes with its own issues. There's no silver
bullet for concurrent programming.
Pushing the use of STM along with the idealistic but impractical
purely "functional" use of data structures on programmers, as part of
Clojure's philosophy, is where I take issue. Let the programmer decide
what works best for their needs without making too many important
assumptions for them. That is the Java way.

There was a more detailed discussion on concurrent/multi-core
programming regarding "purely" functional languages (Clojure /
Haskell) vs. Common Lisp here --> http://clozure.com/pipermail/openmcl-devel/2009-May/009516.html

Rich Hickey

Jun 18, 2009, 5:19:11 PM

A language comes with some data structures. Each is either immutable
or not (they can't be both!). Clojure's providing some immutable data
structures is no more it 'deciding for' or 'pushing' the programmer
than is Common Lisp's decision to provide mutable ones (and no,
choosing not to mutate a mutable data structure does not convey the
benefits of an immutable persistent data structure - you can't make a
cheap copy, you can't trust it won't be accidentally modified once
shared, the optimizer can't leverage the immutability etc).

If you want to program with mutable data structures, locks, etc with
Clojure you certainly can. In fact, if you want to use Clojure's
sequence library with mutable data structures you can. Can I use CL's
sequence library with immutable data structures? No - it is hardwired
to the (very few) mutable ones CL comes with. How is that letting the
programmer decide?
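
Both halves of that claim can be sketched quickly (a hypothetical
snippet, not code from the thread):

```clojure
;; Clojure's sequence functions work over a mutable Java collection...
(def m (java.util.ArrayList. [1 2 3]))
(filter odd? m)          ; => (1 3)

;; ...while an immutable vector gives cheap "copies" via structural
;; sharing: v2 shares structure with v, and v itself is untouched.
(def v  [1 2 3])
(def v2 (conj v 4))      ; v2 is [1 2 3 4], v is still [1 2 3]
```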

Rich

jgrant27

Jun 18, 2009, 7:18:53 PM
> A language comes with some data structures. Each is either immutable
> or not (they can't be both!).

Haskell (a 'purely' functional language) has mutable and immutable
versions of the same data structures:
e.g. Data.Array.MArray vs. Data.Array.IArray
http://hackage.haskell.org/packages/archive/array/0.1.0.0/doc/html/Data-Array-MArray.html#7
Also, for immutable/mutable arrays in Haskell there are functions to
convert/copy between each kind.
See freeze, unsafeFreeze, thaw and unsafeThaw.
The GHC compiler also does the right kinds of optimizations ('cheap
copies') between certain pairs of array types.
All this allows the programmer to make the decisions.

> (and no,
> choosing not to mutate a mutable data structure does not convey the
> benefits of an immutable persistent data structure - you can't make a
> cheap copy, you can't trust it won't be accidentally modified once
> shared, the optimizer can't leverage the immutability etc).

All this can be implemented in CL (but convention goes a long way
instead).
As for STM, there are libraries (and not part of the language) for
this --> http://common-lisp.net/project/cl-stm/doc/Examples.html#Concurrent_0020counter


-Justin


Paul Rubin

Jun 18, 2009, 8:45:21 PM
jgrant27 <jgra...@gmail.com> writes:
> Pushing the use of STM along with the idealistic but impractical
> purely "functional" use of data structures on programmers, as part of
> Clojure's philosophy, is where I take issue. Let the programmer decide
> what works best for their needs without making too many important
> assumptions for them. That is the Java way.

Replace the acronym "STM" with "GC", and "functional use" with
"automatic management", and we heard the exact same song about C
vs. Java.

jgrant27

Jun 18, 2009, 10:42:12 PM
On Jun 18, 5:45 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> Replace the acronym "STM" with "GC", and "functional use" with
> "automatic management", and we heard the exact same song about C
> vs. Java.

C and Java are imperative languages. We're talking about functional
languages here, either 'pure' or 'impure'.
This is a critical difference. STM with imperative languages means
that we are dealing with mutable data structures while in functional
languages we may or may not be dealing with mutable data structures.

STM is a good tool in many situations where we want to perform
concurrent programming WITH MUTABLE TYPES. The whole point of using
STM is to read/write something safely and so immutable data structures
don't add much value here. Even in the Haskell 'purely functional'
camp which has been around a while, there are many valid reasons that
they have Monads, immutable/mutable data structures and ways to modify
them. Things are usually immutable in Haskell by default but that does
not mean state cannot be changed, just that when it's done it's
explicit.

Of course there are many other valid uses for immutable data
structures too, but why are they required with STM in Clojure? If we
use immutable data structures in Clojure, then why do we need STM at
all?
In Clojure they are probably required because there are layers of type
systems. The Clojure types (immutable) live on top of Java types
(mutable). Then there is also the Java interop ability of Clojure,
which means that STM does have value because we could be dealing with
mutable Java types. This is a bizarre mix to have to keep in mind when
trying to write software (let alone concurrent software).

If STM is being used for concurrent programming, then the argument for
immutable types only seems to lose weight more quickly in Clojure (or
any other language). Optimization hints, 'cheap copies' (just
pointers?) and ensuring no accidental changes with mutable data
structures are easy enough. C has its 'const', Java has its 'final'
keyword, Lisp has defconstant, etc.


-Justin

Steve Schafer

Jun 18, 2009, 11:42:40 PM
On Thu, 18 Jun 2009 19:42:12 -0700 (PDT), jgrant27 <jgra...@gmail.com>
wrote:

>On Jun 18, 5:45 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>> Replace the acronym "STM" with "GC", and "functional use" with
>> "automatic management", and we heard the exact same song about C
>> vs. Java.
>
>C and Java are imperative languages. We're talking about functional
>languages here, either 'pure' or 'impure'.
>This is a critical difference. STM with imperative languages means
>that we are dealing with mutable data structures while in functional
>languages we may or may not be dealing with mutable data structures.

I think you're kind of missing Paul's point...

>...

>Of course there are many other valid uses for immutable data
>structures too but why are they required with STM in Clojure ? If we
>use immutable data structures in Clojure then why do we need STM at
>all ?

And, apparently, the point of STM in Clojure, too. STM, much like IO and
other monads in Haskell, is a way of introducing mutability in a
_controlled_ fashion, so that you can get your work done without having
to worry about all of the nasty issues that would otherwise arise in
concurrent programming.
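
For instance (a minimal sketch, not from Steve's post), Clojure
confines mutation to managed references that are updated by applying
pure functions:

```clojure
;; State lives in a managed reference; the only way to change it is to
;; apply a pure function to the current value.
(def counter (atom 0))
(swap! counter inc)   ; atomically applies inc to the current value
@counter              ; => 1
```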

I think you're off the mark regarding Clojure's philosophy, too. If I
might paraphrase (and anthropomorphize): "Hi, my name is Clojure. You
know something? Functional programming with higher-order functions is
great stuff. Your code is much simpler and more concise, and is more
likely to work the way you want it to. Yeah, I know, some kinds of
things are just plain hard to express in pure functional code. But don't
worry, I've got some cool stuff for that, too. One of the neatest things
is called STM, and it makes stateful, concurrent programming a whole lot
easier."

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Scott Burson

Jun 19, 2009, 1:10:28 AM
On Jun 18, 2:19 pm, Rich Hickey <richhic...@gmail.com> wrote:
>
> If you want to program with mutable data structures, locks, etc with
> Clojure you certainly can. In fact, if you want to use Clojure's
> sequence library with mutable data structures you can. Can I use CL's
> sequence library with immutable data structures? No - it is hardwired
> to the (very few) mutable ones CL comes with.

Not if you use FSet :)

(I know you know that -- I just couldn't resist the opportunity to
plug FSet :)

-- Scott

Jon Harrop

Jun 19, 2009, 4:27:54 AM
jgrant27 wrote:
> Let the programmer decide what works best for their needs without making
> too many important assumptions for them. That is the Java way.

Nonsense. Java is the OO+generics way or the highway.

If Java really did leave it up to the programmer to make the right decision
then it would be a multiparadigm common-language platform with first-class
lexical closures, value types, tail calls, a fast FFI, a decent static type
system etc.

In reality, Java seriously penalizes the programmer for even attempting to
do anything slightly outside its very limited design constraints. Even
generics were half-assed.

The CLR does a much better job of leaving it up to the programmer. If you're
writing C# and want to do logic programming or tree munging you can drop to
a more appropriate language like F# seamlessly. If you're writing F# and
want to write Fortran-style numerical code or use established
code-generating tools then you can drop to C# seamlessly. You can lash
together VB and interoperate with that seamlessly too. There are still
important design limitations but it is by far the most versatile
industrial-strength platform in existence today.

jgrant27

Jun 19, 2009, 4:37:43 AM
On Jun 19, 1:27 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> jgrant27 wrote:
> > Let the programmer decide what works best for their needs without making
> > too many important assumptions for them. That is the Java way.
>
> Nonsense. Java is the OO+generics way or the high way.

You either misread this or I should have phrased it better; I actually
agree with you.
The Java way assumes that the Java language designer knows best and
will make many assumptions for the Java programmer.
I have always had a problem with this.

Pascal Costanza

Jun 19, 2009, 9:16:17 AM
Rich Hickey wrote:
> For anyone that's interested in the concurrency challenges Clojure is
> designed to meet, the facilities it provides, and a detailed
> description of the ant program, and is willing to listen to me run on
> for a couple of hours, here is a video of the talk I gave, for which
> the ant demo was written.
>
> http://blip.tv/file/812787
>
> Slides and code links are there.

You didn't comment on the fact that my version of the code now actually
does what it's supposed to do.

Pascal Costanza

Jun 19, 2009, 9:17:26 AM

...except that by the time Java entered the market, it was known how to
implement garbage collectors efficiently, while for STM we don't know
this yet.

Rich Hickey

Jun 19, 2009, 11:37:08 AM
On Jun 19, 9:16 am, Pascal Costanza <p...@p-cos.net> wrote:
> Rich Hickey wrote:
> > For anyone that's interested in the concurrency challenges Clojure is
> > designed to meet, the facilities it provides, and a detailed
> > description of the ant program, and is willing to listen to me run on
> > for a couple of hours, here is a video of the talk I gave, for which
> > the ant demo was written.
>
> >http://blip.tv/file/812787
>
> > Slides and code links are there.
>
> You didn't comment on the fact that my version of the code now actually
> does what it's supposed to do.
>

I haven't even looked at your new code. I don't have the time right
now.

Rich

Tayssir John Gabbour

Jun 21, 2009, 4:33:12 PM
On Jun 19, 3:16 pm, Pascal Costanza <p...@p-cos.net> wrote:

> Rich Hickey wrote:
> >http://blip.tv/file/812787
>
> > Slides and code links are there.
>
> You didn't comment on the fact that my version of the code now actually
> does what it's supposed to do.


Hey Pascal,

Have you looked at the points he made in his video/slides? What do you
think about them? (I can point out where in the video he offered his
arguments, if you want.)

His conclusion, that a Clojure-style STM is a better default UI than
locks, sounds reasonable to me. For the code I've written which I
later wanted to be multithreaded "safe", one of Clojure's approaches
(atoms/vars/refs/agents) would've likely been just what I wanted.

(The connotation of "safe" is accurate -- I just wanted the stupid
thing to run without much energy or art on my part, and I felt this
wasn't an unreasonable request, conceptually.)


Tayssir


PS: To be offtopic a bit, I think the usefulness of Clojure's metadata
is underrated. Object metadata is precisely what I've been wanting for
a long time -- and implementing in an ad-hoc manner.

(I think metadata adds extra dimensions/colors to code and data. Even
just applied to sourcecode, it'd be interesting what an IDE might help
you visualize, and what applications it has to macros.)

Also, Clojure's deconstruction of OOP appears to fit my style in some
ways, since for me, object taxonomies are fluid, context dependent and
often independent of the language's type system. (Though I suspect
that at some point I'll feel compelled to enhance Clojure's
multimethods with features from CLOS.)

Pascal Costanza

Jun 21, 2009, 6:08:29 PM
Tayssir John Gabbour wrote:
> On Jun 19, 3:16 pm, Pascal Costanza <p...@p-cos.net> wrote:
>> Rich Hickey wrote:
>>> http://blip.tv/file/812787
>>> Slides and code links are there.
>> You didn't comment on the fact that my version of the code now actually
>> does what it's supposed to do.
>
> Hey Pascal,
>
> Have you looked at the points he made in his video/slides? What do you
> think about them? (I can point out where in the video he offered his
> arguments, if you want.)

I haven't looked at the video, and I didn't find the slides when
following the link he posted. However, I do know the pros put forward in
favor of STM in general quite well.

> His conclusion, that a Clojure-style STM is a better default UI than
> locks, sounds reasonable to me. For the code I've written which I
> later wanted to be multithreaded "safe", one of Clojure's approaches
> (atoms/vars/refs/agents) would've likely been just what I wanted.
>
> (The connotation of "safe" is accurate -- I just wanted the stupid
> thing to run without much energy or art on my part, and I felt this
> wasn't an unreasonable request, conceptually.)

There are situations where this is probably indeed a good solution.
There are cases where some software is inherently concurrent because of
the problem that is being solved. For example, web applications are
typically like that, because several users use the same service at the
same time in parallel. If they access shared resources (like in online
ticket reservation systems, or so), then some coordination between these
different users is a necessary requirement. If performance is not the
top priority in such situations, which can be reasonably argued since
the bottleneck is typically elsewhere, STM is probably a very
appropriate solution.

However, check the description at
http://clojure.org/concurrent_programming, which starts by suggesting
that STM is a good solution for "leveraging the power of multi-core
CPUs." This is simply misleading (and indeed got me on the wrong track
about Clojure in the beginning).

There is a fundamental misunderstanding here: If a solution requires
concurrency due to the natural concurrency arising from the problem
domain (as above), there is no need whatsoever to use multicore CPUs. We
have had multithreading support in languages and operating systems
for quite some time now, to address exactly that purpose. STM is
useful in such scenarios no matter whether there are multi-core CPUs
involved or not.

On the other hand, multi-core CPUs have been introduced because
single-core CPUs cannot be made faster anymore. If they could, nobody
would bother to try to deal with the complexity of multi-core CPUs.
However, it seems that we have to deal with them somehow for the
foreseeable future. And here is the main point: The _only_ reason why we
want to use multicore CPUs is for improving performance. There is no
other reason. Again: concurrency arising out of a problem domain doesn't
need multicore CPUs; if we could just have faster single-core CPUs, we
would be equally happy to continue using those for such problem
domains. (This is really an important point to understand!)

If you want to get performance out of multicore CPUs, you cannot afford
the performance overhead that STM incurs. To repeat: If you use STM,
each and every read and write access to potentially shared variables
incurs an overhead of a factor of about 10-20, and there is no real good
solution in sight to make this substantially faster. As a reminder:
Sometimes you get recommendations to use defstruct instead of defclass
because slot access in defstruct is something like 20% faster than in
defclass. For STM, we are talking about making these operations 10-20
times slower. That's serious.

[There is the analogy of STM to garbage collection, and that garbage
collection is now also efficient enough to replace manual memory
management completely. But (a), this is not completely true (ask the
Franz/Allegro guys ;), and (b), it took quite some time to make garbage
collection as efficient as it is now. It's not clear at all that you can
get equal efficiency for STM, because it doesn't seem to be possible to
just "delay" the monitoring of read and write accesses, like you could
delay the memory management when allocating memory. Each and every read
and write access has to be monitored, and it doesn't seem to be possible
to really avoid that.]

I also think that the complexity of locks is sometimes overrated. Atomic
blocks don't keep you from having to think about how you design the
architecture of your solution. (But indeed, STM probably makes the first
attempt at parallelizing some existing code easier.)

Finally, and that's another very important point: If you indeed want
performance out of multicore CPUs, it's probably best _not_ to use
multithreading (and more specifically, task parallelism) explicitly.
It's better to keep your design as a sequential program, and to
use data parallelism (like parallel for loops, etc.) to parallelize the
bottlenecks. This ensures that your code remains easier to understand,
because sequential code is in fact a lot easier to understand in the
general case, and data parallelism is known to give you good performance
and scalability.
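Pascal's "keep the design sequential, parallelize the bottlenecks" advice can be sketched in a few lines. The example below uses Java (Clojure's host platform) rather than Lisp, and `score` is a hypothetical stand-in for an expensive per-element computation; only one call changes between the sequential and data-parallel versions:

```java
import java.util.stream.IntStream;

public class ParallelSum {
    // Stand-in for an expensive per-element computation (hypothetical).
    static long score(int i) {
        return (long) i * i;
    }

    public static void main(String[] args) {
        // Sequential version: the program's overall design.
        long sequential = IntStream.range(0, 1_000_000)
                                   .mapToLong(ParallelSum::score)
                                   .sum();
        // Data-parallel version of the same bottleneck loop:
        // the only change is the .parallel() call.
        long parallel = IntStream.range(0, 1_000_000)
                                 .parallel()
                                 .mapToLong(ParallelSum::score)
                                 .sum();
        System.out.println(sequential == parallel);  // same result, no locks
    }
}
```

Note that this relies on `score` being free of side effects on shared state, which is exactly what keeps the surrounding program easy to reason about.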

I redid the ants simulation in Common Lisp to see whether it shows that
STM is somehow essential in this particular case, but as far as I can
tell, it isn't (and this is not a Turing-equivalence argument, because
the overall structure of my version remains mostly the same). I would
like to see an example where STM is really essential. I guess Rich
Hickey wanted to use the ants simulation as an illustration of the STM
ideas, not to show that STM is essential in this case, which is fine. I
found the exercise interesting nevertheless.

> Tayssir
>
>
> PS: To be offtopic a bit, I think the usefulness of Clojure's metadata
> is underrated. Object metadata is precisely what I've been wanting for
> a long time -- and implementing in an ad-hoc manner.

Indeed, that's an interesting feature. If we ever have a "new" Common
Lisp standard, I would push for making sure that every object gets a
plist, not only symbols. ;)
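A side table keyed by object identity is one minimal way to approximate "a plist for every object" on a platform that doesn't provide it. The Java sketch below (class and method names invented for illustration) ignores the weak-reference bookkeeping a real implementation would need so that entries don't pin their objects:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// A side-table "plist" for arbitrary objects, keyed by identity,
// in the spirit of symbol plists. Toy version: a production one
// would use weak keys to avoid leaking the keyed objects.
public class Plists {
    private static final Map<Object, Map<String, Object>> TABLE =
            new IdentityHashMap<>();

    public static synchronized void put(Object obj, String key, Object value) {
        TABLE.computeIfAbsent(obj, o -> new HashMap<>()).put(key, value);
    }

    public static synchronized Object get(Object obj, String key) {
        Map<String, Object> plist = TABLE.get(obj);
        return plist == null ? null : plist.get(key);
    }

    public static void main(String[] args) {
        Object node = new Object();
        put(node, "source-line", 42);  // metadata attached without changing the object
        System.out.println(get(node, "source-line"));
    }
}
```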

> (I think metadata adds extra dimensions/colors to code and data. Even
> just applied to sourcecode, it'd be interesting what an IDE might help
> you visualize, and what applications it has to macros.)

Indeed.

> Also, Clojure's deconstruction of OOP appears to fit my style in some
> ways, since for me, object taxonomies are fluid, context dependent and
> often independent of the language's type system. (Though I suspect
> that at some point I'll feel compelled to enhance Clojure's
> multimethods with features from CLOS.)

Clojure's generic functions are interesting, and are a "natural
progression" that everybody thinks of when really getting the idea of
generic functions. In essence, what we all actually want is some form of
predicate dispatch. ;) Unfortunately, predicate dispatch doesn't work in
the general case, and it shows in the Clojure solution: There are corner
cases there when two methods are applicable at the same time where the
ways to disambiguate them are only very ad hoc. I have serious doubts
that this will scale well, and I bet that this will keep the Clojure
community quite busy in the years to come, when they have to deal with
the problems that this will eventually create. But that's just my guess,
what do I know... ;)

Jim Newton, Christophe Rhodes and myself have (independently of each
other) recently tried to define some tractable variations of subsets of
predicate dispatch (custom specializers and filtered dispatch) which
avoid these issues. I don't think we have reached the "ideal" version of
this idea yet, but I think we're getting closer...

D Herring

Jun 21, 2009, 8:10:42 PM
Pascal Costanza wrote:

> I also think that the complexity of locks is sometimes overrated. Atomic
> blocks don't keep you from having to think about how you design the
> architecture of your solution. (But indeed, STM probably makes the first
> attempt at parallelizing some existing code easier.)

Given more than one lock in a program (including libraries), the
programmer must determine which locks may ever be used in a single
call stack and then specify a total ordering on those locks so as to
avoid deadlock. Thus locks = {a_1, a_2, a_3, ...}, and a_i must be
locked before a_{i+k}. Given sufficient analysis, there are
occasions where this rule can be weakened, but most are very
susceptible to code modifications. One common safe exception is a
single lock used within a single function that doesn't call
nonstandard functions (which may lock). Unfortunately, such analysis
and ordering do not scale well, especially when maintaining lock
orderings between libraries. Ordering violations often require major
code redesign.

In a nutshell, that's why I hate locks.
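For what it's worth, the total-ordering discipline Daniel describes can be made mechanical, which is roughly what disciplined lock-based code does in practice. A minimal Java sketch (names hypothetical), assuming every lock is assigned a fixed rank:

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocks {
    // A lock carrying its position in the program-wide total order.
    static final class RankedLock extends ReentrantLock {
        final int rank;
        RankedLock(int rank) { this.rank = rank; }
    }

    // Always acquire in ascending rank order, whatever order the
    // caller passed the locks in.
    static void withBoth(RankedLock a, RankedLock b, Runnable body) {
        RankedLock first  = a.rank <= b.rank ? a : b;
        RankedLock second = a.rank <= b.rank ? b : a;
        first.lock();
        try {
            second.lock();
            try { body.run(); } finally { second.unlock(); }
        } finally { first.unlock(); }
    }

    public static void main(String[] args) {
        RankedLock accounts = new RankedLock(1);
        RankedLock log = new RankedLock(2);
        // Both call sites name the locks in different orders, but the
        // acquisition order is identical, so they cannot deadlock
        // against each other.
        withBoth(log, accounts, () -> System.out.println("transfer"));
        withBoth(accounts, log, () -> System.out.println("audit"));
    }
}
```

The helper guarantees a single global acquisition order; the scaling problem Daniel points out is that every library in the call stack must agree on the ranks.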

As far as speed goes, I'm surprised by your estimate that STM is only
10-20x slower than regular access. Some time ago, I wrote a little
C++ test app that timed (a) simple variable assignment, (b)
compare-and-swap assignment, and (c) a mutex guarding assignment.
IIRC, CAS was 10x slower than simple assignment, and the mutex was
another 10x slower. Much of the mutex slowdown could be traced to
switching in and out of the kernel. Thus STM must either use CAS or
each lock must protect a few variables on average. Then again, most
code does things other than access external variables; so the slowdown
may have been amortized.
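Daniel's three-way comparison is easy to re-run. The sketch below is a crude single-threaded timing loop, not a proper benchmark (no warmup, and an uncontended `synchronized` block on a modern JVM usually stays out of the kernel, so the 10x mutex penalty he saw may not reproduce):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AssignCost {
    static long plain;
    static final AtomicLong cas = new AtomicLong();
    static long locked;
    static final Object lock = new Object();

    static long time(Runnable r) {
        long t0 = System.nanoTime();
        r.run();
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        final int n = 1_000_000;
        // (a) plain assignment
        long tPlain = time(() -> { for (int i = 0; i < n; i++) plain = i; });
        // (b) compare-and-swap assignment (retry loop on failure)
        long tCas = time(() -> {
            for (int i = 0; i < n; i++) {
                long old;
                do { old = cas.get(); } while (!cas.compareAndSet(old, old + 1));
            }
        });
        // (c) lock-guarded assignment
        long tLock = time(() -> {
            for (int i = 0; i < n; i++) { synchronized (lock) { locked = i; } }
        });
        System.out.printf("plain=%dns cas=%dns lock=%dns%n", tPlain, tCas, tLock);
        System.out.println(cas.get() == n);  // sanity check on the CAS loop
    }
}
```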

My guess: CAS and shallow-copy data structures (similar to Clojure's
immutable containers) are the way of the future (through to 2030).
Programmers will learn to write algorithms that work with "a view of
the world" that restart or not when that view changes, rather than
rely on the world not changing (agree with this part of the STM
philosophy). However, shallow copy makes explicit that which STM does
implicitly; thus STM requires more sophisticated compilers and will
never be popular with hard-core optimizing programmers. There are
certain parallels with language support for objects versus continuations.

- Daniel

Rich Hickey

Jun 22, 2009, 12:04:45 PM
> However, check the description at http://clojure.org/concurrent_programming which starts with suggesting

I don't have time to go into detail re: the many ways I disagree with
this post, so will make the following more generic comments:

The slides are here; I don't know why Blip isn't rendering the links
as links anymore:

http://clojure.googlegroups.com/web/ClojureConcurrencyTalk.pdf

Talking about STM as if that implies something specific is wrong.
There are many different ways to do STM and they have very different
characteristics.

Talking about "STM is slow" is like talking about "Lisp is slow", and
for the same reasons.

Even if there were a 10x overhead, on a 600-core box (available now,
and to become common), there's still a potential 60x speedup. Do you
have other recipes for a potential 60x speedup? Ad hoc locking schemes
are notorious for incidentally introducing single serialization
points, and avoiding them is the job of experts. Multicore does and
will continue to matter.

Also re: multicore, it's true, and I've made the assertion myself,
that programs were decomposed into thread-like abstractions, to great
benefit, even before multicore. However, if they were running on some
single-threaded language supplied timeslicing mechanism, chances are
excellent they are full of bugs never or not often manifested on such
a system, and when run on a true multicore where simultaneity is
possible, will fall down horribly. In other words, if you think you are
doing multithreaded programming on a timeslicing or single-core system,
you are kidding yourself. All arguments against multicore that focus on
performance ignore the great value of STM for those that want simple
correctness. The parallels to the old GC arguments hold here too.

The problems solved by data parallelism aren't solved by task
parallelism, and vice versa.

STM is never a requirement; there is always an equivalent-end-result
locking version. As a trivialism, a system with a single global lock
suffices.

The granularity of use of an STM is quite important. There is a lot of
research on STM as being a way to keep on programming as you currently
do (i.e. with mutable objects of some sort), and wrap that in STM
transactions for thread safety. These systems need to track reads at
the field level, and there is no way to see a consistent composite
value except in the context of a transaction. I actually don't have
high hopes for that approach, and rejected it for Clojure.

Clojure's approach is quite different, it emphasizes programming with
values, and using reference cells with concurrency semantics (not just
STM cells) for managing identities that have different (composite)
values at different points in time. This greatly coarsens the
granularity, greatly reducing any overheads. In addition, Clojure's
use of MVCC means that reads need not be tracked, and, for all of
Clojure's reference cells, once a value is obtained, its immutability
means that it can be utilized henceforth with no transaction/locking/
CAS overhead whatsoever, beating out locks and STM. In Clojure's STM,
value consistency (to some application determined granularity) is
independent of transaction scope.
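Rich's "values in reference cells" point can be illustrated without Clojure: mutate the cell with CAS, but only ever store immutable snapshots in it. The Java sketch below (names invented) copies the whole list on update where Clojure would use structural sharing, but the read-side property is the same: a snapshot, once obtained, needs no further synchronization:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class ValueCell {
    // The identity: a reference cell updated with CAS.
    static final AtomicReference<List<String>> cell =
            new AtomicReference<>(List.of());

    static void add(String item) {
        while (true) {
            List<String> old = cell.get();
            List<String> next = new ArrayList<>(old);  // copy, never mutate
            next.add(item);
            // Publish a new immutable value; retry if another thread won.
            if (cell.compareAndSet(old, Collections.unmodifiableList(next)))
                return;
        }
    }

    public static void main(String[] args) {
        add("a");
        List<String> snapshot = cell.get();  // usable forever, lock-free
        add("b");
        System.out.println(snapshot);        // later writes don't affect it
        System.out.println(cell.get());
    }
}
```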

I have never claimed STM is a panacea or a good fit for all problems.
Clojure's multiple reference cell types attest to this. However, IMO
there is simply no reason not to have STM in your toolbox moving
forward.

An extensive discussion on locks vs STM with an expert on locking
(Cliff Click Jr.) is here:

http://blogs.azulsystems.com/cliff/2008/05/clojure-stms-vs.html

For a description of Clojure's approach to identity and state, see:

http://clojure.org/state

Rich

Pascal Costanza

Jun 22, 2009, 4:33:10 PM
D Herring wrote:
> Pascal Costanza wrote:
>
>> I also think that the complexity of locks is sometimes overrated.
>> Atomic blocks don't keep you from having to think about how you
>> design the architecture of your solution. (But indeed, STM probably
>> makes the first attempt at parallelizing some existing code easier.)
>
> Given more than one lock in a program (including libraries), the
> programmer must determine which locks may ever be used in a single call
> stack and then specify a total ordering on those locks so as to avoid
> deadlock. Thus locks = {a_1, a_2, a_3, ...}, and a_i must be locked
> before a_{i+k}. Given sufficient analysis, there are occasions where
> this rule can be weakened, but most are very susceptible to code
> modifications. One common safe exception is a single lock used within
> a single function that doesn't call nonstandard functions (which may
> lock). Unfortunately, such analysis and ordering do not scale well,
> especially when maintaining lock orderings between libraries. Ordering
> violations often require major code redesign.
>
> In a nutshell, that's why I hate locks.

I'm glad that Rich just posted a link to a conversation with Cliff
Click, where he argues that STM will probably turn out similarly complex.

> As far as speed goes, I'm surprised by your estimate that STM is only
> 10-20x slower than regular access. Some time ago, I wrote a little C++
> test app that timed (a) simple variable assignment, (b) compare-and-swap
> assignment, and (c) a mutex guarding assignment. IIRC, CAS was 10x
> slower than simple assignment, and the mutex was another 10x slower.
> Much of the mutex slowdown could be traced to switching in and out of
> the kernel. Thus STM must either use CAS or each lock must protect a
> few variables on average. Then again, most code does things other than
> access external variables; so the slowdown may have been amortized.

I got the numbers from a recent publication about STM, so I don't know
exactly where they come from.

However, what I do know is that read and write accesses in typical STM
algorithms involve: checking the state of the current transaction, maybe
negotiating with other threads what to do with the memory location in
question, recording or checking the current value with the current
read/write set, and maybe rolling back because of that. On top of that,
at the end of a transaction, the read/write sets need to be
double-checked for a final test whether everything is still ok. Some of
these operations require locks or compare-and-swap operations. And this
is all for each read and write access. So there are serious overheads to
be expected here.
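To make that bookkeeping concrete, here is a deliberately naive toy in Java showing the read-set/write-set mechanics Pascal lists. Real STMs replace the global commit lock with fine-grained locking or CAS and add contention management, but the per-access recording and the commit-time validation are the part that costs on every read and write:

```java
import java.util.HashMap;
import java.util.Map;

public class ToySTM {
    // A transactional cell: current value plus a version counter.
    static final class Ref {
        volatile Object value;
        volatile long version;
        Ref(Object v) { value = v; }
    }

    static final class Tx {
        final Map<Ref, Long> readSet = new HashMap<>();
        final Map<Ref, Object> writeSet = new HashMap<>();

        Object read(Ref r) {
            if (writeSet.containsKey(r)) return writeSet.get(r);
            readSet.put(r, r.version);   // bookkeeping on EVERY read
            return r.value;
        }

        void write(Ref r, Object v) {
            writeSet.put(r, v);          // buffered until commit
        }

        // Toy commit: one global lock. Real STMs use per-ref locks or CAS.
        static synchronized boolean commit(Tx tx) {
            for (Map.Entry<Ref, Long> e : tx.readSet.entrySet())
                if (e.getKey().version != e.getValue())
                    return false;        // read set stale: roll back
            for (Map.Entry<Ref, Object> e : tx.writeSet.entrySet()) {
                e.getKey().value = e.getValue();
                e.getKey().version++;
            }
            return true;
        }
    }

    public static void main(String[] args) {
        Ref r = new Ref(1);
        Tx tx = new Tx();
        tx.write(r, (Integer) tx.read(r) + 1);
        System.out.println(Tx.commit(tx));
        System.out.println(r.value);
    }
}
```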

> My guess: CAS and shallow-copy data structures (similar to Clojure's
> immutable containers) are the way of the future (through to 2030).
> Programmers will learn to write algorithms that work with "a view of the
> world" that restart or not when that view changes, rather than rely on
> the world not changing (agree with this part of the STM philosophy).
> However, shallow copy makes explicit that which STM does implicitly;
> thus STM requires more sophisticated compilers and will never be popular
> with hard-core optimizing programmers. There are certain parallels with
> language support for objects versus continuations.

It's probably a good idea to learn about STM, which is probably good for
sketching ideas. But programmers should also know about locks, and more
importantly also about data parallelism, which is more powerful than one
might expect...

Kaz Kylheku

Jun 22, 2009, 7:08:25 PM
On 2009-06-22, D Herring <dher...@at.tentpost.dot.com> wrote:
> Pascal Costanza wrote:
>
>> I also think that the complexity of locks is sometimes overrated. Atomic
>> blocks don't keep you from having to think about how you design the
>> architecture of your solution. (But indeed, STM probably makes the first
>> attempt at parallelizing some existing code easier.)
>
> Given more than one lock in a program (including libraries), the
> programmer must determine which locks may ever be used in a single
> call stack and then specify a total ordering on those locks so as to
> avoid deadlock.

You can do the smart thing and avoid the situation by giving up locks. Do not
call another module while holding the lock pertaining to this module. This
also ensures that the locks extend over the minimal sequences of instructions
(i.e. are not held over the execution of unknown quantities of code buried in
functions elsewhere).
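Kaz's rule is the classic "copy out under the lock, call out without it" pattern. A small Java sketch (the `Notifier` class is invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class Notifier {
    private final Object lock = new Object();
    private final List<Consumer<String>> listeners = new ArrayList<>();

    public void subscribe(Consumer<String> c) {
        synchronized (lock) { listeners.add(c); }
    }

    public void publish(String event) {
        List<Consumer<String>> snapshot;
        synchronized (lock) {                      // minimal critical section
            snapshot = new ArrayList<>(listeners); // copy out the shared state
        }
        for (Consumer<String> c : snapshot)        // foreign code runs unlocked,
            c.accept(event);                       // so it may lock freely
    }

    public static void main(String[] args) {
        Notifier n = new Notifier();
        n.subscribe(e -> System.out.println("got " + e));
        n.publish("tick");
    }
}
```

The lock never spans unknown code, at the cost of listeners possibly seeing a slightly stale subscriber list, which is the trade-off Daniel alludes to next.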

Madhu

Jun 22, 2009, 9:03:13 PM
* Rich Hickey
Wrote on Mon, 22 Jun 2009 09:04:45 -0700 (PDT):

<SNIP>

Clearly RH has vested interests in the points he makes, but I wanted to
point out he is not being honest in his arguments.

He has not responded to specific points throughout this thread, instead
keeps advertising the marketing material he has prepared for the
particular software/infrastructure/development model he is pushing.

His responses just consist of more material against which the initial
objections still apply.

His line of response is understandable in light of nature of the product
he is selling: His claims would be valid automatically if the product
were adopted and sold successfully, and not otherwise.

--
Madhu

D Herring

Jun 23, 2009, 12:43:49 AM

That requires knowing how the library code uses locks. It doesn't
compose nearly as well as a normal function call. And sometimes you
really do want the function call to be within the critical section.

But agreed that locks should be released ASAP.

- Daniel

ACL

Jun 23, 2009, 1:18:22 AM

Can we be more concrete about what the objections to STM are?

What I've gotten out of it is:
Cons:
- Not as efficient as hand optimized locks
- New technology so we don't know how to implement it perfectly yet.

Pros:
+ Don't have to worry about certain race condition bugs
+ Gives concurrency a concrete framework to program in
+ Don't have to mess with hand rolled locks/locking models which may
end up implementing an ad-hoc STM anyway.

Neutrals:
+- Is not clear how much parallelism you need to gain benefits of
using Clojure for multi-processing.

The stuff he posted seems to explain fairly well the point of view
that he has on concurrency and Clojure's approach to it.

I'm having a hard time reconciling his endorsement 'no harm in having
another tool' with the idea that it is purely marketing propaganda.

Scott Burson

Jun 23, 2009, 2:38:04 AM
On Jun 22, 10:18 pm, ACL <anonymous.c.lis...@gmail.com> wrote:
>
> Can we be more concrete about what the objections to STM are?
>
> Pros:
> + Don't have to worry about certain race condition bugs
> + Gives concurrency a concrete framework to program in
> + Don't have to mess with hand rolled locks/locking models which may
> end up implementing an ad-hoc STM anyway.

I think you're missing the most important one: modules written using
locks don't compose easily (as Daniel just pointed out above). I have
the impression that using STM improves composability, because
transactions nest. I have to qualify this because I haven't actually
used Clojure or any other STM, but I nonetheless think there's
probably some truth to this claim.

-- Scott

Madhu

Jun 23, 2009, 2:47:23 AM
* ACL <1d54d4a3-bae3-4fc4...@z9g2000yqi.googlegroups.com> :
Wrote on Mon, 22 Jun 2009 22:18:22 -0700 (PDT):

| I'm having a hard time reconciling his endorsement 'no harm in having
| another tool' with the idea that it is purely marketing propaganda.

I'm out of my league here; But I did not wish to suggest it was "purely
marketing propaganda", which I cannot suggest without sounding
ridiculous.

The `specific point of view on concurrency' and the `approach endorsed'
partitions the available techniques that can be used to solve programming
problems into two sets: Use Clojure/STM or don't use Clojure/STM. So it goes
from "having another tool" to "Everybody should use this", including the
platform and all that that entails.

I found nothing else objectionable in your summary.
--
Madhu

Madhu

Jun 23, 2009, 2:49:06 AM
* ACL <1d54d4a3-bae3-4fc4...@z9g2000yqi.googlegroups.com> :
Wrote on Mon, 22 Jun 2009 22:18:22 -0700 (PDT):

| I'm having a hard time reconciling his endorsement 'no harm in having
| another tool' with the idea that it is purely marketing propaganda.

I'm out of my league here; but I did not wish to suggest it was "purely
marketing propaganda", which I cannot suggest without sounding
ridiculous.

The `specific point of view on concurrency' and the `approach endorsed'
partitions the available techniques that can be used to solve
programming problems into two sets: Those that use Clojure/STM and those
that don't use Clojure/STM. So it goes from "having another tool" to
"Everybody should use this", including the platform and all that that
entails.

ACL

Jun 23, 2009, 1:27:50 PM
On Jun 23, 2:49 am, Madhu <enom...@meer.net> wrote:
> * ACL <1d54d4a3-bae3-4fc4-a0f9-97b7aff6b...@z9g2000yqi.googlegroups.com> :

> Wrote on Mon, 22 Jun 2009 22:18:22 -0700 (PDT):
>
> | I'm having a hard time reconciling his endorsement 'no harm in having
> | another tool' with the idea that it is purely marketing propaganda.
>
> I'm out of my league here; But I did not wish to suggest it was "purely
> marketing propaganda", which I cannot suggest without sounding
> ridiculous.
>
> The `specific point of view on concurrency' and the `approach endorsed'
> partitions the available techniques that can be used to solve
> programming problems into two sets: Those that use Clojure/STM and those
> that don't use Clojure/STM.  So it goes from "having another tool" to
> "Everybody should use this", including the platform and all that that
> entails.
>

Ah, ok, I misunderstood the argument.

D Herring

Jun 23, 2009, 10:14:36 PM

For the curious, here are a few papers from leading STM developers.
http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/
http://research.microsoft.com/en-us/um/people/tharris/

Titles like "Composable memory transactions" really catch attention.

- Daniel

Pascal Costanza

Jun 28, 2009, 3:30:58 PM

OK, thanks. No big surprises here, though, the material is pretty much
what I already know about Clojure.

> Talking about STM as if that implies something specific is wrong.
> There are many different ways to do STM and they have very different
> characteristics.

[...]

> The problems solved by data parallelism aren't solved by task
> parallelism, and vice versa.

...but surprisingly a lot of programs can actually be implemented using
data parallelism. See http://www.cs.cmu.edu/~scandal/nesl.html for one
good source of examples.

> The granularity of use of an STM is quite important. There is a lot of
> research on STM as being a way to keep on programming as you currently
> do (i.e. with mutable objects of some sort), and wrap that in STM
> transactions for thread safety. These systems need to track reads at
> the field level, and there is no way to see a consistent composite
> value except in the context of a transaction. I actually don't have
> high hopes for that approach, and rejected it for Clojure.
>
> Clojure's approach is quite different, it emphasizes programming with
> values, and using reference cells with concurrency semantics (not just
> STM cells) for managing identities that have different (composite)
> values at different points in time. This greatly coarsens the
> granularity, greatly reducing any overheads. In addition, Clojure's
> use of MVCC means that reads need not be tracked, and, for all of
> Clojure's reference cells, once a value is obtained, its immutability
> means that it can be utilized henceforth with no transaction/locking/
> CAS overhead whatsoever, beating out locks and STM.

If you are serious that Clojure provides considerably better performance
than the STM approaches reported on in the literature, then you should
definitely report about that. That would be a valuable contribution.

> I have never claimed STM is a panacea or a good fit for all problems.
> Clojure's multiple reference cell types attest to this. However, IMO
> there is simply no reason not to have STM in your toolbox moving
> forward.

It's probably important to know about STM and have it available if
necessary. I'm a bit more skeptical about designing a language around
it, and especially restricting other parts of the language to make it
work. But it's clear that we disagree in that regard.

> An extensive discussion on locks vs STM with an expert on locking
> (Cliff Click Jr.) is here:
>
> http://blogs.azulsystems.com/cliff/2008/05/clojure-stms-vs.html

Thanks for providing that link, that was very insightful. (Needless to
say I'm more with Cliff Click... ;)

Pascal Costanza

Jun 28, 2009, 3:42:25 PM
ACL wrote:
> On Jun 22, 9:03 pm, Madhu <enom...@meer.net> wrote:
>> * Rich Hickey
>> Wrote on Mon, 22 Jun 2009 09:04:45 -0700 (PDT):
>>
>> <SNIP>
>>
>> Clearly RH has vested interests in the points he makes, but I wanted to
>> point out he is not being honest in his arguments.
>>
>> He has not responded to specific points throughout this thread, instead
>> keeps advertising the marketing material he has prepared for the
>> particular software/infrastructure/development model he is pushing.
>>
>> His responses just consist of more material against which the initial
>> objections still apply.
>>
>> His line of response is understandable in light of nature of the product
>> he is selling: His claims would be valid automatically if the product
>> were adopted and sold successfully, and not otherwise.
>>
>
> Can we be more concrete about what the objections to STM are?
>
> What I've gotten out of it is:
> Cons:
> - Not as efficient as hand optimized locks
> - New technology so we don't know how to implement it perfectly yet.

According to the posting by Cliff Click (well worth a read):

- Hard-to-predict efficiency: Deadlocks may not be a problem anymore,
but there is a chance that you get livelocks (different transactions
continuously rolling each other back). Even without such livelocks, it
could be that your application suffers from too many rollbacks, and it
will probably be hard to figure out why that's the case and to resolve such
problems. Cliff Click argues that it would be easier to deal with
deadlocks, in comparison.

- Transactions may nest well, but this doesn't answer the question of
where to put the transaction boundaries. This requires domain knowledge,
may be hard to determine, and may have a negative impact on overall
performance in large-scale programs. Furthermore, transactions need to
be dynamically scoped, while locks can be used either dynamically scoped
or in other ways.

> Pros:
> + Don't have to worry about certain race condition bugs
> + Gives concurrency a concrete framework to program in

This is actually incorrect. The framework is either task parallelism or
data parallelism. STM is for managing synchronization between
concurrently executing code, which is an orthogonal issue.

> + Don't have to mess with hand rolled locks/locking models which may
> end up implementing an ad-hoc STM anyway.
>
> Neutrals:
> +- Is not clear how much parallelism you need to gain benefits of
> using Clojure for multi-processing.

This depends on what kinds of benefits you have in mind.

Pascal Costanza

Jun 28, 2009, 3:50:57 PM

...but see also articles like this one:
http://doi.acm.org/10.1145/1400214.1400228

Tayssir John Gabbour

Jun 30, 2009, 10:36:15 AM
On Jun 28, 9:30 pm, Pascal Costanza <p...@p-cos.net> wrote:
> >> However, check the description at
> >> http://clojure.org/concurrent_programming which starts with

> >> suggesting that STM is a good solution for "leveraging the power
> >> of multi-core CPUs." This is simply misleading (and indeed got me
> >> on the wrong track about Clojure in the beginning).
>
> [...]

> It's probably important to know about STM and have it available if
> necessary. I'm a bit more skeptical about designing a language around
> it, and especially restricting other parts of the language to make it
> work. But it's clear that we disagree in that regard.

Clojure may have yet another reference type which will "probably be
some flavor of a safe mutex," which may help enforce lock acquisition
order, detect deadlock, or... something. So I don't think that Clojure
is built around the STM. (Immutable values seem the more fundamental
idea.)
http://olifante.blogs.com/covil/2009/05/transcript-of-rich-hickey-interview.html

Clojure.org says something very different from your paraphrase above,
so I don't think it's misleading:

"Today's systems have to deal with many simultaneous tasks and
leverage the power of multi-core CPUs. Doing so with threads can
be very difficult due to the complexities of
synchronization. Clojure simplifies multi-threaded programming in
several ways."

I think that's a reasonable claim. And in that same paragraph, he goes
on to list 4 mechanisms within Clojure which support multithreaded
programming. Of which STM is only one.

In fact, there are more multithreading operators in Clojure than that.
(Unless I misunderstand, or haven't mentally unified concepts.) For
instance, there's 'locking' and clojure.parallel. Not to mention
java.util.concurrent and the Clojure wrappers built upon it, like
'future'.

(I haven't yet used the STM; I just see it as an interesting
alternative to playing "Who's holding the Talking Stick" with locks.)


#<Tayssir "this person is unreadable">

Scott Burson

Jun 30, 2009, 3:19:55 PM
On Jun 28, 12:50 pm, Pascal Costanza <p...@p-cos.net> wrote:
>
> ...but see also articles like this one: http://doi.acm.org/10.1145/1400214.1400228

Interesting. Maybe we'll need some hardware support before TM really
works.

I'm certainly glad, though, to see people experimenting with it.

-- Scott

Pascal Costanza

Jun 30, 2009, 3:24:32 PM

Absolutely. That's the only way to make progress...

Pascal Costanza

Jul 1, 2009, 5:39:25 AM

That Clojure simplifies multi-threaded programming may be a reasonable
claim. That multi-threaded programming has something to do with
multi-core chips is, to the extent suggested, misleading. The paragraph
could have easily started with just the second sentence
("Multi-threading can be very difficult due to the complexities of
synchronization. ..."), without any loss. Mentioning multi-core chips in
this context is simply not necessary.

Again, the only reason why industry moves towards multi-core chips is
because they don't see any way to improve single-core chips anymore in
terms of performance. If there were ways to improve single-core chips,
then nobody would, in their right mind, want to deal with multi-core
architectures, because such architectures just make things more
complicated, and it is indeed very hard to realize their theoretically
possible performance gains. If you don't care about performance, then
it's better to stick with single-core chips, for the same reason.

You usually use multi-threading in an application because of
requirements of the problem domain. If the problem domain asks for it,
you would use multi-threading no matter whether the underlying
architecture is based on single-core or multi-core chips. If the problem
domain doesn't ask for it, it's better to avoid multi-threading.

If your goal is to take advantage of the potential performance benefits
of multi-core architectures, there are easier ways than multi-threading
to do so.

> In fact, there's more multithreading operators in Clojure than that.
> (Unless I misunderstand, or haven't mentally unified concepts.) For
> instance, there's 'locking' and clojure.parallel. Not to mention
> java.util.concurrent and the Clojure wrappers built upon it, like
> 'future'.

From a superficial perspective, the JSR 166 concurrency utilities
indeed look like a reasonable design. (Since Java 7 won't have
closures after all, will JSR 166 eventually be in Java 7 or not?)

Alessio Stalla

Jul 1, 2009, 11:21:20 AM
On Jul 1, 11:39 am, Pascal Costanza <p...@p-cos.net> wrote:
>  From a superficial perspective, the JSR 166 concurrency utilities
> indeed look like a reasonable design. (Since Java 7 won't have
> closures after all, will JSR 166 eventually be in Java 7 or not?)

JSR 166 has been part of Java since Java 5. While real closures would
have greatly simplified working with collections (among other things),
and I'm sad Java 7 won't have them, they're not required for these
concurrency utilities to work.

And now that I'm here, I'll give you my unwanted and probably useless
opinion on STM & Clojure ;)

I didn't know what an STM was before Clojure became popular. Raising
that awareness is by itself something Clojure has accomplished, and I'm
glad it did. I
don't think that STM is a "silver bullet" for concurrency, though it
probably makes it simple to introduce concurrency in non-concurrent
programs. I'm also skeptical about the "everything should be
immutable" philosophy. On the other hand, I do think both these
approaches have their reasons to be used in certain contexts.

One of the reasons I like Common Lisp is that it leaves the programmer
much freedom to use his favourite paradigm. I fear Clojure is a bit
behind CL on this point. However, I know Clojure too little to really
judge it.

Finally, Pascal's point about multicores is absolutely right; but imho
it misses the fact that the wide availability of multicore processors
has objectively made parallelism more hyped, even if rationally this
doesn't make much sense (has hype ever made sense?). Probably that
statement on Clojure's page is meant to attract people interested in
buzzwords, rather than to state the concrete reason why Rich Hickey
believes STM is a good thing to have built-in.

Bye,
Alessio

Pascal Costanza

Jul 2, 2009, 5:04:35 AM
to
Alessio Stalla wrote:
> On Jul 1, 11:39 am, Pascal Costanza <p...@p-cos.net> wrote:
>> From a superficial perspective, the JSR 166 concurrency utilities
>> indeed look like a reasonable design. (Since Java 7 won't have
>> closures after all, will JSR 166 eventually be in Java 7 or not?)
>
> JSR 166 has been part of Java since Java 5. While real closures would
> have greatly simplified working with collections (among other things),
> and I'm sad Java 7 won't have them, they're not required for these
> concurrency utilities to work.

Sorry, my mistake. I meant JSR166Y. See
http://g.oswego.edu/dl/concurrency-interest/

Alessio Stalla

Jul 2, 2009, 8:35:53 AM
to
On Jul 2, 11:04 am, Pascal Costanza <p...@p-cos.net> wrote:
> Sorry, my mistake. I meant JSR166Y. See http://g.oswego.edu/dl/concurrency-interest/

Didn't know about that. Interesting, thanks for pointing it out.

Alessio

Pascal Costanza

Jul 2, 2009, 9:08:34 AM
to

Ah, checked the page myself again: Fork/join seems to be in, but
parallel arrays seem to be out.
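For what it's worth, the fork/join part did eventually ship as java.util.concurrent.ForkJoinPool. A minimal recursive-sum sketch against that API (the class name and threshold here are my own, purely illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // below this, sum sequentially
    private final long[] data;
    private final int lo, hi;

    ForkJoinSum(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ForkJoinSum left = new ForkJoinSum(data, lo, mid);
        ForkJoinSum right = new ForkJoinSum(data, mid, hi);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // recurse on the right, then join
    }

    public static long sum(long[] data) {
        return new ForkJoinPool().invoke(new ForkJoinSum(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] xs = new long[10000];
        for (int i = 0; i < xs.length; i++) xs[i] = i + 1;
        System.out.println(sum(xs)); // 50005000
    }
}
```

The divide-and-conquer shape is the whole framework: split until the work is small, fork one half, compute the other, join. The parallel-arrays (`ParallelArray`/`Ops`) layer discussed below was built on top of exactly this.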

Alessio Stalla

Jul 2, 2009, 9:50:24 AM
to
On Jul 2, 3:08 pm, Pascal Costanza <p...@p-cos.net> wrote:
> Alessio Stalla wrote:
> > On Jul 2, 11:04 am, Pascal Costanza <p...@p-cos.net> wrote:
> >> Sorry, my mistake. I meant JSR166Y. See http://g.oswego.edu/dl/concurrency-interest/

>
> > Didn't know about that. Interesting, thanks for pointing it out.
>
> Ah, checked the page myself again: Fork/join seems to be in, but
> parallel arrays seem to be out.

Now I see what you meant: I hadn't seen that "extra" stuff, and that
ugly, ugly Ops class! And there are people who, in the year 2009, have
the courage to say that Java doesn't *need* closures!

(Of course, even if they had added them, they would probably have
managed to screw them up; see http://www.javac.info/ and, in
particular, the thing that struck me most at first glance: "Why
doesn't return yield a result from the innermost closure?"
http://gafter.blogspot.com/2006/08/tennents-correspondence-principle-and.html)

Ale
