
multicore lisp?


TerryMc

Dec 12, 2007, 1:45:55 PM
I'm returning to Lisp after a hiatus of about 30 years, with a project
( the game of Go ), which could use a "programmable programming
language" in many interesting ways.

The intended development target will be a multicore box - quite likely
an 8-way Mac.

I'm in the preliminary design and experimentation stages here, but it
looks like it would be real handy to be able to spawn many
( thousands ) of lightweight threads, in the manner of erlang - fast,
light, cheap concurrency. The unix fork mechanism would be too costly
for this purpose.

Is anyone doing multicore concurrency in Lisp? Lightweight threads?
Many of the projects I've found on the web seem not to have been
maintained for several years. What is state-of-the-art today, with the
proliferation of dual- and quad-core chips?

Thanks for any information.

-- Terry


Raffael Cavallaro

Dec 12, 2007, 1:57:30 PM
On 2007-12-12 13:45:55 -0500, TerryMc <terry.m...@gmail.com> said:

> The intended development target will be a multicore box - quite likely
> an 8-way Mac.
>
> I'm in the preliminary design and experimentation stages here, but it
> looks like it would be real handy to be able to spawn many
> ( thousands ) of lightweight threads, in the manner of erlang - fast,
> light, cheap concurrency. The unix fork mechanism would be too costly
> for this purpose.
>
> Is anyone doing multicore concurrency in Lisp? Lightweight threads?

Clozure CL/OpenMCL uses native threads on Mac OS X so you can have
multiple lisp threads executing on multiple cores simultaneously.
They're also developing a nice Mac-like IDE:

<http://trac.clozure.com/openmcl>
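For concreteness, here is a minimal sketch of spawning native threads in Clozure CL (PROCESS-RUN-FUNCTION is CCL's documented thread-creation function; the worker count and body are made up for illustration):

```lisp
;; Sketch: spawn native threads in Clozure CL/OpenMCL. Each call to
;; PROCESS-RUN-FUNCTION creates an OS-level thread, so on a multicore
;; Mac the workers can genuinely run in parallel.
(defun spawn-workers (n)
  (loop for i below n
        collect (ccl:process-run-function
                 (format nil "worker-~D" i)
                 (lambda (id) (format t "~&worker ~D running~%" id))
                 i)))

(spawn-workers 8)
```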

John Thingstad

Dec 12, 2007, 2:06:42 PM
On Wed, 12 Dec 2007 19:45:55 +0100, TerryMc <terry.m...@gmail.com> wrote:

You might have a look at this bad boy:

http://www.scieneer.com/scl/index.html

--------------
John Thingstad

John Thingstad

Dec 12, 2007, 2:13:39 PM
On Wed, 12 Dec 2007 19:57:30 +0100, Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> wrote:

No, that is a misunderstanding I think. Most lisp's lock the scheduling to
one processor even when they accept multiple threads. (for exception see
below.)

--------------
John Thingstad

Sohail Somani

Dec 12, 2007, 2:20:44 PM
On Wed, 12 Dec 2007 20:13:39 +0100, John Thingstad wrote:

> No, that is a misunderstanding I think. Most lisp's lock the scheduling
> to one processor even when they accept multiple threads. (for exception
> see below.)

What about SBCL? And where below were you referring to?

--
Sohail Somani
http://uint32t.blogspot.com

John Thingstad

Dec 12, 2007, 2:29:22 PM
On Wed, 12 Dec 2007 19:45:55 +0100, TerryMc <terry.m...@gmail.com> wrote:

> I'm returning to Lisp after a hiatus of about 30 years, with a project

Allegro (www.franz.com), might (they will not say) release a multicore
version in the (near?) future.
That would work with a MAC too.

--------------
John Thingstad

Edi Weitz

Dec 12, 2007, 2:40:00 PM
On Wed, 12 Dec 2007 20:13:39 +0100, "John Thingstad" <jpt...@online.no> wrote:

>> Clozure CL/OpenMCL uses native threads on Mac OS X so you can have
>> multiple lisp threads executing on multiple cores simultaneously.
>

> No, that is a misunderstanding I think. Most lisp's lock the
> scheduling to one processor even when they accept multiple
> threads.

Why do you think Clozure CL does that?

Edi.

--

Lisp is not dead, it just smells funny.

Real email: (replace (subseq "spam...@agharta.de" 5) "edi")

Raffael Cavallaro

Dec 12, 2007, 3:07:41 PM
On 2007-12-12 14:13:39 -0500, "John Thingstad" <jpt...@online.no> said:

> No, that is a misunderstanding I think. Most lisp's lock the scheduling
> to one processor even when they accept multiple threads. (for
> exception see below.)

ClozureCL/OpenMCL is not "most lisp's" [sic]

from:
<http://openmcl.clozure.com/Doc/index.html#Programming-with-Threads>

"For a variety of reasons - better utilization of CPU resources on
single and multiprocessor systems and better integration with the OS in
general - threads in OpenMCL 0.14 and later are preemptively scheduled.
In this model, lisp threads are native threads and all scheduling
decisions involving them are made by the OS kernel. (Those decisions
might involve scheduling multiple lisp threads simultaneously on
multiple processors on SMP systems.) This change has a number of subtle
effects:

it is possible for two (or more) lisp threads to be executing
simultaneously, possibly trying to access and/or modify the
same data structures."
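Since the kernel may schedule two lisp threads onto different cores at the same instant, shared data needs explicit guarding. A minimal sketch using CCL's documented MAKE-LOCK / WITH-LOCK-GRABBED primitives (the counter is illustrative):

```lisp
(defvar *hits* 0)
(defvar *hits-lock* (ccl:make-lock "hits"))

;; Without the lock, two threads could both read the old value of
;; *HITS* and one of the increments would be lost.
(defun record-hit ()
  (ccl:with-lock-grabbed (*hits-lock*)
    (incf *hits*)))
```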

John Thingstad

Dec 12, 2007, 3:07:29 PM
On Wed, 12 Dec 2007 20:40:00 +0100, Edi Weitz <spam...@agharta.de> wrote:

> On Wed, 12 Dec 2007 20:13:39 +0100, "John Thingstad" <jpt...@online.no>
> wrote:
>
>>> Clozure CL/OpenMCL uses native threads on Mac OS X so you can have
>>> multiple lisp threads executing on multiple cores simultaneously.
>>
>> No, that is a misunderstanding I think. Most lisp's lock the
>> scheduling to one processor even when they accept multiple
>> threads.
>
> Why do you think Clozure CL does that?
>
> Edi.
>

Perhaps SPUR Lisp?

http://portal.acm.org/citation.cfm?id=894779

--------------
John Thingstad

John Thingstad

Dec 12, 2007, 3:13:33 PM
On Wed, 12 Dec 2007 21:07:41 +0100, Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> wrote:

> On 2007-12-12 14:13:39 -0500, "John Thingstad" <jpt...@online.no> said:

I stand corrected.

--------------
John Thingstad

Edi Weitz

Dec 12, 2007, 3:37:53 PM
On Wed, 12 Dec 2007 21:07:29 +0100, "John Thingstad" <jpt...@online.no> wrote:

>> Why do you think Clozure CL does that?
>

> Perhaps SPUR Lisp?

Huh?

vanekl

Dec 12, 2007, 4:26:30 PM

The rule of thumb is you only want to have 1 active thread per core
for efficiency, unless you are working with a language where
transitioning between threads is extremely inexpensive (think Erlang).
Otherwise, the actual amount of work that gets done goes down because
of the time spent context switching.
That said, if you are dead set on working with thousands of threads,
check out Termite and Gambit Scheme. JFGI

Alexander Kjeldaas

Dec 12, 2007, 5:17:49 PM
TerryMc wrote:
>
> Is anyone doing multicore concurrency in Lisp? Lightweight threads?
> Many of the projects I've found on the web seem not to have been
> maintained for several years. What is state-of-the-art today, with the
> proliferation of dual- and quad-core chips?
>

I think you will find lots of support for multiple threads. The
problem is the garbage collector. Few implementations (any?) have
garbage collectors that are concurrent or parallel. So the garbage
collector can easily inhibit scalability, or to put it another way,
you need to be careful about consing when using multiple cores.
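One practical reading of this advice is to keep hot loops non-consing. A sketch (the function is illustrative; the declarations are standard Common Lisp, though how aggressively an implementation exploits them varies):

```lisp
;; A non-consing inner loop: typed double-float accumulation instead
;; of building intermediate lists with MAPCAR/REDUCE. Several threads
;; can run this without generating garbage for a shared GC to chase.
(defun dot-product (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize speed))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length a) acc)
      (incf acc (* (aref a i) (aref b i))))))
```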

Alexander

Tim Bradshaw

Dec 12, 2007, 6:40:48 PM
On Dec 12, 9:26 pm, vanekl <va...@acd.net> wrote:

>
> The rule of thumb is you only want to have 1 active thread per core
> for efficiency, unless you are working with a language where
> transitioning between threads is extremely inexpensive (think Erlang).

I think in fact the rule of thumb would be that you only want to have
one active thread per hardware-supported thread. Quite mundane
processors are using the technique which was (I think) first used
aggressively by Tera and having multiple hardware threads per core to
hide latency. These threads typically have no scheduling overhead -
the processor just decides which one to run each cycle. Depending on
the processor this decision may be more-or-less naive - the Tera MTA I
think decided based on availability of data from memory, while Sun's
Niagara I think simply round-robins and relies on statistics to hide
memory latency.

vanekl

Dec 12, 2007, 8:33:34 PM

I'm not a hardware expert so I don't doubt that everything you said
was accurate, but I'd like to add that there's more to switching
contexts than scheduling. A couple of years ago I spent some time
poking around Ruby's green threads and was astounded at all the work
that goes on behind the scenes. In Ruby, scheduling was one of the
simpler tasks, because it was pretty much just round robin unless a
thread was blocked, in which case it was skipped (this was back on
1.8.2).

I realize this was done in software, but even if there's hardware
support, things like cache flushes or caches filled with data from an
old context, changing stacks and registers, etc., can really slow
things down. Which is why some people don't see much of a future for
virtual operating systems (at least on today's hardware): even though
they may seem stable enough, they can thrash the caches, which most
modern CPUs rely upon since main memory is so slow.

I know, this was all a bit off topic. Now back to regularly scheduled
programming (pun intentional, in retrospect).

Paul Wallich

Dec 12, 2007, 9:01:15 PM
Tim Bradshaw wrote:
> On Dec 12, 9:26 pm, vanekl <va...@acd.net> wrote:
>
>> The rule of thumb is you only want to have 1 active thread per core
>> for efficiency, unless you are working with a language where
>> transitioning between threads is extremely inexpensive (think Erlang).
>
> I think in fact the rule of thumb would be that you only want to have
> one active thread per hardware-supported thread. Quite mundane
> processors are using the technique which was (I think) first used
> aggressively by Tera and having multiple hardware threads per core to
> hide latency. These threads typically have no scheduling overhead -
> the processor just decides which one to run each cycle.

No scheduling overhead once you've paid the cost of designing the
processor to have an appropriate number of immediately-available
execution contexts and/or the cost of compiling your software so that it
behaves in ways that let the hardware pretend effectively.

As various folks have discussed here, there are some interesting issues
for Lisp (as well as other languages) regarding how much state you want
to share among your threads by default.

paul

Maciej Katafiasz

Dec 13, 2007, 2:22:02 AM
On Wed, 12 Dec 2007 21:07:29 +0100, John Thingstad wrote:

>>>> Clozure CL/OpenMCL uses native threads on Mac OS X so you can have
>>>> multiple lisp threads executing on multiple cores simultaneously.
>>>
>>> No, that is a misunderstanding I think. Most lisp's lock the
>>> scheduling to one processor even when they accept multiple threads.
>>
>> Why do you think Clozure CL does that?
>>

Have you been smoking something harder than usual recently?

Cheers,
Maciej

Tim Bradshaw

Dec 13, 2007, 5:10:12 AM
On Dec 13, 2:01 am, Paul Wallich <p...@panix.com> wrote:

> No scheduling overhead once you've paid the cost of designing the
> processor to have an appropriate number of immediately-available
> execution contexts and/or the cost of compiling your software so that it
> behaves in ways that let the hardware pretend effectively.
>

Yes, of course. The point is that there's a fairly strong argument
that this is how hardware will be designed, because you need to find
something to throw all those transistors at, and supporting multiple
HW threads is one option which can do a pretty good job of hiding
memory latency.

I am fairly sure that the Tera MTA did indeed require a lot of
compiler knowledge, but it was a pretty extreme solution - it had no
cache at all but relied on a very large number of threads to hide
latency (so a loop over an array would spawn 100 threads or something,
which would then get scheduled as data came in from memory).

However, for things like Niagara and its descendants, it just looks to
applications (and I think to the OS to a very large extent) as if
there are more CPUs than there are - if you ask a T2000 how many CPUs
it has, it will say "32" even though there are only 8 physical cores.

Tim Bradshaw

Dec 13, 2007, 5:17:00 AM
On Dec 13, 1:33 am, vanekl <va...@acd.net> wrote:

> I realize this was done in software, but even if there's hardware
> support things
> like cache flushes or caches filled with data from an old context,
> changing stacks
> and registers, etc., can really slow things down.

Cache behaviour is interesting, but the point of these HW threaded
designs is that the changing stacks and registers bit just happens in
a cycle - the core has multiple sets of registers etc, and will choose
one each cycle. From the application's perspective it just looks like
there are more processors than there are.

> Which is why some
> people don't see
> much of a future for virtual operating systems (at least on today's
> hardware) because even though
> they may seem stable enough, they can thrash the caches, which most
> modern CPUs rely
> upon since main memory is so slow.

Memory isn't slow (or should not be in a decent design), it just has
high latency. Caches are one way of hiding memory latency, hardware
threads as I'm discussing here are another. They both have their
drawbacks, of course.

Slobodan Blazeski

Dec 13, 2007, 9:42:36 AM
On Dec 12, 8:06 pm, "John Thingstad" <jpth...@online.no> wrote:
> On Wed, 12 Dec 2007 19:45:55 +0100, TerryMc <terry.mcint...@gmail.com> wrote:


Have you worked with it?

Slobodan

John Thingstad

Dec 13, 2007, 11:14:47 AM
On Wed, 12 Dec 2007 21:07:41 +0100, Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> wrote:

> On 2007-12-12 14:13:39 -0500, "John Thingstad" <jpt...@online.no> said:

That is great news! (I will get a MAC)

--------------
John Thingstad

alex.re...@gmail.com

Dec 13, 2007, 12:00:24 PM
On Dec 12, 11:45 am, TerryMc <terry.mcint...@gmail.com> wrote:
> I'm returning to Lisp after a hiatus of about 30 years, with a project
> ( the game of Go ), which could use a "programmable programming
> language" in many interesting ways.
>
> The intended development target will be a multicore box - quite likely
> an 8-way Mac.

Sounds like a great exploration. Being able to launch, say, 8 threads,
e.g., in Clozure CL, executing on 8 cores is part of the answer but
unfortunately will not help much with traditional algorithms such as
min/max search typically used in turn-based games. The problem is that
many of these search algorithms are very difficult to implement in a
parallel way or to make work incrementally. I encourage you to have a
look at our Antiobjects work. Antiobjects are based on diffusion
equations and 2D or 3D cells. You can take your game world, e.g., a 19
x 19 Go board, and map that onto the 8 threads/cores running
collaborative diffusions. The communication is kept to a minimum as
you only have to exchange diffusion values at the edges of your game
board pieces. A different way to slice the problem is to have each
core run a collaborative diffusion layer of your game and then
accumulate the layers into some global goal landscape.

Can Collaborative Diffusion really be used to make a great game of Go?
I don't know, but I have a hunch that with some conceptual additions it
may. I would be very interested to hear from people trying to apply
this idea.


Repenning, A., Collaborative Diffusion: Programming Antiobjects. in
OOPSLA 2006, ACM SIGPLAN International Conference on Object-Oriented
Programming Systems, Languages, and Applications, (Portland, Oregon,
2006), ACM Press.

http://www.cs.colorado.edu/~ralex/papers/PDF/OOPSLA06antiobjects.pdf

Alex

Andreas Davour

Dec 13, 2007, 12:28:50 PM
"John Thingstad" <jpt...@online.no> writes:

> Allegro (www.franz.com), might (they will not say) release a multicore
> version in the (near?) future.
> That would work with a MAC too.

You know that MAC is not the same as Mac? Mac is the computer, MAC is as
in Project MAC. In Lisp circles mentioning MAC can get the response
you're not looking for, and Mac is short for Macintosh so it
shouldn't be capitalized. (sp?)

/Andreas

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

scoot...@gmail.com

Dec 13, 2007, 12:51:36 PM
On Dec 12, 10:45 am, TerryMc <terry.mcint...@gmail.com> wrote:
> The intended development target will be a multicore box - quite likely
> an 8-way Mac.

And here I thought that I was expressing an outrageous opinion --
functional languages such as Lisp might make a comeback as an answer
to many/multicore programming problems -- as part of a panel discussion
at the SC'07 "Many/Multicore" workshop last month.

To get Lisp into the multicore arena requires better virtual machines
and compilers, or VMs-on-top-of-VMs, to analyze the CFG to determine
what can and cannot be parallelized (what Patterson is calling
"autotuners" in the "Berkeley View of Parallelism, 2.0".) Explicit
thread creation and control is a good first step, but there's a lot of
work to be done to make multicore successful.

Good to see an explicit thread (pardon the pun) on c.l.l.


-scooter

vanekl

Dec 13, 2007, 1:10:03 PM
On Dec 13, 5:00 pm, alex.repenn...@gmail.com wrote:
> On Dec 12, 11:45 am, TerryMc <terry.mcint...@gmail.com> wrote:
>
> > I'm returning to Lisp after a hiatus of about 30 years, with a project
> > ( the game of Go ), which could use a "programmable programming
> > language" in many interesting ways.
>
> > The intended development target will be a multicore box - quite likely
> > an 8-way Mac.
>
> Sounds like a great exploration. Being able to launch, say, 8 threads,
> e.g., in Clozure CL, executing on 8 cores is part of the answer but
> unfortunately will not help much with traditional algorithms such as
> min/max search typically used in turn based games. The problem is that
> many of these search algorithms are very difficult to implement in a
> parallel ...

Min/max search is tree search, and tree search is one of the easiest
algorithms to parallelize, because in many instances you can spin off
a new thread at a tree branch.
Sorry if this sounds like a drive-by nitpick.
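For illustration, the branch-level parallelism described above might be sketched with the bordeaux-threads portability layer (an assumption here: JOIN-THREAD returning the thread function's value, which current bordeaux-threads implementations provide; SCORE-FN and the move list are hypothetical):

```lisp
;; Score each top-level move in its own thread, then join the threads
;; and take the maximum. SCORE-FN is assumed to be a pure function of
;; the move, so the threads share nothing and need no locking.
(defun parallel-best-score (moves score-fn)
  (let ((threads
          (mapcar (lambda (move)
                    (bt:make-thread (lambda () (funcall score-fn move))))
                  moves)))
    (reduce #'max (mapcar #'bt:join-thread threads))))
```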

Alex Mizrahi

Dec 13, 2007, 1:11:00 PM
??>> it is possible for two (or more) lisp threads to be executing
??>> simultaneously, possibly trying to access and/or modify the
??>> same data structures."

JT> That is great news! (I will get a MAC)

you can also get this working on Linux (and FreeBSD iirc) with SBCL.

as for Windows, you can get multiple cores used by ABCL.


Dimiter "malkia" Stanev

Dec 13, 2007, 7:35:59 PM

Maybe the problem is in stopping the threads once the solution is
already found, and also in scheduling them so that the number of
threads equals the number of CPUs.

For example, TerminateThread() on Windows is not a good thing to use,
as far as my knowledge goes. If you want to stop a thread on Windows,
the code should be written so that it can be safely interrupted. (I
could be wrong about all this, but that's what I know from MSDN.)

From MSDN: "TerminateThread is a dangerous function that should only be
used in the most extreme cases. You should call TerminateThread only if
you know exactly what the target thread is doing, and you control all of
the code that the target thread could possibly be running at the time of
the termination. For example, TerminateThread can result in the
following problems:"

Of course, there might be other ways. And Windows is not the only
OS :)

Mike G.

Dec 13, 2007, 9:44:38 PM
On Dec 12, 1:45 pm, TerryMc <terry.mcint...@gmail.com> wrote:

> The intended development target will be a multicore box - quite likely
> an 8-way Mac.

If you really like the Apple interface, and don't mind the extra
money, by all means get the Macintosh. Very early in 2008 I am getting
an 8-way Opteron based around a Supermicro H8DME-2 motherboard, with
8 GB of RAM and a terabyte of storage, which I was able to assemble
for just over $1700.

Linux / AMD64 / SBCL make a winning combination.

> Is anyone doing multicore concurrency in Lisp? Lightweight threads?
> Many of the projects I've found on the web seem not to have been
> maintained for several years. What is state-of-the-art today, with the
> proliferation of dual- and quad-core chips?

Yeah, as a serious hobbyist I'm doing multicore concurrency in Lisp on
dual-cores. In fact I have 4 dual-cores that I have networked, and I
do distributed programming in Lisp. My project is a set of macros that
implement parallelized versions of ordinary Lisp functions. The macros
try to scale to the system they are generating code for (i.e., an SMP
or a networked machine). I have some simple benchmark code that I've
put together on top of my macros -- vector operations, parallel tree
searches, and so forth.

My new box is to help improve / test the SMP scaling code, and also
the load-balancing code for the distributed stuff. I only have
dual-cores, and all my machines are equivalent, so I'm sure this stuff
needs a lot of work. :)

Daniel Weinreb

Dec 13, 2007, 10:47:30 PM

The most complete official information about what's going on
with Allegro CL in this regard can be found at:

http://www.franz.com/support/faq/#s-mp

Espen Vestre

Dec 14, 2007, 2:57:36 AM
"Mike G." <michael...@gmail.com> writes:

> Linux / AMD64 / SBCL make a winning combination.

Are you sure? As I recently mentioned, our experience is that
Linux/Core 2/LW is a much faster combination than Linux/AMD64/LW
(especially if LW = 64 bit LW 5).

I'm curious to know if this is the case with SBCL too, or if it
is just the code generated by the LW compiler that tends to run
faster on Core 2.
--
(espen)

Stefan Nobis

Dec 14, 2007, 4:05:52 AM
vanekl <va...@acd.net> writes:

> Min/Max search is tree search. Tree search is one of the easiest
> algorithms to parallelize because in many instances you can spin off
> a new thread at a tree branch.

But what about the state? To take full advantage of the
multithreading, each thread has to get a complete copy of the state
(or you incur quite some performance penalties from locking). Even
then you have to collect the outcomes from the different threads.

Parallelizing tree search is only easy if the state is (really)
small.

--
Stefan.

Rainer Joswig

Dec 14, 2007, 7:34:39 AM
In article <m1n8j.10147$3s1.9435@trnddc06>,
Daniel Weinreb <d...@alum.mit.edu> wrote:

From reading it, I understand that Franz is working on multi-core
support, but also wants to make their Lisp thread-safe.

Sounds good!

But I have a question.

Let's say we have three models of threading (there are more,
but leave it to three for now):

1) no threading
2) threading without multi-core support
3) threading with multi-core support

Then we have:

a) no thread safety
b) thread safety

Franz works on 3) + b) .

But what do they have now? Is it 2) + a) ?

SBCL is not thread safe. OpenMCL is probably not thread safe either?
What about the others? Which Common Lisp implementation
is thread safe (given their model of threads)?

Then I would ask: How is 3) + a) worse than 2) + a)?

Don't we need thread safety already for 'simple'
threading (without multi-core support)? Given for example
that one uses preemptive scheduling provided by the
OS to switch between threads?

--
http://lispm.dyndns.org/

Alex Mizrahi

Dec 14, 2007, 7:47:58 AM
RJ> 1) no threading
RJ> 2) threading without multi-core support
RJ> 3) threading with multi-core support

RJ> Then we have:

RJ> a) no thread safety
RJ> b) thread safety

RJ> Franz works on 3) + b) .

RJ> But what do they have now? Is it 2) + a) ?

There was some discussion involving Duane Rettig some time ago (Message-ID:
<o0wsstx...@gemini.franz.com>), where we found out that "thread safety"
is quite a vague concept.
As far as I understood, ACL tries to be reasonably thread-safe, which is
not that complex given 2); safer than SBCL, for example in the hash-table
area.

If we have 3), there's no such thing as "absolute" thread safety anyway.

Alex Mizrahi

Dec 14, 2007, 8:55:03 AM
??>> Min/Max search is tree search. Tree search is one of the easiest
??>> algorithms to parallelize because in many instances you can spin off
??>> a new thread at a tree branch.

SN> But what about the state?

What about the state? The state in a min/max search is the current value
(or a pointer to it), which is likely to be small.

As for the tree itself, there shouldn't be a problem when multiple
threads only _read_ from it.

SN> To take full advantages of the multithreading each thread has to get a
SN> complete copy of the state (or you get quite some performance penalties
SN> from locking). Even then you have to collect the outcomes from the
SN> different threads.

Using readily available MP primitives, which are quite fast by the way,
it's just a few lines of code. As long as you only have a couple of
threads it shouldn't be a problem.

SN> To parallelize tree search is only easy, if the state is (really)
SN> small.

I'd say it only makes sense if the tree is sufficiently large. If it's
not, it's not feasible: you'd lose more on overhead.


Juho Snellman

Dec 14, 2007, 9:10:09 AM
Rainer Joswig <jos...@lisp.de> writes:
> SBCL is not thread safe. OpenMCL is probably not thread safe either?
> What about the others? Which Common Lisp implementation
> is thread safe (given their model of threads)?

I don't really understand why Franz feel the need to imply on their
web page that SBCL does not care about thread safety, or to write
obvious untruths about the technical implementation aspects.
Commercial pressures, I guess.

There are many kinds of thread safety. For example there are a lot of
internal caches in SBCL, which the user has no direct control over.
These caches might be used for things like type checks, method
dispatch, etc. Since these are internal implementation resources that
the user has no control over, SBCL will obviously need to ensure that
access to them is safe. (It would, after all, be completely
unreasonable to require a user to add locking around every call to
SUBTYPEP, or around every possible method dispatch.)

In some cases that will be done by fine grained locking, in other
cases by lockless thread safe algorithms, in yet more cases by putting
locks around an entire subsystem. But note that there is a lot more to
this than the claim by Franz about "big locks around the compiler,
garbage collector and foreign loader, with no other locking to prevent
crashes when global resources are modified in separate thread". Is it
guaranteed that all such cases have been found and fixed? No, of
course not. But it is an area that some SBCL developers are serious
about, and such problems are definitely treated as bugs, not features.

Another area of thread safety is the core runtime of the Lisp system.
Again, this is an area of SBCL that a fair amount of thought and work
has gone into making safer. It's certainly not just a matter of simply
mapping pthreads to Lisp threads and calling it a day.

On the other hand, there's thread safety of user-accessible data
structures. Here the policy of SBCL is that users are responsible for
properly synchronizing their accesses, we just provide the tools for
the users (locks, condition variables, a compare-and-swap on many
kinds of primitive places). I believe this is the right engineering
choice. The majority of data structures are not accessed concurrently
from multiple threads. It makes little sense to unconditionally impose
the thread safety overhead on all users, in all cases. It's easier for
the user to add synchronization, than for him to remove it from
built-in data structures.
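The tools mentioned above can be sketched with SBCL's own API (names per its documentation; the table and counter are illustrative):

```lisp
;; User-side synchronization in SBCL: a mutex guarding a shared hash
;; table, and a lock-free counter built on COMPARE-AND-SWAP.
(defvar *table* (make-hash-table))
(defvar *table-lock* (sb-thread:make-mutex :name "table"))

(defun table-put (key value)
  (sb-thread:with-mutex (*table-lock*)
    (setf (gethash key *table*) value)))

(defvar *counter* 0)

(defun bump-counter ()
  ;; Retry until no other thread changed *COUNTER* between our read
  ;; and our swap; COMPARE-AND-SWAP returns the previous value.
  (loop for old = *counter*
        when (eq old (sb-ext:compare-and-swap (symbol-value '*counter*)
                                              old (1+ old)))
          return (1+ old)))
```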

Anyway, good luck to Franz on the path they've chosen.

--
Juho Snellman

Rainer Joswig

Dec 14, 2007, 9:44:15 AM
In article <87sl25g...@vasara.proghammer.com>,
Juho Snellman <jsn...@iki.fi> wrote:

How far is SBCL from the above to be considered 'thread-safe'
for practical use?

> On the other hand, there's thread safety of user-accessible data
> structures. Here the policy of SBCL is that users are responsible for
> properly synchronizing their accesses, we just provide the tools for
> the users (locks, condition variables, a compare-and-swap on many
> kinds of primitive places). I believe this is the right engineering
> choice. The majority of data structures are not accessed concurrently
> from multiple threads. It makes little sense to unconditionally impose
> the thread safety overhead on all users, in all cases. It's easier for
> the user to add synchronization, than for him to remove it from
> built-in data structures.

I kind of agree with the above, given that the user knows
what these data structures are.

For a user, the use of base language constructs should be 'thread-safe'.

Some examples...

* CLOS: define/redefine classes/methods/...
* CLOS: apply generic functions
* definitions of functions/macros/variables/packages/...
* I/O: reading / printing to streams
* compiler/interpreter: compile/load/interpret code

Say, I have a DEFSYSTEM that allows several compilation
threads, could I expect that I can use the compiler
that way?

>
> Anyway, good luck to Franz on the path they've chosen.

--
http://lispm.dyndns.org/

Tim Bradshaw

Dec 14, 2007, 10:52:00 AM
On Dec 14, 12:34 pm, Rainer Joswig <jos...@lisp.de> wrote:

>
> 1) no threading
> 2) threading without multi-core support
> 3) threading with multi-core support
>
> Then we have:
>
> a) no thread safety
> b) thread safety
>
> Franz works on 3) + b) .

I don't think this is quite the right distinction. The issue at stake
is whether the system has real concurrency or not, in the sense that
multiple things can really happen simultaneously as opposed to them
all being time-sliced, either by Lisp, or by the underlying threading
system.

Obviously things can only be truly concurrent if you have hardware
which can actually do multiple things at once and exposes that ability
to programs[*].

Programming environments which run on truly-concurrent platforms and
want to take advantage of that (they can elect not to either by not
having multiple threads at all, by implementing their own time-sliced
threading system, or by having a "giant lock" approach to block
concurrency) have to deal with the awkward fact that things can happen
at the same time. They thus have to protect themselves against this,
and additionally make decisions about what operations they should make
appear atomic to applications (Java famously got this wrong early on).

--tim

[*] plenty of hardware has some concurrency which is *not* exposed,
such as speculative execution, etc.

Duane Rettig

Dec 14, 2007, 11:31:36 AM
Juho Snellman <jsn...@iki.fi> writes:

> Rainer Joswig <jos...@lisp.de> writes:
>> SBCL is not thread safe. OpenMCL is probably not thread safe, too?
>> What about the others? Which Common Lisp implementation
>> is thread safe (given their model of threads)?
>
> I don't really understand why Franz feel the need to imply on their
> web page that SBCL does not care about thread safety, or to write
> obvious untruths about the technical implementation aspects.
> Commercial pressures, I guess.

What "untruths" are you talking about? We're certainly not interested
in telling untruths, let alone obvious ones, so if you care to
elaborate on the specific untruths told in our faq entry, we'll
certainly change them.

The most critical statements we make about sbcl's implementation come
directly from the sbcl documentation itself:

http://www.sbcl.org/manual/Implementation-_0028Linux-x86_0029.html#Implementation-_0028Linux-x86_0029

"Large amounts of the SBCL library have not been inspected for
thread-safety. Some of the obviously unsafe areas have large locks
around them, so compilation and fasl loading, for example, cannot be
parallelized. Work is ongoing in this area."

Perhaps you should look to fixing this documentation - we'll be happy
to change our FAQ as soon as we notice or are told that this
documentation has changed.

> There are many kinds of thread safety. For example there are a lot of
> internal caches in SBCL, which the user has no direct control over.
> These caches might be used for things like type checks, method dispatch,
> etc. Since these are internal implementation resources that the user
> has no control over, SBCL will obviously need to ensure that access to
> them is safe. (It would, after all, be completely unreasonable to
> require a user to add locking around every call to SUBTYPEP, or around
> every possible method dispatch).

Yes, absolutely. So does sbcl do this? What do you do, for example,
to ensure that

(defvar *x* 0)

... (incf *x*) ...

which has never been bound in a lambda or let form yields an
increasing value for *x* (which of course wouldn't happen if global
symbol accesses aren't thread-safe, due to some other thread trying
the same increment but being scheduled less often)? There is the
possibility that although all threads are incrementing *x*, its
value could actually be reduced at various times.
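
To make the hazard concrete, here is a sketch. The bt: operators are
Bordeaux-Threads portability-layer names, chosen for illustration
only; substitute your implementation's native thread and lock API.

```lisp
;; (incf *x*) expands into a read, an add, and a write of the global
;; value.  Two threads can both read the same old value, each write
;; back old+1, and one increment is lost -- the race described above.
(defvar *x* 0)
(defvar *x-lock* (bt:make-lock "*x* lock"))

(defun unsafe-bump ()            ; lost updates possible
  (incf *x*))

(defun safe-bump ()              ; read-modify-write serialized
  (bt:with-lock-held (*x-lock*)
    (incf *x*)))
```

With several threads each calling these thousands of times,
unsafe-bump typically leaves *x* short of the expected total, while
safe-bump does not.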

I presume from your description of bound specials (i.e. specials bound
in a lambda or let form) that the accesses will then be completely
disjoint and thus will not clash with each other (the implication is
drawn because of the statement that bindings are local to the
thread). That is a Good Thing.

What about get and (setf get)? The property list is a global resource
that is certainly not updated very often, but often enough that it
could pose a problem in a high-action system with many threads
updating plists. So does this count as a "user is responsible" type
of access? Because cons cells are individual, and a setf of get can
change the actual structure of the property list, one might wind up
with two of the same indicator on the plist, or, worse yet,
half-a-property [in the form (<prop> <val> <val> <prop> <val> ...)].
What does sbcl do in this situation?
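
A sketch of the failure mode and one user-side cure. The lock and the
Bordeaux-Threads-style API here are my own illustrative assumptions,
not a claim about what any implementation does.

```lisp
;; If two threads both fail to find the indicator, each pushes a fresh
;; (indicator value) pair onto the symbol's plist, leaving duplicate
;; indicators; interleaved structural writes can be even worse.
;; Serializing every plist update behind one lock avoids both.
(defvar *plist-lock* (bt:make-lock "plist lock"))

(defun locked-put (symbol indicator value)
  (bt:with-lock-held (*plist-lock*)
    (setf (get symbol indicator) value)))

(defun locked-get (symbol indicator &optional default)
  (bt:with-lock-held (*plist-lock*)
    (get symbol indicator default)))
```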

To re-paste a phrase from your last paragraph:

> has no control over, SBCL will obviously need to ensure that access to
> them is safe.

Again, this is agreed, and Allegro CL will also do the same. But in
the meanwhile, while those things are being done, we will not claim to
have already done this, to have an smp lisp.

> In some cases that will be done by fine grained locking, in other
> cases by lockless thread safe algorithms, in yet more cases by putting
> locks around an entire subsystem. But note that there is a lot more to
> this than the claim by Franz about "big locks around the compiler,
> garbage collector and foreign loader, with no other locking to prevent
> crashes when global resources are modified in separate thread". Is it
> guaranteed that all such cases have been found and fixed? No, of
> course not. But it is an area that some SBCL developers are serious
> about, and such problems are definitely treated as bugs, not features.

The point is that you've chosen to put the smp label onto sbcl,
knowing that there are obvious bugs that are going to affect your
users. Perhaps this is acceptable to your users; after all, they do
have the sources, and can fix these bugs themselves.

> Another area of thread safety is the core runtime of the Lisp system.
> Again, this is an area of SBCL that a fair amount of thought and work
> has gone into making safer. It's certainly not just a matter of simply
> mapping pthreads to Lisp threads and calling it a day.

Of course not. And that's not what we imply in our faq entry. Our
implication is that neither Allegro CL nor sbcl smp is done, and for
our part we will not advertize that we have an smp until it is indeed
done.

> On the other hand, there's thread safety of user-accessible data
> structures. Here the policy of SBCL is that users are responsible for
> properly synchronizing their accesses, we just provide the tools for
> the users (locks, condition variables, a compare-and-swap on many
> kinds of primitive places). I believe this is the right engineering
> choice. The majority of data structures are not accessed concurrently
> from multiple threads. It makes little sense to unconditionally impose
> the thread safety overhead on all users, in all cases. It's easier for
> the user to add synchronization, than for him to remove it from
> built-in data structures.

I think we agree in all of these areas. Now if your own documentation
only reflected that philosophy...

> Anyway, good luck to Franz on the path they've chosen.

Thanks!

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

alex.re...@gmail.com

unread,
Dec 14, 2007, 11:38:36 AM12/14/07
to

True, but the "many" part is a huge problem. It is easy to do in
principle but really hard to do well. If you find a good solution
that works well (load balanced) on N-core chips you will be a real
contender for the Turing Award. The branching factor of game trees will
not typically map nicely onto the number of cores. To make things
worse, pruning such as alpha/beta results in uneven search depths. The
net effect is that core load balancing is really hard, requiring
substantial overhead with typically bad results. By bad results I mean
that the overhead is close to or sometimes even larger than the
potential gain => net loss.

An approach such as collaborative diffusion does not spend any time
with dispatching overhead. Layers or cell patches are mapped
statically to the game state. Additionally, because they all compute
the same thing all the cores are always load balanced.
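
As I read it, the idea can be sketched like this. Every name here
(diffuse-row, the bt: thread operators from Bordeaux-Threads) is
hypothetical illustration, not Alex's actual code:

```lisp
;; Static mapping: the board is cut into fixed row patches, one worker
;; per core, and every worker runs the same update over its own patch,
;; so per-core work is identical by construction.  A real version
;; would also need a barrier between steps to keep patches in sync.
(defun run-diffusion-workers (board n-cores steps)
  (let* ((rows (array-dimension board 0))
         (patch (ceiling rows n-cores))
         (workers
           (loop for core below n-cores
                 collect (let ((start (* core patch))
                               (end (min rows (* (1+ core) patch))))
                           (bt:make-thread
                            (lambda ()
                              (loop repeat steps
                                    do (loop for row from start below end
                                             do (diffuse-row board row)))))))))
    (mapc #'bt:join-thread workers)))
```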

all the best, Alex

Rainer Joswig

unread,
Dec 14, 2007, 11:44:54 AM12/14/07
to
In article
<596a0ba9-afc9-46a5...@d21g2000prf.googlegroups.com>,
Tim Bradshaw <tfb+g...@tfeb.org> wrote:

> On Dec 14, 12:34 pm, Rainer Joswig <jos...@lisp.de> wrote:
>
> >
> > 1) no threading
> > 2) threading without multi-core support
> > 3) threading with multi-core support
> >
> > Then we have:
> >
> > a) no thread safety
> > b) thread safety
> >
> > Franz works on 3) + b) .
>
> I don't think this is quite the right distinction. The issue at stake
> is whether the system has real concurrency or not, in the sense that
> multiple things can really happen simultaneously as opposed to them
> all being time-sliced, either by Lisp, or by the underlying threading
> system.

If we have threading without multi-core support, we would still need
a lot of thought about how to make it safe, since the scheduler
could interrupt a Lisp thread in the middle of doing
something destructive (for example, updating a class definition).
Some other thread, time-sliced in later, could then be affected by that.

I can imagine that simultaneously running threads can
create additional problems. But even without them
you need to control changes to the running Lisp system.

> Obviously things can only be truly concurrent if you have hardware
> which can actually do multiple things at once and exposes that ability
> to programs[*].
>
> Programming environments which run on truly-concurrent platforms and
> want to take advantage of that (they can elect not to either by not
> having multiple threads at all, by implementing their own time-sliced
> threading system, or by having a "giant lock" approach to block
> concurrency) have to deal with the awkward fact that things can happen
> at the same time. They thus have to protect themselves against this,
> and additionally make decisions about what operations they should make
> appear atomic to applications (Java famously got this wrong early on).
>
> --tim
>
> [*] plenty of hardware has some concurrency which is *not* exposed,
> such as speculative execution, etc.

--
http://lispm.dyndns.org/

Tim Bradshaw

unread,
Dec 14, 2007, 12:25:55 PM12/14/07
to
On Dec 14, 4:44 pm, Rainer Joswig <jos...@lisp.de> wrote:

> If we have threading without multi-core support, we would still need
> a lot of thought about how to make it safe, since the scheduler
> could interrupt a Lisp thread in the middle of doing
> something destructive (for example, updating a class definition).
> Some other thread, time-sliced in later, could then be affected by that.

Yes, you would. If you're doing your own threading (ie you implement
the scheduler) then you can probably make good choices fairly easily.
If you're relying on some other scheduler (such as the OSs) then you'd
need to deal with various issues.

>
> I can imagine that simultaneously running threads can
> create additional problems. But even without them
> you need to control changes of the running Lisp systems.

You do need to handle it without concurrency, but there are whole new
classes of problems that arise with it. Additionally, solutions which
look reasonable when you think of threading as being "time slicing a
single processor" look terrible when you think of it a "sharing a
large number of processors". In particular things like a "without
preemption" macro translate as "serialise the machine" which is a
performance disaster if you do it very often.

Mike G.

unread,
Dec 14, 2007, 12:56:19 PM12/14/07
to
On Dec 14, 2:57 am, Espen Vestre <es...@vestre.net> wrote:

I haven't tested Core 2's. Prior to purchasing my personal machine, I
tested on quad-core, single-processor AMD and Xeon with SBCL, and
found them comparable to the common benchmarks published. AMD w/
Hypertransport is a slight win for certain memory-transfer intensive
algorithms. Xeon has a similar edge for arithmetic speed. But Lisp
isn't the only software on this machine - we have Linux, too. It has
been my experience (as a working Linux admin) that Linux on AMD is
more responsive and smoother than Linux on Xeon. I attribute this to
Linux's caching I/O and AMD HT working well together. One benchmark I
did for fun was to load the Xeons and Opterons up with 16gig of RAM
and rip a DVD to DivX. The first run, Xeon wins. Do the rip a second
time, and AMD wins - because 16gig allows the entire DVD to be
cached :)

But can you even build an 8-way Core 2? I was under the impression
that Core 2 was all single-socket stuff (i.e. quad at the most), and
that for dual-socket 8-ways, one needed an Opteron or Xeon.

I wanted 8-cores. I almost went Xeon, but decided to go w/ AMD mostly
because my current dual-core systems are AMD, and I wanted the freedom
to move my FASLs around fearlessly. My machine won't ship until Feb.
tho, so if you can point me to a dual-socket Core 2 board, I'd
consider it.

If I could have purchased a 64-core 68k for less money, I would have
-- I'm more interested in the number of simultaneous execution units
than the power of the individual units.

Raffael Cavallaro

unread,
Dec 14, 2007, 1:31:03 PM12/14/07
to
On 2007-12-14 11:38:36 -0500, alex.re...@gmail.com said:

> An approach such as collaborative diffusion does not spend any time
> with dispatching overhead. Layers or cell patches are mapped
> statically to the game state. Additionally, because they all compute
> the same thing all the cores are always load balanced.

This is a little terse for me to parse reliably (that is, I'm sure I'm
not understanding your real meaning). I've read your paper, so for
example, don't the various cells need to cooperatively and dynamically
produce a diffusion map? How is this "mapped statically"? I'm sure I'm
misunderstanding what you're saying here, so could you elaborate on this
a bit?

TerryMc

unread,
Dec 14, 2007, 1:33:45 PM12/14/07
to
On Dec 12, 12:07 pm, "John Thingstad" <jpth...@online.no> wrote:
> PÃ¥ Wed, 12 Dec 2007 20:40:00 +0100, skrev Edi Weitz <spamt...@agharta.de>:
>
> > On Wed, 12 Dec 2007 20:13:39 +0100, "John Thingstad" <jpth...@online.no>
> > wrote:
>
> >>> Clozure CL/OpenMCL uses native threads on Mac OS X so you can have
> >>> multiple lisp threads executing on multiple cores simultaneously.

>
> >> No, that is a misunderstanding I think. Most lisp's lock the
> >> scheduling to one processor even when they accept multiple
> >> threads.
>
> > Why do you think Clozure CL does that?
>
> > Edi.
>
> Perhaps SPUR Lisp?
>
> http://portal.acm.org/citation.cfm?id=894779
>
> --------------
> John Thingstad

Interesting, but designed for a specific SPUR workstation developed at
Berkeley? Where can I purchase one, how much, and what do I get for my
money? ;)

Message has been deleted

TerryMc

unread,
Dec 14, 2007, 2:03:38 PM12/14/07
to

True. One can split out the branches, collect information from
different parts of the tree, and merge that information.

Information sharing is likely to be a challenge. Properly done, it
could speed up the search, for example when life-and-death status is
determined.

The currently fashionable methods for the game of Go rely upon Monte
Carlo methods and something called UCT, Upper Confidence bounds
applied to Trees.

Message has been deleted
Message has been deleted

Juho Snellman

unread,
Dec 14, 2007, 7:22:09 PM12/14/07
to
Duane Rettig <du...@franz.com> writes:
> Juho Snellman <jsn...@iki.fi> writes:
>
> > Rainer Joswig <jos...@lisp.de> writes:
> >> SBCL is not thread safe. OpenMCL is probably not thread safe, too?
> >> What about the others? Which Common Lisp implementation
> >> is thread safe (given their model of threads)?
> >
> > I don't really understand why Franz feel the need to imply on their
> > web page that SBCL does not care about thread safety, or to write
> > obvious untruths about the technical implementation aspects.
> > Commercial pressures, I guess.
>
> What "untruths" are you talking about? We're certainly not interested
> in telling untruths, let alone obvious ones, so if you care to
> elaborate on the specific untruths told in our faq entry, we'll
> certainly change them.

I object to the claim "as of December 2007 they have big locks around
the compiler, garbage collector and foreign loader, with no other
locking to prevent crashes when global resources are modified in
separate threads". It is true that there are big locks around those
areas. It is not true that those big locks are the only steps taken to
ensure consistency of global resources. The other steps include less
coarse locks, and use of lockless algorithms. The suggestion that the
totality of thread safety work in SBCL has consisted of putting in
three big locks is, frankly, insulting.

Nor do I appreciate the suggestion that SBCL is not taking thread
safety seriously: "Also, we believe that when they get serious about
solving the problems in #2 ...".

Now, if either of those statements had been made on comp.lang.lisp, I
would probably have ignored them. But it seems very strange for you to
be making such claims in your marketing material.

> The most critical statements we make about sbcl's implementation come
> directly from the sbcl documentation itself:
>
> http://www.sbcl.org/manual/Implementation-_0028Linux-x86_0029.html#Implementation-_0028Linux-x86_0029
>
> "Large amounts of the SBCL library have not been inspected for
> thread-safety. Some of the obviously unsafe areas have large locks
> around them, so compilation and fasl loading, for example, cannot be
> parallelized. Work is ongoing in this area."

This, on the other hand, is essentially true (the value of "large" is
smaller than when the text was originally written, but it's still
true). I have absolutely no problem with that quote, feel free to
leave it in.

> > There are many kinds of thread safety. For example there are a lot
> > of internal caches in SBCL, which the user has no direct control
> > over. These caches might be used for things like type checks, method
> > dispatch, etc. Since these are internal implementation resources
> > that the user has no control over, SBCL will obviously need to
> > ensure that access to them is safe. (It would, after all, be
> > completely unreasonable to require a user to add locking around
> > every call to SUBTYPEP, or around every possible method dispatch).
>
> Yes, absolutely. So does sbcl do this? What do you do, for example,
> to ensure that
>
> (defvar *x* 0)
>
> ... (incf *x*) ...
>
> which has never been bound in a lambda or let form yields an
> increasing value for *x* (which of course wouldn't happen if global
> symbol accesses aren't thread-safe, due to some other thread trying
> the same increment but being scheduled less often)?

You're using a definition of thread safe here that I can't agree with.
Access to global variables is thread safe in SBCL. That does in no way
imply that incrementing a global variable would, or should, be atomic.

I'm also a bit puzzled, since I was just describing the kinds of
global resources that *must* be protected by the implementation, since
it would be unreasonable or even completely impossible for the user to
take care of the synchronization. You say "Yes, absolutely.", and then
give a canonical example of the opposite!

A global variable is not a resource that the user has no control
over. There are some aspects of special variables, for example
ensuring that the allocation of thread-local storage for them is safe,
that the user can't control. Incrementing a variable isn't one of
those situations. If you truly think that it should be an atomic
operation for an implementation to qualify as thread safe, it's
probably best to agree to disagree.

(It might make sense to include a function for doing an atomic add on
a symbol-value with the implementation, but it's not something I would
expect to see happen by default.)
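
For what it's worth, such a function falls out of the
compare-and-swap primitive mentioned above as a retry loop. A sketch
follows; the CAS operator's name (SB-EXT:CAS here) and the places it
accepts vary by implementation and version, so treat it as pseudocode
for the idea rather than supported API:

```lisp
;; Atomic add on a global special, built from compare-and-swap.
;; SB-EXT:CAS on (SYMBOL-VALUE ...) is assumed here for concreteness.
(defun atomic-add-symbol-value (symbol delta)
  (loop
    (let* ((old (symbol-value symbol))
           (new (+ old delta)))
      ;; CAS returns the value previously in the place; if that equals
      ;; OLD, our swap took effect and NEW is the published value.
      (when (eql old (sb-ext:cas (symbol-value symbol) old new))
        (return new)))))
```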

> I presume from your description of bound specials (i.e. specials bound
> in a lambda or let form) that the accesses will then be completely
> disjoint and thus will not clash with each other (the implication is
> drawn because of the statement that bindings are local to the
> thread). That is a Good Thing.

A threaded Lisp implementation where bound specials aren't thread
local would probably be completely unusable.



> What about get and (setf get)? The property list is a global resource
> that is certainly not updated very often, but often enough that it
> could pose a problem in a high-action system with many threads
> updating plists. So does this count as a "user is responsible" type
> of access? Because cons cells are individual, and a setf of get can
> change the actual structure of the property list, one might wind up
> with two of the same indicator on the plist, or, worse yet,
> half-a-property [in the form (<prop> <val> <val> <prop> <val> ...)].
> What does sbcl do in this situation?

I think we covered this in the other thread, a few weeks back :-) The
user would definitely be responsible. Even if GETF/REMF/(SETF GETF)
were made thread-safe, any code written to access a plist using lower
level operations would not be. So all users of the same plist must
co-operate in any case.

> To re-paste a phrase from your last paragraph:
>
> > has no control over, SBCL will obviously need to ensure that access to
> > them is safe.
>
> Again, this is agreed, and Allegro CL will also do the same. But in
> the meanwhile, while those things are being done, we will not claim to
> have already done this, to have an smp lisp.

As far as I am concerned, SBCL in its current state is perfectly
usable for many kinds of multi-threaded programs.

A couple of years ago I would've hesitated to run any services of my
own on a threaded SBCL, let alone recommend somebody else do
that. Since then things improved, as they tend to do. Had that rough
experimental thread support not existed, that progress would not have
been made. It is an inherent part of SBCL's open development model
that some features get released as experimental, and slowly mature.

> The point is that you've chosen to put the smp label onto sbcl,
> knowing that there are obvious bugs that are going to affect your
> users. Perhaps this is acceptable to your users; after all, they do
> have the sources, and can fix these bugs themselves.

I'm not aware of any obvious bugs that are going to affect our users.
I know of some obscure ones, that are unlikely to affect them, but the
obvious ones tend to get fixed...

As for acceptability... Were I a user who needs real multithreading, I
would definitely prefer to have a 98% solution now rather than
possibly having a 99% solution two years from now. As an implementor I
definitely prefer to give out the 98% solution, since it makes it much
more likely that the 99% solution will be achieved. But I certainly
understand that your concerns as a commercial entity, and of your
users as customers paying a lot of money, would be different than
those of developers and users of open source software.

> > On the other hand, there's thread safety of user-accessible data
> > structures. Here the policy of SBCL is that users are responsible for
> > properly synchronizing their accesses, we just provide the tools for
> > the users (locks, condition variables, a compare-and-swap on many
> > kinds of primitive places). I believe this is the right engineering
> > choice. The majority of data structures are not accessed concurrently
> > from multiple threads. It makes little sense to unconditionally impose
> > the thread safety overhead on all users, in all cases. It's easier for
> > the user to add synchronization, than for him to remove it from
> > built-in data structures.
>
> I think we agree in all of these areas.

Do we? From what you wrote below, it seems that you're advocating full
synchronization of all data structure accesses.

> Now if your own documentation only reflected that philosophy...

I'm really not sure what you're referring to here. Could you expand on
that?

--
Juho Snellman

Tim Bradshaw

unread,
Dec 15, 2007, 9:41:52 AM12/15/07
to
On Dec 15, 12:22 am, Juho Snellman <jsn...@iki.fi> wrote:

>
> Do we? From what you wrote below, it seems that you're advocating full
> synchronization of all data structure accesses.

Interestingly, that's more-or-less the approach that Java tried
initially. They backed away from it a long time ago though, as it
has a significant performance hit with rather little gain (since user
code tends to do things like "delete x from hashtable a then insert it
into queue b", and it helps not a jot that the deletion and insertion
are atomic if you need the *pair* to be atomic). So I'm sure that's
not what Franz are doing, as they'll be aware of the Java story.

Tim Bradshaw

unread,
Dec 15, 2007, 9:48:59 AM12/15/07
to
On Dec 14, 5:56 pm, "Mike G." <michael.graf...@gmail.com> wrote:

>
> If I could have purchased a 64-core 68k for less money, I would have
> -- I'm more interested in the number of simultaneous execution units
> than the power of the individual units.

It's probably not less money, though it may not be that much more, but
you can get a Niagara based box which will look to you as if it has 32
cores (it only actually has 8 but I am not sure any user code could
tell easily). In fact you can get Niagara II systems (badged as
UltraSPARC T2) which will look like 64 core, but they're still
expensive I think.

Mike G.

unread,
Dec 15, 2007, 1:03:40 PM12/15/07
to

Hmm. How is SBCL's support for Solaris / Niagara? This is something
I'd consider. I've always liked Sun hardware. Solaris - not so much -
but that is just my bias after having worked with the GNU tool chain
for so long.

At the moment, I'm set on an 8-core AMD/Intel solution w/ Linux so
that I can virtualize w/ VMware. At some point, I want to get Windows/
SBCL hooked into my distributed system. Maybe MacOSX too. But a 32-
core Sun is so juicy, that it could convince me to settle for an Intel
quad Core 2.

-M

Maciej Katafiasz

unread,
Dec 15, 2007, 1:49:02 PM12/15/07
to
Den Sat, 15 Dec 2007 10:03:40 -0800 skrev Mike G.:

>> It's probably not less money, though it may not be that much more, but
>> you can get a Niagara based box which will look to you as if it has 32
>> cores (it only actually has 8 but I am not sure any user code could
>> tell easily). In fact you can get Niagara II systems (badged as
>> UltraSPARC T2) which will look like 64 core, but they're still
>> expensive I think.
>
> Hmm. How is SBCL's support for Solaris / Niagara? This is something I'd
> consider. I've always liked Sun hardware. Solaris - not so much - but
> that is just my bias after having worked with the GNU tool chain for so
> long.

Ubuntu officially supports Niagara (at least T1), so you can very well
have Linux on it. Though SBCL support looks less rosy, only 0.9.17 listed
for Linux/SPARC, and I have no idea about threading support.

Cheers,
Maciej

Mike G.

unread,
Dec 15, 2007, 11:39:44 PM12/15/07
to

Yeah. I don't know about Linux on Niagara though. I've run Linux on a
lot of platforms over the years (x86, alpha, m68k, sparc and arm) and
my general impression has been that there are more bugs in the non-x86
landscape. This isn't to say that the non-x86 distros don't work well,
or aren't viable for use -- just that, as is probably completely
natural, there are more x86 users -- and therefore more bug reports
and developers for x86. I've sold off or junked most of those systems
by now. I still have the 68k box though - a loaded Mac LCII with a
custom paint job. It's been through hell with me, and faithfully runs
Linux and virtualizes Sys 7 with BasiliskII. It's odd, I have a
strange sense of loyalty to this machine. I never use it anymore, but
rebuilt the power supply after it broke, and added a 68881 FPU and
faster RAM just because I felt that the old beast somehow "deserved"
it. Maybe I'm just mental.

If I had a Niagara box though, I'd probably just run Solaris and deal
with my distaste for the tool chain's syntax differences.

As for SBCL -- I don't expect there to be much development support
there for Niagara. Hell, even if it were available, I'd personally
prefer to see those developers work on x86_64 if it came down to it
(maybe even if I owned a Niagara box too). Strictly speaking, my code
doesn't require threading support. I could just as easily run multiple
SBCL processes on the one box, after a little massaging of the
initialization routines and a shell script to launch them all. No
higher-level code would need to change, and those SBCL processes could
all enter into a distributed environment.

So Niagara T1's are getting cheap, huh? Guess I'll have to read up on
Sun's product line to see if I can find a T1 workstation model. Then
I'll scan Ebay until satisfied.

Duane Rettig

unread,
Dec 16, 2007, 12:10:24 AM12/16/07
to
Juho Snellman <jsn...@iki.fi> writes:

> Duane Rettig <du...@franz.com> writes:
>> Juho Snellman <jsn...@iki.fi> writes:
>>
>> > Rainer Joswig <jos...@lisp.de> writes:
>> >> SBCL is not thread safe. OpenMCL is probably not thread safe, too?
>> >> What about the others? Which Common Lisp implementation
>> >> is thread safe (given their model of threads)?
>> >
>> > I don't really understand why Franz feel the need to imply on their
>> > web page that SBCL does not care about thread safety, or to write
>> > obvious untruths about the technical implementation aspects.
>> > Commercial pressures, I guess.
>>
>> What "untruths" are you talking about? We're certainly not interested
>> in telling untruths, let alone obvious ones, so if you care to
>> elaborate on the specific untruths told in our faq entry, we'll
>> certainly change them.
>
> I object to the claim "as of December 2007 they have big locks around
> the compiler, garbage collector and foreign loader, with no other
> locking to prevent crashes when global resources are modified in
> separate threads". It is true that there are big locks around those
> areas.

Well, the intention of the statement was that there were no such
protections in the compiler, etc, which I believe you have confirmed.
I can see how that statement could be taken to mean that we thought
you had nothing else, _anywhere_, so we'll reword it.

> It is not true that those big locks are the only steps taken to
> ensure consistency of global resources. The other steps include less
> coarse locks, and use of lockless algorithms.

Clearly, from your documentation elsewhere, this is obvious.

> The suggestion that the
> totality of thread safety work in SBCL has consisted of putting in
> three big locks is, frankly, insulting.

You misunderstand. You left off the first part of that same statement:
"What SBCL has done is #1 above with a little of #2..." Clearly we don't
think that you've done only big locks.

> Nor do I appreciate the suggestion that SBCL is not taking thread
> safety seriously: "Also, we believe that when they get serious about
> solving the problems in #2 ...".

Interesting. You get angry when we say you are not serious about thread
safety. However, this statement of ours comes immediately after the
paragraph you have just confirmed:

>> The most critical statements we make about sbcl's implementation come
>> directly from the sbcl documentation itself:
>>
>> http://www.sbcl.org/manual/Implementation-_0028Linux-x86_0029.html#Implementation-_0028Linux-x86_0029
>>
>> "Large amounts of the SBCL library have not been inspected for
>> thread-safety. Some of the obviously unsafe areas have large locks
>> around them, so compilation and fasl loading, for example, cannot be
>> parallelized. Work is ongoing in this area."
>
> This, on the other hand, is essentially true (the value of "large" is
> smaller than when the text was originally written, but it's still
> true). I have absolutely no problem with that quote, feel free to
> leave it in.

I'm not sure when you said 'the value of "large" is smaller' whether
you meant that the "large amounts", or "large locks" are getting
smaller. However, even though you mention that the "large" is getting
smaller, you tell us we can leave this statement in. And so, with
that in mind, let me replay the first sentence of this acceptable
paragraph back to you:

"Large amounts of the SBCL library have not been inspected for
thread-safety."

Do you really want us to keep that sentence in, or do you really not
understand why we are assuming that you don't take thread-safety
seriously? When _do_ you intend to finish inspecting your libraries
for thread safety?

>> > There are many kinds of thread safety. For example there are a lot
>> > of internal caches in SBCL, which the user has no direct control
>> > over. These caches might be used for things like type checks, method
>> > dispatch, etc. Since these are internal implementation resources
>> > that the user has no control over, SBCL will obviously need to
>> > ensure that access to them is safe. (It would, after all, be
>> > completely unreasonable to require a user to add locking around
>> > every call to SUBTYPEP, or around every possible method dispatch).
>>
>> Yes, absolutely. So does sbcl do this? What do you do, for example,
>> to ensure that
>>
>> (defvar *x* 0)
>>
>> ... (incf *x*) ...
>>
>> which has never been bound in a lambda or let form yields an
>> increasing value for *x* (which of course wouldn't happen if global
>> symbol accesses aren't thread-safe, due to some other thread trying
>> the same increment but being scheduled less often)?
>
> You're using a definition of thread safe here that I can't agree with.
> Access to global variables is thread safe in SBCL. That does in no way
> imply that incrementing a global variable would, or should, be atomic.

It certainly should be possible to increment a global variable
atomically, without the user having to figure out some low-level
locking mechanism.

> I'm also a bit puzzled, since I was just describing the kinds of
> global resources that *must* be protected by the implementation, since
> it would be unreasonable or even completely impossible for the user to
> take care of the synchronization. You say "Yes, absolutely.", and then
> give a canonical example of the opposite!
>
> A global variable is not a resource that the user has no control
> over. There are some aspects of special variables, for example
> ensuring that the allocation of thread-local storage for them is safe,
> that the user can't control. Incrementing a variable isn't one of
> those situations. If you truly think that it should be an atomic
> operation for an implementation to qualify as thread safe, it's
> probably best to agree to disagree.

No, thread-local storage is obviously thread-safe by its very nature.
I'm only talking about the value of a non-lambda-bound variable - the
global value that is visible to all threads.

> (It might make sense to include a function for doing an atomic add on
> a symbol-value with the implementation, but it's not something I would
> expect to see happen by default.)

Correct. We will be providing constructs like incf-atomic, as well as
other accessors, rather than forcing the user to use lower level
locks and semaphores. My concern is the lack of any such minimal
support for basic lisp operations in sbcl.
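
To make the shape of such a construct concrete, here is one way an
atomic increment on a global variable could be sketched on top of a
compare-and-swap primitive. This is a sketch only: SBCL's
SB-EXT:COMPARE-AND-SWAP is used as the example primitive (since one was
mentioned earlier in this thread), and the name
ATOMIC-INCF-SYMBOL-VALUE is invented here, not part of any shipping
lisp:

```lisp
;; Sketch only: assumes an implementation (here SBCL) whose
;; compare-and-swap supports SYMBOL-VALUE places.  The loop re-reads
;; the current value and retries until no other thread has modified it
;; between the read and the swap.
(defvar *x* 0)

(defun atomic-incf-symbol-value (symbol &optional (delta 1))
  "Atomically add DELTA to the global value of SYMBOL; return the new value."
  (loop
    (let* ((old (symbol-value symbol))
           (new (+ old delta)))
      ;; COMPARE-AND-SWAP returns the previous value of the place; if
      ;; that is still OLD, our NEW was installed and we are done.
      (when (eql old (sb-ext:compare-and-swap (symbol-value symbol) old new))
        (return new)))))
```

With a construct like this, calling (atomic-incf-symbol-value '*x*)
from many threads yields a strictly increasing counter without the user
ever touching a lock.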

>> I presume from your description of bound specials (i.e. specials bound
>> in a lambda or let form) that the accesses will then be completely
>> disjoint and thus will not clash with each other (the implication is
>> drawn because of the statement that bindings are local to the
>> thread). That is a Good Thing.
>
> A threaded Lisp implementation where bound specials aren't thread
> local would probably be completely unusable.

Agreed.

>> What about get and (setf get)? The property list is a global resource
>> that is certainly not updated very often, but often enough that it
>> could pose a problem in a high-action system with many threads
>> updating plists. So does this count as a "user is responsible" type
>> of access? Because cons cells are individual, and a setf of get can
>> change the actual structure of the property list, one might wind up
>> with two of the same indicator on the plist, or, worse yet,
>> half-a-property [in the form (<prop> <val> <val> <prop> <val> ...)]
>> what does sbcl do for this situation?
>
> I think we covered this in the other thread, a few weeks back :-)

I looked over that thread, and I can't find any reason why you would
have thought I agreed with you (i.e. that plists should not be
thread-safe) other than a statement that I made that they _aren't_
thread-safe currently in Allegro CL. That research I did was a
revelation to me, and certainly not what we have in mind. I also gave
some arguments against removing the interrupt-check that would make
our current virtual and os threads implementations thread-safe
(specifically, I said that there is a danger that a circular list
could cause an infinite lock-out of other threads). But you should
not make the assumption that that argument applies to an smp lisp.
There, the design considerations are different, and it is possible to
do the critical section of the (setf get) without any danger of
lockout.

> The user would definitely be responsible. Even if GETF/REMF/(SETF GETF)
> were made thread-safe, any code written to access a plist using lower
> level operations would not be. So all users of the same plist must
> co-operate in any case.

We differ here. It is simply a design decision. We believe that such
operations shouldn't have to be left to the user to encode - any
mistakes on their part would lead to potentially disastrous results.
It also requires that all users of the plist in question agree to
apply the same locking mechanism, a serious problem for code reuse.
Having the lisp fail because of a mangled property list seems like
something worth avoiding.
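
For illustration, the kind of user-side cooperation being debated here
might look like the following sketch. SBCL's sb-thread mutex API is
used as the example locking mechanism; the lock and function names are
invented:

```lisp
;; Sketch only: every piece of code that touches these plists must
;; agree to go through the same lock -- which is exactly the code-reuse
;; problem noted above, since any library that calls (SETF GET)
;; directly bypasses the protocol.
(defvar *plist-lock* (sb-thread:make-mutex :name "plist lock"))

(defun safe-put (symbol indicator value)
  (sb-thread:with-mutex (*plist-lock*)
    (setf (get symbol indicator) value)))

(defun safe-get (symbol indicator &optional default)
  (sb-thread:with-mutex (*plist-lock*)
    (get symbol indicator default)))
```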

>> To re-paste a phrase from your last paragraph:
>>
>> > has no control over, SBCL will obviously need to ensure that access to
>> > them is safe.
>>
>> Again, this is agreed, and Allegro CL will also do the same. But in
>> the meanwhile, while those things are being done, we will not claim to
>> have already done this, to have an smp lisp.
>
> As far as I am concerned, SBCL in its current state is perfectly
> usable for many kinds of multi-threaded programs.

Perhaps so. A system might be usable for delivering a stable, static,
bug-free application and still present serious problems for
development, testing and evolution.

> A couple of years ago I would've hesitated to run any services of my
> own on a threaded SBCL, let alone recommend somebody else do
> that. Since then things improved, as they tend to do. Had that rough
> experimental thread support not existed, that progress would not have
> been made. It is an inherent part of SBCL's open development model
> that some features get released as experimental, and slowly mature.

Yes, understood. However, it is only very recently that I've seen
notations of bug fixes in areas of sbcl that seem fundamental to me,
like type errors (Jan), packages (June), hash-tables (Nov). What's
next? Since your documentation says that large amounts of your
library "have not been inspected for thread-safety", it is impossible
to tell. Such inspection is fundamental for our own effort.

>> The point is that you've chosen to put the smp label onto sbcl,
>> knowing that there are obvious bugs that are going to affect your
>> users. Perhaps this is acceptable to your users; after all, they do
>> have the sources, and can fix these bugs themselves.
>
> I'm not aware of any obvious bugs that are going to affect our users.
> I know of some obscure ones, that are unlikely to affect them, but the
> obvious ones tend to get fixed...

It's the ones you don't know about that are gonna get ya...

> As for acceptability... Were I a user who needs real multithreading, I
> would definitely prefer to have a 98% solution now rather than
> possibly having a 99% solution two years from now. As an implementor I
> definitely prefer to give out the 98% solution, since it makes it much
> more likely that the 99% solution will be achieved. But I certainly
> understand that your concerns as a commercial entity, and of your
> users as customers paying a lot of money, would be different than
> those of developers and users of open source software.

I think we're just looking at this with different perspectives.
You document that your lisp has had little experience in the area of
multiprocessing (it also only works on certain platforms) and I can
see why you believe that you have a 98% solution. We'll just have to
look at this thread a year from now and see how many bugs you've
posted fixes against your threading system :-) I'm reminded of the
software developer whose estimate of the time left on the project was
always "two weeks". It didn't matter whether it was a six-month project
or even a one-week project; the estimate was always "two weeks". But
the sad thing was that in his mind he really did believe the estimate
was accurate, despite sanity checks that showed otherwise.

Now, all of this talk about what sbcl does or doesn't do might lead
you to think that we're out to bash sbcl. This is not the case at
all. The only reason we mention sbcl in our "marketing material" is
because sbcl's own "marketing material" (i.e. documentation and
word-of-mouth that sbcl has what users want in the way of smp) is
causing people to come to us and ask us the questions on that faq
list. And we're simply answering them as honestly as we can. But
it's not about sbcl; it's about Allegro CL:

In our view, we can call Allegro CL an SMP implementation when it has
the following characteristics:

1. multiple cpus, if available, can be executing lisp code
simultaneously.

2. Legal lisp code running in multiple threads will not corrupt the
heap, even if the code itself fails due to its own synchronization
bugs.

3. We provide an adequate set of macros and functions for
synchronizing user code.

4. No major lisp subsystems are protected by gross locks.

Once these are met, we will be comfortable calling our lisp an smp
lisp.

>> > On the other hand, there's thread safety of user-accessible data
>> > structures. Here the policy of SBCL is that users are responsible for
>> > properly synchronizing their accesses, we just provide the tools for
>> > the users (locks, condition variables, a compare-and-swap on many
>> > kinds of primitive places). I believe this is the right engineering
>> > choice. The majority of data structures are not accessed concurrently
>> > from multiple threads. It makes little sense to unconditionally impose
>> > the thread safety overhead on all users, in all cases. It's easier for
>> > the user to add synchronization, than for him to remove it from
>> > built-in data structures.
>>
>> I think we agree in all of these areas.
>
> Do we? From what you wrote below, it seems that you're advocating full
> synchronization of all data structure accesses.

No, just the ones that can mess up a system out from under the user.

>> Now if your own documentation only reflected that philosophy...
>
> I'm really not sure what you're referring to here. Could you expand on
> that?

I think I did, early in this article.

alex.re...@gmail.com

unread,
Dec 16, 2007, 12:35:01 AM12/16/07
to
On Dec 14, 11:31 am, Raffael Cavallaro <raffaelcavallaro@pas-d'espam-
s'il-vous-plait-mac.com> wrote:

happy to. Say you are using on massively parallel machine - this is
how things started - such as a Connection Machine (SIMD architecture).
Assume you have a Go board of 19 x 19. Each cell of the board would be
mapped to a CPU (there were up to 64k CPUs on the CM2). Each CPU would
compute a diffusion equation accessing its 4 or 8 neighbors. This is
pretty simple. In the case of the diffusion coefficient D of 0.25 all
you do is to read the values of your neighbors and compute the average.
This will be your new value. Now back to reality. We do not have 64k
CPUs but just 8 cores. This means one needs to map the game board onto
the 8 cores. Think of drawing a n * n matrix and then slicing that up
into 8 roughly equal pieces. To solve a game such as GO one probably
needs to have a couple of layers. Example layers: in Soccer we used 3
layers. One for the ball, one for the players of team A and one for the
players of team B. Now, I really am no GO expert. Take everything here
with several grains of salt. With one layer you would have the black
pieces have a value of, say, 1.0 and the white ones 0.0. In the next
layer the other way around. I predict that there would be a need for a
couple of more layers to be able to have patterns relevant to GO be
able to emerge.
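
A single-layer version of that per-cell update might be sketched like
this (plain Common Lisp, sequential; the 19x19 board and the D = 0.25
averaging follow the description above, with edge cells left alone for
brevity):

```lisp
;; Sketch only: one diffusion step over an n x n board.  Each interior
;; cell's new value is the average of its four neighbors (D = 0.25).
;; On an 8-core machine, each thread would run these loops over its own
;; slice of the row range instead of the whole board.
(defun diffusion-step (board)
  (let* ((n (array-dimension board 0))
         (new (make-array (list n n) :element-type 'single-float
                          :initial-element 0.0)))
    (loop for i from 1 below (1- n) do
          (loop for j from 1 below (1- n) do
                (setf (aref new i j)
                      (* 0.25 (+ (aref board (1- i) j)
                                 (aref board (1+ i) j)
                                 (aref board i (1- j))
                                 (aref board i (1+ j)))))))
    new))
```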

The static mapping means that the game board + layers are fixed.
That is, on an 8 core machine you would only once, at the beginning of
the game, create the 8 threads that you keep for the rest of the game.
Each thread is assigned a fixed sub matrix and/or layer of cells
computing diffusion functions. The instructions/functions are all the
same. Only the data is different. The threads need to be synchronized,
and they need to be able to access shared memory. The game board matrix
* layers would be one shared chunk of memory. Each game board matrix
is a diffusion map. The only potentially critical sections regarding
shared data are at the edges of the sub matrices. There threads may
have to read values produced by other threads.

In essence one wants to use a process that works with data
parallelism. The Connection machine was good at that. A multi core
machine is a MIMD architecture, which is more powerful than SIMD, but
ironically to implement data parallelism one does have to implement
some synchronization. This kind of synchronization is unavoidable
because all kinds of other applications will be running on that
machine stealing some CPU cycles from some of the cores here and
there.

is this helping at all?

alex

Daniel Weinreb

unread,
Dec 16, 2007, 7:19:20 AM12/16/07
to

I agree very much and I think this is the heart of the issue.
When you take Common Lisp and add concurrency, you have to
say all kinds of things about where you have and have not put
concurrency control. Franz appears to be taking an extremely
stringent approach, saying that everything in the Common Lisp
manual should be made thread-safe. I agree with their assessment
that it's very hard (probably impossible) to do this without
unacceptable performance overhead.

But for exactly the reason you (Tim) say, there's another
approach, in which you carefully say which things are and are not
thread-safe, since only the application knows where the atomicity
boundaries are, and since you don't want to take that performance
hit.

I am not comfortable with Rainer's classification of any Lisp
implementation as being either "no thread safety" or "thread
safety", for this reason. Java falls into neither of these
two classes. I agree with Alex that there is no such thing
as "absolute" thread safety. Juho explains that for
user-accessible data structures, thread-safety is the
responsibility of the application programmer. I agree
that this is the right engineering choice, just as it is
in Java.

Juho says "As far as I am concerned, SBCL in its current
state is perfectly usable for many kinds of multi-threaded
programs." That is clearly true. The large and complex
airline reservation system that we are building at ITA
Software uses multi-threading heavily. We are currently
using SBCL. It works. We have left our hard-pounding
performance-test scenario running for 24 hours with no
errors, using multi-threading heavily.

(In all fairness: it did take us a lot of work to get
all the threading-related bugs out of our application.
Concurrent programming is hard. But this would have
been true no matter what Lisp implementation we had
used; it would have been just as true had we coded in
Java, too.)

So perhaps Franz are merely setting too high a bar for
themselves, by setting requirements that are simply too
hard for anyone to meet. If Franz took the above attitude
with Allegro CL, perhaps it would be a lot easier to
implement SMP support. I recommend that they consider this.

Daniel Weinreb

unread,
Dec 16, 2007, 7:22:38 AM12/16/07
to

This all makes perfect sense. I'm not sure that this should be
considered impractically difficult, though. My understanding
is that there are plenty of physics "codes" (as those people
refer to programs, in their jargon) that do very much this kind
of thing. Yes, programming at the "edges" where the regions
meet is tricky; you just deal with it.

Daniel Weinreb

unread,
Dec 16, 2007, 7:26:15 AM12/16/07
to

I can report that SBCL on Linux on x86 with eight cores does work
fine. So does Clozure Common Lisp (OpenMCL). (We have not tested
anything else at this point.) This is for a very large production
application that has been stringently tested by a multi-person team
who do nothing else, full-time.

Edi Weitz

unread,
Dec 16, 2007, 10:41:03 AM12/16/07
to
On Sat, 15 Dec 2007 21:10:24 -0800, Duane Rettig <du...@franz.com> wrote:

> We will be providing constructs like incf-atomic, as well as other
> accessors, rather than forcing the user to use lower level locks and
> semaphores. My concern is the lack of any such minimal support for
> basic lisp operations in sbcl.

Let me avail myself of the opportunity of mentioning that I think this
is very important. The Lisps I've worked with that offered some kind
of MP functionality (not necessarily SMP) were all lacking in
providing and/or documenting safe atomic operations. Nikodemus
Siivola hinted that something like that would soon be available in
SBCL during his talk at the European Lisp Workshop in Berlin this
summer, but I haven't followed SBCL closely enough in the last months
to confirm that it's actually there already.

I'm probably repeating myself, but I still think that all the major CL
implementations right now definitely need lots of improvement both in
the features and in the documentation department.

Edi.

--

Lisp is not dead, it just smells funny.

Real email: (replace (subseq "spam...@agharta.de" 5) "edi")

Maciej Katafiasz

unread,
Dec 16, 2007, 3:08:06 PM12/16/07
to
On Sat, 15 Dec 2007 20:39:44 -0800, Mike G. wrote:

>> > Hmm. How is SBCL's support for Solaris / Niagara? This is something
>> > I'd consider. I've always liked Sun hardware. Solaris - not so much -
>> > but that is just my bias after having worked with the GNU tool chain
>> > for so long.
>>
>> Ubuntu officially supports Niagara (at least T1), so you can very well
>> have Linux on it. Though SBCL support looks less rosy, only 0.9.17
>> listed for Linux/SPARC, and I have no idea about threading support.
>

> Yeah. I don't know about Linux on Niagara though.

I don't understand that sentence. Ubuntu _is_ Linux (a distro of, to be
precise). They're also officially supporting T1 (which means you can buy
commercial support for it, so it's not likely to be just string and tape).

Cheers,
Maciej

Tim Bradshaw

unread,
Dec 16, 2007, 3:44:56 PM12/16/07
to
On Dec 15, 6:03 pm, "Mike G." <michael.graf...@gmail.com> wrote:

>
> Hmm. How is SBCL's support for Solaris / Niagara? This is something
> I'd consider. I've always liked Sun hardware. Solaris - not so much -
> but that is just my bias after having worked with the GNU tool chain

Choice of toolchain is pretty much down to setting PATH nowadays.

> But a 32-
> core Sun is so juicy, that it could convince me to settle for an Intel
> quad Core 2.

Remember it's not 32 cores, it's 8 cores, with 4 HW threads per core
(or, for T2 8x8). It's hard to tell from the application perspective
that it's not 32/64 cores. T1s have rotten float performance if you
care about that (a single FPU for all 8 cores), T2s have one per core
(and other cool stuff: better crypto, 2 10G NICs on the die, etc.).

Whether these things are cheap depends on your perspective: I
remember the T1000 (which has inadequate redundancy for most real use)
being 5k (GBP I guess) or something in a reasonable config. I guess
systems might come up on ebay in due course, but they're pretty
current.

Juho Snellman

unread,
Dec 16, 2007, 4:28:29 PM12/16/07
to
Duane Rettig <du...@franz.com> writes:
> Juho Snellman <jsn...@iki.fi> writes:
> > I object to the claim "as of December 2007 they have big locks around
> > the compiler, garbage collector and foreign loader, with no other
> > locking to prevent crashes when global resources are modified in
> > separate threads". It is true that there are big locks around those
> > areas.
>
> Well, the intention of the statement was that there were no such
> protections in the compiler, etc, which I believe you have confirmed.
> I can see how that statement could be taken to mean that we thought
> you had nothing else, _anywhere_, so we'll reword it.

How could "... with no other locking to prevent crashes ..." possibly
be read in a way other than, well, there being no other locking around
to prevent crashes?



> > The suggestion that the
> > totality of thread safety work in SBCL has consisted of putting in
> > three big locks is, frankly, insulting.
>
> You misunderstand. You left off the first part of that same statement:
> "What SBCL has done is #1 above with a little of #2..." Clearly we don't
> think that you've done only big locks.

It was not at all clear to me from what you'd written.

> > Nor do I appreciate the suggestion that SBCL is not taking thread
> > safety seriously: "Also, we believe that when they get serious about
> > solving the problems in #2 ...".
>
> Interesting. You get angry when we say you are not serious about thread
> safety. However, this statement of ours comes immediately after the
> paragraph you have just confirmed:

Not angry. Perhaps disappointed would be a better description. You're
making untrue and unfounded claims about the motivations of other Lisp
implementors in your official documentation! Is that really the kind
of thing you want to see in the Lisp community?

It's a bit as if I said something like: "Once the developers at Franz
finally get serious about performance, I think they'll find that
writing a high performance CL compiler is much harder than they
thought". I'm guessing you would object to that statement. Yet I
could probably show examples of other Lisp implementations being
faster at something, or of the performance of Allegro improving between
versions. According to your reasoning that means that you aren't serious
about performance! If you were, you would never have released a
version of Allegro that did not have optimal performance.

I don't see much difference between that, and you apparently thinking
that having a real multithreaded Lisp but with access to the compiler
currently serialized is displaying a cavalier attitude to thread
safety.

> Do you really want us to keep that sentence in, or do you really not
> understand why we are assuming that you don't take thread-safety
> seriously?

No, I really do not understand.

--
Juho Snellman

Raffael Cavallaro

unread,
Dec 16, 2007, 5:05:54 PM12/16/07
to
On 2007-12-16 00:35:01 -0500, alex.re...@gmail.com said:

> The statical mapping means that the game board + layers are fixed.
> That is, on a 8 core machine you would only once, at the beginning of
> the game, create the 8 threads that you keep for the rest of the game.
> Each tread is assigned a fixed sub matrix and/or layer of cells
> computing diffusion functions.

[snip]

> is this helping at all?

Yes, much clearer to me now.

Thanks for taking the time to lay this out in detail.

regards,

Ralph

Don Geddis

unread,
Dec 17, 2007, 12:17:01 AM12/17/07
to
Juho Snellman <jsn...@iki.fi> wrote on 16 Dec 2007 23:2:

> Duane Rettig <du...@franz.com> writes:
>> Do you really want us to keep that sentence in, or do you really not
>> understand why we are assuming that you don't take thread-safety
>> seriously?
>
> No, I really do not understand.

Perhaps because you pruned a little too much from Duane's quote?

To restore the sentence immediately before and immediately after, Duane
originally wrote:

"Large amounts of the SBCL library have not been inspected for
thread-safety."

Do you really want us to keep that sentence in, or do you really not
understand why we are assuming that you don't take thread-safety
seriously? When _do_ you intend to finish inspecting your libraries
for thread safety?

Clearly, one of the most critical points in Duane's estimation is that SBCL
has significant critical libraries which have not been even checked (much
less corrected) for thread-safety.

It seems somewhat odd for an implementation which hasn't even
checked significant code for thread safety to be calling itself
currently thread-safe, and to be offended by a statement that they
don't (yet) take thread-safety seriously.

Is Duane's inference really so difficult to understand? Or instead, could
it be that the antecedent is false? Is SBCL's own documentation, which
Duane quoted above, obsolete at this point?

Because, given the statement in SBCL's documentation -- if true -- then
Duane's conclusion about SBCL not "taking thread safety seriously" [yet]
doesn't seem outlandish.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
When I told the people of Northern Ireland that I was an atheist, a woman in
the audience stood up and said, "Yes, but is it the God of the Catholics or the
God of the Protestants in whom you don't believe?" -- Quentin Crisp

Juho Snellman

unread,
Dec 17, 2007, 1:43:34 AM12/17/07
to
Don Geddis <d...@geddis.org> writes:
> Juho Snellman <jsn...@iki.fi> wrote on 16 Dec 2007 23:2:
> > Duane Rettig <du...@franz.com> writes:
> >> Do you really want us to keep that sentence in, or do you really not
> >> understand why we are assuming that you don't take thread-safety
> >> seriously?
> >
> > No, I really do not understand.
>
> Perhaps because you pruned a little too much from Duane's quote?

No, the surrounding context matters little. On the other hand, I
thought that the parts of my post that you snipped out explained my
position clearly. My communication skills must be lacking somehow,
considering how badly my points are being missed in this thread, but
I'll try one more time:

* If we had statistics about the amount of work spent on various
SBCL subsystems over the last couple of years, I'm pretty sure
that thread and interrupt safety would be the biggest area of
contribution. Not something you'd expect if it were not taken
seriously.

* It makes little sense to claim that something must be 100%
perfect, or the developers aren't serious about it. Resources will
always be limited, so there is no way for that perfection to be
achieved in all areas. See the example that you snipped. Thus the
most important parts will be audited and fixed first.

To give an example, it's vitally important that generic function
dispatch is thread safe. It's less important in practice that
concurrent CLOS operations and modification of the class hierarchy
be safe. It's not even clear what the correct semantics would be
in the latter case. So obviously the former case will have
received a lot of attention in SBCL, while the latter has been
mostly ignored.

The parts of SBCL which *are* expected to be threadsafe are
sufficient to successfully run substantial heavily multithreaded
applications.

* The goals of SBCL when it comes to thread safety are very
different from what Franz is apparently hoping to achieve. I
personally think their approach is the wrong one, Duane seems to
think that SBCL's is. That hardly qualifies as a reason to claim
that either party is not taking thread safety seriously.

Is that clearer?

--
Juho Snellman

Ken Tilton

unread,
Dec 17, 2007, 7:39:53 AM12/17/07
to
Words have power, words lack precision.


Juho Snellman wrote:
> I don't really understand why Franz feel the need to imply on their
> web page that SBCL does not care about thread safety,..

"[implying] SBCL does not care"

Shocking! Franz said that! This I gotta see.

Juho Snellman wrote:
> Nor do I appreciate the suggestion that SBCL is not taking thread
> safety seriously: "Also, we believe that when they get serious about
> solving the problems in #2 ...".

Oops. Here Juho escalates "get serious about" to "not taking thread
safety seriously". But "get serious about" can mean stop being
capricious, negligent, cavalier, and other fun pejoratives, or it can
simply mean "when they focus on this problem", the idea being that some
things will not reveal themselves until you actually engage the enemy.
The second more benign interpretation is supported by a fuller quote:

http://www.franz.com/support/faq/index.lhtml#s13q4:


> Also, we believe that when they get serious about solving the
> problems in #2, they will find there are significant performance
> issues that need to be solved.

The image this implication of unseriousness thus conveys is simply that
over on this workbench we have #2 awaiting attention while SBCL hackers
are all gathered around other perfectly worthwhile workbenches. The
image is not a bunch of yobs knocking back pints down at the corner pub
grabbing the beer wench's ass and roaring in laughter about #2.

Duane Rettig wrote:
> The most critical statements we make about sbcl's implementation come
> directly from the sbcl documentation itself:
>
>
http://www.sbcl.org/manual/Implementation-_0028Linux-x86_0029.html#Implementation-_0028Linux-x86_0029
>
> "Large amounts of the SBCL library have not been inspected for
> thread-safety. Some of the obviously unsafe areas have large locks
> around them, so compilation and fasl loading, for example, cannot be
> parallelized. Work is ongoing in this area."
>

Ah, Franz quotes SBCL: "work is ongoing". Sounds back-burnerish to this
reader, sufficiently so that the "significant performance issues" Franz
thinks SBCL will encounter once they lock horns with the challenge might
reasonably not have been flushed.

Ooops, I am in trouble:

Duane Rettig wrote:
> Interesting. You get angry when we say you are not serious about
> thread safety.

To my dismay I see Duane acquiescing in Juho's interpretation of "get
serious about", accepting and reiterating it: "when we say you are not
serious about thread safety." So either I am all wet and Franz does want
to paint SBCLers as blithely indifferent on this issue, or Juho's
highly-charged language has gotten Duane's back up a little. Duane being
the Pole Star of c.l.l cool, this would bode ill for negotiation by
incendiary.

Duane continued:


> However, this statement of ours comes immediately after the
> paragraph you have just confirmed:
>
>>>The most critical statements we make about sbcl's implementation come
>>>directly from the sbcl documentation itself:
>>>
>>>http://www.sbcl.org/manual/Implementation-_0028Linux-x86_0029.html#Implementation-_0028Linux-x86_0029
>>>
>>>"Large amounts of the SBCL library have not been inspected for
>>>thread-safety. Some of the obviously unsafe areas have large locks
>>>around them, so compilation and fasl loading, for example, cannot be
>>>parallelized. Work is ongoing in this area."

Oh, good, peace at hand, Duane again including the full quote from SBCL
doc indicating "work is ongoing", and... hang on...

> Do you really want us to keep that sentence in, or do you really not
> understand why we are assuming that you don't take thread-safety
> seriously?

Doh! Now Duane fully embraces the ass-grabbing, beer-chugging, guffawing
characterization, but before the yobbos can even think to call the
chieftains together for a council of war, he turns the downward
spiralling debate to a simple question of fact:

> When _do_ you intend to finish inspecting your libraries
> for thread safety?

Super. All this battling over the intentions of the developers of SBCL
as expressed by notoriously imprecise natural language can be left moot
and the overall issue can be reduced to a date comparison. Along the
way Franz has agreed to one disambiguation of the text, and this whole
seriousness detour can be replaced by an answer to the above question,
which would help comparison shoppers bind a date to "work is ongoing".

Unfortunately, this opportunity to return to the high road has not been
taken up in the first followup I see from Juho, but maybe that will happen.

More likely my dream will come true and the interpretation of "get
serious about" will continue to veer into increasingly pejorative versions.

:)

hth,kxo

--
http://www.theoryyalgebra.com/

"In the morning, hear the Way;
in the evening, die content!"
-- Confucius

Maciej Katafiasz

unread,
Dec 17, 2007, 11:16:08 AM12/17/07
to
On Mon, 17 Dec 2007 07:39:53 -0500, Ken Tilton wrote:

> The image is not a bunch of yobs knocking back pints down at the corner
> pub grabbing the beer wench's ass and roaring in laughter about #2.

Aww man. There goes my motivation to hack SBCL :(. Maybe at least Movitz
guys will be up to the task... though given that Frode is from Norway,
it'd probably mean 8€ for a small beer and bars closing at midnight.
Dang.

Cheers,
Maciej

Ken Tilton

unread,
Dec 17, 2007, 11:28:51 AM12/17/07
to

Are you saying it is time Norway got serious about drinking?

:)

kt

Maciej Katafiasz

unread,
Dec 17, 2007, 12:15:32 PM12/17/07
to
On Mon, 17 Dec 2007 11:28:51 -0500, Ken Tilton wrote:

>> Aww man. There goes my motivation to hack SBCL :(. Maybe at least
>> Movitz guys will be up to the task... though given that Frode is from
>> Norway, it'd probably mean 8€ for a small beer and bars closing at
>> midnight. Dang.
>
> Are you saying it is time Norway got serious about drinking?
>
> :)

Nope. More like they already got serious about drinking, then their mums
got angry and set a curfew and cut their allowance.

Cheers,
Maciej

Paul Wallich

unread,
Dec 17, 2007, 12:49:49 PM12/17/07
to
alex.re...@gmail.com wrote:

> The static mapping means that the game board + layers are fixed.
> That is, on an 8-core machine you would only once, at the beginning of
> the game, create the 8 threads that you keep for the rest of the game.
> Each thread is assigned a fixed sub matrix and/or layer of cells
> computing diffusion functions. The instructions/functions are all the
> same. Only the data is different. The threads should be synchronized
> and they need to be able to access shared memory. The game board matrix
> * layers would be one shared chunk of memory. Each game board matrix
> is a diffusion map. The only potentially critical sections regarding
> shared data are at the edges of the sub matrices. There, threads may
> have to read values produced by other threads.
>
> In essence one wants to use a process that works with data
> parallelism. The Connection Machine was good at that. A multi-core
> machine is a MIMD architecture, which is more powerful than SIMD, but
> ironically to implement data parallelism one does have to implement
> some synchronization. This kind of synchronization is unavoidable
> because all kinds of other applications will be running on that
> machine, stealing some CPU cycles from some of the cores here and
> there.

It's not just a matter of other apps possibly running that will make you
need (fairly regular) synchronization. Once you get past the simplest
algorithms, running "the same program" on different data will lead to
substantially different execution times and hence processor/core/thread
utilization. The original CM design had a bit to stop each processor
when it had finished with its data, iirc, and claimed not to worry about
this because not using every processor all the time was no more of a
problem than not addressing every RAM address in a conventional CPU
exactly equally. When you have a smaller set of hardware threads this
kind of liberality may be less appropriate.

(Or you might confine yourself to diffusion algorithms for which the
running times on different data are distributed in a way that won't make
for big differences in runtime on each core, but finding those
algorithms for any given application may be an Interesting Problem.)

paul
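Alex's layout above can be sketched in code (the thread is about Lisp, but the structure is language-neutral, so here it is in Java; all names and constants are mine, purely illustrative): a fixed set of long-lived worker threads, each owning a band of rows, with a barrier between diffusion steps so each band's edge rows are read only after the neighboring band has finished writing.

```java
// Minimal sketch of fixed-thread data parallelism: WORKERS threads, one
// band of rows each, a CyclicBarrier between diffusion steps. The barrier
// action swaps the read/write grids; CyclicBarrier's memory-consistency
// guarantee makes the swap visible to all workers.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class DiffusionSketch {
    static final int N = 8, STEPS = 4, WORKERS = 2;
    static double[][] cur = new double[N][N];
    static double[][] next = new double[N][N];

    // five-point average with reflective edges; conserves total mass
    static double cell(double[][] g, int i, int j) {
        double up = i > 0 ? g[i - 1][j] : g[i][j];
        double dn = i < N - 1 ? g[i + 1][j] : g[i][j];
        double lf = j > 0 ? g[i][j - 1] : g[i][j];
        double rt = j < N - 1 ? g[i][j + 1] : g[i][j];
        return (g[i][j] + up + dn + lf + rt) / 5.0;
    }

    static double run() {
        cur[N / 2][N / 2] = 1.0;                       // point source
        CyclicBarrier barrier = new CyclicBarrier(WORKERS, () -> {
            double[][] t = cur; cur = next; next = t;  // swap once per step
        });
        Thread[] ts = new Thread[WORKERS];
        int band = N / WORKERS;
        for (int w = 0; w < WORKERS; w++) {
            final int lo = w * band, hi = lo + band;
            ts[w] = new Thread(() -> {
                try {
                    for (int s = 0; s < STEPS; s++) {
                        double[][] src = cur, dst = next;  // stable until barrier
                        for (int i = lo; i < hi; i++)
                            for (int j = 0; j < N; j++)
                                dst[i][j] = cell(src, i, j);
                        barrier.await();  // edge rows wait for the neighbor band
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    throw new RuntimeException(e);
                }
            });
            ts[w].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        double sum = 0;
        for (double[] row : cur) for (double v : row) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        System.out.printf("%.6f%n", run());  // prints 1.000000: mass conserved
    }
}
```

This is exactly the pattern Paul's caveat applies to: if one band's step runs longer than the others, the remaining workers sit at the barrier.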

Don Geddis

Dec 18, 2007, 12:04:30 AM
Juho Snellman <jsn...@iki.fi> wrote on 17 Dec 2007 08:4:
> Don Geddis <d...@geddis.org> writes:
>> Juho Snellman <jsn...@iki.fi> wrote on 16 Dec 2007 23:2:
>> > Duane Rettig <du...@franz.com> writes:
>> >> Do you really want us to keep that sentence in, or do you really not
>> >> understand why we are assuming that you don't take thread-safety
>> >> seriously?
>> >
>> > No, I really do not understand.
>>
>> Perhaps because you pruned a little too much from Duane's quote?
>
> No, the surrounding context matters little.

Really? Because it seems to me that you _again_ snipped the most important
quote, this time from my post -- just like you snipped it last time from
Duane's post.

To repeat (yet again!), the quote, from SBCL's own documentation, was:

Large amounts of the SBCL library have not been inspected for
thread-safety.

This is the part that you have consistently not responded to directly.

And this is the part that suggests a claim of "SBCL doesn't take thread
safety seriously [yet]" is not unreasonable.

> My communication skills must be lacking somehow, considering how badly my
> points are being missed in this thread

I think the problem is that you snip, and don't respond directly to, the
#1 concern of the people you are talking to. But instead merely repeat your
own position, which is related to but not directly responsive to their
primary concern.

> * If we had statistics about the amount of work spent on various
> SBCL subsystems over the last couple of years, I'm pretty sure
> that thread and interrupt safety would be the biggest area of
> contribution. Not something you'd expect if it were not taken
> seriously.

Super. So you've done a lot of work on the topic. But it appears that
significant work remains to be done.

> * It makes little sense to claim that something must be 100%
> perfect, or the developers aren't serious about it.

Strawman. This is not a claim that has been made against SBCL.

The concern, which you've snipped twice now, is that having "large amounts"
of the code base not even "inspected" for thread safety, suggests the
developers aren't (yet) serious about it.

You're again being non-responsive if you interpret this as requiring the
code to be "100% perfect". And then somehow get offended.

> The parts of SBCL which *are* expected to be threadsafe are
> sufficient to successfully run substantial heavily multithreaded
> applications.

That's a much more interesting statement, but hard to reconcile with the
existing claim that large amounts of code haven't even been examined yet.

You're saying that this substantial fraction of the code base, which has
had no thread-safety work done on it, is all so obscure that it isn't
important for (most?) heavily threaded applications?

I suppose that's possible, but it doesn't seem likely.

> * The goals of SBCL when it comes to thread safety are very
> different from what Franz is apparently hoping to achieve.

That's again not at all relevant to my post, which primarily concerned the
important quote from SBCL's documentation:

Large amounts of the SBCL library have not been inspected for
thread-safety.

> Is that clearer?

I don't object to what you wrote. But that was clear before. And didn't
address the points in my post. Not quite so bad as to be completely
irrelevant. But close.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org

Beware the lollipop of mediocrity. Lick it once and you will suck forever.

George Neuner

Dec 18, 2007, 3:43:15 AM
On Mon, 17 Dec 2007 12:49:49 -0500, Paul Wallich <p...@panix.com> wrote:

>The original CM design had a bit to stop each processor
>when it had finished with its data, iirc,

Not exactly.

On the bit-processor CMs (1,2,2a and 200)[*], most instructions were
executed conditionally depending on the state of a control flag.
There were no branching instructions - conditional tests set or
cleared a "run" flag that determined whether following instructions
would execute or be ignored. There were also some unconditional
instructions to deliberately set/clear the CPU flags, change system
configuration, etc.

Flow control was handled entirely by the host computer. CM code was
assembled into straight line "basic" blocks. The host kept track of
where in the call graph the program was and queued CM code blocks as
necessary. All CPUs processed the same instruction stream. When no
more instructions were pending, the CPUs would simply idle.


[*] The CM-5 units were based on Sun's SPARC processors and had full
32-bit instruction sets. The CM-5 also had four vector FPUs per CPU.
Parallel processing on the CM-5 could be done using either vector SIMD
or using MIMD with multiple CPUs.


>and claimed not to worry about this because not using every processor
>all the time was no more of a problem than not addressing every RAM
>address in a conventional CPU exactly equally.

Not using all processors was not a big deal, but code that routinely
depended on just a few processors was. CMs were quite expensive to
operate and there was a lot of interest in keeping as many processors
busy as possible. The CM had a light on each CPU that lit when the
CPU was active (ie. not ignoring instructions) - certain reduction
patterns were inevitable, but if your program too often had large
portions of the machine dark you were likely to take some ribbing from
your fellow programmers. Finding creative ways to make parallel
problems out of normally serial ones was a full contact sport.

There was also no way to share the SIMD models; the whole machine was
dedicated to running a single program. TMI had a program swapping
system that you could use for simple time sharing if you had a Data
Vault (the network connections weren't fast enough to support swapping
data sets from the host).


>When you have a smaller set of hardware threads this
>kind of liberality may be less appropriate.

You certainly don't want lots of cores/CPUs idle while just a few
operate. Of course, they probably won't actually be idle as the OS
will likely give them something to do, but if they aren't executing
your program they aren't helping you.

George
--
for email reply remove "/" from address
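George's run-flag scheme can be simulated in miniature (this is an illustrative model, not CM assembly; every identifier below is invented): all lanes see the same instruction stream, a conditional test only sets or clears each lane's run flag, and ordinary instructions are ignored on lanes whose flag is clear.

```java
// Toy model of predicated SIMD execution: no branches, just per-lane
// run flags gating a broadcast instruction stream.
public class RunFlagSimd {
    final int[] data;      // one word per lane
    final boolean[] run;   // per-lane run flag

    RunFlagSimd(int[] init) {
        data = init.clone();
        run = new boolean[init.length];
        java.util.Arrays.fill(run, true);
    }

    // unconditional instruction: executes regardless of the flags
    void setAllFlags() { java.util.Arrays.fill(run, true); }

    // conditional test: clears the run flag on lanes failing the predicate
    void whereGreaterThan(int k) {
        for (int i = 0; i < data.length; i++) if (data[i] <= k) run[i] = false;
    }

    // ordinary instruction: silently ignored where the run flag is clear
    void addImmediate(int k) {
        for (int i = 0; i < data.length; i++) if (run[i]) data[i] += k;
    }
}
```

Usage: on lanes {1, 5, 3, 7}, `whereGreaterThan(3)` darkens lanes 0 and 2, and a following `addImmediate(10)` yields {1, 15, 3, 17} — the "dark" lanes are exactly the idle lights George describes.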

Duane Rettig

Dec 18, 2007, 3:05:45 PM
Daniel Weinreb <d...@alum.mit.edu> writes:

> Tim Bradshaw wrote:
>> On Dec 15, 12:22 am, Juho Snellman <jsn...@iki.fi> wrote:
>>
>>> Do we? From what you wrote below, it seems that you're advocating full
>>> synchronization of all data structure accesses.
>>
>> Interestingly, that's more-or-less the approach that Java tried
>> initially. They backed away from it a long time ago though, as it
>> has a significant performance hit with rather little gain (since user
>> code tends to do things like "delete x from hashtable a then insert it
>> into queue b", and it helps not a jot that the deletion and insertion
>> are atomic if you need the *pair* to be atomic. So I'm sure that's
>> not what Franz are doing as they'll be aware of the Java story.
>
> I agree very much and I think this is the heart of the issue.
> When you take Common Lisp and add concurrency, you have to
> say all kinds of things about where you have and have not put
> concurrency control. Franz appears to be taking an extremely
> stringent approach, saying that everything in the Common Lisp
> manual should be made thread-safe.

For the record, and as Tim correctly guessed, this is _not_ what we're
trying to do. I answered Juho's question elsewhere in this thread.
Interestingly, the (incorrect) notion that we're advocating full
synchronization of all data structure accesses comes _after_ I made a
statement to Juho that I agreed with what he had had to say earlier.
Here is a replication of that exchange, including my final answer:

| >> > On the other hand, there's thread safety of user-accessible data
| >> > structures. Here the policy of SBCL is that users are responsible for
| >> > properly synchronizing their accesses, we just provide the tools for
| >> > the users (locks, condition variables, a compare-and-swap on many
| >> > kinds of primitive places). I believe this is the right engineering
| >> > choice. The majority of data structures are not accessed concurrently
| >> > from multiple threads. It makes little sense to unconditionally impose
| >> > the thread safety overhead on all users, in all cases. It's easier for
| >> > the user to add synchronization, than for him to remove it from
| >> > built-in data structures.
| >>
| >> I think we agree in all of these areas.
| >

| > Do we? From what you wrote below, it seems that you're advocating full
| > synchronization of all data structure accesses.
|

| No, just the ones that can mess up a system out from under the user.
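Tim's earlier point about pair atomicity (delete x from hashtable a, then insert it into queue b) is easy to sketch; the Java below is purely illustrative, not anyone's actual code. Each container is individually thread-safe, yet the move still needs one lock spanning both operations, or another thread can observe the value in neither structure.

```java
// Per-operation thread safety does not compose: the remove and the add
// are each atomic, but the *pair* is only atomic under moveLock.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;

public class PairAtomicity {
    // each structure is individually thread-safe...
    static final Map<String, Integer> a =
        Collections.synchronizedMap(new HashMap<>());
    static final Queue<Integer> b = new LinkedBlockingQueue<>();
    static final Object moveLock = new Object();

    // ...but without moveLock a reader between the two calls would see
    // the value in neither a nor b
    static void move(String key) {
        synchronized (moveLock) {
            Integer v = a.remove(key);
            if (v != null) b.add(v);
        }
    }
}
```

This is why "synchronize everything built-in" buys so little: the user-level invariant lives across two containers, where no built-in lock can see it.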

Duane Rettig

Dec 18, 2007, 3:18:37 PM
Ken Tilton <kenny...@optonline.net> writes:

> Words have power, words lack precision.

Loved your article. Great entertainment...

[...]

> Oh, good, peace at hand, Duane again including the full quote from
> SBCL doc indicating "work is ongoing", and... hang on...
>
>> Do you really want us to keep that sentence in, or do you really not
>> understand why we are assuming that you don't take thread-safety
>> seriously?

Remember the commercial for the after-shave? I think it was Aqua
Velva, but it might have been something else; I'm sure someone will
remember which one. Anyway, they had all kinds of sports stars (I
think Joe Montana was one) appear in this series - the guy puts the
after-shave on and either a hand comes out from nowhere or his own
hand rises involuntarily, and slaps him in the face. His response is
"Thanks, I needed that!".

But perhaps not everyone has seen that old commercial...

> Doh! Now Duane fully embraces the ass-grabbing, beer-chugging,
> guffawing characterization, but before the yobbos can even think to
> call the chieftains together for a council of war, he turns the
> downward spiralling debate to a simple question of fact:
>
>> When _do_ you intend to finish inspecting your libraries
>> for thread safety?
>
> Super. All this battling over the intentions of the developers of SBCL
> as expressed by notoriously imprecise natural language can be left
> moot and the overall issued can be reduced to a date comparison. Along
> the way Franz has agreed to one disambiguation of the text, and this
> whole seriousness detour can be replaced by an answer to the above
> question, which would help comparison shoppers bind a date to "work is
> ongoing".

And as promised, the FAQ has now been updated.

Ken Tilton

Dec 18, 2007, 4:27:59 PM

Duane Rettig wrote:
> Loved your article. Great entertainment...

Thanks!

> And as promised, the FAQ has now been updated.

Minor typo: "Also, we believe that when they finish**ing** making their
libraries thread-safe..."

kt

Tim Bradshaw

Dec 18, 2007, 4:50:09 PM
On Dec 18, 8:05 pm, Duane Rettig <du...@franz.com> wrote:
>
> For the record, and as Tim correctly guessed, this is _not_ what we're
> trying to do.

And I think also (if I am getting my quotes right):

> No, just the ones that can mess up a system out from under the user.

This sounds like the right thing to me. I suspect it's what Java does
- you probably can't (or should not be able to) kill a JVM with badly-
written concurrent code, but not all operations are synchronised. One
neat thing that Java does have is the ability to synchronise on any
object, though I'm not sure what the cost of that is (or why they only
provide a synchronised block rather than lock/unlock operations -
suspect there might be some efficiency win there).

All this sounds like I'm enthusing about Java, which is true to an
extent. In so far as I care about languages any more (which is less
than I did) I hate it as a language, but the people who did the design
were no fools and by now there is a vast amount of experience of
threaded Java code (most of it written by fools, OK).

Pascal Costanza

Dec 18, 2007, 5:41:49 PM
Tim Bradshaw wrote:
> On Dec 18, 8:05 pm, Duane Rettig <du...@franz.com> wrote:
>> For the record, and as Tim correctly guessed, this is _not_ what we're
>> trying to do.
>
> And I think also (if I am getting my quotes right):
>
>> No, just the ones that can mess up a system out from under the user.
>
> This sounds like the right thing to me. I suspect it's what Java does
> - you probably can't (or should not be able to) kill a JVM with badly-
> written concurrent code, but not all operations are synchronised. One
> neat thing that Java does have is the ability to synchronise on any
> object, though I'm not sure what the cost of that is (or why they only
> provide a synchronised block rather than lock/unlock operations -
> suspect there might be some efficiency win there).

In Java, synchronized methods are compiled to methods with a special
flag set, but synchronized blocks are compiled to pairs of lock/unlock
operations.

I think they didn't provide lock/unlock operations at the language level
so that you cannot 'forget' to unlock an obtained lock. (Never forget
that Java is designed for the so-called 'average' programmer. ;)


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
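A minimal Java illustration of the two forms Pascal describes (the class and method names are mine): the synchronized block emits the lock/unlock pair for you as matched monitorenter/monitorexit, while java.util.concurrent's explicit Lock leaves the unlock to the programmer's try/finally discipline.

```java
// synchronized block vs. explicit lock/unlock: same effect, different
// opportunities to get it wrong.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockForms {
    private final Object monitor = new Object();  // any object can be a monitor
    private final Lock lock = new ReentrantLock();
    private int counter = 0;

    // the compiler pairs monitorenter/monitorexit; you cannot leave the
    // block with the monitor still held
    int incrementSynchronized() {
        synchronized (monitor) { return ++counter; }
    }

    // the explicit form is more flexible (e.g. hand-over-hand locking),
    // but forgetting the finally deadlocks every later caller
    int incrementLocked() {
        lock.lock();
        try { return ++counter; }
        finally { lock.unlock(); }
    }
}
```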

Mike G.

Dec 19, 2007, 3:50:15 PM

Of course Ubuntu is Linux. So is Redhat - but did you ever run RH 4 on
a DEC Alpha? I did. Wasn't really all that fun. I have no doubt that
Ubuntu offers a decent distro with a booting kernel on Niagara. The
base toolchain might even be rock solid. But what interesting Open
Source apps have been ported to Linux/Niagara? How many have been
ported to Solaris/Niagara?

My gut tells me that most people who pay the premium for Sun hardware
run it with Solaris, and that most interested work done for the
Niagara proc is going to be done on Solaris too. I could be wrong,
though, I can't say I've really looked hard.

Maciej Katafiasz

Dec 19, 2007, 5:06:05 PM
Den Wed, 19 Dec 2007 12:50:15 -0800 skrev Mike G.:

>> > Yeah. I don't know about Linux on Niagara though.
>>
>> I don't understand that sentence. Ubuntu _is_ Linux (a distro of, to be
>> precise). They're also officially supporting T1 (which means you can
>> buy commercial support for it, so it's not likely to be just string and
>> tape).
>

> Of course Ubuntu is Linux. So is Redhat - but did you ever run RH 4 on a
> DEC Alpha? I did. Wasn't really all that fun. I have no doubt that
> Ubuntu offers a decent distro with a booting kernel on Niagara. The base
> toolchain might even be rock solid. But what interesting Open Source
> apps have been ported to Linux/Niagara? How many have been ported to
> Solaris/Niagara?

Most things don't need to be "ported" to Linux/XYZ. The only ones that do
are those that 1) were written badly (= not portably enough, usually with
regard to endianness / 64-bit cleanness) or 2) require intimate knowledge
of the underlying architecture to work (SBCL falls into that category).
There are plenty of well-written, interesting Open Source apps that don't
belong to 2), though, which makes the "porting" equivalent to compilation.

Cheers,
Maciej

Mike G.

Dec 20, 2007, 5:24:48 AM
On Dec 19, 5:06 pm, Maciej Katafiasz <mathr...@gmail.com> wrote:
> Den Wed, 19 Dec 2007 12:50:15 -0800 skrev Mike G.:
>
> >> > Yeah. I don't know about Linux on Niagara though.
>
> >> I don't understand that sentence. Ubuntu _is_ Linux (a distro of, to be
> >> precise). They're also officially supporting T1 (which means you can
> >> buy commercial support for it, so it's not likely to be just string and
> >> tape).
>
> > Of course Ubuntu is Linux. So is Redhat - but did you ever run RH 4 on a
> > DEC Alpha? I did. Wasn't really all that fun. I have no doubt that
> > Ubuntu offers a decent distro with a booting kernel on Niagara. The base
> > toolchain might even be rock solid. But what interesting Open Source
> > apps have been ported to Linux/Niagara? How many have been ported to
> > Solaris/Niagara?
>
> Most things don't need to be "ported" to Linux/XYZ. The only ones that do
> are 1) were written badly (= not portably enough, usually with regard to
> endianness / 64-bit cleanness) 2) require intimate knowledge of the
> underlying architecture to work (SBCL falls into that category). There
> are plenty well-written, interesting Open Source apps that don't belong
> to 2), though, which makes the "porting" equivalent to compilation.
>
> Cheers,
> Maciej

64-bit cleanliness may be the norm now with mainstream 64-bit
machines. But back when I ran RH4 on Alpha, this was very much not the
case. Most programs required mild patching to get them to compile and
work properly.

I guess this boils down to the definition of "interesting" being used.
I'm sure that Linux/Niagara would make an excellent development system
for doing multi-threaded C. I'm sure gcc will act reasonably, and I'm
sure Emacs will work just fine. These are expected components to the
system. To not have them is a bug.

Not so for Lisp, Forth, or other... interesting.. languages.

Tim Bradshaw

Dec 20, 2007, 6:16:37 AM
On Dec 19, 10:06 pm, Maciej Katafiasz <mathr...@gmail.com> wrote:

>
> Most things don't need to be "ported" to Linux/XYZ.

And of course approximately that set of things do not need to be
ported to Solaris either.

Tim Bradshaw

Dec 20, 2007, 6:19:24 AM
On Dec 20, 10:24 am, "Mike G." <michael.graf...@gmail.com> wrote:

>
> 64-bit cleanliness may be the norm now with mainstream 64-bit
> machines. But back when I ran RH4 on Alpha, this was very much not the
> case. Most programs required mild patching to get them compile and
> work properly.

I'm not really familiar with Linux's approach to 64-bitness, but for
Solaris things would not have to be 64-bit clean - 32-bit applications
will run fine, and have done forever, and the same system will
happily run 32- and 64-bit applications at once (and almost all do, of
course). There's no magic 64-bit switch that you pull on the system
which requires everything to be 64-bit suddenly.

Maciej Katafiasz

Dec 20, 2007, 7:05:46 AM
Den Thu, 20 Dec 2007 03:16:37 -0800 skrev Tim Bradshaw:

>> Most things don't need to be "ported" to Linux/XYZ.
>
> And of course approximately that set of things do not need to be ported
> to Solaris either.

I'd say that the set of things trivially portable between Linux/XYZ and
Linux/ABC is bigger than for either Linux/XYZ and Solaris/XYZ or Linux/
XYZ and Solaris/ABC.

Cheers,
Maciej

Tim Bradshaw

Dec 20, 2007, 8:49:26 AM
On Dec 20, 12:05 pm, Maciej Katafiasz <mathr...@gmail.com> wrote:

>
> I'd say that the set of things trivially portable between Linux/XYZ and
> Linux/ABC is bigger than for either Linux/XYZ and Solaris/XYZ or Linux/
> XYZ and Solaris/ABC.
>

That's why I said "approximately", although it's not actually clear to
me that this is the case. Outside of things which depend on drivers
(audio say) that don't exist on Solaris, I can't, off the top of my
head,think of anything I've needed in a long while which has not been
trivial to port.

tim Josling

Dec 20, 2007, 2:11:36 PM

I am running 64-bit linux and I can certainly run 32-bit apps. This even
includes Firefox plugins such as Flash (with a wrapper). Occasionally you
need to run things under the linux32 command to set memory options and
to change the results of uname.

Tim Josling

lisp linux

Dec 24, 2007, 3:48:03 AM
Juho Snellman wrote:
> On the other hand, there's thread safety of user-accessible data
> structures. Here the policy of SBCL is that users are responsible for
> properly synchronizing their accesses, we just provide the tools for
> the users (locks, condition variables, a compare-and-swap on many
> kinds of primitive places). I believe this is the right engineering
> choice. The majority of data structures are not accessed concurrently
> from multiple threads. It makes little sense to unconditionally impose
> the thread safety overhead on all users, in all cases. It's easier for
> the user to add synchronization, than for him to remove it from
> built-in data structures.
I like this approach and feel it is the right approach.

I was a bit concerned with what I saw on the Allegro website [1] since
they mentioned, to quote:
"By "thread safe", we mean the orderly updating of global resources. For
example, two threads adding to a hash table, changing a symbol's value
or defining a method on a generic function. "
I am not sure this is such a good idea. This seems like a repetition of
the java story with Hashtable and HashMap (or StringBuffer versus
StringBuilder see
http://www.leepoint.net/notes-java/data/strings/23stringbufferetc.html)
Hopefully the ACL example is just a bad example from a tech writer
rather than the engineering, or maybe they are referring to internal
allocations or whatever. I hope they don't mean what I think they mean.

Even for a symbol, if you intend to modify them in multiple threads, it
should be a user's responsibility to add locking. But this may be at
least debatable unlike the hash table.

Why should method definitions be thread safe (without requiring explicit
safety provisioning from user code)? Because that implies synchronization
on dispatch (just guessing).

I feel ACL is on the wrong track if they are going to be doing the
'everything is thread safe, user has to do nothing for anything defined
in ANSI' approach.
I am new to lisp, and feel silly writing this, because it is quite
possible the story is all different under lisp compared to Java.
But I hope people look at history.

-Antony
1 http://www.franz.com/support/faq/#s-mp from an earlier post
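On Juho's list of user-side tools, the compare-and-swap one is worth spelling out. Here is the standard CAS retry loop, sketched with Java's AtomicInteger as a stand-in for SBCL's compare-and-swap on primitive places (the Lisp spelling differs; this only shows the pattern the user writes instead of taking a lock):

```java
// Lock-free increment via compare-and-swap: read the current value,
// attempt to install the successor, retry if another thread won the race.
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    int increment() {
        while (true) {
            int old = value.get();
            if (value.compareAndSet(old, old + 1)) return old + 1;  // our CAS won
            // else: another thread updated value first; loop and retry
        }
    }
}
```

This is the "users are responsible for synchronizing" policy in miniature: the implementation supplies the primitive, and the program decides which places need it.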


lisp linux

Dec 24, 2007, 3:27:26 PM
tim Josling wrote:
> I am running 64-bit linux and I can certainly run 32-bit apps. This even
> includes Firefox plugins such as Flash (with a wrapper). Occasionally you
> need to run things under the linux32 command to set memory options and
> to change the results of uname.
Are you saying you are able to use the 32-bit flash plugin in 64-bit
firefox under linux? Can you point me to some info on how to do that?
Thanks,
-Antony

tim Josling

Dec 25, 2007, 12:10:16 AM

Yes. Start here. It's a bit confusing but persist, with google assist.

http://plugindoc.mozdev.org/linux-amd64.html

Tim Josling

Juho Snellman

Dec 28, 2007, 3:54:40 PM
Duane Rettig <du...@franz.com> writes:
> And as promised, the FAQ has now been updated.

Thanks.

--
Juho Snellman

lisp linux

Jan 2, 2008, 9:22:42 PM
Thank you. The nspluginwrap thing worked well for flash.
-Antony

tim Josling

Jan 4, 2008, 2:03:29 AM

You are welcome,

Tim Josling
