
Switch from SBCL to Erlang backend due to scalability issues(GC).


Matthew Swank

Mar 5, 2007, 9:26:16 PM
http://lambda-the-ultimate.org/node/2102
This seems to be making the rounds, though Lisp is still used as a source
language and then compiled to Erlang.

Matt

--
"You do not really understand something unless you
can explain it to your grandmother." - Albert Einstein.

Stefan Scholl

Mar 6, 2007, 2:45:23 AM
Matthew Swank <akopa_is_very_mu...@c.net> wrote:
> http://lambda-the-ultimate.org/node/2102
> This seems to making the rounds; though Lisp is still used as a source
> language then compiled to Erlang.

But the part about Common Lisp is buried in the comments. People
who just read the headlines think that it's a total switch.


--
Web (en): http://www.no-spoon.de/ -*- Web (de): http://www.frell.de/

Tim Bradshaw

Mar 6, 2007, 5:07:04 AM
On Mar 6, 2:26 am, Matthew Swank
<akopa_is_very_much_like_my_em...@c.net> wrote:
> http://lambda-the-ultimate.org/node/2102
> This seems to making the rounds; though Lisp is still used as a source
> language then compiled to Erlang.
>

The underlying issue seems to be that there is some memory leak in a
particular Lisp implementation under some circumstances, which may or
may not be a GC bug in that particular implementation. I've spent
stupid amounts of time chasing issues like this in the runtimes for
various languages (a great chunk of which has been trying vainly to
explain the basics of modern (last 30 years say) computer architecture
& GC design to programmers who generally seem to know nothing about
any developments since some time before they were born).

What does this say about Lisp? Nothing at all.

Wade Humeniuk

Mar 6, 2007, 9:40:53 AM
Matthew Swank wrote:
> http://lambda-the-ultimate.org/node/2102
> This seems to making the rounds; though Lisp is still used as a source
> language then compiled to Erlang.
>

From the developer's comments ....

<quote>
The "benchmark code" is difficult to isolate. It's not like we're computing Fibonacci
sequences here... there has to be a stream of events coming in from real-world users. SBCL
starts to use an insane amount of memory, and the footprint grows even though the set of
reachable objects (essentially we use the internal functions that (room) calls to walk the
objects, and the output of (room) itself) doesn't get much bigger. It's been a while since
we've run any numbers on this problem so I don't have any handy.

Clearly something changed with the garbage collector in more recent versions of SBCL --
now SBCL uses 100% CPU all the time in production and lasts much longer before finally
bombing due to exhaustion of its pre-committed space.

And we aren't using OS-level threads at all.
By a1k0n at Mon, 2007-03-05 17:39
</quote>

The question is... what are the SBCL group going to do about it? Sounds like
they have messed up (maybe by tinkering too much?). Screwed the pooch. Of course
the Vendetta developers should have known better than to use beta software. It is
getting tiring that computer languages get bad reps because of human
screw-ups. Just Fix It!!!

Wade

Juho Snellman

Mar 6, 2007, 9:42:12 AM
Wade Humeniuk <whumeniu+...@telus.net> wrote:
> The question is... what are the SBCL group going to do about it?

Absolutely nothing.

--
Juho Snellman

Tim Bradshaw

Mar 6, 2007, 9:47:46 AM
On Mar 6, 2:40 pm, Wade Humeniuk <whumeniu+anti+s...@telus.net> wrote:

> The question is... what are the SBCL group going to do about it? Sounds like
> they have messed up (maybe by tinkering too much?). Screwed the pooch. Of course
> the Vendetta developers should have known better then use beta software. It is
> getting tiring that computer languages are getting bad reps because of human
> screw-ups. Just Fix It!!!

Maybe the Vendetta developers might consider buying some consultancy
from the SBCL people? Or are we all[*] just meant to work for free
now?

--tim

[*] I'm not an SBCL developer, but if I were I would not be any more
keen on working for free than I am now: the whole skulking-around-
behind-restaurants-looking-for-scraps thing gets a bit tedious after a
while.

Ken Tilton

Mar 6, 2007, 11:10:52 AM

The SBCL users should just be grateful they are not locked into an
implementation with unresponsive developers like they would be with a
commercial... hang on...

:)

kzo

--
Well, I've wrestled with reality for 35 years, Doctor, and
I'm happy to state I finally won out over it.
-- Elwood P. Dowd

In this world, you must be oh so smart or oh so pleasant.
-- Elwood's Mom

Richard M Kreuter

Mar 6, 2007, 1:08:23 PM
Ken Tilton <kent...@gmail.com> writes:
> Wade Humeniuk wrote:
>> Matthew Swank wrote:

>> <quote>


>> Clearly something changed with the garbage collector in more recent
>> versions of SBCL -- now SBCL uses 100% CPU all the time in
>> production and lasts much longer before finally bombing due to
>> exhaustion of its pre-committed space.

>> </quote>
>>
>> The question is... what are the SBCL group going to do about it?
>

> The SBCL users should just be grateful they are not locked into an
> implementation with unresponsive developers like they would be with
> a commercial... hang on...

FWIW, I've never found the SBCL developers to be either unresponsive
or unhelpful. I once had a heap exhaustion problem and got prompt
advice on declarations that made the problem go away in that program.
Of course, you have to ask for help in order to receive it. It's
unclear from the story whether the people having heap exhaustion
problems asked for help from the SBCL developers, or reported the
change in the behavior of the garbage collector as a regression.

--
RmK

Ken Tilton

Mar 6, 2007, 2:22:41 PM

Richard M Kreuter wrote:
> Ken Tilton <kent...@gmail.com> writes:
>
>>Wade Humeniuk wrote:
>>
>>>Matthew Swank wrote:
>
>
>>><quote>
>>>Clearly something changed with the garbage collector in more recent
>>>versions of SBCL -- now SBCL uses 100% CPU all the time in
>>>production and lasts much longer before finally bombing due to
>>>exhaustion of its pre-committed space.
>>></quote>
>>>
>>>The question is... what are the SBCL group going to do about it?
>>
>>The SBCL users should just be grateful they are not locked into an
>>implementation with unresponsive developers like they would be with
>>a commercial... hang on...
>
>
> FWIW, I've never found the SBCL developers to be either unresponsive
> or unhelpful.

Oh, I am sure, I just thought it would be fun to turn around on the FSF
crowd the "trapped with evil commercial vendor" argument. Note that this
point need not be grounded in reason for me to make it.

kenny

nall...@gmail.com

Mar 6, 2007, 6:13:01 PM

Stefan Scholl wrote:
> Matthew Swank <akopa_is_very_mu...@c.net> wrote:
> > http://lambda-the-ultimate.org/node/2102
> > This seems to making the rounds; though Lisp is still used as a source
> > language then compiled to Erlang.
>
> But the fact about Common Lisp is hidden in some comments. People
> who just read the headlines think that it's a total switch.
>

<quote> We chose common lisp primarily for its rapid-prototyping
capabilities. In CL, it is very easy to define 'mini-languages', which
is what I've done for missions, objectives, senses, reflexes, state-
machines, business-models and more. All the code written in these
special languages will remain untouched, and be auto-translated (by CL
code) into erlang. As I develop new missions, reflexes, etc, I will
continue to use the mini-languages; the processes they describe will
simply be running on a distributed, fault-tolerant, erlang-based
platform instead of the simplistic single-threaded one they have been
running on. We had hoped for the simplistic system to continue to
serve for a while yet, so that we could focus on player-visible
features, but we always knew it would need to be made more scalable at
some point. </quote>

sounds pretty cool

Wade Humeniuk

Mar 7, 2007, 1:35:37 AM
Wade Humeniuk wrote:
> Matthew Swank wrote:
>> http://lambda-the-ultimate.org/node/2102
>> This seems to making the rounds; though Lisp is still used as a source
>> language then compiled to Erlang.
>>
>
> From the developers comments ....
>
> <quote>
> The "benchmark code" is difficult to isolate. It's not like we're
> computing Fibonacci sequences here... there has to be a stream of events
> coming in from real-world users. SBCL starts to use an insane amount of
> memory, and the footprint grows even though the set of reachable objects
> (essentially we use the internal functions that (room) calls to walk the
> objects, and the output of (room) itself) doesn't get much bigger. It's
> been a while since we've run any numbers on this problem so I don't have
> any handy.
>
> Clearly something changed with the garbage collector in more recent
> versions of SBCL -- now SBCL uses 100% CPU all the time in production
> and lasts much longer before finally bombing due to exhaustion of its
> pre-committed space.
>
> And we aren't using OS-level threads at all.
> By a1k0n at Mon, 2007-03-05 17:39
> </quote>
>

On the assumption that everyone is acting rationally and intelligently,
and after staring at the SBCL source for a while... it feels like the problem
is related to the usual culprits in C code.

- Stack overflow: though there seems to be some sort of guard region
in the stack, it's unlikely that there is any hardware protection to
stop problems. Also, signals are running on the stack. At least in the
new code, pthreads are created with limited stack memory (I think 1 MB).
In a heavily loaded system there is perhaps a potential to overflow (maybe
even a signal overrunning the stack). If the code is compiled with
(safety 0) I assume SBCL will not do any guard checking. a1k0n
says they do not use pthreads; are all the threads using the main process
stack? A bunch of lisp threads all sharing the stack and overrunning?
(seems unlikely).

- Heap corruption due to running unsafe code. Clobber the heap and the collector
goes who knows where, or even loses references to other objects.

So, if anyone is listening, run everything with (safety 3) (debug 3) and
rebuild SBCL with bigger stacks for threads. Try again.. and again and
again....

I assume they are running on Linux. Is that right?

Wade

michael...@gmail.com

Mar 7, 2007, 4:52:17 AM
On Mar 7, 12:35 am, Wade Humeniuk <whumeniu+anti+s...@telus.net>
wrote:

> On the assumption that everyone is acting rationally and intelligently,
> and staring at the SBCL source for a while... it feels like the problem
> is related with the usual culprits in C code.
>
> - Stack overflow, though there seems to be a some sort of guard region
> in the stack its unlikely that there is any hardware protection to
> stop problems. Also signals are running on the stack. At least in the
> new code, pthreads are created with limited memory (I think 1 MB).
> In system running loaded perhaps there is a potential to overflow (maybe
> even a signal overrunning the stack). If the code is compiled with
> (safety 0) I assume SBCL will not do any guard checking. a1k0n
> says they do not use pthreads, are all the threads using the main process
> stack? A bunch of lisp threads all sharing the stack and overruning?
> (seems unlikely).
>
> - Heap corruption due to running unsafe code. Clobber the heap and the collector
> goes who knows where, or even loses references to other objects.
>
> So, if anyone is listening, run everything with (safety 3) (debug 3) and
> rebuild SBCL with bigger stacks for threads. Try again.. and again and
> again....
>
> I assume they are running on Linux. Is that right?
>
> Wade

Greetings,

I'm the principal author of the code in question. For the record, I
never approached the SBCL devs for help; I simply never felt that I
had something concrete enough to go to them with. I hate getting bug
reports so vague they're nearly useless, and I didn't want to inflict
that on anyone else. I also wish our plight hadn't been picked up by
Reddit, especially with that headline. :P

To summarize the problem, our system grows slowly in heap usage even
though the amount in memory should be reasonably constant over long
periods. The speed of the growth is generally proportionate to how
much is 'going on', which, I imagine, is related to how much garbage
is being made. This, no doubt, sounds *horribly* vague; I can only
say, I warned you that I didn't have anything concrete. The amount
that had to be 'going on', and for how long, meant that this bug only
reared its head in production. At first, when Deliverator was a
little collection of rules, we didn't even know the problem existed.
It wasn't until the code had been through a few paradigm shifts, and
was doing a lot more in general, that we even noticed the ever-growing-
memory-usage or finally saw a crash from heap exhaustion, and I don't
know if that is because the older code didn't have the problem, or
that we just didn't notice it. At the time of the first crashes it
was only happening if the lisp process was left running for a month or
more, and we typically release game updates that require server-
process restarts more frequently than monthly. After another paradigm
shift, and a rewrite of some of the most demanding code to utilize it,
the time-to-crash jumped to 12-24 hours. This is when we finally
realized we had a problem and started to look for the cause. The
haystack had already become enormous, and so had the urgency of
finding the needle (or building a new barn) before all the animals
left. ;)

I could never nail down what type of activity was producing unclaimed
garbage, because I forced each sort of thing to happen in isolation,
and it was always cleaned up, in isolation. The oddest thing to me,
and the reason I ultimately gave up trying to find the problem in my
own code, was that, given an image that had already grown in this way,
I could destroy all my root objects (hashes mostly), run a full gc,
and still be using nearly the same amount of memory as before. The
distribution of space among the types reported by room didn't lead us
anywhere; I can't recall why right now, perhaps a1k0n would. We built
our own structure-walkers/counters and used the ones room uses; we
tried attaching finalizers to all the likely culprits, but some of
everything I finalized didn't get collected in the production system.
Perhaps I should have begged the SBCL devs for help, despite my lack
of a reproducible case; at least they might have had ideas for how to
capture the relevant data.

But, as I've said in our own forums, we always knew we were going to
have to do something about scalability. After many long discussions
of our options, we felt that building a new infrastructure in Erlang,
and auto-translating as much of our declarative code as possible, was
our best option for the long run. This really shouldn't reflect
poorly on CL or Lisp in general; several of the options we considered
for the replacement platform were in the Lisp family (and, indeed, you
could argue that Erlang is too (if only there were a sexpy, alternate
syntax)). Some day, I'd love to write up the whole adventure in the
kind of detail that might help others in similar situations, but for
now, I'm still bailing water for Kourier, our new Erlang system. ;)

~Michael Warnock
Guild Software Inc.

Russell McManus

Mar 7, 2007, 7:03:25 AM

"michael...@gmail.com" <michael...@gmail.com> writes:

> I could never nail down what type of activity was producing unclaimed
> garbage, because I forced each sort of thing to happen in isolation,
> and it was always cleaned up, in isolation. The oddest thing to me,
> and the reason I ultimately gave up trying to find the problem in my
> own code, was that, given an image that had already grown in this way,
> I could destroy all my root objects (hashes mostly), run a full gc,
> and still be using nearly the same amount of memory as before.

Isn't this the hallmark of "issues" with conservative collection? I
think that the default sbcl collector on i386 is conservative. It
might be possible to convince sbcl to use the slower, but (I think)
precise cheneygc instead.

But i suppose I'm closing the gate of an empty barn, horse long
gone...

-russ

Ken Tilton

Mar 7, 2007, 9:12:24 AM

What I do in a situation like that is collect as much information as I
can and then report it to the developers (in my case those of AllegroCL)
and say "I know this is vague, just wondering if it rings any bells for
you." ie, just asking for some brainstorming.

On the last such -- a minor but still bothersome occasional IDE gaffe
that happened a lot but which I could not figure out how to recreate at
will -- support responded by saying "Fire up the trace dialog [something
I had never used in eight years of heavy IDE use], trace this internal
function [I had never heard of], and if you see it get kicked off and
the gaffe has occurred enter such-and-such mode and tell us who is in
the call stack."

Took a few days before it happened again, but there it was and a few
hours later I had a patch.

If the GC code is anything like my Cells code, it is about 50% disabled
debugging/diagnostic/backdoor-hacking stuff, completely unreadable. The
SBCL devs probably could have told you to "rebuild with such-and-such
turned on", wait for the machine to go "ping!", then...

ie, It never hurts to ask, offering at the same time copious
self-flagellation commensurate with the deficit in the question.

OTOH, what we are hearing now is, oh, we were going to switch to Erlang
anyway... still, I wonder what that parallel universe looks like where
you did /not/ run into a memory leak.

kzo

Wade Humeniuk

Mar 7, 2007, 2:09:05 PM
michael...@gmail.com wrote:

>
> Greetings,
>
> I'm the principal author of the code in question. For the record, I
> never approached the SBCL devs for help; I simply never felt that I
> had something concrete enough to go to them with. I hate getting bug
> reports so vague they're nearly useless, and I didn't want to inflict
> that on anyone else. I also wish our plight hadn't been picked up by
> Reddit, especially with that headline. :P
>
> To summarize the problem, our system grows slowly in heap usage even
> though the amount in memory should be reasonably constant over long
> periods. The speed of the growth is generally proportionate to how
> much is 'going on', which, I imagine, is related to how much garbage
> is being made. This, no doubt, sounds *horribly* vague; I can only

<snip>

Busier system, more processes, more garbage, more gc, faster exhaustion.

Since in isolation everything is fine I assume it has to do with
multi-processing and gc. The SBCL code shows that all threads/processes
have to be "paused" on a collection and it does that with signals. I have
rarely found this combo to be stable.

It seems this issue has become mired in misinformation and miscommunication.
Thanks for putting your experiences down.

Wade

Ken Tilton

Mar 7, 2007, 2:35:53 PM

SNAFU! :) Yeah, it is almost irresistible to dive into post mortem
debate even though the body has yet to be found. Many a time I have
wrestled myself to the ground when I wanted to dash off on something
like this SBCL -> Erlang port and insisted that I first identify the
problem precisely, the danger being that I would get everything over to
Erlang and see the /exact same phenomenon/. AAAAAARGGGHH!!!!

The deal is this: if I want to make some huge change, that is fine. Just first
/prove/ that it is necessary. And I do believe that, WRT Cells, 90% of
the time I trace a problem to something at the application level and my
plans to enhance (you know, sabotage) Cells get tossed. One exception
recently was that while I found it was /possible/ always to arrange
things such that dead instances never got communicated with, I finally
decided that the programming/debugging contortions necessary on rare
occasions to avoid that were starting to be a productivity drain,
exactly wrong for a productivity enhancement, and at the same time I
decided, shucks, it is not exactly that complicated to have the Cells
code in like three places check to see if something is dead.

But I digress. We left off in the middle of a rant on why folks should always
track down exactly why something is failing before swapping it out.
Except! Tilton's Law says to keep changing things until a problem goes
away, even if that just means a new problem surfaces. This is where the
monkeys excel, I can really parallelize all the experiments. The
Deliverators were in production and understandably feared losing their
audience, so change for the sake of change (and probably a fix) was not
a bad idea, esp. since apparently they had their eye on that change anyway.
from the cheap seats, kzo

Wade Humeniuk

Mar 7, 2007, 3:44:29 PM
Ken Tilton wrote:

There is a psychology of finding ways to justify one's own decisions. The
decision to move to Erlang was made before the problems. It was in the back
of one's mind. Events made it easier to make the move, but the intention
was there way before the action. This has been my observation of my own
behaviour. I find that when a change comes, the evidence for its occurrence
was there all along. One looks for justification of one's future actions.

In this case something important was lost: understanding
and fixing the problem. At work we have a board where there is a competition
to find bugs in the system (with prizes for the winners). But rarely do people
want that deep a commitment to a hunk of code.

I suppose a new definition can be added to "Free" software. Free from commitments.

Wade

Wolfram Fenske

Mar 7, 2007, 5:08:43 PM
Russell McManus <ru...@cl-user.org> writes:

> "michael...@gmail.com" <michael...@gmail.com> writes:
>
>> I could never nail down what type of activity was producing unclaimed
>> garbage, because I forced each sort of thing to happen in isolation,
>> and it was always cleaned up, in isolation. The oddest thing to me,
>> and the reason I ultimately gave up trying to find the problem in my
>> own code, was that, given an image that had already grown in this way,
>> I could destroy all my root objects (hashes mostly), run a full gc,
>> and still be using nearly the same amount of memory as before.
>
> Isn't this the hallmark of "issues" with conservative collection? I
> think that the default sbcl collector on i386 is conservative.

Yes, you're right. I found this on
<http://www.sbcl.org/manual/History-and-Implementation-of-SBCL.html>:

On the x86 SBCL -- like the x86 port of CMUCL -- uses a
conservative GC. This means that it doesn't maintain a strict
separation between tagged and untagged data, instead treating some
untagged data (e.g. raw floating point numbers) as possibly-tagged
data and so not collecting any Lisp objects that they point to.
This has some negative consequences for average time efficiency
(though possibly no worse than the negative consequences of trying
to implement an exact GC on a processor architecture as
register-poor as the X86) and also has potentially unlimited
consequences for worst-case memory efficiency. In practice,
conservative garbage collectors work reasonably well, not getting
anywhere near the worst case. But they can occasionally cause odd
patterns of memory usage.

Does anyone know why CMU-CL and SBCL do that? This snippet only says
that they think a conservative GC is probably not slower than a
precise GC on x86. It doesn't say they think it's faster. That
sounds to me like the reason was not efficiency. What was it then?

--
Wolfram Fenske

A: Yes.
>Q: Are you sure?
>>A: Because it reverses the logical flow of conversation.
>>>Q: Why is top posting frowned upon?

bradb

Mar 7, 2007, 5:35:53 PM
On Mar 7, 2:08 pm, "Wolfram Fenske" <i...@gmx.net> wrote:
> Russell McManus <r...@cl-user.org> writes:

I believe that the implication is that because x86 has relatively few
registers, it is harder to track those that contain tagged lisp
objects and those that hold random data (floating point or 32 bit
ints) that _might_ look like Lisp objects.
With say, 32 general purpose registers, you could use a convention
that r0..r15 were always machine data (ie, not Lisp objects) and that
r16..r31 were valid tagged Lisp objects. Then the GC doesn't need to
consider r0..r15 when it collects & it can be sure that every object
it looks at is actually a Lisp object, and not the intermediate result
of a calculation that might look like a Lisp object.

It seems to me that it would be possible for the compiler to track
Lisp/non-Lisp registers if it were required. For example, you could
store a word on the stack that had status bits determining what a
register was used for. Then when the GC scavenged the stack it would
look at this word and choose which saved values were actually
objects. I'm curious if the scheme I've just outlined would actually
work?

Cheers
Brad

Frode Vatvedt Fjeld

Mar 7, 2007, 6:36:46 PM
"bradb" <brad.be...@gmail.com> writes:

> It seems to me that it would be possible for the compiler to track
> Lisp/non-Lisp registers if it were required. For example, you could
> store a word on the stack that had status bits determining what a
> register was used for. Then when the GC scavenged the stack it
> would look at this word and choose which saved values were actually
> objects. I'm curious if the scheme I've just outlined would
> actually work?

I've done this and it works fine (except it's not stored on the stack,
but the principle is the same). I think perhaps Corman Lisp uses this
technique also. There is of course an overhead to consider, if you
have to change these status bits very frequently (like every few
instructions), that can be prohibitive.

--
Frode Vatvedt Fjeld

Wolfram Fenske

Mar 7, 2007, 7:11:51 PM
"bradb" <brad.be...@gmail.com> writes:

> On Mar 7, 2:08 pm, "Wolfram Fenske" <i...@gmx.net> wrote:
>> Russell McManus <r...@cl-user.org> writes:

[...]

>> > Isn't this the hallmark of "issues" with conservative collection? I
>> > think that the default sbcl collector on i386 is conservative.
>>
>> Yes, you're right. I found this on
>> <http://www.sbcl.org/manual/History-and-Implementation-of-SBCL.html>:

[...]

>> Does anyone know why CMU-CL and SBCL do that? This snippet only says
>> that they think a conservative GC is probably not slower than a
>> precise GC on x86. It doesn't say they think it's faster. That
>> sounds to me like the reason was not efficiency. What was it then?
>
> I believe that the implication is that because x86 has relatively few
> registers, it is harder to track those that contain tagged lisp
> objects and those that hold random data (floating point or 32 bit
> ints) that _might_ look like Lisp objects.
> With say, 32 general purpose registers, you could use a convention
> that r0..15 were always machine data (ie, not Lisp objects) and that
> r16..r31 were valid tagged Lisp objects. Then the GC doesn't need to
> consider r0..r15 when it collects & it can be sure that every object
> it looks at is actually a Lisp object, and not the intermediate result
> of a calculation that might look like a Lisp object.

I suppose that sounds reasonable. Unfortunately, I don't have any
experience with precise GC's that have to track register contents. So
far, C is as low as I went. I think I'm gonna head over to one of the
SBCL mailing lists and ask there.

> It seems to me that it would be possible for the compiler to track
> Lisp/non-Lisp registers if it were required. For example, you could
> store a word on the stack that had status bits determining what a
> register was used for. Then when the GC scavenged the stack it would
> look at this word and choose which saved values were actually
> objects. I'm curious if the scheme I've just outlined would actually
> work?

This reminds me of something I read just yesterday in a paper by Kent
Dybvig et al. [1]. They describe a very similar scheme in which they
store a bit mask in each call frame that tells the GC which words in
the frame are Lisp objects and which ones are machine words. The
description starts with the last paragraph on page 11 of the paper.


Footnotes:
[1] As you may know, Kent Dybvig is the man behind Chez Scheme. The
paper is called "Don't Stop the BIBOP: Flexible and Efficient
Storage Management for Dynamically Typed Languages". Link:
<http://www.cs.indiana.edu/~dyb/pubs/bibop.pdf>

michael...@gmail.com

Mar 7, 2007, 7:49:04 PM
Hello again,

Just to clear a few things up:

Wade: We weren't using any kind of 'threads'. We used one socket,
serve-event, our own timer system on top of that, and callbacks
(generally chunks of declaratively-organized code stuffed into the
right hash tables so that they would be called when the events they
cared about happened). Also, we spent over 2 months trying to track
down the source of garbage, and certainly hadn't decided to definitely
go with Erlang *ever*, at that point, much less immediately. In fact,
if Franz had ever clearly said that when our trial license ran out, we
could use it on our server without paying any per-player fees or
giving them points in our company, we would have at least tried that
before moving to Erlang (because of acache/graph). Or if we had the
money, we would have paid an SBCL dev to consult for us. And we *did*
port to CMUCL (since that was the closest target) with very
disappointing results. I agree that it is most unsatisfying not to
know exactly what the problem is, but we make decisions based on what
gives our company the best chance at success, not what seems like the
Right(tm) thing to do technically (when those are at odds). Oh- and
to answer an earlier question of yours, we ran Deliverator on Linux/
x86 and now FreeBSD/x86 in production.

Ken: Insightful comments. I'd only like to point out that I had that
wrestling match daily for two months before we decided that, yes, any
change was preferable to continuing to bang our heads against the same
problem. Ultimately, though, I was fairly certain the the problem
wouldn't migrate to Erlang because a whole lot of closure-based
infrastructure that we needed in lisp to 'fake' processes, was
replaced by the language-level features (also, erlang's gc is per-
process). You might find it noteworthy that one small part of the
code that's now running in Erlang is my own dataflow language inspired
in part by Cells.

Russell and others about the conservative GC: We talked about this
possibility several times, but the sheer amount of uncollected garbage
always led us away from the hypothesis. Deliverator takes up about
180M once players have taken a few missions and convoys are flying
around the universe; at worst it was reaching 600M+ within 12 hours.
Could random bits in untagged data that look like references really
account for it more than tripling in size?

~Michael

bradb

Mar 7, 2007, 7:54:36 PM
On Mar 7, 3:36 pm, Frode Vatvedt Fjeld <fro...@cs.uit.no> wrote:

Would you mind a small description on how you do this? I was thinking
that with the right kind of GC the status bits would only need
updating before function calls.

Cheers
Brad

Vassil Nikolov

Mar 7, 2007, 11:42:23 PM

On Wed, 07 Mar 2007 20:44:29 GMT, Wade Humeniuk <whumeniu+...@telus.net> said:
| ...

| There is a psychology of finding ways of justifying one's own reasons. The
| decision to move to Erlang was made before the problems. It was in the back
| of one's mind. Events made it easier to make the move but the intention
| was there way before the action. This has been my observation of my own
| behaviour. I find that when a change comes the evidence for its occurrence
| was there all along. One looks for justification of one's future actions.

"People don't need reasons for what they want to do. They need excuses."

Somerset Maugham (_Theatre_ (I think; quoting from memory))

---Vassil.

--
Is your code free of side defects?

Juho Snellman

Mar 7, 2007, 11:47:41 PM
michael...@gmail.com <michael...@gmail.com> wrote:
> Also, we spent over 2 months trying to track
> down the source of garbage, and certainly hadn't decided to definitely
> go with Erlang *ever*, at that point, much less immediately. In fact,
> if Franz had ever clearly said that when our trial license ran out, we
> could use it on our server without paying any per-player fees or
> giving them points in our company, we would have at least tried that
> before moving to Erlang (because of acache/graph). Or if we had the
> money, we would have paid an SBCL dev to consult for us.

Fwiw, I would've been happy to at least have a look at this given
access to the problematic image, without getting paid for doing
consulting. Or if granting that access would not have been an option
to you, at least given you some further tips for tracking down the
problem. For example, given the way your application seems to have
been structured, it would've been possible to periodically do
non-conservative gcs.

Not that it matters now :-) Good luck with the new system!

--
Juho Snellman

Ken Tilton

Mar 8, 2007, 2:13:49 AM

Juho Snellman wrote:
> Wade Humeniuk <whumeniu+...@telus.net> wrote:
>
>>The question is... what are the SBCL group going to do about it?
>
>
> Absolutely nothing.
>

Oh, look, Juho Snellman being an *sshole on comp.lang.lisp!*

http://jsnell.iki.fi/blog/archive/2005-10-12.html

No technologist with any self respect allows such a report to go
unrepaired, though I must say the user gets a lot of credit for not
reporting it. :)

But if I was pretending to offer a serious Lisp implementation, I would
shrug that off and get to work on the problem, beginning by beating a
bug report out of the user, not grandstanding on cll.

The sad thing is that the last words from said user suggest that the
problem is pretty fricking reliably reproducible, meaning then that a
child of three could debug it.

Juho, however, is hiding behind smartass comebacks. Not a true Lisper, I
have to think. STFU and go write/debug some code, that is all that matters.

kzo

* If you find that offensive, you are beginning to understand.

Juho Snellman

Mar 8, 2007, 2:52:49 AM
Ken Tilton <k...@theoryyalgebra.com> wrote:
> Juho Snellman wrote:
>> Wade Humeniuk <whumeniu+...@telus.net> wrote:
>>>The question is... what are the SBCL group going to do about it?
>>
>> Absolutely nothing.
>
> Oh, look, Juho Snellman being an *sshole on comp.lang.lisp!*

No. Wade was being an asshole, I was just being honest. Nothing is
being done about this, nor will be done, unless the people who are
having the problem cooperate. We don't even know for sure whether
there *is* an sbcl problem there to be fixed.

--
Juho Snellman
"SBCL: Giving Lisp a bad reputation since 1999"

tbur...@gmail.com

Mar 8, 2007, 10:41:09 AM
On Mar 8, 1:49 am, "michael.warn...@gmail.com"

<michael.warn...@gmail.com> wrote:
> Russell and others about the conservative GC: We talked about this
> possibility several times, but the sheer amount of uncollected garbage
> always led us away from the hypothesis. Deliverator takes up about
> 180M once players have taken a few missions and convoys are flying
> around the universe; at worst it was reaching 600M+ within 12 hours.
> Could random bits in untagged data that look like references really
> account for it more than tripling in size?

Sure, depending on how your data structures look. My telepathic
debugger tells me that occasionally, due to serve-event, you use up a
lot of stack space. Some of these call frames contain pointers to the
roots of large subtrees. After your stack usage goes down, the GC
continues to be confused about what's on the stack, and thinks those
subtrees are alive. Periodic stack-scrubbing would alleviate the
problem.

That's just a guess, mind you, but it sounds plausible to me.

William D Clinger

Mar 8, 2007, 11:15:49 AM
michael...@gmail.com wrote:
> Russell and others about the conservative GC....

> Could random bits in untagged data that look like references really
> account for it more than tripling in size?

Yes. Which is not to say that conservative GC was the
problem, but it well could have been.

Unpublished anecdote: Larceny is an implementation of
Scheme with interchangeable garbage collectors, mostly
to support our research on garbage collection [1,2].
At one time, Larceny could use the BDW conservative
collector. That collector worked very well on our
smaller benchmarks and on most of the larger as well,
but consumed an unreasonable amount of space on one of
our larger benchmarks. Some attempts at scaling that
benchmark (which was pretty small by your standards)
suggested that the problem would get worse as the data
got larger.

We never published anything about our experience with
the conservative collector, because it was a sideshow
to our main line of gc research, but we saw enough to
scare me away from conservative gc for large systems.

Will

[1] http://www.ccs.neu.edu/home/will/GC/lth-thesis/index.html
[2] http://www.ccs.neu.edu/home/will/papers.html

Ken Tilton

Mar 8, 2007, 11:23:09 AM

Juho Snellman wrote:
> Ken Tilton <k...@theoryyalgebra.com> wrote:
>
>>Juho Snellman wrote:
>>
>>>Wade Humeniuk <whumeniu+...@telus.net> wrote:
>>>
>>>>The question is... what are the SBCL group going to do about it?
>>>
>>>Absolutely nothing.
>>
>>Oh, look, Juho Snellman being an *sshole on comp.lang.lisp!*
>
>
> No. Wade was being an asshole, I was just being honest. Nothing is
> being done about this, nor will be done, unless the people who are
> having the problem cooperate. We don't even know for sure whether
> there *is* an sbcl problem there to be fixed.
>

Agreed, I was just being stupid because you took a Usenet flamewar to
the Web; there's a line there, I think, that got crossed.

As I said earlier, we cannot usefully even discuss this let alone work
on it until the actual problem is actually found and fixed. Proving both
those assertions wrong, TFB the Younger is now hard at work on the
unreported problem and making steady progress with his telepathic debugger.

This, btw, is why I am in favor of apologetic incident reports, ya never
know what someone with a good mental model of a system will come up with
just brainstorming places to look/try, even if the application is a
prime suspect. I mean, when all the code is our own, we do that all the
time. When debugging Module Y's use of Module X, it sure helps knowing the
internals of Module X when something goes splat.

The good news is how a blog or reddit thingy has this power to make
things happen. The bad news is if TFBtY is right and the deliverators
could have avoided the two months of futile debugging, let alone the
port to Erlang, just by sharing a little:

"Look, sorry about this, but I am about to slit my wrists over here, I
have looked at everything, I am sure the problem is on my end, I am just
wondering if [insert apology-soaked IR here] rings any bells for
anyone over there that might give me new ideas where to look in my
stupid unworthy application. I really apologize for this, do not even
respond if it is as useless as I expect. Thx!"

Feel free to use that next time, people. :)

ken

--

"As long as algebra is taught in school,
there will be prayer in school." - Cokie Roberts

"Stand firm in your refusal to remain conscious during algebra."
- Fran Lebowitz

"I'm an algebra liar. I figure two good lies make a positive."
- Tim Allen

"Algebra is the metaphysics of arithmetic." - John Ray

http://www.theoryyalgebra.com/

Duane Rettig

Mar 8, 2007, 4:18:29 PM
"Wolfram Fenske" <in...@gmx.net> writes:

> Does anyone know why CMU-CL and SBCL do that? This snippet only says
> that they think a conservative GC is probably not slower than a
> precise GC on x86. It doesn't say they think it's faster. That
> sounds to me like the reason was not efficiency. What was it then?

The tradeoff is runtime efficiency. A conservative gc is needed when
not all lisp locals (i.e. tagged values destined for the stack) are
initialized at the start of the function. Given register allocation
techniques that bound live ranges of variables only when they are
needed, it is quite common to see a stack location never touched by
the function as compiled. There is overhead in initializing such
variables; small, to be sure, but that overhead is constant and is a
function of the complexity of the whole function, rather than of the
portion of the algorithm that is active. So in optimizing a
particular larger algorithm, with conservative gc (i.e. lazy
initialization of variables) you could consider a function's
efficiency based loosely on the complexity of its data, whereas if
all variables must be pre-initialized, you would have to include the
function's number of internal variables in order to decide how to
maximize your run-time. That would tend to lead to the breaking up of
functions into smaller chunks, which shouldn't be necessary.

If the ratio of your run-time to gc-time is 70% or less, then perhaps
a conservative-gc is not a good choice. But if you get 97% or 98%
run-to-gc time, then eliminating the extra validation step is not
going to buy much.

Of course, the heap must be well-enough managed to be able to know
that what looks like a lisp object really is a lisp object, otherwise
the gc will introduce new "objects" that are really random bits that
happen to look like real lisp objects; these would then truly become
real lisp objects. A good conservative gc will know its heap well
enough to tell whether a bit pattern is or is not represented as a
lisp object in the heap. The only problem that this might pose is
that an object is thought to be live that is actually dead, but
because it was an object at one time, and resides still on the stack,
it lingers around longer than its actual lifetime. Of course,
even implementations that pre-initialize their locals are subject to
mis-diagnosing a local to be live when it is really dead, unless it
also nulls out the variable at the _end_ of its live range, or if it
provides its gc with extra live-range info on all registers.

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Wolfram Fenske

Mar 8, 2007, 9:07:50 PM
Duane Rettig <du...@franz.com> writes:

> "Wolfram Fenske" <in...@gmx.net> writes:
>
>> Does anyone know why CMU-CL and SBCL do that? This snippet only says
>> that they think a conservative GC is probably not slower than a
>> precise GC on x86. It doesn't say they think it's faster. That
>> sounds to me like the reason was not efficiency. What was it then?
>
> The tradeoff is runtime efficiency. A conservative gc is needed when
> not all lisp locals (i.e. tagged values destined for the stack) are
> initialized at the start of the function.

Ah. I've seen this in OCaml, which uses precise GC AFAICT. In the
parts of it that are written in C and in C extensions (FFI), you have
to use macros to declare local variables. Besides registering the
locals with the GC they also initialize them with NULL.

> Given register allocation techniques that bound live ranges of
> variables only when they are needed, it is quite common to see a
> stack location never touched by the function as compiled.

I don't quite understand this statement. Isn't that a contradiction
to the SBCL developers saying that they don't use a precise GC on x86
because it is register-poor? If there are few registers, many locals
will be read from and written to memory instead of a register. So it
should be more likely that their stack locations have to be touched
than on architectures with more registers. Or are you saying you
usually need more stack locations on x86 because fewer variables
reside in registers, and initializing all those stack locations would
be too expensive?

> There is overhead in initializing such variables; small, to be sure,
> but that overhead is constant

I was under the impression that memory access is more efficient on a
CISC architecture because it is needed more frequently than on RISC's
(I may be wrong, though. I'm not very knowledgeable in this area.).
This should reduce the overhead of initializing local variables with
NULL on x86.

> and is a function of the complexity of the whole function, rather
> than of the portion of the algorithm that is active. So in
> optimizing a particular larger algorithm, with conservative gc
> (i.e. lazy initialization of variables) you could consider a
> function's efficiency based loosely on the complexity of its data,
> whereas if all variables must be pre-initialized, you would have to
> include the function's number of internal variables in order to
> decide how to maximize your run-time. That would tend to lead to
> the breaking up of functions into smaller chunks, which shouldn't be
> necessary.

I see.

[...] (rest of the posting snipped)

Duane Rettig

Mar 8, 2007, 11:34:34 PM
"Wolfram Fenske" <in...@gmx.net> writes:

> Duane Rettig <du...@franz.com> writes:
>
>> Given register allocation techniques that bound live ranges of
>> variables only when they are needed, it is quite common to see a
>> stack location never touched by the function as compiled.
>
> I don't quite understand this statement. Isn't that a contradiction
> to the SBCL developers saying that they don't use a precise GC on x86
> because it is register-poor? If there are few registers, many locals
> will be read from and written to memory instead of a register. So it
> should be more likely that their stack locations have to be touched
> than on architectires with more registers. Or are you saying you
> usually need more stack locations on x86 because fewer variables
> reside in registers, and initializing all those stack locations would
> be too expensive?

None of the above. Registers have nothing to do with gc. Or rather,
you can treat all places which a function stores a variable as a
register, whether it resides on the stack or in a physical
register. The only difference between stack registers and hardware
registers is speed (which, in the case of risc machines, includes the
issue that most instructions only operate on hardware registers, and so
a "register" on the stack must be then moved to a real register before
the operation can be performed.).

What is relevant is that if a "register" (or a variable) contains a
tagged object, it must be looked at by the gc. And if the gc doesn't
know that what is in that location will _always_ be a tagged lisp
value, then it must (conservatively, if you will) validate the
value as an actual lisp object before forwarding the pointer.

>> There is overhead in initializing such variables; small, to be sure,
>> but that overhead is constant
>
> I was under the impression that memory access is more efficient on a
> CISC architecture because it is needed more frequently than on RISC's
> (I may be wrong, though. I'm not very knowledgeable in this area.).
> This should reduce the overhead of initializing local variables with
> NULL on x86.

RISC and CISC architectures optimize data movement somewhat
differently, but the problem is still the same; it takes a certain
amount of time to initialize those locations, and it does _not_ take
that time to not initialize those locations.

bradb

Mar 8, 2007, 11:49:01 PM
On Mar 8, 6:07 pm, "Wolfram Fenske" <i...@gmx.net> wrote:
> I was under the impression that memory access is more efficient on a
> CISC architecture because it is needed more frequently than on RISC's
> (I may be wrong, though. I'm not very knowledgeable in this area.).
> This should reduce the overhead of initializing local variables with
> NULL on x86.

I think this may have been true long ago when the speed of the CPU and
the speed of the memory bus were closer to each other. Nowadays, on
any processor (CISC or RISC) when you have to go out to memory you
take a pretty big speed hit.
Caches mitigate this speed hit to a degree. I would imagine that most
functions that have to hit the stack inside a loop will be hitting the
L1 cache pretty often.
I've not examined this in very much detail though, so I could be
wrong.

Cheers
Brad

D Herring

Mar 9, 2007, 12:47:11 AM
Wolfram Fenske wrote:
> I was under the impression that memory access is more efficient on a
> CISC architecture because it is needed more frequently than on RISC's
> (I may be wrong, though. I'm not very knowledgeable in this area.).
> This should reduce the overhead of initializing local variables with
> NULL on x86.

RISC machines introduced large numbers of registers to replace stacks or
other mechanisms used for CISC storage. Thus RISC machines were able to
avoid hitting main memory except to read new or flush old data.

Nowadays, almost all CPUs are RISC or CISC/RISC hybrids. Pure RISC
systems require large numbers of opcodes, causing potential bottlenecks
in the instruction memory. CISC machines alleviate this, but their
opcodes cannot be implemented efficiently in hardware. Thus the decode
stage of today's CISC machines actually generates an equivalent series
of "micro-ops" (RISC instructions).

Simultaneously, the CISC machine uses register renaming to map its small
number of public registers into a large, RISC-like set of internal
registers. This allows for efficient pipelining, and in many cases can
overcome the need to read/write temporaries to memory.

- Daniel

Hoping to see the days of arithmetic-encoded instruction sets and memory
buses.

George Neuner

Mar 10, 2007, 1:59:56 AM
On 7 Mar 2007 16:49:04 -0800, "michael...@gmail.com"
<michael...@gmail.com> wrote:

>Russell and others about the conservative GC: We talked about this
>possibility several times, but the sheer amount of uncollected garbage
>always led us away from the hypothesis. Deliverator takes up about
>180M once players have taken a few missions and convoys are flying
>around the universe; at worst it was reaching 600M+ within 12 hours.
>Could random bits in untagged data that look like references really
>account for it more than tripling in size?

Sure. If some word of data looks like a valid pointer into the heap,
a conservative collector will retain whatever it apparently points to.
Whether the collector can disambiguate false pointers depends on the
heap object representation and whether the system allows internal
pointers or only pointers to the beginning of an object.

Even when there is sufficient information to weed out wild false
pointers, a false pointer which appears to reference a valid object
can cause that object and anything it transitively references to be
retained.

Besides which, non-moving collectors are more prone to fragmentation -
using a two-level allocator helps a lot but doesn't eliminate the
problem. Given the right conditions, a non-moving system will
eventually start to fail allocations for lack of suitably sized blocks
even though, viewed globally, there may be plenty of free memory.

George
--
for email reply remove "/" from address

George Neuner

Mar 10, 2007, 2:26:34 AM

It's unlikely that the GC is confused about what's on the stack. It's
more likely that a number of long-lived values happen to be false
pointers into large data structures.

It seems hard to do by accident, but a conservative collector has to
consider any properly aligned root data to be a potential pointer.
The collector will have rules and heuristics to eliminate wild false
pointers - but given a large heap, it's surprisingly easy to find
groups of bytes that appear to be valid pointers.

The reason conservative collectors are usable is that most false
pointers are found in short-lived data - on the stack or in registers
- and the heap blocks they erroneously point to will be recycled in
subsequent collections.

Vassil Nikolov

Mar 10, 2007, 8:36:50 PM

On Thu, 08 Mar 2007 13:18:29 -0800, Duane Rettig <du...@franz.com> said:
| ...

| Of course, the heap must be well-enough managed to be able to know
| that what looks like a lisp object really is a lisp object, otherwise
| the gc will introduce new "objects" that are really random bits that
| happen to look like real lisp objects; these would then truly become
| real lisp objects. A good conservative gc will know its heap well
| enough to tell whether a bit pattern is or is not represented as a
| lisp object in the heap.

I suppose this question is of no more than academic interest, if
that, but would having an architecture that provides tagged words
in hardware help to avoid misidentifying objects?

George Neuner

Mar 11, 2007, 1:25:09 AM
On 10 Mar 2007 17:36:50 -0800, Vassil Nikolov
<vnikolo...@pobox.com> wrote:

I'm assuming you mean extra tag bits on memory words (ala Lisp
Machine) and not simply machines that ignore or trap on certain bits
when dereferencing pointers.

Whether tagged memory would help depends on the memory organization,
pointer implementation and whether the language allows interior
pointers. If pointers are implemented in such a way that the object's
header and base (which may be neither the same nor contiguous) can be
found directly then tagged memory is of little value.

Tagged memory could help when direct interior pointers are allowed.
If each word's tag indicated whether the word is within a currently
valid object, a conservative GC could immediately identify a false
interior pointer. Without such tagging, the GC has to locate the
object's header and verify that the address lies within the object's
extent - which can be time consuming.

However, performing such tag manipulation would mean the GC would have
to touch dead objects as well as live ones - it would run in time
proportional to the size of the heap rather than proportional to the
size of live data. Probably not worth it.

Frode Vatvedt Fjeld

Mar 11, 2007, 3:38:22 PM
> > "bradb" <brad.beveri...@gmail.com> writes:
> > > It seems to me that it would be possible for the compiler to
> > > track Lisp/non-Lisp registers if it were required. For example,
> > > you could store a word on the stack that had status bits
> > > determining what a register was used for. Then when the GC
> > > scavenged the stack it would look at this word and choose which
> > > saved values were actually objects. I'm curious if the scheme
> > > I've just outlined would actually work?
> >
> On Mar 7, 3:36 pm, Frode Vatvedt Fjeld <fro...@cs.uit.no> wrote:

> > I've done this and it works fine (except it's not stored on the
> > stack, but the principle is the same). I think perhaps Corman Lisp
> > uses this technique also. There is of course an overhead to
> > consider, if you have to change these status bits very frequently
> > (like every few instructions), that can be prohibitive.

"bradb" <brad.be...@gmail.com> writes:

> Would you mind a small description on how you do this? I was
> thinking that with the right kind of GC the status bits would only
> need updating before function calls.

Well, the resolution would typically be finer than a function. A
(thread-)global variable holds the register usage status (for example
there is one bit per register, indicating GC-root on/off). The
compiler will know whether it requires non-standard register usage,
and when required it emits instructions to set and re-set the
(thread-)global variable.

In my particular system (www.common-lisp.net/project/movitz) there are
exactly two modes of register usage: normal mode where all registers
except ECX are GC roots, and "secondary" mode where all registers
except EAX, ECX, and EDX are not GC roots (leaving EBX, ESI, and
EDI). Because there are two modes, only a single bit is required to
choose between them. An unused bit in the EFLAGS status register is used
for this. (Another, more heavy-weight, mechanism is used when this
scheme does not suffice.)

The same concept can be used for stack-frames, but here there would be
a per-function status variable. This could be a constant slot
associated with the function object, meaning there would be no extra
overhead in setting and re-setting the status bits.

--
Frode Vatvedt Fjeld

Stefan Scholl

Mar 18, 2007, 11:55:38 AM
By the way: Most people who learn Common Lisp have fun learning
and using programming languages. It shouldn't be surprising when CL
users try another "exotic" language for their project.


--
Web (en): http://www.no-spoon.de/ -*- Web (de): http://www.frell.de/
