
Surprising performance results (or Larceny Rocks!)


David Rush

Jun 26, 2001, 1:40:19 PM
Last night I finally got a Stalin version of a very compute-bound
program built (Yay!). I decided that, just for grins, I would
benchmark it against several of the other Scheme implementations I
have lying around (At last count I had 9 installed on my system).

The program computes the distance histogram of a list of words: that
is to say it calculates the string edit distance between all pairs of
words in the list. The list is originally read from a file (I've been
processing /usr/dict/words). This is an n-squared problem, where n is
the number of words in the list, and roughly n^4 in the full text length.
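The S2 source itself is posted separately; purely as an illustration of the computation described above (a hypothetical sketch in Python, not the Scheme code actually being benchmarked), the naive, allocation-intensive approach allocates a fresh DP table for every pair:

```python
# Hypothetical sketch of the benchmark's computation; the real code is
# Scheme (S2) and is not reproduced here.

def edit_distance(a, b):
    """Classic dynamic-programming string edit distance (Levenshtein).

    A fresh (len(a)+1) x (len(b)+1) table is allocated on every call,
    mirroring the allocation-intensive style described in the post.
    """
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def distance_histogram(words):
    """Histogram of edit distances over all unordered pairs: O(n^2) pairs."""
    hist = {}
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            k = edit_distance(words[i], words[j])
            hist[k] = hist.get(k, 0) + 1
    return hist
```

Each pairwise distance is itself quadratic in the word lengths, which is where the rough n^4 bound in the full text length comes from.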

For my benchmark I used 1000 words randomly culled from
/usr/dict/words on Solaris 2.6. All the code was run on an unladen
swallow^W Sun Ultra-60 running Solaris 2.8.

And the winner is: Larceny!

larceny: 97.30u 0.29s 1:37.73 99.8%
stalin: 113.03u 0.04s 1:53.13 99.9%
bigloo: 235.63u 0.21s 3:56.04 99.9%
chicken: 410.29u 3.80s 6:54.17 99.9%
PLT (bytecode): 689.90u 0.67s 11:30.64 99.9%

My theory as to why:

The code is written in a fairly naive fashion. In particular it is
allocation-intensive (I allocate the full DP array for each distance
calculation rather than reusing/resizing a single array), giving an
edge to Larceny and its advanced collection techniques over Stalin
and Bigloo, which use the Boehm collector. Chicken's performance was
somewhat disappointing, but I'm not familiar enough with the
implementation to understand its characteristics. I think that PLT
would be rather faster if I compiled it to native code, but I haven't
had the time to figure that one out yet.

If anyone's interested, I'll post the code and dataset on the S2 web
site (I did the multi-platforming with S2's support for SRFI-0 and
modules) so you can check for yourselves.

david rush
--
In a tight spot, you trust your ship or your rifle to get you through,
so you refer to her affectionately and with respect. Your computer? It
would just as soon reboot YOU if it could. Nasty, unreliable,
ungrateful wretches, they are. -- Mike Jackmin (on sci.crypt)

felix

Jun 26, 2001, 4:20:31 PM

David Rush wrote in message ...
>[...] and Bigloo which use the Boehm collector. Chicken's performance was

>somewhat disappointing, but I'm not familiar enough with the
>implementation to understand its characteristics.

Unfortunately SPARC isn't one of Chicken's favorite platforms
(Might be those register windows).

You can try to install a statically linked build and compile with GCC's
"-mflat" command line option. This might give an improvement.

What optimization options have you used?

>
>If anyone's interested, I'll post the code and dataset on the S2 web
>site (I did the multi-platforming with S2's support for SRFI-0 and
>modules) so you can check for yourselves.


Yeah, that would be nice. I would like to try it on x86 hardware.


cheers,
felix


Jeffrey Siegal

Jun 26, 2001, 7:42:25 PM
What compiler options did you use?

Ji-Yong D. Chung

Jun 27, 2001, 3:32:49 AM
Hi

> And the winner is: Larceny!
>
> larceny: 97.30u 0.29s 1:37.73 99.8%
> stalin: 113.03u 0.04s 1:53.13 99.9%
> bigloo: 235.63u 0.21s 3:56.04 99.9%
> chicken: 410.29u 3.80s 6:54.17 99.9%
> PLT (bytecode): 689.90u 0.67s 11:30.64 99.9%

Does this mean that you were running the MzScheme
interpreter? If so, then MzScheme is pretty fast --
after all, it is an interpreter, right?


David Rush

Jun 27, 2001, 2:27:46 AM

Yes, to the best of my knowledge, the histogram was calculated using
the PLT bytecode interpreter. I've always found the PLT suite to be very
fast in bytecode; for S2 (a *heavy* user of closures and
continuations) it is the second-fastest host Scheme. In fact, I was a
little disappointed with mzscheme as I was used to better performance
from it, but again, this is a very different application.

david rush
--
To get anywhere with programming we must be free to discuss and
improve subjective phenomena. and leave the objective metrics to
resultants such as bug reports.
-- The Programmer's Stone (Alan Carter & Colston Sanger)

Sven Hartrumpf

Jun 27, 2001, 3:59:07 AM
Yes, larceny is fast.
BUT: for larger programs you can't use (benchmark-block-mode #t) any more.
(The error message from compile-file will look like this:
Error: SPARC assembler: value too large in Memory instruction: 4097 = 4097)
Then, performance will drop.
(This was logged as 'bug #92' back in March 1999.)
Sven
--
Computer Science VII
University of Hagen
58084 Hagen - Germany

Harvey J. Stein

Jun 27, 2001, 8:57:33 AM
David Rush <ku...@bellsouth.net> writes:

> For my benchmark I used 1000 words randomly culled from
> /usr/dict/words on Solaris 2.6. All the code was run on an unladen
> swallow^W Sun Ultra-60 running Solaris 2.8
>
> And the winner is: Larceny!
>
> larceny: 97.30u 0.29s 1:37.73 99.8%
> stalin: 113.03u 0.04s 1:53.13 99.9%
> bigloo: 235.63u 0.21s 3:56.04 99.9%
> chicken: 410.29u 3.80s 6:54.17 99.9%
> PLT (bytecode): 689.90u 0.67s 11:30.64 99.9%

With bigloo, did you make its numerics non-generic? When I tested
hobbit, bigloo, stalin & gambit on a prime number sieve, using
non-generic arithmetic (& a little tweaking of the hobbit code) made
them all about the same speed. With generic arithmetic, bigloo &
gambit slowed down considerably.

--
Harvey Stein
Bloomberg LP
hjs...@bfr.co.il

David Rush

Jun 27, 2001, 9:13:40 AM
David Rush <ku...@bellsouth.net> writes:
> For my benchmark I used 1000 words randomly culled from
> /usr/dict/words on Solaris 2.6. All the code was run on an unladen
> swallow^W Sun Ultra-60 running Solaris 2.8

As a number of you mentioned, I neglected to put the compiler options
used into the message. There were two reasons for that: 1) I pretty
much used only the "out-of-the-box" versions of the compilers, and 2)
my wife called me on the phone telling me to come home in the middle
of composing the message ;)

With option details:

Scheme     time      Options
--------   --------  ---------------
larceny      97.30u  Note: this was the `thesis' release, not 1.0a1
stalin      113.03u  stalin -On -t -d1 dhist.stalin
bigloo      235.63u  bigloo dhist.bigloo
chicken     410.29u  chicken -optimize -usual-integrations
                             -unsafe -fixnum-arithmetic
PLT         689.90u  mzc -z --unsafe-skip-tests

vanilla
chicken    1031.29u  chicken dhist.chicken

As you can see, I did turn on some optimizations for PLT and
Chicken. I was giving PLT some extra slack because it was running
bytecode. Chicken was very slow without optimizations, but more about
that elsewhere.

david rush
--
Einstein said that genius abhors consensus because when consensus is
reached, thinking stops. Stop nodding your head.
-- the Silicon Valley Tarot

David Rush

Jun 27, 2001, 10:32:03 AM
"felix" <felixu...@freenet.de> writes:
> David Rush wrote in message ...
> >[...] and Bigloo which use the Boehm collector. Chicken's performance was
> >somewhat disappointing, but I'm not familiar enough with the
> >implementation to understand its characteristics.

I'm going to slice and dice your message, Felix.

> You can try to install a statically linked build and compile with GCC's
> "-mflat" command line option. This might give an improvement.
>
> What optimization options have you used?

vanilla chicken: chicken dhist.chicken -output-file dhist-chicken.c
chicken: chicken -optimize -usual-integrations
-unsafe -fixnum-arithmetic
flattened chicken: same as plain chicken, but with -mflat to gcc-2.95.1
as you suggested. This made a fair bit of a difference.

In Summary:

          larceny:   97.30u 0.29s  1:37.73 99.8%
           stalin:  113.03u 0.04s  1:53.13 99.9%
           bigloo:  235.63u 0.21s  3:56.04 99.9%
flattened chicken:  249.84u 1.95s  4:11.84 99.9%
          chicken:  410.29u 3.80s  6:54.17 99.9%
  vanilla chicken: 1031.29u 4.62s 17:16.06 99.9%

It's nice to see the flattened chicken being competitive with
Bigloo. Perhaps my theories (below) are all wet after all.

> Unfortunately SPARC isn't one of Chicken's favorite platforms
> (Might be those register windows).

It might be, but I also suspect that using the GC to eliminate dead
stack frames resulting from tail calls causes a particularly high
impact for this program. It's a heavy allocator with moderate closure
use (all looping is based on combinators). I theorize that the
interaction of the stack and the nursery for this program is
particularly pernicious.

Do you do *any* TR-elimination in your compiler? I'm thinking mainly
of self-tail recursion, as that is fairly easily translated into C
with a goto. If not, this might well be a worthwhile optimization.
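The transformation being asked about can be shown language-neutrally (a hand-written sketch in Python with hypothetical names, not anything Chicken actually emits): a self-tail call becomes parameter rebinding plus a jump back to the top of the procedure.

```python
# Hypothetical illustration of self-tail-call elimination.

def sum_to_rec(n, acc=0):
    # The recursive call is in tail position...
    if n == 0:
        return acc
    return sum_to_rec(n - 1, acc + n)

def sum_to_loop(n, acc=0):
    # ...so a compiler can replace it with rebinding the parameters
    # and jumping back to the top (a goto in generated C; a while
    # loop here). No stack frame is consumed per iteration.
    while True:
        if n == 0:
            return acc
        n, acc = n - 1, acc + n
```

The looped version runs in constant stack space, where the recursive one would exhaust the stack for large n in an unoptimized translation.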

> >If anyone's interested, I'll post the code and dataset on the S2 web
> >site (I did the multi-platforming with S2's support for SRFI-0 and
> >modules) so you can check for yourselves.

There's a tarball up and available from:
<http://mangler.sourceforge.net/benchmarks.html>

It includes the original source, the dataset, and a srfi-0 compliant(?)
mangling which (I hope) will work under the Chicken compiler. I did
*not* use the srfi-0 version. Chicken is a directly supported target
in S2, so it performed the cond-expands by default. I think this
represents an S2 bug, now that I'm actually paying attention, but it
is relatively benign.

david rush
--
... it's just that in C++ and the like, you don't trust _anybody_,
and in CLOS you basically trust everybody. the practical result
is that thieves and bums use C++ and nice people use CLOS.
-- Erik Naggum

Matthias Blume

Jun 27, 2001, 10:55:55 AM
David Rush wrote:

> Do you do *any* TR-elimination in your compiler? I'm thinking mainly
> of self-tail resursion, as that is fairly easily translated into C
> with a goto. If not, this might well be a worthwhile optimization.

I guess you can't unless either

a) the loop in question is not allocating at all
b) you are willing to allocate in a more traditional way from the
heap while in that loop

Of course, a) might happen (e.g., in some purely arithmetic computation)
and b) can be done (although it requires some care to avoid inefficiencies,
not to mention that the original point of the CotMTA method would be lost).

Matthias

felix

Jun 27, 2001, 2:41:14 PM

David Rush wrote in message ...
>
> vanilla chicken: chicken dhist.chicken -output-file dhist-chicken.c
> chicken: chicken -optimize -usual-integrations
> -unsafe -fixnum-arithmetic
>flattened chicken: same as plain chicken, but with -mflat to gcc-2.95.1
> as you suggested. This made a fair bit of a difference.
>
>In Summary:
> larceny: 97.30u 0.29s 1:37.73 99.8%
> stalin: 113.03u 0.04s 1:53.13 99.9%
> bigloo: 235.63u 0.21s 3:56.04 99.9%
>flattened chicken: 249.84u 1.95s 4:11.84 99.9%
> chicken: 410.29u 3.80s 6:54.17 99.9%
> vanilla chicken: 1031.29u 4.62s 17:16.06 99.9%
>
>It's nice to see the flattened chicken being competitive with
>Bigloo. Perhaps my theories (below) are all wet after all.


:-)

>
>> Unfortunately SPARC isn't one of Chicken's favorite platforms
>> (Might be those register windows).
>
>It might be, but I also suspect that using the GC to eliminate dead
>stack frames resulting from tail calls causes a particularly high
>impact for this program. It's a heavy allocator with moderate closure
>use (all looping is based on combinators). I theorize that the
>interaction of the stack and the nursery for this program is
>particularly pernicious.
>


This is certainly very true. Tuning for optimal performance
is somewhat difficult. For example, on (current) x86 platforms,
it gives quite a performance improvement to set the nursery
size to the size of the L1 cache. But this is also dependent
on the C compiler. With GCC, a 16k nursery is a lot faster
than a 300k nursery in code compiled with Microsoft C
(which on first impression seems to generate better code), even
if the GCC version does a lot more minor GCs!

Heavy allocation is OK, since it can be done on the stack, which
doesn't slow down the processor as much as allocation on
the heap (or rather: the secondary heap, with the stack being
the primary one). Chicken allocates like *crazy*; this is a fact. But
it seems that the allocation cost can at least be partly compensated for.

>Do you do *any* TR-elimination in your compiler? I'm thinking mainly
>of self-tail resursion, as that is fairly easily translated into C
>with a goto. If not, this might well be a worthwhile optimization.


Yes, it does that. But only very tight loops can be optimized in that
way (see the followup on Matthias' posting).

That Chicken is dog-slow without optimizations comes from the
fact that every continuation created disrupts the control flow (every
continuation creates a new C function). Just use a simple program
(like "tak") and compile it with "-debug 7" with and without optimizations.
This shows you the intermediate CPS representation.
(Beware, though: Not for the faint of heart!)

BTW, I would be very interested whether "-block -optimize-leaf-routines"
makes a difference.


cheers,
felix


felix

Jun 27, 2001, 1:56:22 PM
>
>Scheme time Options
>-------- ------- ---------------
>larceny 97.30u Note: this was the `thesis' release, not 1.0a1
>stalin 113.03u stalin -On -t -d1 dhist.stalin
>bigloo 235.63u bigloo dhist.bigloo
>chicken 410.29u chicken -optimize -usual-integrations
> -unsafe -fixnum-arithmetic
>PLT 689.90u mzc -z --unsafe-skip-tests
>
>vanilla
>chicken 1031.29u chicken dhist.chicken
>

Try the "-block" option. You also might want to try "-optimize-leaf-routines".


cheers,
felix


felix

Jun 27, 2001, 2:29:46 PM

Matthias Blume wrote in message <3B39F3FB...@research.bell-labs.com>...


A very sensible point. I would like to show how Chicken translates a few
sample programs:

Here is the venerable TAK:

(define (tak x y z)
  (if (not (< y x))
      z
      (tak (tak (- x 1) y z)
           (tak (- y 1) z x)
           (tak (- z 1) x y) ) ) )

Compiled with "-benchmark-mode -block", we get this (for the "tak"
procedure):

static C_word C_fcall f16(C_word t0,C_word t1,C_word t2,C_word t3){
  C_word tmp;
  C_word t4;
  C_word t5;
  C_word t6;
  C_word t7;
  C_word t8;
  C_word t9;
  C_word t10;
  C_word t11;
  C_word t12;
  C_word t13;
  C_word t14;
  C_word t15;
loop:
  t4=t2;
  t5=t1;
  if(C_truep((C_word)C_fixnum_lessp(t4,t5))){
    t6=(C_word)C_u_fixnum_difference(t1,C_fix(1));
    t7=f16(t0,t6,t2,t3);
    t8=(C_word)C_u_fixnum_difference(t2,C_fix(1));
    t9=f16(t0,t8,t3,t1);
    t10=(C_word)C_u_fixnum_difference(t3,C_fix(1));
    t11=f16(t0,t10,t1,t2);
    t13=t7;
    t14=t9;
    t15=t11;
    t1=t13;
    t2=t14;
    t3=t15;
    goto loop;}
  else{
    return(t3);}}

Here the compiler can infer that "tak" is a leaf routine, one that
only calls itself or its direct continuation (i.e. a return). This is
only possible in non-allocating procedures, since any data allocated on
the stack will of course be lost on "return", as Matthias points out correctly.

Here is an allocating procedure:

(define (iota n)
  (let loop ([i n] [lst '()])
    (if (zero? i)
        lst
        (loop (sub1 i) (cons i lst)) ) ) )

Compiled with the same options we get for the "loop":

static void C_fcall f25(C_word t0,C_word t1,C_word t2,C_word t3){
  C_word tmp;
  C_word t4;
  C_word t5;
  C_word t6;
  C_word t7;
  C_word t8;
  C_word t9;
  C_word t10;
  C_word *a;
loop:                     /* <-- here we go in every iteration */
  a=C_alloc(3);           /* allocate memory for a pair - from the nursery, please */
  if(!C_stack_probe(a)){  /* check stack and do a GC, if needed */
    C_adjust_stack(4);
    C_rescue(t0,3);
    C_rescue(t1,2);
    C_rescue(t2,1);
    C_rescue(t3,0);
    C_reclaim(trf25,NULL);}
  t4=(C_word)C_eqp(t2,C_fix(0));  /* (zero? i) */
  if(C_truep(t4)){
    t5=t1;
    ((C_proc2)(void*)(*((C_word*)t5+1)))(2,t5,t3);}  /* call/jump-to continuation with result */
  else{
    t5=(C_word)C_u_fixnum_decrease(t2);  /* (sub1 i) */
    t6=(C_word)C_a_i_cons(&a,2,t2,t3);   /* (cons i lst) - we use the allocated memory in "a" */
    t8=t1;                               /* to tail-call... */
    t9=t5;
    t10=t6;
    t1=t8;
    t2=t9;
    t3=t10;
    goto loop;}}

("C_alloc" is just the same as "alloca(<n> * WORDSIZE)")

As can be seen here, allocation on the stack is no problem inside a loop.
At least it seems to work on all C compilers I've tried (GCC and Microsoft C).


cheers,
felix


Jeffrey Siegal

Jun 27, 2001, 5:24:59 PM
David Rush wrote:
> With option details
>
> Scheme time Options
> -------- ------- ---------------
> larceny 97.30u Note: this was the `thesis' release, not 1.0a1
> stalin 113.03u stalin -On -t -d1 dhist.stalin

I don't remember precisely, but I think by default stalin does not pass
any optimization options to gcc, while something like "-O3
-fomit-frame-pointer" will generate significantly better code, at least
on x86 (not sure what options are best for SPARC). This would give an
advantage to Larceny, which does its own code generation.

> chicken 410.29u chicken -optimize -usual-integrations
> -unsafe -fixnum-arithmetic

If you're going to use -unsafe and -fixnum-arithmetic with chicken, then
to be fair you should use similar options (or declarations) with the
other compilers.

David Rush

Jul 23, 2001, 12:51:59 PM
Sorry it's been so long, folks, but RL intervened a bit. Here is the
follow-up I promised to some of you concerning the performance of
various Scheme implementations on a real-life program that I was
using for statistical analysis.

David Rush <ku...@bellsouth.net> writes:
> For my benchmark I used 1000 words randomly culled from
> /usr/dict/words on Solaris 2.6. All the code was run on an unladen
> swallow^W Sun Ultra-60 running Solaris 2.8
>
> And the winner is: Larceny!
>
> larceny: 97.30u 0.29s 1:37.73 99.8%
> stalin: 113.03u 0.04s 1:53.13 99.9%
> bigloo: 235.63u 0.21s 3:56.04 99.9%
> chicken: 410.29u 3.80s 6:54.17 99.9%
> PLT (bytecode): 689.90u 0.67s 11:30.64 99.9%
>
> My theory as to why:

Was complete and utter shite. Issue #1 was simply naive use of the
compiler options. Between diddling Scheme and gcc options I was able
to speed up the program anywhere from twofold to eightfold. On
average, gcc options made more of a difference than Scheme compiler
options, but I didn't bother to keep the intermediate results to prove
it[1]. I believe that this parallels the experience of the OCaml
community, who (IIRC) have a fairly simple (high-level) optimizer but
a bang-up code generator.

I would like to take a minute to defend the validity of my original
results. Obviously they are fairly useless from an academic POV;
however they are indicative of the (for lack of a better term) 'user
experience'. I would guess that 80% of software development is done in
a 'make it work'/'ship it' mode. Very little time is spent on
optimization[2] for two reasons: bugs and bugs. The first 'bugs' is
historical; I remember days when turning on compiler optimizations
meant that you had a good chance of getting obviously incorrect code
generated. This still happens occasionally, although not nearly as
often. The second 'bugs' derives from the fact that optimization can
expose real bugs and/or make finding `hidden' bugs (currently
undiscovered but which would have been bugs w/out optimization) very
difficult. These effects combine to make me (and many other engineers)
trust that the compiler-writers have given us `optimal' (for certain
values of optimal) settings for the compiler `out-of-the-box'.

Anyway, this is clearly not true in the Scheme world. Perhaps because
it is much easier to perform correctness-preserving translations in
Scheme while it is still difficult to maintain debuggability, Scheme
implementors have weighted the compilers in favor of debugging. Either
way, things are clearly much better than they used to be.

Performance issue #2 was, as Jeffrey Mark Siskind pointed out to
me in his exploration of Stalin's poor performance on this test,
my sloppy usage of the Scheme type system. In fact, nearly all systems
had significant performance gains from removing run-time type
checks. Stalin, of course, made the biggest gains in this area, running
14 times faster after rationalizing the type usage in the program[3].

Anyway, on to the results. Unfortunately this is not exactly an apples
to apples comparison because the machine I was using even ran the
unmodified code slower today. E.g.

Larceny (previous): 97.30u 0.29s 1:37.73 99.8%
Larceny (today): 103.20u 0.61s 1:44.45 99.3%

Gremlins, I guess. This means that by tweaking Larceny's options, I
only got another 6% out of it as can be seen below.

So with the best optimizations I have found thus far:

Bigloo: 28.09u 0.03s 0:28.17 99.8%
Chicken: 223.43u 1.98s 3:46.14 99.6%
gambit: 47.24u 1.23s 0:48.83 99.2%
larceny: 97.68u 0.42s 1:38.30 99.7%
PLT: 706.91u 0.74s 11:49.24 99.7%
Stalin: 7.94u 0.08s 0:08.13 98.6%

The winner is: Stalin! by a factor of three over Bigloo.

Many thanks go out to the various people who suggested optimizations,
but particularly to Jeffrey Mark Siskind and Brad Lucier, who
corresponded with me at length about Stalin and Gambit, respectively.

I will be re-posting the code, results and build instructions Real
Soon Now, for those of you who think you can improve upon these
results ;)

david rush

[1] Actually, I kept them when I added Gambit-C to the test suite. The
difference between naive gcc and best gcc (for sparc) was a 30%
speed up. The reported Gambit performance improves on this by also
using -D___SINGLE_HOST, which I assume helps the macro magic in
gambit.h avoid trampoline calls.
[2] In 18 years professionally, I have only worked at 2 companies
that bother with optimization.
[3] Which involved adding SRFI-9 support to S2 so that all the Schemes
could participate. I haven't yet been able to figure out how to
use Gambit's native structure and I haven't yet had time to
incorporate Bigloo's. In both cases I am sceptical about the
cost/benefits as using the native structures won't eliminate the
type *checking*, the way that Stalin can.

--
Thieves respect property. They merely wish the property to become
their property that they may more perfectly respect it.
-- The Man Who Was Thursday (G. K. Chesterton)

bri...@zipcon.net

Jul 24, 2001, 1:39:19 AM
>>>>> "David" == David Rush <ku...@bellsouth.net> writes:

David> [3] Which involved adding SRFI-9 support to S2 so that all the Schemes
David> could participate. I haven't yet been able to figure out how to
David> use Gambit's native structure and I haven't yet had time to
David> incorporate Bigloo's. In both cases I am sceptical about the
David> cost/benefits as using the native structures won't eliminate the
David> type *checking*, the way that Stalin can.

Not to be flippant, but what is there to figure out about Gambit's
structures? The documentation makes their use very clear.

However you are correct in assuming that they will not eliminate the
type-checks. Since they are implemented as "generic" vectors you'll
lose efficiency, generally speaking.

Not always, though. Let's say, for instance, that you were using a
structure to store floating point values, then doing:
(declare
  (not safe)
  (flonum))

(* (foo-field1 foo-inst) (bar-field2 bar-inst))

This would optimize to eliminate safety and maximize speed. You can
cook up similar instances. If you've been talking to Brad about
optimizing for Gambit, you've been talking to the right person :-)

Brian

--

"There is no right place for the dead to live."

-- Kai, last of the Brunnen-G

David Rush

Jul 24, 2001, 7:42:13 AM
bri...@zipcon.net writes:
> >>>>> "David" == David Rush <ku...@bellsouth.net> writes:
>
> David> [3] Which involved adding SRFI-9 support to S2 so that all the Schemes
> David> could participate. I haven't yet been able to figure out how to
> David> use Gambit's native structure and I haven't yet had time to
> David> incorporate Bigloo's. In both cases I am sceptical about the
> David> cost/benefits as using the native structures won't eliminate the
> David> type *checking*, the way that Stalin can.
>
> Not to be flippant, but what is there to figure out about Gambit's
> structures ? The documentation makes their use very clear.

How to map their automatically generated function names onto
user-specified names under SRFI-9. It's not obvious that
define-structure is callable below top-level (to bind the names in a
local scope and set! them to the SRFI-9 names).

PLT has a lovely multi-level approach which (at the bottom) uses
multiple-values to give the user all of the generated functions. You
can then bind them to whatever you want.

bri...@zipcon.net

Jul 24, 2001, 10:21:53 PM
>>>>> "David" == David Rush <ku...@bellsouth.net> writes:

David> How to map their automatically generated function names onto
David> user-specified names under SRFI-9. It's not obvious that
David> define-structure is callable below top-level (to bind the names in a
David> local scope and set! them to the SRFI-9 names).

Structures in Gambit are implemented as define-macros. They generate
(define ...) expressions, so in theory, you could use define-structure
inside a let and then set! to re-bind.
