
New commercial product written in Lisp looks like a winner.


Bruce Tobin

Aug 20, 1998
Nendo, a 3D modeling and painting program for Win32 and Solaris from
Nichimen Graphics, is generating a lot of favorable comments on the
graphics newsgroups. Samples:

"For me, it looks like it's going to feel a lot more like working
directly on a real sculpture, and that I'll spend less time struggling
to select a desired polygon"

"Frankly, I am quite taken by the program because of its speed."

Can anyone confirm that Nendo is written entirely/mostly in Lisp?
AFAIK all of Nichimen's products are; they don't advertise programming
jobs for anything other than Lisp programmers.

The product costs $99.

Info (rather sketchy) available at http://www.nichimen.com

Barry Margolin

Aug 20, 1998
In article <35DC12D7...@infinet.com>,
Bruce Tobin <bto...@infinet.com> wrote:
>Can anyone confirm that Nendo is written entirely/mostly in Lisp? AFAIK
>all of Nichimen's
>products are; they don't advertise programming jobs for anything other
>than Lisp programmers.

I'm not surprised. I believe Nichimen Graphics is descended from Nichimen
Symbolics, Symbolics's Japanese subsidiary. They were spun off and took
over development of the S-Products a year or two before Symbolics declared
bankruptcy. Here's what their Corporate Profile web page says:

NGI is owned by Nichimen Corporation of Japan, a $60 billion trading
company, and, for many years, was the sole Japanese distributor of
Symbolics Inc.'s hardware and software. NGI's technology is based upon the
one invented and developed by the Graphics Division of Symbolics
Inc. Shortly after NGI was formed three years ago, the company ported the
computer graphics software from Symbolics workstations to Silicon Graphics
workstations and significantly enhanced it to form its flagship product,
N-World.

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Cambridge, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.

Cranecoyne

Aug 21, 1998
>From: Barry Margolin <bar...@bbnplanet.com>
>Date: Thu, Aug 20, 1998 12:11 EDT
>Message-id: <6VXC1.7$DY4.2...@burlma1-snr1.gtei.net>

>
>In article <35DC12D7...@infinet.com>,
>Bruce Tobin <bto...@infinet.com> wrote:
>>Can anyone confirm that Nendo is written entirely/mostly in Lisp? AFAIK
>>all of Nichimen's
>>products are; they don't advertise programming jobs for anything other
>>than Lisp programmers.
>
>I'm not surprised. I believe Nichimen Graphics is descended from Nichimen
>Symbolics, Symbolics's Japanese subsidiary. They were spun off and took
>over development of the S-Products a year or two before Symbolics declared
>bankruptcy. Here's what their Corporate Profile web page says:

Nichimen's professional-level products are written almost entirely in
Lisp. But Nendo (which is aimed at the consumer market) is written in
C++. Nendo does share the same 3D modeling paradigm, underlying
winged-edge data structure, and context-sensitive menus as N-Geometry,
our high-end Lisp-based 3D modeler. So, though it isn't written in
Lisp, Nendo does owe a lot to some methodologies that grew out of Lisp.
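
[For readers who haven't met the winged-edge representation, here is a
rough Common Lisp sketch; the slot names are illustrative only, not
taken from N-Geometry or Nendo:]

```lisp
;; Hypothetical sketch of a winged-edge record (not actual Nichimen
;; code).  Each edge stores its two endpoints, its two adjacent faces,
;; and the four "wing" edges that precede/follow it when walking
;; around each adjacent face, which makes adjacency queries cheap.
(defstruct w-edge
  start-vertex end-vertex   ; endpoints
  left-face right-face      ; adjacent faces
  left-prev left-next       ; wings around the left face
  right-prev right-next)    ; wings around the right face
```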

And yes, Nichimen Graphics software is descended from the Symbolics S-Products.

- Bob Coyne (co...@nichimen.com)

Harley Davis

Aug 21, 1998

Cranecoyne wrote in message
<199808210546...@ladder01.news.aol.com>...

>Nichimen's professional-level products are written almost entirely in
>Lisp. But Nendo (which is aimed at the consumer market) is written in
>C++. Nendo does share the same 3D modeling paradigm, underlying
>winged-edge data structure, and context-sensitive menus as N-Geometry,
>our high-end Lisp-based 3D modeler. So, though it isn't written in
>Lisp, Nendo does owe a lot to some methodologies that grew out of Lisp.


Hi,

Could you comment on the reasons and expectations that led you to develop
the lower-end product in C++ rather than Lisp, and whether you still believe
that those reasons are valid now that the product is shipping?

Thanks!

-- Harley

Cranecoyne

Aug 22, 1998
>From: "Harley Davis" <spamless_davis@spamless_ilog.com>
>Date: Fri, Aug 21, 1998 18:53 EDT

>
>Cranecoyne wrote in message
><199808210546...@ladder01.news.aol.com>...
>>Nichimen's professional-level products are written almost entirely in
>>Lisp. But Nendo (which is aimed at the consumer market) is written in
>>C++. Nendo does share the same 3D modeling paradigm, underlying
>>winged-edge data structure, and context-sensitive menus as N-Geometry,
>>our high-end Lisp-based 3D modeler. So, though it isn't written in
>>Lisp, Nendo does owe a lot to some methodologies that grew out of Lisp.
>
>Could you comment on the reasons and expectations that led you to develop
>the lower-end product in C++ rather than Lisp, and whether you still believe
>that those reasons are valid now that the product is shipping?

Nendo was originally written to run on the Nintendo 64 game console. The
development tools for the Nintendo are all in C/C++. Also, the machine has
severe memory restrictions so the code had to be as compact as possible,
another reason to use C. (BTW, the Nintendo version is not yet released,
due to various circumstances.)

These reasons are still valid for the Nintendo platform. For the PC,
they don't really apply. The biggest drawback to doing it in Lisp
would be the difficulty of finding Lisp programmers with computer
graphics experience.

- Bob

Rainer Joswig

Aug 22, 1998
In article <199808220633...@ladder03.news.aol.com>,
crane...@aol.com (Cranecoyne) wrote:

> Nendo was originally written to run on the Nintendo 64 game console. The
> development tools for the Nintendo are all in C/C++. Also, the machine has
> severe memory restrictions so the code had to be as compact as possible,
> another reason to use C. (BTW, the Nintendo version is not yet released,
> due to various circumstances.)

Hmm, sounds like a cool program for my Nintendo 64.

> These reasons are still valid for the Nintendo platform. For the PC,
> they don't really apply.

How about a Lisp for the programmer version of the Sony Playstation?
They have a machine you can develop your own software for.
What is the story about the upcoming SEGA console based on Windows CE?
Will this thing be programmer friendly?

> The biggest drawback to doing it in Lisp would be the
> difficulty of finding Lisp programmers with computer graphics experience.

Hmm. This is a recurring topic in this newsgroup. Is it
a general problem? Specific to the US? How about some
development in Germany? ;-) Is it a payment problem?
Wrong focus in education? Is it Lisp's reputation and
its small market which make a personal focus on Lisp
risky (maybe)? Are Lisp tools not accessible enough (do they
need to be more visual?)? Lack of recent literature with
real-world Lisp code/examples? Are companies not
aggressive enough in giving people motivation
to learn Lisp (free devtools? open source?)?
No/wrong strategies to grow a market?

If this lack of developers is really hurting more than one company,
why not, for example, have a joint booth at CeBIT and show
the ultra-cool products (I guess there are some) to the press and the
public, attracting people? If nobody admits that they are using
Lisp, no wonder few (press, students, ...) are interested.

MARKETING? Anybody heard about the upcoming anniversary Lisp conference?
(http://www.franz.com/lugm/conferencetheme.html)

Btw., WE DO USE LISP.

Will Hartung

Aug 22, 1998
jos...@lavielle.com (Rainer Joswig) writes:

>MARKETING? Anybody heard about the upcoming anniversary Lisp conference?
>(http://www.franz.com/lugm/conferencetheme.html)

Yeah, I've heard of it. I've scoured that little web site up and down,
but I can't seem to get much more information besides a date and
location.

Like conference fees and what not.

Clearly this is something in Franz's backyard, so I imagine that they
plan on going, but what about Harlequin? What about Eclipse/Elwood?
What about (dare I say) "The Scheme People"? An EMACS contingent as
well, perhaps?

And speaking of Harlequin, would folks consider it distasteful if they
represented Dylan there?

Is anyone here planning on going? I hope the Nichimen folks are taking
the time to submit a paper. I think it would be swell for Erik to submit
a paper telling about his endeavors to overcome the opposition he has
encountered in the marketplace. I think a paper like this would be as
relevant to the Lisp Community as another presentation on CLOS Dispatch
methods.

Kelly better be hacking some English to write up his project, I hope.

Same goes for the CL-HTTP people and maybe even Harlequin's WATSON
product. What about a paper about PLOB?

Hopefully some future directions WRT Lisp and CORBA, perhaps? Both
Franz and Harlequin are interfacing to CORBA now.

Linux and Lisp, maybe?

Perhaps the excitement and anticipation will mount more as it gets
closer, but the paper deadline is the end of the month.

I'd write something up, but I don't think anyone would be particularly
interested in my little project to calculate inventory quantities for a
Poinsettia Ranch (besides it not being done, and the problem it was
designed to solve has been moved to "Oh, we won't be doing that
next season").

--
Will Hartung - Rancho Santa Margarita. It's a dry heat. vfr...@netcom.com
1990 VFR750 - VFR=Very Red "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison. -D. Duck

Rainer Joswig

Aug 22, 1998
In article <vfr750Ey...@netcom.com>, vfr...@netcom.com (Will Hartung)
wrote:

> jos...@lavielle.com (Rainer Joswig) writes:
>
> >MARKETING? Anybody heard about the upcoming anniversary Lisp conference?
> >(http://www.franz.com/lugm/conferencetheme.html)
>
> Yeah, I've heard of it. I've scoured that little web site up and down,
> but I can't seem to get much more information besides a date and
> location.
>
> Like conference fees and what not.

From the announcement:
The cost of the meeting to participants is $500
for the two-day sessions and $500 for each tutorial.

> And speaking of Harlequin, would folks consider it distasteful if they
> represented Dylan there?

Would they? I have never seen the L-word mentioned by them
in combination with Dylan in their docs.

Cranecoyne

Aug 23, 1998
>From: jos...@lavielle.com (Rainer Joswig)
>Date: Sat, Aug 22, 1998 03:06 EDT
>Message-id: <joswig-2208...@194.163.195.67>

>
>> The biggest drawback to doing it in Lisp would be the
>> difficulty of finding Lisp programmers with computer graphics experience.
>
>Hmm. This is a recurring topic in this newsgroup. Is it
>a general problem? Specifically to the US? How about some
>development in Germany? ;-) Is it a payment problem?
>Wrong focus in education? Is it Lisp's reputation and
>its small market which makes a personal focus on Lisp
>risky (maybe)? Are Lisp tools not accessible enough (do they
>need to be more visual?)? Lack of recent literature with
>real world Lisp code/examples? Are companies not
>aggressive enough to give motivations for people
>to learn Lisp (free devtools? open source?)?
>No/wrong strategies to grow a market?

While no longer true, for a long time the system requirements of Lisp
made writing commercial applications difficult. In fact, Lisp is still
perceived as being slow and inefficient by most people, often because
their only experience was in using an interpreter (many people are
surprised to learn that it can be compiled). In any case, because of
this history there are very few Lisp applications out there.

I think the best hope for changing those perceptions (which leads to the lack
of programmers) is to have as many successful and visible Lisp-based products
as possible. Which is what we (at Nichimen) are working on! ;-)

- Bob Coyne (co...@nichimen.com)

Mike McEwan

Aug 23, 1998
crane...@aol.com (Cranecoyne) writes:

> While no longer true, for a long time the system requirements of Lisp
> made writing commercial applications difficult. In fact, Lisp is still
> perceived as being slow and inefficient by most people, often because
> their only experience was in using an interpreter (many people are
> surprised to learn that it can be compiled). In any case, because of
> this history there are very few Lisp applications out there.

Is that compiled as in `compiled to machine code' or byte-compiled?
--
Mike.

Barry Margolin

Aug 23, 1998
In article <m3pvdrv...@lotusland.demon.co.uk>,

Both. Emacs Lisp compiles to byte code, but most other compilers compile
to machine code.

Of course, this whole compile/interpret concern should really be moot
these days. Consider all the popular applications that are based on
interpreters. Probably at least half the web sites use CGI scripts
written in Perl, as are many common system administration utilities.
Lots of other stuff is based on Tcl/Tk or Tcl/Expect, which are
interpreted. I'm not sure whether Visual Basic is compiled or
interpreted, but it wouldn't surprise me if it is. And in a sense,
HTML and its successor XML are interpreted languages.

Lisp is probably one of the easiest languages to interpret (in many ways,
the internal representation of source code is similar to threaded languages
like Forth). Its main problem has been that it has a large runtime
environment that often needs to be initialized before it can start
interpreting the application. Interestingly, this hasn't stopped Java --
when my Netscape browser hits its first Java applet, it hangs my Macintosh
for 20-30 seconds while it says "Initializing Java" (on my old machine this
took 2 minutes!). If the runtime could be integrated with the OS (as was
done for Lisp on Lisp Machines, and which some OS vendors seem to be doing
now for Java) then that wouldn't be a problem.
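
[To make the ease-of-interpretation point concrete, here is a toy
sketch, illustrative only: READ has already turned the source into
list structure, so an evaluator is just a walk over that structure.]

```lisp
;; Toy evaluator for numbers, variables, and +.  The source program
;; *is* the data structure being walked; no separate parse step.
(defun tiny-eval (form env)
  (cond ((numberp form) form)
        ((symbolp form) (cdr (assoc form env)))
        ((eq (first form) '+)
         (+ (tiny-eval (second form) env)
            (tiny-eval (third form) env)))
        (t (error "Unknown form: ~S" form))))

;; (tiny-eval '(+ x (+ 1 2)) '((x . 10)))  evaluates to 13
```

Extending this with LAMBDA and proper environments is the classic
metacircular-evaluator exercise.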

Rainer Joswig

Aug 23, 1998
In article <z9XD1.2$Sm5.1...@burlma1-snr1.gtei.net>, Barry Margolin
<bar...@bbnplanet.com> wrote:

> Of course, this whole compile/interpret concern should really be moot these
> days. Consider all the popular applications these days that are based on
> interpreters. Probably at least half the web sites use CGI scripts written
> in Perl,

And a lot of this stuff is notoriously slow with slow startup times.

> as are many common system administration utilities. Lots of other
> stuff is based on TCL/Tk or TCL/Expect, which are interpreted.

And the core is written in C. This stuff is nothing more than glue
for C libraries, isn't it?

> I'm not
> sure if Visual Basic is compiled or interpreted, but it wouldn't surprise
> me if it's interpreted.

Having an incremental compiler speeds things up a lot, though some of
the speed advantage is then lost to slow software architecture (e.g.
I/O) in CL. The stuff I'm using would surely be hit hard if I had to
use an interpreter. Clever coding also helps a lot to maintain
usability, speed-wise.

If you use an interpreter you are experiencing the division
between a "scripting" language and a systems programming language
(simplified, o.k.). In the end you often will need to work in two
languages (Perl/C, Scheme/C, ...).

I prefer CL, where I can move the code from abstracted/experimental
to optimized without crossing the language boundary.

> And in a sense, HTML and its successor XML are
> interpreted languages.

I'm not impressed with the performance of HTML viewer applications.
For me this technology is really the stone age of Internet user
interfaces. I guess the Symbolics Document Examiner was more advanced
than some of this stuff years ago.

> interpreting the application. Interestingly, this hasn't stopped Java --
> when my Netscape browser hits its first Java applet, it hangs my Macintosh
> for 20-30 seconds while it says "Initializing Java" (on my old machine this
> took 2 minutes!).

What kind of Mac do you have? ;-)

On my PowerBook:

Netscape Communicator 4.5b starts in 4 secs.
Java in Netscape 4.5b starts in 3 secs.
Macintosh Common Lisp 4.2 starts in 2 secs.

Well, actually I don't care about startup times that much. These
applications will be started once and then I keep them running
(o.k., Netscape needs to be restarted sometimes due to crashes, memory leaks,
performance breakdowns, ...).

The basic incremental compile operation from an editor or a
save/compile-file/load cycle is so ridiculously fast - who needs an
interpreter?

Cranecoyne

Aug 23, 1998
>From: jos...@lavielle.com (Rainer Joswig)
>Date: Sun, Aug 23, 1998 13:54 EDT

>
>In article <z9XD1.2$Sm5.1...@burlma1-snr1.gtei.net>, Barry Margolin
><bar...@bbnplanet.com> wrote:
>
>> Of course, this whole compile/interpret concern should really be moot these
>> days. Consider all the popular applications these days that are based on
>> interpreters. Probably at least half the web sites use CGI scripts written
>> in Perl,
>
>And a lot of this stuff is notoriously slow with slow startup times.

Not only that...

In computer graphics (and other areas), numeric performance is
critical. So not only does Lisp code have to be compiled, it needs
floating-point declarations and other optimizations in order to get
acceptable performance. For example, when we ported our software off
the Symbolics (which had a tagged architecture and hence didn't
require as much in the way of declarations), we had to pack 3D
coordinates into arrays and pass those around between functions in
order to avoid boxing floats. Without doing this kind of thing, even
compiled Lisp code can be too inefficient to compete with C. Running
interpreted would be out of the question.
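
[A hedged sketch of the kind of code Bob describes; this is
illustrative, not Nichimen's actual source:]

```lisp
;; Without declarations, each intermediate double-float result is
;; boxed (heap-allocated).  Packing coordinates into specialized
;; arrays and declaring everything lets a good CL compiler operate on
;; raw machine floats and write results in place, with no consing.
(defun add-points (a b result)
  (declare (type (simple-array double-float (3)) a b result)
           (optimize (speed 3) (safety 0)))
  (dotimes (i 3 result)
    (setf (aref result i)
          (+ (aref a i) (aref b i)))))
```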

- Bob Coyne


Erik Naggum

Aug 23, 1998
* Hannu Koivisto <az...@iki.fi.ns> [rant abridged]
| I haven't been complaining about the situation before simply because I
| think nothing is going to happen to it anyway.

sometimes, when I look at unemployment statistics that inexplicably make
the news and about all the people who somehow "want a chance", I cannot
for the life of me forget all the stuff that needs doing, yet doesn't get
done. this is part of the reason why I decided long ago that I would
neither employ people nor be employed in the regular sense. I just want
to take care of some of the things that need doing. somebody will want
to pay for having annoying problems _solved_, especially when they pay a
lot for people who keep it there, only somewhat less annoying, and if
there's one guy or company who wants to pay well enough that I can make a
comfortable living off of it, hey, that's pretty cool, and it hasn't
failed me yet in 14 years of running my own business this way. I don't
_expect_ anything for free, but I do get a lot of stuff for free because
people think what I do is useful, yet recognize that it is impossible to
provide it in an employer-employee relationship or in our wonderful
"taxation economies". _most_ of the stuff that needs doing is like that:
it doesn't make sense to suffer the enormous costs of employing people to
get it done. what we've done to the small needs that societies before us
satisfied (with full employment, btw) is to make them _impossible_ to
satisfy because it isn't worth the overhead it takes to provide them in
the government-recognized fashion with taxes and employment rights and all
that stuff. and worse yet, even if we _could_ employ people without all
the costly stuff, it costs so much just to live in our modern societies
that it wouldn't attract anyone to actually do it.

the same holds for features in programming languages in the open market.
it doesn't pay to set up a support organization for every small feature
that a handful of people need, yet don't want to pay any real money for.
it's not that we're not rich enough to pay for it, it's that doing it for
others requires so much more than doing it for ourselves, and doing it
for pay requires _yet_ more, making it economically infeasible to do
stuff that very few people want. however, every single feature has at
least one supporter behind it. sometimes, that supporter just _needs_
it, like the unemployed people need a job. sometimes, the supporter can
go ahead and implement it all by himself and he doesn't even stop to
think that others might find it useful, or he does, and then frowns at
the high costs of advertising it, selling it, supporting it, etc, and
then somebody else, who just _needs_ it, won't know about it. this is
all a _societal_ problem.

it's all a matter of communication: if you could describe what you've
done or what you need done in such a way that a computer can help you
find what somebody else wants or has already done, without the high
overhead involved in normal transactions and without the high cost of
expressing yourself very clearly, then that computer or network of
computers could perhaps again make possible what smaller societies that
_had_ total communication did, only now with computers taking care of the
total communication, and also the almost cost-free shipping of the works
involved. gotta find a way to make money transactions cheap, though.

until this wonderful world unfolds itself, you need to talk to the people
you want to get something from yourself, and you need to offer them some
incentive to give it to you, or they won't. if you "complain" and think
it's never going to happen, then, voilà! _nothing_ happens, just like you
thought it would. if you don't even help build that total communication
network of computers, your _best_ bet is that somebody else does, and I
wouldn't hold my breath for that one, either.

| So, I thought instead to shut up and start writing my own compiler. If
| all goes well, perhaps I'll write next time after a year or two about how
| I'm proceeding.

what I don't get is why you don't write the software in Common Lisp (or
Scheme, if you're as much into reinventing wheels as you seem to be),
profile it and optimize it to death or rewrite the parts that need tight
CPU and memory control in languages suited to _that_ task. (which, by
the way, is more likely to be Fortran than C if your goal is fast numeric
code -- C and especially C++ are actually _bad_ at it.)

but go ahead, write your own compiler just because you can't get feature
X without even _talking_ to the vendors about your needs and desires and
willingness to fund their development. it's just like reinventing a
whole new societal structure just because you can't get a job in this one
when you never ask anybody what they would like to pay you to do.

the future of Lisp, indeed any future, lies not in the people who want
something solved for them, but in the people who solve problems without
having to find a huge market _first_. even academia, where this was once
the working motto of most researchers, is now succumbing to the masses
and their "needs". but the masses only have "needs" as shown on TV --
they could never invent something new to need if it didn't already exist
-- _that's_ the task for people who can solve them and then people become
(or are made) aware of the problem that can go away. however, this is
not going to make anybody billionaires. solving problems never does
that. if you want to get rich, you must _never_ solve a problem, only
make more and more bogus ways to make it progressively less annoying, yet
keep it in everybody's mind at all times. I sincerely hope Lisp doesn't
get that kind of future.

#:Erik
--
http://www.naggum.no/spam.html is about my spam protection scheme and how
to guarantee that you reach me. in brief: if you reply to a news article
of mine, be sure to include an In-Reply-To or References header with the
message-ID of that message in it. otherwise, you need to read that page.

Hannu Koivisto

Aug 24, 1998
jos...@lavielle.com (Rainer Joswig) writes:

| > The biggest drawback to doing it in Lisp would be the
| > difficulty of finding Lisp programmers with computer graphics experience.
|
| Hmm. This is a recurring topic in this newsgroup. Is it
| a general problem? Specifically to the US? How about some
| development in Germany? ;-) Is it a payment problem?
| Wrong focus in education? Is it Lisp's reputation and
| its small market which makes a personal focus on Lisp
| risky (maybe)? Are Lisp tools not accessible enough (do they
| need to be more visual?)? Lack of recent literature with
| real world Lisp code/examples? Are companies not
| aggressive enough to give motivations for people
| to learn Lisp (free devtools? open source?)?
| No/wrong strategies to grow a market?

Bob Coyne already wrote that they are using Lisp for graphics
applications and that speed and system requirements are not a
problem anymore. Well, there is graphics software and then there
is graphics software. Speed provided by typical CL compilers may
be enough for some 3D modeller, but I'm pretty sure that no CL
or Scheme compiler can make fast-enough native code for software
I've been doing. So the reasons below are naturally not
universal, but they are the reasons why _I_ am not using Lisp
for graphics and other multimedia.

First of all, I'm not a Common Lisp programmer, although I know
some CL (and I even have ACL 5beta/Linux and cmucl installed
just in case I want to do some experimenting :) I do hack elisp
every day and do some minor stuff with Scheme. I _would_ like to
use Scheme for almost all programming, including multimedia
stuff (2D/3D graphics, audio) that we've been doing, _but_ there
are no good-enough implementations. That's the major reason for
me (and some others I know). Wrong focus in education is a
problem for some people, but not for me. Real world Lisp
code/examples and literature may also be a problem for some
people, especially because most of the examples/code I've seen
is AI stuff, compilers, etc -- not high-end 3D graphics, GUI
software and so on.

Ok, so what's wrong with implementations? If my information is
incorrect, great! I really would like to be wrong. I _hate_
writing C++ and really enjoy when I can write Scheme. (I prefer
Scheme to CL because I like to write functional programs using
recursion and I don't like CL's separate namespaces for
functions and variables.)

First of all, there should be at least one otherwise qualifying
free implementation and preferably one commercial
implementation. One implementation doesn't need to support all
required platforms, but then there needs to be several
implementations that all together support all required OS and
CPU platforms.

One of the biggest problems is that _no_ Scheme or CL
implementation I know supports OS level threads in Linux. This
already forces me to use C++. MzScheme supports OS level threads
in NT (and in at least Solaris), but then again, MzScheme
doesn't have a native code compiler. It's a pretty damn fast
interpreter, though. MzScheme is what I'm usually using for
Lisp/Scheme programming.

Platform support should include at least Win32/x86/Alpha and
Linux/x86/Alpha. Preferably also Irix/Rxk, Digital Unix, QNX,
Solaris/Sparc, HP-UX, NetBSD and other Unices.

Another problem is speed. If, with reasonably high probability,
someone can guarantee that some CL or Scheme compiler can
produce as efficient floating point code for Pentium as Intel's
compiler plugin for Visual C++, then I will hereby promise to
make a CL or Scheme version of the C++ IDCT function I made for
mine and my friend's MPEG-1 systems player and test whether that
claim is true or not. If it's true (which I believe it is not),
I might even recode the whole player in Scheme or CL, because
it's currently non-portable C++ code (for NT) and I'd like to
make a Unix version of it. It uses threads to decode audio on
one CPU and video on another CPU, if more than one CPU is
available, so this hypothetical compiler must support OS
level threads.

This speed issue applies to 3D graphics too. Btw, both in our
company's drivers for a certain 3D accelerator and that
mentioned MPEG IDCT-routine we used a certain speedup-trick: on
at least x86 hardware you can convert a 32-bit floating-point
value to an integer much faster by first forcing it to memory and
then doing one _integer_ addition instead of the fist(p)
instruction that a C++ compiler normally generates when you
convert float to int. To gain additional speed, one may need to
lower the precision of FPU operations for certain functions by
modifying the FPU control register. One C++ compiler supports
this as a command line option; in others it has to be coded
using inline-assembly. Now, the question is: how can I do those
two speedup-tricks with a Lisp compiler? Guess: I can't.

Ability to control the run-time environment (not all
applications can afford to include >1Mb shared RTL) should also
include the ability to control memory allocation and reclamation
strategy, both on an application basis and 'inside' the
application. There are certain applications where I need
real-time gc, some applications where I need gc that supports
threads (those applications that use threads) and finally some
applications where I need to allocate and deallocate all or some
of the memory myself. For example, some data may need to
be allocated in the video card's memory using a special
allocator. It would be nice to be able to choose a suitable
garbage collector based on one's needs and profiling results
accumulated during the development.

Now, there is some functionality that is practically impossible
to write in Lisp. It's also impossible to write it in C++. For
example, certain kind of audio-technology that our company and
few of my friends have made absolutely requires hand-written
assembly which uses data and code in a cache-optimal way (not
impossible for a really good compiler, especially with feedback
optimizer) and uses self-modifying code (pretty impossible by
any reasonable way, I guess). So now higher level code must
interface to this lower-level code some way. With C/C++ this is
easy, but I haven't yet seen a Scheme-implementation that would
have some sort of an FFI that can be used to interface to C/C++
code (which also assembly can be made to look like) without
coding (slow) wrappers. I have understood that at least cmucl
and Allegro CL have this kind of FFI. Am I right?
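
[For reference, CMUCL's alien interface does let a C function be
declared and called directly; a rough sketch from memory follows.
The routine name is hypothetical, and the exact syntax should be
checked against the CMU CL User's Manual:]

```lisp
;; Hypothetical binding to a C routine `idct_block' (the name is made
;; up for illustration).  CMUCL generates the foreign call sequence
;; itself, so no hand-written C wrapper is needed.
(alien:def-alien-routine "idct_block" c-call:void
  (coeffs (* c-call:short)))   ; pointer to a block of DCT coefficients
```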

I'm also under the impression that there are not many already-written
wrappers for different libraries. Lurking in
comp.lang.lisp has revealed that there is an OpenGL interface for
Allegro CL. What about DirectX? GGI (a new, partially DirectX-like
graphics library, especially for Linux but other Unices too)?
KDE and Gnome (especially their GTK widget set) desktop
environments?

Summa summarum: speed, thread support and FFI are the reasons
why I'm not using Lisp/Scheme for graphics and audio. Naturally
not all applications require all those features, but I can't,
for example, afford to write a complex 3D system in Lisp that I
can use in a non-speed-critical modeller if I can't use the same
library in another program that is speed-critical.

I haven't been complaining about the situation before simply
because I think nothing is going to happen to it
anyway. Currently the Lisp people seem to write software that
has much lower requirements (regarding these problematic-for-me
aspects) than the software me and my colleagues write, and thus
the Lisp compiler vendors don't have reason to improve their
products in this direction. So, I thought instead to shut up
and start writing my own compiler. If all goes well, perhaps
I'll write next time after a year or two about how I'm
proceeding.

Just my $0.02,
--
Hannu Koivisto | What you see is all you get.
NOYB | - Brian Kernighan
-------------------------------------------------------------

Tim Bradshaw

unread,
Aug 24, 1998, 3:00:00 AM8/24/98
to
* Mike McEwan wrote:

> crane...@aol.com (Cranecoyne) writes:
>> (many people are surprised to learn that it can be compiled).

> Is that compiled as in `compiled to machine code' or byte-compiled?

Compiled to machine code: although Lisps that byte-compile exist, I
think all the current commercial CLs have native machine-code output
or something equivalent (compiling to C, which is then native-compiled).

--tim

Rainer Joswig

unread,
Aug 24, 1998, 3:00:00 AM8/24/98
to
In article <t2wlnof...@lehtori.cc.tut.fi>, Hannu Koivisto
<az...@iki.fi.ns> wrote:

> people, especially because most of the examples/code I've seen
> is AI stuff, compilers, etc -- not high-end 3D graphics, GUI
> software and so on.

Visit ftp://ftp.digitool.com/pub/mcl/ for lots of Macintosh Common Lisp
stuff. People are doing a lot of multimedia work with MCL.
See http://sk8.research.apple.com for a multimedia meta
environment written in Macintosh Common Lisp (no longer
maintained, but source is available).


(Comments here mostly about MCL, which I know best, apologies
to the other Lisps).

> (I prefer
> Scheme to CL because I like to write functional programs using
> recursion

This should be possible in CL, too. ;-) A lot of
CL compilers support tail-call optimization.
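
For instance, a hedged sketch (the ANSI standard does not mandate
tail-call elimination, but most of the compilers mentioned here
perform it when debugging is turned down):

```lisp
;; Accumulator-style tail recursion; with (debug 0) a typical CL
;; compiler turns the self-call into a jump, so the stack stays flat.
(defun sum-to (n &optional (acc 0))
  (declare (optimize (speed 3) (debug 0)))
  (if (zerop n)
      acc
      (sum-to (1- n) (+ n acc))))

;; (sum-to 1000000) => 500000500000, without blowing the stack
;; when the compiler merges the tail call
```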

> and I don't like CL's separate namespaces for
> functions and variables.)

No comment. ;-)



> Platform support should include at least Win32/x86/Alpha and
> Linux/x86/Alpha. Preferably also Irix/Rxk, Digital Unix, QNX,
> Solaris/Sparc, HP-UX, NetBSD and other Unices.

Well, ACL, LispWorks, Genera, MCL and Eclipse CL are
the most important commercial systems. There may be
some more (some with special application areas like "L").
They should cover a bunch of platforms.

> Another problem is speed. If, with reasonably high probability,
> someone can guarantee that some CL or Scheme compiler can
> produce as efficient floating point code for Pentium as Intel's
> compiler plugin for Visual C++, then I will hereby promise to
> make a CL or Scheme version of the C++ IDCT function I made for
> mine and my friend's MPEG-1 systems player and test whether that
> claim is true or not. If it's true (which I believe it is not),
> I might even recode the whole player in Scheme or CL, because
> it's currently non-portable C++ code (for NT) and I'd like to
> make a Unix version of it. It uses threads to decode audio on
> one CPU and video on another CPU,

Sounds like a challenge. Hmm, some compilers produce
quite good floating-point code. But would that be enough
to write an MPEG viewer?

On a Symbolics Lisp Machine, I'm sure you would get such a thing
running - unfortunately the processor is too slow.

> if more than one CPU is
> available, so this hypothetical compiler must support OS
> level threads.

This is currently not available in MCL, nor in other CLs, I guess.

> This speed issue applies to 3D graphics too. Btw, both in our
> company's drivers for a certain 3D accelerator and that
> mentioned MPEG IDCT-routine we used a certain speedup-trick: on
> at least x86 hardware you can convert a 32bit floating-point
> value to integer much faster by first forcing it to memory and
> then doing one _integer_ addition instead of the fist(p)
> instruction that C++ compiler normally generates when you
> convert float to int. To gain additional speed, one may need to
> lower the precision of FPU operations for certain functions by
> modifying the FPU control register. One C++ compiler supports
> this as a command line option; in others it has to be coded
> using inline-assembly. Now, the question is: how can I do those
> two speedup-tricks with a Lisp compiler? Guess: I can't.

Use inline assembly. MCL years ago had a special floating
point compiler where you would code the FP ops directly.
I don't think this is needed anymore.

Currently using PPC code from inside MCL looks like this:

#+ppc-target
(defppclapfunction %set-ieee-single-float ((src arg_y) (macptr arg_z))
  (check-nargs 2)
  (lwz imm0 ppc::macptr.address macptr)
  (get-single-float fp1 src)
  (stfs fp1 0 imm0)
  (blr))

> allocator. It would be nice to be able to choose a suitable
> garbage collector based on one's needs and profiling results
> accumulated during the development.

In MCL you can define your own Pascal/C data structures (records, etc.).
These are completely interchangeable with the rest of the
system and will use the OS methods for memory allocation
and reclamation.
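
For example (a sketch from memory of MCL's record interface; RLET
stack-allocates a toolbox record and PREF accesses its fields):

```lisp
;; Stack-allocate a Mac toolbox Point record and poke its fields.
(rlet ((p :point))
  (setf (pref p :point.h) 10
        (pref p :point.v) 20)
  (list (pref p :point.h) (pref p :point.v)))   ; => (10 20)
```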

> coding (slow) wrappers. I have understood that at least cmucl
> and Allegro CL have this kind of FFI. Am I right?

MCL and LispWorks for example have those FFIs, too.
Unfortunately, each has its own.

> I'm also under an impression that there are no many already
> written wrappers for different libraries. Lurking in
> comp.lang.lisp has revealed that there is OpenGL interface for
> Allegro CL. What about DirectX? GGI (new, partially DirectX-like
> graphics library especially for Linux but other Unices too)?
> KDE and Gnome (especially their GTK widget set) desktop
> environments?

MCL for example has complete access to the Mac toolbox.
You have a special layer to deal very efficiently
with the underlying OS. Some of the stuff has
high level CL interfaces based on CLOS.

Playing a QuickTime movie inside an MCL window then is (using the
qt-objects library):

(make-instance 'window
  :color-p t
  :window-title "Movie"
  :view-size #@(400 400)
  :view-subviews
  (list (make-instance 'qt-objects:movie-view
          :view-position #@(10 10)
          :movie-scaling nil
          :movie (make-instance 'qt-objects:movie
                   :file (qt-objects:choose-movie-dialog)))))

Looks easy to me.

> Summa summarum: speed, thread support and FFI are the reasons
> why I'm not using Lisp/Scheme for graphics and audio.

Actually these are very strong points of MCL on the Mac.
People often use MCL for educational multimedia environments
where you have to deal with frame grabbers, video disks, Quicktime,
speech recognition, .... Most of this stuff gets easy once
you have the interfaces to the OS. See http://www.digitool.com/home-3.html
for a list of customers and what some of them were/are doing with MCL.


Summary: not all is possible, but you can get very far. More than you
would expect.


Greetings,

Rainer Joswig

Duane Rettig

unread,
Aug 24, 1998, 3:00:00 AM8/24/98
to

Hannu Koivisto <az...@iki.fi.ns> writes:
> Speed provided by typical CL compilers may
> be enough for some 3D modeller, but I'm pretty sure that no CL
> or Scheme compiler can make fast-enough native code for software
> I've been doing. So the reasons below are naturally not
> universal, but they are the reasons why _I_ am not using Lisp
> for graphics and other multimedia.

> First of all, I'm not a Common Lisp programmer, although I know
> some CL (and I even have ACL 5beta/Linux and cmucl installed
> just in case I want to do some experimenting :)

The reasons below do seem like a tall order, especially for one
who is not any CL vendor's customer :-) In reality, I can't
give a positive response to all of these requirements, but if
nobody responded to those points that were applicable, it would
be too easy to pass off this post and say "you're asking too much"
(which I don't believe at all; every request you have made is
reasonable).

I do suggest that you use those CLs that you have installed and
actually do some experimenting. However, I must first discuss
your final summary:

> I haven't been complaining about the situation before simply
> because I think nothing is going to happen to it anyway.

This will become a self-fulfilling prophecy. There are so
many requirements and good suggestions for enhancements for
lisps, especially for CL, that there is no way that we lisp
vendors can or will move our products toward requirement sets
that do not match our customer base (or what we perceive to be
our potential future customer base). If you don't assert
yourself as a potential future customer, or if we don't find
out about you through marketing search efforts, then how can
anything possibly happen to move CL toward your goals?

> Currently the Lisp people seem to write software that
> has much lower requirements (regarding these problematic-for-me
> aspects) than the software me and my colleagues write and thus
> the Lisp compiler vendors don't have reason to improve their
> products into this direction.

I think it is unduly judgemental to characterize those requirements
as "lower"; I would prefer to call them "different". As lisp vendors
(as is the case for other vendors) we try to satisfy as many
requirements as it makes business sense for us to satisfy. But
if you don't state those requirements, we can't incorporate
them into our thinking.

> So, I thought instead to shut up
> and start writing my own compiler. If all goes well, perhaps
> I'll write next time after a year or two about how I'm
> proceeding.

I wish you luck.

> I do hack elisp
> every day and do some minor stuff with Scheme. I _would_ like to
> use Scheme for almost all programming, including multimedia
> stuff (2D/3D graphics, audio) that we've been doing, _but_ there
> are no good-enough implementations. That's the major reason for
> me (and some others I know). Wrong focus in education is a
> problem for some people, but not for me. Real world Lisp
> code/examples and literature may also be a problem for some
> people, especially because most of the examples/code I've seen
> is AI stuff, compilers, etc -- not high-end 3D graphics, GUI
> software and so on.
>
> Ok, so what's wrong with implementations? If my information is
> incorrect, great! I really would like to be wrong. I _hate_
> writing C++ and really enjoy when I can write Scheme. (I prefer
> Scheme to CL because I like to write functional programs using
> recursion and I don't like CL's separate namespaces for
> functions and variables.)

OK, I am forewarned that CL is not your language of choice.

> First of all, there should be at least one otherwise qualifying
> free implementation and preferably one commercial
> implementation. One implementation doesn't need to support all
> required platforms, but then there needs to be several
> implementations that all together support all required OS and
> CPU platforms.

CMUCL is already free, and both major multiplatform CL vendors
have free versions of their products.

> One of the biggest problems is that _no_ Scheme or CL
> implementation I know supports OS level threads in Linux.

Yes, this is a problem. We've been developing OS-threads
implementations on all of our platforms (although ACL 5.0
only supports this on NT and Windows). I may get it wrong,
but our multiprocessing team tells me that the problem on
Linux is that it uses a `clone'-based implementation, instead of
moving toward Posix-compliance at the implementation level,
and that this causes problems for our own special needs for
OS-threads support.

> This
> already forces me to use C++. MzScheme supports OS level threads
> in NT (and in at least Solaris), but then again, MzScheme
doesn't have a native-code compiler. It's a pretty damn fast
> interpreter, though. MzScheme is what I'm usually using for
> Lisp/Scheme programming.

Yes, but if it is an interpreter, then you won't get the kinds
of optimizations you are asking for (unless the implementation
introduces a compiler as well).

> Platform support should include at least Win32/x86/Alpha and
> Linux/x86/Alpha. Preferably also Irix/Rxk, Digital Unix, QNX,
> Solaris/Sparc, HP-UX, NetBSD and other Unices.

This is a sales question, so I won't answer it here. I don't
think you would be disappointed however, if you were to go to
either of the major commercial lisp vendors.

> Another problem is speed. If, with reasonably high probability,
> someone can guarantee that some CL or Scheme compiler can
> produce as efficient floating point code for Pentium as Intel's
> compiler plugin for Visual C++, then I will hereby promise to
> make a CL or Scheme version of the C++ IDCT function I made for
> mine and my friend's MPEG-1 systems player and test whether that
> claim is true or not. If it's true (which I believe it is not),

There is nothing about CL which precludes such efficient
compilation. Show me some C++ code with attending disassembled
output, and I could likely get to the same range of execution
speed as any C++ compiler. Note, however, that this is much
easier to accomplish on RISC architectures, because of the
different nature of the x87/Pentium stack-based floating-point
architecture.

> I might even recode the whole player in Scheme or CL, because
> it's currently non-portable C++ code (for NT) and I'd like to
> make a Unix version of it. It uses threads to decode audio on
> one CPU and video on another CPU, if more than one CPU is
> available, so this hypothetical compiler must support OS
> level threads.

Based on this goal, I would say that lisp can _already_ compile
as efficiently as C++. Of course, the threading support would have
to come later.

> This speed issue applies to 3D graphics too. Btw, both in our
> company's drivers for a certain 3D accelerator and that
> mentioned MPEG IDCT-routine we used a certain speedup-trick: on
> at least x86 hardware you can convert a 32bit floating-point
> value to integer much faster by first forcing it to memory and
> then doing one _integer_ addition instead of the fist(p)
> instruction that C++ compiler normally generates when you
> convert float to int. To gain additional speed, one may need to
> lower the precision of FPU operations for certain functions by
> modifying the FPU control register. One C++ compiler supports
> this as a command line option; in others it has to be coded
> using inline-assembly. Now, the question is: how can I do those
> two speedup-tricks with a Lisp compiler? Guess: I can't.

You can, although it is not documented. In your ACL-5.0beta/linux,
set the variable comp::*hack-compiler-output* to a list of functions
which you want to hack in lisp pseudo-assembler code; then when
you compile the function, follow the directions at the break prompt
and experiment to your heart's content.
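
A hypothetical session following those directions (the variable is
undocumented, so names and behaviour may differ between releases;
MY-IDCT is a placeholder for your own function):

```lisp
;; Ask the compiler to stop so the LAP output of MY-IDCT can be edited.
(setq comp::*hack-compiler-output* '(my-idct))
(compile 'my-idct)
;; compilation now enters a break loop where the pseudo-assembler
;; for MY-IDCT can be inspected and hand-tuned
```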

This hacking can be done on all architectures, and although
it is not recommended for normal code generation, it can be
used to experiment with.

> Ability to control the run-time environment (not all
> applications can afford to include >1Mb shared RTL) should also
> include the ability to control memory allocation and reclamation
> strategy, both on an application basis and 'inside' the
> application. There are certain applications where I need
> real-time gc, some applications where I need gc that supports
> threads (those applications that use threads) and finally some
> applications where I need to allocate and deallocate all or some
> of the memory myself. For example, some data may need to
> be allocated in the video card's memory using a special
> allocator. It would be nice to be able to choose a suitable
> garbage collector based on one's needs and profiling results
> accumulated during the development.

Many of the features and extensions of CL implementations allow
you to do these important customizations; for example, I believe
all the major CLs have weak objects and finalizations which can
be used for deallocation of non-lisp allocated data.
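
As a sketch of that technique in ACL (EXCL:SCHEDULE-FINALIZATION is
the relevant entry point; MY-FREE-BUFFER stands in for whatever
foreign deallocator applies):

```lisp
;; Wrap a foreign pointer so the foreign memory is released when the
;; Lisp handle becomes garbage; the finalizer receives the handle.
(defun wrap-foreign-buffer (ptr)
  (let ((handle (list ptr)))
    (excl:schedule-finalization
     handle
     (lambda (h) (my-free-buffer (first h))))
    handle))
```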

> Now, there is some functionality that is practically impossible
> to write in Lisp. It's also impossible to write it in C++. For
> example, certain kind of audio-technology that our company and
> few of my friends have made absolutely requires hand-written
> assembly which uses data and code in a cache-optimal way (not
> impossible for a really good compiler, especially with feedback
> optimizer) and uses self-modifying code (pretty impossible by
> any reasonable way, I guess). So now higher level code must
> interface to this lower-level code some way. With C/C++ this is
> easy, but I haven't yet seen a Scheme-implementation that would
> have some sort of a FFI that can be used to interface to C/C++
> code (which also assembly can be made to look like) without
> coding (slow) wrappers. I have understood that at least cmucl
> and Allegro CL have this kind of FFI. Am I right?

Yes, of course, but perhaps Foreign Functions is not the
best way to go if you need such speed. Hacking the assembler
code as mentioned above might help.

> I'm also under an impression that there are no many already
> written wrappers for different libraries. Lurking in
> comp.lang.lisp has revealed that there is OpenGL interface for
> Allegro CL. What about DirectX? GGI (new, partially DirectX-like
> graphics library especially for Linux but other Unices too)?
> KDE and Gnome (especially their GTK widget set) desktop
> environments?

Some wrapper libraries are available, and in the past many
individual CL users created and maintained their own. I think
that the new trend is to use a C/C++ to lisp translator to read
the .h files and to create such wrapper libraries automatically.
But I think, from the rest of your post, that this isn't really
what you want anyway; if you are looking for blazing speed at
a low level, it shouldn't go through language boundaries.

> Summa summarum: speed, thread support and FFI are the reasons
> why I'm not using Lisp/Scheme for graphics and audio. Naturally
> not all applications require all those features, but I can't,
> for example, afford to write a complex 3D system in Lisp that I
> can use in a non-speed-critical modeller if I can't use the same
> library in another program that is speed-critical.
>
>

> Just my $0.02,
> --
> Hannu Koivisto | What you see is all you get.
> NOYB | - Brian Kernighan
> -------------------------------------------------------------

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Lyman S. Taylor

unread,
Aug 24, 1998, 3:00:00 AM8/24/98
to
In article <joswig-2408...@194.163.195.67>,

Rainer Joswig <jos...@lavielle.com> wrote:
>In article <t2wlnof...@lehtori.cc.tut.fi>, Hannu Koivisto
...

>(Comments here mostly about MCL, which I know best, apologies
>to the other Lisps).
>
>> (I prefer
>> Scheme to CL because I like to write functional programs using
>> recursion
>
>This should be possible in CL, too. ;-) A lot of
>CL compilers are supporting special compilation
>of tail recursion.

Ha! MCL can remove the recursion from the "naive" fibonacci. :-)
[ Although, much more of a special case due to the fact numbers are
involved. ]

(defun fib (n)
  ;(declare (notinline fib))
  (cond ((= n 0) 1)
        ((= n 1) 1)
        (t (+ (fib (- n 1))
              (fib (- n 2))))))

With MCL I spend more time worrying about telling the compiler which
recursion _not_ to remove than about the recursion it does. :-)
(I may be interested in tracing.)


>Well, ACL, LispWorks, Genera, MCL and Eclipse CL are
>the most important commercial systems. There maybe

....
....


>> coding (slow) wrappers. I have understood that at least cmucl
>> and Allegro CL have this kind of FFI. Am I right?
>
>MCL and LispWorks for example have those FFIs, too.
>Unfortunately everybody has its own.

All the major CL players have FFIs; they just all happen to be different.
So if your objective is to write code that talks directly to
the hardware, you'll need either to duplicate it N times or to
convince the vendors, through various forms of persuasion, that
they should duplicate it N times.


--
Lyman S. Taylor Scully: "I have a date."
(ly...@cc.gatech.edu) Mulder: "Can you cancel?"
Scully: "Unlike you, Mulder, I would
like to have a life."
Mulder: "I have a life!"

David B. Lamkins

unread,
Aug 24, 1998, 3:00:00 AM8/24/98
to
In article <joswig-2408...@194.163.195.67>, jos...@lavielle.com
(Rainer Joswig) wrote:

>> I'm also under an impression that there are no many already
>> written wrappers for different libraries. Lurking in
>> comp.lang.lisp has revealed that there is OpenGL interface for
>> Allegro CL. What about DirectX? GGI (new, partially DirectX-like
>> graphics library especially for Linux but other Unices too)?
>> KDE and Gnome (especially their GTK widget set) desktop
>> environments?
>
>MCL for example has complete access to the Mac toolbox.
>You have a special layer to deal very efficiently
>with the underlying OS. Some of the stuff has
>high level CL interfaces based on CLOS.
>

One thing I'd like to add to your comments about MCL's toolbox access:
With the Power PC implementation, Apple has effectively supplanted its
trap-dispatch API with a shared library API (although the trap dispatcher
lives on in the 68K emulator for the benefit of older applications.) MCL
4.x gives you fully general access to shared libraries; not only the OS
APIs, but also any libraries compiled by (for example) MPW or CodeWarrior
for the PPC. This means you have three choices for dealing with the most
time-critical code in your MCL application: (1) add declarations and see
whether the Lisp compiler will generate good-enough code, (2) write inline
assembler using defppclapfunction, or (3) call out to a shared library compiled by
another language processor.

--
David B. Lamkins <http://www.teleport.com/~dlamkins/>

Marco Antoniotti

unread,
Aug 25, 1998, 3:00:00 AM8/25/98
to
Duane Rettig <du...@franz.com> writes:

> Hannu Koivisto <az...@iki.fi.ns> writes:

> > I haven't been complaining about the situation before simply
> > because I think nothing is going to happen to it anyway.
>
> This will become a self-fullfilling prophecy. There are so
> many requirements and good suggestions for enhancements for
> lisps, especially for CL, that there is no way that we lisp
> vendors can or will move our products toward requirement sets
> that do not match our customer base (or what we perceive to be
> our potential future customer base). If you don't assert
> yourself as a potential future customner, or if we don't find
> out about you through marketing search efforts, then how can
> anything possibly happen to move CL toward your goals?
>

It looks like you are reading C.L.L. Isn't this enough? And yes! I
am registered as a "customer" in your databases.

Now answer this. How many times did a thread like "why doesn't CL
provide a common FFI?" show up in this (and related newsgroups?)

Second question.

Why don't Franz and Harlequin (and Digitool) come up with a common FFI
interface and be done with it? It would not be an ANSI standard, but
it would be an "industry standard".

> > Currently the Lisp people seem to write software that
> > has much lower requirements (regarding these problematic-for-me
> > aspects) than the software me and my colleagues write and thus
> > the Lisp compiler vendors don't have reason to improve their
> > products into this direction.
>
> I think it is unduly judgemental to characterize those requirements
> as "lower"; I would prefer to call them "different". As lisp vendors
> (as is the case for other vendors) we try to satisfy as many
> requirements as it makes business sense for us to satisfy. But
> if you don't state those requirements, we can't incorporate
> them into our thinking.
>

Most likely, the requirements you try to satisfy are not those that
will expand the CL market as a whole. This is marketing
short-sightedness. (Unless, of course, Franz is thinking of
getting out of the CL market in the future.)

PS. IMHO, the CLIM case is an example of *bad cooperation* among
vendors.

--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - (0)6 - 68 80 79 23, fax. +39 - (0)6 - 68 80 79 26
http://www.parades.rm.cnr.it

Juliusz Chroboczek

unread,
Aug 25, 1998, 3:00:00 AM8/25/98
to
In article <6rsk17$o...@pravda.cc.gatech.edu>,
ly...@cc.gatech.edu (Lyman S. Taylor) writes:

LST> Ha! MCL can remove the recursion from the "naive" fibonacci. :-)
LST> [ Although, much more of a special case due to the fact numbers are
LST> involved. ]

I have some trouble believing that. Doing that requires knowing that
addition is associative --- which is only true of bignums.

Have you disassembled the generated code, or measured execution times
(is the optimized fib more or less linear, give or take an O(n) factor
when you get out of fixnum range)?

Sincerely,

J. Chroboczek


Raymond Toy

unread,
Aug 25, 1998, 3:00:00 AM8/25/98
to
>>>>> "Hannu" == Hannu Koivisto <az...@iki.fi.ns> writes:

Hannu> Another problem is speed. If, with reasonably high probability,
Hannu> someone can guarantee that some CL or Scheme compiler can
Hannu> produce as efficient floating point code for Pentium as Intel's
Hannu> compiler plugin for Visual C++, then I will hereby promise to
Hannu> make a CL or Scheme version of the C++ IDCT function I made for
Hannu> mine and my friend's MPEG-1 systems player and test whether that
Hannu> claim is true or not. If it's true (which I believe it is not),

CMUCL can for some functions produce code as good as C/C++. Using a
straightforward implementation of the code from Oppenheim and
Schafer, a 64K-point complex radix-2 FFT in CMUCL takes 0.33 sec.
Matlab takes 0.16 sec. Matlab may be using more sophisticated
algorithms than my simple FFT. Perhaps a radix-8, radix-4 algorithm
would produce similar results as Matlab. Then the IDCT could be as
fast as C. Maybe.

Hannu> This speed issue applies to 3D graphics too. Btw, both in our
Hannu> company's drivers for a certain 3D accelerator and that
Hannu> mentioned MPEG IDCT-routine we used a certain speedup-trick: on
Hannu> at least x86 hardware you can convert a 32bit floating-point
Hannu> value to integer much faster by first forcing it to memory and
Hannu> then doing one _integer_ addition instead of the fist(p)

This trick could certainly be added to CMUCL if necessary/desired. Of
course this would only work if CMUCL could prove the result would fit
into a 32-bit integer.
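
For the curious, the trick can even be sketched in CMUCL today via
the implementation-specific KERNEL:DOUBLE-FLOAT-LOW-BITS accessor
(a hedged sketch: it assumes IEEE double format and round-to-nearest
mode, and whether it actually beats the compiler's own TRUNCATE would
need measuring):

```lisp
;; Add 1.5 * 2^52 so the rounded integer lands in the low mantissa
;; bits, then read those bits back instead of converting via FIST(P).
(defun fast-round (x)
  (declare (double-float x))
  (let* ((biased (+ x 6755399441055744d0))           ; 1.5d0 * 2^52
         (lo (kernel:double-float-low-bits biased))) ; low 32 bits, unsigned
    (if (>= lo #x80000000)                           ; restore the sign
        (- lo #x100000000)
        lo)))
```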

Hannu> lower the precision of FPU operations for certain functions by
Hannu> modifying the FPU control register. One C++ compiler supports
Hannu> this as a command line option; in others it has to be coded
Hannu> using inline-assembly. Now, the question is: how can I do those
Hannu> two speedup-tricks with a Lisp compiler? Guess: I can't.

I think CMUCL allows access to the FPU control register so you could
reduce the precision of the arithmetic. (Does that really change the
execution time? I don't have the user's manual.) However, doing so
might confuse the compiler which might be doing interval arithmetic to
derive results. Reducing the precision may invalidate the assumptions
the compiler makes. But nothing prevents you from doing this.
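
The documented knob is EXT:SET-FLOATING-POINT-MODES, which covers
rounding mode and trap masks (precision control, if exposed at all,
would be implementation-internal):

```lisp
;; Inspect and change the FPU mode bits from Lisp.
(ext:get-floating-point-modes)
(ext:set-floating-point-modes :rounding-mode :zero)   ; truncate toward zero
(ext:set-floating-point-modes :traps '(:overflow :invalid :divide-by-zero))
```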

Hannu> have some sort of a FFI that can be used to interface to C/C++
Hannu> code (which also assembly can be made to look like) without
Hannu> coding (slow) wrappers. I have understood that at least cmucl
Hannu> and Allegro CL have this kind of FFI. Am I right?

CMUCL has a good FFI that's easy to use and is probably as efficient
as C itself, for most things.
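
A minimal example of that FFI (DEF-ALIEN-ROUTINE is CMUCL's
declaration macro; hypot lives in the C library):

```lisp
;; Declare double hypot(double, double) from libm -- no hand-written
;; wrapper; the call compiles to a direct foreign call.
(alien:def-alien-routine "hypot" double-float
  (x double-float)
  (y double-float))

;; (hypot 3d0 4d0) => 5.0d0
```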

Hannu> Allegro CL. What about DirectX? GGI (new, partially DirectX-like
Hannu> graphics library especially for Linux but other Unices too)?
Hannu> KDE and Gnome (especially their GTK widget set) desktop
Hannu> environments?

This is probably because callbacks are typically hard to handle.
Also, perhaps no one wants or needs such an interface, or at least is
not motivated enough to produce one.

Hannu> products into this direction. So, I thought instead to shut up
Hannu> and start writing my own compiler. If all goes well, perhaps
Hannu> I'll write next time after a year or two about how I'm
Hannu> proceeding.

If you're going to write your own compiler, may I suggest hacking on
CMUCL's compiler? At least you don't have to start from scratch, and
CMUCL is widely recognized as having a very good compiler for
floating-point arithmetic.

Ray

Kelly Murray

unread,
Aug 25, 1998, 3:00:00 AM8/25/98
to
One needs to look at the key strengths of a product/language
(e.g. Lisp), and at what type of market/customer cares the most
about those strengths.
And look honestly at its weaknesses, and avoid trying to
sell/compete for customers to whom those weaknesses matter.
Numeric/floating point is a particularly weak aspect of CL.
So are the relatively large disk and memory footprint
and the slow startup times.

What kind of application/customer does NOT care about those things?
What applications have very large footprints/data sets
and/or keep applications running so startup time is not relevant,
and don't run on lowest-end hardware?

How about another hint, who requires high productivity
from their developers/programmers?

The absolutely wrong answer is people developing shrink-wrap
applications for delivery on PCs/Windows.

-Kelly Murray k...@intellimarket.com
Yes, I have left Franz and started another company.

David Bakhash

unread,
Aug 25, 1998, 3:00:00 AM8/25/98
to
Juliusz Chroboczek <j...@dcs.ed.ac.uk> writes:

> In article <6rsk17$o...@pravda.cc.gatech.edu>,
> ly...@cc.gatech.edu (Lyman S. Taylor) writes:
>
> LST> Ha! MCL can remove the recursion from the "naive" fibonacci. :-)
> LST> [ Although, much more of a special case due to the fact numbers are
> LST> involved. ]
>
> I have some trouble believing that. Doing that requires knowing that
> addition is associative --- which is only true of bignums.

It's this last line that is troubling me.

dave

Juliusz Chroboczek

unread,
Aug 25, 1998, 3:00:00 AM8/25/98
to
JC> knowing that addition is associative --- which is only true of bignums.

David Bakhash <ca...@bu.edu>:

DB> It's this last line that is troubling me.

By bignum, I meant a potential bignum, i.e. what Common Lisp calls INTEGER.

Flonum addition is not associative:

* (setq one (coerce 1.0 'single-float)
        eps2 (coerce (/ single-float-epsilon 2.0) 'single-float))
* (+ (+ eps2 one) (- one)) => zero (more or less)
* (+ eps2 (+ one (- one))) => eps2 (more or less)

Fixnum addition is not associative, assuming that 1 is a fixnum:

* (defmacro +% (x y) `(the fixnum (+ (the fixnum ,x) (the fixnum ,y))))
* (+% most-positive-fixnum (+% 1 -1)) => most-positive-fixnum
* (+% (+% most-positive-fixnum 1) -1) `is an error'

(But this last example will typically fail to show anything
interesting, as most implementations either ignore declarations or
compute modulo MOST-POSITIVE-FIXNUM.)

Even fixnum addition with automatic promotion to bignums is not really
associative, if you care about consing:

* (+ most-positive-fixnum (+ 1 -1)) => most-positive-fixnum (no side-effect)
* (+ (+ most-positive-fixnum 1) -1) => most-positive-fixnum (consing)

Sincerely,

J. Chroboczek

Duane Rettig

unread,
Aug 25, 1998, 3:00:00 AM8/25/98
to
Marco Antoniotti <mar...@galileo.parades.rm.cnr.it> writes:

> Duane Rettig <du...@franz.com> writes:
>
> > Hannu Koivisto <az...@iki.fi.ns> writes:
>

> > > I haven't been complaining about the situation before simply
> > > because I think nothing is going to happen to it anyway.
> >
> > This will become a self-fullfilling prophecy. There are so
> > many requirements and good suggestions for enhancements for
> > lisps, especially for CL, that there is no way that we lisp
> > vendors can or will move our products toward requirement sets
> > that do not match our customer base (or what we perceive to be
> > our potential future customer base). If you don't assert
> > yourself as a potential future customner, or if we don't find
> > out about you through marketing search efforts, then how can
> > anything possibly happen to move CL toward your goals?
> >
>

> It looks like you are reading C.L.L.

Yes, I do, and have listened in for many years.

> Isn't this enough?

Sometimes it is, and sometimes it isn't.

As an example of times when listening to c.l.l directly affects
the product, I was watching a thread a year or two ago about multiple-
value-returns, and at some point in the conversation I asked myself
why the first few values returned couldn't be returned in registers
(in the same way arguments are passed in registers). As a result,
all architectures except for the sparc (whose register-windows
architecture makes this problematic) will now return up to the same
number of values as they can pass as arguments. This makes code
smaller, faster, and allows tail-call merging.
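As an illustration of the kind of code this helps (my sketch, not an example from Franz):

```lisp
;; FLOOR already returns quotient and remainder as two values; when
;; the first few values come back in registers, a call like this
;; need not cons a list or structure to return both results.
(defun split-amount (cents)
  "Return dollars and leftover cents as two values."
  (floor cents 100))

(multiple-value-bind (dollars rest) (split-amount 1234)
  (list dollars rest))   ; => (12 34)
```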

I will only describe times when c.l.l is not appropriate for technical
interaction: Many times, by the time a suggestion gets to c.l.l,
it has been transformed from an "It would be nice if ..." type
of comment to a "Why doesn't this product do this ridiculously
obvious thing ...?" I would much rather answer an individual
customer, even with a potentially negative answer that includes a time
scale or a "we'll look into it" or a "we're already working on it" than
to try to sort through the many frustrations that individual may have
had silently. We are not mind readers, and rely on users' suggestions
to help us to know what is wanted.

> And yes! I am registered as a "customer" in your databases.

I looked through our SPR database (i.e. Software Problem Reports, the
database that logs interactions with any customer, including Linux
users, who send mail to the bugs mailing list). At first I failed
to find your name. But when I did an exhaustive textual search, I
found you in two sprs, in both of which you had been quoted on a news
or mailing group. Now, there may have been other emails to us, but
we are not mind readers; if you don't tell us what your problems are
with the product, how will we know? (Of course, I would love to hear
"I don't have any problems with it", but I suspect that that is not
the case :-)

> Now answer this. How many times did a thread like "why doesn't CL
> provide a common FFI?" show up in this (and related newsgroups?)

Of course I haven't logged all of the newsgroup proceedings, so I
will venture a qualitative guess and say "many times". I would have
also asked you what your point was, but it is clear that the point is
coming up next:

> Second question.
>
> Why don't Franz and Harlequin (and Digitool) come up with a common FFI
> interface and be done with it? It will not be an ANSI standard but it
> will be an "industry standard"?

I can't speak for Harlequin or Digitool, and I'll even take off my Franz
hat for this one:

From previous definitions in this thread about what constitutes an "industry
standard" (or "de facto standard"), I personally don't like them; they are
usually more trouble than a formal standard and provide fewer benefits.

I should qualify this by adding that de facto standards are appropriate
where a formal standard is non-existent or has failed. And although the
CL standard is not perfect, I don't think it has failed. It hasn't
provided a FFI yet, but it has only been through one iteration, and
I presume others are possible, if there is enough interest.

> > > Currently the Lisp people seem to write software that
> > > has much lower requirements (regarding these problematic-for-me
> > > aspects) than the software me and my colleagues write and thus
> > > the Lisp compiler vendors don't have reason to improve their
> > > products into this direction.
> >
> > I think it is unduly judgemental to characterize those requirements
> > as "lower"; I would prefer to call them "different". As lisp vendors
> > (as is the case for other vendors) we try to satisfy as many
> > requirements as it makes business sense for us to satisfy. But
> > if you don't state those requirements, we can't incorporate
> > them into our thinking.
> >
>

> Most likely, the requirements you try to satisfy are not those that
> will expand the CL market as a whole. This is marketing
> short-sightedness. (Unless, of course, Franz is thinking of getting
> out of the CL market in the future.)

No, I think you'll see us fully immersed in the CL market for a long
time.

> PS. IMHO, the CLIM case is an example of *bad cooperation* among
> vendors.

Interesting statement. Well, from what I know about CLIM's history,
I would characterize it as a "de facto" or "industry" standard.
If you believe CLIM to be a Bad Thing, then perhaps what you believe
about it that is bad can be attributed to the de facto standardization
process.

> --
> Marco Antoniotti ===========================================
> PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
> tel. +39 - (0)6 - 68 80 79 23, fax. +39 - (0)6 - 68 80 79 26
> http://www.parades.rm.cnr.it


Paul Dietz
Aug 25, 1998, 3:00:00 AM

> How about another hint, who requires high productivity
> from their developers/programmers?

These days, just about everyone.

Paul

Hannu Koivisto
Aug 25, 1998, 3:00:00 AM

Duane Rettig <du...@franz.com> writes:

[I was somewhat vague at some points, I'll try to clarify at the
same time I comment on your response, for which I thank you.]

| The reasons below do seem like a tall order, especially for one
| who is not any CL vendor's customer :-) In reality, I can't

True :) I have just started to consider CL as an alternative for
Scheme just because it's another potential source for suitable
implementations. Although I mentioned Scheme as my preferred
Lispish language (to which you commented (*)), it too has some
weak points or points that I don't like. OTOH, actually (with my
current knowledge of CL) the only big point why I don't like CL
is its separate namespaces for variables and functions. So
basically, if I find some great CL implementation first, then I
would probably lean on CL as my primary language. On the other
hand, if I find some suitable Scheme implementation first, then
Scheme might turn out to become my primary language. In
practice, I think that I will eventually apply Scheme for some
applications and CL for others. Currently the situation is
simply that I know Scheme better (and so do my colleagues, so I
have better chances to push Scheme for 'real' projects) and
Scheme implementations have suited better for my personal work
in which I have been able to use some Lispish language, so I
have ended up using the available Schemes instead of becoming a
customer of any CL vendor (or user of some free implementation).

| I do suggest that you use those CLs that you have installed and
| actually do some experimenting. However, I must first discuss

I'll try to do that.

| This will become a self-fulfilling prophecy. There are so
| many requirements and good suggestions for enhancements for
| lisps, especially for CL, that there is no way that we lisp
| vendors can or will move our products toward requirement sets
| that do not match our customer base (or what we perceive to be
| our potential future customer base). If you don't assert
| yourself as a potential future customer, or if we don't find
| out about you through marketing search efforts, then how can
| anything possibly happen to move CL toward your goals?

I'll comment on this below. (**)

| > Currently the Lisp people seem to write software that
| > has much lower requirements (regarding these problematic-for-me
|

| I think it is unduly judgemental to characterize those requirements
| as "lower"; I would prefer to call them "different". As lisp vendors

I know; when I first wrote that sentence it didn't say _at all_
what I meant, so I tried to fix it with that adjustment in parens,
but it didn't really make it better :) Different they shall be.

| (as is the case for other vendors) we try to satisfy as many
| requirements as it makes business sense for us to satisfy. But
| if you don't state those requirements, we can't incorporate
| them into our thinking.

(**)
I know that you would satisfy everyone if you just could (don't
we all :), but like you say, you have to think business-wise
too. I'm just very sceptical about the number of other Lisp
developers or potential Lisp developers with as high performance
or other requirements as I have. And this leads to the
suspicion that satisfying those requirements doesn't make
business sense to you and thus the situation won't
change.

If I 'forced' our development team to use Lisp (and thus
become a CL vendor's customer) for some project with, say, high
performance requirements, then I would have to have some
guarantee that it will succeed from this language (plus
implementation) choice's part. And naturally you can't guarantee
(of course, feel free to say I'm wrong about this :) that you'll
improve, say, some optimizations during our short project time
(our projects are usually not very long-term ones excluding few
exceptions) to the extent we would need them to be.

So I cannot do this (of course, I cannot do it because of other
reasons too; I was somewhat exaggerating above, don't take that
literally), but what I _can_ do is that I start using Lisp in my
personal projects and suitable single-person project-team
projects for our company that don't need to be on the bleeding
edge in terms of performance or some other problematic
aspect. Then I can gradually move into more and more demanding
stuff. The only problem is that although this would be an ideal
way to gradually start providing feedback for you or some other
commercial Lisp vendor, I cannot afford to buy a commercial Lisp
license for my personal 'hacks' just to be able to provide
feedback for you. [I can always try to make our company buy ACL
or similar in some phase whether or not it will be used for any
real work or by anyone else but me ;)] For now, I continue using
Scheme (except when I feel adventurous, I'll do that mentioned
experimenting with cmucl or acl).

The possibilities for Scheme were cast in a new light when I received
a very interesting mail from Jeffrey Mark Siskind, the author of
Stalin. He told me that he has been using Scheme for audio and
video processing for a long time and is developing Stalin, which
has a zero-overhead FFI, the possibility of supporting
multithreading, and should generate floating-point code pretty
much equivalent in performance to what one can get by writing C
directly. I'll know more after I get the latest
development version he promised and experiment with it. Yes,
it's a Scheme->C compiler (although he mentioned that there are
plans for a native code backend), but it's a promising step in
this gradual movement towards using Lisp/Scheme in applications
with more and more "different" :) requirements.

| > every day and do some minor stuff with Scheme. I _would_ like to
| > use Scheme for almost all programming, including multimedia

...
(*) | OK, I am forewarned that CL is not your language of choice.

| only supports this on NT and Windows). I may get it wrong,
| but our multiprocessing team tells me that the problem on
| Linux is that it uses a `clone'-based implementation, instead of
| moving toward Posix-compliance at the implementation level,
| and that this causes problems for our own special needs for
| OS-threads support.

Hmm, I cannot comment based on that information. Have you
considered asking help from the developers of Posix-thread
interface or kernel developers? What you said about that
clone-based implementation really needs a bit more clarification
-- after all, it's simply how the _kernel_ exposes generic
functionality for threads and it should be transparent to the
user of Posix-compliant threads implemented as a library. Of
course, that library may still have some problems with
Posix-compliance, but I'm sure that the developers are willing
to try to fix such problems that affect you. And if it turns out
that these problems simply cannot be fixed because of some
limitation in the kernel's clone()-based interface, then the
issue should be brought to the attention of the kernel
developers. Posix-threads are, after all, the only sensible way
to use threads at application level, so if they cannot be
implemented properly, then the situation must be fixed.

| Yes, but if it is an interpreter, then you won't get the kinds
| of optimizations you are asking for (unless the implementation
| introduces a compiler as well).

True, it is an interpreter and cannot provide such
optimizations. So far it has been the best
Scheme implementation for non-speed-critical programs, though.

| There is nothing about CL which precludes such efficient
| compilation. Show me some C++ code with attending disassembled
| output, and I could likely get to the same range of execution
| speed as any C++ compiler. Note, however, that this is much

I'll do that. I will have to reproduce the assembly output of
that mentioned IDCT routine so that I can compare it to what
Stalin + {pgcc, egcs} can do.

| easier to accomplish on RISC architectures, because of the
| different nature of the x87/Pentium stack-based floating-point
| architecture.

I know. That's exactly why our performance requirements may be
harder to achieve with a compiler that is not well tuned for the
x86 architecture (which _some_ C++ compilers are _beginning_ to
be). x86 is the primary target for most of the software we've
been making, so we cannot make that RISC excuse -- either some
compiler does good enough job or we'll write the speed-critical
parts by hand.

| You can, although it is not documented. In your ACL-5.0beta/linux,
| set the variable comp::*hack-compiler-output* to a list of functions
| which you want to hack in lisp pseudo-assembler code; then when
| you compile the function, follow the directions at the break prompt
| and experiment to your heart's content.
|
| This hacking can be done on all architectures, and although
| it is not recommended for normal code generation, it can be
| used to experiment with.

Sounds like a nice feature. I didn't expect something like this
or inline-assembly of MCL that Rainer Joswig mentioned. I'll
definitely try it.

| > coding (slow) wrappers. I have understood that at least cmucl
| > and Allegro CL have this kind of FFI. Am I right?
|
| Yes, of course, but perhaps Foreign Functions is not the
| best way to go if you need such speed. Hacking the assembler
| code as mentioned above might help.

FFI is still needed for interfacing to GUI toolkits and
such. Naturally not _all_ of our code is insanely speed-critical :)

| Some wrapper libraries are available, and in the past many
| individual CL users created and maintained their own. I think
| that the new trend is to use a C/C++ to lisp translator to read
| the .h files and to create such wrapper libraries
| automatically.

Do you have some translator in mind? Perhaps even such that
has 'back ends' for generating wrappers for different Lisp
vendors' FFIs?

| But I think, from the rest of your post, that this isn't really
| what you want anyway; if you are looking for blazing speed at
| a low level, it shouldn't go through language boundaries.

Well, I'd like to have both :-) FFI is useful for interfacing to
components written by other people etc.

Thanks for the interesting information,

Rainer Joswig
Aug 26, 1998, 3:00:00 AM

In article <35E32DFF...@IntelliMarket.Com>, Kelly Murray
<k...@IntelliMarket.Com> wrote:

> One needs to look at what are the key strengths of a product/language
> (e.g. LISP), and what type of market/customer cares the most
> about those strengths.
> And look honestly at its weaknesses, and avoid trying to sell/compete
> to customers that value those weaknesses.
> Numeric/floating point is a particularly weak aspect of CL.
> So is the relatively large disk and memory footprint,
> and slow startup times.

Why should there be no progress?

> What applications have very large footprints/data sets
> and/or keep applications running so startup time is not relevant,
> and don't run on lowest-end hardware?

You are talking about the typical PC/Mac productivity application
(Xpress, Freehand, Word, Excel, Photoshop, Cyberstudio,
FrameMaker, Illustrator, AutoCAD, ...)?

I guess MCL starts faster than most of those. ACL actually
starts pretty fast on a Unix box, too.

We have seen an incredible speed up lately. One
of our customers has a large network of graphics systems
(SUNs, Scitex, Macs, SGIs, ...). Typical transfer
rates (not those on sundays where few were working)
from a server three years ago were 300kb/sec.
Currently they have copy rates from Ethershare
servers to Macs of 8MB/sec (Switches everywhere, UW2 SCSI disks,
100 BaseT, IP, ...). 100MB size print jobs will
be spooled in a few seconds. Who cares about start up
speed of a 3 MB app versus startup speed of a 10 MB
app? You can get laptops with >200 MIPS and >100 MB
RAM for around $3000. What will you do with fast networks
(and ultra-fast networks soon) and really fast computers?
Be prepared to have demand for network server applications that either
work in the intranet with very complex data or are able
to handle large crowds in the Internet providing sophisticated
services.

A question is, who will give you the tools
to handle the complexity and will help you come to market
with cool apps in time? What tool will let you
develop in your language and will make sure
that you are not slowed down by stupid problems
(low level bugs, speed traps, debugging nightmares, ...).
The customer really often will not care - the solution
counts.

My current bet is on CL/CLIM/CL-HTTP. Well, I could be wrong... ;-)
A good OODB is still missing.

> How about another hint, who requires high productivity
> from their developers/programmers?

Small companies like ours? ;-)

Jeff Dalton
Aug 26, 1998, 3:00:00 AM

In article <z9XD1.2$Sm5.1...@burlma1-snr1.gtei.net>,
Barry Margolin <bar...@bbnplanet.com> wrote:

>Lisp is probably one of the easiest languages to interpret (in many ways,
>the internal representation of source code is similar to threaded languages
>like Forth). Its main problem has been that it has a large runtime
>environment that often needs to be initialized before it can start
>interpreting the application.

There can be pretty small Lisps, though. The run-time env doesn't
have to be large.

Anyway, the ease of implementing Lisp is an important point.
I can't think of any other "language" with more implementations.

Indeed, even confining myself to Common Lisp, at work I have
a choice between 2 C implementations (gcc and one from Sun)
but 4 Common Lisps, and more if I wanted them.

-- jd

Marco Antoniotti
Aug 26, 1998, 3:00:00 AM

Raymond Toy <t...@rtp.ericsson.se> writes:

> CMUCL can for some functions produce code as good as C/C++. Using a
> straight-forward implementation of the code from Oppenheim and
> Schafer, a 64K-point complex radix-2 FFT in CMUCL takes 0.33 sec.
> Matlab takes 0.16 sec. Matlab may be using more sophisticated
> algorithms than my simple FFT. Perhaps a radix-8, radix-4 algorithm
> would produce similar results as Matlab. Then the IDCT could be as
> fast as C. Maybe.

AFAIK, and I may be wrong, the core of Matlab is written in FORTRAN.

Pierre Mai
Aug 26, 1998, 3:00:00 AM

Hannu Koivisto <az...@iki.fi.ns> writes:

> | Some wrapper libraries are available, and in the past many
> | individual CL users created and maintained their own. I think
> | that the new trend is to use a C/C++ to lisp translator to read
> | the .h files and to create such wrapper libraries
> | automatically.
>
> Do you have some translator in mind? Perhaps even such that
> has 'back ends' for generating wrappers for different Lisp
> vendors' FFIs?

There are several points to start from, although most that I'm aware
of are for Scheme rather than CL:

- One of the IMHO most versatile generators is SWIG (the Simplified
Wrapper and Interface Generator), which currently supports wrapping
C/C++/Obj-C Libs for Perl, Python, Tcl and Guile (GNU's SCM-based
Scheme interpreter) at the least. I think it would be _very_ useful,
to adapt this to support the FFI's of current CLs.

The only problem that I see with this adaptation is that GUILE -- like
most Scheme interpreters nowadays -- uses Boehm Conservative GC, which
simplifies interfacing to C/C++ at the cost of less-than-optimal GC
performance, whereas most CLs don't.

OTOH there is much to be gained, as SWIG really makes wrapping C
libs very unproblematic, and is easily adaptable, with a very
supportive and cooperative author, David Beazley. For further
information, see http://www.swig.org/

- Many Scheme implementations include their own wrapper generators,
like e.g. MzScheme, as do many modern functional languages nowadays.

- If speed is even less of an issue, going through CORBA is another
course of action open.
ILU at ftp://ftp.parc.xerox.com/pub/ilu/ilu.html supports Common Lisp
(ACL I think), and there is work being done at http://www.gnome.org/
(with support from RedHat) for a small, fast implementation of a
CORBA 2.0 ORB, which will support GUILE at the least.

Regs, Pierre.

--
Pierre Mai <de...@cs.tu-berlin.de> http://home.pages.de/~trillian/
"Such is life." -- Fiona in "Four Weddings and a Funeral" (UK/1994)

Duane Rettig
Aug 26, 1998, 3:00:00 AM


I have snipped out a large portion of your post, with which I
agree and have no further comment.

Hannu Koivisto <az...@iki.fi.ns> writes:
> I know that you would satisfy everyone if you just could (don't
> we all :), but like you say, you have to think business-wise
> too. I'm just very sceptical about the number of other Lisp
> developers or potential Lisp developers with as high performance
> or other requirements as I have. And this leads to the
> suspicion that satisfying those requirements doesn't make
> business sense to you and thus the situation won't
> change.

Those suspicions are unfounded in fact. Lisp is very much in
the high-performance game, and while implementations aren't as
far as they could be, they are progressing.

> If I 'forced' our development team to use Lisp (and thus
> become a CL vendor's customer) for some project with, say, high
> performance requirements, then I would have to have some
> guarantee that it will succeed from this language (plus
> implementation) choice's part.

This is correct; there can be no guarantees (except, I suppose,
for contractual ones, where the consequences are spelled out)
for _any_ language. This includes C++ (Along with lisp-to-C++
conversion success stories, I have also heard horror stories
where projects fail simply because the programming wouldn't
scale due to the size/complexity of the software).

> And naturally you can't guarantee
> (of course, feel free to say I'm wrong about this :) that you'll
> improve, say, some optimizations during our short project time
> (our projects are usually not very long-term ones excluding few
> exceptions) to the extent we would need them to be.

Well, one thing about lisps and other dynamic languages that
you'll need to get used to: it is incredibly easy to patch them.
And, given a responsive support team, you can get solutions to your
problems in days or weeks, instead of months or years.

> ... I cannot afford to buy a commercial Lisp


> license for my personal 'hacks' just to be able to provide
> feedback for you.

This is exactly why we lisp vendors provide free implementations.

> The possibilities for Scheme were cast in a new light when I received
> a very interesting mail from Jeffrey Mark Siskind, the author of
> Stalin. He told me that he has been using Scheme for audio and
> video processing for a long time and is developing Stalin, which
> has a zero-overhead FFI, the possibility of supporting
> multithreading, and should generate floating-point code pretty
> much equivalent in performance to what one can get by writing C
> directly. I'll know more after I get the latest
> development version he promised and experiment with it. Yes,
> it's a Scheme->C compiler (although he mentioned that there are
> plans for a native code backend), but it's a promising step in
> this gradual movement towards using Lisp/Scheme in applications
> with more and more "different" :) requirements.

This is good; it represents a shattering of a myth (that lisp doesn't
do numbers fast) in at least one person's mind. Hopefully you'll
be successful with this package.

> | only supports this on NT and Windows). I may get it wrong,
> | but our multiprocessing team tells me that the problem on
> | Linux is that it uses a `clone'-based implementation, instead of
> | moving toward Posix-compliance at the implementation level,
> | and that this causes problems for our own special needs for
> | OS-threads support.
>

> ... What you said about that


> clone-based implementation really needs a bit more clarification

In responding to some personal mail, our multiprocessing
expert said:

The "every thread has its own pid in Linux" is the major problem.
Signals are a big deal, as are thread exiting protocols and
interaction with exit(). As long as this situation continues,
it almost doesn't matter that Linux POSIX thread "wrapping"
exists - a significant portion of our code has to be rewritten
to deal with this behavior.

It is heartening to hear that Linux is moving toward Posix
compliance. We'll see what happens.

> | easier to accomplish on RISC architectures, because of the
> | different nature of the x87/Pentium stack-based floating-point
> | architecture.
>
> I know. That's exactly why our performance requirements may be
> harder to achieve with a compiler that is not well tuned for the
> x86 architecture (which _some_ C++ compilers are _beginning_ to
> be). x86 is the primary target for most of the software we've
> been making, so we cannot make that RISC excuse -- either some
> compiler does good enough job or we'll write the speed-critical
> parts by hand.

You've hit the nail on the head: it really is "that RISC excuse"
(although it's really "that x86 excuse", because the 68k floating
point hardware didn't have this problem ...) The floating point
code is written to assume that float registers can be identified
and operated on in random-access fashion, rather than on a stack-based
fashion. I assume that there is specialized code generation
in those compilers that you allude to, which specifically generate
stack-based floating-point operations. The x87 (486/Pentium) floating
hardware has only 8 registers arranged on a stack-access basis,
and if compilers want to compile specifically for them, they must
also deal specifically with overflow/spill code that doesn't have to
be dealt with for random-access code generation. We currently punt
on the situation by holding the stack fast and generating instructions
to simulate random-access. It is, presumably, slightly slower.
So far nobody has complained. Maybe you'll be the first :-)

Now if you were on an Alpha (or even a 68K), we could easily hold
our own in code generation for floating point.

> | > coding (slow) wrappers. I have understood that at least cmucl
> | > and Allegro CL have this kind of FFI. Am I right?
> |
> | Yes, of course, but perhaps Foreign Functions is not the
> | best way to go if you need such speed. Hacking the assembler
> | code as mentioned above might help.
>
> FFI is still needed for interfacing to GUI toolkits and
> such. Naturally not _all_ of our code is insanely speed-critical :)

Thanks; this was a point I had missed on your previous post.

> | Some wrapper libraries are available, and in the past many
> | individual CL users created and maintained their own. I think
> | that the new trend is to use a C/C++ to lisp translator to read
> | the .h files and to create such wrapper libraries
> | automatically.
>
> Do you have some translator in mind? Perhaps even such that
> has 'back ends' for generating wrappers for different Lisp
> vendors' FFIs?

There is a free one at ftp.franz.com/pub/cbind/5.0.beta/*
There are versions that will run on SGI, Sparc, or Windows,
but I think you can actually send other architectures' files
through if you set up the predefined macros correctly. And
certainly you should be able to use the output in any 5.0.beta,
including Linux.

I am also told that the cbind package will be included in the 5.0
distribution.

> Thanks for the interesting information,

Thanks for the interest.

> --
> Hannu Koivisto | What you see is all you get.
> NOYB | - Brian Kernighan
> -------------------------------------------------------------


Barry Margolin
Aug 26, 1998, 3:00:00 AM

In article <dh37lzx...@iolla.dcs.ed.ac.uk>,

Juliusz Chroboczek <j...@dcs.ed.ac.uk> wrote:
>In article <6rsk17$o...@pravda.cc.gatech.edu>,
> ly...@cc.gatech.edu (Lyman S. Taylor) writes:
>
>LST> Ha! MCL can remove the recursion from the "naive" fibonacci. :-)
>LST> [ Although, much more of a special case due to the fact numbers are
>LST> involved. ]
>
>I have some trouble believing that. Doing that requires knowing that

>addition is associative --- which is only true of bignums.

Note that Common Lisp permits implementations to reorder argument
processing based on the mathematical associativity rules, even though the
computer operations are not necessarily associative. But this only applies
to multiple arguments to the same function, e.g. (+ a b c), and not
multiple function calls, i.e. (+ a (+ b c)) cannot be implemented as (+ (+
a b) c) unless they're guaranteed to be equivalent.

In any case, a typical fibonacci function is likely to be defined only for
integers, so the optimization LST described should be valid.
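The distinction can be sketched as follows (my example, not Barry's; whether the first form conses an intermediate bignum is implementation-dependent):

```lisp
;; A single call to + may legally be processed in any association,
;; so an implementation may sum 1 and -1 first and never leave
;; fixnum range:
(+ most-positive-fixnum 1 -1)      ; => most-positive-fixnum, may skip
                                   ;    the bignum intermediate
;; Explicit nesting pins the grouping down: the inner call must
;; produce the bignum (most-positive-fixnum + 1) before the -1 is
;; applied, even though the final value is the same.
(+ (+ most-positive-fixnum 1) -1)  ; => most-positive-fixnum
```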

Robert Swindells
Aug 26, 1998, 3:00:00 AM


> Duane Rettig <du...@franz.com> writes:

> > Marco Antoniotti <mar...@galileo.parades.rm.cnr.it> writes:

> > PS. IMHO, the CLIM case is an example of *bad cooperation* among
> > vendors.

>Interesting statement. Well, from what I know about CLIM's history, I
>would characterize it as a "de facto" or "industry" standard. If you
>believe CLIM to be a Bad Thing, then perhaps what you believe about it
>that is bad can be attributed to the de facto standardization process.

The biggest problem with CLIM is the lack of a public domain reference
implementation.

IMHO, CLOS took off quickly because enough of it could be provided by
PCL for potential users to try it out. Once they were hooked, they
would want to buy a higher performance/better featured implementation
from a commercial vendor.

I don't care if a PD CLIM is slow, ugly, and a memory hog; it would
make proof-of-concept work easier for me.

Robert Swindells
-------------------------------------
Robert Swindells - GenRad Ltd
r...@genrad.co.uk - Work
r...@fdy2.demon.co.uk - Home

Michael Harper
Aug 26, 1998, 3:00:00 AM

It would be very interesting and useful if the various vendors (at least
Franz and Harlequin) could provide a "free" version of CLIM 2.1 for
Linux to go along with the "free" CL implementation. Perhaps they just
cannot spare the resources to maintain such a port. But this would be a
partial solution to the PD reference version mentioned below.

Mike Harper
michael...@alcoa.com

Robert Swindells wrote:
>
> > Duane Rettig <du...@franz.com> writes:
>
> > > Marco Antoniotti <mar...@galileo.parades.rm.cnr.it> writes:
>

> > > PS. IMHO, the CLIM case is an example of *bad cooperation* among
> > > vendors.
>
> >Interesting statement. Well, from what I know about CLIM's history, I
> >would characterize it as a "de facto" or "industry" standard. If you
> >believe CLIM to be a Bad Thing, then perhaps what you believe about it
> >that is bad can be attributed to the de facto standardization process.
>

Erik Naggum
Aug 27, 1998, 3:00:00 AM

* Barry Margolin <bar...@bbnplanet.com>

| Note that Common Lisp permits implementations to reorder argument
| processing based on the mathematical associativity rules, even though the
| computer operations are not necessarily associative.

I thought Common Lisp had strict left-to-right evaluation rules, as per
3.1.2.1.2.3 Function Forms. where does the standard say what you say?

Rainer Joswig
Aug 28, 1998, 3:00:00 AM

In article <4ww7vb...@beta.franz.com>, Duane Rettig <du...@franz.com> wrote:

> Those suspicions are unfounded in fact. Lisp is very much in
> the high-performance game, and while implementations aren't as
> far as they could be, they are progressing.

Minus supporting threads on multiple processors
for SMP machines (SMP Linux PCs, SUNs, ...).

Barry Margolin

unread,
Aug 28, 1998, 3:00:00 AM8/28/98
to
In article <31132470...@naggum.no>, Erik Naggum <cle...@naggum.no> wrote:
>* Barry Margolin <bar...@bbnplanet.com>
>| Note that Common Lisp permits implementations to reorder argument
>| processing based on the mathematical associativity rules, even though the
>| computer operations are not necessarily associative.
>
> I thought Common Lisp had strict left-to-right evaluation rules, as per
> 3.1.2.1.2.3 Function Forms. where does the standard say what you say?

The above wasn't referring to evaluation of argument forms (which is
left-to-right, as you say), but to how the function processes the arguments
that were passed to it.

See
<http://www.harlequin.com/education/books/HyperSpec/Body/sec_12-1-1-1.html>
for the standard's wording.

Tim Bradshaw

unread,
Aug 28, 1998, 3:00:00 AM8/28/98
to
* Erik Naggum wrote:
* Barry Margolin <bar...@bbnplanet.com>
> | Note that Common Lisp permits implementations to reorder argument
> | processing based on the mathematical associativity rules, even though the
> | computer operations are not necessarily associative.

> I thought Common Lisp had strict left-to-right evaluation rules, as per
> 3.1.2.1.2.3 Function Forms. where does the standard say what you say?

It does. You need to evaluate the args left-to-right, but you can
*process* them in a different order. For instance if I had:

(+ 1 1.0 1 1.0)

I can arrange to do the integer addition, a coercion and then two FP
additions, rather than coercion, fp-add, coercion, fp-add, fp-add.
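The reason the standard has to permit this at all is that floating-point
addition is not associative, so regrouping can change the answer. A quick
illustration (in Python for brevity; the effect is a property of IEEE-754
doubles, not of any particular CL implementation):

```python
# Floating-point addition is not associative: regrouping the same
# three addends produces two different doubles.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6

print(left == right)  # prints: False
```

So an implementation that reorders argument processing for + is trading
bit-exact reproducibility for speed, which 12.1.1.1 explicitly allows.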

--tim

Tim Bradshaw

unread,
Aug 28, 1998, 3:00:00 AM8/28/98
to
* Rainer Joswig wrote:
> Minus supporting threads on multiple processors
> for SMP machines (SMP Linux PCs, SUNs, ...).

It would be interesting to know what implementations are addressing
this. Franz seem to be aware of the problem at least, which is a good
sign.

--tim

Erik Naggum

unread,
Aug 29, 1998, 3:00:00 AM8/29/98
to
* Barry Margolin <bar...@bbnplanet.com>

| The above wasn't referring to evaluation of argument forms (which is
| left-to-right, as you say), but to how the function processes the
| arguments that were passed to it.
|
| See
| <http://www.harlequin.com/education/books/HyperSpec/Body/sec_12-1-1-1.html>
| for the standard's wording.

oh, OK, it's the fine line between "argument" and "parameter", again.
12.1.1.1 is very clear. thanks for the pointer.

Raymond Toy

unread,
Sep 8, 1998, 3:00:00 AM9/8/98
to
>>>>> "Raymond" == Raymond Toy <t...@rtp.ericsson.se> writes:

Raymond> CMUCL can for some functions produce code as good as C/C++. Using a
Raymond> straight-forward implementation of the code from Oppenheim and
Raymond> Schafer, a 64K-point complex radix-2 FFT in CMUCL takes 0.33 sec.
Raymond> Matlab takes 0.16 sec. Matlab may be using more sophisticated
Raymond> algorithms than my simple FFT. Perhaps a radix-8, radix-4 algorithm
Raymond> would produce similar results as Matlab. Then the IDCT could be as
Raymond> fast as C. Maybe.

Just wanted to post a correction that occurred to me. In the Matlab test
that I did, I essentially timed fft(ones(65536,1)). However, if I try
something like fft(ones(65536,1)*(1+j)), I get 0.33 sec, the same as
CMUCL. I guess that, with the previous call, Matlab noticed the data was
real and used a simpler, shorter real transform to get the desired
complex-valued result. The correct comparison, of course, is the
complex-valued transform, since that's what the Lisp version assumed.

So with this result, I see no reason at all why Lisp needs to be
slower than Fortran or C.
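For reference, the kind of "straight-forward implementation" being
benchmarked is a textbook radix-2 Cooley-Tukey FFT. Here is an
illustrative Python sketch of that algorithm (not the CMUCL code, which
followed Oppenheim and Schafer):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Split into even- and odd-indexed subsequences and transform each.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    result = [0j] * n
    for k in range(n // 2):
        # Twiddle factor e^(-2*pi*i*k/n) combines the half-size transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + t
        result[k + n // 2] = even[k] - t
    return result
```

A real-input transform can roughly halve the work by exploiting conjugate
symmetry, which is presumably the shortcut Matlab took on the all-ones
real vector above.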

Ray
