
Common Lisp programmer productivity


Software Scavenger

Feb 10, 2002, 11:51:25 PM

I think most of you agree that learning Common Lisp and using it for
real work should generally make a programmer more productive at that
work. But the big questions are how much and how fast. If you spend
several years gaining CL experience and building a personal library of
CL stuff, what are the productivity results likely to be over those
several years, and how fast are those results likely to be changing?
And how big a factor is your personal library, e.g. your functions and
macros which you reuse in different applications?

I know these questions are too vague and general for good specific
answers, but I would like to get some discussion of these issues and
whatever vague hints, anecdotes, etc., might throw some light on such
issues in different ways.

Kent M Pitman

Feb 11, 2002, 1:06:50 AM

cubic...@mailandnews.com (Software Scavenger) writes:

> If you spend several years gaining CL experience and building a
> personal library of CL stuff, what are the productivity results
> likely to be over those several years, and how fast are those
> results likely to be changing? And how big a factor is your
> personal library, e.g. your functions and macros which you reuse in
> different applications?

That depends on how much reuse is a built-in criterion for how the
libraries are constructed. Lisp _allows_ you to write reusable stuff
but it doesn't _force_ you to write reusable stuff. And not everyone
is educated in how to maximize reusability. If you routinely look for
ways to move stuff from "nearly general" to "actually general", your
personal library will get better with time.
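
To make that concrete, here is a hypothetical sketch of the move from
"nearly general" to "actually general" (illustrative code only, not
from any particular library):

  ;; "Nearly general": the comment convention is hard-wired in.
  (defun read-config-lines (file)
    (with-open-file (in file)
      (loop for line = (read-line in nil)
            while line
            unless (and (plusp (length line))
                        (char= (char line 0) #\#))
              collect line)))

  ;; "Actually general": the same loop with the policy decisions
  ;; exposed as parameters, so other applications can reuse it.
  (defun read-filtered-lines (file &key (test (constantly t))
                                        (transform #'identity))
    (with-open-file (in file)
      (loop for line = (read-line in nil)
            while line
            when (funcall test line)
              collect (funcall transform line))))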

One of the big enemies of good modular design is cut-and-paste.
Another is modern "large address space". These things enable people
to make multiple divergent versions of something easily, and
disincentivize the creation of a single common mechanism, which becomes
just an "option" rather than a necessity.

I myself, unless forced not to, usually force myself to do design for
the long term always, even at the risk of slowing myself in the short
term, because I know it will ultimately matter. This kind of design
practice works best when one has management support, though. Otherwise,
people who are doing "quick and dirty" stuff will seem to have better
throughput...

Erik Naggum

Feb 11, 2002, 4:22:19 AM

* Software Scavenger

| I think most of you agree that learning Common Lisp and using it for real
| work should generally make a programmer more productive at that work.

It depends largely on the goal of the programming task. If the goal is
to make something "work", you grab the tool that is already almost there
and just whip something together to get all the way there, and Common Lisp is not
"almost there" for a large set of common tasks today. (Those who want
Common Lisp to be "almost there" with a huge lot of stupid tasks go off
to whine about libraries and small syntax issues and go create Arc or
whatever, instead of seeing that being "almost there" is not an asset at
all, because the more "almost there" you get, the fewer things are within
reach at the same distance, and those things that are much farther away
than others are curiously considered "unsuitable" for the language once
it has succeeded in getting close enough for comfort for other tasks.)

If the goal is to make something of lasting value, which does only what
it is supposed to do and does not fail randomly, but has tightly
controlled and predictable failure modes with graceful recovery, then
most other languages and tools are so lacking in support for those
problems they were not specifically created to handle well, that _they_
are unsuited for top quality software and can only achieve this at great
expense. This is where I think Common Lisp really excels.

Those who argue that Common Lisp should be "almost there" for a lot of
common tasks, would like people to learn to use it with simple tasks and
then move on to more complex tasks. I have never seen any supporting
evidence of this transition, and much evidence to the contrary: that if
you use some tool for the simple tasks, it becomes so much harder to
write truly robust code and solve complex problems. I think this has two
important reasons that have to do with us being human beings: (1) If we
have too much comfort and predictability, abandoning the comfort zone is
very hard to do and we lose the predictability in an area where we think
we have sufficient expertise to predict most developments, and that means
that if you are good at slapping things together, you will not have the
experience to feel comfortable in writing robust code even in your "home
language". (2) Learning how to write robust code and solving complex
problems is a difficult process of its own, and _habituality_ dictates
that the languages and tools we use for such purposes should be very
different from what we do for simple tasks, like the difference between
a cheap ball-point pen used to jot down a shopping list and the calligraphy
equipment used on diplomas, invitations, or like the difference between a
newspaper on recycled trash paper that yellows and crumbles in three
months and a book on acid-free virgin paper that is expected to last. I
think we do not use the same tools, languages, and equipment for that
which we want to last and that which we want only in the short term for
accidental reasons: I think they communicate intent simply in that
choice: Using a calligraphic pen and expensive ink to sign a check in
your grocery store would be way too weird. For Common Lisp, this means
that writing small programs that solve common problems is mere "training"
for the real thing, getting used to the language, experimenting, etc, and
that is all well and good, but it must be recognized to be "training",
not the end result and purpose, like it is with crud like Perl.

So while it is crucially important to be fluent in Common Lisp, it must
have the purpose of being used in programming tasks that are more complex
and more difficult than what you can do with "almost there" tools.
Looking back at my varied career, I see that I have always been drawn to
these kinds of things: Ada, SGML, Common Lisp; I recognize that I am not
a "tinkerer" who is satisfied with all the one-liners that stood between
me and some "really" interesting problem, I took the problems I faced
seriously enough to commit myself to solving them for good. This is why
I love Common Lisp so much -- it lets me take problems seriously and
solve them seriously. (As for the Open Source element here, I am quite
happy about "sharing" bits and pieces, but I invest too much in what I
really want to do to want to give it away, _especially_ to "tinkerers".)

| But the big questions are how much and how fast.

In my view, the big question is how. Common Lisp can make programmers
productive in a very different way than, say, Java can, because when
faced with the same superficial problem, Common Lisp programmers see a
different problem than Java programmers, and therefore very different
solutions. I think Common Lisp programmers think more of creating a
system in which the problem is solved, while Java programmers think more
of fitting the problem into the system of solutions provided by their
language. This means that a Common Lisp programmer would generally spend
much more time thinking about the problem and its solution than a Java
programmer, who applies his restricted "solution world" to the problem
instead of applying his intelligence and knowledge of the real world to
the problem. In that thinking phase, a Common Lisp programmer is much
more likely to make serious mistakes, go down dead ends and backtrack,
etc, than a Java programmer would. In the end, I think Java is so hard
to program in that a Java programmer and a Common Lisp programmer will
emerge with a solution at the same time, but the Common Lisp solution
will provide the means to solve the next hundred problems in the system
with very little effort, while the Java solution will require repeating
the same development process for every one of those problems; its
developers may discover serious mistakes and dead ends _much_ later than
the Common Lisp developers did, leading to full reimplementations in the
face of failure to predict future demands, while the Common Lisp solution
would have been much better thought out and have prepared for those
future changes.

In summary, the first solution in Common Lisp and Java will probably be
completed with differences in implementation costs that drown in the
statistical noise, but the value of Common Lisp will show over time, with
the changes that are required to keep it working well, with changing ideas
about what the system should "really" be solving, etc, because the design
has been made with an eye to systems-building.

| If you spend several years gaining CL experience and building a personal
| library of CL stuff, what are the productivity results likely to be over
| those several years, and how fast are those results likely to be
| changing? And how big a factor is your personal library, e.g. your
| functions and macros which you reuse in different applications?

I think each area in which you work requires its own libraries of such
stuff, and that the "almost there" property of, say, Java, is that it is
such a large common library of stuff that its immediate appeal for those
tasks is quite strong, but every business that bases its operation on a
large system of solutions, will bring much more to the programming task
that is unique to that business than it can accept from the outside,
anyway, so there is a certain balance that can be struck here.

I think Java has some really huge advantages over Common Lisp that it
would be completely futile to fight, but Common Lisp also has some
amazing differences from other languages that Java in particular has no
chance ever of adopting. The question, then, is whether you model your
programming tasks according to what is easy to do in Java, or whether you
focus on your business and user needs and develop a system of your own.

Please note that the preceding supports, e.g., Franz's pricing model.

However, when it comes to building a large software system today, we have
a serious problem: (1) You cannot build a large system from small pieces
without getting into a huge logistical problem. (2) People only know
small pieces, and have experience only with small pieces (which you can
see immediately if you try to study some of the larger Open Source
projects). So, (3) arguing for tackling the inherent cost of complexity
up front is futile with today's managers and programmers because they are
simply ignorant of complexity theory and even management of complexity.
This means that a Common Lisp system in our time must start off as some
silly component-based crufty glued-together crap, and while Common Lisp
was exceptionally good for prototyping in days when billions of lines of
code were not readily available, it is not good for prototyping today.
That is, we must learn from building far smaller systems before we can
embark upon the large ones, for no better reason than that we have lost
the skills necessary to run large software projects. (Industry reports
seem to indicate that the rate of software project failures has been
increasing and that this is one of the reasons for the negative trend in
new software development projects.) So how do we _start_ using Common
Lisp today, when the cost of smaller projects is relatively higher than
that of comparable projects in "almost there" languages, and when most
projects are believed to be much smaller than they actually _should_ be?

| I know these questions are too vague and general for good specific
| answers, but I would like to get some discussion of these issues and
| whatever vague hints, anecdotes, etc., might throw some light on such
| issues in different ways.

I hope the above has been useful.

///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.

Frode Vatvedt Fjeld

Feb 11, 2002, 5:12:39 AM

Erik Naggum <er...@naggum.net> writes:

> However, when it comes to building a large software system today, we
> have a serious problem: (1) You cannot build a large system from

> small pieces without getting into a huge logistical problem. [...]

Hence the current research frenzy into "middleware", I suppose.

--
Frode Vatvedt Fjeld

Software Scavenger

Feb 11, 2002, 6:59:48 AM

Kent M Pitman <pit...@world.std.com> wrote in message news:<sfw8za0...@shell01.TheWorld.com>...

> I myself, unless forced not to, usually force myself to do design for
> the long term always, even at the risk of slowing myself in the short
> term, because I know it will ultimately matter. This kind of design
> practice works best when one has management support, though. Otherwise,
> people who are doing "quick and dirty" stuff will seem to have better
> throughput...

But after years of gradually gaining more and more productivity from
Lisp, don't you reach a point where you can design for the long term
and still beat the quick and dirty programmers in development speed?

Tim Bradshaw

Feb 11, 2002, 9:11:25 AM

* Software Scavenger wrote:
> I think most of you agree that learning Common Lisp and using it for
> real work should generally make a programmer more productive at that
> work. But the big questions are how much and how fast. If you spend
> several years gaining CL experience and building a personal library of
> CL stuff, what are the productivity results likely to be over those
> several years, and how fast are those results likely to be changing?
> And how big a factor is your personal library, e.g. your functions and
> macros which you reuse in different applications?

I find that I do almost no reuse of code, and after some time I've
worked out that, *for me*, physical reuse is almost always not worth
it. The reason that this is true (again, in my case, please read this
throughout the remaining stuff) is that I find a huge difference
between implementing some decent-quality code, and implementing some
decent-quality *library* code. Library code needs to be so much more
carefully designed so it *can* be reused, that I almost always find
it's easier to simply reimplement. There are exceptions, but they
tend to be low-level functionality - for instance I have a souped-up
defpackage which I use a fair bit. Generally, if I try to write
library-quality code, I end up doing it *so* slowly, because I'm so
aware of all the issues that it might need to deal with, that I'd be
better off not bothering.
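
For concreteness, a minimal sketch of what such a "souped-up"
defpackage wrapper might look like (hypothetical, not the actual macro
referred to above):

  (defmacro defpackage* (name &rest options)
    ;; Like DEFPACKAGE, but supplies a default :USE list when none is
    ;; given; a real version would add whatever local conventions pay.
    `(defpackage ,name
       ,@(unless (assoc :use options)
           '((:use :common-lisp)))
       ,@options))

  ;; (defpackage* :my-app (:export #:main))

Utilities at this level are cheap to design and tend to survive reuse.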

Even where I have written libraries, I often find that I end up not
using them, because it's simply quicker to reimplement. An example of
this is a simple command-line interpreter that a system I'm working on
has. I've done these before, and somewhere I have one which was meant
to be a library. But when I needed one last week it took about 3
hours to write from nothing: it would probably have taken me that long
to read and understand the documentation of the old one.

Instead what I do is some kind of conceptual reuse. I have a lot of
code lying around and I often look at it, but I tend to look at it to
get ideas of how I might solve some problems, and then just go ahead
and reimplement, rather than trying to use it as a library. This is a
better approach, for me, and it means that I spend more time writing
code that does what it says on the box, and only what it says on the
box, rather than trying to (over)generalise.

It may only be a possible approach, however, because I've - so far
successfully - managed to avoid the kind of insane overcomplexity that
is rife in modern computing systems. There's a great interview with
Brian Kernighan where someone asks him about Java, and he says he has
a 1000-page book which seems to consist simply of lists of functions
you can call, and that this kind of puts him off. I've avoided
systems like that so far, but if I get forced into them I expect I'll
spend a lot more time writing libraries and a lot less time actually
solving the problem at hand, simply because the libraries will be
necessary to isolate me from these 1000-page lists of functions.

Whether CL has made me more productive I'm not sure. I'm not really
fluent in any other language any more (maybe perl), so I don't have
anything to compare it with. I'm currently working on a system which
I developed entirely on windows and which we recently discovered needs
to run on Linux as well. We moved it to linux with *no* changes in
the code at all (it just built first time and ran), and it currently
has one substantive conditional on whether it's on windows or not, and
one conditional to do with whether cygwin is there or not. It does
lots and lots of pathname bashing and so on. I don't think I could
have done this without CL.
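
For a rough idea of what such a conditional can look like, here is a
hypothetical sketch (not the actual code; the :WIN32 feature keyword is
implementation-dependent):

  (defun default-scratch-directory ()
    ;; The one place where the platform difference shows through.
    (if (member :win32 *features*)
        #p"c:/temp/"
        #p"/tmp/"))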

--tim

Paolo Amoroso

Feb 11, 2002, 12:35:45 PM

On Mon, 11 Feb 2002 09:22:19 GMT, Erik Naggum <er...@naggum.net> wrote:

> I think Java has some really huge advantages over Common Lisp that it
> would be completely futile to fight, but Common Lisp also has some
> amazing differences from other languages that Java in particular has no
> chance ever of adopting. The question, then, is whether you model your

Which advantages and which differences? To clarify my question about
differences, I mean to ask whether you refer to differences beyond the ones
usually mentioned in Lisp literature--such as the ability to do source code
transformations, to embed languages, bottom-up programming, interactive
development, etc.


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
[http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/]

Kent M Pitman

Feb 11, 2002, 12:56:21 PM

cubic...@mailandnews.com (Software Scavenger) writes:

> But after years of gradually gaining more and more productivity from
> Lisp, don't you reach a point where you can design for the long term
> and still beat the quick and dirty programmers in development speed?

Yes, I think so. Lisp itself, and its rapid prototyping capabilities,
are a testament to that. Building new tools in new domains gives you
a similar leg up in those domains, yes. But success or failure of a
set of software can hinge on more than just this. Issues of bad
management, and other bad business practices not related to the sheer
quality of programming have set the industry back more than once. The
strength of the language and of the implementations allowed things to
survive through harsh times. But to be ultimately successful, we must
learn to make good business decisions, not just good libraries.

I'm not a big fan of the open source movement, but not because it asks
the wrong kinds of questions. The movement addresses the issue of the
social and business context in which we do our work, and that issue
must be addressed. I take issue with the specific proposals people
often make about how to solve this problem (as they take issue with
mine), but I don't take issue with the idea that there are social and
business problems that we must, as a community, address. [I've tried
to keep this statement relatively neutral/balanced so we don't have to
actually have that debate all over again right here, but I do think our
having that debate here on this group once in a while is not really
off-topic.]

At MIT, we used to see stories about Nasrudin in our "bye lines" (the
more modern name might be "fortune cookie messages"). If you're wondering
why understanding business should be relevant to programming, here's one
that seems apropos...

The Mullah Nasrudin was earning his living by running a ferry across
a lake. He was taking a pompous scholar to the other side. When
asked if he had read Plato's Republic, the Mullah replied, "Sir, I
am a simple boatman. What would I do with Plato?" The scholar
replied, "In that case half of your life's been wasted." The Mullah
kept quiet for a while and then said, "Sir, do you know how to
swim?" "Of course not," replied the professor, "I am a scholar. What
would I do with swimming?" The Mullah replied, "In that case, all of
your life's been wasted. We're sinking."

Eduardo Muñoz

Feb 11, 2002, 2:03:11 PM

cubic...@mailandnews.com (Software Scavenger) writes:

> > [...] Otherwise,
> > people who are doing "quick and dirty" stuff will seem to have better
> > throughput...
>
> But after years of gradually gaining more and more productivity from
> Lisp, don't you reach a point where you can design for the long term
> and still beat the quick and dirty programmers in development speed?

Yes. I have done this in a different context
(mechanical design). Designing with an eye on the
long term (reusability, generality and
correctness) will make you faster than any quick
and dirty designer in not so many years. But then,
if you don't have management support you'll have
to invest your own time to create a library of
reusable components. Later you will "output"
faster and better work or will be more relaxed at
work (or both :). A good investment, IMHO.


--

Eduardo Muñoz

Vebjorn Ljosa

Feb 11, 2002, 2:41:35 PM

* Tim Bradshaw <t...@tfeb.org>

|
| Even where I have written libraries, I often find that I end up not
| using them, because it's simply quicker to reimplement. An example of
| this is a simple command-line interpreter that a system I'm working on
| has. I've done these before, and somewhere I have one which was meant
| to be a library. But when I needed one last week it took about 3
| hours to write from nothing: it would probably have taken me that long
| to read and understand the documentation of the old one.
|
| Instead what I do is some kind of conceptual reuse. I have a lot of
| code lying around and I often look at it, but I tend to look at it to
| get ideas of how I might solve some problems, and then just go ahead
| and reimplement, rather than trying to use it as a library.

This is surprising to me. If the old code is in use, don't you have
to maintain it, i.e., fix bugs and make other changes from time to
time?

I'm still a student, so my commercial Lisp programming experience is
measured in months instead of years, but I would expect that making
something into a real library would in many cases be easier than
maintaining two parallel versions.

--
Vebjorn Ljosa

Kaz Kylheku

Feb 11, 2002, 2:52:25 PM

In article <a6789134.0202...@posting.google.com>, Software Scavenger wrote:

I have less than a year's experience in CL, and I don't have any personal
libraries.

I chose CL for developing Meta-CVS, a freeware version control tool
implemented as a front end for CVS. Meta-CVS handles versioning of the
directory structure just fine, unlike CVS which doesn't.

I had the ideas for this software in my head for some time, but I was
looking for the right language to express them.

Lisp turned out to be ideal, because I didn't have all the details nailed
down and wanted to do some exploratory programming to play with the ideas,
without having to throw everything away and start from scratch when
I hit upon something good. The interactive nature of Lisp helped get
this rolling; in fact I was able to start versioning the Meta-CVS sources
using Meta-CVS, when it was still just a bunch of functions unintegrated
into a tool.

Another concern I had was error handling. Again, Lisp turned out to be
the right catalyst. When you do crazy things like completely rearrange a
local directory structure after a CVS update operation, you are probably
from time to time going to run into snags, and have to be able to roll
back or otherwise get into some sane state. That is going to require
interaction from the user. Lisp's conditions and restarts are perfect
for this. What you can do is put in your restarts, and then just interact
with them using your debugger. This way you interactively learn about
error scenarios, and how you should revise your error handling strategy.

The macros, functional programming, built-in parser, and useful built-in
functions allowed me to go from nothing in early January to a 0.0 alpha
release on January 27. That's phenomenal productivity, given that I only
worked on it sporadically on evenings and weekends, sometimes ignoring
the project for days at a time.

What would have helped productivity even more, not to mention portability,
would be if there were a standard POSIX interface for Common Lisp. Right
now the program relies on the glibc2 bindings in CLISP, which ties it
to Linux. I had to do some silly things to deal with paths, but it's
all worth it.

You have to remember that CVS is some 16 years old, in which time it
failed to acquire a directory versioning feature, and nobody that I'm
aware of has written a directory versioning front end. And not having
this feature, nor even simple file renaming, is oft complained about,
because it hurts! And most CVS users are aware that the feature is
available in some proprietary systems.

So there has to be some kind of barrier against implementing this,
and that barrier is certainly not due to not having a need, and it's
not due to failure to come up with the idea *how* to do it, because that
is simple and obvious. No, it's largely a *programming language* barrier.
For instance, at one point I started hacking on this project in C++
and quickly gave up.

It's not that I don't know C++: it's what has been paying my bills for
five years, and I've known C for some 13 years. So when some language
you've been learning for nine or ten months better enables you to do
a task than tools you've known for years, that tells you something.

It's only because Lisp landed in my lap that I was able to hack up Meta-CVS,
and to do it with little effort, in little time, all the while having fun,
because Lisp removed the ``wall of feces'' between the idea and the gritty
details of the implementation. In Lisp, at worst you have an isolated
little turd here and there that you can hold your nose and jump over.
It's almost as if the language doesn't exist; you are expressing the
computation as it appears in your thoughts.

I can't escape the sensation that I have already been thinking in Lisp
all my programming career, but forcing the ideas into the constraints
of bad languages, which explode those ideas into a bewildering array of
details, most of which are workarounds for the language. In Lisp, there
is a nice 1:1 correspondence between an idea and its expression.
You teach Lisp your idea once and then communicate in the language of
that idea.

Here is an example. When you visualize something being done recursively
to a directory tree, what do you think of? Certainly not about the ugly
details of the operating system calls, the recursion, the catenation
of paths and so on. You juxtapose two ideas: the idea that a directory
traversal is taking place, and some action being done on each element,
which is discerned by some name. All you care about is where the
traversal is rooted and what it does. Moreover, each element is an
object with some properties, like predicates that indicate whether it's
a directory or not. So the language ought to make this equally easy to
express as a simple juxtaposition of a few symbols. Therefore, one of
the first things I did in Meta-CVS was make a macro, which
can be invoked according to this example:

(for-each-file-info (fi "/path/to/somewhere")
  ;; Iterate over /path/to/somewhere, and bind each object to the variable
  ;; fi in turn. Folks, it doesn't get simpler than this.
  (let ((name (file-name fi)))
    (cond
      ((directory-p fi)
       (format t "directory: ~a~%" name))
      ((regular-p fi)
       (format t "regular file: ~a~%" name))
      (t (format t "special file: ~a~%" name)))))

When you have this macro, you suddenly realize that directory traversal
is so easy, you can casually do it wherever you want at the drop of a hat.
Need to add up the sizes of files in a directory? Easy.

(defun disk-usage-bytes (path)
  (let ((usage 0))
    (for-each-file-info (fi path)
      (incf usage (* (block-size fi) (num-blocks fi))))
    usage))

(Hmm, this is clearly a do- form that needs an optional third
parameter, a form that specifies the result; guess what, we can
add that without breaking any existing code).
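
For readers wondering what is behind such a macro, here is a minimal
hypothetical sketch -- not the real Meta-CVS implementation. It assumes
helpers GET-FILE-INFO (returns an info object for a path) and
LIST-DIRECTORY (returns a directory's entries), plus the accessors used
above:

  (defmacro for-each-file-info ((var root) &body body)
    `(walk-file-info ,root (lambda (,var) ,@body)))

  (defun walk-file-info (path fn)
    ;; The restart lets a handler -- or the interactive debugger --
    ;; skip an unreadable entry and carry on with the traversal.
    (with-simple-restart (continue "Skip ~a and continue." path)
      (let ((fi (get-file-info path)))
        (funcall fn fi)
        (when (directory-p fi)
          (dolist (entry (list-directory path))
            (walk-file-info entry fn))))))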

Error handling? It's built into for-each-file-info, which contains
restarts that enable it to continue working in the face of an error.
Some error handler can supply the strategy for dealing with this,
so it doesn't have to be pushed into disk-usage. So if you, say, make
a GUI around disk-usage-bytes, you can have an error handler that brings
up a dialog box: ``could not descend into /home/joe/mail: permission
denied'' with buttons for aborting, or continuing the scan of /home/joe''.
Or if you wrap it in a batch utility, you can spit out an error message
and automatically continue. Then at the end return an unsuccessful
termination status with a message like ``could not calculate true usage
because some directories were inaccessible''. Or optionally the tool
could bail at the first error, depending on some command line argument.

The disk-usage function doesn't have to care about this at all; it will
add up the right total of whatever was processed. If you haven't written
an error handler, the debugger will provide you with one, so you don't
have to worry about it while developing the macro.
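
The batch-utility scenario might look like this (hypothetical code,
assuming the traversal establishes a CONTINUE restart as described):

  (defun disk-usage-batch (path)
    (let ((skipped 0))
      (handler-bind ((file-error
                       (lambda (condition)
                         ;; Report, resume the CONTINUE restart, and let
                         ;; the traversal carry on with the next entry.
                         (format *error-output* "skipping: ~a~%" condition)
                         (incf skipped)
                         (invoke-restart 'continue))))
        (format t "~d bytes~%" (disk-usage-bytes path)))
      (unless (zerop skipped)
        (format *error-output*
                "~d entries were inaccessible; the total is a lower bound.~%"
                skipped))))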

What if we suspect there are Unix hard links in the tree, and don't want
to add up the usage twice? We can make a hash table keyed on inode
to sniff out the duplicates, something like:

(defun disk-usage-bytes (path)
  (let ((usage 0)
        (inode-hash (make-hash-table)))
    (for-each-file-info (fi path)
      (setf (gethash (inode-num fi) inode-hash)
            (* (block-size fi) (num-blocks fi))))
    (maphash #'(lambda (key val)
                 (declare (ignore key))
                 (incf usage val))
             inode-hash)
    usage))

With only a modest increase in size, we could take on all of the
bells and whistles provided by GNU du, and have them done in probably
under one hour, in way fewer lines of code.

Kent M Pitman

Feb 11, 2002, 3:05:14 PM

Vebjorn Ljosa <ab...@ljosa.com> writes:

> * Tim Bradshaw <t...@tfeb.org>
> |
> | Even where I have written libraries, I often find that I end up not
> | using them, because it's simply quicker to reimplement. An example of
> | this is a simple command-line interpreter that a system I'm working on
> | has. I've done these before, and somewhere I have one which was meant
> | to be a library. But when I needed one last week it took about 3
> | hours to write from nothing: it would probably have taken me that long
> | to read and understand the documentation of the old one.
> |
> | Instead what I do is some kind of conceptual reuse. I have a lot of
> | code lying around and I often look at it, but I tend to look at it to
> | get ideas of how I might solve some problems, and then just go ahead
> | and reimplement, rather than trying to use it as a library.
>
> This is surprising to me. If the old code is in use, don't you have
> to maintain it, i.e., fix bugs and make other changes from time to
> time?

It is not necessarily the same people who are maintaining it, and they may
not appreciate having to upgrade their code to accommodate your fix. It may
be they only want to download a fresh copy of the library in question to get
out-and-out bugfixes, but they are forced to confront other, gratuitous
changes because they cannot receive the fixes without the gratuitous shifts
in featurism. This problem is true throughout computer science. Makers of
everything from Linux to Windows (that is, it doesn't seem to matter what
your economic model is) have the problem that they integrate the creation of
new featurism with the creation of new bugfixes, and people who want
stability are caught in the crossfire. Both IBM (with its compatibility for
old mainframe processors) and Debian Linux (with its concern about stability)
have shown that someone can have the will to take another path if they want
to. But it's an open question whether the market will reward such passion
and concern in any given instance.



> I'm still a student, so my commercial Lisp programming experience is
> measured in months instead of years, but I would expect that making
> something into a real library would in many cases be easier than
> maintaining two parallel versions.

Speaking as someone who has spent the last year trying to put together
a set of libraries for public consumption, I'd guess you underestimate
the cost of designing, packaging, testing and documenting a package
for public consumption. The problem isn't merely writing one version
vs writing two, but that as you "fix" one's behavior for one purpose,
you potentially have to modify the syntax, semantics, documentation, etc.
affecting all other uses. You want to get things "mostly right" to start
with because once something is released, it is hard to change.

I heard a story a while back about the designers of Unix's make utility
deciding that giving the Tab character special significance over Space was
really wrong. As the story goes, there were already a dozen or two
people depending on the behavior of Tab, so rather than cause
incompatible change that would "break the world", they continued with
what they considered a suboptimal definition. Moral? Well, now there
are many more than just a dozen or two people who wish this would get
fixed, but there is no way to accommodate them. What it tells you is
that it doesn't take very many users depending on you before a design
is utterly frozen and "cost of change" is promoted to a key design
decision. That is certainly the case for CL, and it causes some of us
who resist change to look like we are anti-progress. We aren't, of
course, we just like change to progress in an orderly way. There are
compatible paths and incompatible paths, and creating a new library is
often a more compatible path than using an existing one... unless the
existing one was designed with _considerable_ thought, which then raises
the price of producing it well above what you are surmising.

Tim Bradshaw

Feb 11, 2002, 3:56:33 PM

* Vebjorn Ljosa wrote:

> This is surprising to me. If the old code is in use, don't you have
> to maintain it, i.e., fix bugs and make other changes from time to
> time?

yes, quite often.

> I'm still a student, so my commercial Lisp programming experience is
> measured in months instead of years, but I would expect that making
> something into a real library would in many cases be easier than
> maintaining two parallel versions.

But I don't have two parallel versions typically, I have system A
which is being developed and maintained and system B which is being
similarly developed and maintained. Typically I have one large
high-quality common library, which fortunately I can pay other people
to maintain, called Common Lisp. I do have other bits of common
functionality (so I slightly exaggerated, perhaps) but in general
although the code might do similar things it is sufficiently different
that it gets maintained separately. I could spend a lot of time
trying to abstract out commonality, but when I've done that I've found
myself both spending a *very* long time doing the abstraction and
robustifying of the code and then discovering that it didn't do what I
wanted anyway, so I had to spend yet more time on abstraction which I
didn't really need. So I've mostly given up: I'm not good
enough at writing or designing libraries to make it worth it for me.
Instead I concentrate on developing techniques which make writing
single-purpose code fast enough that the lack of huge libraries is not
hurting too badly.

Incidentally I think that this kind of situation is fairly pervasive.
Although it's clearly possible to write libraries which are
good-quality and genuinely reusable, very many of the ones that exist
seem to me to fail to meet these criteria. One has only to look at
the vast number of different(ly broken) standards for doing similar
things - say, remote function call - to worry about this. I think
some of the MS APIs are also fine examples: they seem to consist
largely of repeated reworkings of the same ideas, presumably because
they've failed to produce a good enough library at any point and so
have repeatedly hacked away at the thing trying to produce something
that works (of course in the MS case they're probably also motivated
by wanting not to have too much compatibility). I once had a look at
the SMB protocol and it's really amazing. Java too - how many
versions of the libraries have they gone through so far?

I think that writing libraries *that are useful outside their initial
domain* is very, very hard, and in particular it's too hard for me in
general.

--tim

Marco Antoniotti

Feb 11, 2002, 4:09:01 PM

Tim Bradshaw <t...@tfeb.org> writes:

...

> I think that writing libraries *that are useful outside their initial
> domain* is very, very hard, and in particular it's too hard for me in
> general.

I think it was Stroustrup who said that "Library Design is Language
Design" (or at least I remember seeing it in the ARM).

I wholeheartedly agree.

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.

Fernando Rodríguez

Feb 11, 2002, 5:52:15 PM

On Mon, 11 Feb 2002 19:52:25 GMT, k...@accton.shaw.ca (Kaz Kylheku) wrote:


>I chose CL for developing Meta-CVS, a freeware version control tool
>implemented as a front end for CVS. Meta-CVS handles versioning of the
>directory structure just fine, unlike CVS which doesn't.

Have you seen Subversion (at tigris.org)?

----
Fernando Rodríguez
frr at wanadoo dot es
-------

Kaz Kylheku

Feb 11, 2002, 7:26:06 PM

In article <fmig6uc180747423r...@4ax.com>, Fernando Rodríguez wrote:
>On Mon, 11 Feb 2002 19:52:25 GMT, k...@accton.shaw.ca (Kaz Kylheku) wrote:
>
>
>>I chose CL for developing Meta-CVS, a freeware version control tool
>>implemented as a front end for CVS. Meta-CVS handles versioning of the
>>directory structure just fine, unlike CVS which doesn't.
>
>Have you seen Subversion (at tigris.org)?

Yes. I took a look at it, and just went ``gack''. It's a from-scratch
hugeware project, which is overkill if all you want is what CVS does now,
plus directory structure versioning. I don't want WebDAV, XML or any
of that crap, just good old Secure Shell access. I also want Meta-CVS
users to be able to use existing sites that host CVS without asking the
admins to set up anything new. Sysadmins hate doing that for some
reason. :)

Also note that in a few weeks of Lisp hacking, while retaining CVS as
a substrate, I already have a functioning solution that can
handle merges of the directory structure. Meta-CVS users can also
share patches with each other containing changes to the directory
structure. These are applied in your Meta-CVS working copy with the
standard patch tool, followed by a ``mcvs up'' to bring the rearrangement
into effect. Similarly, if they distribute Meta-CVS archives, they can
track them using vendor branches.

So I'm basically ready for production use here, modulo a few issues,
and the poor platform support. In terms of performance, I have tested
Meta-CVS on some large-ish projects like GNU Libc 2.2.x and the Linux
kernel 2.4.x. Right now it does well, except that large rename operations
involving hundreds or thousands of files are slow. This is currently
done using some ``dumb'' Lisp list processing that can be sped up in
obvious ways, but get it right first, optimize later, right?

Whereas after months of hacking in a Stone Age language, Subversion is
not at the stage where I could use it. If we take a look at the status
page at http://subversion.tigris.org/project_status.html we find:

Subversion 0.9 (Mon 11 Feb 2002): Code refactorization in
preparation for implementing "svn merge" (see issue #504);
"svn switch" ( issue #575); and resolution of some repository
db stability issues (issue #608).

All of these things appear rock-solid in Meta-CVS. There will *not* be
a refactoring or rewrite to get branch switching or merging to work.
These features are inherited straight from CVS, and the stability is at
the same level as that of CVS.

In Meta-CVS, the directory structure is versioned in exactly the same way
as any other file. CVS is kind of turned on itself. This gives you the
immediate advantage of being able to treat the directory structure as just
another part of your source code that can be subject to the same parallel
development as anything else. The directory structure is represented
as a simple association list mapping the files known to CVS to
their paths as they appear to the users. This association list is formatted
out to a text file which is just stored in CVS. That text file can
be grokked by humans. It can be edited, diffed, merged, subject to
conflict resolution, patched, etc.
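
To make the idea concrete, such a mapping file might look something
like this (a hypothetical illustration only; the actual Meta-CVS format
may differ):

  ;; alist of (CVS-side file identifier . user-visible path)
  (("F-0001" . "src/main.lisp")
   ("F-0002" . "src/util.lisp")
   ("F-0003" . "doc/README"))

Renaming a file is then a one-line edit to this text, which CVS can
version, diff, and merge like any other source file.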

So as you can see, I didn't wait for the ``compelling replacement for
CVS'' to materialize; I made it myself. :)

Alain Picard

Feb 12, 2002, 3:22:15 AM

Kent M Pitman <pit...@world.std.com> writes:

> But success or failure of a
> set of software can hinge on more than just this.

["this" referring to rapid prototyping, and technical superiority]

Indeed, I would go further and say that success or failure of
software is almost _never_ related to technical issues, but to
failures of management and/or business development.

This adds noise to the measurement of which tools actually _are_
superior, and thus contributes to the survival of inferior technologies.


--
It would be difficult to construe this as a feature.
                -- Larry Wall, in article <1995May29....@netlabs.com>

Oliver Vecernik

Feb 12, 2002, 5:45:07 AM

Tim Bradshaw wrote:

> [...]

> I developed entirely on windows and which we recently discovered needs
> to run on Linux as well. We moved it to linux with *no* changes in
> the code at all (it just built first time and ran), and it currently
> has one substantive conditional on whether it's on windows or not, and
> one conditional to do with whether cygwin is there or not. It does
> lots and lots of pathname bashing and so on. I don't think I could
> have done this without CL.


Which tools are you using?

Oliver

Software Scavenger

Feb 12, 2002, 6:40:44 AM

Marco Antoniotti <mar...@cs.nyu.edu> wrote in message news:<y6c6653...@octagon.mrl.nyu.edu>...

> I think ti was Stroustroup who said that "Library Design is Language
> Design" (or at least I remember seeing it in the ARM).

The main idea of Paul Graham's "On Lisp" seems to be that program
design should be both top down and bottom up. Top down to adapt the
program to the language, and bottom up to adapt the language to the
program. Adapting the language to the program means building a more
focused language on top of the original language. That's language
design. Therefore, even though it's true that "library design is
language design", it's like saying "going to work each day is a
journey."
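
A hypothetical example of adapting the language to the program: after a
handful of bottom-up definitions like the following, application code
reads as if CL had been designed for the problem domain
(BEGIN-TRANSACTION and friends are assumed domain functions here, not
standard CL):

  (defmacro with-transaction ((db) &body body)
    `(progn
       (begin-transaction ,db)
       (unwind-protect
            (progn ,@body
                   (commit-transaction ,db))
         ;; Runs on any exit; a no-op if the commit above succeeded.
         (rollback-if-pending ,db))))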

Tim Bradshaw

Feb 12, 2002, 8:02:06 AM

* Software Scavenger wrote:
> The main idea of Paul Graham's "On Lisp" seems to be that program
> design should be both top down and bottom up. Top down to adapt the
> program to the language, and bottom up to adapt the language to the
> program. Adapting the langauge to the program means building a more
> focused language on top of the original language. That's language
> design. Therefore, even though it's true that "library design is
> language design", it's like saying "going to work each day is a
> journey."

But that's a different meaning, really, because it's *special-purpose*
language design, which is a much lesser thing than *general-purpose*
language design. Just because I've written a mini-language for the
configuration files of my system (which I have), doesn't mean that
that mini-language is, or should be, useful outside the context of my
system. In my case, it almost certainly will not be, although some of
its concepts probably will be (in fact most of the ideas behind it
were borrowed from other config-file-type mini-languages I have.)

--tim

Brian P Templeton

Feb 13, 2002, 8:35:27 PM

k...@accton.shaw.ca (Kaz Kylheku) writes:

> In article <fmig6uc180747423r...@4ax.com>, Fernando
> Rodríguez wrote:
>>On Mon, 11 Feb 2002 19:52:25 GMT, k...@accton.shaw.ca (Kaz Kylheku) wrote:
>>
>>
>>>I chose CL for developing Meta-CVS, a freeware version control tool
>>>implemented as a front end for CVS. Meta-CVS handles versioning of the
>>>directory structure just fine, unlike CVS which doesn't.
>>
>>Have you seen Subversion (at tigris.org)?
>
> Yes. I took a look at it, and just went ``gack''.

[...]


> So as you can see, I didn't wait for the ``compelling replacement for
> CVS'' to materialize; I made it myself. :)

Have you looked at PRCS yet? (Development seems to have stopped,
though.)

--
BPT <b...@tunes.org>          /"\  ASCII Ribbon Campaign
backronym for Linux:          \ /  No HTML or RTF in mail
  Linux Is Not Unix            X   No MS-Word in mail
Meme plague ;) --------->     / \  Respect Open Standards

Kaz Kylheku

Feb 13, 2002, 10:20:15 PM

In article <87lmdwn...@tunes.org>, Brian P Templeton wrote:
>Have you looked at PRCS yet? (Development seems to have stopped,
>though.)

Nope, looking at it now. Some similarity of ideas, extending to the
use of Lisp expressions for storing meta-data.

It does more things than Meta-CVS, like store symbolic links,
and has to duplicate a lot of functionality that is in CVS.

But, oops, 27000 lines of C++, and 16000 more of C. Blech!
My eyes are glazing over. :)

--
Meta-CVS: version control with directory structure versioning over top of CVS.
http://users.footprints.net/~kaz/mcvs.html

Marco Antoniotti

Feb 14, 2002, 8:54:40 AM

k...@accton.shaw.ca (Kaz Kylheku) writes:

> In article <87lmdwn...@tunes.org>, Brian P Templeton wrote:
> >Have you looked at PRCS yet? (Development seems to have stopped,
> >though.)
>
> Nope, looking at it now. Some similarity of ideas, extending to the
> use of Lisp expressions for storing meta-data.

They are not `good' S-exprs. The authors of PRCS had only a cursory
knowledge of CL and ended up making up a format that is not good
enough.

Apart from that, PRCS is infinitely easier to use than CVS as long as
you stay on a single machine/network. PRCS 2.0 is (was) supposed to
enhance PRCS with good networking capabilities. Right now you have a
simple networking layer on top of it but I have not used it.

> It does more things than Meta-CVS, like store symbolic links,
> and has to duplicate a lot of functionality that is in CVS.

Of course it has to.

Brian P Templeton

Feb 15, 2002, 8:45:28 PM

k...@accton.shaw.ca (Kaz Kylheku) writes:

> In article <87lmdwn...@tunes.org>, Brian P Templeton wrote:
>>Have you looked at PRCS yet? (Development seems to have stopped,
>>though.)
>
> Nope, looking at it now. Some similarity of ideas, extending to the
> use of Lisp expressions for storing meta-data.
>

However, the file format isn't extensible at all.

> It does more things than Meta-CVS, like store symbolic links,
> and has to duplicate a lot of functionality that is in CVS.
>
> But, oops, 27000 lines of C++, and 16000 more of C. Blech!
> My eyes are glazing over. :)
>

The C files are just copies from other libraries, presumably so it can
compile on Unices without, e.g., GNU getopt.

I am convinced that a PRCS-like system in Common Lisp would be
considerably shorter :).

> --
> Meta-CVS: version control with directory structure versioning over top of CVS.
> http://users.footprints.net/~kaz/mcvs.html

--

Marco Antoniotti

Feb 19, 2002, 10:54:09 AM

Brian P Templeton <b...@tunes.org> writes:

> k...@accton.shaw.ca (Kaz Kylheku) writes:
>
> > In article <87lmdwn...@tunes.org>, Brian P Templeton wrote:
> >>Have you looked at PRCS yet? (Development seems to have stopped,
> >>though.)
> >
> > Nope, looking at it now. Some similarity of ideas, extending to the
> > use of Lisp expressions for storing meta-data.
> >
> However, the file format isn't extensible at all.
>
> > It does more things than Meta-CVS, like store symbolic links,
> > and has to duplicate a lot of functionality that is in CVS.
> >
> > But, oops, 27000 lines of C++, and 16000 more of C. Blech!
> > My eyes are glazing over. :)
> >
> The C files are just copies from other libraries, presumably so it can
> compile on Unices without, e.g., GNU getopt.
>
> I am convinced that a PRCS-like system in Common Lisp would be
> considerably shorter :).

Sure. All the parsing will go away. :)
