I forgot the title of a paper - it compared the design principles of Bell
Labs and MIT. Could anybody send me the title - a URL would be more than
welcome.
Thanks in advance,
That would be the classic paper
"Lisp: Good News, Bad News, How to Win Big", by Richard P. Gabriel,
especially the "Worse is Better" section. It's at
<http://www.ai.mit.edu/docs/articles/good-news/good-news.html>
and most LISP repositories.
>I forgot the title of a paper - it compared the design principles of Bell
>Labs and MIT. Could anybody send me the title - a URL would be more than
>welcome.
I think you're after Richard P. Gabriel's paper "Lisp: Good News, Bad News,
How to Win Big". It's a brilliant paper, IMHO, and well worth reading.
The URL is <http://www.ai.mit.edu/docs/articles/good-news/good-news.html>.
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@18.104.22.168 | -- the last words of T. S. Garp.
That paper has sort of a sequel, R. P. Gabriel's book
_Patterns_of_Software_, on patterns, design principles, habitable
software, ...
Those interested in the cited paper may want to read this fine book, too.
To my mind this paper is less than "classic", since it contains a serious
flaw. The paper characterises the New Jersey way of software building as
"creating something that's 50% of what you want, letting it spread, and
then subsequently improving it to 90% of what you want". The paper
contends that this is a better approach to software development than the
MIT way. However, there is a subtle flaw in this so-called New Jersey
approach, namely that it assumes that you *can* improve your
50%-of-what-you-want system to a 90%-of-what-you-want system. I have
worked on too many
projects that have tried exactly this 50->90 transition and *failed*. They
failed for two reasons:
a) inertia -- people liked the 50%-system and had invested heavily in
its usage, and subsequently didn't want to allow the changes necessary
to make the 90%-system.
b) technical reasons -- early design decisions in creating the
50%-system make it technically impossible to improve it to a 90%-system.
An example of b) is C and garbage collection. C's early decision to expose
raw, arithmetic-capable pointers now precludes proper safe garbage
collection (yes, I know about the Boehm collector, but it's conservative,
not safe like, say, an ML collector).
An example of a) is the Macintosh Operating system. Much fuss is created every
time Apple changes the OS to get rid of "undesirables" from the early days
(like system extensions).
Some kinds of software can successfully do a 50%->90% shift, but a lot cannot.
And that is a big flaw in the New Jersey way.
Yes, but I always thought of it as *designing* for 90%, implementing
50%, spreading, and then reaching 90%.
Yes, this is true.
On the other hand, the "create 100% of what you want in one shot" has
its problems as well. You might run out of money before the project is
complete. You might discover only at the end that you made fundamental
mistakes in your original design, but there's way too much investment to
make major changes now. You might create something that goes over the
top in complexity and feature creep. Or you might succeed, but too late:
your competitor with the 50% solution has already established a dominant
position in the market.
There are advantages and disadvantages to both sides, but I think it's
not an accident that almost all of today's commercially successful
software used the 50->90 model.
"To summarize the summary of the summary: people are a problem."
It's also not an accident that technically speaking most of today's
commercially successful software is awful (bugs, inconsistencies,
etc). It seems to me that if you are writing software to be
commercially successful then use the New Jersey model. If you
are writing software for other reasons then don't use the New Jersey
model.
>Russell Wallace wrote:
>> There are advantages and disadvantages to both sides, but I think it's
>> not an accident that almost all of today's commercially successful
>> software used the 50->90 model.
>It's also not an accident that technically speaking most of today's
>commercially successful software is awful (bugs, inconsistencies,
>etc). It seems to me that if you are writing software to be
>commercially successful then use the New Jersey model. If you
>are writing software for other reasons then don't use the New Jersey
>model.
I think this applies to success in general, not just commercial success.
Look at Linux vs Hurd, for example.
Linux started out as an x86-only Unix clone, written with no attempt at
portability in mind, but it now runs on just about everything bar your
shoe-phone. Clearly an example of the 50->90 model.
x86 vs. the world can be a more entertaining example. Can you
believe that this crappy ISA (instruction set architecture) is
now the top performer in the general purpose computer world?
(By "general purpose" I mean to exclude vector processors and
other more exotic systems currently suited only for specialized
applications.)
Face it: the x86 ISA is crap. The x87 (floating point) ISA is even
crappier. Yet look how far they have gone now.
-- Chuan-kai Lin
The key, I think, is getting the core architecture reasonably close to
right. (Where I'm using architecture in Brooks's sense to mean only
what the outside world sees.) Implementations can be redesigned and
rewritten later if need be, provided they keep the same interface, but
if you make a mess of the interface, not all your tears will wash out a
word of it.
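The point about redesigning implementations behind a fixed interface can
be sketched in C (my illustration, not from the thread) with an opaque
type: the struct layout and algorithm are the "implementation", while the
handful of declarations callers see are the "architecture".

```c
/* Sketch: callers compile against only the declarations below.  The
   layout and algorithm behind them can be rewritten at will -- array
   today, linked list tomorrow -- without breaking a single caller. */
#include <stdlib.h>

/* --- the interface: all the outside world ever sees --- */
typedef struct Stack Stack;            /* opaque: layout is hidden */
Stack *stack_new(void);
void   stack_push(Stack *s, int v);
int    stack_pop(Stack *s);
void   stack_free(Stack *s);

/* --- one implementation: a growable array --- */
struct Stack { int *data; int len, cap; };

Stack *stack_new(void) {
    Stack *s = malloc(sizeof *s);
    s->len  = 0;
    s->cap  = 8;
    s->data = malloc(s->cap * sizeof *s->data);
    return s;
}

void stack_push(Stack *s, int v) {
    if (s->len == s->cap) {            /* grow when full */
        s->cap *= 2;
        s->data = realloc(s->data, s->cap * sizeof *s->data);
    }
    s->data[s->len++] = v;
}

int stack_pop(Stack *s)   { return s->data[--s->len]; }
void stack_free(Stack *s) { free(s->data); free(s); }
```

Botch the five declarations, though, and no amount of rewriting below the
line will save you, which is the point being made about NT.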
Unix is a good example; the original version lacked a lot of features
now considered essential, but the architecture was clean and well
designed, and nowadays Unixes run enterprise servers with MTBF measured
in years.
Windows NT is a good counterexample; its problems aren't for want of
debugging effort, they're because the architecture was botched, and I
doubt if a million man-years of effort now would make it reliable.
> I think this applies to success in general, not just commercial success.
> Look at Linux vs Hurd, for example.
> Linux started out as an x86-only Unix clone, written with no attempt at
> portability in mind, but it now runs on just about everything bar your
> shoe-phone. Clearly an example of the 50->90 model.
Well, clearly an example of the 5-38 model, anyway.
I agree with this -- the architecture is the key. And to my mind this
is the flaw in both the "50->90 New Jersey" way of building software
and the "MIT way". In the 50->90 model one makes architectural
decisions early on in the 50% bit that often preclude capabilities
later in the 90% bit (worse, it's not clear that one is making such
architectural decisions). In the MIT approach one has to somehow
predict all architectural requirements before implementing anything.
That's very very hard to do right.
For my money both the MIT and New Jersey way are the wrong way to
build software, not that I know what the right way is.
So finally, your criticism notwithstanding, you tend to agree with
Gabriel's "Worse Is Better" paper, which as I recall doesn't at all
measure excellence by commercial (or reproductive) success.
"Unix and C are the ultimate computer viruses."
Well, perhaps it's time to quote Frederick Brooks: "Plan to throw one
away; you will, anyhow." And the less you build before that becomes
evident, the easier it is to throw it away...
Regarding Linux and architecture: of course, most of the architecture
was already there in the first place; Linux is very traditional in its
design. Compare that with Hurd, where the opposite is (almost)
true. Perhaps other successful open source software would tell us
more, what about Apache? Does anyone have inside info on it?
Stefan Axelsson Chalmers University of Technology
s...@rmovt.rply.ce.chalmers.se Dept. of Computer Engineering
(Remove "rmovt.rply" to send mail.)
I have been involved with the design of software in very large projects
for more than 20 years. There are two types of project:
1 - projects which build on existing architectures and design
principles for already successful products. Given reasonably
competent people, these projects usually succeed unless the new
product being designed is fundamentally different from the
original product. If this is the case, then you should not be
designing your new product based on the old one!
2 - projects building a new type of product. For these to work, the
following criteria must be true:
- A small team of competent architects (2 or 3 people) run the
project in a hands-on manner. They are the sort of people who
spend a lot of their time working with code and doing
experiments. If the architecture team are not "hands-on" people
but PowerPoint freaks, the project will fail.
- The project must not try to solve all the world's problems at
once but solve a subset which will define an architecture which
will enable projects of type 1) above. I would not call this
50->90, rather 5->90. Guessing future unknown requirements *is*
pure guesswork; you win some, you lose some. So keep the first
instantiation of the product *small* until you know the odds.
- Architecture, design principles etc. must be worked out by
prototyping and measurements. Paper studies, especially about
capacity, are frequently completely misleading. A lot of
prototypes and experiments must be done in the initial phases. This
is why languages which allow early prototyping and experiments
are very important. In our case Erlang fits very well here.
- The right documentation should be written. Projects can easily
be killed by too much, too little documentation or the wrong
documentation. No documentation means that the product reaches a
situation where it cannot be maintained or handed over to other
designers. Too much documentation means that the project becomes
a bureaucracy. Specifications are a good idea - but it is easy
to over-specify. Good comments in code are often quite enough to
document a product together with some short high level design
documentation. Good user level documentation is essential and
should be written by specialists, not by programmers!
- And of course you need a gang of motivated, experienced software
designers. (I use the term software designer as I assume that
these people can handle all phases of software design: systems,
programming, test, configuration management, integration, etc.)
A frequent project killer is expanding a project too
early. Another killer is pouring in people when the project is
delayed. "Adding manpower to a late software project makes it
later."