Once upon a time I wrote a credit card billing system for a company which
shall remain nameless. The company's product was on-line advertising
delivered by an ad server, which was already up and running. (At the
time, the company only had corporate accounts, which were billed
manually.) The number of ads delivered was written to a database which
the biller that I wrote would periodically scan to generate invoices. The
invoices were then automatically charged to credit cards on file. The
entire process was fully automated.
During the initial development of the system I had a run-in with a
high-level manager at the company who opined that my dislike of Java and
C++ in favor of Lisp was irrational, that the static type systems provided
by these languages provided a powerful and reliable mechanism for
introducing incremental changes to a system, to wit: you'd make a change
to part of the system, compile it, and the compiler errors would point you
reliably to all the other parts of the system that required attention in
order to support that change. The introduction of credit card billing
required some changes to the ad server which were done more or less
following that methodology by a very bright and highly competent engineer
(not me).
The system was launched and had been happily humming along for a week or
two when all of a sudden it began to issue outrageously large charges -
millions of dollars in some cases - to people's credit cards.
Fortunately, sanity checks at the credit card processor prevented most of
these charges from going through, but a few did, and sorting out the
resulting mess took a very long time.
The proximate cause of the problem was quickly determined to be in the ad
server, which was suddenly claiming to have served vastly more ads than
was physically possible. But the ad server was old software. It had been
running for months with no problems. So there we were, facing this
critical problem in what had been a stable piece of software with no idea
how to even reproduce it, let alone track it down.
To make a long story short, what happened was this: when an ad-server shut
down, part of the shut-down code did a final update of the database to
record all the ads that had been served since the last update before the
shut-down. But the ad-server was multi-threaded, and it turned out that
there was a race condition. The thread that did the final database update
used data structures whose destructors were being run in a different
thread. Kablooey.
I can no longer recall whether the changes made to support credit card
billing actually introduced the race condition or just changed the timing
so that a problem that had actually been there all along began to manifest
itself, but it doesn't matter. The point is this: the more kinds of
errors are flagged for you at compile time, the more you can get lulled
into a sense of complacency and start to believe that if a program
compiles without errors then it is actually free of bugs. This is
especially true if you are using a language that doesn't allow you to run
your program at all until you have eliminated all the errors that the
compiler could find. After all, it is not unreasonable to expect that all
the work required just to get the program to compile so you could run it
for the first time should pay a dividend of some sort, and what else could
that dividend possibly be other than having to do less testing?
So it is not at all clear to me that eliminating even a certain class of
run-time errors actually has net positive utility. The real payback from
compile-time detection of type errors is not a reduced necessity for
testing; it is instead merely a time savings, finding those errors sooner
rather than later. And if one does allow oneself to be deluded into
thinking that static typing does pay a dividend in reduced testing then
static typing could actually have a negative net benefit when the laws of
physics and mathematics come around to remind you that you were wrong.
E.
Well, if you have incompetent programmers, no language can really solve
that problem. I've used static-typed languages and dynamic-typed
languages, and they each have their pros and cons. I've never known a
decent programmer who assumed that the only kinds of errors are the ones
that the compiler can detect.
Compilers can generally only detect some really simple, stupid errors like
misspelling variable names or changing a type in one place without changing
it in others. Semantic errors like the one that caused your problem are
clearly beyond the abilities of any compiler to detect, and you have to do
exhaustive testing to verify things at this level.
Blaming this failure on the language is really barking up the wrong tree.
--
Barry Margolin, barry.m...@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
[...]
> During the initial development of the system I had a run-in with a
> high-level manager at the company who opined that my dislike of Java and
> C++ in favor of Lisp was irrational, that the static type systems provided
> by these languages provided a powerful and reliable mechanism for
> introducing incremental changes to a system, [...]
There ought to be some sort of comp.lang FAQ entry explaining that no
advocate of static type systems will ever take any defense of latent
typing seriously if it mentions C++ or Java.
-thant
> The real payback from compile-time detection of type errors is not
> a reduced necessity for testing; it is instead merely a time savings,
> finding those errors sooner rather than later.
Actually, finding errors sooner rather than later is not always an
advantage.
There's one big advantage to finding errors at run-time rather than
compile-time: you can get a lot more information about the situation
that caused the error. Any decent debugger will let you inspect the
entire sequence of events, i.e. function calls and arguments, that
led to the error. You get an actual, real-life, fully worked out
*example* of your thinking mistake, not just a type-mismatch error
(which can sometimes be very cryptic, in my experience). It's hard
to overestimate the value of a good example.
(Obviously, finding errors at run-time rather than compile-time has
its disadvantages too.)
Arthur Lemmens
> In article <gat-171103...@k-137-79-50-101.jpl.nasa.gov>,
> Erann Gat <g...@jpl.nasa.gov> wrote:
> >I can no longer recall whether the changes made to support credit card
> >billing actually introduced the race condition or just changed the timing
> >so that a problem that had actually been there all along began to manifest
> >itself, but it doesn't matter. The point is this: the more kinds of
> >errors are flagged for you at compile time, the more you can get lulled
> >into a sense of complacency and start to believe that if a program
> >compiles without errors then it is actually free of bugs.
>
> Well, if you have incompetent programmers, no language can really solve
> that problem.
But that's just the thing: this wasn't a case of programmer incompetence.
The person who made the changes to the ad server that resulted in the
problem was very smart, very competent. (You may say that the fact that
this problem arose is de facto evidence that he was not competent, but
after the problem occurred it took a whole team of engineers a week to
figure out what was going on. That whole team would have had to have been
incompetent, and I assure you they were not. These were the cream of the
crop.)
> Blaming this failure on the language is really barking up the wrong tree.
You have badly misunderstood my point. I do not blame this failure on the
language. I blame this failure on the mindset that because the compiler
catches certain kinds of errors that you can get away with less testing
than if it didn't. In other words, I am not saying static typing is bad,
just that one of the benefits that many people cite for it is not really a
net benefit.
E.
> [...] I do not blame this failure on the
> language. I blame this failure on the mindset that because the compiler
> catches certain kinds of errors that you can get away with less testing
> than if it didn't. In other words, I am not saying static typing is bad,
> just that one of the benefits that many people cite for it is not really a
> net benefit.
I've heard exactly this argument used to defend manual memory management
over GC.
-thant
And I'm saying that that mindset is an indication of poor programming
skill. If your cream of the crop thinks like that, you need to find a
better crop. Extensive testing is always necessary, especially in
multi-threaded applications -- timing holes are notoriously transient and
difficult to detect.
> After all, it is not unreasonable to expect that all
> the work required just to get the program to compile so you could run it
> for the first time should pay a dividend of some sort,
I'm still baffled by what this extra work that you (and a few others like
Raffael and Pascal) keep referring to actually is. So maybe you could
illustrate what extra work you're talking about if we choose a specific
example we are both familiar with, the phonecodes problem. I posted
a Haskell solution which you can dig out of the google if you like.
How do you think the presence of a static type system impeded development
of this program?
The only extra work I can think of is that a static type system does
force you to give some thought to correctness as you write your
code. But anybody who doesn't do that as a matter of routine is
storing up a heap of trouble for themselves further down the line.
> and what else could that dividend possibly be other than having to do less
> testing?
For non-trivial programs testing will never prove correctness. The best
you can hope for is that more testing will raise the probability that
a program which passes all its tests is indeed correct. So static typing
does not reduce the amount of testing you *need* to do (which is infinite
if you like). But for a given finite test time/effort static typing will
improve test coverage (coverage of certain properties which are a
necessary but not sufficient condition for correctness is 100% and
comes for free). So it will raise the probability of correctness IMO.
Also, detection of a fault and diagnosis are not the same thing at all.
In my testing I often detect bugs hours before I diagnose the cause and
fix it. In contrast, those bugs detected by the static type system are
pinpointed immediately (well, almost: once I've grokked the error message
thrown out by the compiler).
Regards
--
Adrian Hey
> But the ad-server was multi-threaded, and it turned out that
> there was a race condition.
Nobody expects static typing to guard against race conditions.
> The point is this: the more kinds of errors are flagged for you at
> compile time, the more you can get lulled into a sense of
> complacency and start to believe that if a program compiles without
> errors then it is actually free of bugs.
The same goes for unit tests: Even if you have quite a lot of them,
you probably would have been lucky if they caught the race condition.
You guard against race conditions by careful programming, and by
planning in advance. Modelling the interaction of the threads is also
helpful, but requires some effort.
No matter what tools you use, you should know what you can expect
from them. So the "data point" you gave is irrelevant for static vs.
dynamic typing.
- Dirk
> Erann Gat <g...@jpl.nasa.gov> wrote:
>
> > But the ad-server was multi-threaded, and it turned out that
> > there was a race condition.
>
> Nobody expects static typing to guard against race conditions.
Why not?
http://citeseer.nj.nec.com/boyapati01parameterized.html
> > The point is this: the more kinds of errors are flagged for you at
> > compile time, the more you can get lulled into a sense of
> > complacency and start to believe that if a program compiles without
> > errors then it is actually free of bugs.
>
> The same goes for unit tests: Even if you have quite a lot of them,
> you probably would have been lucky if they caught the race condition.
In fact, static analysis (although perhaps not in form of type
systems) is your best (some may say: only) bet against the possibility
of race conditions and their ilk.
I would be very surprised if no one were working on precisely this.
Jesse
Race conditions and deadlocks can occur readily in either static or dynamic
languages.
If the programmer's mindset leads them to believe that concurrency issues
cannot occur just because things compile, then they are indeed incompetent, at
least with regards to concurrency.
They can be outrageously smart in other areas, but forgetting about
concurrency issues is a mistake, plain and simple.
I would imagine that after the mistake you describe, your smart programmer was
quite a bit smarter.
I would also imagine that your programmer's opinion of static typing is in no
way affected by the incident either.
--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
rAYb...@STRIPCAPStelus.net The Rhythm has my soul.
> Erann Gat wrote:
>
> > After all, it is not unreasonable to expect that all
> > the work required just to get the program to compile so you could run it
> > for the first time should pay a dividend of some sort,
>
> I'm still baffled by what this extra work that you (and a few others like
> Raffael and Pascal) keep referring to actually is. So maybe you could
> illustrate what extra work you're talking about if we choose a specific
> example we are both familiar with, the phonecodes problem. I posted
> a Haskell solution which you can dig out of the google if you like.
> How do you think the presence of a static type system impeded development
> of this program?
I can't really speak to this because I don't know Haskell. But the
episode in question involved C++. And in the case of ML the extra work
involves things like changing all your arithmetic operations if you decide
to change an int to a float.
> The only extra work I can think of is that a static type system does
> force you to give some thought to correctness as you write your
> code.
No, it only forces you to give some thought to a particular kind of
correctness, which may or may not be a kind that actually matters to you.
That's the whole point.
E.
Ringo Starr
> The only extra work I can think of is that a static type system does
> force you to give some thought to correctness as you write your
> code.
By "thought to correctness" you really mean thinking in a type system,
an additional burden I don't want to take on, especially for such little
benefit - virtually all of these type errors will be caught by other
necessary testing.
> But anybody who doesn't do that as a matter of routine is
> storing up a heap of trouble for themselves further down the line.
No, they are storing up failed tests and runtime exceptions, which lead
to corrections.
I want to run my program at any stage, knowing that it isn't fully
correct yet, and ignore those errors I choose to, and attend to those I
want to, when I want to. I don't want the compiler dictating the order
in which I must move my program toward correctness (i.e., fix the type
errors *first* or I won't let your program run). Type errors aren't all
that important to me, since I know they'll either be caught in other
necessary testing, or disappear as my code is refined and refactored.
> So maybe you could
> illustrate what extra work you're talking about if we choose a specific
> example we are both familiar with, the phonecodes problem. I posted
> a Haskell solution which you can dig out of the google if you like.
> How do you think the presence of a static type system impeded development
> of this program?
This is tantamount to asking us to retroactively get inside your head
and know everything that you were doing and thinking from the time you
first read the spec, to the time your program compiled and tested
correctly. The extra work is having to conform your thought about the
problem to Haskell's type system, and being forced to correct any type
errors before your program is allowed to run. This extra work isn't
going to be visible in the finished code any more than the initial list
based, throw away drafts of a lisp solution would be visible in a final
program that uses CLOS or hash tables.
> Erann Gat <g...@jpl.nasa.gov> wrote:
>
> > But the ad-server was multi-threaded, and it turned out that
> > there was a race condition.
>
> Nobody expects static typing to guard against race conditions.
Not consciously, no. But tacitly, yes, I think people do expect this,
even if they don't realize it.
> You guard against race conditions by careful programming, and by
> planning in advance.
Or by putting in run-time sanity checks. If I had had the foresight and
the strength of my own convictions I would have put code into the biller
that stopped the billing process immediately as soon as a request was made
to bill a ridiculously high amount, for some value of "ridiculously
high". That would have solved the problem too.
Neither static nor dynamic typing would have helped in this situation.
But I do believe that a dynamic typing *mindset* would have been more
likely to produce a correct solution because it encourages you to think
more about possible run-time errors.
E.
> In fact, static analysis (although perhaps not in form of type
> systems) is your best (some may say: only) bet against the possibility
> of race conditions and their ilk.
Ironically, I actually agree with this.
E.
> g...@jpl.nasa.gov (Erann Gat) writes:
> > > Blaming this failure on the language is really barking up the wrong tree.
> >
> > You have badly misunderstood my point. I do not blame this failure on the
> > language. I blame this failure on the mindset that because the compiler
> > catches certain kinds of errors that you can get away with less testing
> > than if it didn't. In other words, I am not saying static typing is bad,
> > just that one of the benefits that many people cite for it is not really a
> > net benefit.
>
> Race conditions and deadlocks can occur readily in either static or dynamic
> languages.
>
> If the programmer's mindset leads them to believe that concurrency issues
> cannot occur just because things compile, then they are indeed incompetent, at
> least with regards to concurrency.
The situation was that the programmer was making what appeared outwardly
to be a minor change to a piece of code that had run for months in a
production environment with no problems. No one imagined that this
problem would happen. Even after it happened, as I said, it took the
entire team the better part of a week to figure out what had gone wrong.
> I would imagine that after the mistake you describe, your smart programmer was
> quite a bit smarter.
Well, they're still using C++ so I don't know ;-)
> I would also imagine that your programmer's opinion of static typing is in no
> way affected by the incident either.
That is probably true. Certainly the manager who insisted I write the
biller in Java was unmoved.
E.
Ringo sang it. But it was a Lennon/McCartney composition.
[If I read one of your papers out loud, can I take the credit?
Please? ;-) ;-)]
--ag
--
Artie Gold -- Austin, Texas
Oh, for the good old days of regular old SPAM.
Static typing will tell you exactly one thing --- that your code is
free of type errors. It has its utility --- it means that, if your
code typechecks correctly, you don't have to pay attention to looking
for type errors, rather, you can spend your efforts (testing, code
review time) looking for other sorts of errors. People who sell static
typing as a panacea (like anybody who sells anything as a panacea) are
selling you snake oil.
> Neither static nor dynamic typing would have helped in this situation.
> But I do believe that a dynamic typing *mindset* would have been more
> likely to produce a correct solution because it encourages you to think
> more about possible run-time errors.
Well, I quite liked your story, and in my experience the
sort of error you relate is absolutely typical.
What I also find interesting about this episode is, at
the meta level, that you got the responses I expected
right away:
* the programmers were incompetent (shoot the messenger)
* That's not a typing issue, it's a concurrency issue (head in the sand).
* refusal to learn from the experience (from your managers/team).
I would almost go so far as to say that KNOWING you're going
to get errors at runtime is BETTER, because it leads to safer/more
robust designs. Your programmer _knows_ he might have missed a
call path, and will think about where/how to handle those missed
errors, even if it's just to log the bug and continue in some sane
state.
An interesting war story. Thanks.
> Dirk Thierbach <dthie...@gmx.de> writes:
>>
>> Nobody expects static typing to guard against race conditions.
>
Matthias Blume <fi...@my.address.elsewhere> writes:
> Why not?
>
> http://citeseer.nj.nec.com/boyapati01parameterized.html
>
> In fact, static analysis (although perhaps not in form of type
> systems) is your best (some may say: only) bet against the possibility
> of race conditions and their ilk.
Great. I agree with you.
--
~jrm
> Static typing will tell you exactly one thing --- that your code is
> free of [static] type errors. It has its utility --- it means that, if your
> code typechecks correctly, you don't have to pay attention to looking
> for [static] type errors, rather, you can spend your efforts (testing, code
> review time) looking for other sorts of errors.
Like the dynamic type errors.
--
~jrm
> Dirk Thierbach <dthie...@gmx.de> writes:
>
> > Erann Gat <g...@jpl.nasa.gov> wrote:
> >
> > > But the ad-server was multi-threaded, and it turned out that
> > > there was a race condition.
> >
> > Nobody expects static typing to guard against race conditions.
>
> Why not?
>
> http://citeseer.nj.nec.com/boyapati01parameterized.html
There is also a technique in C++ which uses the type system and the
volatile keyword to guard against race conditions. I haven't used the
technique myself, so I don't know about benefits and problems. But it
sounds reasonable and might be of practical value. The technique
really relies on the language being statically typed.
One article describing it is here (sorry for the long URL):
> > The same goes for unit tests: Even if you have quite a lot of them,
> > you probably would have been lucky if they caught the race condition.
>
> In fact, static analysis (although perhaps not in form of type
> systems) is your best (some may say: only) bet against the possibility
> of race conditions and their ilk.
The reason for this is that in a multithreaded environment you would
have to check an /exponential/ number of "different" programs (because
the scheduling might switch threads after each operation, in effect
creating a different control flow each time). IMHO, there is no
chance to fix a race condition in a moderately complex multithreaded
program by unit testing. (Note: I am /not/ saying a multithreaded
program shouldn't be unit-tested!)
>> Nobody expects static typing to guard against race conditions.
> Why not?
[...]
> In fact, static analysis (although perhaps not in form of type
> systems)
Because it's static analysis, and not a static type system.
> is your best (some may say: only) bet against the possibility
> of race conditions and their ilk.
Yes. I know.
- Dirk
>> Nobody expects static typing to guard against race conditions.
> Not consciously, no. But tacitly, yes, I think people do expect this,
> even if they don't realize it.
So you say you don't believe in static typing because you have met
people who are stupid? Come on, you cannot be serious. That would
be like saying that I don't believe that dynamic typing can work
because I have seen people who treat it as a religion.
> Neither static nor dynamic typing would have helped in this situation.
> But I do believe that a dynamic typing *mindset* would have been more
> likely to produce a correct solution because it encourages you to think
> more about possible run-time errors.
You seem to have some strange ideas about a "static typing mindset".
I believe you that you have met incompetent people who were using
static typing, but they are also incompetent people who are using
dynamic typing. That really proves nothing.
And in this case, a static analysis (not a static type system) will
probably buy you much more, so some "static" thinking doesn't hurt.
- Dirk
> The comment about changing a type and then having the
> compiler tell you where you need to make modifications made me cringe.
> That's an abuse of the type system, not a good way to take advantage
> of it!
Why? Types encode invariants. They form a contract about the behaviour
of some part of my code. If I break this contract in such a way that
it changes the type, the type checker *will* point out all places
I have to think about. (Of course, if I change it in a way that keeps
the type, I have to do testing and code review).
> Only proper testing and code review would have.
But static typing *does* testing by abstract interpretation. And it
does this more thoroughly than any test suite or code review ever
can, because it is automated.
> People who sell static typing as a panacea (like anybody who sells
> anything as a panacea) are selling you snake oil.
Yes. Nobody tries to sell static typing as a panacea. (Though I
sometimes got the impression that people were trying to sell dynamic
typing as one). Static typing is a great tool, nothing more. It has
its limits, but within those limits, it is really helpful.
- Dirk
> What I also find interesting about this episode is, at
> the meta level, that you got the responses I expected
> right away:
> * the programmers were incompetent (shoot the messenger)
> * That's not a typing issue, it's a concurrency issue (head in the sand).
> * refusal to learn from the experience (from your managers/team).
Excuse me, so you're saying that if A claims he had an accident
on his way to work because he saw a black cat passing from left
to right, and if B points out that the black cat has nothing to
do with it, then B is automatically wrong? Come on.
> I would almost go so far as to say that KNOWING you're going
> to get errors at runtime is BETTER, because it leads to safer/more
> robust designs.
Yes, it is. With a static type system, you not only know that you're
going to get errors at runtime, you also have a good idea about *what*
kinds of errors you're going to get, and what kinds of errors will
never happen.
- Dirk
No, you get different information. Static analysis can infer global
information that won't be available during runtime, where you just look at
one execution path at a time.
That's why you should have both.
- Andreas
--
Andreas Rossberg, ross...@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
Not in Standard ML.
"Compile time" -- what does this mean, the time when macros are
expanded?
Seriously, I cannot imagine that in this new millennium static
checking proponents are still talking about compile time. You must
have a bad assumption in your posting; perhaps you are explaining the
very real problem of how local fixes to satisfy a static system end up
damaging code globally.
So I'd ask: Could a lisp system be configured so that it runs a
static analysis upon runtime errors, or at arbitrary points on
arbitrary subsets of code? In fact, if the user is allowed to
incrementally supply type information, it can ask for specific info to
make certain guarantees. It may even be combined with runtime
analysis to recommend exactly which types and where they are most
sorely needed.
Non sequitur.
Static typing didn't catch the error - no surprise here, nobody in his
right mind would claim that. (That manager who said that static typing
would tell you all the places where to change the code obviously
over-generalized - only if a change affects the signature of a function
does static typing help spotting the other affected places; the race
condition obviously wasn't of that error class.)
Being lulled into a false sense of security can happen with /any/
testing method, be it unit tests, static typing, black-box testing, or
whatever. One should apply all available test methods and not rely on
just one.
Given the reasoning of the previous paragraph, I draw the exactly
reverse conclusion: Removing a test method increases the likelihood that
a given bug will slip through.
So here's my litmus test for language testability:
Does it enable as many test paradigms as possible?
Lisp doesn't qualify: it makes static testing (type checking, automatic
proving) difficult, as it gives very few easily verifiable guarantees
about what a given piece of code does.
C++ doesn't qualify: no introspection and a syntax that requires several
months from a highly qualified expert to write a useful parser, so tools
that inspect the code have a hard time. On top of that, a highly
complicated semantics that makes analyzing the code even more difficult.
Java sort-of qualifies: its introspection facilities give tools a
foothold that they can work with. However, its type system is too weak:
as soon as containers come into play, it fails to give the guarantees
that the programmer knows could be given (every single type cast is such
a failed guarantee).
Note that even when Java acquires parametric classes (scheduled for this
year IIRC), its type system will remain too weak: parallel type
hierarchies (e.g. Driver and Vehicle, the problem is with the drive()
routine in the, say, Ship/Captain subclasses) will continue to require
type casts. This is a typical problem of most if not all statically
typed OO languages.
Typical functional languages with static typing sort-of qualify from the
other side: they usually have an extremely regular syntax and semantics,
so writing tools (including testing tools) should be comparatively easy,
so they /should/ qualify; however, these tools often aren't present (or
they don't get the high visibility that they deserve, I don't know), so
for me, they fall a bit short of their potential.
Regards,
Jo
That's one of the least common types of change that I have met in my life
(with the exception of school examples where this can indeed happen).
The type of changes that I usually see are a change in the fields of a
data structure, or replacing a data structure with a slightly different
one. Static typing will not require me to rewrite all the code where the
semantics has remained unchanged. (Static typing without inference will
require me to needlessly rewrite the routine signatures such as
"f (foo: BAR)" if the type of foo changes to BAZ, and f just passes the
value through to other functions without actually inspecting it. This
is where I see the value of type inference - I've spent too many hours
twiddling function signatures.)
Regards,
Jo
> No, you get different information. Static analysis can infer global
> information that won't be available during runtime, where you just
> look at one execution path at a time.
>
> That's why you should have both.
Definitely. Getting static type *information* is great. But getting an
impenetrable static type *barrier*, that won't let you run a program
just because it might cause a type error at run-time, is not so great.
It makes it more difficult to explore many paths in the search space from
ideas to useful programs.
Arthur Lemmens
> I can't really speak to this because I don't know Haskell. But the
> episode in question involved C++. And in the case of ML the extra work
> involves things like changing all your arithmetic operations if you
> decide to change an int to a float.
This lack of explicitness is a weakness in Common Lisp and a potential
source of inaccuracy. It arises whenever integer to float conversion must
be more precise than single floating point arithmetic and it can be really
hard to spot the inaccuracy within a series of computations.
Specifically the problem is with automatic single- to higher-float
contagion which the rules make difficult to spot.
(values #1=(loop for i in '(2 3 5 7 11) sum (* pi (sqrt i))) (type-of #1#))
=> 35.640452700666685
double-float
Here I've been lulled into the false belief that calculations have been
performed to double-floating point accuracy. But SQRT returned single
floats. It is PI that led to the double-float contagion.
Here's the correct level of accuracy provided by a double-float:
(values #1=(loop for i in '(2d0 3d0 5d0 7d0 11d0) sum (* pi (sqrt i))) (type-of #1#))
=> 35.64045272006214
double-float
Here's the more realistic level of accuracy that was computed in the
original example:
(values #1=(loop for i in '(2 3 5 7 11) sum (* (coerce pi 'single-float)
(sqrt i))) (type-of #1#))
=> 35.640453f0
single-float
Hidden contagion would not be a problem if floats of different formats
lead to contagion to the float of a lower level of accuracy. If a
single-float appeared in a series of double-float calculations the outcome
would be a single-float and the fix would be to locate the source of the
contagion. As it stands a programmer may be none the wiser when half the
precision is thrown away at some point--or potentially worse if the
implementation supports even higher levels of floating point precision:
(loop for i in '(2 3 5 7 11) sum (* pi (sqrt i)))
=> 35.64045270066668789L0 in CLISP in ANSI Common Lisp mode. CLISP warns:
WARNING:
Floating point operation combines numbers of different precision. See ANSI
CL 12.1.4.4 and the CLISP impnotes for details. The result's actual
precision is controlled by *floating-point-contagion-ansi*.
To shut off this warning, set *warn-on-floating-point-contagion* to nil.
Regards,
Adam