I would like to solicit thoughts on how to develop Lisp
competence more systematically, on the job as opposed to academia,
and in as short a time as possible (subject of course to the
definition of competence). "X months" instead of 21 days or ten years.
Specifically, how to design an in-house training program instead of
just "OK, let's go off and individually learn Lisp on
company-allowed time". For example, what particular content to
select out of the vast amount of material available.
Somehow, "Let's read SICP for X months" doesn't seem appropriate
in a corporate IT setting. The goal would be to show tangible
results in X months---production software, not just
"Oh, now we really understand the essence of programming."
Would Cooper's book followed by Graham's ANSI Common Lisp be the
best sequence? How to structure and pace the "classes"?
How to avoid boredom or learning doldrums on one hand, and
being overwhelmed on the other?
Or am I naive in thinking that Lisp competence can be
willed, manufactured, in a corporate IT environment?
If there is precedent for a practical on-the-job Lisp training
program, I would be interested to know more details about the
program design.
@-@-@-@
If a company decides to "go Lisp", i.e., to use Lisp as its main
language for "enterprise applications", particularly on the
server side, is it foregoing all the non-Lisp-based technology
out there now and yet to come...stuff like EJB, J2EE (whatever
these do), "web services" software, "application servers",
XML processing, all this middleware stuff, .NET (whatever that is)
or its open source equivalent (if any), and so on?
Perhaps these things are mostly smoke now, or in part
recreations of Lisp functionality, but they could catch fire
tomorrow...and where there is smoke, must there not be something hot?
We know that Lisp is a good language, but it's just a language.
There is a gnawing, inchoate sense that one might be cutting
oneself off from (or at least be making it more difficult to use)
other pieces of a system that one might want or need
in the future. (Yes, in theory, there is always the foreign
function interface, but as we all know, theory is just that.)
Comments?
Jala Bira
>>>>> "JB" == Jala Bira <jala...@yahoo.com> writes:
[...]
JB> Specifically, how to design an in-house training program
JB> instead of just "OK, let's go off and individually learn Lisp
JB> on company-allowed time".
You will need to tell us more about the backgrounds of these people. Are
they reasonably competent in any computer language?
JB> For example, what particular
JB> content to select out of the vast amount of material
JB> available. Somehow, "Let's read SICP for X months" doesn't
JB> seem appropriate in a corporate IT setting.
If read properly, SICP tells a wonderful story (I think I picked this
up from EN). There are other stories, of course. Have these people been
exposed to any story at all? That is, there are some academic and/or
professional training programs that teach people "if you want to do A,
write the code segment XYZ"; while this is still learning, it is a
different kind of learning than understanding the underlying concepts.
Some people can tell the difference between the two, primarily because
they actually have a deep understanding of "something" that gives them
a solid basis to compare their understanding of other things they are
taught.
JB> The goal would be
JB> to show tangible results in X months---production software,
JB> not just "Oh, now we really understand the essence of
JB> programming."
It might even be tangible results in X months with demonstrated good
maintainability N*X months later. Of course you need N*X months to show
the latter!
JB> Would Cooper's book followed by Graham's ANSI Common Lisp be
JB> the best sequence? How to structure and pace the "classes"?
JB> How to avoid boredom or learning doldrums on one hand, and
JB> being overwhelmed on the other?
I'd start by collecting background information from people. Graham's book
is pretty basic and I have glanced through Cooper's document. I think
you can do those simultaneously. Your biggest hurdle would be to get people
comfortable with the programming environment if they are only used to pointy
clicky stuff with "wizards" and such.
Since you worry about keeping people motivated, you want to be able to
have them roll out something that impresses them right away. Maybe
this could be something that resembles a solution they collectively had
a problem delivering recently? Now this is mostly cheating, as the Lisp
solution would be the second solution to an already understood problem,
but it would be impressive if you showed how it could be extended with
ease, and without growing warts, to include fresh functionality (and
compare that to the old solution which, if you are lucky, will require
a comparable amount of pain to graft onto the original substrate).
JB> Or am I naive in thinking that Lisp competence can be willed,
JB> manufactured, in a corporate IT environment. [...]
It depends on what kind of people you have available and what kind of
problems they are solving. Do these people have any formal academic
training? That by itself will not tell you much about their existing
skills, but the choices they made might tell you something about what
kinds of things they find easy to learn (i.e., it might be the case that
people who are scared of math but like playing with computers usually
select IT majors as opposed to CS). How many years of experience do they
have, and doing what kind of programming? Will they get scared that their
skills in whatever they use now are going to become unnecessary? Do they
have a sense of elegance that they care about, or do they just like
getting things done by whatever means and going home? If the former,
does lispy stuff overlap with their taste? If the latter, can you make
the case that Lisp will indeed make their lives easier?
The second question is why do you want to do this? That is, why are _you_
convinced that training the existing staff in Lisp is a good way to go?
cheers,
BM
Jala> Peter Norvig's essay #5 in http://www.norvig.com/ discusses
Jala> two extremes in the length of the Lisp learning curve---"21
Jala> Days" and "Ten Years".
I think the article does not specifically address the Lisp learning
curve but the "programming learning curve" in general.
Immanuel
> I would like to solicit thoughts on how to develop Lisp
> competence more systematically, on the job as opposed to academia,
> and in as short a time as possible (subject of course to the
> definition of competence). "X months" instead of 21 days or ten years.
This paper might be useful (I haven't checked whether it's available
online, it may well be; if not, the publication should still be available
from Franz, Inc.):
"Lessons in Lisp"
Gail Anderson, John Levine, Jeff Dalton
Proceedings of ELUGM '99
Abstract: In this paper we aim to document and to pass on some of the
lessons we learned when asked to take over teaching a Masters level
degree module on Programming in Lisp. We also hope to stimulate
discussion about teaching Lisp, programming, and software engineering in
general.
> We know that Lisp is a good language, but it's just a language.
I personally think that it's more than a language.
Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
[http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/]
Bulent Murtezaoglu wrote:
Thanks for the comments. All good points.
> It depends on what kind of people you have available and what kind of
> problems they are solving. Do these people have any formal academic
> training? That by itself will not tell you much about their existing
> skills, but the choices they made might tell you something about what
> kinds of things they find easy to learn (i.e., it might be the case that
> people who are scared of math but like playing with computers usually
> select IT majors as opposed to CS). How many years of experience do they
> have, and doing what kind of programming? Will they get scared that their
> skills in whatever they use now are going to become unnecessary? Do they
> have a sense of elegance that they care about, or do they just like
> getting things done by whatever means and going home? If the former,
> does lispy stuff overlap with their taste? If the latter, can you make
> the case that Lisp will indeed make their lives easier?
The problems to be solved are the usual company database-ish stuff...
customer accounts, money, materials, processes...nothing AI-ish
at the moment (that could be for later, have to get to square
one first).
In general, the raw material for training is not the greatest.
Experience varies but, if any, would tend to be in C-like languages, and
MS products like Visual Basic, FoxPro, and Access. If any formal
academic training, it might be in what passes for BSCS here,
or what the IT training schools crank out.
On the plus side, they recognize that VB is not a good language and
they are (and we would not hire anyone who would not be) eager
to learn new things and venture beyond MS products. Also, for now,
it is just a small team of programmers, not an army as the
term "corporate IT" might have suggested. I used that term only to
mean not-academia, and also because I was trying to ask a general
question on Lisp training methodology that *could* apply to large
IT departments.
After all, if large IT departments can train people to at least some
basic level of competence in VB, C++, Java, SQL, etc., they should
also be able to do the same for Lisp; it is just another
language with its own set of concepts and quirks, right?
> The second question is why do you want to do this? That is, why are _you_
> convinced that training the existing staff in Lisp is a good way to go?
I personally prefer Lisp, but being the only one here that knows
it, the only way I get to use my preferred language is if I can
bring everyone else along. Management is open to this approach,
or doesn't care, as long as it works, and works out.
As for being "convinced", I would not say that. It is with
some trepidation that I entertain this approach---an experiment
that *must* succeed. But what is the alternative? It seems too painful
or funless to think about.
As I alluded to previously, I think my biggest concern is not so
much about the language itself, but the vague sense that we might be
burning bridges to all of that "middleware" stuff out there
and yet to come. That we might in future find ourselves marooned
on our little Lisp island.
Jala Bira
Jala Bira wrote:
> The problems to be solved are the usual company database-ish stuff...
> customer accounts, money, materials, processes...nothing AI-ish
> at the moment (that could be for later, have to get to square
> one first).
Not sure how much emphasis you meant on the AI thing, but the "run" in
"walk before you run" is not AI; it will be creating a company-specific
language within Lisp, a language which embodies company-specific
business and IT rules and thus enhances programmer productivity, as well
as programmer conformance with said rules.
> After all, if large IT departments can train people to at least some
> basic level of competence in VB, C++, Java, SQL, etc., they should
> also be able to do the same for Lisp; it is just another
> language with its own set of concepts and quirks, right?
Right. So why are you bothering us? <g>
Seriously, I was wondering before I read this why Lisp training needs
special treatment, and you are saying the same thing. Pick a good CL
intro and go. Then make sure you have one solid Lisp mentor (good Lisper,
good teacher) who reviews newbies' code to look for ways it can be
improved. That will shorten the learning curve nicely.
And just so the students really piss off other programmers back in the
lunchroom: at the end of every morning session introduce something cool,
like landing in a backtrace, fixing the code, and then continuing from a
"retry" restart to successful completion.
Come to think of it, on the first day show how the parentheses
auto-indent text and make editing easier by letting one select, copy,
delete, and move logical chunks of text (sexprs). Arm the students with
this comeback to parentheses jokes: "Do the spaces between words bother
you when you read?"
> Management is open to this approach,
> or doesn't care, as long as it works, and works out.
Do you get to pick who takes the Lisp training? If so, look for folks
who explore other languages on their own. If the others choose whether
to take the class, hell, they will self-select, you should be OK. If you
have a given group, uh-oh, you may run into grumpy preconceptions. Tell
them they will be learning a new language: Ciel. :)
> but the vague sense that we might be
> burning bridges to all of that "middleware" stuff out there
> and yet to come. That we might in future find ourselves marooned
> on our little Lisp island.
I have been lucky enough to miss all that so far, but IIUC you can get
to the mainland by building relatively simple bridges. See "UFFI".
--
kenny tilton
clinisys, inc
---------------------------------------------------------------
"Harvey has overcome not only time and space but any objections."
Elwood P. Dowd
What is being used at the present time? Just VB and ad-hoc MS office
(VB for apps, is it?) cruft? The problem with solving those kinds of
common problems in CL is that the competition usually offers a
pre-packaged solution with a glorified screen painter that is 90%
there and the app programmer just walks the remaining 10% with the
pointy-clicky tool. I have clients who love things like powerbuilder
and such and use their quality people to design the data model and
leave the rest to guys who know how to generate screens. Anyway, the
most important stuff usually is on the system administration end and
usually entails making sure the permissions are set right, that rogue users
(or rogue programs inane users get duped into running, be they viruses or
other stuff) cannot do much damage, and that backups are done
properly. No fun, and usually turns out to be surprisingly expensive
on the MS platform. 9 times out of 10, in small but rapidly growing
companies you will find that nobody bothered to attempt a restore of
business critical data on new hardware for example, but you guys might
be too big and already over those hurdles.
This is pure speculation on my part, but maybe one way to go is to find
a small but important problem first. It might be some data analysis
task that some engineer somewhere cannot do efficiently on ms-office.
Or some obscure calibration tracking program that everybody hates to use
or some such thing. These are relatively easy to spot if there's
manufacturing involved in the business.
I'd also look into CRM-ish stuff if a pre-packaged solution is not being
used. Aggressive and smart sales people can usually communicate what would
make their life easier in terms that a rough programming spec can be
derived from. They also want all kinds of integration to their palms and
outlooks and whatnot, but interesting stuff can be done there that cannot
usually be handled easily by the common tools.
Reporting from existing data in the database is actually a big deal for some
IT departments and is very valuable (at least as in "management likes it").
There are prepackaged solutions like Crystal Reports and such, but
if there is a sizeable amount of data, some simple-for-a-Lisper rule-based
filter could be devised that generates interesting reports. With
the advent of http/html, prettifying things and making them accessible is
a mostly solved problem, so you can just concentrate on the guts. The
consumers of such reports usually play with MS Excel, which luckily can
import stuff in CSV form -- so no reverse engineering is required there if
people love the stuff and want to play with it on their own.
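A rough sketch of that kind of rule-based report filter, in Python for
brevity (in Lisp the rules would just be predicates, or a small embedded
language); the rules, field names, and data here are all invented for
illustration:

```python
import csv
import io

# Hypothetical rules: each is a predicate over a report row.
# The field names and thresholds are made up for illustration.
rules = [
    lambda row: row["region"] == "north",
    lambda row: float(row["total"]) > 1000,
]

rows = [
    {"region": "north", "total": "1500"},
    {"region": "south", "total": "2500"},
    {"region": "north", "total": "800"},
]

# Keep only rows satisfying every rule, then emit CSV,
# which Excel imports directly.
report = [r for r in rows if all(rule(r) for rule in rules)]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["region", "total"])
writer.writeheader()
writer.writerows(report)
print(out.getvalue())
```

The point is only the shape: a list of predicates the report consumers can
grow over time, with CSV as the lowest-common-denominator delivery format.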
If an interesting enough problem is identified, assuming you have time,
yourself and your best guy could approach it in Lisp. One successful
project with some internal folks raving about it might provide motivation
for others to start badgering you about doing something like that with
them. Anyway, just my thoughts.
JB> In general, the raw material for training is not the greatest.
JB> Experience varies but, if any, would tend to be in C-like
JB> languages, and MS products like Visual Basic, FoxPro, and
JB> Access. If any formal academic training, it might be in what
JB> passes for BSCS here, or what the IT training schools crank
JB> out.
BSCS should be reasonably safe in that you will not have a hell of a time
communicating at least. IT varies all over the place. Do not be
surprised if people with 4-year IT degrees give you blank stares when you
mention big-O, or collisions in hash tables, or some such basic first-year
algorithms thing. (I am assuming you are in the US).
[...]
JB> After all, if large IT departments can train people to at
JB> least some basic level of competence in VB, C++, Java, SQL,
JB> etc., they should also be able to do the same for Lisp, it is
JB> just another language with its own set of concepts and quirks,
JB> right.
Ah, they send them to classes usually. I have seen notes from some of
those classes and I am not sure what people learn there beyond the
ability to work some development environment to produce something that
runs. Now this is usually all they need to know given that a cookbook
approach works OK when you are solving a variant of the same problem
over and over again. My main point is you might be underestimating the
amount of time and energy being sunk into both the tools and the
teaching materials that ensure people learn "enough" for their target
tasks without being unduly stressed intellectually. No such training
industry exists for lisp!
[...]
JB> As I alluded to previously, I think my biggest concern is not
JB> so much about the language itself, but the vague sense that we
JB> might be burning bridges to all of that "middleware" stuff out
JB> there and yet to come. That we might in future find ourselves
JB> marooned on our little Lisp island.
I would defer to the others on this, but I'll write down what I think;
take it with a grain of salt. If there isn't some VP of IT or
somesuch over you who likes the latest and the greatest middleware (as
defined by trade rags or authoritative sounding reports) you should be
pretty safe IMHO. You will be able to connect to your database, and you
will be able to display. COM, should you need that, shouldn't be a
problem. CORBA, if it is applicable, is supported by the major vendors
AFAIK. If some hyped-up middleware turns out to be a great idea, there is
a fairly good chance that at least one major CL vendor will support it.
If I were in the position that I am fancifully imagining you are,
I wouldn't even worry about this initially (re: the small program with a
small group idea).
cheers,
BM
> The problems to be solved are the usual company database-ish stuff...
> customer accounts, money, materials, processes...nothing AI-ish
> at the moment (that could be for later, have to get to square
Lisp's numeric types may be useful for this.
And if you do anything with interest or similar percentages, Lisp's
numeric types will almost certainly save you a huge amount of labor.
Even if you use something other than Lisp, don't get tricked into
trusting doubles for money. It will work fine for a while until the
one-cent errors start showing up.
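The drift is easy to demonstrate; here is a minimal Python sketch,
with fractions.Fraction standing in for Lisp rationals:

```python
from fractions import Fraction

# Accumulate ten cents a thousand times: the double drifts,
# the exact rational does not.
d = 0.0
r = Fraction(0)
for _ in range(1000):
    d += 0.1
    r += Fraction(1, 10)

print(d == 100.0)  # False: the double has already drifted
print(r == 100)    # True: the rational total is exact
```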
Gregm
> Even if you use something other than Lisp, don't get tricked into
> trusting doubles for money. It will work fine for a while until the
> one-cent errors start showing up.
Doubles are *perfect* for money. Just make sure you wrap them in a
class or something that hides the fact you're representing cents, not
dollars.
-- Bruce
> Doubles are *perfect* for money. Just make sure you wrap them in a
> class or something that hides the fact you're representing cents, not
> dollars.
Then you should use integers, not doubles. But why bother, when you
already have rationals, so that you don't need to decide on an atomic
unit of money?
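In Python terms (fractions.Fraction playing the role of Lisp rationals;
the amounts are invented), "no atomic unit" looks like this:

```python
from fractions import Fraction

# Split $100 three ways without committing to cents up front.
share = Fraction(100) / 3      # exactly 100/3 dollars
assert share * 3 == 100        # nothing was lost to rounding

# Round only at the edge, when printing the statement:
cents = round(share * 100)     # 10000/3 rounds to 3333
print(cents)                   # 3333, i.e. $33.33
```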
--
-> -/ - Rahul Jain - \- <-
-> -\ http://linux.rice.edu/~rahul -=- mailto:rj...@techie.com /- <-
-> -/ "Structure is nothing if it is all you got. Skeletons spook \- <-
-> -\ people if [they] try to walk around on their own. I really /- <-
-> -/ wonder why XML does not." -- Erik Naggum, comp.lang.lisp \- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
(c)1996-2002, All rights reserved. Disclaimer available upon request.
> Bruce Hoult <br...@hoult.org> writes:
>
> > Doubles are *perfect* for money. Just make sure you wrap them in a
> > class or something that hides the fact you're representing cents, not
> > dollars.
>
> Then you shuold use integers, not doubles. But why bother when you
> already have rationals, so that you don't need decide on an atomic
> unit of money?
You must have missed the part "if you use something other than Lisp".
-- Bruce
>
> > Even if you use something other than Lisp, don't get tricked into
> > trusting doubles for money. It will work fine for a while until the
> > one-cent errors start showing up.
>
> Doubles are *perfect* for money. Just make sure you wrap them in a
> class or something that hides the fact you're representing cents, not
> dollars.
It's fine until you do arithmetic with money that involves lots of
decimal places. If all you're doing is adding # units * cost with
various subtotals, it will work out fine and normal rounding
techniques will manage the errors. But either way, at some point the
values will get large enough and you'll start with the penny errors
here and there in involved calculations. Even then lots of stuff will
work fine, but some routines will become downright pathological where
rounding/truncation tweaks to fix one problem will cause others. I'm
not saying you can't make doubles work reliably for money, but it can
involve a lot of pain.
Though I've not used Lisp for an accounting app thus far, I think
rationals should work beautifully for this kind of stuff. I imagine
there will be problems, but no more 5 - 3 = 1.9999999 errors that get
propagated thru the system. This kind of problem can really cost you
in labor and "kludgyness", if for no other reason than that you pile on
code to work around the problems and end up with something that's
nearly impossible to fix, much less understand.
Gregm
> Bruce Hoult <br...@hoult.org> writes:
>
>
> >
> > > Even if you use something other than Lisp, don't get tricked into
> > > trusting doubles for money. It will work fine for a while until the
> > > one-cent errors start showing up.
> >
> > Doubles are *perfect* for money. Just make sure you wrap them in a
> > class or something that hides the fact you're representing cents, not
> > dollars.
>
> It's fine until you do arithmetic with money that involves lots of
> decimal places. If all you're doing is adding # units * cost with
> various subtotals, it will work out fine and normal rounding
> techniques will manage the errors.
There are no errors or rounding techniques needed. Integer arithmetic
using IEEE FP is exact.
Financial arithmetic in the real world is not defined using rationals.
It is defined using decimal numbers with a fixed number of decimal
places. Anything involving a division is by definition rounded to fit
that number of decimal places at source. Whatever the number of decimal
places is (usually it's two), just scale all numbers by ten to the power
of that number.
That's what COBOL does, after all.
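A quick Python sketch of that scaled-integer scheme (two decimal places
assumed; the prices and tax rate are invented):

```python
# Fixed-point money: store every amount as an integer count of
# cents, and round any division at source, COBOL-style.
SCALE = 100

def to_cents(s):
    """Parse a 'dollars.cents' string into an integer number of cents."""
    dollars, cents = s.split(".")
    return int(dollars) * SCALE + int(cents)

price = to_cents("19.99")          # 1999 cents
subtotal = price * 3               # exact integer arithmetic: 5997
tax = (subtotal * 8 + 50) // 100   # 8% tax, rounded half-up at source
print(subtotal)  # 5997
print(tax)       # 480
```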
-- Bruce
I don't understand this statement. An integer such as
4503599627370496, which can be represented exactly in floating point,
when added to 1 (also exactly representable) -- an integer arithmetic
operation -- produces 4503599627370496 again.
This seems to me to indicate that the result is either
rounded or in error.
Very funny. It's a fixed size representation, and therefore has a
MAXINT.
Which, incidentally, is not where you think it is. 64 bit IEEE doubles
are exact up to 2^53, not just 2^52. Your example works fine on every
computer here (chips from Motorola, AMD, and Intel).
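The 2^53 boundary is easy to verify directly (a Python sketch; any
language with IEEE doubles behaves the same):

```python
# Every integer up to 2^53 is exactly representable in a double;
# the first casualty is 2^53 + 1, which rounds back down to 2^53.
limit = 2.0 ** 53                  # 9007199254740992.0
assert (limit - 1) + 1 == limit    # still exact below the boundary
assert limit + 1 == limit          # one past it is silently lost
print(int(limit) - 1)              # 9007199254740991, the last integer
                                   # whose successor is still distinct
```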
-- Bruce
What nonsense! You have only so many bits of precision. When a result
(eventually) needs more than that, instead of losing the most significant
bits, as in C-style "integer" arithmetic, you lose the least significant
bits, which generally goes undetected.
| Financial arithmetic in the real world is not defined using rationals.
| It is defined using decimal numbers with a fixed number of decimal
| places.
Which is a rational.
///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.
Post with compassion: http://home.chello.no/~xyzzy/kitten.jpg
What nonsense! A MAXINT is the largest possible representation of an
integer, not the largest possible representation of an odd integer.
> * Bruce Hoult <br...@hoult.org>
> | There are no errors or rounding techniques needed. Integer arithmetic
> | using IEEE FP is exact.
>
> What nonsense! You have only so many bits of precision. When a result
> (eventually) needs more than that, instead of losing the most significant
> bits, as in C-style "integer" arithmetic, you lose the least significant
> bits, which generally goes undetected.
Hi Erik, how are you?
Any fixed size representation is going to have a maximum precision. In
C on a typical machine, that precision happens to be far greater using
doubles than longs.
Bignums or rationals are safer, but you need to work with what you've
got. Which in C is int, long, float and double.
> | Financial arithmetic in the real world is not defined using rationals.
> | It is defined using decimal numbers with a fixed number of decimal
> | places.
>
> Which is a rational.
A rational with a fixed divisor, not an arbitrary one. There is no need
for the fixed divisor to be stored explicitly.
-- Bruce
> * Bruce Hoult <br...@hoult.org>
> | It's a fixed size representation, and therefore has a MAXINT.
>
> What nonsense! A MAXINT is the largest possible representation of an
> integer, not the largest possible representation of an odd integer.
The person specifying a representation can pick the definitions they
choose. In this case defining MAXINT to be the smallest (in magnitude)
number N such that N+1 == N is the most useful definition. For 64 bit
IEEE doubles that is 2^53.
Otherwise MAXINT would be identical to MAXDOUBLE, which is an integer,
but a pretty useless one for accounting purposes.
-- Bruce
> We know that Lisp is a good language, but it's just a language.
A human is just a mammal. Civilization and modern technology are just
artifacts of a particular type of mammal. Common Lisp has not yet had
a real chance to show the world what it can do, because the hardware
it works best on is only gradually becoming available. Judging Common
Lisp on the hardware of the past is like judging the human race on the
accomplishments of prehistoric people who lived in caves. Those
accomplishments were noteworthy, but Common Lisp needs to accomplish
something far better than noteworthy to be commonly accepted as
something far better than just a language.
> There is a gnawing, inchoate sense that one might be cutting
> oneself off from (or at least be making it more difficult to use)
> other pieces of a system that one might want or need
> in the future. (Yes, in theory, there is always the foreign
> function interface, but as we all know, theory is just that.)
The FFI is a lot more than just theory. Common Lisp has overwhelming
power to encapsulate things neatly. You can use foreign stuff more
easily in Common Lisp than in its native language, especially in big
projects. To encapsulate means to package, in neat capsules, which
cause less confusion and errors because their interfaces are clearer.
Common Lisp lets you do the best encapsulation with the least effort.
That's true, but it's quite easy to exhaust the precision of doubles and
longs -- if the amounts of money are in the billions it becomes trivial
to do so. Doubles (and floats) can easily manage the magnitudes, but
not the precision when the # of digits exceeds 18 or so. It's
particularly noticeable in some interest calculations because
intermediate results can be very large.
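A Python sketch of that magnitude problem (the balance figure is
invented):

```python
# Past 2^53 (about 9.0e15), a double cannot resolve single units:
# adding one to a quintillion-scale balance is silently dropped.
balance = 1.0e18
assert balance + 1.0 == balance    # the extra unit never arrives

# A Lisp-style exact integer (Python ints are bignums) has no such limit:
exact = 10 ** 18
assert exact + 1 != exact          # every unit is accounted for
```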
>
> > | Financial arithmetic in the real world is not defined using rationals.
> > | It is defined using decimal numbers with a fixed number of decimal
> > | places.
> >
The reason I think rationals will work better is that they can
easily (and exactly) represent very large numbers, and you don't get
problems representing particular fractions. Bignums with
appropriate scaling would probably work fine too, but the nice thing
about rationals is you don't have to bother with the scaling.
I agree that financial arithmetic does specify a particular number of
decimal places, but datatypes must be chosen carefully because
financial math sometimes requires precision down to the least
significant digit at very large magnitudes.
Gregm
And yes, I blew it on the number. I was *sure* I didn't,
but I just tried it and it worked fine like you said.
The limiting integer is 9007199254740991.
Unfortunately, financial arithmetic doesn't usually deal
with integer amounts of the preferred currency, and you
can exceed the limits of a double precision float
(Turkey's foreign debt in Turkish lira).
On the other hand, you may not want to be mathematically
precise, but follow the conventions of the banks
which do weird rounding all the time.
"Bruce Hoult" <br...@hoult.org> wrote in message
news:bruce-8AFC56....@copper.ipg.tsnz.net...
> Bignums or rationals are safer, but you need to work with what you've
> got. Which in C is int, long, float and double.
C has bignums if you use MPZ. (And C++ has all Lisp numeric types plus
bigfloats if you use CLN.)
E.
Hmm, my copy of ISO/IEC 9899:1999(E) doesn't mention MPZ. Are you
sure about that?
Regards,
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."
PGP key ID 0x42B32FC9
> Any fixed size representation is going to have a maximum precision. In
> C on a typical machine, that precision happens to be far greater using
> doubles than longs.
So use long longs.
GNU C doubles, on a 386, have fifty-three bits of precision; long
longs have 64.
> Bignums or rationals are safer, but you need to work with what you've
> got. Which in C is int, long, float and double.
What an impoverished attitude.
Thomas
No one's claiming that it's not a Turing-equivalent language. But C
decidedly does *not* have *language* support for bignums.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
> g...@jpl.nasa.gov (Erann Gat) writes:
>
> > In article <bruce-CEA92E....@copper.ipg.tsnz.net>, Bruce Hoult
> > <br...@hoult.org> wrote:
> >
> > > Bignums or rationals are safer, but you need to work with what you've
> > > got. Which in C is int, long, float and double.
> >
> > C has bignums if you use MPZ. (And C++ has all Lisp numeric types plus
> > bigfloats if you use CLN.)
>
> No one's claiming that it's not a Turing-equivalent language. But C
> decidedly does *not* have *language* support for bignums.
I didn't say it did. C's bignum support is part of the library, not part
of the language. Bruce's claim was "you need to work with what you've
got." "What you've got" includes the library.
E.
> In article <gat-090402...@192.168.1.50>, Erann Gat wrote:
> > In article <bruce-CEA92E....@copper.ipg.tsnz.net>, Bruce Hoult
> ><br...@hoult.org> wrote:
> >
> >> Bignums or rationals are safer, but you need to work with what you've
> >> got. Which in C is int, long, float and double.
> >
> > C has bignums if you use MPZ. (And C++ has all Lisp numeric types plus
> > bigfloats if you use CLN.)
>
> Hmm, my copy of ISO/IEC 9899:1999(E) doesn't mention MPZ. Are you
> sure about that?
By golly, you're right. Hmmm, my copy of the Lisp hyperspec doesn't
mention CLIM, AllegroServe, or UncommonSQL. I guess that means that if I
want to write a GUI, or a Web server, or connect to a database I'd better
find some language other than Lisp.
Thanks for straightening me out there.
E.
Well, if the definition of the C language doesn't mention bignums,
I think it isn't quite right to say C ``has'' bignums. If you add
a library, you won't be able to do ``z = x + y;'', either. Sure,
there are libraries for everything, and if there isn't one, you can
write one at any time. But does that justify saying ``C has hashtables,
red-black trees, binomial trees, priority queues, exceptions, a garbage
collector, lots of GUI systems, ASN.1 modules, support for building
parsers, 3D graphics, sound systems, web servers, ftp clients, ftp
servers, etc etc etc''? Maybe there is more about C than I thought,
hmmmmm...
> Thanks for straightening me out there.
You're welcome :-)
> In article <gat-090402...@eglaptop.jpl.nasa.gov>, Erann Gat wrote:
> > In article <a8v732$vnu0i$1...@ID-125440.news.dfncis.de>, Nils Goesche
> ><car...@cartan.de> wrote:
> >
> >> In article <gat-090402...@192.168.1.50>, Erann Gat wrote:
> >> > In article <bruce-CEA92E....@copper.ipg.tsnz.net>, Bruce Hoult
> >> ><br...@hoult.org> wrote:
> >> >
> >> >> Bignums or rationals are safer, but you need to work with what you've
> >> >> got. Which in C is int, long, float and double.
> >> >
> >> > C has bignums if you use MPZ. (And C++ has all Lisp numeric types plus
> >> > bigfloats if you use CLN.)
> >>
> >> Hmm, my copy of ISO/IEC 9899:1999(E) doesn't mention MPZ. Are you
> >> sure about that?
> >
> > By golly, you're right. Hmmm, my copy of the Lisp hyperspec doesn't
> > mention CLIM, AllegroServe, or UncommonSQL. I guess that means that if I
> > want to write a GUI, or a Web server, or connect to a database I'd better
> > find some language other than Lisp.
>
> Well, if the definition of the C language doesn't mention bignums,
> I think it isn't quite right to say C ``has'' bignums.
Good grief! Are we going to start arguing over what the meaning of the
word "is" is? Whether or not C "has" bignums is not the point. The point
is whether when you program in C you must restrict yourself to int, long,
float and double.
> If you add a library, you won't be able to do ``z = x + y;'', either.
Actually, in C++ using CLN you can do exactly that. In C you have to
write "z=add(x,y)" or something like that. If you consider that to be
anything worse than a minor annoyance then whether or not C "has" bignums
is the least of your worries.
> Sure,
> there are libraries for everything, and if there isn't one, you can
> write one at any time.
The whole point is that for bignums in C (and Lisp) you don't have to
write one. It's already done.
> But does that justify saying ``C has hashtables,
> red-black trees, binomial trees, priority queues, exceptions, a garbage
> collector, lots of GUI systems, ASN.1 modules, support for building
> parsers, 3D graphics, sound systems, web servers, ftp clients, ftp
> servers, etc etc etc''?
In the context of a discussion that was started with the phrase "you need
to work with what you've got", yes, it does. If Bruce had said, "You need
to work with (and only with) what is in the C standard," then the
conversation would have gone differently.
> Maybe there is more about C than I thought, hmmmmm...
I sense sarcasm here, but this is a possibility that the Lisp community
would do well to take seriously.
E.
> In article <a8vjgr$v8v9l$1...@ID-125440.news.dfncis.de>, Nils Goesche
> <car...@cartan.de> wrote:
>
> > Well, if the definition of the C language doesn't mention bignums,
> > I think it isn't quite right to say C ``has'' bignums.
>
> Good grief! Are we going to start arguing over what the meaning of the
> word "is" is? Whether or not C "has" bignums is not the point. The point
> is whether when you program in C you must restrict yourself to int, long,
> float and double.
Oh well, forget about it, then.
> > If you add a library, you won't be able to do ``z = x + y;'', either.
>
> Actually, in C++ using CLN you can do exactly that. In C you have to
> write "z=add(x,y)" or something like that. If you consider that to be
> anything worse than a minor annoyance then whether or not C "has" bignums
> is the least of your worries.
Actually, I consider it a /major/ annoyance, especially when any
kind of static typing is involved, but never mind.
> > Maybe there is more about C than I thought, hmmmmm...
>
> I sense sarcasm here, but this is a possibility that the Lisp community
> would do well to take seriously.
I write C code every day. I am sure many others here do, too.
If anybody else is considering the possibility of there being
``more to C'', he is free to come over to my office and find out
which module is fucking up my internal kmalloc structures, again
;-| (gonna be a looong night)
Regards,
--
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.
PGP key ID #xC66D6E6F
> > > If you add a library, you won't be able to do ``z = x + y;'', either.
> >
> > Actually, in C++ using CLN you can do exactly that. In C you have to
> > write "z=add(x,y)" or something like that. If you consider that to be
> > anything worse than a minor annoyance then whether or not C "has" bignums
> > is the least of your worries.
>
> Actually, I consider it a /major/ annoyance, especially when any
> kind of static typing is involved, but never mind.
Why? I thought "z=x+y" vs "z=add(x,y)" was a purely syntactic issue.
What's it got to do with static typing?
> > > Maybe there is more about C than I thought, hmmmmm...
> >
> > I sense sarcasm here, but this is a possibility that the Lisp community
> > would do well to take seriously.
>
> I write C code every day. I am sure many others here do, too.
> If anybody else is considering the possibility of there being
> ``more to C'', he is free to come over to my office and find out
> which module is fucking up my internal kmalloc structures, again
> ;-| (gonna be a looong night)
Seems to me you're changing the subject. If you're saying that C sucks
because it has fundamental design flaws (like a bad memory management
model) then you get no argument from me. But that has nothing to do with
the level of support provided for bignums.
Look, I'm not saying C/C++ is wonderful. In fact, C/C++ IMO sucks big fat
weenies for many, many reasons (see http://www.elj.com/cppcv3/). But a
lack of support for bignums is not (any more) among them.
E.
> Bruce Hoult <br...@hoult.org> writes:
>
> > Any fixed size representation is going to have a maximum precision. In
> > C on a typical machine, that precision happens to be far greater using
> > doubles than longs.
>
> So use long longs.
>
> GNU C doubles, on a 386, have fifty-three bits of precision. Long
> longs have 64.
Long long is a possibility with modern compilers, but code size and
speed are much better with double. Sometimes this doesn't matter.
Sometimes it does. Data size is of course the same either way (on a 32
bit machine).
-- Bruce
> Long long is a possibility with modern compilers, but code size and
> speed are much better with double.
Code size, maybe. But speed? Floating point operations are not known
for their speed. Adding a long long is a couple of integer adds. Adding
two floating point numbers is usually a rather longer operation...
> I wasn't trying to be funny, I was trying to understand
> what you were getting at. So to paraphrase, you are
> saying that integer arithmetic in IEEE floating point
> is exact provided the exponent is zero or smaller.
> (That latter detail is important!)
That's correct.
> And yes, I blew it on the number. I was *sure* I didn't,
> but I just tried it and it worked fine like you said.
> The limiting integer is 9007199254740991
>
> Unfortunately, financial arithmetic doesn't usually deal
> with integer amounts of the preferred currency, and you
> can exceed the limits of a double precision float
> (Turkey's foreign debt in Turkish lira).
Yes, that could be a problem.
I was doing this stuff in a stockbroking/investment banking place in the
late 80's and early 90's. Doubles were certainly fine for calculations
dealing with New Zealand in $NZ. They might well not be for Lira or
rubles or something like that. Given inflation and so forth they might
not be suitable for the US any more. $90,071,992,547,409.91 is only
1500 times more than Bill Gates' worth (and 300 times more than
Microsoft itself), so if it's adequate for the total worth of US stocks
now, it won't be for much longer.
-- Bruce
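A sketch of the scheme Bruce describes (integral cents stored in doubles), in Python since its floats are IEEE doubles; the `add_cents` name and the amounts are invented for the illustration:

```python
# Store an exact integer number of cents in a double. This stays exact
# as long as every value involved is below 2^53.
MAX_EXACT_CENTS = 2 ** 53 - 1   # 9007199254740991 cents, ~$90 trillion

def add_cents(a: float, b: float) -> float:
    """Add two amounts held as integral cents in doubles."""
    return a + b   # exact while the operands and the sum are < 2^53

price = float(152)            # $1.52 held as 152 cents
tax   = float(175)            # $1.75 held as 175 cents
total = add_cents(price, tax) # exactly 327.0, no rounding error
```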
> Look, I'm not saying C/C++ is wonderful. In fact, C/C++ IMO sucks big fat
> weenies for many, many reasons (see http://www.elj.com/cppcv3/). But a
> lack of support for bignums is not (any more) among them.
I disagree completely. If you're lucky/unfortunate enough to control
the entire world your application lives in, then, great, use an MP
library. But if you need to share data with anything else, you're
screwed.
Not on modern machines.
I tried this, using g++ with -O3 on two machines:
-----------------------------
#include <iostream>

#ifdef usedouble
typedef double num;
#else
typedef long long num;
#endif

int main() {
    num t = 0;
    for (num i = 0; i <= (num)10000000; i += (num)1) {
        t += i;
    }
    std::cout << t << std::endl;
    return 0;
}
-----------------------------
            Athlon/700   G4/867
double         0.11       0.08
long long      0.06       0.09
double is faster on the PowerPC, long long on the x86. There's not a
lot in it in either case. Certainly both are far faster than any bignum
or rational type are going to be.
I haven't tested it, but double will certainly be a lot faster than long
long for multiplication on these machines.
-- Bruce
> g...@jpl.nasa.gov (Erann Gat) writes:
>
> > Look, I'm not saying C/C++ is wonderful. In fact, C/C++ IMO sucks big fat
> > weenies for many, many reasons (see http://www.elj.com/cppcv3/). But a
> > lack of support for bignums is not (any more) among them.
>
> I disagree completely. If you're lucky/unfortunate enough to control
> the entire world your application lives in, then, great, use an MP
> library. But if you need to share data with anything else, you're
> screwed.
None of this makes any sense to me at all.
> I disagree completely.
What are you disagreeing with? That C sucks, or that it provides bignum
support?
> If you're lucky/unfortunate enough to control
> the entire world your application lives in, then, great, use an MP
> library.
Why do I need to "control the entire world that my application lives in"
(whatever that means) to use an MP library?
> But if you need to share data with anything else, you're screwed.
Nonsense. You are no more "screwed" passing a bignum from C to "anything
else" than from Allegro CL to Corman Lisp. It's exactly the same problem
in both cases, and unless efficiency is a concern you'd probably solve it
in exactly the same way in both cases: by passing an ascii representation.
E.
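A sketch of the ASCII interchange Erann describes, in Python; the point is only that a decimal string round-trips a bignum losslessly between any two systems that can print and parse one:

```python
# Any two systems that can read and write decimal strings can exchange
# bignums losslessly, whatever their internal representations are.
big = 2 ** 200 + 17          # a value no machine word can hold

wire = str(big)              # roughly what GMP's mpz_get_str, or a Lisp
                             # ~D format directive, would produce
received = int(wire)         # roughly mpz_set_str / parse-integer

round_trip_ok = (received == big)
```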
> In article <xcvsn64...@famine.OCF.Berkeley.EDU>,
> t...@famine.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:
> > If you're lucky/unfortunate enough to control
> > the entire world your application lives in, then, great, use an MP
> > library.
>
> Why do I need to "control the entire world that my application lives in"
> (whatever that means) to use an MP library?
Because if you need to call into code that you didn't write to use the
MP library, you can't pass in your bignums, rationals, etc.
I think I mean the same thing as Thomas here; it might help if you
told us if you ever wrote something that involves bignums both in
Lisp and in languages that support bignums only via a library.
Because if you had, well, I wouldn't know what to say anymore. You
should /know/ :-)
The problem with bignum libraries is, IMO, that what they give
you is bignums all right, but what they /don't/ give you is /integers/
(in the Lisp sense). That is, you /still/ can't use bignums and
int's interchangeably. You constantly have to convert between
int's and bignums and vice versa. If you already have written lots
of huge libraries that use integers, you can't magically convert them
to using bignums instead (I think this is what Thomas meant by data
exchange). And sure, ``z = add(x, y);'' looks simple enough, but,
say,
try
z = toBignum(func(toInt(add(add(x, toBignum(y)), toBignum(-2)))));
with SomeConversionException X ...
doesn't look so simple anymore, I'd say. Especially when you consider that
all it does is (setq z (func (+ x y -2))), if I didn't make a
mistake counting all those pesky parentheses you get in infix
languages ;-)
Moreover, whenever you want to introduce a new numerical variable,
after you've decided to go ``bignum'', you have to think, each and
every time, whether to use an int or a bignum. Often your decision
will be wrong and you have to rewrite code over and over again...
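The contrast Nils draws can be made concrete. Python's integers behave like Lisp integers here (silent fixnum/bignum promotion), so the quoted conversion chain collapses; `func` is the same hypothetical stand-in as in the post above:

```python
# Lisp-style integers promote silently: the same + works whether the
# values fit in a machine word or not, so no toBignum/toInt calls.
def func(n):            # hypothetical stand-in for the func above
    return n * 2

x = 2 ** 100            # far beyond any fixnum or machine-int range
y = 5

# The whole conversion chain in the quoted C example collapses to:
z = func(x + y - 2)     # i.e. (setq z (func (+ x y -2)))
```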
> In article <XMCs8.15952$%s3.54...@typhoon.ne.ipsvc.net>,
> "Joe Marshall" <prunes...@attbi.com> wrote:
>
> > I wasn't trying to be funny, I was trying to understand
> > what you were getting at. So to paraphrase, you are
> > saying that integer arithmetic in IEEE floating point
> > is exact provided the exponent is zero or smaller.
> > (That latter detail is important!)
>
> That's correct.
>
>
> > And yes, I blew it on the number. I was *sure* I didn't,
> > but I just tried it and it worked fine like you said.
> > The limiting integer is 9007199254740991
> >
> > Unfortunately, financial arithmetic doesn't usually deal
> > with integer amounts of the preferred currency, and you
> > can exceed the limits of a double precision float
> > (Turkey's foreign debt in Turkish lira).
>
> Yes, that could be a problem.
>
> I was doing this stuff in a stockbroking/investment banking place in the
> late 80's and early 90's. Doubles were certainly fine for calculations
> dealing with New Zealand in $NZ. They might well not be for Lira or
You realize that you are talking essentially about Turkish Lira at
this time and place :)
Cheers
--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.
my major financial arithmetic problem is with SQL engines, not with C or
lisp or perl or php.
SELECT SUM(amount) FROM bookings WHERE id=1; will overflow sooner or
later, regardless of whether I process this number with or without MP support in
the language.
is there any DB with native bignums?
--
Reini Urban
http://xarch.tu-graz.ac.at/home/rurban/film/
> The problem with bignum libraries is, IMO, that what they give
> you is bignums all right, but what they /don't/ give you is /integers/
> (in the Lisp sense).
You need to look at CLN. You will be surprised.
E.
They are related, but they are not "joined at the hip" in the sense that
you imply, namely that because the memory model is broken (which it is) it
follows that that brokenness must manifest itself at the API of every
library that allocates memory. That is not so. (It does place a great
burden on the writer of a library to hide that brokenness, but that is not
at issue here.)
E.
> g...@jpl.nasa.gov (Erann Gat) writes:
>
> > In article <a91fd5$v5c8l$1...@ID-125440.news.dfncis.de>, Nils Goesche
> > <car...@cartan.de> wrote:
> >
> > > The problem with bignum libraries is, IMO, that what they give
> > > you is bignums allright, but what they /don't/ give you is /integers/
> > > (in the Lisp sense).
> >
> > You need to look at CLN. You will be surprised.
>
> You mean CLN by... Bruno Haible?
Yep.
E.
From the PostgreSQL documentation:
The type numeric can store numbers of practically unlimited size and
precision, while being able to store all numbers and carry out all
calculations exactly. It is especially recommended for storing monetary
amounts and other quantities where exactness is required. However, the
numeric type is very slow compared to the floating-point types
described in the next section.
Yours, Florian.
Actually, I am not surprised. The name is funny, though. What
are you trying to tell me? That this library will somehow transform
C++ into a language where every library that used int's before is
magically rewritten to using CLN? I am not very fond of C++, but
wouldn't go so far as to question its Turing-completeness, you know.
/Of course/ one can write a library that will emulate Common Lisp
number types in C++; you can probably write one in TeX, if you
are so inclined. So what?
> In article <gat-100402...@192.168.1.50>, Erann Gat wrote:
> > In article <a91fd5$v5c8l$1...@ID-125440.news.dfncis.de>, Nils Goesche
> ><car...@cartan.de> wrote:
> >
> >> The problem with bignum libraries is, IMO, that what they give
> >> you is bignums allright, but what they /don't/ give you is /integers/
> >> (in the Lisp sense).
> >
> > You need to look at CLN. You will be surprised.
>
> Actually, I am not surprised. The name is funny, though. What
> are you trying to tell me?
Never mind. You can lead a horse to water...
E.
Methinks you're misunderstanding.
Double precision _FP_ is what is dangerous.
Using double _ints_, of some form (e.g. - 64 bits, or, for that
matter, just about anything with a few bits more than 32, which isn't
_quite_ enough anymore...), is a perfectly fine idea.
--
(reverse (concatenate 'string "gro.mca@" "enworbbc"))
http://www3.sympatico.ca/cbbrowne/x.html
"You can swear at the keyboard and it won't be offended. It was going
to treat you badly anyway" -- Arthur Norman
Fine. Tell us _exactly_ how to represent the operation
RESULT = 1.52 + 1.75
Better still, I'll tell you...
In IEEE double FP, precision is expressed as a 53 bit binary fraction.
1.52 is 10^1 * 1369094242697216 / 2^53
1.75 is 10^1 * 1576259842736128 / 2^53
Adding them together gives you
10^1 * 2945354085433344 / 2^53
This does happen to coincide with the FP representation of 3.27, but
_none_ of the values were exact
The exact FP value of the result, 3.27, is
(* 10 2945354085433344 (/ 1 (expt 2 53)))
54861495/16777216
Comparing that to the exact result:
(- 54861495/16777216 327/100)
-33/419430400
The result is absolutely NOT "exact"; it is off by -33/419430400.
> Financial arithmetic in the real world is not defined using
> rationals. It is defined using decimal numbers with a fixed number
> of decimal places. Anything involving a division is by definition
> rounded to fit that number of decimal places at source. Whatever
> the number of decimal places is (usually its two), just scale all
> numbers by ten to the power of that number.
>
> That's what COBOL does, after all.
COBOL traditionally uses BCD, which has exactly as much to do with
IEEE FP as it has to do with Lisp rationals, which is to say, nothing.
--
(reverse (concatenate 'string "gro.mca@" "enworbbc"))
http://www3.sympatico.ca/cbbrowne/linuxdistributions.html
"Because you're computer scientists, you have no need to go to the
college bar" -- Arthur Norman
Huh? 1.75 is 7/4, and can be represented exactly using any base-2
floating point. Your value is 0.175, which cannot. But why did you
divide by 10 first? This is a really, really bad way of converting
floating-point numbers to and from internal form. I have no idea where
this 10^1 thing comes from, but it is usually the other way around, you
have 175/1000, not 0.175*10.
What I get is this (with *read-default-float-format* permanently set to
double-float)
(integer-decode-float 1.52)
=> 6845471433603154
=> -52
=> 1
(integer-decode-float 1.75)
=> 7881299347898368
=> -52
=> 1
Adding these together, I get 14726770781501522 / 2^52 which is
fortunately identical to 7363385390750761 / 2^51.
| Adding them together gives you
|
| 10^1 * 2945354085433344 / 2^53
|
| This does happen to coincide with the FP representation of 3.27, but
| _none_ of the values were exact
Well, at least one was.
| The exact FP value of the result, 3.27, is
| (* 10 2945354085433344 (/ 1 (expt 2 53)))
| 54861495/16777216
(integer-decode-float 3.27)
=> 7363385390750761
=> -51
=> 1
| Comparing that to the exact result:
| (- 54861495/16777216 327/100)
| -33/419430400
|
| The result is absolutely NOT "exact"; it is off by -33/419430400.
This is a pretty odd way to calculate things.
(rational 3.27)
=> 7363385390750761/2251799813685248
(- * 327/100)
=> 1/56294995342131200
However, we also have
(- (rational 1.52) 152/100)
=> 1/56294995342131200
I have no idea where you learned floating-point representation, but you
have just introduced a computational error by working with more inexact
values than you could have, and you have not reported your inaccuracies
for the operands, only the result. I wonder why you did this. Nobody
does normalization in the decimal realm _before_ converting to binary.
///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.
Post with compassion: http://home.chello.no/~xyzzy/kitten.jpg
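Erik's integer-decode-float/rational arithmetic can be replayed with Python's Fraction, which converts a double to its exact rational value the way CL's (rational x) does; the error values below match his:

```python
from fractions import Fraction

# Fraction(x) for a float gives the exact rational value of the double.
assert Fraction(1.75) == Fraction(7, 4)  # 1.75 is exactly representable

# 1.52 is not; its nearest double is off by exactly 1/56294995342131200,
# matching the (- (rational 1.52) 152/100) computation above.
err_152 = Fraction(1.52) - Fraction(152, 100)

# 3.27 carries a representation error of exactly the same size:
err_327 = Fraction(3.27) - Fraction(327, 100)
```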
But as Bruce Hoult pointed out, IEEE floating point works just
fine for representing 54-bit ints.
> Centuries ago, Nostradamus foresaw when Bruce Hoult <br...@hoult.org> would
> write:
> > In article <m37knhv...@europa.pienet>,
> > Greg Menke <gregm...@mindspring.com> wrote:
> >> Bruce Hoult <br...@hoult.org> writes:
> >> > > Even if you use something other than Lisp, don't get tricked into
> >> > > trusting doubles for money. It will work fine for a while until the
> >> > > one-cent errors start showing up.
> >> >
> >> > Doubles are *perfect* for money. Just make sure you wrap them in a
> >> > class or something that hides the fact you're representing cents, not
> >> > dollars.
> >>
> >> Its fine until you do arithmetic with money that involves lots of
> >> decimal places. If all you're doing is adding # units * cost with
> >> various subtotals, it will work out fine and normal rounding
> >> techniques will manage the errors.
> >
> > There are no errors or rounding techniques needed. Integer arithmetic
> > using IEEE FP is exact.
>
> Fine. Tell us _exactly_ how to represent the operation
>
> RESULT = 1.52 + 1.75
You don't. Read my lips: "Integer arithmetic using IEEE FP is exact"[1].
Neither 1.52 nor 1.75 is an integer. The rest of your post therefore
has *nothing* whatever to do with what I posted.
-- Bruce
[1] add the proviso "for integers less than 2^53" if you wish to be
pedantic.
Pedantic? Integer arithmetic is _not_ exact with IEEE FP _unless_ you
confine yourself to integers in a fairly small range, and there is not
even any indication that you have lost precision when it happens. This
is not dealing with _integers_, but with a severely restricted subset of
integers under optimistic conditions. _Integers_ is what we have in
Common Lisp, defined so as not to truncate their precision or work only
modulo some "word length". A guarantee that you work within the range
that your "integer" supports is hard to come by and failure to get it is
the source of many programming errors. Just increasing the fixed number
of bits in the representation constitutes no such guarantee.
[Bruce Hoult:]
> > There are no errors or rounding techniques needed. Integer arithmetic
> > using IEEE FP is exact.
[Christopher:]
> Fine. Tell us _exactly_ how to represent the operation
>
> RESULT = 1.52 + 1.75
integer, n. a number whose fractional part is zero.
Integer arithmetic using IEEE doubles is exact provided you
stay below 2**53 in absolute value. Not as good as real bignums,
but still useful. (And a lot faster than bignums.)
Beyond 2**53, you will start to lose. But then, you lose
beyond 2**32 with machine integers on most machines, but
those are "exact".
[SNIP: strangely wrong explanation of how FP works]
The most interesting thing about the calculation you
exhibited is that all the arithmetic was done exactly.
The only "inexactness" arose from trying to represent
things like 1.52 in IEEE floating point, which can't
be done exactly. But the *calculations* were perfectly
exact. Now, integers of moderate size are represented
exactly in IEEE floating point. The calculations on them
are exact. There is no inexactness here.
Bruce *should* have said what bound he was placing on
those integers, since (e.g.) 2**250 is exactly representable
as an IEEE double but calculating 2**250+1 won't give the
right answer. But I don't see how explaining to him that
finite decimals aren't always exactly representable was
relevant.
--
Gareth McCaughan Gareth.M...@pobox.com
.sig under construc
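The distinction drawn above, representation error versus calculation error, can be checked directly. A Python sketch (Python floats are IEEE doubles):

```python
# Every integer up to 2^53 has an exact double representation; beyond
# that, only those whose low-order bits are zero.
exact_boundary = 2 ** 53

assert float(exact_boundary - 1) == exact_boundary - 1   # still exact
assert float(exact_boundary + 1) != exact_boundary + 1   # first loss

# 2^250 itself fits exactly (a single mantissa bit), but arithmetic
# near it silently drops the low-order bits:
big = float(2 ** 250)
assert big == 2 ** 250
assert big + 1 == big        # the +1 vanishes without any warning
```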