I have only float variables, and I have tried to code all the cast operations very carefully. I looked in MSDN for information about Release-mode float optimisation, but the greatest difference appears between F5 and Ctrl+F5 mode. I'm very interested in any advice about this.
thanks
Alexandre Buisson
PhD Student
FT R&D - DIH/HDM/CIM
4 Rue du Clos Coutel
BP 59
35512 Cesson Sevigne - France
Tel : 02 99 12 38 39
Fax : 02 99 12 40 98
Mobile : 06 81 64 57 92
Mail : alexandr...@rd.francetelecom.com
This seems like an extremely large discrepancy to simply be the result of FPU sequencing, storing/recalling intermediate results, etc. I can only think that perhaps there is a difference in inputs or some other undiagnosed problem here... perhaps a variable with a minor effect is not defined, or some such?
Can you isolate the calculations in question down to a reasonably small
code segment that reproduces the problem? (Sometimes doing such causes
the problem to disappear, thus elucidating the cause.)
An additional thought... is the result the inverse of an ill-conditioned matrix, or some other numerically unsound computation technique, by chance? In such a case, a small difference in intermediate results can lead to large differences. Another possibility is that this is an iterating or integration problem, not a single calculation.
The problem looks like a rounding error, with the different compile switches enforcing different levels of type checking and therefore different truncation errors.
If the problem is a rounding error, use BCD (binary coded decimal) as a data type. I would suggest that anyway, unless efficiency is an issue.
Floating point calculations are usually only good for one operation at a
time. They are not good for sequences or multiple operations.
In addition, math performed on floating point numbers, such as the matrix math behind graphics routines, should always be performed from the starting numbers. For example, if you were to rotate an object 40 degrees in 1-degree increments, you would perform your point matrix calculations against the point of origin for every transformation. You would not perform a 1-degree calculation from your current location, adding sequentially to the point of origin, because this will cause rounding errors. When the object stops or changes direction of rotation, you re-zero your algorithm.
I suggest you review the introductory chapters of "Numerical Recipes in C." This gives the basic practices of number stability with functions using digital computers. And just in case you didn't know, which for someone studying for a PhD I would be surprised, binary coded decimal or BCD is pretty much the standard for fault-intolerant math.
"Alexandre Buisson" <alexandr...@rd.francetelecom.fr> wrote in message
news:9q4g23$gh...@news.rd.francetelecom.fr...
>I suggest you review the introductory chapters of "Numerical Recipes in C."
>This gives the basic practices of number stability with functions using
>digital computers. And just in case you didn't know, which for someone
>studying for a PhD I would be surprised, binary coded decimal or BCD is pretty
>much the standard for fault-intolerant math.
Then the "standard" is an ugly kludge. Decimal is not intrinsically
more accurate than binary, and BCD arithmetic is slow.
Gerry Quinn
--
http://bindweed.com
Puzzles, Arcade, Strategy, Kaleidoscope Screensaver
Download evaluation versions free - no time limits
Check out our new arcade-puzzler "Bubbler"!
Instead, go to a real live standard and read what is supplied in
<math.h> by any conforming C compiler system.
--
Chuck F (cbfal...@yahoo.com) (cbfal...@XXXXworldnet.att.net)
Available for consulting/temporary embedded and systems.
(Remove "XXXX" from reply address. yahoo works unmodified)
mailto:u...@ftc.gov (for spambots to harvest)
I don't think you understand what I am saying.
> and BCD arithmetic is slow.
>
This is true.
BCD stands for binary coded decimal.
"It is a way of representing numbers by means of codes for the decimal
digits. For example, consider the number 65. In binary, 65 is 01000001,
and that is how most computers represent it. But some computer programs
might represent it as the code for 6 followed by the code for 5, i.e. 0110
0101.
The advantage of BCD shows up when a number has a fractional part, such
as 0.1. There is no way to convert 0.1 into binary exactly, it would
require an infinite number of digits, just like converting 1/3 into decimal.
But in BCD, 0.1 is represented by the code for 1 immediately after the
point, and no accuracy is lost.
BCD arithmetic is considerably slower and takes more memory than binary
arithmetic. It is used primarily in financial work and other situations
where rounding errors are intolerable. Pocket calculators use BCD"
Barron's Dictionary of Computer and Internet Terms. Under binary coded
decimal.
The simple fact is digital computers can not represent all real numbers.
Numbers such as 1/3 can not be represented accurately in the base ten
system. Numbers such as 1/10 can not be represented in the binary system.
So you have errors in real numbers in both areas. The binary ones are
really tricky because most people are not used to trying to represent
rational numbers with binary digits.
Regardless, any time you perform division, or multiplication by a number
less than one, which is the same thing, you will encounter rounding errors
if the real result leaves a remainder when performing the operation in
decimal or binary. Which, by the way, is a lot of numbers.
Therefore, where accuracy is needed BCD is the only way to get pure
results. You could also create an abstract data type for rationals. This
is standard fare for starting computer students.
Of course, most game programming and graphics applications do not use accurate math but rely greatly on approximations; otherwise they wouldn't work.
Most approximations are done by using the mod operator to limit the significant digits and the detection accuracy.
It doesn't matter what math.h says.
Math is math my friend. Division is division.
You can not represent all real numbers with a binary number system.
You can not represent all real numbers with a decimal number system.
Leave the computer out of the discussion for a second and put your brain
in gear and think.
You can not represent 1/3 using decimals.
You can not represent 1/10 using binary.
Performing any division by 10 or a multiple of 10, or by 3 or a multiple of 3, with a digital computer using standard floating point variables will result in a mathematical error.
That is math.
If error is not allowed then you have to create or use a data type that
circumvents the decimal and binary characteristics of the computer.
For instance, you could create an abstract rational data type that
performs operations using integers for the numerator and denominator. The
general practice is that each digit is represented by an integral value 0-9.
So BCD is the most common data form for this use.
However, some programming languages such as Visual Basic have a currency
data type that is essentially an abstract data type that does not allow for
rounding errors.
The book "Numerical Recipes in C" is one of a kind and it is published by
Cambridge. The introductory chapter alone is worth the price of the book.
Allow me to quote a short excerpt from the first edition:
Before reading this allow me to explain that in binary form all floating
point numbers are divided into three sections if signed: exponent,
mantissa, and one sign bit. Mathematical operations on numbers at the
binary level can only occur if the exponents are the same.
"Arithmetic among numbers in floating-point representation is not exact,
even if the operands happen to be exactly represented.(i.e have exact
values...). For example, two floating numbers are added by first
right-shifting (dividing by two) the mantissa of the smaller (in magnitude)
one, simultaneously increasing its exponent, until the two operands have the
same exponent. Low-order (least significant) bits of the smaller operand
are lost by this shifting. If the two operands differ too greatly in
magnitude, then the smaller operand is effectively replaced by zero, since
it is right-shifted to oblivion." p 25.
So even with perfectly accurate numbers in the digital realm the operation
itself can introduce round off error. And the operation does not have to be
division.
This is why, if you are going to use floating point values, you should use scientific notation and equalize the exponents before the computer gets them. That is, use numbers that have one integer digit and the same number of digits right of the decimal point.
I would imagine that right now you are starting to think about logarithms.
Logs are also a very good way to avoid math error.
The main thing is that you understand where the errors are likely to turn
up and how to minimize or if necessary eliminate them.
In conclusion:
There is round off error and truncation error.
Round off error is machine dependent.
Truncation error is a result of the program or algorithm.
Which is basically what I said in my reply to the original post.
You can have an error that is a result of machine characteristics or one
that is a result of algorithm characteristics. The control of truncation
error, or generating correct algorithms, is the goal of scientific or
numeric computing. Machine error is avoidable only in changing the data type
to an abstract or operating within a specified tolerance.
Errors like this show up largely in mathematical recurrence relations and
problems solved by mathematical induction implemented with recursion or in
any operation where there is a large difference in the magnitude of the
operands.
I thought you were saying it's necessary because binary standards are
not respected. If so, it's entirely plausible, but I stand by my
opinion of it as an ugly kludge.
>
> and BCD arithmetic is slow.
>>
>This is true.
>
>BCD stands for binary coded decimal.
>"It is a way of representing numbers by means of codes for the decimal
>digits. For example, consider the number 65. In binary, 65 is 01000001,
>and that is how most computers represent it. But some computer programs
>might represent it as the code for 6 followed by the code for 5, i.e. 0110
>0101.
> The advantage of BCD shows up when a number has a fractional part, such
>as 0.1. There is no way to convert 0.1 into binary exactly, it would
>require an infinite number of digits, just like converting 1/3 into decimal.
>But in BCD, 0.1 is represented by the code for 1 immediately after the
>point, and no accuracy is lost.
In BCB ("binary coded binary"), 0.1 decimal has an infinite
representation, whereas in BCD it is finite. In either BCB or BCD, 1/2
has a finite representation. But in either BCB or BCD, 1/3 only has an
infinite representation. What advantage is there in having an exact
representation a little bit more often? [Maybe if you are counting
pennies, but even then interest payments will take you to irrational
numbers soon, and sharing a sum between three will lead you to infinite
decimals.]
> BCD arithmetic is considerably slower and takes more memory than binary
>arithmetic. It is used primarily in financial work and other situations
>where rounding errors are intolerable. Pocket calculators use BCD"
I thought that was more to do with simplifying internal software.
>
> The simple fact is digital computers can not represent all real numbers.
>Numbers such as 1/3 can not be represented accurately in the base ten
>system. Numbers such as 1/10 can not be represented in the binary system.
>So you have errors in real numbers in both areas. The binary ones are
>really tricky because most people are not used to trying to represent
>rational numbers with binary digits.
> Regardless, any time you perform division, or multiplication by a number
>less than one, which is the same thing, you will encounter rounding errors
>if the real result leaves a remainder when performing the operation in
>decimal or binary. Which, by the way, is a lot of numbers.
>
> Therefore, where accuracy is needed BCD is the only way to get pure
>results. You could also create an abstract data type for rationals. This
>is standard fare for starting computer students.
Now I'm lost. You admit that BCD doesn't have any real
technical advantage over BCB, yet you insist it is the only way. And
where do 'people' come in? Do you want to cripple (say)
expensive fluid-dynamic computations just to make it easier to implement
the consistent input of numbers like 1.9? If so, it's time to implement
a simple input conversion protocol that will deal with it.
It has nothing to do with BCD versus binary. Neither system can
represent all rationals, let alone all reals, exactly.
If you want exact values to 0.1 accuracy, use integers scaled by
10, etc. To decide what ranges of values you can handle, look at
<limits.h>, or <math.h> for float/doubles.
If they are insufficient, build code that can handle the range
needed.
[ ... ]
> The book "Numerical Recipes in C" is one of a kind and it is published by
> Cambridge. The introductory chapter alone is worth the price of the book.
If you're doing numerical work, and want to depend on the results,
you might want to look at some of the (many) critiques of the
_Numerical Recipes_ books. Just for example, there's one at:
that's well worth reading, and gives _numerous_ examples of areas in
which the Numerical Recipes books fall FAR short of ideal. (Note
that this is off the Jet Propulsion Labs web site -- not exactly a
bunch of ignorant amateurs.)
Much of what the Numerical Recipes books provide as "the way" to
solve a particular problem is really the way everybody else uses as
an example of an outdated method that's numerically unstable, poorly
designed, etc., and will _usually_ cause problems.
I'd note that somebody has a bit near the end questioning the
relevance of the rest because they're 10 years old or so. This, of
course, is a thoroughly specious argument: the basics of math haven't
changed in 10 years, and neither, for the most part, has the way
floating point arithmetic is implemented in most computers -- in
fact, most computers basically implement IEEE 754 arithmetic (or try
to), which has been a standard for well over 20 years.
To make a long story short, the Numerical Recipes books are basically
good ONLY if taken as only a general background on the VERY most
basic methods of doing some simple scientific calculations. It's a
bit like reading _Principia Mathematica_ -- certainly interesting,
perhaps informative, but clearly NOT a suitable guide to the best
available methods for solving problems.
> Before reading this allow me to explain that in binary form all floating
> point numbers are divided into three sections if signed: exponent,
> mantissa, and one sign bit. Mathematical operations on numbers at the
> binary level can only occur if the exponents are the same.
This, of course, is false. What they describe is undoubtedly the
single most common representation for floating point numbers, but is
clearly NOT the only one in use.
> "Arithmetic among numbers in floating-point representation is not exact,
> even if the operands happen to be exactly represented.(i.e have exact
> values...).
Again, as stated, this is false. Rather than "is not exact", they
should say "may not be exact". "Is not exact" means it is never
exact under any circumstances. In reality, that's just not the case
-- depending on the values involved, many calculations can be done
exactly. Just for example, I've got some old routines that use the
floating point unit on an x86 to work with 64-bit integers. All the
supported calculations within the supported range are absolutely
exact.
> For example, two floating numbers are added by first
> right-shifting (dividing by two) the mantissa of the smaller (in magnitude)
> one, simultaneously increasing its exponent, until the two operands have the
> same exponent. Low-order (least significant) bits of the smaller operand
> are lost by this shifting.
Again, the correct phrase would be "may be lost by this shifting."
First of all, if the two start out with the same magnitude, there is
no shifting done, and no precision is lost at all. Second, it's
fairly common to start with (for example) single precision operands,
but carry out intermediate operations to double precision. This
allows considerable shifting without losing any bits at all.
> If the two operands differ too greatly in
> magnitude, then the smaller operand is effectively replaced by zero, since
> it is right-shifted to oblivion." p 25.
>
> So even with perfectly accurate numbers in the digital realm the operation
> itself can introduce round off error. And the operation does not have to be
> division.
Of course. Ultimately, what do you expect? Consider what's really
being discussed here: if I ask you to add .00000001 cents to 5
million dollars, and give me the result to the nearest cent, do you
somehow think a computer can magically retain that fraction of a cent
when the result is given only to the nearest cent? Of course it
can't. Multiplication and division are the places that this tends to
matter the most: an obvious example is taking that fraction of a
cent, multiplying by a large number, and THEN rounding the result.
Addition and subtraction generally matter when _multiplied_ -- either
directly (e.g. (x+delta) * big_number) or by repetition (e.g.
subtracting a small fraction many times, where no individual
subtraction has any effect on the minuend).
You should, however, be aware that these problems are well known, and
numerical analysts have been studying them, and designing ways to
avoid them for years. The problem is that Numerical Recipes ignores
nearly all of this work. To continue the example above, (x+delta)
*big_number may work better if you use the distributive property to
get (x*big_number)+(delta*big_number).
> This is why, if you are going to use floating point values you should use
> scientific notation and equalize the exponent before the computer gets it.
This is pure nonsense, and a nearly perfect example of Numerical
Recipes giving somebody just enough knowledge to be dangerous.
> Machine error is avoidable only in changing the data type
> to an abstract or operating within a specified tolerance.
Nonsense -- machine errors are almost always avoided or minimized by
other methods, often involving rearranging the order in which
operations are carried out, but there are lots of other methods as
well, such as transforming the original formula/equation into a
different form that's less likely to trigger problems.
My advice would be to get a copy of _Real Computing made Real:
Preventing Errors in Scientific and Engineering Calculations_, by
Forman S. Acton, ISBN 0-691-03663-2, published by Princeton
University Press. If you look carefully, you'll realize that MANY of
its examples of methods that _don't_ work are essentially direct
outtakes from Numerical Recipes.
--
Later,
Jerry.
The Universe is a figment of its own imagination.
A binary number is a binary number. I have never seen the acronym BCB
until this post. In binary you can not represent 0.1 accurately. It only has a theoretical infinite representation, because you can not represent it infinitely using a machine.
The field of mathematics commonly studied to help people understand how
computers deal with math is discrete mathematics or combinatorics.
Computers have a set of numbers that they can represent at the machine
level. That set is finite. Any time you are working with a concept in math
that is not finite you can not represent it accurately at the machine level.
Therefore, you have to abstract it. A binary coded decimal digit is an abstraction of a number concept, because you translate each decimal digit into a four-digit binary code, which essentially gives you integral functionality.
BCD will therefore eliminate rounding errors caused by the machine, but it will not eliminate truncation errors that are inherent in using decimal numbers.
> whereas in BCD it is finite. In either BCB or BCD, 1/2
> has a finite representation. But in either BCB or BCD, 1/3 only has an
> infinite representation.
Again, this is true, but what is done in finance is that BCD is used for currency, the number of digits right of the decimal point is limited to two, and the business logic employs official rounding. So the entire process is carefully controlled and there is no guesswork. Whereas if you just started doing floating point math with floats or doubles and limited the digits to 2 past the decimal point, you would find that you got different results with different operations and different orders of operations.
> What advantage is there in having an exact
> representation a little bit more often?
When people are talking about large sums of money they prefer to be exact.
There are standard accounting techniques that require exact mathematical formulas be used. There are applications in industry, when dealing with large quantities or distances, where trigonometric functions or volumetric functions are employed and absolute exactitude is required.
[Maybe if you are counting
> pennies, but even then interest payments will take you to irrational
> numbers soon, and sharing a sum between three will lead you to infinite
> decimals.]
That is the entire point.
>
> > BCD arithmetic is considerably slower and takes more memory than binary
> >arithmetic. It is used primarily in financial work and other situations
> >where rounding errors are intolerable. Pocket calculators use BCD"
>
> I thought that was more to do with simplifying internal software.
>
No sir, it is directly related to accuracy.
> >
> > The simple fact is digital computers can not represent all real numbers.
> >Numbers such as 1/3 can not be represented accurately in the base ten system.
> >Numbers such as 1/10 can not be represented in the binary system.
> >So you have errors in real numbers in both areas. The binary ones are
> >really tricky because most people are not used to trying to represent
> >rational numbers with binary digits.
> > Regardless, any time you perform division, or multiplication by a number
> >less than one, which is the same thing, you will encounter rounding errors
> >if the real result leaves a remainder when performing the operation in
> >decimal or binary. Which, by the way, is a lot of numbers.
> >
> > Therefore, where accuracy is needed BCD is the only way to get pure results.
> >You could also create an abstract data type for rationals. This
> >is standard fare for starting computer students.
>
> Now I'm lost. You admit that BCD doesn't have any real
> technical advantage over BCB,
I am not sure where I could have been interpreted to have said that. I have
only said that it is slower. The primary reason for using BCD is it
eliminates machine dependent rounding errors. It does not eliminate
algorithmic truncation. But at least with BCD you have a tool that is
accurate at the machine level for calculation.
yet you insist it is the only way. And
> where do 'people' come in?
People come in at the algorithmic level. That is, rounding errors happen at the machine level. Truncation errors happen at the people level, because it is the program they wrote that has the logic error in it: for example, they did not take into account the inherent error in using 0.33 as 1/3. Understand that 1/3 is not the same concept as 33%. The former can only be accurately represented using a rational number; the latter is exactly 33 hundredths. So when performing math calculations, people come in when they start using the computer to perform mathematics. If they don't understand that many rationals do not translate accurately to decimals, they will encounter truncation errors when attempting mathematics using rational concepts. The severity of the problem created by this depends on what the calculation is being used for.
Do you want to cripple (say)
> expensive fluid-dynamic computations just to make it easier to implement
> the consistent input of numbers like 1.9?
Not at all. It's just that a computer scientist has to understand the limitations of his machine. The value you cite may or may not be correct depending on the math used to attain it. How important is it that it be accurate to within one tenth? Or one one-hundredth? I regularly measure fasteners for industrial use to 1/1000 of an inch. I have performed measurements to 1/10000 of an inch. So it all depends on the application.
As I said in another post in this thread, you either have to use an
abstract that will absolutely represent the values you are dealing with or
you have to work within an acceptable tolerance. Most computer work is done within acceptable tolerances, and people do it without even thinking. Some mathematical work has to be done to acceptable tolerances because there is no absolutely accurate value, as with pi.
Work generally continues until they run up against an application that
requires absolute accuracy or they stray outside the acceptable tolerance
through a series of calculations and they get incorrect results or something
doesn't fit or the right amount of money isn't there or there is more of
something in a place where it is not supposed to be. They generally assume that they are dealing with exact values, or values exact enough to get the job done, and are surprised when either rounding error or truncation produces a problem.
If so, it's time to implement
> a simple input conversion protocol that will deal with it.
>
In mathematics there are no silver bullets. You either know what you are
doing or you don't.
I suggest that you re-evaluate your math background and your understanding of number representations in computer variables.
"CBFalconer" <cbfal...@yahoo.com> wrote in message
news:3BE0207F...@yahoo.com...
> RJ wrote:
> >
> ... snip ...
> >
From this point-
> > The advantage of BCD shows up when a number has a fractional part, such
> > as 0.1. There is no way to convert 0.1 into binary exactly, it would
> > require an infinite number of digits, just like converting 1/3 into decimal.
> > But in BCD, 0.1 is represented by the code for 1 immediately after the
> > point, and no accuracy is lost.
To this point-
is a quote from "Barron's Dictionary of Computer and Internet Terms" for BCD.
Possibly you should write them and tell them they don't know what they are talking about?
>
> It has nothing to do with BCD versus binary. Neither system can
> represent all rationals, let alone all reals, exactly.
>
That is not the point.
There are two forms of error in computational mathematics.
1. Rounding error. This is a result of machine-dependent characteristics.
BCD corrects this.
2. Truncation error. This is a result of improper use of algorithms and
math principles.
You are confusing my argument as being that the answer for number 1 is also
the answer for number 2. I never said that and I have tried carefully to
explain the difference. However, I have used the word truncation in
reference to rounding so I could see how you could confuse my point in that
I did not segregate the terms carefully.
Mathematically, truncation is what occurs any time you round off digits of a result to represent a real value. In science your results can not be more accurate than your readings, so you limit your mathematical results to the same level of precision as your readings. This is normally called truncation.
In computer science however, truncation and rounding are two different
things.
Truncation is the deletion of significant digits as the result of logical
operations.
Rounding is the deletion of significant digits as the result of machine
operations.
I apologize if I have confused this issue at all.
NOTE:
The normal practice is to perform all math in the realm in which the measurements were taken. That is, if the measurements were taken using a digital instrument, you perform digital math. If the math is rational math, which is common in computer graphics applications, it is more accurate to perform the math using a rational abstract data type. However, most of the time, things like points on the compass in radians and square root values are stored in lookup tables, to eliminate the need for a rational data type and to provide an accurate source of information for their digital approximations.
After the math is done, if the result needs to be in a different format, that is when the conversion occurs. One of the most common mistakes is to perform decimal conversions on rationals as part of a formula or algorithm.
> If you want exact values to 0.1 accuracy, use integers scaled by
> 10, etc. To decide what ranges of values you can handle, look at
> <limits.h>, or <math.h> for float/doubles.
>
And again, you reinforce my point. You use the term "0.1 accuracy". When you
are dealing with absolute values there is no level of accuracy because the
values are absolute.
There is a relatively small set of numbers that digital computers can
represent at the machine level absolutely.
The only way to positively perform absolute calculations is to abstract
the data type.
Otherwise, as you have correctly pointed out, you are dealing with a
"level of accuracy" which means in layman's terms, slop factor.
> If they are insufficient, build code that can handle the range
> needed.
>
Which is pretty much my assertion as well. But by code I assume you mean
data type. And BCD is the best data type to use for decimal work because it
eliminates machine round off error.
If you want an authority on numerical analysis, look for William
Kahan, NOT Barron's!
Barron's is a publisher of financial information for people lacking
the intelligence or inclination to truly understand what they're
dealing with, and the quotes you give from their dictionary seem to
fit that mold quite closely.
I'd particularly suggest Kahan's paper "Mathematics Written in Sand",
which is available for download at:
http://www.cs.berkeley.edu/~wkahan/MathSand.pdf
Take particular note of the TI calculators. As he notes, at least
most of them use decimal calculations, but also do "egregious" things
to calculations...
We aren't really at the level of numerical analysis yet.
We are just at the level of round off error versus truncation error. Pretty basic stuff really.
The statement in Barron's is scientifically true. It can be proven with
basic math. Anyone who doesn't see that is either too impatient to sit down
and do the numbers or ignorant.
>
> Barron's is a publisher of financial information for people lacking
> the intelligence or inclination to truly understand what they're
> dealing with, and the quotes you give from their dictionary seem to
> fit that mold quite closely.
That is quite a statement, and one that I feel is unjustified. I understand that a dictionary is not an encyclopedia. Simplification leads to generalization, and there are always exceptions to a generalization. But stating flatly that Barron's references are some sort of dictionary for dummies is really erroneous. In addition, you make the assertion but provide no contrary evidence to back it up. I have clearly demonstrated in this thread, not this particular post, but in the thread, the mechanics of floating point math and why BCD is superior for fault-intolerant applications.
Your statement is simply that, a statement. You argue, but you have no
argument.
>
> I'd particularly suggest Kahan's paper "Mathematics Written in Sand",
> which is available for download at:
I read it at your advice, and find it to be a list of extreme illustrations that prove my point. His basic statement is that the machines and programming languages that purport to be calculators and mathematical engines have glaring limitations. They are not the limitless black boxes capable of all mathematical calculations that those who have never understood or studied assembly language, or all that much math, believe them to be.
I'm not sure if you are getting it but that is exactly what I am saying.
Now, you may understand my endorsement of BCD data types over floating point data types as a wholesale endorsement of calculators and all such programs and tools that use BCD. That is quite a stretch. BCD is logic dependent.
While the limitation of machine binary is its finite number set, the
limitation of BCD and calculator logic is the problem set envisioned by the
programmer. Again, that is a contention of your cited author. That is why you
have programmable calculators. That is why the TI-89 and TI-92 have their own
assembly language and dedicated web sites for new programs: so
that those who are advanced enough to exceed the problem set of the machine
as designed can do so.
The basic point, and one that I keep having to repeat over and over, is
that computers have a finite set of numbers that they can use to perform
operations at the machine level, regardless of their size, type or
advancement. If you get outside of this set, then you get error. And
all it takes is a large difference in exponent between any two
values, or a rational value that does not divide evenly, combined
with a sum or repetitive algorithm where the error is magnified, and within a
very short time you will be totally wrong mathematically. The concept that
is coined in "Numerical Recipes in C" is that of functional stability. That
is, once a program or function achieves an error factor outside of the error
tolerance, it is no longer stable. Instability is most easily achieved by
performing operations on operands of large differences in magnitude.
The writer of your paper brings up some interesting points, but most of
them are moot. They are like the floating point division errors on early Intel
processors: only PhDs would ever find them, and then only if they were
looking. He talks about a woman being paid compound interest on a penny,
compounded by the minute, over a long period. Well, anyone with half
a brain and a first quarter of algebra can see where he is going with that
one. Please. This is simply an example of the lack of intelligence at
Berkeley. Or the gullibility of the students.
No, the common errors that programmers run into are as simple as
overflow, underflow and repetitive truncation in the arithmetic logic. But it
would seem that many programmers don't have a clue as to what those terms
mean.
>
> http://www.cs.berkeley.edu/~wkahan/MathSand.pdf
>
> Take particular note of the TI calculators. As he notes, at least
> most of them use decimal calculations, but also do "egregious" things
> to calculations...
>
With his extreme and worthless examples, yes. But his point and paper are
moot.
They don't eliminate rounding errors. For example, the base-7 number
0.1 is 0.142857... in decimal. Rounding errors in that number can only
be eliminated in bases that are a multiple of seven. They eliminate
rounding errors in a subset of numbers, admittedly ones that are common.
As others have pointed out, multiplying by a power of ten does the
same.
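To make the base-conversion point concrete: here is a small Python sketch
(the `expand` helper is my own illustration, not a standard routine) showing
that base-7 "0.1" (i.e. 1/7) repeats in decimal, just as decimal 0.1 repeats
in binary.

```python
from fractions import Fraction

def expand(frac, base, places):
    """Return the fractional-part digits of `frac` in the given base."""
    digits = []
    for _ in range(places):
        frac *= base
        digit = int(frac)      # the integer part is the next digit
        digits.append(digit)
        frac -= digit
    return digits

# Base-7 "0.1" is 1/7; its decimal expansion repeats: 1 4 2 8 5 7 ...
print(expand(Fraction(1, 7), 10, 12))

# Decimal "0.1" is 1/10; its binary expansion repeats: 0 0 0 1 1 0 0 1 1 ...
print(expand(Fraction(1, 10), 2, 12))
```

Either way, some exactly-written fractions in one base become repeating
digit strings in another; no base avoids this for all inputs.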
> What advantage is there in having an exact
>> representation a little bit more often?
>
>When people are talking about large sums of money they prefer to be exact.
>There are standard accounting techniques that require exact mathematical
>formulas be used. There are applications in industry when dealing with
>large quantities or distances where trigonometric functions or volumetric
>functions are employed and absolute exactitude is required.
>
> [Maybe if you are counting
>> pennies, but even then interest payments will take you to irrational
>> numbers soon, and sharing a sum between three will lead you to infinite
>> decimals.]
>
>That is the entire point.
That is why there is no point, and BCD has little real value.
>> > BCD arithmetic is considerably slower and takes more memory than binary
>> >arithmetic. It is used primarily in financial work and other situations
>> >where rounding errors are intolerable. Pocket calculators use BCD"
>>
>> I thought that was more to do with simplifying internal software.
>>
>No sir, it is directly related to accuracy.
How do you know?
>> Now I'm lost. You admit that BCD doesn't have any real
>> technical advantage over BCB,
>
>I am not sure where I could have been interpreted to have said that. I have
>only said that it is slower. The primary reason for using BCD is it
>eliminates machine dependent rounding errors. It does not eliminate
>algorithmic truncation. But at least with BCD you have a tool that is
>accurate at the machine level for calculation.
I don't see how you fail to see that it isn't.
> Do you want to cripple (say)
>> expensive fluid-dynamic computations just to make it easier to implement
>> the consistent input of numbers like 1.9?
>
>Not at all. It's just that a computer scientist has to understand the
>limitations of his machine. The value you cite may or may not be correct
>depending on the math used to attain it. How important is it that it be
>accurate to within one tenth? Or one hundredth? I regularly measure
>fasteners for industrial use to 1/1000 of an inch. I have performed
>measurements to 1/10000 of an inch. So it all depends on the application.
It can be made as accurate as desired in any number base, as can the
measurements of your fasteners.
> As I said in another post in this thread, you either have to use an
>abstract that will absolutely represent the values you are dealing with or
>you have to work within an acceptable tolerance. Most computer work is done
>within acceptable tolerances and people do it without even thinking. Some
>mathematical work has to be done to acceptable tolerances because there is no
>absolutely accurate value, such as pi.
> Work generally continues until they run up against an application that
>requires absolute accuracy, or they stray outside the acceptable tolerance
>through a series of calculations and they get incorrect results, or something
>doesn't fit, or the right amount of money isn't there, or there is more of
>something in a place where it is not supposed to be. They generally assume
>that they are dealing with exact values, or values exact enough to get the
>job done, and are surprised when either rounding error or truncation produces
>a problem.
I want to divide 1000 by 3. How does BCD provide "absolute accuracy"
that binary can't?
There is no base 7 number system.
> They don't eliminate rounding errors. For example, the base-7 number
> 0.1 is 0.142857... in decimal. Rounding errors in that number can only
> be eliminated in bases that are a multiple of seven. They eliminate
> rounding errors in a subset of numbers, admittedly ones that are common.
> As others have pointed out, multiplying by a power of ten does the
> same.
>
> > What advantage is there in having an exact
> >> representation a little bit more often?
> >
> >When people are talking about large sums of money they prefer to be exact.
> >There are standard accounting techniques that require exact mathematical
> >formulas be used. There are applications in industry when dealing with
> >large quantities or distances where trigonometric functions or volumetric
> >functions are employed and absolute exactitude is required.
> >
> > [Maybe if you are counting
> >> pennies, but even then interest payments will take you to irrational
> >> numbers soon, and sharing a sum between three will lead you to infinite
> >> decimals.]
> >
> >That is the entire point.
>
> That is why there is no point, and BCD has little real value.
>
Well, there is certainly a lot of technology devoted to BCD values for them
to have none. BCD is supported in the math co-processors of Intel systems and
almost exclusively in hand calculators. And if you don't see the point, then
you don't see the point.
> >> > BCD arithmetic is considerably slower and takes more memory than binary
> >> >arithmetic. It is used primarily in financial work and other situations
> >> >where rounding errors are intolerable. Pocket calculators use BCD"
> >>
> >> I thought that was more to do with simplifying internal software.
> >>
> >No sir, it is directly related to accuracy.
>
> How do you know?
>
Because I am a budding computer scientist, with enough of an education in
math and computer architecture to know.
> >> Now I'm lost. You admit that BCD doesn't have any real
> >> technical advantage over BCB,
> >
> >I am not sure where I could have been interpreted to have said that. I have
> >only said that it is slower. The primary reason for using BCD is it
> >eliminates machine dependent rounding errors. It does not eliminate
> >algorithmic truncation. But at least with BCD you have a tool that is
> >accurate at the machine level for calculation.
>
> I don't see how you fail to see that it isn't.
>
Well, this is interesting. But as I have said before, you argue but you
have no argument. You say that I am wrong when I have demonstrated
mathematically that I am right. The only conclusion that I can come to is
that you do not understand what I am talking about.
But for your benefit I will reiterate.
BCD represents each decimal digit of a number using a four-bit word. It is
very much like a big-number abstract data type where you dedicate an integer
value or char value to each digit within a number. The math performed on
such a data type is therefore done at a logical level, not a machine level.
The math may indeed be binary, but it is done one digit at a time. So if we
are adding 222 and 222, the 2s are added column by column with each other.
That is, 0010 would be added with 0010, giving 0100 as a result. All
mathematical operations are repetitions of addition and subtraction at the
logical level.
Whereas in binary the number 222 is represented by 1101 1110; when we add
two of these, if we don't have another set of bits to go to, we lose the most
significant digit. Because there is a one carried out of the leftmost column,
it must be carried to the next column. If there isn't one, then we lose the
value. This is a basic problem in simple integer math that illustrates a
simple advantage of BCD in large calculations.
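The digit-at-a-time scheme described above can be sketched in Python. This is
only an illustration of decimal-digit carries (the `bcd_add` helper is my own
name for it), not an implementation of real packed-BCD hardware.

```python
def bcd_add(a, b):
    """Add two non-negative integers one decimal digit at a time,
    the way BCD logic carries between 4-bit digit fields."""
    da = [int(ch) for ch in str(a)][::-1]   # least-significant digit first
    db = [int(ch) for ch in str(b)][::-1]
    out, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = carry + (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0)
        out.append(s % 10)                  # each digit stays in 0..9
        carry = s // 10                     # decimal carry to the next digit
    if carry:
        out.append(carry)                   # result simply grows a digit
    return int(''.join(str(d) for d in reversed(out)))

print(bcd_add(222, 222))   # 444
```

Note the key property being claimed: when the sum needs an extra column, the
logic appends another digit rather than silently dropping the carry.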
As explained in an earlier post, floating point numbers have at least two
parts: the mantissa and the exponent. For two numbers to be added at the
binary level (and most processors accomplish all math through addition, using
a two's complement binary number system), they must be of the same magnitude.
Therefore, if two numbers differ in magnitude, the number of the lower value
is right-shifted and its exponent increased until the exponents are the
same. Therefore, if the lower number differs in magnitude by enough places,
it will be shifted out of existence, become zero, and the operation will
fail. This is generally referred to as loss of significance. And even in
successful operations the addition is inaccurate due to the rounding of
digits, and becomes more inaccurate the more the numbers are used.
BCD performs numeric calculations at the logical level, so that each
decimal place is calculated at its value. Carry operations are performed at
an integer level. If there is a truncation, it is due to the math itself and
not to the machine. There are situations where, if there are not enough
digits for the BCD type to hold the number, the calculation will not work.
But in those cases the logic will report to the user that the values are too
large, or too small, for the data type to handle.
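The alignment effect described above is easy to reproduce with ordinary
IEEE-754 doubles in Python (a sketch, not specific to any one processor):

```python
# IEEE-754 doubles carry a 53-bit significand. Before an addition, the
# smaller operand's significand is right-shifted to align exponents;
# shift it far enough and the operand vanishes from the sum entirely.
big, small = 1.0e16, 1.0
print(big + small == big)   # True: the 1.0 is shifted out of the sum

# Repeated in a loop, the lost additions never accumulate:
total = big
for _ in range(1000):
    total += small
print(total == big)         # True, though the true sum is big + 1000
```

This is the "repetitive algorithm magnifies the error" scenario from earlier
in the thread: each individual addition loses at most half a unit in the last
place, but here every one of the thousand additions is lost completely.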
> > Do you want to cripple (say)
> >> expensive fluid-dynamic computations just to make it easier to implement
> >> the consistent input of numbers like 1.9?
> >
> >Not at all. It's just that a computer scientist has to understand the
> >limitations of his machine. The value you cite may or may not be correct
> >depending on the math used to attain it. How important is it that it be
> >accurate to within one tenth? Or one hundredth? I regularly measure
> >fasteners for industrial use to 1/1000 of an inch. I have performed
> >measurements to 1/10000 of an inch. So it all depends on the application.
>
> It can be made as accurate as desired in any number base, as can the
> measurements of your fasteners.
>
No, regarding the fasteners, I can only measure to the level of the
instrument I am using. And once you get to the ten-thousandths realm, the
time of day and the size of the object can affect the accuracy or
consistency of the measurement.
You are making a mistake common to many computer programmers: you
confuse mathematics with reality. Only on paper is math infinite. Everywhere
else, that is, in real-life applications, math works within limitations.
Everything in the real world has limits, tolerances and error. But math in
its pure form does not; 1/3 is exactly 1/3 conceptually. But 1/3 cannot
be represented in its absolute value using decimal numbers, just as 1/10
cannot be represented absolutely using binary. Therefore, if you perform
calculations using a decimal numbering system to represent 1/3, you will
incur error. This error will be in the form of a logical truncation. It
doesn't matter if you do it with a machine or on paper. The reason BCD is
superior in some respects to binary is that the control of this error is at
the logic level, where the programmer can deal with it, and rounding error
from overflow does not occur using BCD.
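The 1/10-versus-1/3 distinction can be checked directly in Python, with
`Decimal` standing in for a decimal (BCD-like) type and `Fraction` for an
exact abstract data type:

```python
from decimal import Decimal
from fractions import Fraction

# Binary doubles cannot hold 1/10 exactly:
print(0.1 + 0.1 + 0.1 == 0.3)                      # False

# A decimal type holds 0.1 exactly...
print(Decimal('0.1') * 3 == Decimal('0.3'))        # True

# ...but 1/3 is still truncated in *any* decimal representation:
print(Decimal(1) / Decimal(3) * 3 == Decimal(1))   # False

# Only an exact-rational abstract type avoids both truncations:
print(Fraction(1, 3) * 3 == Fraction(1))           # True
```

So the decimal representation moves the 0.1 error out of the machine, while
the 1/3 error remains a property of the base itself, exactly as on paper.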
> > As I said in another post in this thread, you either have to use an
> >abstract that will absolutely represent the values you are dealing with or
> >you have to work within an acceptable tolerance. Most computer work is done
> >within acceptable tolerances and people do it without even thinking. Some
> >mathematical work has to be done to acceptable tolerances because there is no
> >absolutely accurate value, such as pi.
> > Work generally continues until they run up against an application that
> >requires absolute accuracy or they stray outside the acceptable tolerance
> >through a series of calculations and they get incorrect results or something
> >doesn't fit or the right amount of money isn't there or there is more of
> >something in a place where it is not supposed to be. They generally assume
> >that they are dealing with exact values or exact enough values to get the
> >job done and are surprised when either rounding error or truncation produces
> >a problem.
>
> I want to divide 1000 by 3. How does BCD provide "absolute accuracy"
> that binary can't?
>
BCD will put the number at the designated accuracy level.
This is not a valid illustration though.
Let's try something really worthwhile.
How about dividing 1000000000 by .0000000003?
How would BCD help then?
If you perform this calculation on your Windows calculator it will produce an
error, because you will shift the 3 off the end.
I can perform this calculation using a scientific calculator, because it uses
BCD.
>There is no base 7 number system.
Sure there is! It happens not to be a popular base for computer
design, but that doesn't mean it does not exist!
[ ... ]
> I have clearly demonstrated in
> this thread, not this particular post but the thread as a whole, the mechanics
> of floating point math and why BCD is superior for fault-tolerant applications.
You may _think_ you've done so, but in reality you've done nothing of
the sort.
Rather than try to quote and reply to your points (especially since
some of them make less sense than Japanese translated to English by
way of Swahili), I'll try to lay things out very simply.
I'm not entirely certain, but it appears that you see the following
three major problems with binary floating point:
1) truncation.
2) roundoff error.
3) repeating numbers (e.g. 1/10 is a repeating number in binary).
Now to make a long story short, BCD does not cure a single one of
these. Truncation and roundoff still occur. If the memory used to
hold a given variable is held constant, BCD actually makes both of
these _worse_ because BCD stores numbers less efficiently, so fewer
digits are stored in a fixed quantity of storage.
As I see it, BCD has two strengths. First of all, BCD converts to
and from textual form more easily (as a rule) than binary floating
point. If you're doing only minimal computation, the savings here
may outweigh any savings elsewhere. Second, most people think (more
or less) in decimal, so BCD can match their expectations more
closely. E.g. they expect .1 to be exact, but don't expect 1/3rd to
be exact, so BCD does what they expect (even though it has
fundamentally the same shortcoming as binary floating point in this
respect).
Neither of those qualifies as any more than a truly trivial point --
the first is mostly a question of efficiency, not accuracy or
precision. The second only rarely means much, because most people
don't look closely enough at most inputs to form an expectation that
"this will be precise, but that won't" -- they simply expect
everything to be precise, and neither binary nor BCD fulfills that
expectation.
That raises the question: did you (or do you) have any point at all?
Though I think you've missed it, the answer is that yes, to some
degree you DO have a point, but it's not really related to BCD vs.
binary. Instead, the difference I suspect you're really trying to
get at is more or less coincidentally related to binary vs. BCD. The
point would be that BCD is typically used with fixed-point (instead
of truly floating-point) numbers.
In that respect you're _partly_ (but only partly) correct: fixed
point numbers do have some advantages over floating point numbers.
Particularly, without knowing the characteristics of the exact
floating point representation at hand, it can be difficult to predict
(for example) how big of a step there might be between one number
that can be represented and the next, and in particular, that step
varies depending upon the magnitude of an individual number. This is
decidedly different from most fixed-point formats.
The basic difference is that in a fixed-point format, you generally
either retain exactly the chosen precision, or else things overflow
and you lose essentially everything at once. By contrast, most of
the same effects in floating point tend to be gradual -- you lose
precision a few bits at a time.
The difference here is simple: when things go wrong with fixed point
numbers, you fairly typically end up with answers that are really
_obviously_ wrong. Problems in floating point tend to be less
obvious, and it's much easier to get wrong answers that still look
reasonable (this is part of the reason for exaggerated examples in
papers, books etc. -- to make it _obvious_ that things have gone
wrong, and NOT leave a situation where people can dismiss the problem
as too minor to care about).
The main thing to keep in mind is that BCD vs. binary isn't really
the issue involved here at all: you could store conventional floating
point numbers using BCD (though this isn't common) or you can store
fixed-point numbers in binary (which is fairly common).
When you do either of these things, you start to realize that BCD vs.
binary was a red herring all along. The real issue was fixed vs.
floating point.
Fixed point isn't a cure-all either. In quite a few cases, it does
less to cure a problem than simply ignore it, and place the onus for
curing it elsewhere. For example, it "prevents" you from trying to
add 1e-100 to 1e+100, simply by typically preventing you from
representing both in the same format of number. If you do use a
representation that can store both those numbers, the result won't
"lose" the addition.
Floating point doesn't really change (much) of this: for example, if
the storage for a BCD fixed-point number that could hold both 1e-100
and 1e+100 had been used for binary floating point instead, it would
also (easily) retain the same precision in the output.
As I said before: BCD has done nothing to fix anything -- using
fixed-point more or less prevents the problem, but only more or less.
If (for example) we start with 16-digit BCD numbers, and want to add
two numbers that differ by a couple of hundred orders of magnitude,
we still can't get an accurate result using those BCD numbers -- the
only difference is that this may be more obvious, because it won't
make any attempt at all to represent numbers that differ so widely in
magnitude. The "improvement" doesn't give us a correct answer -- at
best it simply refuses to give an answer at all.
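Both behaviors can be imitated in Python: a binary double silently swallows
the small operand, while a 16-digit decimal context with the Inexact trap
enabled refuses to answer at all (the trap setup is my own way of modeling a
"refusing" fixed-precision format):

```python
from decimal import Decimal, Context, Inexact

# A binary double can represent both 1e-100 and 1e+100,
# but their sum silently loses the small term:
print(1e-100 + 1e100 == 1e100)    # True

# A 16-digit decimal context that traps inexact results
# refuses to give an answer at all:
ctx = Context(prec=16, traps=[Inexact])
try:
    ctx.add(Decimal('1e-100'), Decimal('1e100'))
    refused = False
except Inexact:
    refused = True
print(refused)    # True: no answer rather than a plausible-looking wrong one
```

That is the trade-off described above: the gradual, quiet loss versus the
loud, all-or-nothing failure.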
The only register widths ever produced, to my knowledge, are: 12-bit,
now extinct, and 8, 16, 32 and 64.
So while a base-7 number system is conceptually possible, it has never
existed in computer science in a practical application.
Therefore, in reference to the discussion at hand, which is specifically
the application of number systems to existing computer languages and
hardware, the statement "there is no base 7 system" is correct.
Is that better now?
Also,
the answer to the problem
1000000000 divided by 0.0000000003
is
3.33333E+18
(that is, about 3.33 x 10^18).
I calculated this using Excel, which uses VB as its primary programming
language.
As I said in a previous post, this calculation produces an error on the
Windows calculator, most likely because it was programmed in VC++ with
MFC using standard data types. The standard data types are register
dependent. It could also be that the system uses a BCD value but does not
have the word width to accommodate the calculation. But the former is more
likely.
The data types in VB and therefore Excel are abstract and therefore not
machine limited.
They are not anywhere near as fast as machine register math but they are
far more capable and accurate.
"Richard Norman" <rsno...@mediaone.net> wrote in message
news:av33uto9h359h5i1s...@4ax.com...
I have already clearly proven that BCD corrects machine roundoff error and
quoted sources that agree with me.
Truncation and roundoff still occur. If the memory used to
> hold a given variable is held constant, BCD actually makes both of
> these _worse_ because BCD stores numbers less efficiently, so fewer
> digits are stored in a fixed quantity of storage.
>
> As I see it, BCD has two strengths. First of all, BCD converts to
> and from textual form more easily (as a rule) than binary floating
> point.
BCD performs math at a logical decimal level; binary performs math at a
machine binary level. This is a functional difference in the mathematics of
the two data types.
If you're doing only minimal computation, the savings here
> may outweigh any savings elsewhere. Second, most people think (more
> or less) in decimal, so BCD can match their expectations more
> closely.
People don't know if they are using BCD or binary. They only know if the
answer is right or wrong.
E.g. they expect .1 to be exact, but don't expect 1/3rd to
> be exact, so BCD does what they expect (even though it has
> fundamentally the same shortcoming as binary floating point in this
> respect).
Again, you are confusing roundoff with truncation.
Representing 0.1 using machine binary in certain math calculations will
result in machine rounding errors, due to the limitations of the binary
number system.
Representing 1/3 as .33 will produce truncation errors regardless of
whether you are using a piece of paper or a computer.
>
> Neither of those qualifies as any more than a truly trivial point --
It depends on the application. If you have to have 1/3 represented
absolutely due to repetition or whatever then you are going to have to
create an abstract data type to represent it.
> the first is mostly a question of efficiency, not accuracy or
> precision. The second only rarely means much, because most people
> don't look closely enough at most inputs to form an expectation that
> "this will be precise, but that won't" -- they simply expect
> everything to be precise, and neither binary nor BCD fulfills that
> expectation.
>
Simply not true.
BCD is capable of performing exact financial calculations that binary number
types are not. It is just that simple.
> That begs the question: did you (or do you) have any point at all.
> Though I think you've missed it, the answer is that yes, to some
> degree you DO have a point, but it's not really related to BCD vs.
> binary. Instead, the difference I suspect you're really trying to
> get is is more or less coincidentally related to binary vs. BCD. The
> point would be that BCD is typically used with fixed point (instead
> of really floating point) numbers.
>
Well, I would agree that I am apparently not achieving very much by
participating in this dialog.
> In that respect you're _partly_ (but only partly) correct: fixed
> point numbers do have some advantages over floating point numbers.
> Particularly, without knowing the characteristics of the exact
> floating point representation at hand, it can be difficult to predict
> (for example) how big of a step there might be between one number
> that can be represented and the next, and in particular, that step
> varies depending upon the magnitude of an individual number. This is
> decidedly different from most fixed-point formats.
>
I have already carefully explained the advantages and the disadvantages.
And as you have said they appear to be a different language to you.
> The basic difference is that in a fixed-point format, you generally
> either retain exactly the chosen precision, or else things overflow
> and you lose essentially everything at once.
Nope.
The data type either works or it doesn't. If it doesn't the program will
tell you it won't work.
Binary on the other hand will give you the wrong answer.
By contrast, most of
> the same effects in floating point tend to be gradual -- you lose
> precision a few bits at a time.
>
> The difference here is simple: when things go wrong with fixed point
> numbers, you fairly typically end up with answers that are really
> _obviously_ wrong. Problems in floating point tend to be less
> obvious, and it's much easier to get wrong answers that still look
> reasonable (this is part of the reason for exaggerated examples in
> papers, books etc. -- to make it _obvious_ that things have gone
> wrong, and NOT leave a situation where people can dismiss the problem
> as too minor to care about).
>
> The main thing to keep in mind is that BCD vs. binary isn't really
> the issue involved here at all: you could store conventional floating
> point numbers using BCD (though this isn't common)
Only when using fault intolerant math.
or you can store
> fixed-point numbers in binary (which is fairly common).
>
> When you do either of these things, you start to realize that BCD vs.
> binary was a red herring all along. The real issue was fixed vs.
> floating point.
Well, this is really silly.
I appreciate your time in trying to understand what I have said, but it is
apparent that you don't have the computer science background or math
background to grasp my point.
<snip>
[ ... ]
> > They don't eliminate rounding errors. For example, the base-7
>
> There is no base 7 number system.
Where did you hear that? Did Elvis tell you while you were both on
the aliens' UFO?
> BCD will put the number at the designated accuracy level.
> This is not a valid illustration though.
> Let's try something really worthwhile.
> How about dividing 1000000000 by .0000000003?
> How would BCD help then?
> If you perform this calculation on your Windows calculator it will produce an
> error because you will shift the 3 off the end.
Would you care to bet? The Windows calculator produces a result of:
3333333333333333333.3333333333333
which is, interestingly enough, precisely correct.
> I can perform this calculation using a scientific calculator because it uses
> BCD.
My scientific calculator (an HP-41CV that's probably older than you
are) gives a FAR less precise answer of: 3.3333333 e+18. Oh, BTW,
most good scientific calculators do NOT use BCD -- if you'd actually
understood even the most elementary parts of the paper I cited
previously, you'd have realized this. Scientific calculators that do
use BCD give no assurance of better results by any means.
Your knowledge is clearly quite minimal. For some of the most obvious
ones, add 1, 4, 6, 12, 18, 24, 36 and 60 bits to the list.
> So while a 7 digit number system is conceptually possible it has never
> existed in computer science in a practical application.
> Therefore, in reference to the discussion at hand, which is specifically
> the application of number systems to existing computer languages and
> hardware the statement, "there is no base 7 system" is correct.
You said there is no base 7 number system. That's a whole different
story from saying that you don't know of a computer in widespread use
at the present time that normally uses base 7 for its native
calculations.
> Is that better now?
No.
> The data types in VB and therefore Excel are abstract and therefore not
> machine limited.
Right. Elvis told you about that too, right?
Hint: VB uses perfectly normal binary floating point of the 32 and 64
bit varieties, exactly what the Intel hardware supports. In short,
if you've proven anything, it's that your contention has been wrong
from beginning to end.
And as far as numerical analysis is concerned, I had no intention of
bringing the discussion to that level either.
My only statements have been basic assertions about the basic
characteristics of dealing with the discrete set of numbers available with a
computer and the infinite set of numbers available in mathematics.
I have never argued the virtue of the entire book. I don't have the
background or education to do that.
I do think that the introductory chapters are correct about the basic
types of errors that you can encounter using a digital computer.
In addition, calculators do use BCD. And I only tried the calculation
once in Win 98. I just tried it again in W2K and it worked. So I was a bit
hasty on that one.
I don't think it is that much of a stretch of intellect to understand that
to eliminate machine rounding errors from multiple calculations involving
numbers with a component less than one, you have to use an abstract data
type.
That is my point and I think I have made it.
One thing is for sure: if you want to get into an argument with a bunch
of pseudo-intellectuals, the newsgroups are certainly the place to do it.
You obviously missed the word commonly.
>
> > So while a 7 digit number system is conceptually possible it has never
> > existed in computer science in a practical application.
> > Therefore, in reference to the discussion at hand, which is specifically
> > the application of number systems to existing computer languages and
> > hardware the statement, "there is no base 7 system" is correct.
>
> You said there is no base 7 number system. That's a whole different
> story from saying that you don't know of a computer in widespread use
> at the present time that normally uses base 7 for its native
> calculations.
>
I think I explained myself. And I think any reasonable person would have
understood the statement in its context.
> > Is that better now?
>
> No.
>
Of course not.
> > The data types in VB and therefore Excel are abstract and therefore not
> > machine limited.
>
> Right. Elvis told you about that too, right?
>
Whoa, I guess you don't have to be in a religious newsgroup to get a
fanatical response.
> Hint: VB uses perfectly normal binary floating point of the 32 and 64
> bit varieties, exactly what the Intel hardware supports. In short,
> if you've proven anything, it's that your contention has been wrong
> from beginning to end.
>
Oh, that is good.
You don't have a clue pal.
Correct.
You may (or may not) be interested to know that the number of different
orderings of a 52-card deck is:
166 333 146 106 161 323 043 043 031
214 053 663 451 100 345 563 121 330
163 222 605 655 351 445 600 000 000
(in base 7, of course).
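That figure can be checked mechanically; a short Python sketch (the
`to_base` helper is my own, not a library routine) converts 52! to base 7:

```python
from math import factorial

def to_base(n, base):
    """Render a non-negative integer as a digit string in `base` (2..10)."""
    if n == 0:
        return '0'
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return ''.join(reversed(digits))

b7 = to_base(factorial(52), 7)
print(len(b7))                 # 81 base-7 digits
print(b7.endswith('0' * 8))    # True: 7 divides 52! exactly eight times
```

The eight trailing zeros follow from counting factors of 7 in 52!
(seven multiples of 7 below 52, plus one extra from 49).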
--
Richard Heathfield : bin...@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
It certainly is a good place to show off your ability to mis-spell
or mis-type. Verbose dogmatism also helps.
How many decimal machines have you built? Did you use excess-3,
2421, 8421, or what other coding? Did you use 9's or 10's
complement, and why? How did you normalize operands? How did you
detect over/underflow and how did you treat it? Did you round
towards zero, infinity, odd, or even? What was the precision of
the transcendental functions, if supplied?
Repeat for binary and ternary. Why is ternary potentially the
most efficient base?
[ ... ]
> I have already clearly proven that BCD corrects machine roundoff error and
> quoted sources that agree with me.
Oh my. I'll bet if you keep trying, you can also prove that BCD will
cure cancer.
> BCD performs math at a logical decimal level; binary performs math at a
> machine binary level. This is a functional difference of the mathematics
> of the two data types.
Pure nonsense. Some machines do BCD in hardware. Some machines do
BCD in software. Some use a combination. Likewise, some do binary
work in hardware, software or (most often) a combination of them.
There is NO functional difference at this level at all.
Do you honestly believe that if you took the same BCD algorithm you
use and rendered it in hardware instead of software, that it would
> somehow lose whatever you believe it gains now, just because it was
then being done in hardware at a binary level?
> > E.g. they expect .1 to be exact, but don't expect 1/3rd to
> > be exact, so BCD does what they expect (even though it has
> > fundamentally the same shortcoming as binary floating point in this
> > respect).
>
> Again, you are confusing roundoff with truncation.
I'm not confusing anything of the sort. You're apparently drawing
lines in the sand that really don't exist. If I try to represent .1
in binary I get an inexact result. If I try to represent 1/3 in BCD,
the nature of the inexactitude is _precisely_ identical.
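Both halves of this point can be checked directly in Python, whose decimal module is a base-10 arbitrary-precision type that plays the role of BCD here (a sketch, not code from either poster):

```python
from decimal import Decimal

# 0.1 has no finite base-2 expansion, so binary doubles are inexact here...
print(0.1 + 0.1 + 0.1 == 0.3)   # False: binary representation error
# ...while a base-10 type stores 0.1 exactly.
tenth = Decimal("0.1")
print(tenth + tenth + tenth == Decimal("0.3"))   # True

# 1/3 has no finite expansion in EITHER base, so both are inexact:
third = Decimal(1) / Decimal(3)   # 0.3333... cut off at the precision limit
print(third * 3 == Decimal(1))    # False: the same inexactitude, in base 10
```

In other words, each base is exact only for fractions whose denominators divide a power of that base.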
> Representing 0.1 using machine binary in certain math calculations will
> result in machine rounding errors due to the limitations of the binary
> number systems.
> Representing 1/3 as .33 will represent truncation errors regardless of
> whether you are using a piece of paper or a computer.
You're either an idiot, or deliberately missing the point.
> Simply not true.
> BCD is capable of performing exact financial calculations that binary number
> types are not. It is just that simple.
You clearly lack a background in computer science to make assertions
like this in the first place -- this is _exactly_ the sort of thing
that Turing's theorem covered completely.
The whole basic idea of a Turing machine is that such differences as
BCD vs. binary representation of numbers do NOT really matter. A
binary computer is capable of performing precisely the same
calculations as a decimal or BCD computer and with the correct
programming both will arrive at precisely identical answers.
The contention you claim to have proven was, in reality,
categorically DISproven many years ago. All possible wiggle room has
been eliminated: you're clearly ignorant of the very foundations of
all computer science.
> Well, I would agree that I am apparently not achieving very much by
> participating in this dialog.
Actually, you HAVE achieved one thing quite well: you've proven that
cluelessness is NOT restricted to people who post via AOL -- In all
the years I've read and posted on Usenet, I've never met anybody who
could compete with you on this score.
[ ... ]
> I have already carefully explained the advantages and the disadvantages.
> And as you have said they appear to be a different language to you.
You should really read more carefully -- I said they looked like
something that started out in a different language, but had been
poorly translated. I'm quite certain that's fairly accurate: you've
undoubtedly read a few elementary texts on computer math, but haven't
understood them yet. When you post, you echo bits and pieces of what
they have to say, but rearrange the pieces incorrectly, so what's
left sounds a bit like it's related to the subject at hand, but in
reality makes no real sense, and clearly contradicts many proven
facts.
> Nope.
> The data type either works or it doesn't. If it doesn't the program will
> tell you it won't work.
You're clearly too young to have had any experience with PL/I.
> Binary on the other hand will give you the wrong answer.
Fixed point (represented in either binary or BCD) can and will do the
same. PL/I became quite notorious for a while for some of the wrong
answers it produced from fixed point calculations...so notorious, in
fact, that fixed-point types have been expunged from nearly every
programming language invented since.
> > The main thing to keep in mind is that BCD vs. binary isn't really
> > the issue involved here at all: you could store conventional floating
> > point numbers using BCD (though this isn't common)
>
> Only when using fault intolerant math.
What in the world do you even think you mean by that?
> Well, this is really silly.
Yup -- with a little luck and a lot of studying, you'll eventually
start to realize just HOW silly.
> I appreciate your time in trying to understand what I have said but it is
> apparent that you don't have the computer science back ground or math
> background to grasp my point.
ROFL. It's odd how I have no trouble understanding Kahan, Knuth,
Koblitz (just to mention a few from the "K" range), have been
published internationally, etc., ad nauseam, but somehow YOUR
expositions are SO advanced that these are insufficient preparation
to even begin to understand the new frontiers in computer science to
which, apparently, only you in the entire world are privy, since the
rest of the world knows your claims were proven wrong years ago.
[ ... ]
> > > computer system. And the only number systems commonly used on digital
> > > computers are binary, hexadecimal and decimal.
> > >
> > > The only register width systems ever produced to my knowledge are: 12
> > > bit now extinct, 8, 16, 32 and 64.
> >
> > Your knowledge is clearly quite minimal. For some of the most obvious
> > ones, add 1, 4, 6, 12, 18, 24, 36 and 60 bits to the list.
>
> You obviously missed the word commonly.
No, obviously I learned how to read -- the "commonly" is in the
sentence about "number systems." The sentence about register widths
does not contain the word "commonly."
[ ... ]
> I think I explained myself. And I think any reasonable person would have
> understood the statement in its context.
ROFL. IOW, you were dead wrong from beginning to end, but you'd
rather not admit it.
> > Hint: VB uses perfectly normal binary floating point of the 32 and 64
> > bit varieties, exactly what the Intel hardware supports. In short,
> > if you've proven anything, it's that your contention has been wrong
> > from beginning to end.
> >
>
> Oh, that is good.
> You don't have a clue pal.
Apparently MS doesn't either in that case. Here's a direct cut 'n
paste from the VB help file:
Note Floating-point values can be expressed as mmmEeee
or mmmDeee, in which mmm is the mantissa and eee is the
exponent (a power of 10). The highest positive value of
a Single data type is 3.402823E+38, or 3.4 times 10 to the
38th power; the highest positive value of a Double data type
is 1.79769313486232D+308, or about 1.8 times 10 to the
308th power. Using D to separate the mantissa and exponent
in a numeric literal causes the value to be treated as a
Double data type. Likewise, using E in the same fashion
treats the value as a Single data type.
Clearly MS thinks the single and double precision types in VB
correspond very closely to same types used in VC++. In fairness,
based on your writing English may not be your native language, and
it's possible you intended to specifically say that the "Currency"
type in VB was the one that was different. To at least a limited
degree, that's more or less correct.
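The figures in that help excerpt can be checked against IEEE 754 directly; a quick Python sketch (binary64 is also the format of a Python float and a C double):

```python
import sys

# The Single and Double maxima quoted in the VB help are the IEEE 754
# binary32 and binary64 maxima, i.e. ordinary hardware floating point:
print(sys.float_info.max)                           # 1.7976931348623157e+308
print(1.797e308 < sys.float_info.max < 1.798e308)   # True
```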
OTOH, your example makes it appear that this is NOT what you were
really thinking of: attempting the division you used as an example in
VB using the Currency type does NOT produce the result you said it
would -- it simply causes a "division by zero" error. To get the
result you said it would, you'd have to divide floats or doubles,
which are the binary types supported by the hardware. Of course, if
you use those, you get the exact same result you do if you use VC++
instead.
In the end, we're left only with a question of how clueless you
really are. Either you picked the wrong example based on a gross
misunderstanding of the datatypes, or else you disproved everything
else you tried to say.
BTW, your post included a quote of my signature and tagline. If your
news software is properly configured, this won't happen by accident;
normally signatures and taglines should NOT be quoted.
"Jerry Coffin" <jco...@taeus.com> wrote in message
news:MPG.164b8ce8e...@news.direcpc.com...
> In article <d4hE7.4069$Tb.23...@news1.sttln1.wa.home.com>,
> johnsta...@home.com says...
>
> [ ... ]
>
> > I have already clearly proven that BCD corrects machine roundoff error
> > and quoted sources that agree with me.
>
> Oh my. I'll bet if you keep trying, you can also prove that BCD will
> cure cancer.
>
no,
> > BCD performs math at a logical decimal level; binary performs math at a
> > machine binary level. This is a functional difference of the mathematics
> > of the two data types.
>
> Pure nonsense.
No, very simple math.
BCD is an integer abstraction.
Some machines do BCD in hardware.
True, that doesn't make it any less an abstract.
Some machines do
> BCD in software. Some use a combination. Likewise, some do binary
> work in hardware, software or (most often) a combination of them.
The only binary software work is done in VMs.
> There is NO functional difference at this level at all.
>
This statement reveals your ignorance of machine language and the way that
math is performed on computers.
> Do you honestly believe that if you took the same BCD algorithm
You used the word algorithm.
> you use and rendered it in hardware instead of software, that it would
> somehow lose whatever you believe it gains now, just because it was
> then being done in hardware at a binary level?
No, never said that. What I said is that BCD corrects machine rounding
errors and it does.
>
> > E.g. they expect .1 to be exact, but don't expect 1/3rd to
> > > be exact, so BCD does what they expect (even though it has
> > > fundamentally the same shortcoming as binary floating point in this
> > > respect).
> >
> > Again, you are confusing roundoff with truncation.
>
> I'm not confusing anything of the sort. You're apparently drawing
> lines in the sand that really don't exist. If I try to represent .1
> in binary I get an inexact result.
> If I try to represent 1/3 in BCD,
> the nature of the inexactitude is _precisely_ identical.
>
That is true, but that is not because of BCD but because of the decimal
system. That is the difference. The same problem exists at the decimal
level using binary data types. Therefore, you miss the point completely.
As I said before, there are two types of errors: machine and logical. BCD
eliminates the machine errors but it does not eliminate the logical errors
and I never said it did.
> > Representing 0.1 using machine binary in certain math calculations will
> > result in machine rounding errors due to the limitations of the binary
> > number systems.
> > Representing 1/3 as .33 will represent truncation errors regardless of
> > whether you are using a piece of paper or a computer.
>
> You're either an idiot, or deliberately missing the point.
>
We do seem to be shouting past each other here.
> > Simply not true.
> > BCD is capable of performing exact financial calculations that binary
> > number types are not. It is just that simple.
>
> You clearly lack a background in computer science to make assertions
> like this in the first place -- this is _exactly_ the sort of thing
> that Turing's theorem covered completely.
>
> The whole basic idea of a Turing machine is that such differences as
> BCD vs. binary representation of numbers do NOT really matter.
A Turing machine has to do with processing strings and compiler design. It
has nothing to do with what we are talking about. I just finished a course
on the theory of automata. So that one is really out there. A Turing
machine is a theoretic entity. That is why you study them in computational
'theory' classes.
> A binary computer is capable of performing precisely the same
> calculations as a decimal or BCD computer and with the correct
> programming both will arrive at precisely identical answers.
>
This is simply false. Otherwise BCD would not have been developed.
> The contention you claim to have proven was, in reality,
> categorically DISproven many years ago. All possible wiggle room has
> been eliminated: you're clearly ignorant of the very foundations of
> all computer science.
>
Please, I am 20 semester credits shy of a BS in CS. I have 40 semester
credits specifically in computer science. I have been through assembly and
have built gates using NAND circuits up to multiplexers. So, please, I have
been there, done that, bought the tee shirt and the coffee mug.
The fact is BCD is still a very popular number format precisely because I
as a programmer can build a math engine using BCD data types that will work
on any processor. Big endian little endian issues are not there. Most of
your overflow issues go away as well. That is why you can download
calculator emulators from Hewlett-Packard and TI. The engine is abstract.
> > Well, I would agree that I am apparently not achieving very much by
> > participating in this dialog.
>
> Actually, you HAVE achieved one thing quite well: you've proven that
> cluelessness is NOT restricted to people who post via AOL --
I don't see what you not being on AOL has to do with this.
> In all the years I've read and posted on Usenet, I've never met anybody who
> could compete with you on this score.
>
> [ ... ]
Well, thanks. I'll take that as a compliment. But you really shouldn't get
so upset. It's just math.
>
> > I have already carefully explained the advantages and the disadvantages.
> > And as you have said they appear to be a different language to you.
>
> You should really read more carefully -- I said they looked like
> something that started out in a different language, but had been
> poorly translated.
Listen I realize that you can't understand what I am talking about. It's
nothing to be ashamed of. You can go to college for a few years and you
will get better.
> I'm quite certain that's fairly accurate: you've
> undoubtedly read a few elementary texts on computer math, but haven't
> understood them yet.
Well, I passed the tests.
> When you post, you echo bits and pieces of what
> they have to say, but rearrange the pieces incorrectly, so what's
> left sounds a bit like it's related to the subject at hand, but in
> reality makes no real sense, and clearly contradicts many proven
> facts.
>
Well, I thought what I said was pretty clearly written. I guess I could try
to draw a picture or something.
I think you need a little help in the 'facts' department though.
> > Nope.
> > The data type either works or it doesn't. If it doesn't the program will
> > tell you it won't work.
>
> You're clearly too young to have had any experience with PL/I.
>
How old a person is has nothing to do with it. But I would think that we
would want to discuss a language that is a little newer than 1965. But even
people then thought that language was garbage.
The point is clear. When working with a logical data type, most of the
time (about 99% or greater) the logic handles any exceptions it
encounters.
When you generate an error at the machine level due to using a data type
incorrectly you get incorrect results. The machine does not throw an
exception.
Of course there are rare occurrences where the testing does not show up
certain logic errors and they produce wrong results too but that is pretty
rare especially these days.
> > Binary on the other hand will give you the wrong answer.
>
> Fixed point (represented in either binary or BCD) can and will do the
> same.
Why would you need or want to fix the decimal point in BCD?
> PL/I became quite notorious for a while for some of the wrong
> answers it produced from fixed point calculations...so notorious, in
> fact, that fixed-point types have been expunged from nearly every
> programming language invented since.
>
> > > The main thing to keep in mind is that BCD vs. binary isn't really
> > > the issue involved here at all: you could store conventional floating
> > > point numbers using BCD (though this isn't common)
> >
> > Only when using fault intolerant math.
>
> What in the world do you even think you mean by that?
>
Math where you can't tolerate faults.
> > Well, this is really silly.
>
> Yup -- with a little luck and a lot of studying, you'll eventually
> start to realize just HOW silly.
>
The studying I've done. Luck, well, that's another thing.
> > I appreciate your time in trying to understand what I have said but it is
> > apparent that you don't have the computer science background or math
> > background to grasp my point.
>
> ROFL. It's odd how I have no trouble understanding Kahan, Knuth,
> Koblitz (just to mention a few from the "K" range), have been
> published internationally, etc., ad nauseam, but somehow YOUR
> expositions are SO advanced that these are insufficient preparation
> to even begin to understand the new frontiers in computer science to
> which, apparently, only you in the entire world are privy, since the
> rest of the world knows your claims were proven wrong years ago.
>
Well, hmm, how to reply to that?
Let's see, what if I create a linked list ADT that uses an integer type and
limit the input using character input so that I only have the numbers 0-9
for each node, and then make the entry system so that I can enter as many
numbers as the dynamic memory of the system will handle? Each linked list
will be considered a 'big number' and will be an object.
Then I make special case logic for a decimal point entry.
I have just made a data type that can accept any size number. Probably as
many as 1000 digits, 10000 digits! There isn't a register in concept that
can handle that. Then I can generate logic for addition, subtraction, and
division and all I have to worry about is making sure I carry and borrow
correctly. And I can use operator overloading to create those operations.
Well, I haven't created all the operations, but I have gotten as far as
adding two of these. I have written a program where you can enter digits to
your heart's content for each of two numbers and then add them. Fill the
screen with numbers if you want. It works.
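The scheme described here, one decimal digit per node with explicit carries, can be sketched in a few lines of Python (using a plain list of digits rather than a linked list, which changes nothing about the arithmetic):

```python
def big_add(a, b):
    """Add two non-negative numbers given as digit strings, returning a digit string."""
    x = [int(ch) for ch in reversed(a)]   # least-significant digit first
    y = [int(ch) for ch in reversed(b)]
    result, carry = [], 0
    for i in range(max(len(x), len(y))):
        d = (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0) + carry
        result.append(d % 10)             # keep one decimal digit per position
        carry = d // 10                   # carry whenever a column exceeds nine
    if carry:
        result.append(carry)
    return "".join(str(d) for d in reversed(result))

print(big_add("99999999999999999999", "1"))  # 100000000000000000000
```

No machine register width ever enters into it; the size of the number is limited only by memory, exactly as claimed.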
So I know what I am talking about when it comes to logical math operations
versus machine math.
Now, whether you are misunderstanding what I am saying or what, I don't know.
But let's just forget this little conversation shall we? I mean, we might
meet each other some day and it would be nice not to have a bunch of hard
feelings over a stupid argument, don't you think?
http://www.escape.ca/~rrrobins/Assembly/BCDArithmetic.html
http://developer.intel.com/design/intarch/techinfo/Pentium/datatype.htm#6012
by the way,
I hope you're not confusing BCD with EBCDIC. They are two completely
different things. The only reason I mention this is you brought up PL/I.
[etc.... Let me just point out that many scientific calculators
maintain about three digits beyond what can be seen, and this, in binary
or BCD, is what - partially - prevents rounding errors. It varies with
the calculator model.]
Jerry Coffin has explicated thoroughly your errors. You should reread
the post in which he does so. At least you may become as wise as
Socrates, who knew that he knew nothing.
Fortunately, Socrates did not have online dictionaries and random
webpages by the use of which he could have persuaded himself that his
wackier extrapolations from the little knowledge he had were correct.
BCD does indeed allow exact representation of some values such as 1.9,
which in turn are generally inexact or arbitrary approximations of
physical quantities (though sometimes they represent an integral number
of pennies). It also provides a variation on fixed point mathematical
methods, themselves a variation on integral arithmetic, which are also
useful in certain circumstances for preventing certain types of errors.
But the former is trivial, and the latter is not an advantage of BCD per
se.
In short, your notion that BCD has any great significance in error
reduction is false. Consult a real reference on numerical analysis.
... snip lots more ...
"4321"
does that represent 1234, or 4321? Sounds suspiciously like an
endian issue to me. Shows up all the time in input routines.
That's fine, whatever, the discussion is getting diverted.
>
> Jerry Coffin has explicated thoroughly your errors.
No sir. He did not.
I have demonstrated clearly how I am right.
And this is a much simpler point than arguing algorithmic or numeric
analysis. It is apparent to me that you gentlemen either don't understand
what I am saying or are completely ignorant of machine level
characteristics.
The following web site explains exactly how BCD operations are more exact
than binary at the assembly language level.
http://www.escape.ca/~rrrobins/Assembly/BCDArithmetic.html
I also suggest that you review the Intel site for addressing modes for the
Pentium processor.
http://developer.intel.com/design/intarch/techinfo/Pentium/datatype.htm#6012
And next time, try to think a little before you start typing.
<snip>
Assuming you understand what you are talking about.
The issue is input control not big e, little e. You have to know what the
string input was to process it into a register.
So that is how you understand what the MSD is.
But input is not the issue here, we are talking about the limitations of
machine language computing in comparison with a digital abstraction, BCD in
particular.
And my point is that while BCD is far slower it is more accurate. Nothing
more.
Yet all you marvellously intelligent genius programmers out there don't
get it.
This is not an argument about numeric analysis or algorithmic analysis.
The original post cited a floating point discrepancy in a variation of
compile builds. I suggested that going to BCD might provide an answer over
machine dependent calculation.
You gentlemen are lost in the woods.
If you will review the following Intel web site you will see that the
machine instructions making high order calculations possible on the Intel
Pentium processor are BCD.
http://developer.intel.com/design/intarch/techinfo/Pentium/fpu.htm#21431
You will see that the real number data type for BCD is 80 bits in width.
That is under floating point data type formats about 1/3 of the way down the
rather long page.
You will also see that under Length, Precision, and Range of FPU Data
Types the precision of BCD is 18. While the precision of an extended real
is 64.
But notice the term, "normalized range."
This term refers to the difference of operands in magnitude.
Notice how that it is not pertinent at the binary level.
I am not stating this to assert that the range of the BCD is greater than
or even close to that of the binary data type, but to allow you to observe
that the binary system is not connected to the decimal in BCD. It has no
effect on the number system at all. Therefore, the inaccuracies inherent
within the binary system are not translated to the decimal system as they
are with a binary based decimal data type.
So even though the decimal normalized range of BCD is significantly
less than that of the double real, the accuracy is more than that of the
double real due to the rounding error at the binary level.
The definition of precision is the points which the number can represent.
The definition of accuracy is how well those numbers represent real
numbers. Due to the inherent error built into the conversion process of
binary to decimal the double real is less accurate than Packed BCD when used
in mathematical calculations.
Therefore, you will incur more mathematical error over a sequence of
operations using binary based data than with a data abstraction such as BCD.
Binary errors are most evident in calculating percentages of values
repeatedly. That is why BCD is used primarily for financial calculations
and binary for scientific. Because in finance mathematical error is not
tolerable. In science many calculations are made to a set of significant
digits and beyond that it is not important to the process. In finance
calculations must be made to specific accounting rules and the process must
be logically controlled.
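The kind of repeated financial arithmetic this paragraph describes is easy to probe in Python, with the base-10 decimal module standing in for BCD (accumulating cents rather than percentages, which exercises the same mechanism; the exact binary drift varies by accumulation order, so only its presence is shown):

```python
from decimal import Decimal

# Hypothetical ledger: 10,000 deposits of one cent each.
binary_total = sum([0.01] * 10000)               # binary double arithmetic
decimal_total = sum([Decimal("0.01")] * 10000)   # base-10 arithmetic

print(binary_total == 100.0)                 # False: representation error piles up
print(decimal_total == Decimal("100.00"))    # True: every cent is exact in base 10
```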
Another reason is that BCD is more translatable to ASCII, and typically
financial calculations are much smaller in magnitude than scientific ones and
the need for strict accuracy greater.
BCD is inefficient, it has far less range than binary and it requires
special programming. But the fact is it is more accurate.
That is why it is part of the Intel Pentium processor and part of modern
programming.
So, anyway, I appreciate the exercise. And frankly, I knew I was right
when I started this but I know more now than then so you fellows have helped
me a bit.
I hope that you are at least curious if not convinced and if you don't
believe that what I have said is correct, well, I wish you the best and hope
that you figure it out someday.
Ta Ta
You are correct, I was thinking more of the fixed point data type in VB than
of the floating point.
And as far as that is concerned, I am not a VB programmer. So I probably
shouldn't have used the illustration.
I often make a mistake in making illustrations to try to prove a point and
then the person I am discussing the point with makes the illustration the
point.
I wonder why VB would have a currency data type? Shouldn't floats be
accurate for financial calculations? Well, anyone with half a brain in
their head knows they aren't.
But anyway, I have provided ample solid argument and evidence that BCD and
numeric ADTs are more accurate than binary based data types due to the
limitations of rounding and inaccurate binary representations of decimal
numbers.
This was in an effort to try to help the OP in a possible compile or run
time error.
I provided a very clear and referenced explanation of rounding and
truncation error and eventually had to provide assembly language code and
Intel documentation to prove that BCD data types and programming are more
accurate than binary floating point.
And you know, I am willing to bet that one of two things will happen here.
1. Those that are intelligent enough to figure out I am right will leave
and say nothing.
2. Those who aren't will continue to rant in their ignorance.
So one would wonder why I should continue with this?
I will say that I have learned a few things, about BCD, assembly and Intel
math coprocessors. So this has definitely not been a waste of time in that
sense.
But there are things that I should be doing.
Thanks for your time gentlemen. And I hope that if you haven't figured out
that I am right yet that you will someday.
[ ... ]
> If you will review the following Intel web site you will see that the
> machine instructions making high order calculations possible on the Intel
> Pentium processor are BCD.
>
> http://developer.intel.com/design/intarch/techinfo/Pentium/fpu.htm#21431
How about if you sit back for a few minutes and study those Intel
docs in more detail. Don't worry, I won't get impatient while you do
so.
...
Okay, now that you've studied them in more detail, I'm sure you've
noticed that the Intel FPU has instructions to load and store BCD
data, but NONE to actually operate on BCD data. You'll undoubtedly
also have noticed that the BCD load and store instructions are REALLY
slow -- much slower than the extended real load and store
instructions, despite dealing with approximately the same amount of
data.
The reason for this is quite simple: to quote a bit from the same
page to which you posted the URL:
The FILD (load integer) instruction converts an integer
operand in memory into extended-real format and pushes
the value onto the top of the register stack. The FBLD
(load packed decimal) instruction performs the same
load operation for a packed BCD operand in memory.
IOW, the Intel FPU never does any calculations on BCD data at all:
all data is converted from BCD to extended real format when it's
loaded into the FPU; all calculations are done on the data in
floating point format.
If you choose to do so, you can then use the FBST instruction to
convert the final result of those calculations back to BCD format.
Based upon this, it's clear that the rest of your assertions are
basically nonsensical: the result of a "BCD calculation" on an Intel
FPU can't possibly be better in any way (more precise, more accurate,
etc.) than the result from doing the same calculations on binary
floating point data, because in reality the calculations are done on
binary floating point data either way.
> So, anyway, I appreciate the exercise. And frankly, I knew I was right
> when I started this but I know more now than then so you fellows have helped
> me a bit.
I'll more or less ignore your attitude here, and take the fact that
you're looking at the Intel manuals as a sign that you at least MIGHT
be willing to learn something from this.
> I hope that you are at least curious if not convinced and if you don't
> believe that what I have said is correct, well, I wish you the best and hope
> that you figure it out someday.
Very nicely put. Since you've cracked the Intel manuals, with a bit
of luck you'll read them in more detail and begin to realize to whom
the words best apply.
Exactly.
However, with BCD data types each packed BCD word is operated on on its
corresponding magnitude operand as though it was a singular integer value.
Therefore, there is no bit shift effect, and there is no binary conversion
error.
Operations done on BCD data types are done on values between 0-9 decimal
only. There is no binary conversion process because each 4-bit BCD group
represents a decimal digit. There are 10 variations of 4-bit BCD integers,
not 16. Therefore, even though BCD is coded in binary it is in fact a
decimal, or base 10, system. Therefore, there is no binary conversion
error.
Therefore, the errors inherent with the binary system at the machine
language level are not introduced into the decimal number system used.
We are getting closer.
[ ... ]
> However, with BCD data types each packed BCD word is operated on on its
> corresponding magnitude operand as though it was a singular integer value.
You were starting to make sense for a little bit there, but you seem
to have lost it again.
> Therefore, there is no bit shift effect, and there is no binary conversion
> error.
Quite the contrary. The reason there's no error in converting to
binary is that inputs are restricted to a range within which binary
floating point numbers are always precise. Binary floating point
inputs would provide identical results as long as we restricted them
to the same range.
> Operations done on BCD data types are done on values between 0-9 decimal
> only. There is no binary conversion process because each 4 digit BCD
> represents a decimal entity.
There is a binary conversion process; all we've done is restricted
inputs to a range within which we know that every possible input has
a unique and precise representation in binary floating point.
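The range restriction Jerry describes can be verified with ordinary Python doubles (a 53-bit mantissa, versus the 64-bit mantissa of the extended real that receives an 18-digit BCD integer):

```python
# Every integer with magnitude up to 2**53 has an exact binary double
# representation, so converting a restricted-range decimal integer to
# binary floating point loses nothing.
n = 2 ** 53
print(int(float(n)) == n)          # True: still exact at the boundary
print(int(float(n + 1)) == n + 1)  # False: the first integer a double cannot hold

# Decimal FRACTIONS are another matter entirely:
print(0.1 + 0.2 == 0.3)            # False: 1/10 is not exact in binary
```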
> There are 10 variations of 4 bit BCD integers, not 16. Therefore,
> even though BCD is coded in binary it is in fact a decimal, or base
> 10, system. Therefore, there is no binary conversion error.
Conversions are always precise because BCD inputs are only accepted
if they're within a range that can be represented precisely.
> We are getting closer.
Sometimes I think so -- then I see a post like the one to which I'm
replying, and I start to wonder again.
"Jerry Coffin" <jco...@taeus.com> wrote in message
news:MPG.164cbf3ec...@news.direcpc.com...
> In article <d2CE7.8938$Tb.45...@news1.sttln1.wa.home.com>,
> john...@home.com says...
>
> [ ... ]
>
> > However, with BCD data types each packed BCD word is operated on on its
> > corresponding magnitude operand as though it was a singular integer value.
>
> You were starting to make sense for a little bit there, but you seem
> to have lost it again.
>
No, this is very simple.
BCD represents each digit in a number with 4 bits.
0=0000
1=0001
2=0010
3=0011
4=0100
5=0101
6=0110
7=0111
8=1000
9=1001
1234 would be represented as:
0001 0010 0011 0100
Therefore, it is a base ten system. There is no binary conversion other
than a simple translation of each 4-bit value to a single ASCII character
for screen processing.
Carry operations occur when a digit exceeds nine. There are special
registers and instructions for handling these data types in the Intel
processor line, and there have been for around 20 years.
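The nibble-per-digit encoding described above is easy to demonstrate; here is a small Python sketch (the helper name is mine):

```python
def to_bcd_nibbles(n: int) -> str:
    """Render each decimal digit of n as its own 4-bit group."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd_nibbles(1234))  # 0001 0010 0011 0100
```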
> > Therefore, there is no bit shift effect, and there is no binary conversion
> > error.
>
> Quite the contrary.
No. Not to the contrary.
Shift effect occurs only in binary operations because of the data type.
There is no shift effect using BCD because there is no shift.
> The reason there's no error in converting to
> binary is that inputs are restricted to a range within which binary
> floating point numbers are always precise.
Not true again. They may be precise but they are not accurate.
Ever heard of the Patriot missile error?
You can read about it here.
http://www.ima.umn.edu/~arnold/455.f96/disasters.html
BCD does not incur this error because BCD is not binary.
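The underlying fact both sides keep circling is easy to see in Python: the binary double nearest 0.1 is not exactly 0.1, and the `decimal` module can display the value actually stored:

```python
from decimal import Decimal

# Constructing a Decimal from the float 0.1 exposes the stored binary value:
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
# Constructing it from the string "0.1" keeps the value exact in base 10:
print(Decimal("0.1"))  # 0.1
```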
> Binary floating point
> inputs would provide identical results as long as we restricted them
> to the same range.
>
Again, mathematically false.
Note this page, especially example 3, for the common binary conversion
math problems.
http://www.ug.cs.sunysb.edu/~cse114/binaryconversions.txt
"
Also as far as the range is concerned check out this paragraph:
Using Binary Coded Decimal Numbers
Binary coded decimal (BCD) numbers allow calculations on large numbers
without rounding errors. This characteristic makes BCD numbers a common
choice for monetary calculations. Although BCDs can represent integers of
any precision, the 8087-based coprocessors accommodate BCD numbers only in
the range ±999,999,999,999,999,999.
This section explains how to define BCD numbers, how to access them with a
math coprocessor or emulator, and how to perform simple BCD calculations on
the main processor.
Defining BCD Constants and Variables
Unpacked BCD numbers are made up of bytes containing a single decimal digit
in the lower 4 bits of each byte. Packed BCD numbers are made up of bytes
containing two decimal digits: one in the upper 4 bits and one in the lower
4 bits. The leftmost digit holds the sign (0 for positive, 1 for negative).
Packed BCD numbers are encoded in the 8087 coprocessor’s packed BCD format.
They can be up to 18 digits long, packed two digits per byte. The assembler
zero-pads BCDs initialized with fewer than 18 digits. Digit 20 is the sign
bit, and digit 19 is reserved.
When you define an integer constant with the TBYTE directive and the current
radix is decimal (t), the assembler interprets the number as a packed BCD
number.
The syntax for specifying packed BCDs is the same as for other integers.
"
The above is from the Microsoft Assembler (MASM) Programming Guide chapter
6, found below
http://webster.cs.ucr.edu/TechnicalDocs/MASMDoc/ProgrammersGuide/Chap_06.htm
> > Operations done on BCD data types are done on values between 0-9 decimal
> > only. There is no binary conversion process because each 4 digit BCD
> > represents a decimal entity.
>
> There is a binary conversion process; all we've done is restricted
> inputs to a range within which we know that every possible input has
> a unique and precise representation in binary floating point.
Simply false. There is no binary conversion process because BCD is a base
ten number system. The term conversion means that you are converting a
number from one base to another. For instance, converting hexadecimal to
decimal would be a conversion. Because BCD is a base ten number system
there is no conversion: there is a one to one relationship between
the numbers 0-9 and their counterparts in the BCD system.
>
> > There are 10 variations of 4 bit BCD integers, not 16. Therefore,
> > even though BCD is coded in binary it is in fact a decimal, or base
> > 10, system. Therefore, there is no binary conversion error.
>
> Conversions are always precise because BCD inputs are only accepted
> if they're within a range that can be represented precisely.
>
There are no conversions.
I think with the quotes that I have provided I have successfully proven my
point. If you don't understand by now that you are completely wrong on
this, then you never will. And if the other people who have posted here
opposing my statements don't clearly understand that I was right when I
said that rounding error and numeric error caused by the binary number
system at the data type level is corrected by BCD, then they never will
understand it, because there is no way to prove the point more clearly
than I have.
I have substantiated my point with examples, quotes from math texts and now
real world examples and a quote from the MASM programmers manual. All
clearly validating my position. If that doesn't prove my point to you then
you are either yanking my chain so to speak or are really very stuck in an
invalid perception.
At any rate, I've had enough. I've proven my point.
> The following web site explains exactly how BCD operations are more exact
>than binary at the assembly language level.
>http://www.escape.ca/~rrrobins/Assembly/BCDArithmetic.html
No, it doesn't - it simply makes a false assertion to that effect.
Clearly the author is unaware that fixed point and integral operations
of all types have the same qualities he ascribes to BCD.
> I also suggest that you review the Intel site for addressing modes for the
>Pentium processor.
>http://developer.intel.com/design/intarch/techinfo/Pentium/datatype.htm#6012
The page defines BCD. How does that support your argument? Most
microprocessors do indeed support BCD in microcode and the Pentium is no
exception.
> And next time, try to think a little before you start typing.
Your tenacity in the face of correction is admirable - maybe you will
discover something useful sometime. But this is not it.
So you are right and he is wrong huh?
You are really sorry.
Below is the pasted dialog from Jerry Coffin's also sorry argument.
One more time:
1234 would be represented as:
0001 0010 0011 0100
http://www.ima.umn.edu/~arnold/455.f96/disasters.html
the range ±999,999,999,999,999,999.
There are no conversions.
Good luck guys, you're going to need it.
This is an artifact of the choices they made with the hardware.
If they had chosen 1/16 of a second instead of 1/10 second the
problem wouldn't have happened despite it being binary.
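That can be sketched in Python. This is a simplified model that assumes the clock truncates 0.1 to 24 fractional bits; the fielded register format differed in detail and lost about 0.34 s over the same period, but the mechanism is the same:

```python
# 0.1 truncated to a 24-bit binary fraction (simplified Patriot clock model):
stored = int(0.1 * 2**24) / 2**24   # truncation, not rounding
err_per_tick = 0.1 - stored         # a few parts in 10**8
ticks = 100 * 3600 * 10             # 100 hours of 0.1 s ticks
print(err_per_tick * ticks)         # roughly 0.13 s of accumulated drift

# 1/16 s is an exact binary fraction, so the same clock would never drift:
assert int(0.0625 * 2**24) / 2**24 == 0.0625
```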
> Note this page, especially example 3, for the common binary conversion
> math problems.
> http://www.ug.cs.sunysb.edu/~cse114/binaryconversions.txt
>
> "
> Also as far as the range is concerned check out this paragraph:
> Using Binary Coded Decimal Numbers
> Binary coded decimal (BCD) numbers allow calculations on large numbers
> without rounding errors. This characteristic makes BCD numbers a common
> choice for monetary calculations. Although BCDs can represent integers of
> any precision, the 8087-based coprocessors accommodate BCD numbers only in
> the range ±999,999,999,999,999,999.
With enough bits (fewer than required for BCD), the same can be said for
binary. There are plenty of arbitrary-precision libraries that do not store
their representations in BCD.
Try doing 10/3 in BCD and tell me if there is no rounding error. (It does
turn out that any number precisely represented in binary can be represented
in BCD while the converse is not true, e.g. 1/10).
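Both halves of that parenthetical can be checked with Python's `decimal` module, which does base-10 arithmetic in software:

```python
from decimal import Decimal, getcontext

getcontext().prec = 18           # work to 18 significant decimal digits
print(Decimal(10) / Decimal(3))  # 3.33333333333333333 -- rounded, even in base 10
print(Decimal(1) / Decimal(10))  # 0.1 -- exact in base 10, repeating in base 2
```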
There is no magic in choosing a base. All the same issues exist. Some
bases can represent some numbers that others can't. In trinary, 1/3 is
representable exactly. Big deal. The fact that BCD is used for many
monetary calculations is not that it does away with rounding errors, but
that there are very specific rules about how rounding is done, and those
rules are expressed in base 10.
marco
That is correct. And it is also correct that it would not have happened had
they used a BCD data type. Which is the point.
And if they had used binary as a number system instead of decimal they
wouldn't have had an error either. But people don't generally read numbers
in binary too well.
The point also is, they, as well as several in this news group, were
completely unaware of this characteristic.
But there are areas of life where one can not arbitrarily pick the
denominator of the ratio calculated.
Money is the primary example and the primary justification for including
the laborious, slow BCD number format and instructions present on Intel and
other processors, and for using it in monetary calculations involving
interest.
>
> > Note this page, especially example 3, for the common binary conversion
> > math problems.
> > http://www.ug.cs.sunysb.edu/~cse114/binaryconversions.txt
> >
> > "
> > Also as far as the range is concerned check out this paragraph:
> > Using Binary Coded Decimal Numbers
> > Binary coded decimal (BCD) numbers allow calculations on large numbers
> > without rounding errors.
Please read that statement a few times. What rounding errors is the
Microsoft manual for MASM speaking of? Decimal round-off errors? No, it is
referring to binary round-off errors. Hopefully the lights are coming on
about now.
> > This characteristic makes BCD numbers a common
> > choice for monetary calculations. Although BCDs can represent integers of
> > any precision, the 8087-based coprocessors accommodate BCD numbers only in
> > the range ±999,999,999,999,999,999.
>
> With enough bits (fewer than required for BCD), the same can be said for
> binary.
You are not reading the post carefully nor the referenced sites. If you had
done so you would have discovered that what you are disputing is a statement
from the MASM programmers manual. The primary justification and purpose for
BCD is to eliminate binary rounding error.
> There are plenty of arbitrary-precision libraries that do not store
> their representations in BCD.
>
As I have tried to explain to others who have posted in this thread, there
is a difference between precision and accuracy. BCD can accurately
represent 1/10 as an absolute value, period. Binary-based data types can
not. No matter how many digits of precision you generate in binary to
represent decimal 0.1, the binary value underlying that variable will not
be an absolute 1/10. Therefore, even though BCD is less precise it is more
accurate when representing base 10 calculations in a binary computer.
> Try doing 10/3 in BCD and tell me if there is no rounding error. (It does
> turn out that any number precisely represented in binary can be represented
> in BCD while the converse is not true, e.g. 1/10).
In other words you want to represent a rational number using the decimal
system. That is a red herring, because it is a problem in any math dealing
with converting integer to rational values or vice versa. So it has nothing
to do with the data type but with the number system. In other words, you
have this problem with both BCD and with binary, so the problem is
conceptually higher than that of machine storage, conversion issues, or
what we are talking about here. The fact is, when representing base ten
data types the only way to do it with absolute accuracy is to use a BCD
data type.
The only way to represent rational data types with absolute accuracy on a
digital computer is to use a BCD value for the numerator and the
denominator and perform all math in the fractional or rational method.
This, of course, would require an abstract data type (ADT) and is not part
of any official numeric standard.
>
> There is no magic in choosing a base. All the same issues exist.
Again, if you had been reading the posts you would understand that I have
proven this assertion false already. There are different issues in dealing
with base 10 numbers and currency than exist with other types of
calculations. The Patriot Missile story is a particularly valuable
illustration because it shows how binary-based data types translate
rounding errors into decimal number math over a series of calculations.
This is a common mistake and the primary assertion that I have been trying
to make.
> Some
> bases can represent some numbers that others can't.
Very true, but when dealing with the real world most people and systems
deal in decimal. And there are certain mathematical applications, such as
time variance in shooting missiles, or calculating interest over time,
where conversion error and machine-level rounding is not tolerable. So the
reason you would use BCD is not because it is a number system without any
problems converting from it to any other, because no such number system
exists. The reason you would use BCD is because it creates an absolute
representation of the decimal numbering system in the computer and
therefore eliminates the rounding and conversion errors associated with
using binary computers in mathematical calculations.
> In trinary, 1/3 is
> representable exactly. Big deal.
Agreed, big deal.
> The fact that BCD is used for many
> monetary calculations is not that it does away with rounding errors,
Wrong. It does away with binary rounding errors. That is the point. That
is the statement from the MASM manual. Binary can not represent certain
numbers accurately. Therefore, they are rounded. Therefore, binary is not
accurate: regardless of how much precision you use, 1/10 will never
absolutely equal 1/10 in a binary system. BCD does away with binary
rounding errors. After you have the number in BCD you round in decimal in
your calculations, in accordance with whatever standard is in place. But
you are aware of the rounding and it is under control.
Whereas, in our missile illustration, as the clock was ticking the error
factor was slowly growing in magnitude and no one knew. And people died
because of it. This is why and when BCD is used: when accuracy is not an
option but a requirement.
> but
> that there are very specific rules about how rounding is done and those
> rules are expressed as base 10.
Actually, official rounding rules are stated in ANSI and General Accounting
Standards. Using BCD you can implement those standards with absolute
accuracy in your programs, with absolute confidence that your math is not
in error due to machine-level conversion or rounding.
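As an aside, rule-driven decimal rounding of this sort is exactly what software decimal types provide; here is a small Python sketch (the helper name and the half-up rule are illustrative choices, not anything from the standards named above):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_currency(amount: str) -> Decimal:
    """Round to whole cents under an explicit, auditable decimal rule."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(round_currency("2.675"))  # 2.68 -- the decimal rule is applied exactly
print(round(2.675, 2))          # 2.67 -- the float 2.675 is really 2.67499...
```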
Appreciate your comment Marco.
Hope that I haven't offended you or caused any hard feelings.
Please re read the post though and the references because I don't think that
you understand yet why there is ample justification for BCD data types.
> marco
>
>
>
[ ... ]
> Therefore, it is a base ten system.
This much is basically correct.
> There is no binary conversion other
> than a simple translation of each 4-bit value to a single ASCII character
> for screen processing.
Wrong, wrong WRONG! As I already pointed out, and quoted the Intel
material to support, when you load a BCD value into the FPU, it ends
up as a binary floating point value. You've said that you understand
that changing from one number base to another is a conversion.
You've seen and acknowledged the Intel material showing that when you
load the BCD value, it's turned from an integer in BCD format to an
extended precision floating point number in binary format.
How can you _possibly_ realize both those things, and still say
"there is no binary conversion"? This is roughly like saying you
realize that water makes things wet and that the ocean consists of
_tons_ of water, but still insisting that the bottom of the Pacific
ocean is completely dry.
I'll stop there, since it's pointless to argue the rest of your
mistakes until you can understand at least something as simple as
this.
Right, right, RIGHT!
As I already pointed out, and quoted the Intel
> material to support, when you load a BCD value into the FPU, it ends
> up as a binary floating point value.
No, it doesn't. Not the way you are thinking. You apparently think that
Intel created BCD so they could perform binary math on it. That has got to
be the most ridiculous statement. Why? Why not just use binary numbers?
What is the purpose of a BCD if all you are going to do is convert it from
its abstract value to a binary real? That is foolish. I couldn't believe
that that is what you meant initially, but now that you insist on it I see
how utterly lost on this you are.
Yes, but there are only 10 values for each 4-bit number.
That equals a decimal system and a one to one relation. A one to one
symbolic relation equals no conversion. The symbolic relation of binary to
decimal is 2 to 10. From decimal to hex it is 10 to 16. From BCD to
decimal it is 10 to 10.
> You've said that you understand
> that changing from one number base to another is a conversion.
Yes, but when we do math in BCD or decimal we carry when we exceed 9, or in
BCD 1001. When we do math in binary we carry when we exceed 1, or 0001. In
doing math in hex we carry when we exceed F.
There is no conversion in BCD because each 4-bit BCD value is treated as
a mathematical entity. 0001 represents 1 to the processor. So even though
each digit is represented using binary code, each four-binary-digit section
is treated as a single digit by the FPU. That is why it is called binary
coded DECIMAL: because it is a binary coded decimal system.
> You've seen and acknowledged the Intel material showing that when you
> load the BCD value, it's turned from a integer in BCD format to an
> extended precision floating point number in binary format.
>
And you have seen that the MASM material flatly stated that BCD corrects
rounding errors. What rounding errors? Not those in the decimal base.
Those in the binary base. How could binary rounding errors be corrected if
you were performing math operations on binary values? They can't.
> How can you _possibly_ realize both those things, and still say
> "there is no binary conversion"?
It is very simple.
There is no binary conversion. There is a one to one relationship between
each binary code and 0-9. Therefore there is no conversion. There is no
1010 in BCD. Why? Because that is ten, and if you are only representing 0-9
in the abstract you don't use ten.
> This is roughly like saying you
> realize that water makes things wet and that the ocean consists of
> _tons_ of water, but still insisting that the bottom of the Pacific
> ocean is completely dry.
>
No, Jerry, you just don't get it. It is just beyond you.
> I'll stop there, since it's pointless to argue the rest of your
> mistakes until you can understand at least something as simple as
> this.
>
That's fine. It's pretty evident that I have proven my point as best I can.
But let me ask you something.
If BCD data is converted to binary floating point data types in math
operations, why do they have to be flagged as BCD and why is there a
completely different set of instructions for BCD than for FP? Why?
Because the data type and instructions for BCD are completely different
from those for FP. And BCD is not converted to binary to perform
operations; that is ridiculous.
<snip>
> No, it doesn't. Not the way you are thinking. You apparently think that
> Intel created BCD so they could perform binary math on it.
This implies that Intel created BCD (for some other reason). I have
looked for supporting evidence of that assertion and couldn't find any.
I don't have any reason to doubt you, but I would be interested to see
some kind of reference supporting your statement.
--
Richard Heathfield : bin...@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Intel processors do not perform floating point math on BCD. Nor are BCD
numbers converted to binary numbers when used as intended with the correct
processor operators.
The specifics of exactly how BCD arithmetic is done in the Intel
processor are documented in the book Code by Charles Petzold, p. 271,
Microsoft Press, and in the MASM programmers manual quoted below, chapter 6.
If you look at the Intel web site you will see that BCD data types are
referred to as integer data types and as floating data types. There is a
normalized range given for the BCD as being from (-10^18 + 1) to (10^18 - 1).
This raises the question of what the term normalize means in reference
to the BCD data type, because the BCD data type does not contain an exponent.
The summary definition of normalized is given in the document as follows:
"To summarize, a normalized real number consists of a normalized significand
that represents a real number between 1 and 2 and an exponent that specifies
the number's binary point."
The significand is also referred to as the characteristic or the mantissa
in other math settings.
So the range of the BCD data type is in reference to its converted form in
the FPU. That is, if one were to perform an FPU operation on it, which of
course would defeat the purpose of having a BCD in most circumstances. But
I suppose there could be times where you may need to convert a BCD data
type to binary for some reason.
"
Decimal Integers
Decimal integers are stored in a 10-byte, packed BCD format. Table 31-8
gives the precision and range of this data type and Figure 31-17 shows the
format. In this format, the first 9 bytes hold 18 BCD digits, 2 digits per
byte (see "BCD Integers"). The least-significant digit is contained in the
lower half-byte of byte 0 and the most-significant digit is contained in the
upper half-byte of byte 9. The most significant bit of byte 10 contains the
sign bit (0 = positive and 1 = negative). (Bits 0 through 6 of byte 10 are
don't care bits.) Negative decimal integers are not stored in two's
complement form; they are distinguished from positive decimal integers only
by the sign bit.
Table 31-11 gives the possible encodings of value in the decimal integer
data type.
The decimal integer format exists in memory only. When a decimal integer is
loaded in a data register in the FPU, it is automatically converted to the
extended-real format. All decimal integers are exactly representable in
extended-real format.
"
http://developer.intel.com/design/intarch/techinfo/Pentium/fpu.htm#49040
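The 10-byte layout quoted above can be modeled in Python. This sketch (the function name is mine) reproduces only the memory format: 18 packed digits, least significant first, sign in the top bit of the last byte:

```python
def pack_bcd80(n: int) -> bytes:
    """Encode n in the x87 10-byte packed BCD memory format."""
    assert abs(n) < 10**18, "outside the +/- (10**18 - 1) range"
    sign = 0x80 if n < 0 else 0x00
    digits = f"{abs(n):018d}"        # zero-padded to 18 decimal digits
    out = bytearray()
    for i in range(16, -2, -2):      # digit pairs, least significant first
        out.append((int(digits[i]) << 4) | int(digits[i + 1]))
    out.append(sign)                 # sign byte last (highest address)
    return bytes(out)

print(pack_bcd80(1234).hex())  # 34120000000000000000
```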
This establishes that if you are to do BCD base ten calculations you can not
do them in the FPU. They will cease to be BCD values and become binary
extended real. The math then will be binary and the data binary.
The statement from the MASM programmers manual explains how BCD math is to
be performed.
"
Packed BCD Numbers
Packed BCD numbers are made up of bytes containing two decimal digits: one
in the upper 4 bits and one in the lower 4 bits. The 8086-family processors
provide instructions for adjusting packed BCD numbers after addition and
subtraction. You must write your own routines to adjust for multiplication
and division.
For processor calculations on packed BCD numbers, you must do the 8-bit
arithmetic calculations on each byte separately, placing the result in the
AL register. After each operation, use the corresponding decimal-adjust
instruction to adjust the result. The decimal-adjust instructions do not
take an operand and always work on the value in the AL register.
The 8086-family processors provide the instructions DAA (Decimal Adjust
after Addition) and DAS (Decimal Adjust after Subtraction) for adjusting
packed BCD numbers after addition and subtraction.
These examples use DAA and DAS to add and subtract BCDs.
;To add 88 and 33:
mov ax, 8833h ; Load 88 and 33 as packed BCDs
add al, ah ; Add 88 and 33 to get 0BBh
daa ; Adjust 0BBh to 121 (packed BCD:)
; 1 in carry and 21 in AL
;To subtract 38 from 83:
mov ax, 3883h ; Load 83 and 38 as packed BCDs
sub al, ah ; Subtract 38 from 83 to get 04Bh
das ; Adjust 04Bh to 45 (packed BCD:)
; 0 in carry and 45 in AL
Unlike the ASCII-adjust instructions, the decimal-adjust instructions never
affect AH. The assembler sets the auxiliary carry flag if the digit in the
lower 4 bits carries to or borrows from the digit in the upper 4 bits, and
it sets the carry flag if the digit in the upper 4 bits needs to carry to or
borrow from another byte.
Multidigit BCD numbers are usually processed in loops. Each byte is
processed and adjusted in turn.
"
http://webster.cs.ucr.edu/TechnicalDocs/MASMDoc/ProgrammersGuide/Chap_06.htm
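What ADD followed by DAA accomplishes on one packed byte can be modeled digit by digit in Python. This is a sketch of the arithmetic, not a flag-accurate DAA emulation:

```python
def bcd_add_byte(a: int, b: int, carry_in: int = 0):
    """Add two packed-BCD bytes the way ADD + DAA effectively does.
    Returns (result_byte, carry_out)."""
    lo = (a & 0x0F) + (b & 0x0F) + carry_in
    lo_carry, lo = divmod(lo, 10)         # low decimal digit plus carry
    hi = (a >> 4) + (b >> 4) + lo_carry
    hi_carry, hi = divmod(hi, 10)         # high decimal digit plus carry
    return (hi << 4) | lo, hi_carry

# The MASM example above: 88 + 33 gives 21 in AL with a carry, i.e. 121.
print(bcd_add_byte(0x88, 0x33))  # (33, 1) -- 33 is 0x21
```

Multidigit numbers would chain each byte's carry_out into the next byte's carry_in, matching the loop structure the manual describes.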
As explained above, mathematics on BCD data is done one 4-bit value at a
time. Of course, logic for carry detection and special cases has to be
introduced, but this should go without saying. In light of the previous
posts I have had to deal with, I figured I'd mention it.
Therefore, because operations are confined to addition and subtraction
within a ten-character range, 0-9, the binary round-off and inaccuracies
due to shifting are not present. The DAA and DAS instructions simply
provide the functionality. They do not convert the BCD data into extended
real. On the contrary, they are specifically provided so that you can do
math exclusively in BCD, because, as any good programmer knows, all math is
nothing but addition and subtraction. They do convert the binary sum or
difference from its hex value to the decimal equivalent on a carry. This,
of course, simply preserves the decimal integrity of the operation and
facilitates carry operations.
I can see how Jerry would get the idea that a BCD value is converted to
extended real any time it is used, because that is what it says in the
Intel documentation that I provided. But I also provided the assembler
manual that I quote. And the instructions that make BCD possible are in the
processor, not the FPU, so they are not located in the FPU documentation.
This also establishes the answer to the question of how floating point
values are represented in BCD, and the answer is: they aren't. I now know
why fixed point data types are fixed: because the decimal point is part of
the form or tool that is used to represent the data, not part of the data.
That is, when using a VB fixed point data type you have 4 digits of
precision to the right of the decimal. That doesn't mean it's a floating
point data type, because the point doesn't float; it's fixed. Therefore,
there is no point in the data. It has to be built into the algorithm that
uses the data. That is, when you program your math engine your numeric
magnitudes will always be the same for each digit. I suppose you could
figure out different ways of splitting the number up and using two BCD
values to represent one number (sort of like splitting registers) and
carrying or borrowing between them, but that is for a different post.
As stated above in the quotes from the MASM programmers guide, if you
want to go beyond addition and subtraction you have to do it yourself.
That is, you would have to program a BCD math engine. But the fact remains
that I have established that the BCD data type is isolated from binary
rounding errors, and that the operations allowing exclusive BCD math to be
performed on the Intel processor DO NOT convert the number to binary
floating point.
Thank you
Thank you very much.
Randy Johnson
[ ... ]
> If BCD data is converted to binary floating point data types in math
> operations, why do they have to be flagged as BCD and why is there a
> completely different set of instructions for BCD than for FP?
If you'd bother to read the documents, you'd realize that there
AREN'T any math operations specifically for BCD. The FPU supports
two types of operations with BCD: Load a BCD value (and convert it to
a binary floating number by the time it is inside the FPU) and store
a BCD value, starting from a binary floating point value.
There is not even ONE more single BCD operation in the Intel FPU:
there is no instruction to add two BCD numbers. There is no
instruction to subtract them. There is none to divide them. There
is none to multiply them. There is absolutely, positively NO
operation in the FPU to work with a BCD number EXCEPT to load it or
store it, and when you do either of those, the value in the FPU
itself is _always_ a binary floating point number.
I'll tell you what: if you honestly think there's an FPU instruction
to add two BCD numbers, why don't you look through the books and tell
us its op-code? I'll tell you why you don't: because when you try to
find it, you'll realize that NO SUCH THING EXISTS, and you'll have NO
CHOICE but to admit you're WRONG and have been from the beginning!
> Why?
> Because the data type and instructions for BCD are completely different
> than FP. And BCD is not converted to binary to perform operations, that is
> ridiculous.
No, it's not. In fact, it's EXACTLY what the quote from the Intel
manual said. If you'd bothered to read it, you'd have realized this
was true.
I'll repeat since you're having SUCH a difficult time with this: the
FPU does NOT carry out _any_ mathematical operations on BCD numbers.
It doesn't add, subtract, multiply or divide them. It has a total of
TWO BCD operations: load a BCD or store a BCD. When it loads a BCD
number, it converts that to an extended precision binary floating
point number. When it stores a BCD number, it starts with an
extended precision binary floating point number, and converts it to
BCD as it places the value in memory.
As I've said a half dozen times now, the fact that BCD is involved
does NOT fix or prevent rounding errors. Intel has simply restricted
BCD inputs and outputs to numbers that can be represented with
perfect precision AND accuracy as extended precision binary floating
point numbers.
Now, regardless of the number base they chose to use in the FPU, it
is a given that SOME fractions would produce repeating numbers in
that base. For an obvious example, if the denominator in a rational
number is a prime number greater than the number base, then
attempting to represent that rational in that base will ALWAYS
produce a repeating number (that's not the minimum necessary
definition, but it's an adequate one for demonstration purposes).
Intel ensured against this ever happening in a VERY simple way: BCD
inputs are always integers. Since there's never any fractional part,
there's never a possibility of a fractional part that would result in
a repeating number.
[ ... ]
> Intel processors do not perform floating point math on BCD. Nor are BCD
> numbers converted to binary numbers when used as intended with the correct
> processor operators.
So now we're supposed to believe that FBLD and FBST are "incorrect"
operators that were never intended to be used at all?
> So the range of the BCD data type is in reference to its converted form in
> the FPU. That is, if one were to perform an FPU operation on it. Which of
> course would defeat the purpose of having a BCD in most circumstances.
No, it would not. Just for example, if you start with two BCD
numbers, use FBLD to load them into the FPU, add them together (as
binary floating point numbers) and then store the result in BCD again,
you'll get one of two possible results: if the sum of the two is
within the range that can be represented as BCD, you'll get an answer
that is perfectly precise and accurate. Otherwise, the FPU will
attempt to raise an exception, which tells you the result was out of
range. Either way, you will NOT store a wrong answer.
I'll repeat, however: this is not because of any characteristic of
BCD itself. It's a characteristic of how Intel implemented the BCD
data type: they restricted inputs to integers in a range that can be
represented perfectly as floating point.
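That restriction is easy to sanity-check. Every 18-digit integer fits in the x87 extended format's 64-bit significand, though not in an ordinary 53-bit double (Python floats are doubles, so the double case can be shown directly):

```python
big = 10**18 - 1                # largest magnitude in the packed BCD range
assert big < 2**63              # fits the 64-bit extended-real significand

print(big < 2**53)              # False: too big for a double's significand
print(int(float(big)) == big)   # False: the round trip through a double
                                # rounds 999999999999999999 up to 10**18
```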
Of course, depending on what you decide to do, you can produce
numbers that cannot be represented perfectly. For an obvious one, if
you divide 1 by 10, you'll get a result that can't be represented
perfectly. The saving grace is that if you attempt to store that
result in BCD format, the FPU will signal that a problem has arisen.
> This establishes that if you are to do BCD base ten calculations you can not
> do them in the FPU. They will cease to be BCD values and become binary
> extended real.
Thanks for rejoining us on planet Earth.
> The math then will be binary and the data binary.
Precisely (no pun intended). The thing to realize here, as I've
stressed a number of times, is that Intel has restricted the possible
BCD inputs to those which can be perfectly represented in floating
point format.
> The statement from the MASM programmers manual explains how BCD math is to
> be performed.
This is one possible way of performing operations on BCD numbers.
It's an inefficient method that was used primarily when you couldn't
depend on the existence of an FPU -- at one time an FPU was an
expensive and relatively rare add-on, not a standard part of every
CPU. At the present time, there's little to gain by using the BCD
instructions for the Integer unit unless there's a good chance of
needing to run your code on such an old machine.
> Packed BCD numbers are made up of bytes containing two decimal digits: one
> in the upper 4 bits and one in the lower 4 bits. The 8086-family processors
> provide instructions for adjusting packed BCD numbers after addition and
> subtraction. You must write your own routines to adjust for multiplication
> and division.
That's more or less true, although AAD and AAM do work with unpacked
BCD in conjunction with multiplication and division.
> Therefore, because operations are confined to addition and subtraction
> within a 10-digit range, 0-9, the binary round-off and inaccuracies due
> to shifting are not present.
As I've said before: this is true of ANY integer, and therefore of
fixed point as well. The fact that you're using BCD has introduced
NOTHING.
> The DAA and DAS instructions simply provide the functionality.
Provide WHAT functionality? They do NOT eliminate or reduce any
truncation, roundoff, etc. Those are eliminated by the fact that
you're now working in fixed point (i.e., really integers).
> They do not convert the BCD data into real extended. On the
> contrary they are specifically provided so that you can do math exclusively
> in BCD.
Quite the contrary: they do decimal adjustment AFTER addition or
subtraction. The addition or subtraction itself is STILL done on a
binary value, and then you adjust it to be proper BCD again
afterwards.
Elimination of roundoff errors and such is NOT accomplished by using
DAA, DAS, or BCD. It's accomplished by using fixed-point, as I've
said before.
> But I also provided the assembler manual
> that I quote. And the instructions that make BCD possible are in the
> processor, not the FPU. So they are not located in the FPU documentation.
Both the CPU and FPU provide instructions for operating on BCD data.
The primary difference is simple: the FPU does things sanely (and
efficiently) while the integer unit does things insanely and
_extremely_ inefficiently.
> But the fact remains, that I have established that the BCD data type
> is isolated from binary rounding errors.
No, you have not. As I've hammered on for what seems like a year
now, BCD is entirely beside the real point. The real effect is due
to the fact that you're working with integers (which you can treat as
fixed point if you choose to do so).
> And that the operations allowing exclusive BCD math to be
> performed on the Intel processor DO NOT convert the number to a floating
> point binary.
The CPU does provide a few instructions for converting/adjusting
between binary integer values and BCD, that's true. As always,
however, BCD is not what contributes anything here: the difference is
due to using integer/fixed point math rather than floating point
math.
Nonetheless, floating point math can accomplish the same thing as
long as it's done with sufficient care. Just for example, you might
want to write a set of routines to carry out the addition and
subtraction operations on 18-digit packed BCD numbers using the CPU
integer instructions, but accepting the same format as the FPU
accepts with FBLD.
Once you've done that, write a little routine that generates some
random BCD values in memory, and then has both your routines and the
FPU routines add or subtract one to/from the other. Compare the
results of this addition or subtraction, and you'll have two possible
results: either they're identical, or if you step through things,
you'll find that there's a bug in one of your routines. In theory,
it's also possible that you'll find a bug in the Intel FPU, but
unless you use one of the known-defective Pentiums, I doubt it'll
happen -- that fiasco caused Intel SUCH problems that I'm reasonably
certain they're doubly cautious to prevent it happening again.
Here you go.
"Richard Heathfield" <bin...@eton.powernet.co.uk> wrote in message
news:3BE4ABAD...@eton.powernet.co.uk...
> RJ wrote:
> >
>
> <snip>
>
> > No, it doesn't. Not the way you are thinking. You apparently think that
> > Intel created BCD so they could perform binary math on it.
>
> This implies that Intel created BCD (for some other reason). I have
> looked for supporting evidence of that assertion and couldn't find any.
> I don't have any reason to doubt you, but I would be interested to see
> some kind of reference supporting your statement.
Intel processors do not perform floating point math on BCD. Nor are BCD
numbers converted to binary numbers when used as intended with the correct
processor operators.
The specifics on exactly how BCD arithmetic is done in the Intel
processor are documented in Charles Petzold's book Code (Microsoft Press,
p. 271), and in the MASM Programmer's Guide, chapter 6, quoted below.
If you look at the Intel web site you will see that BCD data types are
referred to as integer data types and as floating data types. There is a
normalized range given for the BCD as being from (-10^18 + 1) to (10^18 - 1).
This raises the question of what the term normalize means in reference
to the BCD data type, because the BCD data type does not contain an exponent.
The summary definition of normalized is given in the document as follows:
"To summarize, a normalized real number consists of a normalized significand
that represents a real number between 1 and 2 and an exponent that specifies
the number's binary point."
The significand is also referred to as the mantissa in other math settings.
So the range of the BCD data type is in reference to its converted form in
the FPU. That is, if one were to perform an FPU operation on it, which of
course would defeat the purpose of having a BCD in most circumstances. But
I suppose there could be times where you may need to convert a BCD data type
to binary form for some reason.
"
Decimal Integers
Decimal integers are stored in a 10-byte, packed BCD format. Table 31-8
gives the precision and range of this data type and Figure 31-17 shows the
format. In this format, the first 9 bytes hold 18 BCD digits, 2 digits per
byte (see "BCD Integers"). The least-significant digit is contained in the
lower half-byte of byte 0 and the most-significant digit is contained in the
upper half-byte of byte 8. The most significant bit of byte 9 contains the
sign bit (0 = positive and 1 = negative). (Bits 0 through 6 of byte 9 are
don't care bits.) Negative decimal integers are not stored in two's
complement form; they are distinguished from positive decimal integers only
by the sign bit.
Table 31-11 gives the possible encodings of values in the decimal integer
data type.
The decimal integer format exists in memory only. When a decimal integer is
loaded in a data register in the FPU, it is automatically converted to the
extended-real format. All decimal integers are exactly representable in
extended-real format.
"
http://developer.intel.com/design/intarch/techinfo/Pentium/fpu.htm#49040
This establishes that if you are to do BCD base ten calculations you can not
do them in the FPU. They will cease to be BCD values and become binary
extended real. The math then will be binary and the data binary.
The statement from the MASM programmers manual explains how BCD math is to
be performed.
"
Packed BCD Numbers
Packed BCD numbers are made up of bytes containing two decimal digits: one
in the upper 4 bits and one in the lower 4 bits. The 8086-family processors
provide instructions for adjusting packed BCD numbers after addition and
subtraction. You must write your own routines to adjust for multiplication
and division.
For processor calculations on packed BCD numbers, you must do the 8-bit
arithmetic calculations on each byte separately, placing the result in the
AL register. After each operation, use the corresponding decimal-adjust
instruction to adjust the result. The decimal-adjust instructions do not
take an operand and always work on the value in the AL register.
The 8086-family processors provide the instructions DAA (Decimal Adjust
after Addition) and DAS (Decimal Adjust after Subtraction) for adjusting
packed BCD numbers after addition and subtraction.
These examples use DAA and DAS to add and subtract BCDs.
;To add 88 and 33:
mov ax, 8833h ; Load 88 and 33 as packed BCDs
add al, ah ; Add 88 and 33 to get 0BBh
daa ; Adjust 0BBh to 121 (packed BCD:)
; 1 in carry and 21 in AL
;To subtract 38 from 83:
mov ax, 3883h ; Load 83 and 38 as packed BCDs
sub al, ah ; Subtract 38 from 83 to get 04Bh
das ; Adjust 04Bh to 45 (packed BCD:)
; 0 in carry and 45 in AL
Unlike the ASCII-adjust instructions, the decimal-adjust instructions never
affect AH. The assembler sets the auxiliary carry flag if the digit in the
lower 4 bits carries to or borrows from the digit in the upper 4 bits, and
it sets the carry flag if the digit in the upper 4 bits needs to carry to or
borrow from another byte.
Multidigit BCD numbers are usually processed in loops. Each byte is
processed and adjusted in turn.
"
http://webster.cs.ucr.edu/TechnicalDocs/MASMDoc/ProgrammersGuide/Chap_06.htm
As explained above, mathematics on BCD data is done one 4-bit value at a
time. Of course, logic for carry detection and special cases would have to
be introduced, but this should go without saying. In light of the previous
posts I have had to deal with, though, I figured I'd mention it.
Therefore, because operations are confined to addition and subtraction
within a 10-digit range, 0-9, the binary round-off and inaccuracies due to
shifting are not present. The DAA and DAS instructions simply provide the
functionality. They do not convert the BCD data into real extended. On the
contrary, they are specifically provided so that you can do math exclusively
in BCD, because as any good programmer knows, all math is nothing but
addition and subtraction. They do convert the binary addition or subtraction
from the hex value to the decimal equivalent on a carry. This, of course,
simply preserves the decimal integrity of the operation and facilitates
carry operations.
I can see how Jerry would get the idea that a BCD value is converted to
extended real any time it is used, because that is what it says in the Intel
documentation that I provided. But I also provided the assembler manual
that I quote. And the instructions that make BCD possible are in the
processor, not the FPU. So they are not located in the FPU documentation.
This also establishes the answer to the question of how floating point
values are represented in BCD, and the answer is: they aren't. I now know
why fixed point data types are fixed: the decimal point is part of the
form or tool that is used to represent the data. It is not part of the
data. That is, when using a VB fixed point data type you have 4 digits of
precision to the right of the decimal. That doesn't mean it's a floating
point data type, because the point doesn't float; it's fixed. Therefore,
there is no point in the data. It has to be built into the algorithm that
uses the data. That is, when you program your math engine, your numeric
magnitudes will always be the same for each digit. I suppose
you could figure out different ways of splitting the number up and using two
BCD values to represent one number (sort of like splitting registers), and
carrying or borrowing between them, but that is for a different post.
As stated above in the quotes from the MASM Programmer's Guide, if you
want to go beyond addition and subtraction you have to do it yourself.
That is, you would have to program a BCD math engine. But the fact remains
that I have established that the BCD data type is isolated from binary
rounding errors, and that the operations allowing exclusive BCD math to be
performed on the Intel processor DO NOT convert the number to a floating
point binary.
Thank you
I demonstrated precisely how the assembler does it.
I can only conclude that you do not understand either the math or the
assembly language example.
thank you.
I am talking about when you need 1/10, and need it accurate 10,000 or
100,000 times over.
You can't get that with binary.
And if you would just read the statements I have quoted from reputable
sources, such as MASM, Petzold, and Barron's, you would see that the
operations I have cited are not antiquated or out of use.
They are currently popular in fault-intolerant financial calculations.
The exact assembly routine and data form cited are what is used to
perform all calculations precisely to circumvent binary rounding error.
If the Patriot missile system had been using BCD data with the MASM-style
mathematical algorithm, that Patriot missile would not have missed the Scud;
it would have been on time, because the computer would not have generated
the 1/10 binary rounding error.
In that type of system and situation efficiency is not the issue;
accuracy is. And binary-to-decimal conversions are not accurate.
The following quotes and pages should be of some help to the lurkers if not
you.
"Since many devices use BCD, knowing how to handle this system is
important."
http://www.tpub.com/neets/book13/53s.htm
"Binary coded decimal (BCD) is a method for implementing lossless decimal
arithmetic (including decimal fractions) on a binary computer. The most
obvious uses involve money amounts where round-off error from using binary
approximations is unacceptable. Some early computers used BCD exclusively."
http://www.osdata.com/topic/language/asm/bcdarith.htm
"An important variation on the idea of storing cardinals in a digit by digit
fashion and then using a picture to print out the numbers with a decimal
point is to store the digits along with a decimal point position. In other
words, each stored item is thought of as a decimal fraction with a specific
number of places after the decimal. Again, the storage representation is
exact, so no rounding off errors take place."
http://www.arjay.bc.ca/Modula-2/Text/Ch17/Ch17.6.html
This is how BCD math works
http://www.site.uottawa.ca/~dhaher/ceg2131A/lecture7.PDF
"One of the most widely used representations of numerical data is the binary
coded decimal (BCD) form in which each integer of a decimal number is
represented by a 4-bit binary number (see conversion table). It is
particularly useful for the driving of display devices where a decimal
output is desired."
http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/number3.html
A military industrial application.
http://www.mitsi.com/Projects/usafbcd.htm
BCD numbering systems are used extensively in PLCs or programmable logic
controllers.
The actual instruction set showing the Intel instructions for BCD math is
found here
http://developer.intel.com/design/intarch/techinfo/Pentium/instsum.htm
There is no reason not to keep a BCD math engine isolated from the binary
data forms. It would be difficult to certify a system 100 percent accurate
in all decimal calculations without performing all calculations using BCD
data types and BCD instructions.
Sorry, Randall, but nowhere in your lengthy reply can I find any
evidence that Intel created BCD. Was my inference mistaken, then?
To be clear: you said "[Jerry Coffin thinks] that Intel created BCD so
they could perform binary math on it". To me, this suggests that you
hold the opinion that Intel did indeed create BCD, but not for the
purpose of performing binary math.
In other words, my query is about the origins of BCD itself; it occurred
to me that it was likely to pre-date Intel, so I was interested in
whether you had any evidence to support what I have inferred to be your
claim.
Misstatement on my part.
Intention was to convey that they created BCD data types for their processor
and created the calls for their use in a manner that does not convert the
value to a binary data type.
Did not intend to convey that they created BCD itself. I am pretty sure
BCD predates Intel.
If you look through the Google archives for comp.lang.asm.x86 (for
one example) you'll find that I not only read it quite well, but that
I've been writing it for years. If you look more carefully, you can
probably still track down James Vahn's Snippets collection from the
Fidonet 80xxx echo, in which you'll find more examples of assembly
code I wrote (much of it close to 10 years old now).
> Is that why you snipped out the assembly code that showed exactly how BCD
> math is done?
No -- I snipped it because it was irrelevant. You've apparently
missed the point of the "adjust" in DAA, DAS (as well as AAA, AAS,
AAD and AAM). With ALL of these, you still get the same basic
sequence of operations as if you work with BCD using the FPU: you
start with something in BCD. You do math on it in binary. You
then convert the result back to BCD. There is not a single one of
these cases in which BCD has actually contributed anything in the way
of accuracy -- all differences in the results are caused by one fact:
you're now working with integers rather than fractions.
> I showed you how addition and subtraction is done using BCD and how that
> it is isolated from the binary number system and how it corrects for binary
> rounding errors.
No, you didn't. You may not _realize_ this fact (yet), but you
really did NOT show anything of the sort. BCD didn't prevent or
correct anything.
> I demonstrated precisely how the assembler does it.
> I can only conclude that you do not understand either the math or the
> assembly language example.
What you can conclude once you realize what's going on is that both
possibilities you've given are wrong. BCD contributes nothing in any
of the examples you've given.
Okay. Try representing 1/10 AT ALL in BCD using either the FPU OR
the integer instructions. You _can't_ do it. AT ALL. It's just not
supported or allowed.
What you can do is decide to multiply every number you work with by
some constant factor, so that 1/10 comes out as an integer. If, for
example, you want results rounded to two decimal places, you can
multiply every number by 100, in which case 1/10 comes out as 10 and
.01 comes out as 1. You can then add these numbers together with no
difficulty at all. If you decide to support multiplication and
division, you have to add some shifting of your own to support it --
e.g. if you multiply 1 by 1, the result should obviously be 1. If,
however, you've scaled it by 100, then 1x1 will come out as 100.
IOW, after a multiplication you have to scale the result down by the
original scaling factor. If you do division, you theoretically scale
the result UP by the scaling factor -- in reality, to avoid
underflow, you normally do this by scaling one of the factors prior
to the division.
This, of course, brings us back to one of the comments I made a long
time ago, that when things go wrong in fixed point, they usually go
very badly and obviously wrong. For example, the intermediate result
of a multiplication in the example above is two orders of magnitude
larger than the real answer. This can obviously result in the
intermediate result overflowing whatever size of register you're
using, even though both inputs AND the properly scaled result WILL
fit.
> You can't get that with binary.
Quite the contrary. People do it with binary fixed point ALL THE
TIME. Just for example, back when most machines didn't support
floating point in hardware, _many_ of us wrote games that used
fixed-point for the graphics. In real life, most people scale by a
power of two rather than a power of 10, but the basic idea is still
roughly the same.
> The exact assembly routine cited and data form cited is what is used to
> perform all calculations precisely to circumvent binary rounding error.
You can use anything you want for any reason you want. The question
is whether IT accomplished what you wanted. The answer is that no,
by itself it did not.
Given that your initial examples used the BCD format used by the FPU,
and that you clearly were NOT aware of how it worked (since you
argued long and hard that it couldn't possibly work the way it does)
I find your claims about how you've used BCD highly questionable at
best.
> In that type of system and situation efficiency is not the issue, accuracy
> is. And binary to decimal conversions are not accurate.
It's clear that you still haven't gotten the basic point: BCD isn't
what's involved here; fixed-point IS.
[ ... ]
> In other words, my query is about the origins of BCD itself; it occurred
> to me that it was likely to pre-date Intel, so I was interested in
> whether you had any evidence to support what I have inferred to be your
> claim.
It's clear that BCD _does_ predate Intel by quite a wide margin.
Just for a really obvious example, it was supported by the IBM 360
series starting in 1964. It wasn't new then either, but that's long
enough before Intel was formed that it obviously predates Intel by a
wide margin.
[ ... ]
> Intention was to convey that they created BCD data types for their processor
> and created the calls for their use in a manner that does not convert the
> value to a binary data type.
That may have been your intention, but it's only one of the many
places where you're clearly wrong.
Or me being overly literal when reading. Thanks for clearing it up.
Sure you can.
That's got to be one of the dumbest statements I ever heard.
That is why BCD is used for "fixed point" calculations.
It's very simple, that doesn't seem to prevent misunderstanding or
ignorance though.
In your math operations that you create external to the CPU, using the
provided BCD-specific operations, you simply introduce your decimal point
and keep it constant in all evaluations.
So if you want four digits of precision the number 1 = 10000 BCD.
>
> What you can do is decide to multiply every number you work with by
> some constant factor, so that 1/10 comes out as an integer. If, for
> example, you want results rounded to two decimal places, you can
> multiply every number by 100, in which case 1/10 comes out as 10 and
> .01 comes out as 1.
Why would you do that? You would just introduce binary inaccuracies into
the algorithm. You should just use binary data types and correct for
inaccuracies.
> You can then add these numbers together with no
> difficulty at all. If you decide to support multiplication and
> division, you have to add some shifting of your own to support it --
Shifting causes inaccuracy and rounding loss.
> e.g. if you multiply 1 by 1, the result should obviously be 1. If,
> however, you've scaled it by 100, then 1x1 will come out as 100.
> IOW, after a multiplication you have to scale the result down by the
> original scaling factor. If you do division, you theoretically scale
> the result UP by the scaling factor -- in reality, to avoid
> underflow, you normally do this by scaling one of the factors prior
> to the division.
>
You are making it a lot more complex than it is.
> This, of course brings us back to on of the comments I made a long
> time ago, that when things go wrong in fixed point, they usually go
> very badly and obviously wrong. For example, the intermediate result
> of a multiplication in the example above is two orders of magnitude
> larger than the real answer.
That's only because you don't understand fixed point math or the data
system.
> This can obviously result in the
> intermediate result overflowing whatever size of register you're
> using, even though both inputs AND the properly scaled result WILL
> fit.
>
> > You can't get that with binary.
>
> Quite the contrary. People do it with binary fixed point ALL THE
> TIME.
Binary fixed point and BCD are not the same thing.
Binary fixed point is simply a way of representing numbers without bit
shifting.
All the inaccuracies previously discussed are part of that system.
> Just for example, back when most machines didn't support
> floating point in hardware, _many_ of us wrote games that did used
> fixed-point for the graphics. In real life, most people scale by a
> power of two rather than a power of 10, but the basic idea is still
> roughly the same.
>
Sure, but binary fixed point has nothing to do with BCD.
> > The exact assembly routine cited and data form cited is what is used to
> > perform all calculations precisely to circumvent binary rounding error.
>
> You can use anything you want for any reason you want. The question
> is whether IT accomplished what you wanted. The answer is that no,
> by itself it did not.
It should be obvious that when representing 1 as 10000 in BCD 1/10 = 1000.
And that 1/10 is an absolute value. It is not rounded. To the simplest
mind this should make sense.
>
> Given that your initial examples used the BCD format used by the FPU,
No, my initial examples did not.
You misunderstood.
> and that you clearly were NOT aware of how it worked (since you
> argued long and hard that it couldn't possibly work the way it does)
No, I did not.
I argued that the processor provides the instructions. I provided the MASM
instructions that do the math.
You apparently misunderstood what I was saying.
You, on the other hand flatly insisted that there were no instructions,
NONE, over and over again that performed BCD math in its pure decimal form.
Then when clearly proven wrong, you stand back and say, "I knew that."
Sure you did.
> I find your claims about how you've used BCD highly questionable at
> best.
>
That does not surprise me.
> > In that type of system and situation efficiency is not the issue,
> > accuracy is. And binary to decimal conversions are not accurate.
>
> It's clear that you still haven't gotten the basic point: BCD isn't
> what's involved here. fixed-point IS.
>
It's clear that you still fail to understand the difference between binary
fixed digit and BCD.
Exactly. All he says is
>Data is very useful in this format for at least two good
>reasons: easy adjustment to the ASCII character set,
>and calculation with large floating point numbers with
>very little roundoff error. The latter is possible up to any
>size values, making BCD arithmetic especially useful.
He then notes that use of the floating point coprocessor limits the size
of values, so clearly the last statement is intended to apply only to
data stored and processed in integral form. In this case, BCD has
nothing but _disadvantages_ compared to integral or fixed point binary
arithmetic, as it is both slower and less compact.
As I pointed out at the start, BCB or Binary Coded Binary has no special
mathematical disadvantages over BCD. In mathematics, 2 is as good a
number base as 10, so any advantages of BCD are restricted to input and
output, and are trivial at that.
Consider the Patriot missile: suppose the system clock had produced
ticks every 16th of a second rather than every tenth. Then the
incompetent use of BCD arithmetic would have led to the same error as
the incompetent use of BCB arithmetic did in this instance, while BCB
would have worked. Or if it had been every 60th of a second, both would
have produced errors.
>
>Wrong. It does away with binary rounding errors. That is the point. That
>is the statement from the MASM manual. Binary can not represent certain
>numbers accurately. Therefore, they are rounded. Therefore, binary is not
>accurate; regardless of how much precision you use, 1/10 will never absolutely
>equal 1/10 in a binary system. BCD does away with binary rounding errors.
Of course it can be represented. Just use fixed point arithmetic, in
which every value represents a number of tenths.
In the case of money, represent all amounts as an integral number of
pennies. That also works for currencies that don't happen to have
smaller denominations that are a negative power of ten times the larger
ones.
> As I already pointed out, and quoted the Intel
>> material to support, when you load a BCD value into the FPU, it ends
>> up as a binary floating point value.
>
>No, it doesn't. Not the way you are thinking. You apparently think that
>Intel created BCD so they could perform binary math on it. That has got to
>be the most ridiculous statement. Why? Why not just use binary numbers?
>What is the purpose of a BCD if all you are going to do is convert it from
>its abstract value to a binary real? That is foolish. I couldn't believe
>that that is what you meant initially, but now that you insist on it I see
>how utterly lost on this you are.
No, Intel created the FLOATING POINT COPROCESSOR so they could perform
floating point operations quickly.
They then created functions to translate other data representations into
a form that works on it.
One of these, a rather unimportant one, was for BCD.
IIRC, neither would have produced errors if anyone had bothered to read
the manual, which noted that a daily reboot was required. Fog of war,
and all that.
[ ... ]
> > Okay. Try representing 1/10 AT ALL in BCD using either the FPU OR
> > the integer instructions. You _can't_ do it. AT ALL. It's just not
> > supported or allowed.
>
> Sure you can.
No you can't -- you can _only_ represent integers with either one.
You can use those integers as fixed-point numbers, but you have NOT
represented .1 (or any other fraction) _in_ BCD. You've represented
an integer in BCD, and decided to interpret that particular integer
as indicating a fraction.
> That's got to be one of the dumbest statements I ever heard.
Yes -- when I said the FPU worked with BCD by always converting it to
floating point, doing the work, then converting the result back to
BCD, THAT was one of the dumbest things you'd ever heard of -- but
it's exactly how things really ARE. It isn't stupid either: it works
very nicely.
> That is why BCD is used for "fixed point" calculations.
Let me see now: the VERY first post I put into this thread said that
you were talking about BCD, but _trying_ to get at fixed-point. You
replied and said that no, I had no clue, and that fixed-point had nothing
to do with anything.
Now you're starting to see a LITTLE of the light: you're starting to
realize that fixed-point really IS a necessary part of the equation.
Eventually, you're going to realize that fixed-point is THE necessary
part, and BCD has been a red herring all along.
> In your math operations that you create external to the CPU using the
> provided BCD specific operations you simply introduce your decimal point and
> keep it constant in all evaluations.
IOW, you use fixed-point, which is what I've said was important from
the VERY FIRST POST I made to this thread. If you skip the BCD part
and use fixed point, you'll achieve ALL the same things WRT accuracy,
rounding, etc.
BCD, as I've said from the very beginning of this thread, can be
worthwhile when (for example) you're reading a number in, doing very
little work on it, and then writing it back out. In that case, the
time and effort you save in converting it to fixed-point binary can
outweigh the extra work you do on individual operations.
> Why would you do that? You would just introduce binary inaccuracies into
> the algorithm. You should just use binary data types and correct for
> inaccuracies.
What in the world do you ever _think_ you mean by this?
> > You can then add these numbers together with no
> > difficulty at all. If you decide to support multiplication and
> > division, you have to add some shifting of your own to support it --
>
> Shifting causes inaccuracy and rounding loss.
No, shit-for-brains, it does not. If you'd bother learning a little
bit about what's being discussed before putting in your -2 cents
worth, you'd realize that.
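For concreteness, here is a sketch of my own (not from any poster) of what the shifting looks like in a Q16.16 binary fixed-point format: values are integers scaled by 2**16, multiplication doubles the scale, and the corrective right shift done in a wide intermediate discards nothing the format could have held anyway. The names `fx`, `fx_mul`, `fx_div` are mine.

```python
# Illustrative sketch (mine, not from the thread): a Q16.16 binary
# fixed-point value is just an integer scaled by 2**16.  Multiplying
# two scaled values doubles the scale, so the product is shifted
# right 16 bits to renormalize.
SCALE_BITS = 16

def fx(num, den=1):
    """Build a Q16.16 value from an exact fraction (truncating)."""
    return (num << SCALE_BITS) // den

def fx_mul(a, b):
    # Python ints are arbitrary width, so the product cannot overflow
    # before the corrective right shift.
    return (a * b) >> SCALE_BITS

def fx_div(a, b):
    # Pre-shift the dividend so the quotient keeps the Q16.16 scale.
    return (a << SCALE_BITS) // b

assert fx_mul(fx(5, 2), fx(4)) == fx(10)   # 2.5 * 4 == 10 exactly
assert fx_div(fx(10), fx(4)) == fx(5, 2)   # 10 / 4 == 2.5 exactly
```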
[ ... ]
> That's only because you don't understand fixed point math or the data
> system.
Quite the contrary. I understand it perfectly -- I've been using it
for years. You haven't a clue of what you're talking about.
> Binary fixed point and BCD are not the same thing.
Gee really? Next I'll bet you're going to tell us that there's a
difference between day and night, or maybe something even more
obvious, like the HUGE difference between you and somebody
intelligent.
> Binary fixed point is simply a way of representing numbers without bit
> shifting.
> All the inaccuracies previously discussed are part of that system.
Wrong, wrong WRONG! You haven't got a clue in the world yet if you
honestly believe that.
> No, my initial examples did not.
Yes, they did.
> You misunderstood.
No I didn't -- you cut your own throat when you quoted the Intel
documentation, because it's _trivial_ to prove beyond any shadow of a
doubt that your quotes were about the BCD data type supported by the
FPU.
> > and that you clearly were NOT aware of how it worked (since you
> > argued long and hard that it couldn't possibly work the way it does)
>
> No, I did not.
Yes, you did. In fact, you said that the way it works was the
stupidest (or did you say "craziest"?) thing you'd ever heard.
> You apparently misunderstood what I was saying.
You could only wish -- that way you could TRY to deny what you really
said rather than admit that you were wrong.
> You, on the other hand flatly insisted that there were no instructions,
> NONE, over and over again that performed BCD math in its pure decimal form.
That's true. It's true that I said it, and it's also a true
statement about the processor.
> Then when clearly proven wrong, you stand back and say,"I knew that."
Quite the contrary. So far, you haven't proven me wrong on even the
most trivial point.
> > It's clear that you still haven't gotten the basic point: BCD isn't
> > what's involved here. fixed-point IS.
> >
> It's clear that you still fail to understand the difference between binary
> fixed digit and BCD.
Quite the contrary: I understand the difference completely: you
simply don't understand that what's producing the accuracy you're
interested in is NOT the use of BCD, but the use of fixed-point.
Binary fixed-point and BCD fixed-point will produce identical
answers; BCD simply makes it slower and more difficult to get them.
-
true
> Binary fixed point is simply a way of representing numbers without bit
> shifting.
What???
> All the inaccuracies previously discussed are part of that system.
Good grief man.
I see that all the problems on this thread come from some strange definition
of the phrase "binary fixed point" that you are clinging to like a drowning
man to a life buoy.
> It should be obvious that when representing 1 as 10000 in BCD 1/10 = 1000.
> And that 1/10 is an absolute value. It is not rounded. To the simplest
> mind this should make sense.
I have a binary fixed point type I use that stores numbers as integer
fractions of 65536. Of course this means that certain decimal numbers cant
be represented, but hell, when making a binary fixed point type I can define
the base to be whatever I want, and as a result store losslessly numbers
that are integer multiples of that base.
Say I need to accurately store and work with numbers to 3 decimal places. I
simply define my base binary type to be an integer multiple of 1/1000ths.
So, in binary, with the msb on the left, if I was to store 0.001 I would
store:
00000000 00000000 00000000 00000001
0.1 would be exactly represented by:
00000000 00000000 00000000 01100100
and, 1.0 would be stored:
00000000 00000000 00000011 11101000
All possible decimal numbers that are integer multiples of 0.001 up to a
value of about 4 million can be stored exactly with no loss using this
system. And completely lossless calculations can be performed on them.
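A sketch (my code, assuming Chris's base of 1/1000) of the scheme just described; the three stored bit patterns asserted below match the ones he lists.

```python
# Quantities are stored as integer counts of 1/1000 of a unit, so
# every decimal value with up to three places is held exactly.
BASE = 1000                       # one unit = 1000 thousandths

def to_mils(units, thousandths):  # e.g. 1.101 -> to_mils(1, 101)
    return units * BASE + thousandths

# The three stored bit patterns listed above:
assert to_mils(0, 1) == 0b00000001             # 0.001
assert to_mils(0, 100) == 0b01100100           # 0.1
assert to_mils(1, 0) == 0b0000001111101000     # 1.0

# Addition is plain integer addition and is lossless ...
assert to_mils(0, 100) + to_mils(0, 200) == to_mils(0, 300)
# ... whereas binary floating point cannot even store 0.1 exactly:
assert 0.1 + 0.2 != 0.3
```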
The ONLY advantage BCD numbers have over binary fixed point is the ease of
converting the number to a decimal string - either for display, or storage
or use in a sql database. BCD numbers are easy to convert to strings as each
digit can simply be read out of the corresponding nibble.
Converting
00000000 00000000 00000011 11101000 to 1.0, however, requires a non-trivial
process of repeatedly dividing the number by 10 and reading the remainder
off each time as "the next digit". Especially when dealing with arbitrary
width numbers where the number spans several machine words, a rather
expensive long division algorithm needs to be performed.
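The conversion just called "non-trivial" can be sketched like this (my code, assuming the 1/1000 base above): repeatedly divide by 10 and read each remainder off as the next digit, least significant first.

```python
# Turn a count of 1/1000ths back into a decimal string by repeated
# division by 10, collecting remainders least-significant-digit first.
def mils_to_str(n, frac_digits=3):
    digits = []
    while n or len(digits) <= frac_digits:
        n, r = divmod(n, 10)
        digits.append(chr(ord('0') + r))
    digits.insert(frac_digits, '.')    # DP sits frac_digits in
    return ''.join(reversed(digits))

assert mils_to_str(1000) == "1.000"
assert mils_to_str(1) == "0.001"
assert mils_to_str(4000000123) == "4000000.123"
```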
Now, that's what I call binary fixed point. And it's just as competent as BCD
at storing numbers accurately.
The definition of "binary fixed point" that you are obsessing over seems to be
some quasi floating point type with an implied (i.e. constant) exponent - why
anyone would design or use such a type - never mind document it - is not known
to me.
.Chris
You guys are idiots.
I'm outa here
And I feel I have presented my case as thoroughly as possible.
Right now I feel like I'm trying to teach a pig to sing.
So that's it. I've had it. This subject is exhausted. I've proven my
point several times from several angles. If you don't get it you never
will.
Just as an aside, I would like to share an interesting book on
cryptography with you and anyone who has been observing this dialog.
The book is:
"In Code, A Mathematical Journey," Sarah Flanery with David Flannery,
Published in the US by Workman Originally by Profile books, (c) 2000,
ISBN0-76112384-9
Has nothing what so ever to do with the subject at had and is offered with
no secondary messages what ever.
You gentlemen have a nice life. I do not see any noble purpose in
explaining BCD math to people.
Thank You
The conclusion being that it is not that a computer scientist or
programmer cannot create numbers to his heart's content; it is when that
programmer is presented a real-world problem where he has to deal with
reality.
Games are not reality.
If you had read the initial post you would see that the discussion started
with a post concerning rounding errors in floating point calculations.
My only assertion has been that BCD corrects for these rounding errors.
To which you seem to agree.
I could see how representing thousandths as a 32 bit unsigned integer
would work.
The problems with your number system though are as follows:
1. There are no standardized processor-level instructions or data formats.
2. The magnitude is very limited. Having 4 million as your top end is
unsatisfactory for many financial applications.
3. Due to the use of unstandardized routines, you have to design them
yourself and you are never absolutely sure what happens in the processor; it
would be difficult to certify this data type for mission-critical
applications.
So while you are right mathematically, your number system presents problems
that make it inferior in practical application to BCD.
Nice try though.
I don't know about that. There are a number of libraries "out there" - either
for uses in calculating fractals, or cryptographic applications, that allow
operations on arbitrary width binary fixed point numbers.
At the tertiary education level, the difference is in the business
information system streams that yield the programmers who go and work on
bank and financial database systems - that prefer BCD, and the computer
science stream that yields programmers that ... write ... errr, games,
Operating systems and shareware :)
The more informed people in both camps know that floating point types are
lossy and not to be used in certain situations; in the situations where
floats cannot be used, the BIS people tend to use BCD, the CS people use a
fixed point binary system like I described, binary coded hex if you will :)
I'm sure if you reread the posts of Jerry Coffin - and of the other people
you so vehemently opposed in this thread :) - you will find that they were
recommending the system I described, NOT a "fixed point float" type. Their
opposition to BCD was NOT a claim that a fixed point float can be made to
work - clearly it cannot, for all the reasons you described - but that a
normal binary number with a fixed base - say 1/1000th of a unit - would
suffice.
People who do not work with financial systems find BCD to be archaic, clunky
and frankly more confusing and difficult to work with than a machine word
representing quantities of some chosen base fraction. Plus the utter lack of
support for BCD by our commonly used languages: pascal, c, c++ - obviously
in asm you can activate and use the BCD arithmetic :) but that's so obscure
that despite years of assembler experience I have no idea how.
However, that is not what the argument was over.
The argument was over whether BCD data types and math are clearly more
accurate than binary data types and math. And whether those data types and
math are clearly supported in the IA 32 system.
I have repeatedly proven both to be so.
Fixed point has only been brought up in the last few comments.
While your definition of fixed point is accurate and valid it is not the
only definition. Nor is it the standard definition.
The standard definition is simply that the point in the data type is fixed
and therefore all math operations that result in numbers smaller than that
which the number value can represent are lost.
As you have said, fixed point systems can be any base.
Essentially what you demonstrate is a fixed point decimal system using
binary representations.
Values less than 1/1000 are lost.
One thing about your data type though is that you would have to implement
the same rounding and math rules that you would have to with BCD.
Of course, you would have to do the same thing with a VB currency data
type too.
And while your solution solves the situation as well and is more efficient
than BCD it is besides the point and not part of the argument.
But thank you.
I found it very interesting.
I have not seen the system you described in these posts until now.
I have been argued with since the beginning that literal BCD math is not
supported in the Intel architecture.
And as far as exactly how, I provided that as well.
No, your point is completely different than Jerry's.
You give him too much credit.
You are probably right.
Jerry probably was arguing your point all along.
But I sure got a lot of information out of him that I probably wouldn't have
gotten otherwise.
And though I understood the math behind what you were saying, until you
presented fixed digit in your context I did not understand what Jerry
meant when he used the term. What I thought was fixed digit would probably
be more accurately described as a fixed binary value, or a constant where the
exponent value of the float would be declared constant. Probably a
misconception on my part.
The problem with the Patriot missile system was probably that the
one-tenth-of-a-second value used was programmed as a constant.
There are two places where BCD is used quite a bit:
COBOL, due to financial calculations.
Programmable logic controllers, where it facilitates using the numbers in
calculations and representing them in ASCII with the least amount of
processing.
(For the non-electrical types, a programmable logic controller is an
electrical device that is used for controlling commercial/industrial
equipment with microprocessors.)
And for the benefit of all, my background is in electrical/electronics
control. I am currently studying for a BS in CS, but I have a way to go.
Thank you Jerry,
And I apologize for calling you an idiot.
I appreciate the discussion and I have learned a lot.
Hope I didn't frazzle you too much.
The post from Chris helped a lot.
I just wasn't familiar with the number system referred to, and I am familiar
with BCD and the sources I quoted. Had a few ideas and threw them out
there.
Sometimes converting English to math is not easy. After reading the
explanation from Chris and working it out on paper, I see now what you were
referring to, and that essentially in your argument for fixed digit you are
and were correct.
In your argument against BCD I think there were a few mistakes made, but
nothing of any significance.
So again, thanks for the education, and I hope that I haven't offended you
too badly.
Correct.
For instance, you create an integer data type and call the ones position a
tenth or a hundredth or whatever.
Sure.
Not arguing that.
My initial argument is that for 1/10 to be represented it had to be
abstracted. This, to me, is simply a form of abstraction. You are giving
an integer data type an imaginary decimal point.
So yes the idea works and it is more efficient than BCD.
But my original proposition is still true.
BCD corrects for binary data type errors and accurately represents decimal
values. The binary number system can not represent 1/10 accurately and the
only way you can represent one tenth accurately is to use an abstraction.
For instance.
You argued that absolute 1/10 can not be represented at all by a binary
number.
I argued the same.
I argued that if 1/10 is to be represented it has to be in an abstract
manner.
You pretty much argued the same.
In that:
You argued that BCD only represents 1/10 accurately because it represents
numbers at a decimal integer level. Your argument was that you can
accomplish the same thing by using an integer data type and imposing a
decimal point where ever you like, but consistently, in that integer data
type. This, to me, is an abstraction. Because you are imposing an idea on
the data that is not part of the data. You will have to enter programming
later to accommodate this abstraction because the language and processor do
not recognize it.
Being that an integer data type is represented in the computer by a binary
data type you were completely correct when you said I was wrong in saying
that 1/10 can not be represented by a binary data type. My assertion should
have been that 1/10 can not be represented by a binary number.
I think that the main reason that this has developed the way it has is
that you have essentially been defending fixed point, which I did not
understand your, or the, definition of.
The merits of BCD over fixed point would have to be determined on an
application by application basis. The bottom line is though that they both
correct for binary inaccuracy.
So if that is what you are saying, okay. I agree.
> The thing to realize here, as I've stressed a number of times, is
> that Intel has restricted the possible BCD inputs to those which
> can be perfectly represented in floating point format.
That seems an extraordinary statement to me.
"10" would be valid BCD would it not? Yet it cannot be represented
precisely in FP.
> This is one possible way of performing operations on BCD numbers.
> It's an inefficient method that was used primarily when you couldn't
> depend on the existence of an FPU...
Or if you require high-precision math with no FP errors.
--
|_ CJSonnack <Ch...@Sonnack.com> _____________| How's my programming? |
|_ http://www.Sonnack.com/ ___________________| Call: 1-800-DEV-NULL |
|_____________________________________________|_______________________|
> Okay. Try representing 1/10 AT ALL in BCD...
digits = "01" dp = 1
> What you can do is decide to multiply every number you work with by
> some constant factor, so that 1/10 comes out as an integer.
Not exactly, but in a sense.
With BCD-like processing.... Hmmmm, I think one thing that's happening
to you two is you are using different words not fully understood by the
other. I think I understand the point you're both trying to make, but
neither of you seem to be making much attempt to actually *see* what the
other guy is saying. Anyway, let me define some terms as I use them...
floating point: a base-2 binary representation of a number either the
same as, or designed similarly to, the IEEE FP spec. Specifically, a
format with three *base-2* elements: sign, mantissa and exponent. One
feature of FP is that the entire numeric value is expressed in a SINGLE
base-2 value. That value is normalized to the form "x.nnnnnnnn" and
the "x" (since in binary it's always a 1) is discarded. Another important
characteristic of FP is that the number of digits (precision) has a limit
based on how many bits are used to express the mantissa. Further the
absolute range (high/low) is limited by the number of bits used to express
the exponent.
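The three fields just described can be pulled out of a real 32-bit IEEE single; this is a sketch of mine using Python's struct module, decoding 0.1 to show both the field layout and the fact that 0.1 has no exact binary representation.

```python
# Decode the sign / exponent / mantissa fields of an IEEE single.
import struct

bits = struct.unpack('>I', struct.pack('>f', 0.1))[0]
sign = bits >> 31
exponent = (bits >> 23) & 0xFF    # biased by 127
mantissa = bits & 0x7FFFFF        # the hidden leading 1 is not stored

assert sign == 0
assert exponent == 123            # 123 - 127 = -4, i.e. 0.1 = 1.6 * 2**-4
assert mantissa != 0              # 0.1 is not a power of two
# ...and the nearest single to 0.1 is not exactly 0.1:
assert struct.unpack('>f', struct.pack('>f', 0.1))[0] != 0.1
```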
fixed point: an "integer" way of representing decimal numbers. Not a
defined standard, but many exist. Visual BASIC has Currency and Decimal,
for example. Implementations vary; in some the location of the decimal
point is always the same (e.g. xxxxxxxx.yyyy); in others, some other
value specifies the location of the dp.
BCD: has two flavors: "ASCII-BCD" (or EBCDIC-BCD) and "binary BCD". In
the first flavor (not used much anymore), digits are stored in their
character code representations (e.g. a 4 is 0x34). This makes it very
fast to do I/O, but very slow to do anything else. The latter flavor,
which is where RJ is coming from, stores base-10 digits as their BINARY
representation, but--AND THIS IS THE IMPORTANT BIT--each digit stands
alone. For example, 46 would be 0x04 0x06.
Again, there's no universal standard, and implementations vary. In some,
there's a fixed number of fractional digits; in others, there's some way
of specifying how many there are. What can be very nice about BCD (the
binary flavor) is that one can work on huge, huge (like 200-digits long)
numbers with *absolute* precision. This is doubly impossible with FP
(can't express that many digits and can't go that high).
Why would one want to do that sort of processing? One word: encryption.
Many modern encryption schemes use 200+ digit numbers, and the math must
be *absolutely* precise. I've never looked under the skirts of the Java
(or any other) BigNum library, but I'd bet you a bagel it's done with
binary BCD.
Okay, now where was I....
> What you can do is decide to multiply every number you work with by
> some constant factor, so that 1/10 comes out as an integer.
With BCD-like processing you're implementing the exact same math as
you were taught in grade school. For instance, you add by starting
with the least-significant digits and working your way "upwards".
You can even implement good old long division (although better methods
exist I understand).
Fast? No, but sometimes the only thing available.
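The grade-school addition just described might be sketched like this (my code, in the binary-BCD spirit of one decimal digit per array slot, least significant digit first, so width is limited only by memory):

```python
# Digit-at-a-time decimal addition with carry, exactly as taught in
# grade school, starting from the least significant digit.
def bcd_add(a, b):
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) \
          + (b[i] if i < len(b) else 0) + carry
        out.append(s % 10)            # this digit
        carry = s // 10               # carry into the next column
    if carry:
        out.append(carry)
    return out

# 46 + 57 = 103 (digits stored least significant first)
assert bcd_add([6, 4], [7, 5]) == [3, 0, 1]
```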
>>> Okay. Try representing 1/10 AT ALL in BCD using either the FPU OR
>>> the integer instructions. You _can't_ do it. AT ALL. It's just not
>>> supported or allowed.
>>
>> Sure you can.
>
> No you can't -- you can _only_ represent integers with either one.
Sorry, that's just not so.
I think what's throwing you is that you don't realize that one can
implement math functions that process *discrete* digits. It is not
necessary to represent 1/10. It is sufficient to represent "0" "1"
and have some way of knowing where the DP goes.
I think a point of confusion here is how you're expressing yourself.
Above you write "Try representing ... using either the FPU OR the
integer instructions."
One doesn't represent numbers with instructions. One represents numbers
with data. One processes data with instructions.
I don't know if the Intel FPU supports native binary BCD instructions,
but I do know the x86 chips can certainly process it, and that they
include instructions to make it easier (and if they didn't, even the
basic instruction set can handle it).
> Now you're starting to see a LITTLE of the light: you're starting to
> realize that fixed-point really IS a necessary part of the equation.
If you mean that in the sense of: "I have a 64-bit number that represents
some value that is--in reality--multiplied by some factor of 10", then
no, you are wrong. Sorry.
If you mean that in the sense of: "I have a string of BCD digits that
represents some value and there's a DP located right HERE", then yes,
that's exactly the point.
> Binary fixed-point and BCD fixed-point will produce identical
> answers; BCD simply makes it slower and more difficult to get them.
Yeah?
Okay, do this:
2243089872345098273489273487234552334897349865823469.283469283569234
divided by
23498234904984562304098734598345382374982.37000000345349865982369823
By any FP standard I'm aware of: impossible
By nearly all fixed-point binary formats I know: impossible
With BCD: slow, but fairly trivial
> However,
> I would appear that the fixed point data type in VB is based on BCD.
Incorrect.
Currency: 64-bit scaled integer (always four decimal digits)
Decimal: 14-byte structure containing (among other things) a 64-bit
integer, a sign value and a decimal point value (so can have
any number (within limits) of decimal digits.
>> Pure nonsense.
>
> No very simple math.
> BCD is an integer abstraction.
Meaningless. All this stuff is an abstraction to the same degree.
(Although, I'd say it's actually pretty concrete, myself.)
>> Some machines do BCD in hardware.
> True, that doesn't make it any less an abstract.
Seems pretty damned concrete to me.
>> Some machines do BCD in software. Some use a combination.
>> Likewise, some do binary work in hardware, software or (most
>> often) a combination of them.
>
> The only binary software work is done in VMs.
This sentence doesn't parse. At all.
>> The whole basic idea of a Turing machine is that such differences as
>> BCD vs. binary representation of numbers do NOT really matter.
>
> A Turing machine has to do with processing strings and compiler design.
> It has nothing to do with what we are talking about. I just finished a
> course on the theory of automata. So that one is really out there. A
> turing machine is a theoretic entity. That is why you study them in
> computational 'theory' classes.
You apparently didn't get much from that class. The theory behind Turing's
work underlies ***EVERYTHING*** we do. It is relevant **everywhere**. How
can you not get this if you're a CS student?
>> binary computer is capable of performing precisely the same
>> calculations as a decimal or BCD computer and with the correct
>> programming both will arrive at precisely identical answers.
>
> This is simply false. Otherwise BCD would not have been developed.
Wrong. The other guy is right, but there is an implicit thing behind
it (which you wouldn't get, since you dismissed the Turing machine).
On a theoretical level (that is, assuming infinite memory and assuming
sufficient implementations of FP and BCD), the fact is simply this:
There is *inaccuracy* in any mathematical processing. The base does
not matter. The implementation (FP, BCD, etc) does not matter. The
answer to one-divided-by-three cannot be represented in a single value.
At some point you have to say, "Okay, it's accurate *enough*." And stop.
As people have told you over and over in this thread, ALL implementations
have their strengths and weaknesses. You are correct, BCD has some
strengths that FP does not. And vice versa.
> Please, I am 20 semester credits shy of a BS in CS.
Please, I've been a working programmer for over twenty years. Some of
the very *worst* programmers I ever met were CS babies. Don't think
your 40 credits means squat. It don't.
> I have been through assembly and have built gates using nand circuits
> up to multiplexors.
Oooooh. Wow. Some of us have been doing that since before you were
a gleam in your parents' eyes. Some of us have designed entire systems
with gates and embedded assembly.
> So, please, I have been there, done that, bought the tee shirt and
> the coffee mug.
No. You haven't. You've just been born. You're a baby programmer.
If you can get this through your head, get that chip off your shoulder
and realize how vastly much you have to learn, you might actually be
useful one day.
> I as a programmer can build a math engine using BCD data types that
> will work on any processor.
Maybe you could. I HAVE.
> Big endian little endian issues are not there. Most of your overflow
> issues go away as well.
True on the first.... On the second,... it depends. Not all packages
could handle, say: 5923469.8293846 ^ 234342
>> Actually, you HAVE achieved one thing quite well: you've proven that
>> cluelessness is NOT restricted to people who post via AOL --
>
> I don't see what you not being on AOL has to do with this.
AOLers are widely regarded (usually correctly so) as internet clueless,
and--by false extension--generally clueless.
> Listen I realize that you can't understand what I am talking about.
> It's nothing to be ashamed of. You can go to college for a few years
> and you will get better.
LOL!! Arrogant *and* clueless. Have you considered politics? ;-)
> Well, I thought what I said was pretty clearly written.
If English is your first language, then I'd have to say not. One never
knows on the internet if people are fluent in English or not. I was
assuming it wasn't your first language...
>> You're clearly too young to have had any experience with PL/I.
>
> But I would think that we would want to discuss a language that is
> a little newer than 1965. But even people then thought that language
> was garbage.
Hardly. It was, by some, regarded as a little too powerful and complex
and hard to learn. Now we have Ada. ;-) (I'm KIDDING!!!)
> Why would you need or want to fix the decimal point in BCD?
Speed. Otherwise you have to adjust for differences when you implement
your math functions. E.g. if you have "123.456" and "7.89" and you
want to simply add them, you need to (1) add a zero to the second number,
or (2) write your add function so it's DP-aware. Either method takes
time.
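The alignment step just described can be sketched (my code): pad the shorter fraction with zeros so both numbers share one decimal-point position, then add as integers.

```python
# Align "123.456" and "7.89" to a common number of fractional digits,
# then add them as plain integers.
def align(s, frac_digits):
    whole, _, frac = s.partition('.')
    return int(whole + frac.ljust(frac_digits, '0'))  # drop DP, pad right

a, b = "123.456", "7.89"
fd = max(len(a.partition('.')[2]), len(b.partition('.')[2]))  # 3 places
assert align(a, fd) + align(b, fd) == 131346                  # i.e. 131.346
```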
> Clearly MS thinks the single and double precision types in VB
> correspond very closely to same types used in VC++.
In fact, they are identical. Both languages use the IEEE standard
for 32- and 64-bit floating point datatypes.
> "Currency" type in VB was the one that was different.
> To at least a limited degree, that's more or less correct.
It's a 64-bit integer with an assumed fixed point of four. Seems
completely different from any FP type to me!
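A sketch of such a Currency-style type (my illustration): an integer scaled by 10000, with the fixed point assumed four decimal digits in, which is structurally nothing like an FP type.

```python
# Currency-style fixed point: integer count of 1/10000ths of a unit.
SCALE = 10000

def currency(whole, frac4=0):    # currency(19, 9900) represents 19.9900
    return whole * SCALE + frac4

# Money arithmetic stays exact: 19.9900 + 0.0100 == 20.0000
assert currency(19, 9900) + currency(0, 100) == currency(20)
```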
The posts by me in this news group on this topic were not intended to debate
fixed digit types such as this but to state various aspects of BCD.
1. BCD is a business and industrial standard of representing numeric data
that is lossless representing decimal values.
2. BCD is represented in the Intel processor with both a data type and
specific instructions that facilitate corrections when performing addition and
subtraction on nibbles or single bytes of BCD data.
This allows for non binary math to occur on large BCD numbers.
Of course, if data is consistently stored in BCD binary operations using
sufficiently wide data types can be performed without loss of accuracy.
Each time the data is stored in BCD it in effect truncates the number to a
finite decimal value. For example, on a series sum such as a probability sum,
each addition would have to be stored in and then retrieved from a BCD data
type. Otherwise you would have to calculate the possible error factor over a
series of operations using all possible numbers.
The advantage to this, though, is that you could perform certain floating
point calculations that would not be possible, or extremely hard to
implement, with an externally programmed BCD engine.
3. It is understood that representing a decimal number with an abstract
point using an integer data type is also accurate. The two points in excess
(thousandths and ten thousandths), in the VB currency type are to
facilitate accounting rounding rules.
I understand that I have made a few assumptions and mistakes in this
thread.
It would be nice however, if some mistakes that were made by others would
be acknowledged.
"Programmer Dude" <cjso...@mmm.com> wrote in message
news:3BE9988E...@mmm.com...
Where'd you get your degree and when?
What are your qualifications other than you can fool someone who pays you
into thinking you know what you are doing?
> The posts by me in this news group on this topic were [...] to state
> various aspects of BCD.
Except either you've gotten some of it plain wrong, or you're not
expressing yourself very well. To wit:
> 1. BCD is a business and industrial standard of representing numeric
> data that is lossless representing decimal values.
Only in certain domains. It *IS* lossy when you try to express 1/3.
NO single-value data type CAN express it without loss.
> 2. BCD is represented in the Intel processor with both a data type and
> specific instructions that facilate corrections when performing addition
> and subtraction on nibbles or single bytes of BCD data.
That's true enough, but...
> This allows for non binary math to occur on large BCD numbers.
Non-binary? Poor choice of words and inexcusable for a professed
CS student. In the CPU in question, ALL MATH is binary. With BCD,
math operations consist of multiple binary operations performed on each
digit along with those correction instructions applied afterwards
(which themselves need to be followed by the appropriate adjustment;
the instructions themselves just set flags, IIRC).
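The "binary add plus correction" idea can be sketched in software (my code, modeled on what the x86 DAA instruction does after an ordinary binary `add` of two packed-BCD bytes; the exact flag behavior of the real instruction is simplified here):

```python
# Two packed-BCD digits per byte, added with plain binary addition,
# then adjusted whenever a nibble has exceeded 9.
def packed_bcd_add_byte(a, b):
    s = a + b
    if (s & 0x0F) > 9 or ((a & 0x0F) + (b & 0x0F)) > 0x0F:
        s += 0x06                    # correct the low nibble
    if (s & 0xF0) > 0x90 or s > 0xFF:
        s += 0x60                    # correct the high nibble
    return s & 0xFF, s > 0xFF        # result byte, decimal carry out

# 0x46 + 0x57 encode decimal 46 + 57; corrected result: 03, carry 1
assert packed_bcd_add_byte(0x46, 0x57) == (0x03, True)
```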
> Of course, if data is consistently stored in BCD binary operations
> using sufficiently wide data types can be performed without loss of
> accuracy.
No datatype is wide enough to store 1/3. Or pi. Not accurately.
Only accurately *enough* (but that applies to all data types).
> Each time the data is stored in BCD it in effect 0s the number to
> a finite decimal value.
Which directly contradicts your accuracy assertion. Again: 1/3 = ???
> 3. It is understood that representing a decimal number with an
> abstract point using an integer data type is also accurate. The two
> points in excess (thousandths and ten thousandths), in the VB currency
> type are to facilitate accounting rounding rules.
And it should be obvious that they don't ensure complete accuracy, they
just move the *error* down to a point of acceptability.
BCD is great for some stuff, terrible for others.
FP is great for some stuff, terrible for others.
One hopes they've at least taught you that The Ultimate Answer to
*any* computer science question is: "It depends..."
I can't believe how ignorant and arrogant you are. You, a computer science
student, are arguing with people who are probably some of the best in their
field.
I am a mere electrical engineering student, and I got the "NAND gate and
multiplexor" T-shirt and coffee mug in the first year of my degree. Despite
the fact that I am about to embark on a project whose purpose is to design
state machines given only circuit input and output patterns, I am nowhere near
qualified to engage in an intellectual discussion with these professionals.
You don't seem to have the maturity to discuss BCD with these people, let
alone the knowledge.
Once you leave university, you will learn that what you have been taught is
not even 1% of what you *need* to know in the real world.
Just a nit... define a new data type called BCT (Binary coded Trinary).
Each digit is 2 bits representing {0,1,2} and a number is formed just
like BCD only base 3.
In this system 0.1 = 1/3. Voila, an exact representation of 1/3. Just
don't try to make it represent 1/10.
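Marco's BCT idea can be sketched as follows (my code): fixed-point digits in base 3, where 1/3 is the exact one-digit fraction "0.1" (base 3), while 1/10 has no finite base-3 expansion at all.

```python
# Base-3 fixed point: digits in {0,1,2}, with frac_digits of them
# falling after the radix point.
from fractions import Fraction

def bct_value(digits, frac_digits):
    """Interpret base-3 digits (most significant first)."""
    n = 0
    for d in digits:
        assert d in (0, 1, 2)
        n = n * 3 + d
    return Fraction(n, 3 ** frac_digits)

assert bct_value([0, 1], frac_digits=1) == Fraction(1, 3)  # "0.1" base 3
assert bct_value([1, 0], frac_digits=1) == 1               # "1.0" base 3
```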
BCD is precise in decimal because a number like 1/3 isn't decimal
(i.e. can't be represented in decimal). However, when we say "lossless"
we usually mean more than just representability but closed under
some set of operations. Of note, BCD is not lossless under division.
So the idea that BCD "prevents" rounding errors is just plain wrong.
I don't know how that myth ever got started. BCD may make the rules
of rounding easier to implement, but that is a different issue.
marco
Overall, I think you have a little work to do to catch up on what they
said (e.g. some concrete proof that they are wrong).
The original statement was that BCD corrects for bit shifting and rounding
errors of binary data types.
This has been further refined to the statement that BCD and fixed integer
abstracts correct for binary based floating point losses in accuracy, to
include rounding and bit shifting.
Rounding being defined here as the inability of any number system to
exactly represent values with non-terminating expansions. Binary, being a
number system, cannot represent certain numbers accurately.
When you use a binary floating point data type you will encounter rounding
loss due to inaccuracy of the binary number system.
Loss by bit shifting comes from performing operations where "normalizing"
of the values causes loss of significant digits.
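This "normalizing" loss is easy to show with an ordinary double: when exponents are aligned before an add, the smaller operand's bits can be shifted out entirely. A minimal Python illustration, though the effect is identical in C:

```python
big = 1.0e16          # above 2**53, so the spacing between doubles here is 2.0

print(big + 1.0 == big)          # True: the 1.0 is shifted out during alignment
print((big + 1.0) - big)         # 0.0, not 1.0 -- the addend vanished
```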
BCD is the oldest and most standardized (standardized meaning used across
many different platforms and in different environments) method of
representing decimal values in digital microprocessors.
Primarily because of their use in COBOL and in programmable logic controllers
for industrial electrical control.
However, as I have discovered in this particular thread, using a fixed-point
integer data type is more efficient and more common today, largely because of
the Visual Basic programming language and its Currency data type, which is a
fixed-point integer type.
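A minimal sketch of that scheme (Python; the names `to_fixed` and `fmt` are my own, and I'm assuming non-negative amounts with at most four decimal places, as VB's Currency has):

```python
SCALE = 10_000   # Currency scales by 10**4: two extra places for rounding rules

def to_fixed(s):
    """Parse a non-negative decimal string into scaled integer units."""
    whole, _, frac = s.partition(".")
    return int(whole or "0") * SCALE + int((frac + "0000")[:4])

def fmt(units):
    """Format scaled integer units back to a decimal string."""
    return f"{units // SCALE}.{units % SCALE:04d}"

# All arithmetic is plain integer math, so it is exact:
print(fmt(to_fixed("19.99") + to_fixed("0.01")))   # 20.0000
```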
"Marco Dalla Gasperina" <mar...@home.com> wrote in message
news:m_mG7.3144$XJ4.1...@news1.sttln1.wa.home.com...
That has definitely happened.
> or you're not
> expressing yourself very well.
That as well has probably occurred once or twice.
To wit:
>
> > 1. BCD is a business and industrial standard of representing numeric
> > data that is lossless representing decimal values.
>
> Only in certain domains. It *IS* lossy when you try to express 1/3.
> NO single-value data type CAN express it without loss.
>
Of course.
And only someone who was trying to impress by suggesting the obvious would
make such a simple statement.
> > 2. BCD is represented in the Intel processor with both a data type and
> > specific instructions that facilate corrections when performing addition
> > and subtraction on nibbles or single bytes of BCD data.
>
> That's true enough, but...
>
> > This allows for non binary math to occur on large BCD numbers.
>
> Non-binary? Poor choice of words and inexcusable for a professed
> CS student.
Not at all.
And I am not a professed student; I am a registered and paid-for student.
Now, you may not have much respect for my opinions, but the fact is, I
am within a year, or 20 credits, of graduating.
Non-binary means that the binary number system is bypassed to perform
decimal-equivalent operations.
Just because you are using 0s and 1s does not make the number system
binary in the strict sense. The concept of a binary number system also
carries with it the values of the numeric places of each column.
BCD is not a binary system; it is a decimal, or base 10, system, because you
have 10 discrete symbols that represent 0-9 and operations dedicated to
preserving those symbols through addition and subtraction.
In BCD the binary number system is circumvented and therefore not used.
Bit shifting doesn't work in BCD, because it is not a binary number system.
But you are right, everyone has their own set of terms that they learn to
reference things by and not all of them agree with one another.
There are people in this newsgroup who insist that a fixed-digit number is a
binary data type. While it is binary in the sense of only using two
symbols, it is not binary in that the numbers generated do not equal a
literal binary numeric value.
A fixed-digit number that has places to the right of the decimal
point has completely different columnar values from those of the binary
number system. So is that a binary number? More accurately, it would be
called a binary representation of a decimal system.
For instance, 15 means 15 in decimal, but in hex it means 21. So if I
have a string of numbers with no letters in it such as 1122 4411 5551 is
that decimal or is it hex? Well, it depends on the values of the columns
doesn't it?
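Exactly that: the same digit string denotes different values depending on the assumed base, which Python's `int` makes easy to show (used here just as a convenient base converter):

```python
digits = "15"
print(int(digits, 10))   # 15 when read as decimal
print(int(digits, 16))   # 21 when read as hex, as above

print(int("1122", 16))   # 4386 -- the columns carry powers of 16, not 10
```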
> In the CPU in question, ALL MATH is binary. With BCD,
> math operations consists of multiple binary maths performed on each
> digit along with those correction instructions applied afterwards
> (which themselves need to be followed by the appropriate adjustment;
> the instructions themselves just set flags, IIRC).
>
That is true, but the math is performed on one byte at a time, restricting
the values to 0-99. While it is binary math, to say that it is at all the
same as floating-point math is false. As you also point out, carry flags are
set to facilitate the math operations. I have had people argue with me that
these operations are not math operations but adjustments. That is simply
false: the value of the operand is changed and a flag is set. That is a
math operation.
The entire purpose of the instructions is to facilitate a different number
system.
> > Of course, if data is consistently stored in BCD binary operations
> > using sufficiently wide data types can be performed without loss of
> > accuracy.
>
> No datatype is wide enough to store 1/3. Or pi. Not accurately.
As far as 1/3 goes, sure there is. All you have to do is create an abstract
data type that represents rational numbers.
pi is a different story, because pi is irrational: it is the ratio of a
circle's circumference to its diameter, and it cannot be written as a ratio of
integers. You can't represent e either. That has nothing to do with computers
but with math itself.
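Such a rational abstract data type exists off the shelf in Python as `fractions.Fraction`, which makes a decent illustration of the 1/3 claim:

```python
from fractions import Fraction

third = Fraction(1, 3)           # stored as a pair of integers: exact
print(third + third + third)     # 1 -- no rounding anywhere

# pi and e are irrational: no ratio of integers equals them, so no
# rational (or BCD, or binary) type can hold them exactly.
```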
> Only accurately *enough* (but that applies to all data types).
>
It depends on what you are trying to represent.
If you are trying to represent money you can represent that with absolute
accuracy.
Math really doesn't mean anything in a practical way until you are
representing something in particular.
> > Each time the data is stored in BCD it in effect 0s the number to
> > a finite decimal value.
>
> Which directly contradicts your accuracy assertion. Again: 1/3 = ???
>
You know, I have been around this block already. Why don't you read some
of the other posts?
I can represent decimal numbers with absolute accuracy using BCD. What
you are saying is that I can not convert from a rational data type to a
decimal data type.
My response is, so? What does that have to do with anything?
What does 1/3 have to do with the problems associated with transferring
binary form data to decimal?
nothing.
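The "transferring binary form data to decimal" problem, for the record, looks like this. Python's `Decimal` can display the exact value a binary double actually stores, so it doubles as both the demonstration and the fix:

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so a binary double stores a neighbor:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# A decimal representation (BCD in spirit) holds the written value exactly:
print(Decimal("0.1"))            # 0.1
```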
> > 3. It is understood that representing a decimal number with an
> > abstract point using an integer data type is also accurate. The two
> > points in excess (thousandths and ten thousandths), in the VB currency
> > type are to facilitate accounting rounding rules.
>
> And it should be obvious that they don't insure complete accuracy, they
> just move the *error* down to a point of acceptability.
>
Which is exactly what a BCD or fixed point integer does.
> BCD is great for some stuff, terrible for others.
> FP is great for some stuff, terrible for others.
>
Well, that really concludes the matter doesn't it?
> One hopes they've at least taught you that The Ultimate Answer to
> *any* computer science question is: "It depends..."
>
Well, I guess I'm just the stupid little ignoramus here, aren't I?
I just feel so privileged to be in the presence of you higher life forms.