
Rounding errors


Robert Wagner

Aug 24, 2004, 9:12:13 AM
In an idle moment, I thought about how Cobol rounds numbers, using the
method we were taught in elementary school. Suppose we have a large
collection of numbers in format v999 and we want to round them to v99.
We and Cobol do it by adding .005, then discarding the rightmost
digit. Intuition tells us half the numbers will round up and half will
round down. Given a very large sample, the average will remain .500 ..
we think.

It turns out, we and Cobol are wrong. We're introducing a bias.

Suppose we have a million random numbers formatted v999, adding up to
500,000. Let's divide them into three groups: one containing rightmost
digit of zero, a second containing 1-4 and a third containing 5-9.
Let's round each group the Cobol way and sum the rounded numbers.

Digit   Population   Sum
0          100,000    50,000
1-4        400,000   200,000 - 1,000   (-.0025 * 400,000)
5-9        500,000   250,000 + 1,500   (+.003 * 500,000)
Total                500,500

Our rounding increased the total by 500. The numbers in group 2 were
decreased by (.001 + .002 + .003 + .004) / 4 = an average of .0025.
The numbers in group 3 were increased by (.005 + .004 + .003 + .002 +
.001) / 5 = an average of .003.

Here's another way of looking at what we did. We discarded the
rightmost digit, producing numbers that look like v99. Then we left
half of them unchanged and added .01 to the other half. By doing so,
we increased the total by (.01 * 500,000) = 500.

The most common solution is Bankers' Rounding, the rules of which say
right-digit 1-4 rounds down, 6-9 rounds up, and 5 rounds down when the
second digit (the one being kept) is even and up when it's odd. The
previous model, with the addition of a separate group for 5s, produces
this outcome:

Digit   Population   Sum
0          100,000    50,000
1-4        400,000   200,000 - 1,000   (-.0025 * 400,000)
6-9        400,000   200,000 + 1,000   (+.0025 * 400,000)
5          100,000    50,000
Total                500,000
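
Since Cobol's ROUNDED always pushes a trailing 5 upward (for positive
values), that rule has to be coded by hand. Here is one rough sketch of
how it might be done; the data names and the REDEFINES layout are only
an illustration, not anything the standard prescribes:

*> Illustrative sketch only -- names and layout are assumptions.
01  Amt-In              pic 9v999.
01  Amt-In-Digits redefines Amt-In.
    05  filler          pic x(2).
    05  Kept-Digit      pic 9.       *> the hundredths digit we keep
    05  Drop-Digit      pic 9.       *> the thousandths digit we discard
01  Amt-Out             pic 9v99.

bankers-round.
    evaluate true
        when Drop-Digit < 5
            compute Amt-Out = Amt-In              *> truncate: round down
        when Drop-Digit > 5
            compute Amt-Out rounded = Amt-In      *> ordinary round up
        when other                                *> exactly 5: round to even
            if function mod (Kept-Digit, 2) = 0
                compute Amt-Out = Amt-In          *> even digit kept: round down
            else
                compute Amt-Out rounded = Amt-In  *> odd digit: round up to even
            end-if
    end-evaluate.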

I find it ironic that we criticize floating-point for being too
inaccurate for business use due to rounding errors while at the same
time we've been making much larger rounding errors for 40+ years.
Single-precision floating-point errors are on the order of 10 parts
per million; double-precision is orders of magnitude better. Our error
is 500 parts per million.

Robert

Richard

Aug 24, 2004, 3:36:02 PM
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> Intuition tells us half the numbers will round up and half will
> round down. Given a very large sample, the average will remain .500 ..
> we think.

It may be what you had thought, but why do you assume that everyone
else thought that ?



> It turns out, we and Cobol are wrong. We're introducing a bias.

It turns out that _you_ are wrong, not "we".



> Suppose we have a million random numbers formatted v999, adding up to
> 500,000. Let's divide them into three groups: one containing rightmost
> digit of zero, a second containing 1-4 and a third containing 5-9.
> Let's round each group the Cobol way and sum the rounded numbers.
>
> Digit Population Sum
> 0 100,000 50,000
> 1-4 400,000 200,000 - 1,000 (-.0025 * 400,000)
> 5-9 500,000 250,000 + 1,500 (+.003 * 500,000)
> Total 500,500

Your methodology is flawed. In fact you have made a gross statistical
error.

The fault is that you have claimed they were 'random' when you have
contrived to truncate after the third digit. If they had not been
truncated and you had left the remaining additional digits in, say, a
v99999999 and then added them up you would have got very close to the
total of 500,500.

So the flaw is not that the rounding at the third digit increased the
rounded total, but your truncation of the 4th and later digits
decreased the original total.



> Here's another way of looking at what we did. We discarded the
> rightmost digit, producing numbers that look like v99. Then we left
> half of them unchanged and added .01 to the other half. By doing so,
> we increased the total by (.01 * 500,000) = 500.

No. The rounded answer is correct. You had truncated the original
random numbers by an average of .0005 in making them v999. 1,000,000
x .0005 is the 500 that rounding correctly restored.

> The most common solution is Bankers' Rounding, the rules of which say
> right-digit 1-4 rounds down, 6-9 rounds up, and 5 rounds down when the
> second digit (the one being kept) is even and up when it's odd. The
> previous model, with the addition of a separate group for 5s, produces
> this outcome:

I have done systems that carry the rounding forward. That is when the
first number is rounded the difference is added to the next number
before that is rounded (or truncated, as preferred). This ensures
that the total is always correct rather than being randomly incorrect
by a small amount.

> I find it ironic that we criticize floating-point for being too
> inaccurate for business use due to rounding errors

No. Wrong. "We" don't criticise floating-point for 'rounding errors'
at all. We criticise floating-point for not being able to represent
numbers exactly.

a = 0.50 * 2.0
if ( a == 1.0 ) <- is not true


> while at the same
> time we've been making much larger rounding errors for 40+ years.
> Single-precision floating-point errors are on the order of 10 parts
> per million; double-precision is orders of magnitude better. Our error
> is 500 parts per million.

Why do you insist on using plural for your singular errors ?

LX-i

Aug 24, 2004, 5:31:24 PM
Robert Wagner wrote:
> In an idle moment, I thought about how Cobol rounds numbers, using the
> method we were taught in elementary school. Suppose we have a large
> collection of numbers in format v999 and we want to round them to v99.
> We and Cobol do it by adding .005, then discarding the rightmost
> digit. Intuition tells us half the numbers will round up and half will
> round down. Given a very large sample, the average will remain .500 ..
> we think.
>
> It turns out, we and Cobol are wrong. We're introducing a bias.
>
> Suppose we have a million random numbers formatted v999, adding up to
> 500,000. Let's divide them into three groups: one containing rightmost
> digit of zero, a second containing 1-4 and a third containing 5-9.
> Let's round each group the Cobol way and sum the rounded numbers.
>
> Digit Population Sum
> 0 100,000 50,000
> 1-4 400,000 200,000 - 1,000 (-.0025 * 400,000)
> 5-9 500,000 250,000 + 1,500 (+.003 * 500,000)
> Total 500,500

Are you saying that this is the way the compiler does it, or the way
folks have coded rounding using COBOL?


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ / \ / ~ Live from Montgomery, AL! ~
~ / \/ o ~ ~
~ / /\ - | ~ LXi...@Netscape.net ~
~ _____ / \ | ~ http://www.knology.net/~mopsmom/daniel ~
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
~ I do not read e-mail at the above address ~
~ Please see website if you wish to contact me privately ~
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
~ GEEKCODE 3.12 GCS/IT d s-:+ a C++ L++ E--- W++ N++ o? K- w$ ~
~ !O M-- V PS+ PE++ Y? !PGP t+ 5? X+ R* tv b+ DI++ D+ G- e ~
~ h---- r+++ z++++ ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

William M. Klein

Aug 24, 2004, 8:33:22 PM
There are a VARIETY of rounding options (toward zero, odd/even bankers, etc).
Many (most) are in an ISO "numerical functions" specification. This was looked
at during the development of the 2002 Standard. It was determined that there
was (and I would maintain still is) insufficient "user demand" to add this (in
an upwardly compatible way) to COBOL. With the ISO 2002 "user-defined" function
facility, it would certainly be possible to create a NUMBER of "rounding"
functions. Maybe you would like to create them and sell them as "add-ons" for
existing compilers. Doing so would certainly let you know whether or not
existing COBOL sites thought these were "enhancements" (worth paying for) or
not.
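
As a rough sketch only (the names here are invented, and support for the
2002 function facility varies by compiler), such a function might be
packaged like this, with whatever rule a site prefers in the body:

identification division.
function-id. round-to-cents.
data division.
linkage section.
01  in-amount      pic 9(7)v999.
01  out-amount     pic 9(7)v99.
procedure division using in-amount returning out-amount.
    *> placeholder body: plain half-up rounding; a real add-on would
    *> substitute bankers, toward-zero, or whatever rule was wanted
    compute out-amount rounded = in-amount
    goback.
end function round-to-cents.

A caller that names the function in its REPOSITORY paragraph could then
write, say, COMPUTE cents-amt = FUNCTION round-to-cents(gross-amt).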

--
Bill Klein
wmklein <at> ix.netcom.com
"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:s39mi0ti3ruelndhn...@4ax.com...

Robert Wagner

Aug 24, 2004, 9:31:35 PM
On 24 Aug 2004 12:36:02 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> Intuition tells us half the numbers will round up and half will
>> round down. Given a very large sample, the average will remain .500 ..
>> we think.
>
>It may be what you had thought, but why do you assume that everyone
>else thought that ?

Because it's in the Cobol standard.

>> Suppose we have a million random numbers formatted v999, adding up to
>> 500,000. Let's divide them into three groups: one containing rightmost
>> digit of zero, a second containing 1-4 and a third containing 5-9.
>> Let's round each group the Cobol way and sum the rounded numbers.
>>
>> Digit Population Sum
>> 0 100,000 50,000
>> 1-4 400,000 200,000 - 1,000 (-.0025 * 400,000)
>> 5-9 500,000 250,000 + 1,500 (+.003 * 500,000)
>> Total 500,500
>
>Your methodology is flawed. In fact you have made a gross statistical
>error.
>
>The fault is that you have claimed they were 'random' when you have
>contrived to truncate after the third digit. If they had not been
>truncated and you had left the remaining additional digits in, say, a
>v99999999 and then added them up you would have got very close to the
>total of 500,500.

You're all wet. It is intuitively obvious that collections of random
numbers formatted v99, v999 or v9(infinity) will average .50000.

>So the flaw is not that the rounding at the third digit increased the
>rounded total, but your truncation of the 4th and later digits
>decreased the original total.

You are wrong. Support this with an example.

>> Here's another way of looking at what we did. We discarded the
>> rightmost digit, producing numbers that look like v99. Then we left
>> half of them unchanged and added .01 to the other half. By doing so,
>> we increased the total by (.01 * 500,000) = 500.
>
>No. The rounded answer is correct. You had truncated the original
>random numbers by an average of .0005 in making them v999. 1,000,000
>x .0005 is the 500 that rounding correctly restored.

The rounded answer is incorrect.

There are two types of ignorance -- simple and volitional. The former
simply doesn't know; the latter doesn't WANT to know and becomes
hostile when you attempt to educate him. Most of us have encountered
the latter type in our daily lives.

>I have done systems that carry the rounding forward. That is when the
>first number is rounded the difference is added to the next number
>before that is rounded (or truncated, as preferred). This ensures
>that the total is always correct rather than being randomly incorrect
>by a small amount.

Rounding intermediate results is THE classic beginner's mistake. It
doesn't surprise me that you advocate it.

The right way is to carry intermediate results to say six digits right
of decimal and round them only when going to a report. The wrong way
is to add rounded numbers into a total. I'm an autodidact but assume
they used to teach this in Programming 101.
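
For instance (field names here are invented, just to illustrate):

*> intermediates carried at six decimals, rounded only for the report
01  Unit-Price    pic 9(5)v9(6).
01  Quantity      pic 9(5).
01  Extension     pic 9(11)v9(6).
01  Run-Total     pic 9(13)v9(6)  value zero.
01  Print-Ext     pic z(10)9.99.

    compute Extension = Unit-Price * Quantity   *> six decimals kept
    add Extension to Run-Total                  *> totals built from unrounded values
    compute Print-Ext rounded = Extension       *> rounded only on the report line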

>> I find it ironic that we criticize floating-point for being too
>> inaccurate for business use due to rounding errors
>
>No. Wrong. "We" don't criticise floating-point for 'rounding errors'
>at all. We criticise floating-point for not being able to represent
>numbers exactly.
>
> a = 0.50 * 2.0
> if ( a == 1.0 ) <- is not true

Same thing. The error is in rounding a to .99999999 rather than 1.0.

>> while at the same
>> time we've been making much larger rounding errors for 40+ years.
>> Single-precision floating-point errors are on the order of 10 parts
>> per million; double-precision is orders of magnitude better. Our error
>> is 500 parts per million.
>
>Why do you insist on using plural for your singular errors ?

Because I'm addressing more than one Cobol programmer.

---------------------------------------------------
If someone else can show an error in my logic, without rancor, I'd be
delighted to address his or her argument. Flammage offers neither
information nor entertainment .. unless it's artful. The level of art
in evidence here doesn't support the effort to respond.

Succinctly, it's like 'pissing into the wind.'

Richard

Aug 25, 2004, 3:59:08 AM
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

>>> Intuition tells us half the numbers will round up and half will
>>> round down. Given a very large sample, the average will remain .500 ..
>>> we think.
>
> >It may be what you had thought, but why do you assume that everyone
> >else thought that ?
>
> Because it's in the Cobol standard.

No. Wrong. Your comment was 'the average will remain .500.. we
think'.

I don't think that because the average of your example wasn't .500 in
the first place. You created a fantasy situation.

> >> Suppose we have a million random numbers formatted v999, adding up to
> >> 500,000.

It is fantasy because if there is an even distribution of all possible
values in a v999 it won't add up to 500,000. If it does add up to
that, it isn't random.

If the random numbers were long enough then the total would be 500,000
(or close).

> >> Let's divide them into three groups: one containing rightmost
> >> digit of zero, a second containing 1-4 and a third containing 5-9.
> >> Let's round each group the Cobol way and sum the rounded numbers.
> >>
> >> Digit Population Sum
> >> 0 100,000 50,000
> >> 1-4 400,000 200,000 - 1,000 (-.0025 * 400,000)
> >> 5-9 500,000 250,000 + 1,500 (+.003 * 500,000)
> >> Total 500,500
> >
> >Your methodology is flawed. In fact you have made a gross statistical
> >error.
> >
> >The fault is that you have claimed they were 'random' when you have
> >contrived to truncate after the third digit. If they had not been
> >truncated and you had left the remaining additional digits in, say, a
> >v99999999 and then added them up you would have got very close to the
> >total of 500,500.

> You're all wet. It is intuitively obvious that collections of random
> numbers formatted v99, v999 or v9(infinity) will average .50000.

That may be 'intuitively obvious' to _you_, but it is 'intuitively
obvious to those of us who are numerate' that it is quite wrong.

If you have an even distribution of the numbers 0.000 to 0.999 with 3
digits then the average is _NOT_ 0.500. It is in fact 0.4995.

This is very easy to prove. Just add up the 1000 numbers from 000 to
999 and the total is 499500.

A short cut to this is that there are 500 pairs: the first is 000 +
999 -> 999, the second is 001 + 998 -> 999, ... the last pair is 499 +
500 -> 999.

500 * 999 -> 499500

Similarly the average of:
    v9     0 - 9        is 0.45
    v99    00 - 99      is 0.495
    v999   000 - 999    is 0.4995
    v9999  0000 - 9999  is 0.49995

Now it is true that given an arbitrary number of digits the average
will approach 0.50000 very closely.

The mistake you made is that if you have a random distribution of
fractions with an arbitrary precision then the average will indeed be
.500000 or close to it.

You then truncated that to 3 digits and lost an average of .0005 per
value, reducing the average to the .4995 that the v999 numbers now
actually add up to.

Rounding of the original set of long numbers, or the truncated set as
it only requires the third digit to do rounding, restores the average
back to .5000.

> >So the flaw is not that the rounding at the third digit increased the
> >rounded total, but your truncation of the 4th and later digits
> >decreased the original total.
>
> You are wrong. Support this with an example.

To show that an even distribution of all fractions with 3 digits does
not average .5 :

*> (assumed declarations: Frac pic 9v999, FracTotal pic 9(4)v999,
*>  FracCount pic 9(4), FracAverage pic 9.9999)
MOVE ZERO TO FracTotal
MOVE ZERO TO FracCount
PERFORM VARYING Frac FROM 0.000 BY 0.001 UNTIL Frac > 0.999
    ADD Frac TO FracTotal
    ADD 1 TO FracCount
END-PERFORM
COMPUTE FracAverage = FracTotal / FracCount
DISPLAY FracAverage

-> 0.4995

If you had started with a realistic set of random numbers between
0.000000000 and 0.999999999999999999999 then the average would indeed
be close to 0.5. Truncating after the 3rd digit would indeed affect
the total of all these numbers by an average of 0.0005 per number.
The truncation should leave an even distribution of
the 3 digit values between 0.000 and 0.999 which _provably_ averages
to 0.4995 - exactly matching the loss of data.

When you round the numbers to two digits what existed beyond the 3rd
digit is irrelevant, so the truncation doesn't matter. The rounding
will recover the original average of .50.

> >> Here's another way of looking at what we did. We discarded the
> >> rightmost digit, producing numbers that look like v99. Then we left
> >> half of them unchanged and added .01 to the other half. By doing so,
> >> we increased the total by (.01 * 500,000) = 500.
>

> The rounded answer is incorrect.

No. The rounded answer is correct for a set of random numbers of
arbitrary length. The 3 digit truncated set of numbers doesn't
represent that set accurately, thus there are 3 averages:

arbitrary length random -> .500000
truncated 3 digit (your set) -> .4995
rounded 2 digit set -> .500000
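
A quick way to see all three side by side (the data names are invented
and the precision of FUNCTION RANDOM is compiler-defined) is a sketch
like this:

01  Raw          pic 9v9(9).
01  Trunc3       pic 9v999.
01  Round2       pic 9v99.
01  Tot-Raw      pic 9(7)v9(9)   value zero.
01  Tot-Trunc    pic 9(7)v999    value zero.
01  Tot-Round    pic 9(7)v99     value zero.
01  N            pic 9(7).
01  Avg-Raw      pic 9.9(6).
01  Avg-Trunc    pic 9.9(6).
01  Avg-Round    pic 9.9(6).

    perform varying N from 1 by 1 until N > 1000000
        compute Raw = function random         *> long-precision random fraction
        compute Trunc3 = Raw                  *> truncated to v999
        compute Round2 rounded = Raw          *> rounded to v99
        add Raw    to Tot-Raw
        add Trunc3 to Tot-Trunc
        add Round2 to Tot-Round
    end-perform
    compute Avg-Raw   rounded = Tot-Raw   / 1000000
    compute Avg-Trunc rounded = Tot-Trunc / 1000000
    compute Avg-Round rounded = Tot-Round / 1000000
    display "full " Avg-Raw "  truncated " Avg-Trunc "  rounded " Avg-Round

The truncated average should come out near .4995 while the other two
come out near .5000.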


> There are two types of ignorance -- simple and volitional. The former
> simply doesn't know; the latter doesn't WANT to know and becomes
> hostile when you attempt to educate him. Most of us have encountered
> the latter type in our daily lives.

As we in fact often encounter this exactly in your messages.


> >I have done systems that carry the rounding forward. That is when the
> >first number is rounded the difference is added to the next number
> >before that is rounded (or truncated, as preferred). This ensures
> >that the total is always correct rather than being randomly incorrect
> >by a small amount.
>
> Rounding intermediate results is THE classic beginner's mistake. It
> doesn't surprise me that you advocate it.

You obviously didn't read or didn't understand the mechanism I used
which does _not_ round intermediate results.

Given a set of dollar and cent values that add up to a certain total
it may be required to show this as dollars only (for example for total
sales by branch). If the values are truncated to dollars they won't
add up to the total. If they are rounded individually they may, or
may not, add up to the rounded exact total.

The way to correct this is to round each value and take the difference
between that and the pre-rounded value and add that to the next number
before rounding that.

There is _no_ 'rounding intermediate result', there is _no_
advocating. Your criticism is based on not reading my message.


> The right way is to carry intermediate results to say six digits right
> of decimal and round them only when going to a report. The wrong way
> is to add rounded numbers into a total. I'm an autodidact but assume
> they used to teach this in Programming 101.

You are so sure that you are right that it never even occurs to you
that you might check your claims.


> >No. Wrong. "We" don't criticise floating-point for 'rounding errors'
> >at all. We criticise floating-point for not being able to represent
> >numbers exactly.
>

> Same thing. The error is in rounding a to .99999999 rather than 1.0.

No. Not the same thing at all. A binary floating point number cannot
represent 1 accurately. It isn't a 'rounding error' it is a matter of
precision. Rounding is the correction for this problem.


> Because I'm addressing more than one Cobol programmer.

I haven't seen any other Cobol programmer making these same errors.

> ---------------------------------------------------
> If someone else can show an error in my logic, without rancor, I'd be
> delighted to address his or her argument. Flammage offers neither
> information nor entertainment .. unless it's artful. The level of art
> in evidence here doesn't support the effort to respond.

There is no 'rancor', no flammage, in saying that you are wrong. You
obviously feel that you are being personally attacked by the mere
suggestion that you could ever make an error.

However, it was your claim that everyone else was wrong too, and that
we all made the assumptions that you did that started the rancor.

While I criticised your _methodology_ and said that your _conclusions_
were wrong, you responded with personal insults such as "You're all
wet", implied that I am ignorant by volition and that I am a
beginner, and generally made ad hominem attacks.

> Succinctly, it's like 'pissing into the wind.'

I have noticed that attempting to educate you is exactly that.

Robert Wagner

Aug 25, 2004, 5:25:55 AM
On Tue, 24 Aug 2004 16:31:24 -0500, LX-i <lxi...@netscape.net> wrote:

>Robert Wagner wrote:
>> In an idle moment, I thought about how Cobol rounds numbers, ...

>Are you saying that this is the way the compiler does it, or the way
>folks have coded rounding using COBOL?

The way the compiler does it.

Robert Wagner

Aug 25, 2004, 6:20:11 AM
On Wed, 25 Aug 2004 00:33:22 GMT, "William M. Klein"
<wmk...@nospam.netcom.com> wrote:

>There are a VARIETY of rounding options (toward zero, odd/even bankers, etc).
>Many (most) are in an ISO "numerical functions" specification. This was looked
>at during the development of the 2002 Standard. It was determined that there
>was (and I would maintain still is) insufficient "user demand" to add this (in
>an upwardly compatible way) to COBOL. With the ISO 2002 "user-defined" function
>facility, it would certainly be possible to create NUMBER of "rounding"
>functions. Maybe you would like to create them and sell them as "add-ons" for
>existing compilers. Doing so would certainly let you know whether or not
>existing COBOL sites thought these were "enhancements" (worth paying for) or
>not.

Fast food gives the lie to the proposition that market demand is the
result of a rational decision process, or is some indicator of
correctness.

Richard

Aug 25, 2004, 4:07:35 PM
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> >Your methodology is flawed. In fact you have made a gross statistical
> >error.

> You're all wet. It is intuitively obvious that collections of random
> numbers formatted v99, v999 or v9(infinity) will average .50000.

This is your statistical error. The actual averages are .495, .4995
and .500... respectively.

It is only when a set of v9(infinity) is ROUNDED to v999 (actually it
requires 9v999) that it retains an average of .50000

> Rounding intermediate results is THE classic beginner's mistake. It
> doesn't surprise me that you advocate it.

Well, it seems that you made "THE classic beginner's mistake" by
'intuitively' rounding the random numbers to arrive at your claim they
averaged to .50000.

You had 'intuitively' rounded to v999 then rounded again to v99. Note
that a set of v999 that does average to .5000 does not have the
distribution that you claim for it in the table.

Richard

Aug 25, 2004, 5:04:27 PM
rip...@Azonic.co.nz (Richard) wrote

> I have done systems that carry the rounding forward. That is when the
> first number is rounded the difference is added to the next number
> before that is rounded (or truncated, as preferred). This ensures
> that the total is always correct rather than being randomly incorrect
> by a small amount.

To clarify how this works (it does NOT do 'intermediate rounding')
here is an example.

Suppose we have a list of dollar.cent values, say branch sales totals,
that add up to a certain figure that is the total sales. The accountant
wants his report to be in thousands of dollars only. We could round
each branch total and the total sales separately, but then it might
not add up to the rounded total sales. We could round each and add
these to give a new total, but it might not be the same number as the
total sales.

Branch      Sales    Rounded separately    Rounded and added
1         5623.55            6                     6
2         6700.22            7                     7
3         1512.63            2                     2
4         1622.00            2                     2
------------------------------------------------------------
total    15458.40           15                    17

The accountant won't like columns that don't add up, and also won't
like totals that are wrong.

By rounding each branch figure there is a rounding error. In the
example above this has been discarded, so the difference between the 15
and the 17 is the accumulation of rounding errors (due to rather
contrived values).

To cater for these errors and arrive at the best (for the accountant)
report the error from one rounding is added to the next number before
that is rounded. The accumulated rounding error is then less than .5
for the column.

(view with a fixed font please)

Branch sales   Carry forward   Adjusted    Rounded
    5623.55            0.00     5623.55       6
    6700.22         -376.45     6323.77       6
    1512.63          323.77     1836.40       2
    1622.00         -163.60     1458.40       1
                     458.40
   --------                                -----
   15458.40                                   15

The column now adds up correctly and gives the correct total. The
accumulated rounding error for the column is exactly the rounding
error on the total.

For completeness you would store this away until the next month and
use it as the initial 'carry forward' so that at the end of the year
the 12 monthly reports totals will actually add up to the annual total
sales rounded figure.
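
In code the whole mechanism is only a few lines. This sketch uses
made-up names and the four values from the table above:

*> Illustrative only -- names, PICs and the values are assumptions.
01  Sales-Tab.
    05  Sales-Amt    pic 9(5)v99  occurs 4 times.
01  Carry            pic s9(5)v99 value zero.
01  Adjusted         pic s9(5)v99.
01  Rounded-K        pic s9(3).
01  Rpt-Total        pic 9(3)     value zero.
01  I                pic 9.

    move 5623.55 to Sales-Amt (1)
    move 6700.22 to Sales-Amt (2)
    move 1512.63 to Sales-Amt (3)
    move 1622.00 to Sales-Amt (4)
    perform varying I from 1 by 1 until I > 4
        compute Adjusted = Sales-Amt (I) + Carry      *> bring forward the last error
        compute Rounded-K rounded = Adjusted / 1000   *> report figure, in thousands
        compute Carry = Adjusted - Rounded-K * 1000   *> new error to carry forward
        add Rounded-K to Rpt-Total
    end-perform
    display "report total in thousands: " Rpt-Total   *> 015, matching the rounded total

The carry-forward left over at the end (458.40 here) is what would be
stored away for the next period.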

Robert Wagner

Aug 25, 2004, 7:22:42 PM
On 25 Aug 2004 00:59:08 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

>If you have an even distribution of the numbers 0.000 to 0.999 with 3
>digits then the average is _NOT_ 0.500. It is in fact 0.4995.
>
>This is very easy to prove. Just add up the 1000 numbers from 000 to
>999 and the total is 499500.
>
>A short cut to this is that there are 500 pairs: the first is 000 +
>999 -> 999, the second is 001 + 998 -> 999, ... the last pair is 499 +
>500 -> 999.
>
> 500 * 999 -> 499500

The spoiler is the inclusion of zero. If we take 999 numbers between
.001 and .999, the average is .500. See below.

>The mistake you made is that if you have a random distribution of
>fractions with an arbitrary precision then the average will indeed be
>.500000 or close to it.

That's because a fraction will not produce zero. Zero isn't a rational
number, it's a limit. As x approaches infinity, 1/x approaches zero.

You can find documents on the Web saying zero is not a natural number
but is a rational number. My counter is that every rational number can
be expressed as a fraction of natural numbers. Show me a fraction that
produces zero.



> >You then truncated that to 3 digits and lost an average of .0005 per
> >value, reducing the average to the .4995 that the v999 numbers now
> >actually add up to.

Alternatively, if I eliminated zeros (post-truncation), the average of
the other 99.9% would be .500.

>Rounding of the original set of long numbers, or the truncated set as
>it only requires the third digit to do rounding, restores the average
>back to .5000.
>
>> >So the flaw is not that the rounding at the third digit increased the
>> >rounded total, but your truncation of the 4th and later digits
>> >decreased the original total.

Assume my test set contained no zeros, the sum was 500,000, the
average was .500. After rounding was applied, the sum became 500,500.
This demonstrates that rounding introduced an error, an upward bias.

Suppose I added a million zeros to that set. The sum would still be
500,000 but the average would be .250.

>> You are wrong. Support this with an example.
>
>To show that an even distribution of all fractions with 3 digits does
>not average .5 :
>
> MOVE ZERO TO FracTotal
> MOVE ZERO TO FracCount
> PERFORM VARYING Frac FROM 0.000 BY 0.001 UNTIL Frac > 0.999
> ADD Frac TO FracTotal
> ADD 1 TO FracCount
> END-PERFORM
> COMPUTE FracAverage = FracTotal / FracCount
> DISPLAY FracAverage
>
> -> 0.4995

If PERFORM VARYING Frac FROM .001 BY .001 UNTIL Frac > .999,
the answer is 499.5/999 = .500.

Thank you for providing a framework that demonstrates the error in
Cobol rounding without the issue of truncating random numbers.
This program rounds an evenly distributed set of numbers and displays
the average with and without rounding. The upward bias caused by
rounding is as predicted.

identification division.
program-id. test27.
*> author. Robert Wagner.
*> Test rounding error
*> To insure the same number of rounds up and down, 499 each,
*> do not round .999.
*> Findings: .5000 .5004
data division.
working-storage section.
01  unqualified-variables.
    05  Frac                        pic 9v999.
    05  FracRounded                 pic 9v99.
    05  FracTotal-1     value zero  pic 99999v999.
    05  FracTotal-2     value zero  pic 99999v999.
    05  FracCount       value zero  pic 9999.
    05  FracAverage-1               pic zz.9999.
    05  FracAverage-2               pic zz.9999.

procedure division.
main.
    PERFORM VARYING Frac FROM .001 BY 0.001 UNTIL Frac > .999
        COMPUTE FracRounded ROUNDED = Frac
        ADD Frac TO FracTotal-1
        IF Frac = .999
            ADD Frac TO FracTotal-2
        ELSE
            ADD FracRounded TO FracTotal-2
        END-IF
        ADD 1 TO FracCount
    END-PERFORM

    COMPUTE FracAverage-1 = FracTotal-1 / FracCount
    COMPUTE FracAverage-2 = FracTotal-2 / FracCount
    DISPLAY FracAverage-1 FracAverage-2.

Result: .5000 .5004

Since the above does not use truncated random numbers, I clipped the
remainder of that debate. It was clouding the issue of Cobol rounding.

>> >I have done systems that carry the rounding forward. That is when the
>> >first number is rounded the difference is added to the next number
>> >before that is rounded (or truncated, as preferred). This ensures
>> >that the total is always correct rather than being randomly incorrect
>> >by a small amount.
>>
>> Rounding intermediate results is THE classic beginner's mistake. It
>> doesn't surprise me that you advocate it.
>
>You obviously didn't read or didn't understand the mechanism I used
>which does _not_ round intermediate results.

I understand. You are propagating rounding backward into detail so
that the details sum to the (rounded) total. You can still be off by 1
.. unless you add the last difference into the total.

>Given a set of dollar and cent values that add up to a certain total
>it may be required to show this as dollars only (for example for total
>sales by branch). If the values are truncated to dollars they won't
>add up to the total. If they are rounded individually they may, or
>may not, add up to the rounded exact total.

In the financial data warehouse industry, we deal with this all the
time -- reports that don't quite add up to the total. The worst-case
error is plus or minus the number of detail lines. It's not a big
deal. Our numbers are in thousands rather than currency units. For
Brazil, they're in millions.

Your system would screw us up. On one report you might say you are
holding 1,000 (thousands of Euros) in XYZ. On the next report the same
holding might be reported as 1,001. We would think you bought one.
Some analytical methods are based on the number of 'decisions'. They
don't care about quantity bought or sold. Your rounding correction
would falsely count as a decision.

Moreover, we measure turnover rate in a portfolio -- the number of
changes since the previous report. Your corrections would make it look
like you are 'churning' i.e. trading more than your peers. That would
make you look bad to potential investors.

>> >No. Wrong. "We" don't criticise floating-point for 'rounding errors'
>> >at all. We criticise floating-point for not being able to represent
>> >numbers exactly.
>>
>> Same thing. The error is in rounding a to .99999999 rather than 1.0.
>
>No. Not the same thing at all. A binary floating point number cannot
>represent 1 accurately. It isn't a 'rounding error' it is a matter of
>precision. Rounding is the correction for this problem.

No system of numeration can store all real numbers exactly. Integers
cannot represent pi, the square root of 2 and other irrational
numbers. Nor can they express many rational fractions as a single
number, for example 1/3.

This discussion is about rounding. The function of rounding is to
create a LESS precise representation.

>> Because I'm addressing more than one Cobol programmer.
>
>I haven't seen any other Cobol programmer making these same errors.

Every time they say ROUNDED they're creating an error of 500 parts per
million. They've been doing it for 45 years.

>> ---------------------------------------------------
>> If someone else can show an error in my logic, without rancor, I'd be
>> delighted to address his or her argument. Flammage offers neither
>> information nor entertainment .. unless it's artful. The level of art
>> in evidence here doesn't support the effort to respond.
>
>There is no 'rancor', no flammage, in saying that you are wrong. You
>obviously feel that you are being personally attacked by the mere
>suggestion that you could ever make an error.

I have no problem with being found in error. Nor even name-calling
when I deserve it. What I object to is criticism when my conclusion
is, in fact, correct .. which it is in this case.

>While I criticised your _methodology_ and said that your _conclusions_
>were wrong, you responded with personal insults such as "You're all
>wet", implied that I am ignorant by volition, and that I am a
>beginner, and generally made ad hominem attacks.

I apologize for those remarks.

Richard

Aug 26, 2004, 4:35:12 AM
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> The spoiler is the inclusion of zero. If we take 999 numbers between
> .001 and .999, the average is .500. See below.

> >The mistake you made is that if you have a random distribution of
> >fractions with an arbitrary precision then the average will indeed be
> >.500000 or close to it.
>
> That's because a fraction will not produce zero. Zero isn't a rational
> number, it's a limit. As x approaches infinity, 1/x approaches zero.

Whatever. You will still get numbers that are less than .001 that when
truncated to 3 digits will be .000.



> >You then truncated that to 3 digits and lost an average of .0005 per
> >value, reducing the average to the .4995 that the v999 numbers now
> >actually add up to.
>
> Alternatively,if I eliminated zeros (post-truncation), the average of
> the other 99.9% would be .500.

.000 is still a valid result from a random set of 3 digit numbers even
if 0.0000000000000... is not.

You are just manipulating the results to attempt to disguise the error
you made.

Take for example 10 random single digit numbers. If there was an even
distribution (0 - 9) the average is 0.45 (try it). Now you can
contrive to say that the nine numbers 1-9 average to 0.50.

However, on what basis can you claim that you can dispense with one of
the 10 digits in a random set? It may give a 'better' result for
your claim, but only if you ignore the consequences.

values    .1  .2  .3  .4  .5  .6  .7  .8  .9    total 4.5   average .5
rounded    0   0   0   0   1   1   1   1   1    total 5     average 0.555..

However, if you take a set of large-precision random numbers between
.10000 and .99999.. the average will be 0.55.. not 0.5000.

> Assume my test set contained no zeros, the sum was 500,000, the
> average was .500. After rounding was applied, the sum became 500,500.
> This demonstrates that rounding introduced an error, an upward bias.

No. It demonstrates that you don't understand random numbers. If you
eliminate all the 3 digit zeros (that derive from the random numbers
0.00000..1 to 0.00099999..) then you would have replaced them with
some other numbers. If this was done in a random way then the total
will be 500,000 as you say (up from 499,500). The total of the full
arbitrary-length, untruncated actual random numbers, though, will be
the 500,000 plus roughly 500 for the 0.1% of the numbers you have
replaced, i.e. 500,500.

The rounding to 2 digits will restore the full total of 500,500.


> Suppose I added a million zeros to that set. The sum would still be
> 500,000 but the average would be .250.

That is because the set is getting larger by that count. Above, you
are not changing the count (ie 1 million) but are changing some values
from .000 to something else and then _NOT_ adding the new values to
the total.


> If PERFORM VARYING Frac FROM .001 BY .001 UNTIL Frac > .999,
> the answer is 499.5/999 = .500.

Well, sure, and if you leave off the .999 because you don't like that
one, you will get another total again.


> Thank you for providing a framework that demonstrates the error in
> Cobol rounding without the issue of truncating random numbers.

There is no 'error' in rounding. There is only an error in what you
expect to happen. Rounding is designed to take a set of arbitrary
precision numbers and to give an accurate representation in a limited
precision.

ie if we have a set of random numbers in the form 9v9(lots) then the
average will indeed be 0.5000 even if it includes 0.000000001.

When that set is rounded to 2 digits of precision it only needs to
look at the 3rd digit. The result will have exactly the same average
as the original set at 0.50000, even if it included numbers that
started 0.000.

Because only 3 digits are needed to do the rounding it is actually
only necessary to store these 3, the rest of the digits can be
discarded.

However, the discarding of these digits from the 4th onwards loses the
value of these. The average is, as I have shown, 0.4995. This
doesn't matter to the rounding because the loss is of unrequired data.

Your error is that you expected the truncated 3 digit numbers to add
up to the same as the original random numbers. And then you expected
the rounded two digit numbers to have the same characteristics (total
average) as the 3 digit numbers.

Rounding is to recover the characteristics of the original large
precision while working in limited precision.

Your set of 3 digit numbers represents a loss of data which the
rounding recovers. Come back when you can understand this.

One more time. Rounding is not designed to take a set of fixed
precision numbers and reproduce the characteristics in a lesser
precision. Your expectation is flawed.

If you were to do this with 0.000001 BY 0.000001 UNTIL > .999999 then
you will get a much closer result

For random numbers correctly in the range 0.0000...1 to 0.9999....

large precision set -> average 0.500..
rounded to 2 digits -> average 0.500..
rounded to 1 digit -> average 0.500..
rounded to 9 digits -> average 0.500..
rounded to 0 digits -> average 0.500.. (half round to 1.0)

truncated to 3 digits -> average 0.4995 (as demonstrated)
truncated to 2 digits -> average 0.495
truncated to 1 digit -> average 0.45

So now you can complain that rounding is 'out' by 500, or 5,000, or
50,000 on the same data.

But it is not the _rounding_ that is wrong, it is the truncation to
3, 2 or 1 digits. That truncated set no longer represents the total
value of the random numbers, but the rounded set does.

So how does this affect Cobol programs ?

If it is required to do some divisions then the result will have an
arbitrary number of fractional digits. If the result has to be
reduced to 2 digits as being, say, cents, then it is only necessary to
store 3 digits in order to determine how it should be rounded.

Your assumption is that a million of those 3-digit values should
represent the characteristics of the full precision results. THEY
DON'T. The rounded 2 digit results _DO_ represent the full precision
results and it doesn't matter that the 3 digit truncations do not.

That is: the rounding recovers the _correct_ result that the 3 digit
truncation does not represent.

You are correct that the 3 digit set and the 2 digit rounded set are
different, but it is the rounded set that is correct.


> I understand. You are propagating rounding backward into detail so
> that the details sum to the (rounded) total. You can still be off by 1
> .. unless you add the last difference into the total.

No. Wrong. The mechanism cannot be off by 1; it will only be off by
+-.5 or less, and this will be _exactly_ the same as the rounding error
when rounding the total.

> In the financial data warehouse industry, we deal with this all the
> time -- reports that don't quite add up to the total. The worst-case
> error is plus or minus the number of detail lines. It's not a big
> deal. Our numbers are in thousands rather than currency units. For
> Brazil, they're in millions.
>
> Your system would screw us up. On one report you might say you are
> holding 1,000 (thousands of Euros) in XYZ. On the next report the same
> holding might be reported as 1,001. We would think you bought one.
> Some analytical methods are based on the number of 'decisions'. They
> don't care about quantity bought or sold. Your rounding correction
> would falsely count as a decision.

Then don't use it for that application. It is a tool, if it doesn't
fit then it is the wrong tool for _that_ job. It may be the correct
tool for some other job.



> Moreover, we measure turnover rate in a portfolio -- the number of
> changes since the previous report. Your corrections would make it look
> like you are 'churning' i.e. trading more than your peers. That would
> make you look bad to potential investors.

It wasn't compulsory. Unlike you I usually don't claim something is
'best practice' when it may be inappropriate.

> No system of numeration can store all real numbers exactly. Integers
> cannot represent pi, the square root of 2 and other irrational
> numbers. Nor can they express many rational fractions as a single
> number, for example 1/3.

> This discussion is about rounding.

Then discuss rounding in floating-point as a mechanism, for example:

if ( round(a, 2) == 1.00 )

rather than confusing it with precision, which is what you did.

> The function of rounding is to
> create a LESS precise representation.

Less precise but, on average, as accurate as the original, and more
accurate than some truncated version of the data.

> Every time they say ROUNDED they're creating an error of 500 parts per
> million. They've been doing it for 45 years.

You still don't understand that it is the truncation to 3 digits that
is out by 500 parts per million.

> I have no problem with being found in error. Nor even name-calling
> when I deserve it. What I object to is criticism when my conclusion
> is, in fact, correct .. which it is in this case.

No. Wrong. Your conclusion that rounding gives the wrong result is
incorrect. Certainly your observation that the rounding is not the
same as the truncated 3 digit set is correct. Your conclusion as to
which of these is wrong and why is flawed.

> I apologize for those remarks.

Thank you.

JerryMouse

Aug 26, 2004, 8:29:55 AM
Robert Wagner wrote:
>> The mistake you made is that if you have a random distribution of
>> fractions with an arbitrary precision then the average will indeed be
>> .500000 or close to it.
>
> That's because a fraction will not produce zero. Zero isn't a rational
> number, it's a limit. As x approaches infinity, 1/x approaches zero.
>
> You can find documents on the Web saying zero is not a natural number
> but is a rational number. My counter is that every rational number can
> be expressed as a fraction of natural numbers. Show me a fraction that
> produces zero.

0/1 ?


Michael Wojcik

Aug 26, 2004, 12:22:30 PM

In article <a--dnYIif_Z...@giganews.com>, "JerryMouse" <nos...@bisusa.com> writes:

> Robert Wagner wrote:
> >
> > That's because a fraction will not produce zero. Zero isn't a rational
> > number, it's a limit. As x approaches infinity, 1/x approaches zero.
> >
> > You can find documents on the Web saying zero is not a natural number
> > but is a rational number. My counter is that every rational number can
> > be expressed as a fraction of natural numbers. Show me a fraction that
> > produces zero.
>
> 0/1 ?

And it turns out that this fascinating result is not isolated. In
fact, there are infinitely many ratios of natural numbers that equal
zero! Who knew?

And, alas, not all rational numbers can be expressed as ratios of
natural numbers. Besides zero, there's an uncountably infinite
set of rationals which cannot be expressed as ratios of natural
numbers. Hint: They *can* be expressed as ratios of *integers*,
which is the proper definition.

(Of course, there are also ratios of integers - infinitely many -
which aren't rational. They might be hysteretic, depending on
whose take on the subject you prefer. But they all have the same
denominator, so they're easy to spot.)

--
Michael Wojcik michael...@microfocus.com

As always, great patience and a clean work area are required for fulfillment
of this diversion, and it should not be attempted if either are compromised.
-- Chris Ware

Curtis Bass

Aug 26, 2004, 1:57:01 PM
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote in message news:<8a2qi094vn5ht6pfq...@4ax.com>...

> That's because a fraction will not produce zero. Zero isn't a rational
> number, it's a limit.

Categorically incorrect. Zero is a rational number, as it can be
expressed as a ratio of integers: 0/N where N is any integer other
than zero.

Unless you would deny that zero is an integer.

> As x approaches infinity, 1/x approaches zero.

And as x approaches infinity, (x+1) / x approaches one. This means
that 1 is a limit. Does this preclude 1 from being a rational number?

> You can find documents on the Web saying zero is not a natural number
> but is a rational number. My counter is that every rational number can
> be expressed as a fraction of natural numbers. Show me a fraction that
> produces zero.

0/N where N is an integer, as stated above.


Curtis

Richard

Aug 26, 2004, 3:40:34 PM
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> Your system would screw us up. On one report you might say you are
> holding 1,000 (thousands of Euros) in XYZ. On the next report the same
> holding might be reported as 1,001. We would think you bought one.
> Some analytical methods are based on the number of 'decisions'. They
> don't care about quantity bought or sold. Your rounding correction
> would falsely count as a decision.

It seems to me that noticing a change based on the number that has
been rounded to thousands is an extremely poor mechanism. You could
buy or sell nearly a thousand and there would be no change in the
number shown.

For example 19500 -> 20. 20450 -> 20.

It seems that your current mechanism may hide decisions. I would
suggest that you use a different mechanism such as the actual number
of trades on the report, or a '*' or '+/-' indicating change since
last report (even if the number is the same).

Once you have such a definite mechanism the users will no longer need
to look at two reports to see what has changed (though it won't show
all changes) and it would not matter whether you used a mechanism to
ensure that the columns added up - if that is also a requirement of
course.

Walter Murray

Aug 26, 2004, 6:37:58 PM
"Michael Wojcik" <mwo...@newsguy.com> wrote:
> And, alas, not all rational numbers can be expressed as ratios of
> natural numbers. Besides zero, there's an uncountably infinite
> set of rationals which cannot be expressed as ratios of natural
> numbers. Hint: They *can* be expressed as ratios of *integers*,
> which is the proper definition.

I suspect that Michael meant to write "countably infinite". The set of all
rational numbers is countable.

Walter


Robert Wagner

Aug 26, 2004, 6:58:18 PM
On 25 Aug 2004 13:07:35 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> >Your methodology is flawed. In fact you have made a gross statistical
>> >error.
>
>> You're all wet. It is intuitively obvious that collections of random
>> numbers formatted v99, v999 or v9(infinity) will average .50000.
>
>This is your statistical error. The actual averages are .495, .4995
>and .500... respectively.
>
>It is only when a set of v9(infinity) is ROUNDED to v999 (actually it
>requires 9v999) that it retains an average of .50000

I thought rounding was a given, automatic and did not require
mentioning. Now I realize that's not true. In the demos I posted here
I should have said:
compute data-key-1 (i) ROUNDED = function random

The compiler (library) is returning 'random' numbers that are not
evenly distributed. They are biased toward the low end.

Thanks for pointing that out.

But wait .. rounding the random numbers introduces the error we're
testing for. The average will not be .5000, it will be .5005.

>> Rounding intermediate results is THE classic beginner's mistake. It
>> doesn't surprise me that you advocate it.
>
>Well, it seems that you made "THE classic beginner's mistake" by
>'intuitively' rounding the random numbers to arrive at your claim they
>averaged to .50000.
>
>You had 'intuitively' rounded to v999 then rounded again to v99. Note
>that a set of v999 that does average to .5000 does not have the
>distribution that you claim for it in the table.

You're right. The picture would have to be 9v999 to accommodate random
numbers greater than .9995.

Robert Wagner

Aug 26, 2004, 6:58:19 PM
On 26 Aug 2004 01:35:12 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

>> >You then truncated that to 3 digits and lost an average of .0005 per
>> >value, reducing the average to the .4995 that the v999 numbers now
>> >actually add up to.
>>
>> Alternatively,if I eliminated zeros (post-truncation), the average of
>> the other 99.9% would be .500.
>
>.000 is still a valid result from a random set of 3 digit numbers even
>if 0.0000000000000... is not.
>
>You are just manipulating the results to attempt to disguise the error
>you made.

Second attempt. I see the answer now. Change the pic to 9v999 and the
set size to 1001. Values start with .000 and end with .999, 1.000. The
average is 500.5/1001 = .500.

Why add 1.000? Because the output of a random number generator must be
rounded and, as you pointed out, the result can be 1.000.



>Take for example 10 random single digit numbers. If there was an even
>distribution (0 - 9) the average is 0.45 (try it).

Again, eleven values starting with .0 and ending with 1.0 average
5.5/11= .5

>However, on what basis can you claim that you can dispense with one of
>the 10 digits in a random set? It may give a 'better' result for
>your claim, but only if you ignore the consequences.

You're right, deleting zero is a mistake.

>> If PERFORM VARYING Frac FROM .001 BY .001 UNTIL Frac > .999,
>> the answer is 499.5/999 = .500.

Ok, I restored .000 and added 1.000. The result is the same.

Here is a more commonplace demo program. It takes familiar dollars and
cents and rounds them to dollars. Now the error is 5,000 parts per
million (dollars).

identification division.
program-id. test27b.
*> author. Robert Wagner.
*> Test rounding error
*> Results: 5000.0000 5000.0050
data division.
working-storage section.
01  unqualified-variables.
    05  Amt                         comp pic 99999v99.
    05  AmtRounded                  comp pic 99999.
    05  AmtTotal-1      value zero  comp pic 9(14)v99.
    05  AmtTotal-2      value zero  comp pic 9(14)v99.
    05  AmtCount        value zero  comp pic 9(14).
    05  AmtAverage-1                pic zzzzzz.9999.
    05  AmtAverage-2                pic zzzzzz.9999.

procedure division.
main.
    PERFORM VARYING Amt FROM ZERO BY .01 UNTIL Amt > 10000
        COMPUTE AmtRounded ROUNDED = Amt
        ADD Amt TO AmtTotal-1
        ADD AmtRounded TO AmtTotal-2
        ADD 1 TO AmtCount
    END-PERFORM
    COMPUTE AmtAverage-1 rounded = AmtTotal-1 / AmtCount
    COMPUTE AmtAverage-2 rounded = AmtTotal-2 / AmtCount
    DISPLAY AmtAverage-1 AmtAverage-2.



>There is no 'error' in rounding.

Yes, there IS an error in rounding. That's the point I'm making here.

>There is only an error in what you expect to happen.

I expect the average to remain the same. If it changes, rounding is
changing the numbers.

> Rounding is designed to take a set of arbitrary
>precision numbers and to give an accurate representation in a limited
>precision.
>
>ie if we have a set of random numbers in the form 9v9(lots) then the
>average will indeed be 0.5000 even if it includes 0.000000001.
>
>When that set is rounded to 2 digits of precision it only needs to
>look at the 3rd digit. The result will have exactly the same average
>as the original set at 0.50000, even if it included numbers that
>started 0.000.

That's how it should work. Rounding with Cobol raises the average to
.5005, as I demonstrated.

>Because only 3 digits are needed to do the rounding it is actually
>only necessary to store these 3, the rest of the digits can be
>discarded.

Right, after rounding to 3 digits.

>However, the discarding of these digits from the 4th onwards loses the
>value of these. The average is, as I have shown, 0.4995. This
>doesn't matter to the rounding because the loss is of unrequired data.

No, if the random numbers are rounded to 3 digits, the average will be
.500. We need a sample size of at least 1001 because the values after
rounding can range from .000 to 1.000.

>Your error is that you expected the truncated 3 digit numbers to add
>up to the same as the original random numbers. And then you expected
>the rounded two digit numbers to have the same characteristics (total
>average) as the 3 digit numbers.
>
>Rounding is to recover the characteristics of the original large
>precision while working in limited precision.
>
>Your set of 3 digit numbers represents a loss of data which the
>rounding recovers. Come back when you can understand this.

This talk about random numbers is a straw man. The topic is the error
in Cobol rounding.

Readers can easily test this. Find a large file with currency amounts.
Do a sum and compute the rounded average out to four places. Now round
the numbers to whole units and compute their average. You will find
that it increased by .5%. If numbers are computed rather than read
from a file, the error will be less. See demo below.

How much is your company's payroll? If the company has 10,000
employees, its payroll will be around $500M or 50B pennies. By
rounding each paycheck to the 'nearest' penny (not), the company might
be overpaying by $65,000.

Fix rounding to work right and you'll be a hero. Alternatively, put
the difference into your own paycheck and management will not see a
change in the total. Try to get a cell with internet access so we can
congratulate you for fulfilling an urban myth.

>One more time. Rounding is not designed to take a set of fixed
>precision numbers and reproduce the characteristics in a lesser
>precision. Your expectation is flawed.
>
>If you were to do this with 0.000001 BY 0.000001 UNTIL > .999999 then
>you will get a much closer result

Rounding six digits to five and six digits to two produced the same
result:
.500000000 .500000500
As before, 5 was added one position to the right of digits removed by
rounding.

Aha! Now I see the value of Standard Intermediate Data Item. With its
32-digit precision (average 16 right), rounding errors move from 4th
right of decimal to the 17th.

>For random numbers correctly in the range 0.0000...1 to 0.9999....
>
> large precision set -> average 0.500..
> rounded to 2 digits -> average 0.500..
> rounded to 1 digit -> average 0.500..
> rounded to 9 digits -> average 0.500..
> rounded to 0 digits -> average 0.500.. (half round to 1.0)

To see the error, your average must have one digit more than the
random numbers you're rounding.

When we're computing, how do we know the size of the intermediate
being rounded? I wrote this simple simulation of a payroll calculation
to measure that.

identification division.
program-id. test27c.
*> author. Robert Wagner.
*> Test rounding error on payroll calculation
*> Step interval is a prime to eliminate repeats
*> Results: 1705.4328000000 1705.4329297429
data division.
working-storage section.
01  unqualified-variables.
    05  Amt                         comp pic 99999v9(09).
    05  AmtRounded                  comp pic 99999v99.
    05  AmtTotal-1      value zero  comp pic 9(09)v9(09).
    05  AmtTotal-2      value zero  comp pic 9(09)v9(09).
    05  AmtCount        value zero  comp pic 9(10).
    05  Hours                       comp pic 9(03)v99.
    05  Rate                        comp pic 9(03)v99.
    05  StopOpt         value zero  comp pic 9(03)v99.
    05  AmtAverage-1                pic zzzzzz.9(10).
    05  AmtAverage-2                pic zzzzzz.9(10).

procedure division.
main.
    PERFORM VARYING Hours from 20 by .23 until Hours > 79
              AFTER Rate  from 10 by .23 until Rate > 59
        COMPUTE Amt ROUNDED = Hours * Rate
        ADD StopOpt to Hours
        COMPUTE AmtRounded ROUNDED = Hours * Rate
        ADD Amt TO AmtTotal-1
        ADD AmtRounded TO AmtTotal-2
        ADD 1 TO AmtCount
    END-PERFORM
    COMPUTE AmtAverage-1 rounded = AmtTotal-1 / AmtCount
    COMPUTE AmtAverage-2 rounded = AmtTotal-2 / AmtCount
    DISPLAY AmtAverage-1 AmtAverage-2.

The average paycheck is off by 130 parts per million. The compiler
(Realia) is doing a pretty good job managing the intermediate.

>So how does this affect Cobol programs ?
>
>If it is required to do some divisions then the result will have an
>arbitrary number of fractional digits. If the result has to be
>reduced to 2 digits as being, say, cents, then it is only necessary to
>store 3 digits in order to determine how it should be rounded.
>
>Your assumption is that a million of those those 3 digits should
>represent the characteristics of the full precision results. THEY
>DON'T. The rounded 2 digit results _DO_ represent the full precision
>results and it doesn't matter that the 3 digit truncations do not.
>
>That is: the rounding recovers the _correct_ result that the 3 digit
>truncation does not represent.
>
>You are correct that the 3 digit set and the 2 digit rounded set are
>different, but it is the rounded set that is correct.

If you had rounded the 3-digit set, it would have the same average as
the original numbers. Now, when you round three digits to two, how
does Cobol know whether it should 'recover' the original average or
leave it unchanged? It doesn't.

You're confusing an outright error in the rounding algorithm with
miraculous 'recovery' of missing data.

>Then don't use it for that application. It is a tool, if it doesn't
>fit then it is the wrong tool for _that_ job. It may be the correct
>tool for some other job.

When you publish data publicly, you may not be aware of how it will
be used.

>> Every time they say ROUNDED they're creating an error of 500 parts per
>> million. They've been doing it for 45 years.
>
>You still don't understand that it is the truncation to 3 digits that
>is out by 500 parts per million.

The demos I posted above don't do any truncation, yet the average
after rounding is higher than before rounding.

Truncation is a straw man. You set him up and then easily knock him
down.


Robert Wagner

unread,
Aug 26, 2004, 8:49:51 PM8/26/04
to
On 26 Aug 2004 12:40:34 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> Your system would screw us up. On one report you might say you are
>> holding 1,000 (thousands of Euros) in XYZ. On the next report the same
>> holding might be reported as 1,001. We would think you bought one.
>> Some analytical methods are based on the number of 'decisions'. They
>> don't care about quantity bought or sold. Your rounding correction
>> would falsely count as a decision.
>
>It seems to me that noticing a change based on the number that has
>been rounded to thousands is an extremly poor mechanism. You could
>buy or sell nearly a thousand and there would be no change in the
>number shown.

I was referring to bonds, where quantity is face value. Corporate
bonds are sold in denominations of thousands; US Government bonds are
in denominations of ten thousand. In other words, one bond is worth
one currency unit, but you have to buy a thousand at a time.

With stocks, quantity is shares, which is usually expressed in units
rather than thousands.

>It seems that your current mechanism may hide decisions. I would
>suggest that you use a different mechanism such as the actual number
>of trades on the report, or a '*' or '+/-' indicating change since
>last report (even if the number is the same).

There are as many variations in formatting as there are entities
creating reports, and there are thousands of them. Some, usually
universities, seem to delight in obfuscation. They present numbers in
scientific notation. They rotate the report 90 degrees so columnar
data reads down the page rather than across.

We have to deal with spreadsheets and report generators that, when the
report is too wide for the page, split it down the middle, presenting
all the left sides followed by all the right sides. We have to 'Scotch
tape' it back into a sensible report.

Then there are the screw-ups, who resend last quarter's report by
mistake. Or they re-send second quarter of last year rather than
second quarter of this year. Or they forget to hit Recalc, so the data
is correct but totals are the same as the previous report. We detect
those errors by comparing to history, then ask them to send a
corrected report, or verify that the detail IS correct.

All this used to be done by low-priced offshore labor. Now that it's
done by my software (100% Cobol), the accuracy rate has gone up
substantially.

>Once you have such a definite mechanism the users will no longer need
>to look at two reports to see what has changed (though it won't show
>all changes) and it would not matter whether you used a mechanism to
>ensure that the columns added up - if that is also a requirement of
>course.

Users don't look at two reports, we do it for them. If the input had +
and -, we wouldn't trust it to be right. Adding +/- would require an
edict from the Securities and Exchange Commission, which would
generate complaints about the cost of change.

Richard

unread,
Aug 27, 2004, 3:13:36 AM8/27/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> >Because only 3 digits are needed to do the rounding it is actually
> >only necessary to store these 3, the rest of the digits can be
> >discarded.
>
> Right, after rounding to 3 digits.

No. Rounding to 3 and then rounding to 2 is the beginner's mistake that
gives an error.

> No, if the random numbers are rounded to 3 digits, the average will be
> .500.

That is exactly the problem. You are manipulating the numbers to get
an average of 0.500. You think that every set of numbers should
average 0.500. They should not. It is because you force the average
for the 3 digit set up from the 0.4995 that it should be to 0.500 that
the subsequent rounding includes this additional 0.0005.

What you consistently fail to notice is why the average of the 3 digit
set is supposed to be 0.4995, and is when the set is correctly
established.

> This talk about random numbers is a straw man. The topic is the error
> in Cobol rounding.

Your original requirements:

>>> Suppose we have a million random numbers ...

> Readers can easily test this. Find a large file with currency amounts.
> Do a sum and compute the rounded average out to four places. Now round
> the numbers to whole units and compute their average. You will find
> that it increased by .5%. If numbers are computed rather than read
> from a file, they the error will be less. See demo below.

Yes. That occurs. But you don't seem to know why.

The currency amounts have been calculated by some mechanism, usually
by multiplying or dividing. For example hours x rate or salary / 12
or 13 or 26.

If the result is _rounded_ to cents to give the amount to be paid then
rounding this again to dollars is the 'beginner's error' that you
mentioned. It will give the 'wrong' answer.
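
A tiny sketch of the difference (12.495 and the field names are made up
for illustration, not taken from either posted program):

    01  exact-pay    pic 9(03)v9(03) value 12.495.
    01  pay-cents    pic 9(03)v99.
    01  pay-dollars  pic 9(04).

        COMPUTE pay-cents   ROUNDED = exact-pay   *> gives 12.50
        COMPUTE pay-dollars ROUNDED = pay-cents   *> gives 13 - the double rounding
        COMPUTE pay-dollars ROUNDED = exact-pay   *> gives 12 - rounded only once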

If the result is truncated to give the cents then rounding to dollars
will give the 'correct' answer that will be the same as a summation of
the large precision calculations.

Of course the compiler can't tell whether you are double rounding or
not - it just works the way it is designed.

> How much is your company's payroll? If the company has 10,000
> employees, its payroll will be around $500M or 50B pennies. By
> rounding each paycheck to the 'nearest' penny (not), the company might
> be overpaying by $65,000.

No. That is completely wrong and an incompetent extrapolation of your
erroneous conclusions.

Your 'extra' 0.005 is caused by you rounding or manipulating the
numbers to force an average of 0.500 before applying rounding. In
other words you are double-rounding and getting the wrong answer.

Calculating by multiplying and dividing and doing one rounding to
cents gives the correct answer. There is no 'extra 65,000.00'.

> Rounding six digits to five and six digits to two produced the same
> result:
> .500000000 .500000500
> As before, 5 was added one position to the right of digits removed by
> rounding.

No. You had contrived the numbers to give an uneven distribution that
gave an average of 0.5000000. THIS IS WRONG. A correct even
distribution for 6 digit numbers will have an average of 0.4999995.
Rounding this to 5 digits will give 0.500000 which is the correct
result.

digits   range                average
  1      0      - 9           .45
  2      00     - 99          .495
  3      000    - 999         .4995
  4      0000   - 9999        .49995
  5      00000  - 99999       .499995
  6      000000 - 999999      .4999995
(lots)   00..   - 999...      .4999999999... (approx 0.5)

(any)    rounded              .5000

If you are not getting these results it is because you are frigging
the numbers. Either you don't have an even distribution or you have
rounded.

For example adding an extra 1.000.. gives an uneven distribution with
extra zero digits, and it adds the 1 that changes the averages above
to the WRONG VALUE of 0.5000.

> If you had rounded the 3-digit set, it would have the same average as
> the original numbers.

It would _incorrectly_ have the same average.

> Now, when you round three digits to two, how
> does Cobol know whether it should 'recover' the original average or
> leave it unchanged? It doesn't.

It doesn't, that is why double rounding is _wrong_.

> You're confusing an outright error in the rounding algorithm with
> miraculous 'recovery' of missing data.

It is not miraculous at all.

If you have a set of arbitrary long precision random numbers, or
results of calculations, with an average of 0.50000 exactly and want
to have a set of 2 digit numbers (say, cents) that have the same
average of 0.5000, you round to 2 digit numbers directly from the long
numbers. This uses only the two digits plus the third to decide the
rounding:

0.29878765852098..        0.30
0.760870909877987987..    0.76
0.1709832455341..         0.17
....                      ...
0.309165413075241..       0.31
----------------------    ----
average 0.50000000..      average 0.5000


As you can see it only uses 3 digits, the rest do add up into the
total to calculate the average but are not used in the rounding. They
can be discarded.

0.298                     as above
0.760
0.170
...
0.309
-----                     ----
average ???               average 0.5000

Because the 4th digit onwards has been discarded the total is now
_less_ and the average is down to 0.4995.

You consistently and _WRONGLY_ manipulate that 3 digit column to force
it to have an average of 0.5000, by rounding or adding numbers or
changing zeros to something else. When this is applied to the original
long precision set it increases the average of that to 0.5005. You are
then surprised when the average of the rounded set is .5005.


> When you publish data publically, you man not be aware of how it will
> be used.

I really don't care who uses it or how. You evaluated it and
found that it did not suit your particular needs. That does not mean
that it is wrong for everyone; I expect each to evaluate and judge for
themselves.

> The demos I posted above don't do any truncation, yet the average
> after rounding is higher than before rounding.

It is irrelevant that you didn't explicitly truncate. You implicitly
truncated by using an even set of numbers.

> Truncation is a straw man. You set him up and then easily knock him
> down.

No. Not at all. You started with 'random numbers'. Rounding is
designed to operate on long precision numbers and work in a limited
precision.

Robert Wagner

unread,
Aug 27, 2004, 6:55:34 AM8/27/04
to
On 27 Aug 2004 00:13:36 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

>That is exactly the problem. You are manipulating the numbers to get


>an average of 0.500. You think that every set of numbers should
>average 0.500. They should not. It is because you force the average
>for the 3 digit set up from the 0.4995 that it should be to 0.500 that
>the subsequent rounding includes this additional 0.0005.
>
>What you consistently fail to notice is why the average of the 3 digit
>set is supposed to be 0.4995, and is when the set is correctly
>established.

>> Rounding six digits to five and six digits to two produced the same
>> result:
>> .500000000 .500000500
>> As before, 5 was added one position to the right of digits removed by
>> rounding.
>
>No. You had contrived the numbers to give an uneven distribution that
>gave an average of 0.5000000. THIS IS WRONG. A correct even
>distribution for 6 digit numbers will have an average of 0.4999995.
>Rounding this to 5 digits will give 0.500000 which is the correct
>result.
>
> 1 0 - 9 .45
> 2 00 - 99 .495
> 3 000 - 999 .4995
> 4 0000 - 9999 .49995
> 5 00000- 99999 .499995
> 6 000000-999999 .4999995
> (lots) 00.. -999... .4999999999... (approx 0.5)

This demo uses a six-digit integer representing, say, the number of
stock shares in a portfolio. A number like that was never computed nor
rounded nor truncated, thus removing your argument about double
rounding. We are going to round the number to thousands for
presentation on a report. The test is whether the presented numbers
have the same average as the originals. If they do not, Cobol is
introducing an error.

Per your insistence, I let the count range from zero to 999999. The
resulting average is, as you say, 499999.5.

Explain why the average of the rounded report numbers is .5 higher
than actual counts. In other words, every detail line contains an
extra half share, on average. The total of the reported numbers is
500,000 shares higher than it should be.

Don't say double rounding, the counts were not rounded numbers. Don't
say I manipulated the population; I did exactly what you said I
should.

identification division.
program-id. test27d.
*> author. Robert Wagner.
*> Test rounding error
*> Simulate a file of integers representing, say, stock shares
*> Round them to thousands
*> Findings: 499999.5000 500000.0000
*> The report is off by 500000.0000

data division.
working-storage section.
01  unqualified-variables.
    05  Qty                       comp pic 9(07).
    05  QtyRounded                comp pic 9(04).
    05  QtyTotal-1     value zero comp pic 9(15).
    05  QtyTotal-2     value zero comp pic 9(15).
    05  QtyCount       value zero comp pic 9(09).
    05  QtyAverage-1   pic z(07).9999.
    05  QtyAverage-2   pic z(07).9999.

procedure division.
main.
    PERFORM VARYING Qty FROM zero BY 1 UNTIL Qty > 999999
        COMPUTE QtyRounded ROUNDED = Qty / 1000
        ADD Qty TO QtyTotal-1
        ADD QtyRounded TO QtyTotal-2
        ADD 1 TO QtyCount
    END-PERFORM
    COMPUTE QtyAverage-1 rounded = QtyTotal-1 / QtyCount
    COMPUTE QtyAverage-2 rounded = QtyTotal-2 * 1000 / QtyCount
    DISPLAY QtyAverage-1 QtyAverage-2
    COMPUTE QtyAverage-1 = (QtyTotal-2 * 1000) - QtyTotal-1
    DISPLAY 'The report is off by ' QtyAverage-1.

>The currency amounts have been calculated by some mechanism, usually
>by multiplying or dividing. For example hours x rate or salary / 12
>or 13 or 26.
>
>If the result is _rounded_ to cents to give the amount to be paid then
>rounding this again to dollars is the 'beginner's error' that you
>mentioned. It will give the 'wrong' answer.
>
>If the result is truncated to give the cents then rounding to dollars
>will give the 'correct' answer that will be the same as a summation of
>the large precison calculations.

>> How much is your company's payroll? If the company has 10,000


>> employees, its payroll will be around $500M or 50B pennies. By
>> rounding each paycheck to the 'nearest' penny (not), the company might
>> be overpaying by $65,000.
>
>No. That is completely wrong and an incompetent extrapolation of your
>erroneous conclusions.
>
>Your 'extra' 0.005 is caused by you rounding or manipulating the
>numbers to force an average of 0.500 before applying rounding. In
>other words you are double-rounding and getting the wrong answer.
>
>Calculating by multiplying and dividing and doing one rounding to
>cents gives the correct answer. There is no 'extra 65,000.00'.

If you had looked at the code I posted, you would have seen there is no
double rounding. The check amount was independently calculated twice
-- once to high precision and a second time to (incorrectly) rounded
pennies.

Here I enhanced it to round two ways -- the Cobol way and the right
way.

identification division.
program-id. test27c.
*> author. Robert Wagner.
*> Test rounding error on payroll calculation
*> Step intervals are primes to eliminate repeats
*> Results: 1707.4050000000 1707.4054493381 1707.4049002009
*> The program overpaid by 867.05
*> With better rounding, the error is 192.57-

data division.
working-storage section.
01  unqualified-variables.
    05  Amt                            pic 999999v9(04).
    05  filler redefines Amt.
        10  AmtDigit occurs 10         pic 9.
    05  AmtRounded                comp pic 999999v99.
    05  AmtRounded-3              comp pic 999999v99.
    05  AmtTotal-1     value zero comp pic 9(12)v9(04).
    05  AmtTotal-2     value zero comp pic 9(12)v9(02).
    05  AmtTotal-3     value zero comp pic 9(12)v9(02).
    05  AmtCount       value zero comp pic 9(09).
    05  Hours                     comp pic 9(03)v99.
    05  Rate                      comp pic 9(03)v99.
    05  AmtAverage-1   pic zzzzzz.9(10).
    05  AmtAverage-2   pic zzzzzz.9(10).
    05  AmtAverage-3   pic zzzzzz.9(10).
    05  AmtError       pic zzzzzz.99-.

procedure division.
main.
    PERFORM VARYING Hours from 20 by .03 until Hours > 79
            AFTER Rate from 10 by .05 until Rate > 59
        *> Compute Amt with no rounding, Cobol rounding and Bankers' rounding
        COMPUTE Amt = Hours * Rate
        COMPUTE AmtRounded ROUNDED = Hours * Rate
        MOVE Amt TO AmtRounded-3
        IF (AmtDigit (9) GREATER THAN 5 OR
            (AmtDigit (9) EQUAL TO 5 AND
            (AmtDigit (8) = 1 OR 3 OR 5 OR 7 OR 9)))
            ADD .01 TO AmtRounded-3
        END-IF
        ADD Amt TO AmtTotal-1
        ADD AmtRounded TO AmtTotal-2
        ADD AmtRounded-3 TO AmtTotal-3
        ADD 1 TO AmtCount
    END-PERFORM
    COMPUTE AmtAverage-1 rounded = AmtTotal-1 / AmtCount
    COMPUTE AmtAverage-2 rounded = AmtTotal-2 / AmtCount
    COMPUTE AmtAverage-3 rounded = AmtTotal-3 / AmtCount
    DISPLAY AmtAverage-1 AmtAverage-2 AmtAverage-3
    COMPUTE AmtError = AmtTotal-2 - AmtTotal-1
    DISPLAY 'The program overpaid by ' AmtError
    COMPUTE AmtError = AmtTotal-3 - AmtTotal-1
    DISPLAY 'With better rounding, the error is ' AmtError.


Robert Jones

unread,
Aug 27, 2004, 1:24:11 PM8/27/04
to
I think that the averages for the separate sums of the rounded set and
the original set will almost always be slightly different for
individual non-infinite sets of numbers.

Michael Wojcik

unread,
Aug 27, 2004, 1:11:53 PM8/27/04
to

Drat! That is what I meant, of course. Er, I think; maybe I actually
did have a little brain glitch there. (Forming one set by mapping a
countably-infinite set is pretty much how infinite-set cardinality is
demonstrated, after all, so it should be obvious that the rationals
are countable.)

--
Michael Wojcik michael...@microfocus.com

Americans have five disadvantages which you should take into account
before giving us too hard a time:
- We're landlocked
- We're monolingual
- We have poor math and geography skills -- Lucas MacBride

Robert Wagner

unread,
Aug 27, 2004, 2:38:13 PM8/27/04
to
On 27 Aug 2004 10:24:11 -0700, rjo...@hotmail.com (Robert Jones)
wrote:

>I think that the averages for the separate sums of the rounded set and
>the original set will almost always be slightly different for
>individual non-infinite sets of numbers.

Common sense says that if exactly half round up and half 'round down'
(are truncated), the average will be exactly the same. Not so.

Slight differences that go in both directions -- one batch up, the
next down -- are tolerable. Difference of 5,000 parts per million,
always up, is not 'slightly different' caused by random noise. It's an
error.

docd...@panix.com

unread,
Aug 27, 2004, 2:58:32 PM8/27/04
to
In article <01vui0pp92mpjdnfm...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On 27 Aug 2004 10:24:11 -0700, rjo...@hotmail.com (Robert Jones)
>wrote:
>
>>I think that the averages for the separate sums of the rounded set and
>>the original set will almost always be slightly different for
>>individual non-infinite sets of numbers.
>
>Common sense says that if exactly half round up and half 'round down'
>(are truncated), the average will be exactly the same.

Common sense says that heavier objects fall faster than lighter ones.

>Not so.

Exactly... gotta watch out for that 'common sense' stuff.

DD

Robert Wagner

unread,
Aug 27, 2004, 3:17:42 PM8/27/04
to

If common sense were truly common, more than half would have some.

docd...@panix.com

unread,
Aug 27, 2004, 4:29:30 PM8/27/04
to
In article <q52vi0dceei6jn0te...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On 27 Aug 2004 14:58:32 -0400, docd...@panix.com wrote:
>
>>In article <01vui0pp92mpjdnfm...@4ax.com>,
>>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>>>On 27 Aug 2004 10:24:11 -0700, rjo...@hotmail.com (Robert Jones)
>>>wrote:
>>>
>>>>I think that the averages for the separate sums of the rounded set and
>>>>the original set will almost always be slightly different for
>>>>individual non-infinite sets of numbers.
>>>
>>>Common sense says that if exactly half round up and half 'round down'
>>>(are truncated), the average will be exactly the same.
>>
>>Common sense says that heavier objects fall faster than lighter ones.
>>
>>>Not so.
>>
>>Exactly... gotta watch out for that 'common sense' stuff.
>
>If common sense were truly common, more than half would have some.

As I've heard attributed to Thomas Paine, Voltaire and Robert Heinlein, Mr
Wagner, 'Common sense is anything but common'; as Rene Descartes put it,
'Common sense is the most fairly distributed thing in the world, for each
one thinks he is so well-endowed with it that even those who are hardest
to satisfy in all other matters are not in the habit of desiring more of
it than they already have.'

DD

Richard

unread,
Aug 27, 2004, 5:16:04 PM8/27/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> Per your insistance, I let the count range from zero to 999999. The
> resulting average is, as you say, 499999.5.

Thank you for finally recognising that your "Suppose we have a million
random numbers formatted v999, adding up to 500,000" was invalid. It
would, as you now see, in this case, only add up to 499,500.

And also that the cause of the 'error' is wrong: "It is intuitively


obvious that collections of random numbers formatted v99, v999 or

v9(infinity) will average .50000.". As you now see, finally, the
averages will be 0.495, 0.4995 and 0.499...

> Explain why the average of the rounded report numbers is .5 higher
> than actual counts. In other words, every detail line contains an
> extra half share, on average. The total of the reported numbers is
> 500,000 shares higher than it should be.

The fault you are making here is adding up the rounded amounts to use
as the total (and then dividing it to get the 'average').

This is the beginner's second mistake - right after learning that
double rounding is wrong. DO NOT add up the rounded amounts for the
total, add up the actual amounts and round that. They teach that here
in primary school (no, really, they do).

Of course, as explained in 'round and forward', this may mean that the
total isn't the same number as adding up the column, but that is
because the numbers are no longer exact, they are approximations to
the nearest thousand.

If you want the column to also add up then simple rounding is the
WRONG TOOL for THAT JOB. As I have already said it is your
expectation that is in error (as well as your manipulation of the
numbers in your other examples).

In fact the RIGHT TOOL for representing those numbers as a column of
thousands that do add up is the 'round and forward' that I described.
This, as already demonstrated, gives a column that does add up to the
rounded actual total.
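
For anyone who hasn't met it, a minimal sketch of 'round and forward'
(the field names are made up, not taken from any posted code): carry
each line's rounding residue into the next line.

    01  actual-amt      pic 9(09)v99.
    01  carry-fwd       pic s9(04)v99 value zero.
    01  line-thousands  pic s9(07).

    *>  for each detail line on the report:
        COMPUTE line-thousands ROUNDED = (actual-amt + carry-fwd) / 1000
        COMPUTE carry-fwd = carry-fwd + actual-amt - (line-thousands * 1000)

Because the residue never exceeds half a thousand, the printed column
adds up to the actual total rounded to the nearest thousand, which is
the property being claimed here.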


> Don't say double rounding, the counts were not rounded numbers. Don't
> say I manipulated the population; I did exactly what you said I
> should.

Yes. And you got exactly the result that _I_ expected. That is
exactly what a different tool: round and forward, is for. If you use
the wrong tool you get the wrong results.

If you want a column that shows the correct rounded thousands and the
correct rounded total then round, but DO NOT add up the rounded
numbers. Add up the actuals and round that to give the total.


> If you had looked at the code I posted, you have seen there is no
> double rounding. The check amount was independently calculated twice
> -- once to high precision and a second time to (incorrectly) rounded
> pennies.

Your original payroll example was:

>> Here is a more commonplace demo program. It takes familiar dollars and
>> cents and rounds them to dollars. Now the error is 5,000 parts per
>> million (dollars).

The 'dollars and cents' has been rounded to cents after the
calculation. This is the correct way to do it and gives the correct
answer to hours x rates, but only if the rates and hours aren't
already rounded. You are correct that this may give the wrong answer
where the rates and hours are already rounded (double rounding
occurs), but it is accepted that an employee may get an extra cent a
week, or a cent less - that is what they will be paid. This
calculation may be out by 1 cent, not by some percentage.

If the rates and hours are derived without rounding then the rounded
cents will be correct and some pays will be +.5 and others -.5.

However, you wanted to take these amounts and round them to dollars.
The individual payment result will be the correct rounded dollars for
those payment amounts.

It would be an error to add up the rounded dollars to give the total.
The dollar-cent amounts should be added and that total rounded if you
want the 'correct' total. The figures may, or may not, add up to
that.

Note that the amounts paid are correct. A sum of these amounts paid
will be correct. A rounded total of a sum of the actual payments will
be correct. There is no $65,000 that you can put in your pocket.

There are other tools that could be used for this job. You could for
example use 'round and forward' if you want the column of rounded
dollars to add up exactly to the rounded total.

As the individual rounded dollars are not being used to make payments
or deduct from bank accounts there is no '65,000 you can put in your
pocket'. Or that is to say, these totals of rounded numbers should
NOT be used to do that. If you are, then your programs are seriously
in error.

Next week we will move on to 'Numeric Analysis for Beginners'. ;-)

Richard

unread,
Aug 27, 2004, 5:41:30 PM8/27/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> >I think that the averages for the separate sums of the rounded set and
> >the original set will almost always be slightly different for
> >individual non-infinite sets of numbers.
>
> Common sense says that if exactly half round up and half 'round down'
> (are truncated), the average will be exactly the same. Not so.
>
> Slight differences that go in both directions -- one batch up, the
> next down -- are tolerable. Difference of 5,000 parts per million,
> always up, is not 'slightly different' caused by random noise. It's an
> error.

I thought that you had finally 'got it'. Obviously not.

It is not '5,000 parts per million'. It is a 5 in the first column
you are truncating to form your working set. It happens that you chose
3 digits to work with and thus the truncation is of an average 0.0005.
You then seemed to conclude that the code that did the rounding was
in error by '5000 parts per million'.

If you had used a set of 4 digit even distributed numbers it would be
'out' by 0.00005, with 6 it would be 'out' by '5 parts per million'.

And the amount that it is 'out' does not depend on how many digits are
rounded to. With 6 digit even distribution the total of the rounded
numbers will be the same whether they are rounded to 5, or 4, or 1
digit.

So, as a simple pointer to where the 'problem' lies: it is independent
of the degree of rounding. It is dependent on the precision of the
even set of numbers.

In other words it is dependent on where the truncation occurs.

This is because the 'out by x parts per million' is caused by
truncating and only exists in the truncated set.

Russell Styles

unread,
Aug 27, 2004, 7:49:49 PM8/27/04
to

"Richard" <rip...@Azonic.co.nz> wrote in message
news:217e491a.04082...@posting.google.com...

> Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
> > Per your insistance, I let the count range from zero to 999999. The
> > resulting average is, as you say, 499999.5.
>
> Thank you for finally recognising that your "Suppose we have a million
> random numbers formatted v999, adding up to 500,000" was invalid. It
> would, as you now see, in this case, only add up to 499,500.
>
> And also that the cause of the 'error' is wrong: "It is intuitively
> obvious that collections of random numbers formatted v99, v999 or
> v9(infinity) will average .50000.". As you now see, finally, the
> averages will be 0.495, 0.4995 and 0.499...
>
> > Explain why the average of the rounded report numbers is .5 higher
> > than actual counts. In other words, every detail line contains an
> > extra half share, on average. The total of the reported numbers is
> > 500,000 shares higher than it should be.
>
> The fault you are making here is adding up the rounded amounts to use
> as the total (and then dividing it to get the 'average').
>
> This is the beginners second mistake - right after learning that
> double rounding is wrong. DO NOT add up the rounded amounts for the
> total, add up the actual amounts and round that. They teach that here
> in primary school (no, really, they do).
>

Adding up the rounded amounts might be bad practice,
but it IS the best way to test the effect rounding might have on the
figures. One hopes that, with a sufficiently large sample of random
numbers, the two totals would be very close.

When testing to destruction, you have to use bad practices, or
WRONG TOOLS.

Add up the set of complete (long) numbers.

Add up same set, rounding before adding.

Round first total to match second set.

Compare.


Might be interesting if someone tries it.

Richard

unread,
Aug 28, 2004, 3:50:45 PM8/28/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> Common sense says that if exactly half round up and half 'round down'
> (are truncated), the average will be exactly the same. Not so.
>
> Slight differences that go in both directions -- one batch up, the
> next down -- are tolerable. Difference of 5,000 parts per million,
> always up, is not 'slightly different' caused by random noise. It's an
> error.

It is not an error in the rounding, it is an 'error' in using rounding
on that set of data.

Rounding works on real numbers; it only looks at one digit, and
everything beyond that is assumed to be an arbitrary length of random
digits. When the data meets this expectation, as it does when used in
appropriate circumstances, it works as designed. Well actually, it
works as part of the number system.

Your stock example is of integers. Integers don't work the same way as
real numbers do. You are rounding to thousands of dollars and the
result is as if the numbers were reals with an arbitrary number of
digits following the point of rounding.

Stocks are always (you say) whole dollars only. It may be, for
example, that for another set the stocks are only found in 500 dollar
increments. You may ponder how rounding could possibly know how to
deal with that. Would you claim that the resulting average is in
error caused by the Cobol compiler?

Another way of getting the same average as the set of integers
(assuming they are random to the dollar) is to prepare the set by
subtracting the shortfall from a random set of reals average before
rounding. In the case of the stocks you would subtract 0.50 (50 cents)
from each positive value before rounding to whatever precision you
want. Well actually it should be 0.4999.. that is subtracted.

For a set of dollar.cents which have been rounded from calculations
you can 'unround' these by subtracting 0.004999.. before rounding to
dollars; usually this is not an issue and the variability of the data
means that the difference is insignificant.

Of course this will give a few numbers that are not 'right'.
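
Spelled out as one-liners (the names are made up, and the literals
merely stand in for the 0.4999.. and 0.004999.. above):

        COMPUTE report-thousands ROUNDED = (share-qty - 0.4999) / 1000
        COMPUTE whole-dollars    ROUNDED = pay-cents  - 0.004999

With the adjustment a count ending in exactly 500 rounds down instead
of up, and cents of exactly .50 go to the lower dollar, which is where
the upward drift was coming from.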

Robert Wagner

unread,
Aug 28, 2004, 3:52:42 PM8/28/04
to
On 27 Aug 2004 14:16:04 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> Per your insistance, I let the count range from zero to 999999. The
>> resulting average is, as you say, 499999.5.
>
>Thank you for finally recognising that your "Suppose we have a million
>random numbers formatted v999, adding up to 500,000" was invalid. It
>would, as you now see, in this case, only add up to 499,500.
>
>And also that the cause of the 'error' is wrong: "It is intuitively
>obvious that collections of random numbers formatted v99, v999 or
>v9(infinity) will average .50000.". As you now see, finally, the
>averages will be 0.495, 0.4995 and 0.499...

RP: 1, RW: 0

>> Explain why the average of the rounded report numbers is .5 higher
>> than actual counts. In other words, every detail line contains an
>> extra half share, on average. The total of the reported numbers is
>> 500,000 shares higher than it should be.
>
>The fault you are making here is adding up the rounded amounts to use
>as the total (and then dividing it to get the 'average').
>
>This is the beginners second mistake - right after learning that
>double rounding is wrong. DO NOT add up the rounded amounts for the
>total, add up the actual amounts and round that. They teach that here
>in primary school (no, really, they do).

That's what I meant by "rounding intermediate results." Detail lines
are intermediates.

Since we, as receiver of the report, lack access to unrounded detail,
we have no choice but to sum rounded numbers. Assuming the sender did
as you said above, we understand why the sum doesn't match his total.

What should we do with the difference? Logically, we should construct
a detail line called Rounding Error and stuff it there. Accountants
call this practice 'plugging' -- adjusting detail to make it sum to a
predetermined total. It's considered bad accounting practice. Our
users wouldn't get any useful information from the number. So we drop
it. Downstream, the rounded details are re-summed to a slightly
incorrect total.

>If you want the column to also add up then simple rounding is the
>WRONG TOOL for THAT JOB. As I have already said it is your
>expectation that is in error (as well as your manipulation of the
>numbers in your other examples).
>
>In fact the RIGHT TOOL for representing those numbers as a column of
>thousands that do add up is the 'round and forward' that I described.
>This, as already demonstrated, gives a column that does add up to the
>rounded actual total.

No, round and forward causes problems I described earlier. A better
tool is Bankers' Rounding.

Visual Basic and .NET (including C#) do use Bankers' Rounding.
Apparently, it caused so many complaints that Excel rounds the
'normal' way, and doesn't offer an option to round the 'right' way.
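
For a v999 amount going to cents, where nothing follows the third
decimal, the whole half-to-even rule fits in a few lines (the field
names are illustrative, not from the demos above):

    01  rounding-work.
        05  work-amt     pic 9(06)v999.
        05  filler redefines work-amt.
            10  wa-digit  occurs 9 pic 9.
        05  work-cents   pic 9(06)v99.

        MOVE work-amt TO work-cents                *> truncates the third decimal
        EVALUATE TRUE
            WHEN wa-digit (9) > 5
                ADD .01 TO work-cents
            WHEN wa-digit (9) = 5 AND (wa-digit (8) = 1 OR 3 OR 5 OR 7 OR 9)
                ADD .01 TO work-cents              *> exact half: go up only when the cents digit is odd
        END-EVALUATE

When the input genuinely stops at the third decimal, that second WHEN
is all there is to the half-to-even rule.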

>> Don't say double rounding, the counts were not rounded numbers. Don't
>> say I manipulated the population; I did exactly what you said I
>> should.
>
>Yes. And you got exactly the result that _I_ expected. That is
>exactly what a different tool: round and forward, is for. If you use
>the wrong tool you get the wrong results.

Most mutual fund holdings reports are produced via Excel, by
non-programmers. I don't know about all reports in all industries, but
I assume a significant portion are produced with similar tools.

As a practical matter, the 'right tool' will not, cannot, be used
unless it's provided by the software.

Suppose round and forward were available in Excel. What would happen?
Some would use it. On the second report, the noise it creates --
numbers going up and down by 1 -- would quickly be detected. Word
would go out that it's an inappropriate tool for financial reporting
to the outside world.

>> If you had looked at the code I posted, you have seen there is no
>> double rounding. The check amount was independently calculated twice
>> -- once to high precision and a second time to (incorrectly) rounded
>> pennies.
>
>Your original payroll example was:
>
>>> Here is a more commonplace demo program. It takes familiar dollars and
>>> cents and rounds them to dollars. Now the error is 5,000 parts per
>>> million (dollars).

Wrong demo. Look at the payroll demo.

>The 'dollars and cents' has been rounded to cents after the
>calculation. This is the correct way to do it and gives the correct
>answer to hours x rates, but only if the rates and hours aren't
>already rounded. You are correct that this may give the wrong answer
>where the rates and hours are already rounded (double rounding
>occurs) but this is accepted that an employee may get an extra cent a
>week, or a cent less - that is what they will be paid. This
>calculation may be out by 1 cent, not by some percentage.

One cent per paycheck adds up to hundreds or thousands of dollars, if
rounding is biased in one direction. Statistically, it can be
expressed as a percentage.

>Note that the amounts paid are correct. A sum of these amounts paid
>will be correct. A rounded total of a sum of the actual payments will
>be correct. There is no $65,000 that you can put in your pocket.

You're jumbling two different cases. One added a column of dollars and
cents amounts, the other computed paychecks rounded to cents.

>Next week we will more on to 'Numeric Analysis for Beginners'. ;-)

We'll see why a single algorithm is inappropriate. We'll use one
algorithm for small inputs, a second for middle values and a third for
big numbers. What fun.

Robert Wagner

unread,
Aug 28, 2004, 3:52:45 PM8/28/04
to

Correct. The 5 appears one position right of the rightmost digit
removed by rounding.

0.nnn rounded to two digits gives
0.5005

0.nnn rounded to zero digits (whole number) gives
1.0005

0.nnnnn rounded to 1 through 4 digits gives the same
0.500005

Dollars and cents rounded to dollars:

x.nn rounds to
y.005

That's 5000 parts per million (move the decimal six positions to the
right). Someone might ask where the .005 is in a number rounded to
dollars. It's in an upward bias in the average.

Now consider the Standard Intermediate Data Item in the 2002 Cobol
standard. It has, on average, 16 digits right of the decimal. Thus:

x.nnnnnnnnnnnnnnnn rounds to
y.00000000000000005

As a result, rounding error is negligible.

With that understanding, we can examine the effect of double rounding.
Suppose we start with a six-place number and round it to cents. Then
we round the intermediate cents to dollars.

0.nnnnnn rounds to
0.5000005 which then rounds to
1.0050005

The error increases from 5000 ppm to 5000.5 ppm. Seeing the number
puts it in better perspective than hand waving.

Robert Wagner

unread,
Aug 28, 2004, 8:39:43 PM8/28/04
to
On 28 Aug 2004 12:50:45 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> Common sense says that if exactly half round up and half 'round down'
>> (are truncated), the average will be exactly the same. Not so.
>>
>> Slight differences that go in both directions -- one batch up, the
>> next down -- are tolerable. Difference of 5,000 parts per million,
>> always up, is not 'slightly different' caused by random noise. It's an
>> error.
>
>It is not an error in the rounding, it is an 'error' in using rounding
>on that set of data.
>
>Rounding works on real numbers, rounding only looks at one digit,
>everything beyond that is assumed to be an arbitrary length of random
>digits. When the data meets this expectation, as it does when used in
>appropriate circumstances, it works as designed. Well actually, it
>works as part of the number system.
>
>Your stock example is of integers. Integers don't work the same way as
>real numbers do. You are rounding to thousands of dollars and the
>result is as if the numbers were reals with an arbitrary number of
>digits following the point of rounding.

If I took a dollar amount, removed 'V' and changed the name from
'amount-in-dollars' to 'amount-in-pennies', would that magically
convert it from a real to an integer? It was already an integer. All
fixed-point numbers are integers. Punctuation just makes them easier
for humans to read.

>Stocks are always (you say) whole dollars only.

I said bond quantity (maturity value) is multiples of 1,000. When
currency is devalued, conversion will produce odd numbers or
fractions. That's usually rare, the last (in a country with a stock
market) was Brazil in January 1999. However, when European countries
adopted the euro in 2001, there was massive 're-denomination' of
bonds. Also, there is the currency conversion issue.

Stocks are traded in blocks of 100 shares. Splits can produce odd
numbers.

> It may be, for example
>that, for another set, the stocks are only found in 500 dollar
>increments. You may ponder how rounding could possibly know how to
>deal with that. Would you claim that the resulting average is in
>error caused by the cobol compiler ?

In that hypothetical case, yes. When rounding to thousands, half would
be unchanged and half would round up. The average line would increase
by $250.

That's an illustration of why arithmetic rounding doesn't work as
expected. Wish I'd thought of it. Half the numbers gain 500 and the
other half are unchanged. They don't 'round down', they don't lose any
value.

If Bankers' Rounding were used, there would be no error. Half the
numbers ending with 500 would lose 500 and half would gain 500.
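
For instance, with made-up counts in multiples of 500:

    2,500 -> 2,000        3,500 -> 4,000
    4,500 -> 4,000        5,500 -> 6,000

Two go down by 500 and two go up by 500, so the total is unchanged.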

>Another way of getting the same average as the set of integers
>(assuming they are random to the dollar) is to prepare the set by
>subtracting the shortfall from a random set of reals average before
>rounding. In the case of the [bonds] you would subtract 0.50 (50 cents)
>from each positive value before rounding to whatever precision you
>want. Well actually it should be 0.4999.. that is subtracted.

That would work for evenly distributed numbers. Good idea. It's
explicitly compensating for the rounding error being introduced by
Cobol.

When rounding to thousands, it's not necessary for bond quantities,
which are in multiples of 1,000. Where the numbers are multiples of
100, as they are for stock shares, it would convert the Cobol error
from +50 to -50. That's 50, not .50.

Why? Before correction, 10% of the numbers were unchanged, 40% rounded
down by an average 250, 50% rounded up by an average 300. Average
change = 50. After subtracting .5, 10% of the numbers are unchanged
(the ones ending in 000), 50% round down by an average 300, 40% round
up by an average 250. Average change = -50.

Fifty additional shares per line is a major error. Fortunately, stock
shares are not customarily rounded to thousands. Are there other
numbers in multiples of a hundred? Consider your electric meter, which
is accurate to one tenth of a kilowatt-hour. Someone might store it as
watt-hours, then round it to kilowatt-hours for reporting .. without
realizing he's adding 50 watt-hours per meter.

>For a set of dollar.cents which have been rounded from calculations
>you can 'unround' these by subtracting 0.004999.. before rounding to
>dollars, usually this is not an issue and the variablity of the data
>means that the difference is insignificant.

Nope. You'd have to know the precision they were rounded FROM.
Subtracting .005 works only if they were rounded from three digits to
two. If the intermediate was in standard arithmetic, the correction
would be -0.00000000000000005.


William M. Klein

unread,
Aug 29, 2004, 12:42:10 AM8/29/04
to
I just HAD to create a new thread with this one (class in my opinion) RW
statement

"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message

news:pmq1j0l9grmt07nvm...@4ax.com...


> On 28 Aug 2004 12:50:45 -0700, rip...@Azonic.co.nz (Richard) wrote:

<snip>


> If I took a dollar amount, removed 'V' and changed the name from
> 'amount-in-dollars' to 'amount-in-pennies', would that magically
> convert it from a real to an integer? It was already an integer. All
> fixed-point numbers are integers. Punctuation just makes them easier
> for humans to read.

<end snip>

The statement,

"All fixed-point numbers are integers. "

must win the prize for something !!!! <G>

--
Bill Klein
wmklein <at> ix.netcom.com


William M. Klein

unread,
Aug 29, 2004, 12:44:52 AM8/29/04
to
(oops) - change that

" (class in my opinion)"

to

" (classic in my opinion)

--
Bill Klein
wmklein <at> ix.netcom.com

"William M. Klein" <wmk...@nospam.netcom.com> wrote in message
news:C8dYc.1456$8d1...@newsread2.news.pas.earthlink.net...

Richard

unread,
Aug 29, 2004, 1:34:00 AM8/29/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> Correct. The 5 appears one position right of the rightmost digit
> removed by rounding.
>
> 0.nnn rounded to two digits gives
> 0.5005

Not quite right. You are still not seeing that rounding is giving the
_correct_ answer because you started with the wrong average:

0.nnnnnnnnnnn... -> averages to 0.500...

Split this into two parts:

0.nnn <-> nnnnnnnnn...

This is:
0.nnn 0.000nnnnnnnnn...

The first part averages to 0.4995. The second part to 0.0004999..95.
The total of the two parts is an average of 0.50000...

Now round the 0.nnn and this adds a 0.0005 'adjustment' that you call
an 'error'.

0.nnn -> average 0.4995
round -> average 0.5000  which was the average of the original

You keep making the mistake of claiming that the result of the
rounding is 0.5005. It isn't. The pre-rounded figure is 0.4995.

Why is this number 0.4995 or .49999...95 ?

It is because the digits 0 to 9 add up to 45. The average is 4.5 for
each column of evenly distributed digits. So a single column averages
to 0.45, two columns to 0.45 + 0.045. Three columns to 0.45 + 0.045 +
0.0045.

The rounding then gives 0.500000000 consistently. There is no 0.5005 or
0.500005; that is just your mistake of 'intuitively' thinking that all
columns of even distribution average to 0.5000. They do not. I have
told you they do not, and I have demonstrated why they do not.

But then previous experience has shown that I need to say things in
about 4 different ways before you finally accept them.

> x.nnnnnnnnnnnnnnnn rounds to
> y.00000000000000005

No. Wrong. The average of 16 digit even distribution is 0.4999999..95
which rounds to 0.5000000000000000

> As a result, rounding error is negligable.

Non-existent actually.



> With that understanding, we can examine the effect of double rounding.

Let's see if you can get it in 3 this time ;-)

Richard

unread,
Aug 29, 2004, 3:20:11 AM8/29/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> If I took a dollar amount, removed 'V' and changed the name from
> 'amount-in-dollars' to 'amount-in-pennies', would that magically
> convert it from a real to an integer?

Of course not. Dollar.cents values are scaled integers; they are not
reals.

They may have derived from a real that was created by multiplying a
rate by a time, or dividing a salary by the number of periods, and
this real may have been represented as an intermediate scaled integer
with a tiny loss of value. But, in theory that real result of the
calculation can be truncated to 3 decimal digits (where the cents
would average 0.4995) and then be rounded to two decimals to give an
average of 0.50.

> In that hypothetical case, yes. When rounding to thousands, half would
> be unchanged and half would round up. The average line would increase
> by $250.
>
> That's an illustration of why arithmetic rounding doesn't work as
> expected.

No. It does work exactly as _expected_ by me and by many others. If it
took you by surprise then you should take a course that takes away
that surprise by teaching you what you should expect.

> Wish I'd thought of it. Half the numbers gain 500 and the
> other half are unchanged. They don't 'round down', they don't lose any
> value.
>
> If Bankers' Rounding were used, there would be no error. Half the
> numbers ending with 500 would lose 500 and half would gain 500.

Exactly. Use the right tool.



> >want. Well actually it should be 0.4999.. that is subtracted.
>
> That would work for evenly distributed numbers.

No. Wrong. That does not 'work for evenly distributed numbers'; it
works for the results of taking a set of evenly distributed numbers
that have been rounded.

Take a thousand lines: "Evenly distributed sets of numbers DO NOT
average 0.500".

The rounded set does average 0.500; subtracting 0.0049999 'unrounds' it
to average 0.4995.

> Good idea. It's
> explicitly compensating for the rounding error being introduced by
> Cobol.

No. There is no 'rounding error introduced by Cobol'. Cobol merely
implements the number system. The number system works in particular
ways. Well defined ways. Ways that you fail to understand.



> When rounding to thousands, it's not necessary for bond quantities,
> which are in multiples of 1,000. Where the numbers are multiples of
> 100, as they are for stock shares, it would convert the Cobol error
> from +50 to -50. That's 50, not .50.

It is not a Cobol error. It may be a programmer error to use tools
they don't understand that work in ways the programmer doesn't expect.



> >For a set of dollar.cents which have been rounded from calculations
> >you can 'unround' these by subtracting 0.004999.. before rounding to
> >dollars, usually this is not an issue and the variablity of the data
> >means that the difference is insignificant.
>
> Nope. You'd have to know the precision they were rounded FROM.

No. You are wrong.

> Subtracting .005 works only if they were rounded from three digits to
> two. If the intermediate was in standard arithmetic, the correction
> would be -0.00000000000000005.

No. you are wrong. Go back and read all the messages again.

Or preferably read an actual tutorial on how the number system works.

Glenn Someone

unread,
Aug 29, 2004, 4:09:58 AM8/29/04
to
I say this not knowing what has been said in the rest of the thread,
but I will say that RW probably didn't communicate this clearly
enough. He is right: all fixed-point numbers are processed as
integers. This is one advantage of COBOL, because it's one of the few
languages that allow proper logical decimal processing as integers
innately, and it is therefore more accurate than floating-point
processing.

To wit, when I performed the Mike Cowlishaw decimal study as posted in
here in a different language, what was proven in one case was that the
calculations had to be performed with the decimal values as integers
to be entirely accurate. One thing to realize is that floating point
values (the other alternative) are innately inaccurate, and in fact in
my testing fell apart very readily and produced inaccurate results.

He may not be communicating this point correctly, but the intention in
the statement is quite correct. Let me rephrase: "All fixed point
numbers in COBOL are processed as integers." And this is true -
9(10)v99 is the same as 9(5)v9(7), and is calculated in the same
manner.
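
A small illustration of what that means in storage terms (the names
and the value are made up):

    01  amt-dollars  pic 9(10)v99  value 1234567890.12.
    01  amt-pennies  redefines amt-dollars  pic 9(12).
    *>  amt-pennies holds 123456789012 - the same 12 digits; the V only
    *>  records an implied scale and occupies no storage

The arithmetic is carried out on those digits, with the compiler
keeping track of where the point belongs, which is why 9(10)v99 and
9(5)v9(7) calculate the same way apart from scale.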

I do have to wonder, what is everyone's grudge against him? I'll
admit he comes out as a troll every once in a while, but he does bring
some good knowledge to the table here, and definitely makes people
think out of the "Big Momma IBM says this is the standard so I need to
cower in fear over not crossing it" box like I've talked about in here
before.

As I keep saying, if you want to become better programmers, stop
thinking in this mindset and start thinking for yourselves. Some
rules can be bent, others can be broken. Yet your program will still
run flawlessly. Think, learn, adapt.

Robert Wagner

unread,
Aug 29, 2004, 6:19:40 AM8/29/04
to
On Sun, 29 Aug 2004 04:42:10 GMT, "William M. Klein"
<wmk...@nospam.netcom.com> wrote:

How about the prize for Numeracy. If you don't understand that
fixed-point numbers are integers, you're innumerate.

Robert Wagner

unread,
Aug 29, 2004, 6:55:47 AM8/29/04
to
On Sun, 29 Aug 2004 03:09:58 -0500, Glenn Someone
<donts...@whydoyouneedmyaddressspammers.com> wrote:

>I do have to wonder, what is everyone's grudge against him?

Because I advocate Real Programming, as opposed to mediocrity.

>I'll admit he comes out as a troll every once in a while

Who me, a troll? I'm here to jump-start Cobol programmers to think in
basics rather than dumbass 'standards'.

>As I keep saying, if you want to become better programmers, stop
>thinking in this mindset and start thinking for yourselves. Some
>rules can be bent, others can be broken. Yet your program will still
>run flawlessly. Think, learn, adapt.

Amen, brother.

SkippyPB

unread,
Aug 29, 2004, 11:13:46 AM8/29/04
to
On Sun, 29 Aug 2004 04:42:10 GMT, "William M. Klein"
<wmk...@nospam.netcom.com> enlightened us:

Mr. Obvious of 2004?

Regards,

////
(o o)
-oOO--(_)--OOo-


"Always do sober what you said you'd do drunk.
That will teach you to keep your mouth shut."
-- Ernest Hemingway
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Remove nospam to email me.

Steve

docd...@panix.com

unread,
Aug 29, 2004, 11:53:44 AM8/29/04
to
In article <j933j0ta4sk90q0in...@4ax.com>,
Glenn Someone <donts...@whydoyouneedmyaddressspammers.com> wrote:

[snip]

>Some
>rules can be bent, others can be broken. Yet your program will still
>run flawlessly.

As has been stated previously:

IF PROGRAM-RUNS
    PERFORM NEXT-ASSIGNMENT
ELSE
    PERFORM CODE-LIKE-HELL UNTIL DAMNED-THING-WORKS.

DD

Rick Smith

unread,
Aug 29, 2004, 2:20:59 PM8/29/04
to

"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:o7b3j01nfgsr6b72q...@4ax.com...

> On Sun, 29 Aug 2004 04:42:10 GMT, "William M. Klein"
> <wmk...@nospam.netcom.com> wrote:
>
> >I just HAD to create a new thread with this one (class in my opinion) RW
> >statement
> >
[snip]

> >
> >The statement,
> >
> > "All fixed-point numbers are integers. "
> >
> >must win the prize for something !!!! <G>
>
> How about the prize for Numerancy. If you don't understand that
> fixed-point numbers are integers, you're innumerate.

Mr Wagner, it is my understanding that fixed-point numbers are
scaled-integers, not mere integers, and that fixed-point numbers
consist of an integer value and an implied operator and integer
scale factor.

Perhaps I am innumerate, nonetheless, because I cannot count
the number of times you have introduced an inaccuracy and
defended it as correct. <g>

Mr Wagner, your allegation was, as I recall, that COBOL
rounding was wrong because the sum of rounded numbers
was too high for a contrived sequence of numbers. This
cannot be the case because the COBOL rounding rule
applies only to single calculations, not their sum.

The method I learned (and have not practiced since) was, as
I recall, 'Round to even numbers'. This may be implemented,
for positive numbers, (untested) as:

    if function rem (my-number 1) = 0.5
        if function mod (function integer (my-number) 2) = 0
            subtract 0.5 from my-number giving rounded-number
        else
            add 0.5 my-number giving rounded-number
        end-if
    else
        add 0 my-number giving rounded-number rounded
    end-if

1.5+2.5+3.5+4.5=12 becomes
2+2+4+4=12

but 1.5+3.5+5.5+7.5=18 becomes
2+4+6+8=20

and 2.5+4.5+6.5+8.5=22 becomes
2+4+6+8=20

thus, as shown here, the accuracy of the sum of rounded
numbers depends upon the distribution, between even and
odd, of the integer part of the numbers. Furthermore, the
accuracy of the sum of rounded numbers also depends
on the distribution, relative to 0.5, of the fractional part.
The COBOL rounding rule does not define, nor control,
such distributions.

The point being that a rounding rule that applies to only
single calculations cannot, properly, be blamed for the
inaccuracy of the sum. This includes the rule 'Round to
even numbers'.

Richard

unread,
Aug 29, 2004, 4:33:38 PM8/29/04
to
Glenn Someone <donts...@whydoyouneedmyaddressspammers.com> wrote

> I do have to wonder, what is everyone's grudge against him?

I have seen others express that point. However, from my point of view,
I am not arguing against Robert; I argue against what I see as
misinformation wherever I see it.

> I'll admit he comes out as a troll every once in a while,

He has an unfortunate habit of extrapolating from a single data point
to the 'universal truth'. Many of us will use such terms as
'there is a tendency to', or 'some' or 'it may be that', or even 'many
of us'. Robert will tend to use absolutes in those cases with implied
'all' and 'every'. I mean 'fish in a barrel': one counter example and
he is wrong.

Examples:

"""I like Kenny G but the cognosenti don't. They say he's
repititious."""

It may well be that some do and some don't, or is RW defining
'cognosenti' as "those who say Kenny G is repititious" ?

"""Academics don't call it Classical, they call it Art Music."""

Again: is this supposed to mean that if they don't call it 'Art Music'
then they aren't Academics and vv ? and he based this on a single
example.

You are correct that he doesn't communicate well at all, but worse, he
doesn't learn.

In the past he has made statements that 'all mainframe programmers' are
incompetent and made other insults about people he has never known.

> but he does bring some good knowledge to the table here,

That is debatable. Sometimes he has answers, sometimes he presents
misinformation.

The hard part is that he has absolute beliefs in his own rightness. He
seems to never bother checking when he presents misinformation and is
told that he is wrong. He will simply keep repeating bad information.
He was a Marine (apparently) and this may have set his attitudes in
his formative years, ones that he can't grow out of. This is why the
discussions go on. RW seems to have learnt the Marines' way that if he
keeps repeating things eventually he will 'win' (i.e. everyone gives up
and goes away), therefore, he concludes, he is right.

So RW may not be a troller* but he just acts like one.


* In this context the word troll does not refer to the Billy Goat
Gruff, lives under a bridge, type of creature, and is thus not the
message writer. It refers to the type of fishing where a line is
dragged in front of the fish with a bait or lure hoping to hook one of
them. The 'troll' is the fishing line or the message and it rhymes
with stroll, whereas the Scandinavian creature rhymes with doll.

> I'll admit he comes out as a troll every once in a while,

Of course I may have completely misinterpreted what you are saying
here. There is a completely different third meaning to 'troll' which
relates directly to 'coming out' but is something that you would never
accuse a Marine of.

Given the parallel discussions of british high camp humour, ... no
that can't be what you mean ;-)

LX-i

unread,
Aug 29, 2004, 9:27:09 PM8/29/04
to

Do you disagree that fixed-point numbers can be expressed as integers
with a decimal point shift? Granted, that's not quite the way he said
it, but I think that was his point.

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ / \ / ~ Live from Montgomery, AL! ~
~ / \/ o ~ ~
~ / /\ - | ~ LXi...@Netscape.net ~
~ _____ / \ | ~ http://www.knology.net/~mopsmom/daniel ~
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
~ I do not read e-mail at the above address ~
~ Please see website if you wish to contact me privately ~
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
~ GEEKCODE 3.12 GCS/IT d s-:+ a C++ L++ E--- W++ N++ o? K- w$ ~
~ !O M-- V PS+ PE++ Y? !PGP t+ 5? X+ R* tv b+ DI++ D+ G- e ~
~ h---- r+++ z++++ ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Robert Wagner

unread,
Aug 30, 2004, 1:06:14 AM8/30/04
to

Truncation lowers the average to .499..5 because it discards
information. That's easy to understand. Rounding is supposed to
maintain the average. It's not supposed to push it upward, but it does
in Cobol due to a bug.

You try to explain away the bug by linking the two, by claiming that
rounding somehow recalls and corrects truncation error. It sounds
plausible to the innumerate because both remove digits from the right
side of a number, and the rounding error is approximately equal to the
truncation error in the other direction.

When I present cases where there was no truncation to correct, you
brush them aside by saying the rules are different for integers.

Your argument is an appeal to ignorance. Rounding is broken.


Robert Wagner

unread,
Aug 30, 2004, 1:50:56 AM8/30/04
to
On 29 Aug 2004 00:20:11 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> If I took a dollar amount, removed 'V' and changed the name from
>> 'amount-in-dollars' to 'amount-in-pennies', would that magically
>> convert it from a real to an integer?
>
>Of course not. Dollar.cents value are scaled integers, they are not
>reals.

Thank you.

>They may have derived from a real that was created by multiplying a
>rate by a time, or dividing a salary by the numbers of periods, and
>this real may have been represented as an intermediate scaled integer
>with a tiny loss of value. But, in theory that real result of the
>calculation can be truncated to 3 decimal digits (where the cents
>would average 0.4995) and then be rounded to two decimals to give an
>average of 0.50.

It is apparent you have a strong attachment to the idea that
truncation and rounding are somehow related. You're wrong. They
aren't. How does the rounding operation know what truncation came
before?

>> In that hypothetical case, yes. When rounding to thousands, half would
>> be unchanged and half would round up. The average line would increase
>> by $250.
>>
>> That's an illustration of why arithmetic rounding doesn't work as
>> expected.
>
>No. It does work exactly as _expected_ by me and by many others. If it
>took you by surprise then you should take a course that takes away
>that surprise by teaching you what you should expect.

Classism via education. 'What do you expect from a high school
dropout.' Next you'll trot out the family pedigree.

Let's deal with the issue rather than the players' social class.

>> Wish I'd thought of it. Half the numbers gain 500 and the
>> other half are unchanged. They don't 'round down', they don't lose any
>> value.
>>
>> If Bankers' Rounding were used, there would be no error. Half the
>> numbers ending with 500 would lose 500 and half would gain 500.
>
>Exactly. Use the right tool.

Cobol doesn't give me a choice. Neither does Excel.

>> Good idea. It's
>> explicitly compensating for the rounding error being introduced by
>> Cobol.
>
>No. There is no 'rounding error introduced by Cobol'.

Yes there is. You stubbornly refuse to acknowledge that when presented
with factual evidence.

>Cobol merely
>implements the number system. The number system works in particular
>ways. Well defined ways. Ways that you fail to understand.

Yeah, right. I'm too stupid to understand.

One knows a debate is bottom-fishing when it relies on the other
side's stupidity to make its point. If you had a valid point supported
by FACTS, you would have made it. Readers will conclude facts aren't
on your side.

>> Subtracting .005 works only if they were rounded from three digits to
>> two. If the intermediate was in standard arithmetic, the correction
>> would be -0.00000000000000005.
>
>No. you are wrong. Go back and read all the messages again.
>
>Or preferably read an actual tutorial on how the number system works.

I recall what the messages said and could write a tutorial on how the
number system works. Please don't talk down to me.

docd...@panix.com

unread,
Aug 30, 2004, 5:36:15 AM8/30/04
to
In article <afd5j01oq7bnh3117...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On 29 Aug 2004 00:20:11 -0700, rip...@Azonic.co.nz (Richard) wrote:

[snip]

>>No. It does work exactly as _expected_ by me and by many others. If it
>>took you by surprise then you should take a course that takes away
>>that surprise by teaching you what you should expect.
>
>Classism via education. 'What do you expect from a high school
>dropout.' Next you'll trot out the family pedigree.

No, Mr Wagner, next he'll call someone with whose conclusions he disagrees
'innumerate'; if suggesting (note the 'you should') that learning
something is in order is 'classism via education' then you seem, indeed,
to be the pot calling the kettle black.

>
>Let's deal with the issue rather than the players' social class.

What seems to be addressed, Mr Wagner, is not social class but level of
education... and if you believe that you have nothing to learn then I
dearly, dearly hope that you see this belief might be in error.

As for 'trot(ting) out family pedigree'... oh, I *cannot* resist...

... this is the kind of paranoid, self-pitying, passive-aggressive
delusion often suffered by one whose progenitors were consanguinaceous,
aye.

(it was ugly, sure... but I could *not* resist)

DD

Chuck Stevens

unread,
Aug 30, 2004, 12:23:41 PM8/30/04
to
"LX-i" <lxi...@netscape.net> wrote in message
news:NnvYc.34591$5s3....@fe40.usenetserver.com...

> Granted, that's not quite the way he said it, but I think that was his
point.

May be, but that's exactly the issue. I suspect it is probably true that a
*majority* of implementations store fixed-point numeric items as integers
and keep track of the necessary scale factors in the compiler-generated
code, and that may indeed be what he meant. I don't believe the standard
*requires* this, but I think a preference toward integer arithmetic is
implicit in the COBOL standards.

I might even be persuaded to agree with him if he had amended "All
fixed-point numbers are integers"
to be something like "Most COBOL environments store and process fixed-point
numeric values as integers" early on in the thread.

What his point (his *conclusion*) was is irrelevant when the *premise* he
drew upon in order to arrive at that point is faulty. Where he tends to
lose credibility is not in making the error in the first place, it is in the
continued insistence that, as stated, it was correct and everybody disagreeing
was wrong. In this case, it is not only the original statement but the
*continued insistence* that all fixed-point numbers *really are* integers,
that is absurd.

-Chuck Stevens


Rick Smith

unread,
Aug 30, 2004, 1:38:02 PM8/30/04
to

"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:9mr4j0lium86r4frs...@4ax.com...
[snip]
> Rounding is broken.

Mr Wagner, you do have a point to be made; but that's not it.

There are circumstances when a calculation will provide a precise
result that has the continuing fraction 50... to the right of the
least significant digit of a result field. This occurs when dividing
by even numbers (or multiplying by their exact reciprocal).

Given the integers N, where 0 <= N < 4, {0, 1, 2, 3}.
Divide N by 2, exactly, {0.0, 0.5, 1.0, 1.5}, the sum is 3.0.
Divide N by 2, PIC 9, with COBOL rounding, {0, 1, 1, 2},
the sum is 4.
Divide N by 2, PIC 9, with alternative rounding, {0, 0, 1, 2}
the sum is 3.

If the requirement is to use COBOL rounding, or round halves
up, the sum 4 is correct even if not accurate. Given the
requirement, the method for rounding is not broken, nor wrong,
nor is it a bug.

If the requirement is to provide an accurate sum then alternative
rounding methods will do that. What may be learned is that,
when summing rounded numbers, creative solutions may be
better. Thank you.
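
For what it is worth, here is a minimal sketch of the COBOL-rounded column
of that comparison; the program and data names are mine, and nothing beyond
a plain DIVIDE ... ROUNDED is assumed.

       identification division.
       program-id. halves-demo.
       data division.
       working-storage section.
       01 n            pic 9.
       01 half-n       pic 9.
       01 sum-rounded  pic 99 value 0.
       procedure division.
           perform varying n from 0 by 1 until n > 3
               divide n by 2 giving half-n rounded
               *> half-n takes the values 0, 1, 1, 2
               add half-n to sum-rounded
           end-perform
           display "sum of n / 2 with COBOL rounding: " sum-rounded
           stop run.

Producing the alternative column {0, 0, 1, 2} takes hand-written logic along
the lines of the round-to-even fragment earlier in the thread; the ROUNDED
phrase itself, as discussed here, rounds halves up.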

Robert Wagner

unread,
Aug 30, 2004, 2:44:07 PM8/30/04
to
On 29 Aug 2004 13:33:38 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Glenn Someone <donts...@whydoyouneedmyaddressspammers.com> wrote
>
>> I do have to wonder, what is everyone's grudge against him?
>
>I have seen others express that point. However, from my point, I am
>not arguing against Robert, I argue against what I see as
>misinformation wherever I see it.

Like the fact that rounding biases numbers upward?

>> I'll admit he comes out as a troll every once in a while,
>
>He has an unfortunate habit of extrapolating from a single data point
>to being the 'universal truth'. Many of us will use such terms as
>'there is a tendancy to', or 'some' or 'it may be that', or even 'many
>of us'. Robert will tend to use absolutes in those cases with implied
>'all' and 'every'. I mean 'fish in a barrel', one counter example and
>he is wrong.

If the numbers all end with 000, there is no error.

>Examples:


>
>"""I like Kenny G but the cognosenti don't. They say he's
>repititious."""
>
>It may well be that some do and some don't, or is RW defining
>'cognosenti' as "those who say Kenny G is repititious" ?

The majority of the ones I happened to read.

>"""Academics don't call it Classical, they call it Art Music."""
>
>Again: is this supposed to mean that if they don't call it 'Art Music'
>then they aren't Academics and vv ? and he based this on a single
>example.

I based it on reading the literature. Here is an example:

http://www.munb.com/artx2.html

>In the past he has made staements that 'all mainframe programmers' are
>incompetent and made other insults about people he has never known.

All mainframe programmers aren't giving me a hard time here .. nor did
they name you as their spokesman.

>> but he does bring some good knowledge to the table here,
>
>That is debatable. Sometimes he has answers, sometimes he presents
>misinformation.

We all make mistakes. That's not it. People become irritated when I
question 'prevailing wisdom', when I challenge practices they've been
following for 30 years.

>The hard part is that he has absolute beliefs in his own rightness. He
>seems to never bother checking when he presents misinformation and is
>told that he is wrong. He will simply keep repeating bad information.
>He was a Marine (apparently) and this may have set his attitudes in
>his formative years, ones that he can't grow out of. This is why the
>discussions go on. RW seems to have learnt the Marines way that if he
>keeps repeating things eventually he will 'win' (ie everyone gives up
>goes away), therefore, he concludes, he is right.

It's true that the Marines teach perseverance, but the most important
thing they teach is Superior Tactical Firepower. In the context of
this forum, that means Truth eventually wins. If I didn't believe
that, I wouldn't be here.

My observation about rounding error is not widely known.

My sort demos showed how to build trees and linked lists, and call C
functions .. things readers might not have thought of on their own.

They showed how to sort the OO way and how to sort ten times faster
than the Micro Focus library.

Those seem more valuable contributions than answers to beginners'
questions.

>So RW may not be a troller* but he just acts like one.

I sometimes overstate to stimulate discussion in a positive direction.
Trollers post outright falsehoods intended to attract negative
attention.

>> I'll admit he comes out as a troll every once in a while,
>
>Of course I may have completely misinterpreted what you are saying
>here. There is a completely diffent third meaning to 'troll' which
>relates directly to 'coming out' but is something that you would never
>accuse a Marine of.
>
>Given the parallel discussions of british high camp humour, ... no
>that can't be what you mean ;-)

I don't know what relevance my sexual taste has to programming, but
since you brought it up I'm straight.

Chuck Stevens

unread,
Aug 30, 2004, 3:35:17 PM8/30/04
to

"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:9mr4j0lium86r4frs...@4ax.com...

>> Rounding is broken.

Whatever your view of what the COBOL "ROUNDED" phrase does or whether what
it actually does is correct or not, it is abundantly clear from the
historical record that both the description and the behavior have been
substantially unchanged for something over forty years. If ROUNDED is
broken, it's been broken since the introduction of the phrase in COBOL, I
believe sometime before COBOL-60.

You've spent a lot of energy defending your position by describing all the
ways in which you think roundING is broken, but not a lot describing
the ways in which you think its DESCRIPTION is broken in the standard. How
would you improve it?

And if it's not the *definition* of ROUNDED but rather its *use* (or, for
that matter, the use of roundING through explicit calculations) that sticks
in your craw, what are you suggesting that, for example, the banking
industry do to correct this error that nearly a half-century of auditors
have failed to catch?

Personally, I think "ROUNDED" had better do exactly what the standard says
it does, and I also think what the standard says it does is abundantly
clear. Changing the definition of ROUNDED in a future standard would break
upward-compatibility rules in any case, and I can virtually guarantee that
neither WG4 nor J4 would find the arguments that (a) that definition is
broken and (b) has to be corrected a convincing one.

As to whether individual programmers use roundING appropriately or not in
COBOL, well, that's a matter for individual programmers, for their bosses,
and for the auditors that verify the company's books. I think those
auditors have a pretty good clue as to what constitutes appropriate rounding
and what does not.

This whole discussion reminds me of an acquaintance (now long deceased) who
had finished his course work for a doctorate in mathematics, and was
preparing his dissertation on the formulae relating to shock wave theory
(this being early on in the cold war). He was able to prove that all the
prevailing formulae were incorrect, but felt it was *ethically
inappropriate* to publish this as a dissertation without including a
defensible alternative, which he was unable to do. Rather than "prove a
negative" as his doctoral dissertation -- which his advisor and the faculty
were willing to accept -- he mailed a copy of the dissertation to the three
people in the Western world who might have cared -- as I recall all
physicists at Los Alamos -- and withdrew from the doctoral program, never to
return.

Presuming, Mr. Wagner, that you have indeed proven that "rounding is
broken", what *constructive* actions do your ethics require you to take so
as to rectify that error?

-Chuck Stevens


Richard

unread,
Aug 30, 2004, 3:39:58 PM8/30/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> Truncation lowers the average to .499..5 because it discards
> information. That's easy to understand.

Good, because you started by saying that all v99, v999, v9999 averaged
.5000, which meant that even though something was discarded it still
somehow added up to the same total.

> Rounding is supposed to maintain the average.

It does. It maintains the average of the infinite precision original
real numbers. With enough digits the average is close enough to
0.500. Rounding maintains this with an average of 0.500.

> It's not supposed to push it upward,

It only 'pushes it upwards' with respect to how much truncation
'pushed it downwards'. Your numbers of 0.5005 were _wrong_ and were
not the result of a valid experiment.

> but it does in Cobol due to a bug.

No. Wrong. The rounding mechanism is well defined and works
identically in primary school textbooks, mechanical adding machines,
C, Java, and Cobol. There is no 'bug'. Complain to whoever invented
the number system.

> You try to explain away the bug by linking the two, by claiming that
> rounding somehow recalls and corrects truncation error.

No. It doesn't 'recall' anything. It completely ignores whether it
has been truncated or not. Rounding, say, to cents, takes the two
digits and inspects the 3rd. It neither knows nor cares whether there
are no digits beyond this or an infinity of them. Nor does it care
whether these digits are (or would be) all 9s or all 0s.

This means that you _can_ truncate beyond the third digit without
affecting the result of rounding, which is entirely predictable.

> It sounds plausible to the innumerate

I think you have that around the wrong way.

> because both remove digits from the right
> side of a number, and the rounding error is approximately equal to the
> truncation error in the other direction.

You continue to say 'rounding error' when applied to an average. There
is no _the_ rounding error. _A_ 'rounding error' is the difference
between each individual original infinite precision real and the
rounded number. For example 1.23761.. rounded gives 1.24 with a
rounding error of .00238.. The total of the infinite precision reals
and of the rounded numbers will be about the same for even
distributions.

The total of the truncated set (truncating the reals) will (for
positive numbers) be less than either the set of reals or of the set
of rounded numbers.

The amount it is less is dependent entirely on how much is discarded.
The difference between the total of the reals and the total of the
truncated set will depend on whether you truncate to 6 or 3 digits.

Now here's the thing: read this 3 times, then come back tomorrow and
read it again:

The difference between the set of infinite precision reals and
the truncated set,
and
the difference between the rounded set and the truncated set

are _identical_ (in ideal conditions)

If the set of infinite precision reals adds up to the same as the set
of rounded numbers (which it does under ideal conditions), then any
set that totals less than that (such as a truncated set) will have the
_same_ difference from each of the two sets that share the same total.

How hard is that ?
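
That claim is easy to put to an empirical test. Below is a rough,
self-contained sketch; the field sizes, the 100,000-item count and the use
of FUNCTION RANDOM are my own choices, not anything taken from the thread.
It totals one set of pseudo-random values three ways: at eight decimals,
truncated to two, and rounded to two.

       identification division.
       program-id. trunc-vs-round.
       data division.
       working-storage section.
       01 i          pic 9(6).
       01 x          pic 9v9(8)    comp-3.
       01 x-trunc    pic 9v99      comp-3.
       01 x-round    pic 9v99      comp-3.
       01 sum-full   pic 9(6)v9(8) comp-3 value 0.
       01 sum-trunc  pic 9(6)v99   comp-3 value 0.
       01 sum-round  pic 9(6)v99   comp-3 value 0.
       procedure division.
           perform varying i from 1 by 1 until i > 100000
               compute x = function random
               *> a plain compute simply drops the extra six digits
               compute x-trunc = x
               *> rounded behaves like adding .005 before dropping them
               compute x-round rounded = x
               add x       to sum-full
               add x-trunc to sum-trunc
               add x-round to sum-round
           end-perform
           display "eight-decimal total: " sum-full
           display "truncated total:     " sum-trunc
           display "rounded total:       " sum-round
           stop run.

On a run of this kind the truncated total comes out roughly 500 lower than
the other two (about .005 lost per item), while the rounded total and the
eight-decimal total agree closely, which is the identity described above.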

> When I present cases where there was no truncation to correct, you
> brush them aside by saying the rules are different for integers.

Rounding works on real numbers of infinite precision and random digits
beyond the rounding point to create numbers with fixed precision.

If you use it on numbers that do not have those characteristics then
either you put up with the errors (of usage) or you modify the
technique to cater for the differences that you _should_ expect.



> Your argument is an appeal to ignorance.

I really don't know what to say without it seeming like an ad hominem
attack ;-)

Perhaps all this is a troll on your part to get me to throw insults at
you.

> Rounding is broken.

I suggest that you try using C or Java or Perl or just follow the
instructions in a primary school maths book and see what answers you
get. When you find some definition of rounding that gives better
answers than Cobol does you can then present that here.

Robert Wagner

unread,
Aug 30, 2004, 3:44:25 PM8/30/04
to
On 30 Aug 2004 05:36:15 -0400, docd...@panix.com wrote:

>... this is the kind of paranoid, self-pitying, passive-aggressive
>delusion often suffered by one whose progenitors were consanguinaceous,
>aye.

When using big words to call someone stupid, it is really important to
spell them right. Otherwise, you look like the fool.

Richard

unread,
Aug 30, 2004, 4:29:13 PM8/30/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> It is apparent you have a strong attachment to the idea that
> truncation and rounding are somehow related. You're wrong. They
> aren't. How does the rounding operation know what truncation came
> before?

Rounding neither knows nor cares whether truncation has occurred or
not. It merely operates as if there were an infinity of random digits
following the point of rounding.**

The link between rounding and truncation was entirely of your making
when you compared the sum of a set of truncated numbers and a set of
rounded numbers.

You were surprised when they didn't add up to the same value, and thus
didn't have the same average.

Much of this discussion has been me explaining in sufficient
excruciating detail (but still, it seems, falling short) why the
truncated set should not add up to the same as the set of infinite
precision numbers, and why the truncated sets fall short of your 'intuition'
that they should average 0.500.

** The numbers that you are using for stocks do not have an infinite
number of random digits following. Rounding will give an answer that
would be expected (by me) for a set that _did_ have an infinite number
of random digits and this would be a different answer. This is not a
fault of Cobol, nor even of rounding, but of an expectation that
rounding should give some other answer.


> >No. It does work exactly as _expected_ by me and by many others. If it
> >took you by surprise then you should take a course that takes away
> >that surprise by teaching you what you should expect.
>
> Classism via education. 'What do you expect from a high school
> dropout.' Next you'll trot out the family pedigree.

What _I_ expect from a high school dropout who claims to be
self-educated is a recognition that perhaps he hasn't been well served
by his tutors.

In fact you made claims about what your 'intuition' told you. Now I am
certainly no classist but it seems to me that the prefix 'in' might
mean 'without', so your claims to what you would expect were 'without
tuition'.

I am suggesting that you may get a different expectation 'with
tuition'.

> Let's deal with the issue rather than the players' social class.

I neither know nor care what your 'social class' is. In fact, I live
in a country which is ruthlessly classless.

My only concerns are the issues that you present and that your
misconceptions and misinformation may become accepted by others.


> >> If Bankers' Rounding were used, there would be no error. Half the
> >> numbers ending with 500 would lose 500 and half would gain 500.
> >
> >Exactly. Use the right tool.
>
> Cobol doesn't give me a choice. Neither does Excel.

It is quite true that there is no provided function to do this. You
have to write the code yourself. Of course if the results aren't
acceptable (as you have explained) then use the wrong tool as you are
already doing, but _KNOW_WHY_ the answers may differ.

> >No. There is no 'rounding error introduced by Cobol'.
>
> Yes there is. You stubbornly refuse to acknowledge that when presented
> with factual evidence.

Show me an example where C or Pascal or a Primary School Maths book
gives a different answer and your claims may be looked at closer.

I will repeat:

> >Cobol merely
> >implements the number system. The number system works in particular
> >ways. Well defined ways. Ways that you fail to understand.
>
> Yeah, right. I'm too stupid to understand.

No. You are too stubborn. The way of determining whether the problem
lies in Cobol or in 'the number system' (or specifically your
expectations of this) is to try the same algorithm manually following
a text book, or using other mechanisms, such as an adding machine.

Don't use Excel; it is known to be broken, but they won't fix it
because it gives numbers that everyone likes. ;-)


> One knows a debate is bottom-fishing when it relies on the other
> side's stupidity to make its point. If you had a valid point supported
> by FACTS, you would have made it. Readers will conclude facts aren't
> on your side.

I would be very interested to see what others do think. I consider
that I have presented adequate facts and arguments. But then I would,
wouldn't I.


> I recall what the messages said and could write a tutorial on how the
> number system works.

OK. Write a tutorial.

Chuck Stevens

unread,
Aug 30, 2004, 5:34:05 PM8/30/04
to
"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:fo07j05mcb78ga01l...@4ax.com...

Hm. I personally have no problem with someone combining a stem and a suffix
to produce a term that more accurately represents the author's intent than a
more-commonly-used form -- such has been characteristic of both Germanic and
Romance languages for some time now.

The only word that Microsoft Word identifies as a spelling error is
"consanguinaceous", and I honestly do not put a whole lot of stock into the
judgment of that particular piece of software when it comes to English
spelling. I find "consanguineous" in my Webster's Collegiate, and I
suspect this is the word, and the spelling, that you would have preferred in
this context. The remainder of this message makes that presumption.

I see a subtle distinction between the suffixes "-aceous" and "-[e]ous".
The root -- "consanguin-" -- is common to both. I believe both the meaning,
and the distinction between the term used and the more common word in
Webster's, is clear based on the fundamental rules of compound formation in
English. The fact that a particular compound does not appear in an
accepted English dictionary does not necessarily mean that the compound word
does not express a valid meaning in English, nor does it mean that the
compound is obviously a misspelling of a word that *is* present in such a
dictionary.

In any event, it seems to me that the patent thrust of the characterization
to which you take exception isn't about *fundamental intelligence* but about
*behavior*. I think he who behaves foolishly will by his actions
precipitate the perception of his behavior as foolish. I don't see
"consanguinaceous" as a foolish spelling error but rather as the
appropriate, correct and accurate application of English rules for the
formation of compound words.

I don't think it's appropriate to contend that a person exclaiming "Heaven
forfend!" *really meant* "Heaven forbid!" just because the latter is more
commonly heard. Asserting that "Heaven forfend" is an error would, in my
view, be a foolish act, and if not a stupid one then a clear indication of
inattention during basic English classes. The same is true for complaining
about "consanguinaceous" as a spelling error for "consanguineous".

Do I really need to quote or even paraphrase "Forrest Gump" here?

-Chuck Stevens


Richard

unread,
Aug 30, 2004, 7:54:38 PM8/30/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> >I have seen others express that point. However, from my point, I am
> >not arguing against Robert, I argue against what I see as
> >misinformation wherever I see it.
>
> Like the fact that rounding biases numbers upward?

Yes, that is _exactly_ the misinformation that I am arguing against,
or more specifically the claim that Cobol is making an error when rounding
is used on sets of numbers that aren't real numbers with infinite
precision.



> >> I'll admit he comes out as a troll every once in a while,
> >
> >He has an unfortunate habit of extrapolating from a single data point
> >to being the 'universal truth'.

> If the numbers all end with 000, there is no error.

Well, exactly. That is a 'single data point' from which you seem to
want to extrapolate a universal truth of some sort.

> >"""I like Kenny G but the cognosenti don't. They say he's
> >repititious."""

> The majority of the ones I happened to read.

And if you had qualified your statement with 'generally' or 'in my
experience' or even 'the majority' then there could be little
argument.

Your later 'I meant' qualification admits that only 'the majority' say
this. Perhaps many say nothing at all on the subject, but it does
seem to admit that some do not say that. Some may actually like him,
some may find he is not repetitious at all.

Given that others may read completely different items written by a
different set of authors, they may have read those that did like KG
and may find your statement to be completely wrong.



> >"""Academics don't call it Classical, they call it Art Music."""
> >
> >Again: is this supposed to mean that if they don't call it 'Art Music'
> >then they aren't Academics and vv ? and he based this on a single
> >example.
>
> I based it on reading the literature. Here is an example:

Your claim was: "I know that because a former gf had a PhD in it."
which seemed to me to be a single data point and was not claiming that
you read anything at all.

> http://www.munb.com/artx2.html

Yet again a single data point, granted a second one.

But in any case, your two statements seemed to have the primary aim of
denigrating 'the cognescenti' and 'academics'.


> >In the past he has made staements that 'all mainframe programmers' are
> >incompetent and made other insults about people he has never known.
>
> All mainframe programmers aren't giving me a hard time here ..

Then why do you insult all of them, ones you have no knowledge of ?

> nor did they name you as their spokesman.

I made no claim that I was speaking for any or all of them at all. I
was once a mainframe programmer when mainframes was just about all
there were.

But my observation was based entirely on your writing of the insults,
not on what others thought of them.

> We all make mistakes. That's not it. People become irritated when I
> question 'prevailing wisdom', when I challenge practices they've been
> following for 30 years.

If your intent is to deliberately 'irritate people', then it may be
working. But you are irritating the wrong people when they are
constrained by their employers and terms of employment.

> It's true that the Marines teach perseverance, but the most important
> thing they teach is Superior Tactical Firepower. In the context of
> this forum, that means Truth eventually wins. If I didn't believe
> that, I wouldn't be here.

Yes. Unfortunately you believe that 'Superior Tactical Firepower' _IS_
'Truth'. Given the usual American foreign policy as I see it, then
that may well be what they indoctrinated you with in the Marines.

> My observation about rounding error is not widely known.

I think that it is very valuable to discuss such matters. I hope a lot
of people are reading the messages and putting the results into the
context of their own programming and what techniques they have been
taught to use.

However, if they think that 'Cobol has a bug' then the truth has lost
out to 'tactical firepower'.


> My sort demos showed how to build trees and linked lists, and call C
> functions .. things readers might not have thought of on their own.

Which were valuable things to discuss. However you made claims that
linked lists were much faster than tables as if this was a general
statement of revealed truth when it was provably wrong.



> They showed how to sort the OO way and how to sort ten times faster
> than the Micro Focus library.

Specific case solutions are always faster than more generalised cases.



> Those seem more valuable contributions than answers to beginners'
> questions.

I have a different view on those issues. Reverting to 'what C did in
the 80s' is not necessarily useful.

What you failed to present, probably because you never did much C, was
why C++ was developed to solve the problems that existed in C, such as
the problems inherent in using pointers. And then Java solved the
problems inherent in C++.

When you introduce the use of raw address pointers, and also of other
things such as ENTRY, you completely fail to take into account the 20
or 30 years of experience that C programmers have had with the
problems that these introduce. Most likely because you weren't doing C
in enough depth to care.

What would be more valuable is if you brought in these 30-year-old
techniques and explained why _not_ to use them. Then people would see
why C++ was developed and would understand why OO is a valuable tool
(but pointers are not).


> I sometimes overstate to stimulate discussion in a positive direction.
> Trollers post outright falsehoods intended to attract negative
> attention.

Hmmm. It seems that you do post 'outright falsehoods' (occasionally)
and do attract a lot of 'negative attention'.

What you seem to want to say is that trollers _deliberately_ post
outright falsehoods and yours are not deliberate.

I suppose that you will demand examples. Well just as a starter:
"ANS84 does not define a standard file status code for 'file not
found'".

It may be that Bill has a list somewhere that he could post if you
asked him nicely.

> > is something that you would never accuse a Marine of.

> > no that can't be what you mean ;-)



> I don't know what relevance my sexual taste has to programming, but
> since you brought it up I'm straight.

You may note that I specifically denied that anything else was meant.
Twice.

In any case there was _no_ connection to 'programming', the connection
was to your mode of posting messages on any of the various disjoint
topics that you (and others, including me) post on.

Robert Wagner

unread,
Aug 30, 2004, 8:23:51 PM8/30/04
to
On Mon, 30 Aug 2004 09:23:41 -0700, "Chuck Stevens"
<charles...@unisys.com> wrote:


>What his point (his *conclusion*) was is irrelevant when the *premise* he
>drew upon in order to arrive at that point is faulty. Where he tends to
>lose credibility is not in making the error in the first place, it is in the
>continued instance that, as stated, it was correct and everybody disagreeing
>was wrong. In this case, it is not only the original statement but the
>*continued insistance* that all fixed-point numbers *really are* integers,
>that is absurd.

They are integers. If you don't think so, post some evidence to the
contrary.

docd...@panix.com

unread,
Aug 30, 2004, 10:00:38 PM8/30/04
to
In article <g4l6j05mdps70k5th...@4ax.com>,
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:

[snip]

>It's true that the Marines teach perseverance, but the most important
>thing they teach is Superior Tactical Firepower.

'He who thinks all fruits ripen the same time as strawberries knows
nothing of grapes.' - Paracelsus

Mr Wagner, any Marine commander worth their commission should know that
some situations require 'Superior Tactical Firepower' and enough lead so
you can walk on the air... and some situations require a barefoot assassin
with a knife. An attempt to treat either situation as the other may
lessen one's probabilities of success.

>In the context of
>this forum, that means Truth eventually wins.

If that is what it 'means', Mr Wagner, then it is no wonder that the
techniques used to support it more closely appear to resemble 'throw
enough crap against the wall and some of it is sure to stick.'

DD

docd...@panix.com

unread,
Aug 30, 2004, 10:06:40 PM8/30/04
to
In article <fo07j05mcb78ga01l...@4ax.com>,

When taking comments out of context to comment on spelling it is really
important to maintain a sense of humor. Otherwise, you prove yourself a
fool.

DD

William M. Klein

unread,
Aug 31, 2004, 12:07:54 AM8/31/04
to
There are TONS of "mathematics" dictionaries that can explain (and document) that
"fixed point" and "integer" are two very DIFFERENT terms (an integer is - always,
I think, but am not positive - a fixed point number; but a fixed point number is
NOT necessarily an integer). See for example:

http://www.webster-dictionary.org/definition/integer

"An inductive definition of an integer is a number that is either zero or an
integer plus or minus one. An integer is a number with no fractional part. If
written as a fixed-point number, the part after the decimal (or other base)
point will be zero."

Also,
http://www.campusprogram.com/reference/en/wikipedia/f/fi/fixed_point_1.html

"In computing, a fixed-point number representation is a real data type for a
number that has a fixed number of digits after the decimal (or binary or
hexadecimal) point. For example, a fixed-point number with 4 digits after the
decimal point could be used to store numbers such as 1.3467, 281243.3234 and
0.1000, but would round 1.0301789 to 1.0302 and 0.0000654 to 0.0001."

while (same general source)
http://www.campusprogram.com/reference/en/wikipedia/i/in/integer.html

"They are also known as the whole numbers, although that term is also used to
refer only to the positive integers (with or without zero)."

***

However, possibly the most relevant (for CLC) definition for "integer" is that
used in the COBOL Standard itself. See the section "
5.4 Integer operands" on page 20 of the 2002 Standard.

NOTE:
In answer to one of the earlier posts, I certainly agree that EVERY "fixed
point" number can be expressed as an integer with a SCALING factor (i.e. times a
"power of 10"). However, that is QUITE a different thing than saying that every
fixed point number *is* an integer (which it simply isn't in either the COBOL,
number theory, or general computing definition of the two terms)

--
Bill Klein
wmklein <at> ix.netcom.com


"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message

news:v2h7j0dqgqga79u7u...@4ax.com...

Richard

unread,
Aug 31, 2004, 2:30:31 AM8/31/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> They are integers. If you don't think so, post some evidence to the
> contrary.

An example of a fixed point number is 3.45

This is not an integer.

Now it may be that scaled integers are processed in some ways as if
they are integers. They _may_ be represented in Cobol as integers
(though this is not compulsory). But they are not integers as defined
in the terminology of the number system.

Robert Wagner

unread,
Aug 31, 2004, 3:34:54 AM8/31/04
to
On Mon, 30 Aug 2004 14:34:05 -0700, "Chuck Stevens"
<charles...@unisys.com> wrote:

>"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
>news:fo07j05mcb78ga01l...@4ax.com...
>> On 30 Aug 2004 05:36:15 -0400, docd...@panix.com wrote:
>>
>> >... this is the kind of paranoid, self-pitying, passive-aggressive
>> >delusion often suffered by one whose progenitors were consanguinaceous,
>> >aye.
>>
>> When using big words to call someone stupid, it is really important to
>> spell them right. Otherwise, you look like the fool.
>
>Hm. I personally have no problem with someone combining a stem and a suffix
>to produce a term that more accurately represents the author's intent than a
>more-commonly-used form -- such has been characteristic of both Germanic and
>Romance languages for some time now.

What we see in evidence raises CLC to a new level of comedic
absurdity. The diminutive doctor makes a stupid spelling error and
the august Chuck Stevens rises to his defense. I am, as they say,
rolling on the floor laughing.

>I see a subtle distinction between the suffixes "-aceous" and "-[e]ous".
>The root -- "consanguin-" -- is common to both. I believe both the meaning,
>and the distinction between the term used and the more common word in
>Webster's, is clear based on the fundamental rules of compound formation in
>English. The fact that a particular compound does not appear in an
>accepted English dictionary does not necessarily mean that the compound word
>does not express a valid meaning in English, nor does it mean that the
>compound is obviously a misspelling of a word that *is* present in such a
>dictionary.

He is saying my parents were relatives. I didn't grow up in West
Virginia, I grew up on Long Island .. where consonants are replaced by
ugly vowel sounds. There is no "R" in New York, the "R" sounds vaguely
like an "O" or "W". New Yoook.


docd...@panix.com

unread,
Aug 31, 2004, 5:20:37 AM8/31/04
to
In article <c998j0ttprb0lmotp...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On Mon, 30 Aug 2004 14:34:05 -0700, "Chuck Stevens"
><charles...@unisys.com> wrote:
>
>>"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
>>news:fo07j05mcb78ga01l...@4ax.com...
>>> On 30 Aug 2004 05:36:15 -0400, docd...@panix.com wrote:
>>>
>>> >... this is the kind of paranoid, self-pitying, passive-aggressive
>>> >delusion often suffered by one whose progenitors were consanguinaceous,
>>> >aye.
>>>
>>> When using big words to call someone stupid, it is really important to
>>> spell them right. Otherwise, you look like the fool.
>>
>>Hm. I personally have no problem with someone combining a stem and a suffix
>>to produce a term that more accurately represents the author's intent than a
>>more-commonly-used form -- such has been characteristic of both Germanic and
>>Romance languages for some time now.
>
>What we see in evidence raises CLC to a new level of comedic
>absurdity. The diminuitive doctor makes a stupid spelling error and
>the august Chuck Stevens rises to his defense. I am, as they say,
>rolling on the floor laughing.

When you rise, Mr Wagner, you might be able to see how deeply Mr Stevens'
tongue is buried in his cheek.

>
>>I see a subtle distinction between the suffixes "-aceous" and "-[e]ous".
>>The root -- "consanguin-" -- is common to both. I believe both the meaning,
>>and the distinction between the term used and the more common word in
>>Webster's, is clear based on the fundamental rules of compound formation in
>>English. The fact that a particular compound does not appear in an
>>accepted English dictionary does not necessarily mean that the compound word
>>does not express a valid meaning in English, nor does it mean that the
>>compound is obviously a misspelling of a word that *is* present in such a
>>dictionary.
>
>He is saying my parents were relatives. I didn't grow up in West
>Virginia, I grew up on Long Island .. where consonants are replaced by
>ugly vowel sounds. There is no "R" in New York, the "R" sounds vaguely
>like an "O" or "W". New Yoook.

Mr Wagner, I am saying a great deal more than that. Your apparent
inability to recognise this might just be indicative of that which I
attempt to address.

'All I give them is truth... they can't handle truth!' has been a cry of
garden-variety 'Net-kooks as long as I've known the 'Net.

DD

Robert Wagner

unread,
Aug 31, 2004, 7:25:50 AM8/31/04
to

No problem. I find the whole thread hilarious.

docd...@panix.com

unread,
Aug 31, 2004, 7:57:41 AM8/31/04
to
In article <i1o8j095acgbl9egt...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On 30 Aug 2004 22:06:40 -0400, docd...@panix.com wrote:
>
>>In article <fo07j05mcb78ga01l...@4ax.com>,
>>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>>>On 30 Aug 2004 05:36:15 -0400, docd...@panix.com wrote:
>>>
>>>>... this is the kind of paranoid, self-pitying, passive-aggressive
>>>>delusion often suffered by one whose progenitors were consanguinaceous,
>>>>aye.
>>>
>>>When using big words to call someone stupid, it is really important to
>>>spell them right. Otherwise, you look like the fool.
>>
>>When taking comments out of context to comment on spelling it is really
>>important to maintain a sense of humor. Otherwise, you prove yourself a
>>fool.
>
>No problem. I find the whole thread hilarious.

We both seem to do so, Mr Wagner... but for what might be different
reasons.

DD

Chuck Stevens

unread,
Aug 31, 2004, 11:45:26 AM8/31/04
to

"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:c998j0ttprb0lmotp...@4ax.com...

> What we see in evidence raises CLC to a new level of comedic
> absurdity. The diminuitive doctor makes a stupid spelling error and
> the august Chuck Stevens rises to his defense. I am, as they say,
> rolling on the floor laughing.

> ,,,


>
> He is saying my parents were relatives.

No, that is *precisely* my point. The way I read what docdwarf wrote, it
strikes me that he is saying that your *actions* appear to be those
appropriate to the category of someone whose parents were relatives. Had he
written "consanguineous", your statement would be true.

With tongue *not* planted in my cheek, that's exactly the difference that I
see between "-[e]ous" and "-aceous".

I have respect for docdwarf's education, and that is *precisely* the sort of
distinction, using established rules of English grammar and word formation,
that I would have expected him to make.

I presumed that docdwarf meant *exactly* what he wrote, nothing more,
nothing less. That Microsoft software didn't understand what he wrote is
evidence of the failings of Microsoft software to understand the rules of
English suffixes (and arguably prefixes), most likely through the error of
relying on a fixed lexicon without including, for example, a set of
Chomskian transforms. That a reader didn't bother to analyze what he wrote
before deeming it an error doesn't strike me as the author's fault.

> I didn't grow up in West
> Virginia, I grew up on Long Island .. where consonants are replaced by
> ugly vowel sounds. There is no "R" in New York, the "R" sounds vaguely
> like an "O" or "W". New Yoook.

Thank you for your lesson in phonemics and American dialectology. However,
I believe that the "R" sound as pronounced on Long Gisland is just as
phonemic an "R" as you would expect from a resident of North Dakota. That
it sounds like a vowel to a non-native speaker does not make it function as
a vowel to a native speaker. Most of my cousins grew up in Forest Hills, my
mother grew up on Staten Island, and my father grew up on Manhattan, and my
family spent some years living in Newark, so I have significant first- and
second-hand exposure to the various (and numerous) English dialects of the
New York metropolitan area, and I spent some time researching American
Dialectology there immediately after receiving my degree in Linguistics, in
which formal studies in American Dialectology formed a part. Perhaps your
formal studies on the subject were more intensive (or more recent) than
mine.

-Chuck Stevens


Howard Brazee

unread,
Aug 31, 2004, 12:26:00 PM8/31/04
to

On 31-Aug-2004, "Chuck Stevens" <charles...@unisys.com> wrote:

> No, that is *precisely* my point. The way I read what docdwarf wrote, it
> strikes me that he is saying that your *actions* appear to be those
> appropriate to the category of someone whose parents were relatives. Had he
> written "consanguineous", your statement would be true.

My wife and I have been relatives since 6 months after we met.


> Thank you for your lesson in phonemics and American dialectology. However,
> I believe that the "R" sound as pronounced on Long Gisland is just as
> phonemic an "R" as you would expect from a resident of North Dakota.

For a lot of people - the other guy's way of talking is wrong. Whether it is
how you pronounce creek, coyote, rodeo, or whatever - you can make fun of him if
he doesn't say it your way. And obviously someone who speaks in a black
dialect is only doing it because he's incapable of speaking the way I do, and
the Brits speak their way to put on airs. (even the Cockneys).

P.S. There are valid reasons to criticize the U.S. president - his regional
pronunciation of nuclear is not one of those.

Robert Wagner

unread,
Aug 31, 2004, 12:35:07 PM8/31/04
to
On 31 Aug 2004 05:20:37 -0400, docd...@panix.com wrote:


>'All I give them is truth... they can't handle truth!' has been a cry of
>garden-variety 'Net-kooks as long as I've known the 'Net.

This promotion from code cranker to Net-kook made my day. I shall add
it to my resume forthwith.

With gratitude,
Robert

Chuck Stevens

unread,
Aug 31, 2004, 12:55:40 PM8/31/04
to

"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:v2h7j0dqgqga79u7u...@4ax.com...

> They [all fixed-point numbers] are integers. If you don't think so, post
> some evidence to the contrary.

Certainly.

Webster's Ninth New Collegiate Dictionary:

Integer: any of the natural numbers, the negatives of these numbers, or
zero

Natural number: the number 1 or any number (as 3, 12, 432) obtained by
adding 1 to this number one or more times.

Fixed-point: involving or being a mathematical notation (as in a decimal
system) in which the point separating whole numbers and fractions is fixed

Whole number: (synonymous with) integer

ANSI X3.23-1974, page I-60, 4.2, Glossary, Definitions:

Integer: A numeric literal or numeric data item that does not include any
character positions to the right of the assumed decimal point. ...

And to Bill's recommendation of Page 20 for the definition of "integer", I
would add the suggestion to note Page 75, 8.3.1.2.2.1 Fixed-point numeric
literals, which states:

A fixed-point numeric literal is a character-string whose characters are
selected from the digits '0' through '9', the plus sign, the minus sign, and
the decimal point. ... An integer literal is a fixed-point numeric literal
that contains no decimal point."

For COBOL's purposes, these citations make it clear that the set of "integer
data items" is a subset of the set of "fixed-point numeric data items".

That means that, even presuming only the context of standard COBOL, the
statements "an integer data item is a fixed-point data item" and "an integer
literal is a fixed-point numeric literal" are TRUE, while the statements "a
fixed-point data item is an integer data item" and "a fixed-point literal is
an integer literal" are simply FALSE.

Others have pointed out the absurdity that 3.14159 is really an integer in
contexts beyond COBOL.

-Chuck Stevens


docd...@panix.com

unread,
Aug 31, 2004, 1:04:44 PM8/31/04
to
In article <t0a9j01uqenf0vrer...@4ax.com>,

Have a care, Mr Wagner, and consider the mockingbird... merely to give the
cry does not make one a member of the species.

DD

Robert Wagner

unread,
Aug 31, 2004, 2:02:07 PM8/31/04
to
On 30 Aug 2004 23:30:31 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> They are integers. If you don't think so, post some evidence to the
>> contrary.
>
>An example of a fixed point number is 3.45
>
>This is not an integer.

It is stored in memory as '0345', if decimal, or 0159, if binary/hex.
If you multiplied it by 1.23, the result in decimal would look like
'42435'. The decimal point is used to align receiving fields.
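
The scaled-integer view is easy to see from inside a program. In the sketch
below the data names and PICTUREs are mine, while 3.45 and 1.23 are the
values from the post above; a REDEFINES lets the same bytes be read back as
an unscaled integer, and COMPUTE does the alignment implied by the decimal
points.

       identification division.
       program-id. scaled-demo.
       data division.
       working-storage section.
       01 price           pic 9v99 value 3.45.
       01 price-digits    redefines price pic 999.
       01 multiplier      pic 9v99 value 1.23.
       01 result-amt      pic 9v9999.
       01 result-digits   redefines result-amt pic 9(5).
       01 result-edited   pic 9.9999.
       procedure division.
           display "digits held for 3.45:       " price-digits
           compute result-amt = price * multiplier
           display "digits held for the result: " result-digits
           move result-amt to result-edited
           display "result with point restored: " result-edited
           stop run.

Whether that makes the items integers, or merely items stored and processed
by way of integers, is of course exactly what the rest of the thread is
arguing about.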

Chuck Stevens

unread,
Aug 31, 2004, 2:23:44 PM8/31/04
to
"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:8ue9j05arsd8h5jhk...@4ax.com...

> >> They are integers. If you don't think so, post some evidence to the
> >> contrary.
> >
> >An example of a fixed point number is 3.45
> >
> >This is not an integer.
>
> It is stored in memory as '0345', if decimal, or 0159, if binary/hex.
> If you multiplied it by 1.23, the result in decimal would look like
> '42435'. The decimal point is used to align receiving fields.

Last time I looked, "is", "is treated as" and "is stored in memory as"
aren't synonymous. I believe I've pointed that out before. What you
wrote was "All fixed-point numbers are integers."

If what you *really* meant was "All implementations of my acquaintance store
fixed-point numbers in memory as integers", then I'd suggest that that's
what you should have written. I wouldn't have a quibble with that. Even
the shorthand "Most implementations store fixed-point numbers as integers"
doesn't rankle. The standard doesn't require this, however.

I know environments in which integers, fixed-point and floating-point data
items are "stored in memory" identically; the only differentiation among
them is how they are handled and treated. In that environment, in fact,
there is no requirement that integer data actually *be* in "canonic integer
form"; a normalized floating-point representation of the value serves
equally well and is just as exact a representation. Thus, in that
environment, a fixed-point numeric value may be stored, *not* as an integer
defined for that implementation, but as a *floating-point* item that happens
to have an exact value. So I would take exception to the elision of "of my
acquaintance" because it would demonstrably turn the statement from "true"
to "false".

How something is *handled* and how something is *stored* may be entirely
orthogonal to what something *is*.

-Chuck Stevens


Robert Wagner

unread,
Aug 31, 2004, 3:15:27 PM8/31/04
to
On Tue, 31 Aug 2004 08:45:26 -0700, "Chuck Stevens"
<charles...@unisys.com> wrote:

>With tongue *not* planted in my cheek, that's exactly the difference that I
>see between between "-[e]ous" and "-acious".
>
>I have respect for docdwarf's education, and that is *precisely* the sort of
>distinction, using established rules of English grammar and word formation,
>that I would have expected him to make.
>
>I presumed that docdwarf meant *exactly* what he wrote, nothing more,
>nothing less. That Microsoft software didn't understand what he wrote is
>evidence of the failings of Microsoft software to understand the rules of
>English suffixes (and arguably prefixes), most likely through the error of
>relying on a fixed lexicon without including, for example, a set of
>Chomskian transforms.

Don't blame Microsoft. I used my own spelling checker, which has a
dictionary I tediously compiled from _multiple_ sources. If English
spelling were regular, writing a spelling checker would be much
simpler. There would be root words and rules.

Did you know many electronic dictionaries are 'salted' with
intentional misspellings to detect plagiarism? And most public-domain
dictionaries contain numerous UNintentional misspellings.

A dictionary compiler is faced with hard choices, for instance whether
to include 'calender'. The selection of words in a dictionary tells a
lot about the compiler's interests, expertise and taste. I could easily
see whether the compiler was male or female.

> Most of my cousins grew up in Forest Hills,

I was born in Kew Gardens, one mile from Forest Hills.

> my
>mother grew up on Staten Island, and my father grew up on Manhattan, and my
>family spent some years living in Newark

Small world. I'm sitting in East Hanover, 20 miles from Newark.

> so I have significant first- and
>second-hand exposure to the various (and numerous) English dialects of the
>New York metropolitan area, and I spent some time researching American
>Dialectology there immediately after receiving my degree in Linguistics, in
>which formal studies in American Dialectology formed a part. Perhaps your
>formal studies on the subject were more intensive (or more recent) than
>mine.

My linguistic studies were conducted ..umm.. in the field and library.

William M. Klein

unread,
Aug 31, 2004, 3:19:30 PM8/31/04
to
Actually, what wouldn't bother me is if RW stated,

"Most implementations store fixed-point numbers the same way they store
integers - but the compiler also stores its scaling factor to insure proper
handling of the DIFFERENCE between integers and non-integer fixed point numbers"

Of course, it is also true that the statement

"It is stored in memory as '0345', if decimal, or 0159, if binary/hex."

isn't true of any operating system that I know of. All the ones that I have
worked with actually store ALL data (and instructions) in "binary". Therefore a
data field with a value of "3.45" can be stored in a number of ways (depending
upon the PICTURE clause, USAGE, etc - if it is stored by a COBOL program).
Consider:

01 Num-Disp Pic 9v99 Value 3.45 Usage Display.
probably stores it as B"111100111111010011110101" (if my calculations are
right - which they may not be) for an EBCDIC environment.

and don't ask me what it would store it as for
01 Num-Nat Pic 9v99 Value 3.45 Usage National.

And compare this to (again, my calculations may be in error)

01 Num-S-Disp Pic S9v99 Value +3.45 Usage Display.
probably stores it as B"111100111111010011000101"

and

01 Num-Sep-Disp Pic S9v99 Sign is trailing separate Value 3.45 Usage Display.
probably stores it as B"11110011111101001111010101001110

The chances are that the same bit-pattern might be interpreted (depending upon
"environment") as:
- an instruction sequence
- an integer
- a character string
- probably other things

Therefore, saying that a fixed-point data value is "stored" as an integer may
(or may not) be true but
- doesn't really mean much (as the storage could mean other things as well)
- still has NO relevance to any statement claiming that a "fixed-point item IS
an integer item"
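
To make the storage side of this concrete, here is a minimal sketch (program
and data names invented; it assumes a USAGE DISPLAY item on an ordinary
byte-oriented ASCII or EBCDIC machine) that redefines the same three bytes
as plain characters. Nothing resembling a decimal point is stored alongside
the digits; the scale lives only in the compiled PICTURE information, which
is exactly why the stored bytes alone don't tell you what the item *is*:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SHOWBYTE.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * The VALUE belongs to NUM-DISP; the REDEFINES just lets us look
      * at the same three bytes as ordinary characters.
       01  NUM-DISP      PIC 9V99 VALUE 3.45.
       01  NUM-AS-CHARS  REDEFINES NUM-DISP PIC X(3).
       PROCEDURE DIVISION.
       MAIN-PARA.
           DISPLAY "NUM-DISP as a number:     " NUM-DISP
           DISPLAY "Same bytes as characters: " NUM-AS-CHARS
           STOP RUN.

Both lines should print the same three digit characters; the implied decimal
point appears nowhere in the stored bytes.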

P.S. For those (referring to the subject of this thread) who don't understand
why some people have problems with RW's posts, it is simply that when he is
wrong (and simply WRONG) he has such difficulty in admitting it. If he simply
admitted his errors and said,
"sorry about that, what I really meant to say was ..."
or
"sorry for my mistake, you're right and I was wrong"

then I think his posts (correct and erroneous) would get much better reception.
This thread is simply a "classic" example of his refusal to admit that his
original statement (clear, and simple) that

"They <fixed-point items> are integers"

was WRONG (is wrong and always will be wrong)

--
Bill Klein
wmklein <at> ix.netcom.com

"Chuck Stevens" <charles...@unisys.com> wrote in message
news:ch2fnh$2f7$1...@si05.rsvl.unisys.com...

Robert Wagner

unread,
Aug 31, 2004, 3:37:22 PM8/31/04
to
On Tue, 31 Aug 2004 16:26:00 GMT, "Howard Brazee" <how...@brazee.net>
wrote:

>For a lot of people - the other guy's way of talking is wrong. Whether it is
>how you pronounce creek, coyote, rodeo, or whatever - you can make fun of him if
>he doesn't say it your way.

My father was a radio and TV announcer who grew up in New Jersey.
Shortly after moving to Biloxi, Mississippi, he affected a Deep South
accent that sounded like a native. Then he moved to Iowa, where he
adopted its German-based accent. His Nebraska accent was fairly
neutral except it had touches of Slavic.

Most of us don't have the talent.

Chuck Stevens

unread,
Aug 31, 2004, 3:44:44 PM8/31/04
to
"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
news:0tg9j097mvsvnvclj...@4ax.com...

> Don't blame Microsoft. I used my own spelling checker, which has a
> dictionary I tediously compiled from _multiple_ sources. If English
> spelling were regular, writing a spelling checker would be much
> simpler. There would be root words and rules.

The problem is that in English there *are* root words and rules.

> Did you know many electronic dictionaries are 'salted' with
> intentional misspellings to detect plagiarism? And most public-domain
> dictionaries contain numerous UNintentional misspellings.

Don't doubt it. My spelling skills are pretty good, so I generally ignore
spell-checking software (sometimes to my chagrin).

> A dictionary compiler is faced with hard choices, for instance whether
> to include 'calender'. The selection of words in a dictionary tells a
>lot about the compiler's interests, expertise and taste. I could easily
> see whether the compiler was male or female.

Last evening my ten-year-old foster daughter requested help coming up with a
single word that provided the highest-possible numerical score according to
the scheme A=1, B=2, etc. We insisted on *her* coming up with words and
chose one, but remembering "The $64,000 Question" of my youth, and after our
discussion here in comp.lang.cobol, the word *I* came up with was
"antidisestablishmentarianistically", which is *not* in any dictionary I'm
aware of, but which I would define as "actions characterized by behavior
typical of or appropriate to a person who subscribes to the perspectives of
antidisestablishmentarianism".

> My linguistic studies were conducted ..umm.. in the field and library.

Well, as my sainted mother learned from her Chicago-born parents, "Don't try
to teach your grandmother how to milk ducks."

-Chuck Stevens


Lueko Willms

unread,
Aug 31, 2004, 4:52:00 PM8/31/04
to
. On 31.08.04,
wmk...@nospam.netcom.com (William M. Klein) wrote
on /COMP/LANG/COBOL
in x76dnbUkvvD...@comcast.com
about Re: Classic RW

WMK> Of course, it is also true that the statement
WMK>
WMK> "It is stored in memory as '0345', if decimal, or 0159, if
WMK> binary/hex."
WMK>
WMK> isn't true of any operating system that I know of. All the ones that
WMK> I have worked with actually store ALL data (and instructions) in
WMK> "binary".

WMK> 01 Num-Disp Pic 9v99 Value 3.45 Usage Display.
WMK> probably stores it as B"111100111111010011110101" (if my
WMK> calculations are right - which they may not be) for an EBCDIC
WMK> environment.

Don't try to appear so dumb. It is clear from the context that
Robert Wagner meant the byte- or half-byte-wise representation in
hexadecimal. What else do you make of "binary/hex"?


Yours,
Lüko Willms http://www.mlwerke.de
/--------- L.WI...@jpberlin.de -- Alle Rechte vorbehalten --

"Ohne Pressefreiheit, Vereins- und Versammlungsrecht ist keine
Arbeiterbewegung möglich" - Friedrich Engels (Februar 1865)

William M. Klein

unread,
Aug 31, 2004, 6:15:44 PM8/31/04
to
I thought he was talking about the difference between how you "store"

05 Num-Bin Pic S9v99 Binary Value +3.45.
versus
05 Num-Disp Pic S9v99 Display Value +3.45.

As I didn't "de-compose" "0159" (and it didn't happen to have A-F "nibbles") I
just didn't think of the other interpretation of what he might have meant.

--
Bill Klein
wmklein <at> ix.netcom.com

"Lueko Willms" <l.wi...@jpberlin.de> wrote in message
news:9FvwA...@jpberlin-l.willms.jpberlin.de...

Jeff York

unread,
Aug 31, 2004, 6:17:33 PM8/31/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:

>A dictionary compiler is faced with hard choices, for instance whether
>to include 'calender'.

Why should a lexicographer choose to omit the word for a piece of
machinery used in cloth-making?

--
Jeff. Ironbridge, Shrops, U.K.
j...@jakfield.xu-netx.com (remove the x..x round u-net for return address)
and don't bother with ralf4, it's a spamtrap and I never go there.. :)

... "There are few hours in life more agreeable
than the hour dedicated to the ceremony
known as afternoon tea.."

Henry James, (1843 - 1916).


Richard

unread,
Aug 31, 2004, 6:37:39 PM8/31/04
to
"Howard Brazee" <how...@brazee.net> wrote

> P.S. There are valid reasons to criticize the U.S. president -

But at least you can't blame his being president on the voting public. ;-)

Richard

unread,
Aug 31, 2004, 7:05:17 PM8/31/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> >An example of a fixed point number is 3.45
> >
> >This is not an integer.
>
> It is stored in memory as '0345', if decimal, or 0159, if binary/hex.
> If you multiplied it by 1.23, the result in decimal would look like
> '42435'.

It _might_ look like that, or it might look like '000424'. If the
answer were to be treated as an actual integer it may look like
'00004'.

> The decimal point is used to align receiving fields.

To admit that the result has a decimal point denies that the fields
are integers.

The signs that I see are that you may soon leave here and will
complain, as you did last time you left, that 'everyone quibbles' too
much.

You should realise that we are not 'banging on' against you
personally, but once you have made a statement that is not generally
true you will use what you think is 'superior tactical firepower' to
defend your position rather than attempting to resolve the issue by
clarifying the position.

To say that the digits of 'fixed point numbers' act like those of
integers would be correct, at least in the context of the discussion.
To say that fixed point numbers _are_ integers is incorrect in several ways,
and provably so. To claim that repeatedly after it has been pointed
out as wrong, with documentation to support this, invites broadsides
of counter fire.

docd...@panix.com

unread,
Aug 31, 2004, 7:24:48 PM8/31/04
to
In article <ch26em$2ti6$1...@si05.rsvl.unisys.com>,

Chuck Stevens <charles...@unisys.com> wrote:
>
>"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message
>news:c998j0ttprb0lmotp...@4ax.com...
>
>> What we see in evidence raises CLC to a new level of comedic
>> absurdity. The diminutive doctor makes a stupid spelling error and
>> the august Chuck Stevens rises to his defense. I am, as they say,
>> rolling on the floor laughing.
>> ,,,
>>
>> He is saying my parents were relatives.
>
>No, that is *precisely* my point. The way I read what docdwarf wrote, it
>strikes me that he is saying that your *actions* appear to be those
>appropriate to the category of someone whose parents were relatives. Had he
>written "consanguineous", your statement would be true.
>
>With tongue *not* planted in my cheek, that's exactly the difference that I
>see between "-[e]ous" and "-acious".
>
>I have respect for docdwarf's education, and that is *precisely* the sort of
>distinction, using established rules of English grammar and word formation,
>that I would have expected him to make.

With all due respect, Mr Stevens... I sometimes know what to expect of me
and one of those expectations is that I make errors. Consider:

<http://groups.google.com/groups?selm=E0nJ4.16909%240o4.114484%40iad-read.news.verio.net&output=gplain>

--begin quoted text:

Oh, I *cannot* resist...

... Mr Brazee, I *did* make favorable assumptions about your
character... and *still* concluded that it is that of a man whose
progenitors were consanguineous.

--end quoted text

(I do, at times,

My error, however, allowed Mr Wagner to make what could be construed as a
derogatory comment about spelling, dancing perilously close to the
'spelling flame' ( http://www.advicemeant.com/flame/04psych.shtml#Spelling
or http://www.hyperdictionary.com/computing/spelling+flame )

I make a botch, I try to learn from it, I try to move on to Bigger and
Better Botches... one day, if I am lucky, I'll graduate from botch mode to
online.

DD

docd...@panix.com

unread,
Aug 31, 2004, 7:28:17 PM8/31/04
to
In article <0tg9j097mvsvnvclj...@4ax.com>,
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:

[snip]

>Did you know many electronic dictionaries are 'salted' with
>intentional misspellings to detect plagiarism?

No, I did not know this, Mr Wagner, and I have difficulty believing it.
Please supply a few examples to support this assertion... it should be
easy for you since, as you have stated, there are 'many' out there.

>And most public-domain
>dictionaries contain numerous UNintentional misspellings.

No, I did not know this, Mr Wagner, and I have difficulty believing it.
Please supply a few examples to support this assertion... it should be
easy for you since, as you have stated, 'most' contain these.

DD

Chuck Stevens

unread,
Aug 31, 2004, 7:31:48 PM8/31/04
to
"Richard" <rip...@Azonic.co.nz> wrote in message
news:217e491a.04083...@posting.google.com...

> To claim that repeatedly after it has been pointed
> out as wrong, with documentation to support this, invites broadsides
> of counter fire.

Or, to allude to another thread in this forum, such claims in the light of
the counterarguments might be regarded as analogous to the claims of the Black
Knight in "Monty Python and the Holy Grail" as to the injuries he would
inflict next. Incredulity regarding one's talents, capabilities, expertise
or knowledge may not always be a *positive* reaction.

Walt Kelly wrote "I refuse to have a battle of wits with an unarmed person."
I daresay some of us that participate in this forum would probably do well
to heed that advice.

William D. Blake wrote "Sometimes I suffer from delusions of adequacy." I
daresay some of us that participate in this forum would also be well-advised
to include such a characterization in our own inventory of our individual
strengths and weaknesses.

-Chuck Stevens


Robert Wagner

unread,
Aug 31, 2004, 9:38:51 PM8/31/04
to
On 30 Aug 2004 16:54:38 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

>> >"""I like Kenny G but the cognosenti don't. They say he's
>> >repititious."""

>> >"""Academics don't call it Classical, they call it Art Music."""

>But in any case, your two statements seemed to have the primary aim of
>denigrating 'the cognoscenti' and 'academics'.

That wasn't my intention. Both encouraged the reader to learn what
others are saying.

>I was once a mainframe programmer when mainframes were just about all
>there were.

We all were. Some of us left the culture when it became practical to
do so.

>> We all make mistakes. That's not it. People become irritated when I
>> question 'prevailing wisdom', when I challenge practices they've been
>> following for 30 years.
>
>If your intent is to deliberately 'irritate people', then it may be
>working. But you are irritating the wrong people when they are
>constrained by their employers and terms of employment.

Hopefully, decision makers will peek in and learn how to save money,
make systems more reliable and programmers less cynical.

> Given the usual American foreign policy as I see it, then
>that may well be what they indoctrinated you with in the Marines.

I'm equally unhappy with American foreign policy. I'd like to see Iraq
turned into a modern country. Then citizens of other third world
countries would demand more from their governments. It's tragic that
billions of lives are wasted by inept or self-serving government.

>> My observation about rounding error is not widely known.
>
>I think that it is very valuable to discuss such matters. I hope a lot
>of people are reading the messages and putting the results into the
>context of their own programming and what techniques they have been
>taught to use.
>
>However, if they think that 'Cobol has a bug' then the truth has lost
>out to 'tactical firepower'.

If they don't believe me, your barrage of flak will have prevailed.

>> My sort demos showed how to build trees and linked lists, and call C
>> functions .. things readers might not have thought of on their own.
>
>Which were valuable things to discuss. However you made claims that
>linked lists were much faster than tables as if this was a general
>statement of revealed truth when it was provably wrong.

It was another lesson learned.

>> They showed how to sort the OO way and how to sort ten times faster
>> than the Micro Focus library.
>
>Specific case solutions are always faster than more generalised cases.

Good sort programs have more than one algorithm. I know SyncSort used
to have three or four.

>> Those seem more valuable contributions than answers to beginners'
>> questions.
>
>I have a different view on those issues. Reverting to 'what C did in
>the 80s' is not necessarily useful.

Professional programmers should know basics, like how to manage a
list.

>What you failed to present, probably because you never did much C, was
>why C++ was developed to solve the problems that existed in C, such as
>the problems inherent in using pointers. And then Java solved the
>problems inherent in C++.

Java syntax was copied from C++. Java solved the problem of
distributing software to multiple platforms.

>When you introduce the use of raw address pointers, and also of other
>things such as ENTRY, you completely fail to take into account the 20
>or 30 years of experience that C programmers have had with the
>problems that these introduce. Most likely because you weren't doing C
>in enough depth to care.

Namespace pollution isn't as bad as the mess created by mangling.
Cobol can have the same problems. I've worked on Cobol load modules
that had 100 callable programs.

>What would be more valuable is if you brought in these 30 year old
>techniques and explained why _not_ to use them. Then people would see
>why C++ was developed and would understand why OO is a valuable tool
>(but pointers are not).

I can't dispute that.

>I suppose that you will demand examples. Well just as a starter:
>"ANS84 does not define a standard file status code for 'file not
>found'".

In 74, it was a 9x. I forgot they standardized it in New Cobol.
Besides, I just write:
IF FILE-STATUS NOT EQUAL TO ZERO
DISPLAY 'Something is wrong ' FILE-STATUS
GO TO ABEND.

Robert Wagner

unread,
Aug 31, 2004, 9:38:54 PM8/31/04
to
On 30 Aug 2004 22:00:38 -0400, docd...@panix.com wrote:

>In article <g4l6j05mdps70k5th...@4ax.com>,


>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>
>[snip]
>

>>It's true that the Marines teach perseverance, but the most important
>>thing they teach is Superior Tactical Firepower.
>
>'He who thinks all fruits ripen the same time as strawberries knows
>nothing of grapes.' - Paracelsus

Strawberry isn't a fruit, it's a stem. The achenes on its surface,
often mistaken for seeds, are the fruits.

>Mr Wagner, any Marine commander worth their commission should know that
>some situations require 'Superior Tactical Firepower' and enough lead so
>you can walk on the air... and some situations require a barefoot assassin
>with a knife. An attempt to treat either situation as the other may
>lessen one's probabilities of success.

Because I was in Recon, I knew that. The remark about firepower was a
generality.

>>In the context of
>>this forum, that means Truth eventually wins.
>
>If that is what it 'means', Mr Wagner, then it is no wonder that the
>techniques used to support it more closely appear to resemble 'throw
>enough crap against the wall and some of it is sure to stick.'

Our technique was go to full auto and hose the area down. We didn't
care whether we hit anything. We just wanted them to duck long enough
to make our getaway.

Robert Wagner

unread,
Aug 31, 2004, 9:38:57 PM8/31/04
to
On Mon, 30 Aug 2004 23:07:54 -0500, "William M. Klein"
<wmk...@nospam.netcom.com> wrote:

>"In computing, a fixed-point number representation is a real data type for a
>number that has a fixed number of digits after the decimal (or binary or
>hexadecimal) point. For example, a fixed-point number with 4 digits after the
>decimal point could be used to store numbers such as 1.3467, 281243.3234 and
>0.1000, but would round 1.0301789 to 1.0302 and 0.0000654 to 0.0001."

How do you indicate a decimal point in a register? How do you
represent one in memory? Except for floating-point, computer
instructions don't know about decimal points; they operate on
integers.

Robert Wagner

unread,
Aug 31, 2004, 9:39:00 PM8/31/04
to
On Mon, 30 Aug 2004 12:35:17 -0700, "Chuck Stevens"
<charles...@unisys.com> wrote:

>You've spent a lot of energy defending your position by describing all the
>ways in which you think roundING is broken, but not a lot of ways describing
>the ways in which you think its DESCRIPTION is broken in the standard. How
>would you improve it?

By using Bankers' Rounding, which Jordan Wouk calls Nearest-unbiased.

I would NOT define a function. That defeats the convenience of using
pictures and COMPUTE. I'd make it a compiler option, like Standard
Math.
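
For what it's worth, here is a minimal sketch of that nearest-unbiased
(round-half-to-even) rule written as plain COBOL-85 arithmetic, rounding a
PIC 9(5)V999 value to two decimals. Program and data names are invented,
and a compiler option would of course do this under the covers rather than
in user code:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RNDEVEN.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-IN        PIC 9(5)V999 VALUE 2.345.
       01  WS-OUT       PIC 9(5)V99.
       01  WS-SCALED    PIC 9(8).
       01  WS-Q1        PIC 9(8).
       01  WS-Q2        PIC 9(8).
       01  WS-THIRD     PIC 9.
       01  WS-KEPT      PIC 9.
       PROCEDURE DIVISION.
       MAIN-PARA.
           PERFORM ROUND-HALF-EVEN
           DISPLAY "IN  = " WS-IN
           DISPLAY "OUT = " WS-OUT
           STOP RUN.
       ROUND-HALF-EVEN.
      * Truncate first, then decide whether to bump the last kept digit.
           MOVE WS-IN TO WS-OUT
      * Treat the value as a scaled integer to pull out the discarded
      * third decimal (WS-THIRD) and the kept second decimal (WS-KEPT).
           COMPUTE WS-SCALED = WS-IN * 1000
           DIVIDE WS-SCALED BY 10 GIVING WS-Q1 REMAINDER WS-THIRD
           DIVIDE WS-Q1 BY 10 GIVING WS-Q2 REMAINDER WS-KEPT
           EVALUATE TRUE
               WHEN WS-THIRD > 5
                   ADD 0.01 TO WS-OUT
               WHEN WS-THIRD = 5 AND (WS-KEPT = 1 OR 3 OR 5 OR 7 OR 9)
                   ADD 0.01 TO WS-OUT
           END-EVALUATE.

With that rule the discarded digits 1-4 always round down, 6-9 always round
up, and the 5s split between up and down depending on the kept digit, which
is what removes the upward bias described at the top of this thread.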

>And if it's not the *definition* of ROUNDED but rather its *use* (or, for
>that matter, the use of roundING through explicit calculations) that sticks
>in your craw, what are you suggesting that, for example, the banking
>industry do to correct this error that nearly a half-century of auditors
>have failed to catch?

They haven't failed to catch it; the problem is well known. Other
languages and databases that use Bankers' Rounding are Visual Basic,
hardly an obscure language, C# and all of .NET, JavaScript, Delphi
and Sybase.

It's called Bankers' Rounding because it is already standard practice
in the banking industry. No correction needed in that Unisys
stronghold.

>
>Personally, I think "ROUNDED" had better do exactly what the standard says
>it does, and I also think what the standard says it does is abundantly
>clear. Changing the definition of ROUNDED in a future standard would break
>upward-compatibility rules in any case, and I can virtually guarantee that
>neither WG4 nor J4 would find the arguments that (a) that definition is
>broken and (b) has to be corrected a convincing one.
>
>As to whether individual programmers use roundING appropriately or not in
>COBOL, well, that's a matter for individual programmers, for their bosses,
>and for the auditors that verify the company's books. I think those
>auditors have a pretty good clue as to what constitutes appropriate rounding
>and what does not.
>
>This whole discussion reminds me of an acquaintance (now long deceased) who
>had finished his course work for a doctorate in mathematics, and was
>preparing his dissertation on the formulae relating to shock wave theory
>(this being early on in the cold war). He was able to prove that all the
>prevailing formulae were incorrect, but felt it was *ethically
>inappropriate* to publish this as a dissertation without including a
>defensible alternative, which he was unable to do. Rather than "prove a
>negative" as his doctoral dissertation -- which his advisor and the faculty
>were willing to accept -- he mailed a copy of the dissertation to the three
>people in the Western world who might have cared -- as I recall all
>physicists at Los Alamos -- and withdrew from the doctoral program, never to
>return.
>
>Presuming, Mr. Wagner, that you have indeed proven that "rounding is
>broken", what *constructive* actions do your ethics require you to take so
>as to rectify that error?
>
> -Chuck Stevens
>

Robert Wagner

unread,
Aug 31, 2004, 9:39:01 PM8/31/04
to
On 30 Aug 2004 12:39:58 -0700, rip...@Azonic.co.nz (Richard) wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote
>
>> Truncation lowers the average to .499..5 because it discards
>> information. That's easy to understand.
>
>Good, because you started by saying that all v99, v999, v9999 averaged
>.5000 which meant that even though there was some discarded it still
>somehow added up to the same total.

Right. You win that point.

>> Rounding is supposed to maintain the average.
>
>It does. It maintains the average of the infinite precision original
>real numbers. With enough digits the average is close enough to
>0.500. Rounding maintains this with an average of 0.500.
>
>> It's not supposed to push it upward,
>
>It only 'pushes it upwards' with respect to how much truncation
>'pushed it downwards'. Your numbers of 0.5005 were _wrong_ and were
>not the result of valid experiment.

What if the numbers were not truncated, computed or rounded? I posted
a demo of that case to which you had no reasonable answer.

>> but it does in Cobol due to a bug.
>
>No. Wrong. The rounding mechanism is well defined and works
>identically in primary school textbooks, mechanical adding machines,
>C, Java, and Cobol.

You're incorrect. It doesn't work that way in C# nor in JavaScript (I
don't know about compiled Java).

This looks like a Brooklyn Bridge defense. Everyone else is doing
wrong.

>> You try to explain away the bug by linking the two, by claiming that
>> rounding somehow recalls and corrects truncation error.
>
>No. It doesn't 'recall' anything. It completely ignores whether it
>has been truncated or not. Rounding, say, to cents, takes the two
>digits and inspects the 3rd. It neither knows nor cares whether there
>are no digits beyond this or an infinity of them. Nor does it care
>whether these digits are (or would be) all 9s or all 0s.
>
>This means that you _can_ truncate beyond the third digit without
>affecting the result of rounding, which is entirely predictable.

You've said that over and over. What's the point of truncating to
three digits followed by rounding to two? Why not just round in the
first place?


>You continue to say 'rounding error' when applied to an average. There
>is no _the_ rounding error. _A_ 'rounding error' is the difference
>between each individual original infinite precision real and the
>rounded number. For example 1.23761.. rounded gives 1.24 with a
>rounding error of .00238.. The total of the infinite precision reals
>and of the rounded numbers will be about the same for even
>distributions.

Yes, and an infinite number of random errors average to zero. The
problem is Cobol rounding is not random. It pushes the average upward
always, unless the numbers happen to end with zero.


>Now here's the thing: read this 3 times, then come back tomorrow and
>read it again:
>
> The difference between the set of infinite precision reals and
> the truncated set,
> and
> the difference between the rounded set and the truncated set
>
> are _identical_ (in ideal conditions)

It didn't take that long. That's true only in the case where rounding
removes one digit. If we take the more common case of dollars.cents
rounded to dollars, there is an upward bias of half a cent.
>
>If the set of infinite precision reals adds up to the same as the set
>of rounded numbers (which it does under ideal conditions). Then any
>set that is less than that set (such as a truncated set) will have the
>_same_ difference to the two sets that have the same total.
>
>How hard is that ?

It's based on a false premise -- that the truncation error is offset
by the rounding error. They are two separate issues that don't
necessarily cancel out.

>> When I present cases where there was no truncation to correct, you
>> brush them aside by saying the rules are different for integers.
>
>Rounding works on real numbers of infinite precision and random digits
>beyond the rounding point to create numbers with fixed precision.
>
>If you use it on numbers that do not have those characteristics then
>either you put up with the errors (of usage) or you modify the
>technique to cater for the differences that you _should_ expect.

What does that mean .. in English? There is no reason rounding should
work differently on dollars rounded to thousands than it does on cents
rounded to dollars. As I pointed out in another thread, they're both
scaled integers at machine level.

>> Your argument is an appeal to ignorance.
>
>I really don't know what to say without it seeming like an ad hominem
>attack ;-)

Just say I'm an idiot. You've done it before. :)

>Perhaps all this is a troll on your part to get me to throw insults at
>you.
>
>> Rounding is broken.
>
>I suggest that you try using C or Java or Perl or just follow the
>instructions in a primary school maths book and see what answers you
>get. When you find some definition of rounding that gives better
>answers than Cobol does you can then present that here.

I have found a better definition. So have others who are serious about
the subject. It's called Nearest-unbiased.


docd...@panix.com

unread,
Aug 31, 2004, 9:55:49 PM8/31/04
to
In article <052aj0lmkgdi0r22g...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On 30 Aug 2004 12:39:58 -0700, rip...@Azonic.co.nz (Richard) wrote:
>
>>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

[snip]

>>> but it does in Cobol due to a bug.
>>
>>No. Wrong. The rounding mechanism is well defined and works
>>identically in primary school textbooks, mechanical adding machines,
>>C, Java, and Cobol.
>
>You're incorrect. It doesn't work that way in C# nor in JavaScript (I
>don't know about compiled Java).
>
>This looks like a Brooklyn Bridge defense. Everyone else is doing
>wrong.

Hey, if you're going to use the Brooklyn Bridge then use it right,
Brody... errrrrr, Mr Wagner. The classic Brooklyn Bridge Defense is
justifying one's actions based on what others are doing, not what others
are not-doing. The interaction between a Petulant Youth and a - may she
sleep with the angels! - Sainted Paternal Grandmother would go something
like -

PY: 'All right, I'm gonna go (x).'

SPG: 'Over my dead body you'll (x), now go and do your homework.'

PY: 'Awwwwwww... Everyone Else is doing it!'

SPG: 'So... Everyone Else is jumping off the Brooklyn Bridge, you're
jumping, too?'

(The response to 'Nobody Else has to' could range, depending on mood, from
'So... Nobody Else has to feel the back of my hand, go do it' to 'So...
you'll be the first, go do it.')

DD

docd...@panix.com

unread,
Aug 31, 2004, 10:05:08 PM8/31/04
to
In article <68o9j0hjmld6k3jur...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On 30 Aug 2004 16:54:38 -0700, rip...@Azonic.co.nz (Richard) wrote:

[snip]

>>If your intent is to deliberately 'irritate people', then it may be
>>working. But you are irritating the wrong people when they are
>>constrained by their employers and terms of employment.
>
>Hopefully, decision makers will peek in and learn how to save money,
>make systems more reliable and programmers less cynical.

On the UseNet? Mr Wagner, you've asserted at other times that 'decision
makers' get all their ideas from articles in throw-away airline magazines
they have skimmed during cross-country flights... this kind of behavior
is, in my experience, at odds with the idea-generating stimuli reported by
UseNet readers.

What comes next... the plaintive wail of 'The children... errrrr, the
newbies! We must do it for the newbies!'?

[snip]

>>However, if they think that 'Cobol has a bug' then the truth has lost
>>out to 'tactical firepower'.
>
>If they don't believe me, your barrage of flak will have prevailed.

Hmmmm... so it seems that Napoleon was a bit neglectful; both God *and*
'truth' appear to be on the side of the biggest battalions.

[snip]

>IF FILE-STATUS NOT EQUAL TO ZERO
> DISPLAY 'Something is wrong ' FILE-STATUS
> GO TO ABEND.

Mr Wagner, this kind of code in an MVS-esque environment will cause ABENDs
where none need occur.

DD

docd...@panix.com

unread,
Aug 31, 2004, 10:09:12 PM8/31/04
to
In article <0kr9j0lmelv9q5b72...@4ax.com>,

Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>On 30 Aug 2004 22:00:38 -0400, docd...@panix.com wrote:
>
>>In article <g4l6j05mdps70k5th...@4ax.com>,
>>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>>
>>[snip]
>>
>>>It's true that the Marines teach perseverance, but the most important
>>>thing they teach is Superior Tactical Firepower.
>>
>>'He who thinks all fruits ripen the same time as strawberries knows
>>nothing of grapes.' - Paracelsus
>
>Strawberry isn't a fruit, it's a stem. The achenes on its surface,
>often mistaken for seeds, are the fruits.

This just might be an indication as to why Paracelsus is remembered as a
physician and alchemist and not as a botanist.

>
>>Mr Wagner, any Marine commander worth their commission should know that
>>some situations require 'Superior Tactical Firepower' and enough lead so
>>you can walk on the air... and some situations require a barefoot assassin
>>with a knife. An attempt to treat either situation as the other may
>>lessen one's probabilities of success.
>
>Because I was in Recon, I knew that. The remark about firepower was a
>generality.

In the Air Force, Mr Wagner, an oft-heard phrase was 'no generality is
worth a damn'... perhaps this might make it to the Corps some day.

>
>>>In the context of
>>>this forum, that means Truth eventually wins.
>>
>>If that is what it 'means', Mr Wagner, then it is no wonder that the
>>techniques used to support it more closely appear to resemble 'throw
>>enough crap against the wall and some of it is sure to stick.'
>
>Our technique was go to full auto and hose the area down. We didn't
>care whether we hit anything. We just wanted them to duck long enough
>to make our getaway.

A rather limited repertoire, Mr Wagner, and it might be completely
ineffective when what is needed is a barefoot assassin with a knife... but
it seems that one does what one knows best.

DD

William M. Klein

unread,
Aug 31, 2004, 10:43:25 PM8/31/04
to
You don't indicate a "decimal point in a register" (nothing in the quoted text
claims you do)

Once again, you are confusing how one STORES data with what that data IS (or
even what it represents)

As stated before, a computer (at least all the ones that I know about) ONLY knows
about "on and off" (or 1 and 0). How EACH bit is "used" is up to those who
create the software and hardware.

Although you have quoted my post out of context, remember it was you who asked
for "citations" and once you received them, you seem to "go off" on another
track (yet again) rather than admitting that the citations confirm the
(generally held by others in this forum and elsewhere) understanding that no
matter how one STORES integers and/or fixed-point items, the simple fact is that
the statement

fixed point items (or numbers or operands) ARE integers

is wrong.

Admit that (and admit your original error or misstating of what you meant) and
then you can postulate something that may (or may not) be true, e.g.

That on most of the computer systems that you know of (and in fact in most of
the systems that I know of), fixed point data items are STORED in the same way
as integers - it is the software (and/or hardware) that also stores the "scaling
information" to correctly interpret the meaning (where the decimal point is) for
such stored values.
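
A minimal sketch of that idea (names invented; it illustrates the
bookkeeping only, not how any particular compiler works internally): the
same multiplication done once as ordinary fixed-point COMPUTE and once "by
hand" as scaled integers, where the programmer, standing in for the
compiler, has to remember that the integer product carries 2 + 2 = 4
implied decimal places.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SCALEDEM.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-A         PIC 9V99     VALUE 3.45.
       01  WS-B         PIC 9V99     VALUE 1.23.
       01  FIXED-PROD   PIC 9(2)V9(4).
       01  EDITED-PROD  PIC 9(2).9(4).
       01  WS-A-INT     PIC 9(3).
       01  WS-B-INT     PIC 9(3).
       01  INT-PROD     PIC 9(6).
       PROCEDURE DIVISION.
       MAIN-PARA.
      * The compiler aligns the result using the scales in the PICTUREs.
           COMPUTE FIXED-PROD = WS-A * WS-B
           MOVE FIXED-PROD TO EDITED-PROD
      * The same arithmetic as scaled integers: 345 * 123 = 42435, and
      * the scale bookkeeping says the product has 4 implied decimals.
           COMPUTE WS-A-INT = WS-A * 100
           COMPUTE WS-B-INT = WS-B * 100
           COMPUTE INT-PROD = WS-A-INT * WS-B-INT
           DISPLAY "Fixed-point COMPUTE: " EDITED-PROD
           DISPLAY "Scaled integers:     " INT-PROD
               " (4 implied decimal places)"
           STOP RUN.

Both results contain the same digits, 42435; the only difference is whether
the compiler or the programmer keeps track of where the decimal point
belongs.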

--
Bill Klein
wmklein <at> ix.netcom.com

"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message

news:ots9j05fcab0ao37s...@4ax.com...

William M. Klein

unread,
Aug 31, 2004, 11:11:12 PM8/31/04
to
Let's look at how changing logic is used in a classic RW response

--
Bill Klein
wmklein <at> ix.netcom.com
"Robert Wagner" <rob...@wagner.net.yourmammaharvests> wrote in message

news:052aj0lmkgdi0r22g...@4ax.com...


> On 30 Aug 2004 12:39:58 -0700, rip...@Azonic.co.nz (Richard) wrote:

<snip>

> What if the numbers were not truncated, computed or rounded? I posted
> a demo of that case to which you had no reasonable answer.
>
>>> but it does in Cobol due to a bug.
>>
>>No. Wrong. The rounding mechanism is well defined and works
>>identically in primary school textbooks, mechanical adding machines,
>>C, Java, and Cobol.
>
> You're incorrect. It doesn't work that way in C# nor in JavaScript (I
> don't know about compiled Java).
>

<snip>

The original post stated,

"works identically in primary school textbooks, mechanical adding machines, C,
Java, and Cobol."

The response says that this is INCORRECT because (and I quote)

It doesn't work that way in C# nor in JavaScript (I don't know about compiled
Java)."

So now, tell me why RW thinks this response has any relationship to the
original statement?

If the original statement had been,

"works identically in all variations and derrivites of Java and C and COBOL."

then his response would have been responsive.

Only (well that IS an exaggeration <G>) RW would respond to

"A and B work the same"
with the statement
"Incorrect, A and D don't work the same"

***

Again, (because I think I know what he MEANT to say), it would be nice if he
would respond correctly with a statement like,

"Sorry for my original mis-statement. What I meant to say was

You may (or may not) be right when you said 'rounding mechanism is well
defined and works identically in primary school textbooks, mechanical adding
machines, C, Java, and Cobol.'

but it is also true that there are many other programming languages that use
other approaches for rounding that avoid what I think is an unwarranted bias.
For example, C# and JavaScript use "bankers' rounding"; see:

For C#:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemmathclassroundtopic3.asp

For JavaScript: well, oops, according to at least one reference, JavaScript
does NOT use "bankers rounding" for its "normal" rounding. See:
http://www.developingskills.com/ds.php?article=jsround&page=1
which says,

"Math.round - if the decimal part is 0.5 or more, it is rounded up. If it's
less than 0.5, it is rounded down. This is just the way we're taught at school
and is ideal unless we specifically need something different."

and later says,

"Many programming languages use a type of rounding called "round to even" or
"banker's rounding". This means that rounding works in the normal way EXCEPT
when the decimal part is exactly 0.5. In this case it will round to the
nearest even number. So 1.5 rounds to 2 and so does 2.5. The Javascript
function to achieve this is:

function roundToEven(num){
    if ((Math.floor(num)%2==0) && (Math.abs(num-Math.floor(num))==0.5))
        return Math.round(num)-1;
    else
        return Math.round(num);
}"

William M. Klein

unread,
Aug 31, 2004, 11:17:09 PM8/31/04
to
Oops formatting error at the end of original message. Formatting now (I hope)
corrected

--
Bill Klein
wmklein <at> ix.netcom.com

"William M. Klein" <wmk...@nospam.netcom.com> wrote in message
news:qaudnWZHTdN...@comcast.com...

which says,

and later says,

The Javascript function to achieve this is:


function roundToEven(num){
if ((Math.floor(num)%2==0) && (Math.abs(num-Math.floor(num))==0.5))
return Math.round(num)-1;
else
return Math.round(num);
}

There are plenty more types of rounding, including random rounding, alternate
rounding, symmetric rounding and asymmetric rounding. Microsoft have written an
interesting HOWTO covering these. Unfortunately it's written with Visual Basic
developers in mind, but there's plenty to think about no matter what language
you write your software in."


Richard

unread,
Sep 1, 2004, 3:18:28 AM9/1/04
to
Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote

> We all were. Some of us left the culture when it became practical to
> do so.

I never found that it was a 'culture'. It was just programming. In
the late 70s I moved from hangars full of mainframes to the new
8080 CP/M things, with Unix in between.

> Hopefully, decision makers will peek in and learn how to save money,
> make systems more reliable and programmers less cynical.

Here ? Don't be silly.

> Professional programmers should know basics, like how to manage a
> list.

> Java syntax was copied from C++. Java solved the problem of
> distributing software to multiple platforms.

Java does more than that. For a start it dumps pointers to move to a
higher level language.

> Namespace pollution isn't as bad as the mess created by mangling.
> Cobol can have the same problems. I've worked on Cobol load modules
> that had 100 callable programs.

Actually ENTRY names are _in_addition_ to the program names, so there
is the _same_ problem, plus more besides.



> In 74, it was a 9x. I forgot they standardized it in New Cobol.

That's OK. No one can remember everything. The problem was that when
it was pointed out that there was one you brought out your 'superior
tactical firepower' and continually denied that it existed, even
[mis-]quoted Fujitsu to 'prove' you were right.

Instead of accepting that the correction may be right and checking in
the manual, you simply continue to argue the misinformation. You
mentioned the look of fear when you walked into a site, ....

> Besides, I just write:
> IF FILE-STATUS NOT EQUAL TO ZERO
> DISPLAY 'Something is wrong ' FILE-STATUS
> GO TO ABEND.

When testing a File Status code for success you should only check the
first character for being zero. There are success codes that are not
'00'.

Your code also appears to be something from a 'mainframe culture' of
the 70s. This may be suitable where someone will be called in at 2am
to decipher what '9[' means. Modern systems require a more
sophisticated approach.
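
A minimal sketch of that check (the file, status-field, switch and
paragraph names here are all invented): declare the FILE STATUS item, treat
any status whose first byte is '0' as a successful completion, handle
end-of-file on the READ itself, and report everything else.

           SELECT IN-FILE ASSIGN TO "INFILE"
               FILE STATUS IS WS-FILE-STATUS.

       01  WS-FILE-STATUS   PIC XX.
       01  WS-EOF-SW        PIC X VALUE 'N'.

      * '00', '02', '04', ... all begin with '0' and are successful
      * completions; end-of-file ('10') is dealt with by the AT END.
           READ IN-FILE
               AT END MOVE 'Y' TO WS-EOF-SW
           END-READ
           IF WS-EOF-SW NOT = 'Y'
              AND WS-FILE-STATUS (1:1) NOT = '0'
               DISPLAY 'I/O error on IN-FILE, status ' WS-FILE-STATUS
               PERFORM ERROR-HANDLER
           END-IF

Whether the ERROR-HANDLER paragraph then abends or does something more
graceful is the 'sophistication' question; the point here is only that the
success test looks at the first byte, not at '00' alone.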

Robert Wagner

unread,
Sep 1, 2004, 3:20:23 AM9/1/04
to
On Tue, 31 Aug 2004 23:17:33 +0100, Jeff York <ra...@btinternet.com>
wrote:

>Robert Wagner <rob...@wagner.net.yourmammaharvests> wrote:
>
>>A dictionary compiler is faced with hard choices, for instance whether
>>to include 'calender'.
>
>Why should a lexicographer choose to omit the word for a piece of
>machinery used in cloth-making?

Because it is seldom used by the general public and is more likely a
misspelling of calendar.

As with most spelling checkers, mine has a supplemental word list. If
the user is in the textile industry, he or she can easily add
calender.
