It's risky to disagree with an expert, but here goes...

+----------------------------+
| HP48 STATISTICS VINDICATED |
+----------------------------+
by Joseph K. Horn
In posting <328D1B...@echip.com>,
Bob Wheeler <bwhe...@echip.com> wrote:
> HP is a good company and produces good product, but they have always
> been a bit backward with respect to their statistical stuff -- this
> goes way back before micro computers. You can trust the statistical
> calculations in the HP48 for small, non-challenging data sets, but
> don't do any extensive, serious calculations on it -- especially
> simulations with the random number generator -- random numbers are
> too important to be left to chance.
>
> The best and most accurate way to calculate the above is with a
> simple looping function. All the serious computer packages use it,
> because it is both the fastest and the most accurate way to do the
> calculations. See Miller, Alan, J. (1989). Updating means and
> variances. Jour. Computational Physics.
>
> Let X(i) be an array of i=1...N interval midpoints, and W(i) a
> corresponding array of weights (frequencies). Use arrays SW(i),
> SX(i), and M(i) to hold the calculations. Set SW(0)=SX(0)=M(0)=0,
> and then repeat
>
> SW(i+1)=SW(i)+W(i)
> d=[X(i+1)-M(i)]W(i)
> M(i+1)=M(i)+d/SW(i+1)
> SX(i+1)=SX(i)+d[X(i+1)-M(i+1)]
>
> SX(N) will contain N times the variance (SD squared), and M(N) the
> mean. You don't actually have to use arrays for SW,SX, and M:
> scalars will do if you update them.
>
> Bob Wheeler, ECHIP, Inc.
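For concreteness, here is a small Python rendering of the updating scheme
quoted above, reading its indices consistently; it is only an illustration
(the function and variable names are mine), not Wheeler's or HP's actual
code:

  def updating_stats(x, w):
      sw = 0.0   # SW: running sum of weights
      m = 0.0    # M:  running (weighted) mean
      sx = 0.0   # SX: running sum of weighted squared deviations
      for xi, wi in zip(x, w):
          sw += wi
          d = (xi - m) * wi
          m += d / sw
          sx += d * (xi - m)   # note: uses the freshly updated mean
      return m, sx             # mean, and (sum of weights) times the variance

  mean, sx = updating_stats([6666666123.0, 6666666246.0, 6666666369.0],
                            [1.0, 1.0, 1.0])
  print(mean, (sx / 2) ** 0.5)   # 6666666246.0 and 123.0 for this data

With unit weights, SX ends up as the plain sum of squared deviations about
the mean, so dividing by n-1 (or n) gives the sample (or population)
variance.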
Sorry, I must strongly disagree for several reasons.
(1) This algorithm was first proposed by W. Kahan and B.N. Parlett in
the University of California, Berkeley College of Engineering
Electronics Research Lab Memorandum No. UCB/ERL M77/21, April 6, 1977.
NOTICE THE DATE. Please note that Professor Kahan was one of the
primary movers and shakers in the movement to make calculators as
accurate as possible, and HP was always ahead of the pack in this
regard, working very closely with Kahan.
(2) Inaccuracies in the statistics functions of the EARLY models were
discussed BY HP THEMSELVES in the Hewlett-Packard Journal BEFORE Kahan
& Parlett's article (cf. "The New Accuracy" by Dennis W. Harms,
Hewlett-Packard Journal, November 1976, pp. 16-17). Harms pointed out
that data with many leading digits that are the same are very
difficult to handle, for example:
6666666123 <-- seven leading 6's
6666666246
6666666369
No HP calculator at the time could find the correct standard deviation
of these data (123)... nor could any other brand.
(3) Kahan was a member (2514) of the HP-65 User's Club (later renamed
to PPC) and he published an article in its "65 Notes" newsletter
(later renamed to PPC Journal). The article is called "More Accurate
Sigma" (65 Notes, V4N9P1: November 1977, page 1). No such attention
to accuracy was found in any other calculator club's newsletter. Oh,
that's right: there weren't any others, just HP's.
(4) Professor John Kennedy (PPC member 918) published an article in
the PPC Journal (V5N10P12: December 1978, page 12) entitled "More
Accurate Statistics" in which he fully explained not only the Kahan/
Parlett algorithm but also gave HP-67/97 programs implementing it for
mean, standard deviation, and linear regression. He also
notes that this improved method STILL FAILS with HP's own example as
shown in #2 above, so IT ISN'T REALLY ALL THAT ACCURATE AFTER ALL.
(5) HP did not adopt the "improved" algorithm for four reasons: (a) it
only improved results in some cases, but it still failed in others
(see above); (b) the "older" algorithm allowed the Sigma+ key to be
used for vector arithmetic, a handy feature that users had come to
expect in HP calculators; (c) many HP calculator users are college
students, and statistics is still often taught as a process in which
we write down separate columns for sum-of-x, sum-of-x-squared, and so
on, so they expected the HP calculator to make those values available;
and (d) HP knew that there was a way better than either of those
algorithms, namely: keeping all the data (instead of making summations
and throwing the actual data away), and then using the formal
mathematical definitions of standard deviation etc.
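In outline, that "formal definition" approach is simply the familiar
two-pass computation; a minimal Python sketch of the idea (mine, not HP's
internal code) is:

  def sample_sdev(data):
      n = len(data)
      mean = sum(data) / n                        # pass 1: the mean
      ssd = sum((x - mean) ** 2 for x in data)    # pass 2: squared deviations
      return (ssd / (n - 1)) ** 0.5

  print(sample_sdev([6666666123.0, 6666666246.0, 6666666369.0]))   # 123.0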
(6) When the HP48 was introduced on March 6, 1990, this whole
discussion of accuracy in statistics became moot. The HP48 does it
right. It even gets the right standard deviation of this horrific
data set:
666666666123 <-- nine leading 6's
666666666246
666666666369
The correct standard deviation is 123. That's what the HP48 gets.
The Kahan/Parlett algorithm that Wheeler says is so much better (using
the program by John Meyers) gets 123.24974645. John Meyers warned me in
email that this inaccuracy may be due to the fact that it was written
in User RPL, whereas HP's SDEV is written to use higher accuracy
internally. No, that's not why. It's because HP's algorithm is
better; it uses the formal definition, not the one that's shown in
text books and reference books as "for computational purposes" based
on sum-of-x and sum-of-x-squared.
In other words, HP wasn't lying on page 3-301 in the AUR. Go ahead;
look it up. Look at the formula. HP does it right!
And to make students happy, HP provided functions that yield SigmaX,
SigmaX^2, etc. Best of both worlds: perfect accuracy *and* the
old-style summations for those who want them. As for vector
arithmetic, the HP48 of course handles that easily without any need
to use the Sigma+ key. Everybody's happy. Except Bob Wheeler.
(7) Wheeler says that HP have "always been a bit backward". I
disagree. As I've shown, HP knew exactly what they were doing, and
they now produce TWO calculators that do statistics RIGHT (the HP48
and the HP38). I just threw the above example into the best
calculator Casio makes, and it choked on it.
(8) Wheeler says that "the best and most accurate" algorithm is the
Kahan/Parlett algorithm. As I've shown, it sucks compared to the
formal definition algorithm used by the HP48.
(9) Wheeler says "all the serious computer packages use it". Oh,
yeah? Like, which ones? List your references, please.
(10) Wheeler says that it's also "the fastest" algorithm. Not true.
John Kennedy's article (cf #4 above) goes to great length to compare
the code size and speed differences between the two algorithms.
Conclusion: the "new" algorithm takes MORE TIME (and code).
(11) I admire Bob Wheeler for being a regular reader of the Journal of
Computational Physics, but somebody ought to publish an article in it
announcing the news that one year after Alan Miller's 1989 article, HP
introduced the HP48 with statistics even better than Miller proposed.
Please accompany all rebuttals with examples and references.
Otherwise I must assume that you're just blowing smoke. It really
irks me when I see well-established facts belittled and replaced with
outright misinformation, especially by somebody who, in several
postings, has claimed without any evidence that HP uses inferior
algorithms.
-Joe-
Joseph K. Horn
joe...@mail.liberty.com
-------------------==== Posted via Deja News ====-----------------------
http://www.dejanews.com/ Search, Read, Post to Usenet
joe...@mail.liberty.com wrote in article <8685660...@dejanews.com>...
> It's risky to disagree with an expert, but here goes...
>
> +----------------------------+
> | HP48 STATISTICS VINDICATED |
> +----------------------------+
> by Joseph K. Horn
>
> In posting <328D1B...@echip.com>,
> Bob Wheeler <bwhe...@echip.com> wrote:
>
<snipped>
> Sorry, I must strongly disagree for several reasons.
<snipped>
> (6) When the HP48 was introduced on March 6, 1990, this whole
> discussion of accuracy in statistics became moot. The HP48 does it
> right. It even gets the right standard deviation of this horrific
> data set:
>
> 666666666123 <-- nine leading 6's
> 666666666246
> 666666666369
>
> The correct standard deviation is 123. That's what the HP48 gets.
> The Kahan/Parlett algorithm that Wheeler says is so much better (using
> the program by John Meyers) gets 123.24974645. John Meyers warned me in
> email that this inaccuracy may be due to the fact that it was written
> in User RPL, whereas HP's SDEV is written to use higher accuracy
> internally. No, that's not why. It's because HP's algorithm is
> better; it uses the formal definition, not the one that's shown in
> text books and reference books as "for computational purposes" based
> on sum-of-x and sum-of-x-squared.
>
> In other words, HP wasn't lying on page 3-301 in the AUR. Go ahead;
> look it up. Look at the formula. HP does it right!
This is an interesting topic. In my programs, I usually compute the
standard deviation by the "standard" method, computing the sum of squares
and the sum in one loop, then
  (n-1)*VAR = Sum(x^2) - (Sum(x))^2/n
and I had never thought about the problems that occur when the numbers are
like those in Joe's example.
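A quick illustration (in Python with ordinary double precision, purely my
own example) of why that one-pass formula collapses on data like Joe's
example:

  data = [666666666123.0, 666666666246.0, 666666666369.0]
  n = len(data)
  sum_x = sum(data)
  sum_x2 = sum(x * x for x in data)                # needs ~24 digits to be exact
  var_naive = (sum_x2 - sum_x ** 2 / n) / (n - 1)  # textbook "computational" form
  print(var_naive)    # garbage; the true value is 123^2 = 15129

The squares carry about 24 significant digits, so even IEEE doubles (let
alone a 12-digit calculator) have already rounded away everything the
final subtraction was supposed to reveal.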
A problem with computing the standard deviation as on page 3-301 of the
AUR is that you have to loop through the array twice: first to compute the
mean, then to compute the sum of squared deviations.
An alternative method, which I have not tested much and cannot guarantee,
is the following:
Choose a constant c that is in the same range as your data; for
convenience, let c=x[1] (the first data point).
In one loop, compute
Sum(i=2,n,(x[i]-c)^2) and
Sum(i=1,n,x[i])
(A pseudo-code might look like
   c=x[1]; sum=c; sumsq=0
   FOR i=2 TO n DO
     sum=sum+x[i]
     sumsq=sumsq+(x[i]-c)^2
   LOOP )
Then the desired Sum(x-xmean)^2 is equal to
  sumsq - n*(sum/n - c)^2
because
  Sum(x-xmean)^2 = Sum(x^2) - 2*xmean*Sum(x) + n*xmean^2
                 = <above> + n*c^2 - n*c^2 + 2*c*n*xmean - 2*c*n*xmean
                   (adding and subtracting the same quantities)
                 = Sum((x-c)^2) - n*(xmean-c)^2
With the above method, I could compute the correct stdev of
666666666123
666666666246
666666666369
in a single userRPL-loop.
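For reference, the same shifted-origin idea in rough Python (only a
sketch; the names and structure are mine):

  def shifted_sdev(x):
      c = x[0]                  # provisional origin: the first data point
      total = c
      sumsq = 0.0
      for xi in x[1:]:
          total += xi
          sumsq += (xi - c) ** 2
      n = len(x)
      ssd = sumsq - n * (total / n - c) ** 2    # = Sum((x - xmean)^2)
      return (ssd / (n - 1)) ** 0.5

  print(shifted_sdev([666666666123.0, 666666666246.0, 666666666369.0]))  # 123.0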
The method will run into problems if the first data point is very
different from the others. If you have a very large data set, and it's
expensive to loop through it, you might run through a small fraction of
the data first to compute a suitable c, then execute only one loop to
compute the standard deviation.
Any comments? I am sure someone can find an excellent example where this
method doesn't work very well.
Christian
--
Christian Meland
Research Scientist, PFI
Oslo, Norway
Phone +47 22566105 ext. 267, at home +47 22221067
] It's risky to disagree with an expert, but here goes...
I'm by no means an expert, especially regarding statistics and numerical
methods generally, so I have no desire to disagree or lock horns (sorry!)
with anybody, but this post was most interesting and it caused me to carry
out the following brief investigation:
] (2) Inaccuracies in the statistics functions of the EARLY models were
] discussed BY HP THEMSELVES in the Hewlett-Packard Journal BEFORE Kahan
] & Parlett's article (cf. "The New Accuracy" by Dennis W. Harms,
] Hewlett-Packard Journal, November 1976, pp. 16-17). Harms pointed out
] that data with many leading digits that are the same are very
] difficult to handle, for example:
]
] 6666666123 <-- seven leading 6's
] 6666666246
] 6666666369
]
] No HP calculator at the time could find the correct standard deviation
] of these data (123)... nor could any other brand.
I tried this data on the following machines with the results given:
HP48G          123     Correct
HP42S          12903   !!!
HP32SII        12903   !!!
TI-85          1291    !!
TI-68          123     Correct
Casio fx-992s  0       !
The TI-68, by the way, is a small, cheap machine that isn't really
programmable. These results are surprising to me, and a bit disturbing;
I'd like to understand better how these errors arise.
] (6) When the HP48 was introduced on March 6, 1990, this whole
] discussion of accuracy in statistics became moot. The HP48 does it
] right. It even gets the right standard deviation of this horrific
] data set:
]
] 666666666123 <-- nine leading 6's
] 666666666246
] 666666666369
Here are the results for this data set:
HP48G          123     Correct
HP42S          Error   !!!
HP32SII        Error   !!!
TI-85          129099  !!
TI-68          153     Nearly correct
Casio fx-992s  0       !
] In other words, HP wasn't lying on page 3-301 in the AUR. Go ahead;
] look it up. Look at the formula. HP does it right!
Well, that's what I teach my engineering students to use before we get down
to using whatever machines they have. I liken it to the RMS of the
individual deviations from the mean, which they seem to understand.
Anyway, what a pity the HP42S and HP32SII get it wrong too.
Dick
--
=============================================================================
Dick Smith Acorn Risc PC di...@risctex.demon.co.uk
=============================================================================
Dick Smith <di...@risctex.demon.co.uk> wrote in article
<19970711....@risctex.demon.co.uk>...
> In message <8685660...@dejanews.com> Joseph Horn wrote:
<snipped>
Excel 97 runs into problems already with five 6's, and serious problems
with seven. Minitab handles it right.
--
Bob Wheeler, ECHIP, Inc.
(Reply to bwhe...@echip.com)
Thanks, Joe Horn, for the thorough and fascinating research
into statistical calculations and algorithms,
revealing even more about the role played by Prof. William Kahan
in the development of HP Calculators and algorithms.
Kahan's HP Journal articles circa 1980 also explain the Solve
and Integrate applications he later helped create for the HP34C
and HP15C, which we continue to enjoy in the current HP48.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> (5) HP did not adopt the "improved" [Std Dev] algorithm for four reasons:
> (a) it only improved results in some cases, but it still failed in others
> (see above); (b) the "older" algorithm allowed the Sigma+ key to be
> used for vector arithmetic, a handy feature that users had come to
> expect in HP calculators; (c) many HP calculator users are college
> students, and statistics is still often taught as a process in which
> we write down separate columns for sum-of-x, sum-of-x-squared, and so
> on, so they expected the HP calculator to make those values available;
> and (d) HP knew that there was a way better than either of those
> algorithms, namely: keeping all the data (instead of making summations
> and throwing the actual data away), and then using the formal
> mathematical definitions of standard deviation etc.
Well, if they had wanted to employ about two more "fixed registers,"
they could have offered all of this combined (and in the HP34C/15C era,
a good number of registers were hidden from the user; they could have
shared some between statistics and the solver, say), but anyway:
Before the advent of calculators (or even computer programs)
having a large enough memory to even consider the retention
of every individual data value, the only algorithms which could
possibly have been used are those which only keep a fixed,
limited number of "running totals" of some kind.
A common standard method which retains up to six totals
(N, SumX, SumX^2, SumY, SumY^2, SumX*Y) has been used for ages
by HP in every calculator which has a fixed limited memory;
every other calculator manufacturer (Casio, TI, Sharp, etc.)
seems also to have done about the same (Casio even provides
for entering weighting factors, which HP never has); variations
in ultimate accuracy between brands do occur, but generally
because of the number of digits used internally, and of course
HP's unique attention to maximizing fundamental arithmetic accuracy.
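As a point of reference, here is a rough Python sketch of those six
running totals and the textbook formula built on them (the names are
illustrative only, not HP's internals):

  class SigmaRegisters:
      """The six accumulators a classic Sigma+ key maintains."""
      def __init__(self):
          self.n = self.sx = self.sx2 = 0.0
          self.sy = self.sy2 = self.sxy = 0.0
      def sigma_plus(self, x, y=0.0):       # one (x, y) pair per keypress
          self.n += 1
          self.sx += x
          self.sx2 += x * x
          self.sy += y
          self.sy2 += y * y
          self.sxy += x * y
      def sdev_x(self):                     # the textbook one-pass formula
          return ((self.sx2 - self.sx ** 2 / self.n) / (self.n - 1)) ** 0.5

Feeding data like Joe's example through sdev_x runs straight into the
cancellation problem discussed above, which is exactly the weakness the
Kahan/Parlett style of updating was meant to repair.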
If we compare this method to what you now identify as
Kahan/Parlett's algorithm, which retains a different fixed set
of running totals, evidently Kahan/Parlett significantly improves
the accuracy of the end result in various "tougher" cases, and
we haven't seen any example yet of where it is not at least as good.
If executed using "Long Real" (15-digit) internal values,
rather than in User-RPL, it is bound to further improve in accuracy,
although still subject to some ultimate limitations when
"really tough" (e.g very closely grouped) data is presented to it.
Of course, when we compare even a better "one pass, running totals"
method to what can be done if we keep every data value and make
multiple passes through the data, the latter easily wins out.
Actually, I had not even known that the HP48 did use a two-pass
method until Joe Horn just posted results which confirm that it does,
and I'm glad to have been given this insight.
Pitting the "one-pass" and "two-pass" methods against each other
is, however, an "apples vs. oranges" comparison, isn't it?
BTW, there will always be ways to trip up the calculator by
presenting it with hard-to-digest data:
Consider the three values 1E15, -1E15, and 3; if we enter these
into the HP48 and then request the total and mean,
we get the respective correct answers 3 and 1.
However, if we merely change the order of entry to 3, 1E15, -1E15,
the total and mean are now each reported to be *zero*; in this case,
saving the individual data values has not helped a bit, because the
HP48 still computes a total by adding up the values in the sequence
in which they were entered, causing loss of significance in some cases,
obtaining the exact same answers which any older calculator would get.
The standard deviation of the above values remains correct (1E15),
but as we see, this does not protect us from wrong answers in the case
of some of the other group statistics. It would be entirely possible
to fix this as well, by grouping the individual saved values
according to magnitude before summing them, but at some point, the
software designer makes choices as to which algorithms to use, and
how much programming effort to expend in their implementation;
as a result, we now have an HP48 statistics application which is
better at standard deviations, but still no better at sums and means.
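One standard remedy for that particular failure, shown here purely as an
illustration (it is emphatically not what the HP48 does), is compensated
summation in the style of Kahan and Neumaier; run in 12-significant-digit
decimal arithmetic to mimic the calculator, it recovers the lost 3
regardless of the order of entry:

  from decimal import Decimal, getcontext
  getcontext().prec = 12           # HP48 reals carry 12 mantissa digits

  def naive_sum(values):
      total = Decimal(0)
      for x in values:
          total += x               # plain left-to-right addition
      return total

  def compensated_sum(values):
      total = Decimal(0)
      comp = Decimal(0)            # running compensation for lost low-order digits
      for x in values:
          t = total + x
          if abs(total) >= abs(x):
              comp += (total - t) + x      # low-order part of x was lost
          else:
              comp += (x - t) + total      # low-order part of the old total was lost
          total = t
      return total + comp

  data = [Decimal(3), Decimal("1E15"), Decimal("-1E15")]
  print(naive_sum(data))           # zero: the 3 is absorbed and never returns
  print(compensated_sum(data))     # 3, independent of entry order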
Another important consideration concerns the practicality of actually
storing every individual data point, which is the only way in which
the HP48 is currently prepared to deal with statistics, having
completely abandoned the "running totals" approach.
The limitations of this approach surface when we consider the need
to compute "weighted" statistics (where each pont occurs N times,
and where N may or may not be an integer),
which are not built into the HP48.
Interestingly, Joe Horn has presented on Goodies Disks #7 and #8
each of these two fundamentally different approaches; his original
WEIGHT/SSIG program (GD#7) for Std Dev requires that each data point
be individually entered as many times as its weighting factor
(necessarily limiting the weights to integers), allowing the HP48
to use its accurate internal Standard Deviation calculation;
however, Joe himself later points out on GD#8 (WEIGHT2.DOC) that
this is impractical if the weights/repetition factors are extremely
large (and of course also if they are not integers), and he then
uses the traditional older (and less accurate) method to compute
the standard deviation from values multiplied by weights.
When the Kahan/Parlett method (which I learned about from Wheeler's post)
was substituted for the WEIGHT2 method (the result was posted as WEIGHT3),
sure enough -- the accuracy was improved over WEIGHT2, although it
of course is not as good as you can get with the original WEIGHT
(if the weights are all integers, and if you have sufficient memory).
WEIGHT3 added just one very small additional operational refinement,
which is the ability to make your own choice of "X-col" and "Y-col" from
the Statistics Matrix, as you can with other built-in HP48 functions.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
So, it may be that we have merely been arguing over "apples and oranges";
the Kahan/Parlett algorithm (perhaps later re-discovered in other literature
which Wheeler happens to know) does truly improve upon the "running totals"
method which HP (along with Casio and everyone else) did in fact always
use before; the HP48 standard deviation has meanwhile been improved
by substituting a "two pass" method which was not available in earlier
calculators because of lack of memory, but even the HP48 Total and Mean
can still be fooled into giving wrong answers, if you try hard.
For weighted statistics, the "two-pass" internal method is available,
provided you have enough memory to store every individual instance of
a weighted data point (and that all the weights are integers),
otherwise there is no choice but to use a "running totals" type
of algorithm, and in such a case, the Kahan/Parlett algorithm
is indeed better than the standard old "textbook" method,
exactly as Wheeler said.
As is often the case when we try to fit method to problem, there simply
is no "one size fits all" approach which is optimum for every purpose.
Wheeler's comments about Linear Regression algorithms and Random Number
generation have not been explored further, but who knows, there may
be interesting insights to be gained by any further information that
he or anyone else might care to post on those topics, leading to
our ever deepening appreciation of the art of finite numerical methods,
which often strangely contradicts the perfect continuous theoretical
results we classically expect, much as discrete Quantum Mechanics
contradicts our everyday expectations from classical physics.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
To avoid long-winded threads like this in the future,
how about we all agree not to say "My dog is better than your dog" :)
North American Public TV audience:
Check out "The Red Green Show" <http://www.pbs.org> <http://www.redgreen.com>
(A superior product, produced in Canada)
-----------------------------------------------------------
With best wishes from: John H Meyers <jhme...@mum.edu>
I detect a slight imprecision of expression in my previous post in this
thread, where I seem to have said that for weighted statistics in general
(including the possibility of very large and/or non-integer weights),
there is no alternative to using a "running totals" method.
This would be true if we limited ourselves to relying upon the
built-in statistics functions to deliver the ultimate answers,
after making multiple entries for repeated data points;
however, nothing prevents us from using the "Y-col" values in the
Statistics Matrix as weights for the "X-col" values, just as in WEIGHT3,
but re-writing our own program (preferably in SysRPL to increase accuracy)
to implement the "two-pass" method for Standard Deviations.
Actually, it would be nice to have a complete program package which
included all of the "1-var" statistics functions, but for weighted
data (using "Y-col" as weights), and also using the best algorithm
for each computation. Joe, did I hear you say you were working on it? :)
> I detect a slight imprecision of expression in my previous post in
> this thread, where I seem to have said that for weighted statistics
> in general (including the possibility of very large and/or non-integer
> weights), there is no alternative to using a "running totals" method.
>
> This would be true if we limited ourselves to relying upon the
> built-in statistics functions to deliver the ultimate answers,
> after making multiple entries for repeated data points;
> however, nothing prevents us from using the "Y-col" values in the
> Statistics Matrix as weights for the "X-col" values, just as in WEIGHT3,
> but re-writing our own program (preferably in SysRPL to increase accuracy)
> to implement the "two-pass" method for Standard Deviations.
The problem with the two-pass method is that it involves squares,
which for large enough values will overflow any register. The updating
method involves only linear increments, and must therefore retain
more accuracy than the two-pass method.
You should implement the updating method with the critical values
retained in the HP48's registers. Writing to ordinary memory
is where the loss of precision occurs.
>
>
> Actually, it would be nice to have a complete program package which
> included all of the "1-var" statistics functions, but for weighted
> data (using "Y-col" as weights), and also using the best algorithm
> for each computation. Joe, did I hear you say you were working on it? :)
>
> -----------------------------------------------------------
> With best wishes from: John H Meyers <jhme...@mum.edu>
--
Bob Wheeler --- (Reply to bwhe...@echip.com)
ECHIP, Inc
On <10.07.97> in article <8685660...@dejanews.com>,
joe...@mail.liberty.com (joehorn) wrote:
J>In other words, HP wasn't lying on page 3-301 in the AUR. Go ahead;
J>look it up.
Well, I did, and I was a bit surprised to find that the HP48 calculates
(666666666123 + 666666666246 + 666666666369)/3 = 666666666247,
per the formula on page 3-186. So one isn't able to compute the mean
and standard deviation by (HP48-)hand... If one uses the formula on
page 3-301, one obtains the number 123.00609741.
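Replaying the arithmetic at 12 significant digits (for instance with
Python's decimal module; this is just an illustration, not HP's code path)
reproduces both numbers:

  from decimal import Decimal, getcontext
  getcontext().prec = 12       # mimic the HP48's 12-digit arithmetic
  data = [Decimal(666666666123), Decimal(666666666246), Decimal(666666666369)]
  s = data[0] + data[1] + data[2]
  m = s / 3
  print(s, m)    # 1.99999999874E+12 and 666666666247: the mean is off by one
  ssd = sum(((x - m) ** 2 for x in data), Decimal(0))
  print((ssd / 2).sqrt())      # 123.006097410, the hand-computed value above

The 13-digit sum is rounded before the division, so the 12-digit mean is
already one unit too high, and a standard deviation built on it inherits
the error.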
J>John Meyers warned me in email that this inaccuracy may be due to the
J>fact that it was written in User RPL, whereas HP's SDEV is written to
J>use higher accuracy internally. No, that's not why.
OK, you've mentioned another kind of inaccuracy: but what's the
reason for that inaccuracy?
J>not the one that's shown in text books and reference books as "for
J>computational purposes"
It might be a bit off topic, but the property of the variance being
the difference between the mean of the squared random variable and
the square of its mean reveals something rather more conceptual than
merely computational. That this difference equals the variance is
astonishing to common sense.
Michael
--
-= Michael Hoppe <michae...@k2.maus.de>, <mho...@hightek.com> =--
-= Key fingerprint = 74 FD 0A E3 8B 2A 79 82 25 D0 AD 2B 75 6A AE 63
-= PGP public key available on request. =---------------------------