Does anyone here know if there is a standard or protocol for assessing the accuracy of calculating devices? What I'm thinking of is a series of problems, for each of the functions that one would typically find on a technical calculator, that could be used to assess the accuracy of the function's algorithm (or at least its implementation). If such a standard or protocol exists, where would I look for it? Thanks in advance for any information...
Message #2 Posted by db (martinez, california) on 22 Apr 2006, 9:55 p.m.,
in response to message #1 by Steve S
The only standard one I know of is Mike Sebastian's "calculator forensics project". It is limited to what was his beginning interest, trig accuracy. A couple of people here have pointed out (perhaps rightly) its flaws, but have not done the work to design an improvement or implement a database for a better one, so.....
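For reference, the forensics test, as I understand it, runs 9 degrees through sin, cos and tan and back out through their inverses, then looks at how far the result drifts from 9. A rough sketch of the same round trip in JavaScript (this is my own restatement, not code from the forensics site; Math works in radians, so the degree handling below is mine):

    // Forensics-style round trip: asin(acos(atan(tan(cos(sin(9)))))) in degrees.
    var DEG = Math.PI / 180;      // degrees-to-radians factor
    var x = 9;
    x = Math.sin(x * DEG);        // sin(9 deg)
    x = Math.cos(x * DEG);        // cos of that value, treated as degrees
    x = Math.tan(x * DEG);        // tan, likewise in degrees
    x = Math.atan(x) / DEG;       // the inverses return degrees again
    x = Math.acos(x) / DEG;
    x = Math.asin(x) / DEG;
    console.log(x);               // ideally 9; the trailing digits show the accumulated error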
Message #3 Posted by Gerson W. Barbosa on 23 Apr 2006, 12:26 p.m.,
in response to message #2 by db (martinez, california)
Gerson & dB - Thanks for your responses. I already know about both of the webpages that you point to. What I was hoping to find was a more formal set of tests, for example something from NIST or the IEEE. Does anyone know of such a thing?
Message #6 Posted by John Limpert on 23 Apr 2006, 4:08 p.m.,
in response to message #1 by Steve S
John - Thanks for your comment. Because the tests on this webpage appear to be in C and Fortran, I'm not sure that they would be of much use for a calculator. The underlying algorithm, however, might be worthwhile if I can decode it. If nothing else turns up, I may try that. Thanks again for this reference...!
Message #8 Posted by Mike (Stgt) on 24 Apr 2006, 4:36 a.m.,
in response to message #1 by Steve S
Here is a link to NIST's software page, which contains data sets that can be used for calibrating algorithms for statistics and sparsely populated matrices. It may help, but it is only part of what I think you are looking for. Regards, John
Message #11 Posted by Steve S on 25 Apr 2006, 8:29 a.m.,
in response to message #10 by John Smitherman
What is the ISCD?
The ISCD is a non-profit medical society dedicated to high quality bone density testing. It is directed by a volunteer board of trustees and managed by a full time staff of professionals. The ISCD offers educational courses and certification in bone densitometry for clinicians and technologists, publishes a scientific journal and newsletter (SCAN), and has an annual meeting.
What is precision assessment?
Precision is reproducibility of a measurement. Precision assessment in the field of bone densitometry is the process whereby the ability of the instrument and the technologist to reproduce similar results, given no real biologic change, is tested. The mathematical result of precision assessment is called the precision error.
How do I do precision assessment?
To achieve statistical power, BMD testing is done on 15 patients 3 times, or 30 patients 2 times. The standard deviation for each patient is calculated, then the root mean square standard deviation for the group is calculated.
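A minimal JavaScript sketch of that root-mean-square calculation (the function name rmsSD and the sample values are mine, purely illustrative; this is not the ISCD Precision Calculator):

    // Root-mean-square SD: average the squared per-patient SDs, then take
    // the square root of that average.
    function rmsSD(patientSDs) {
      var sumOfSquares = patientSDs.reduce(function (sum, sd) {
        return sum + sd * sd;
      }, 0);
      return Math.sqrt(sumOfSquares / patientSDs.length);
    }

    // Per-patient SDs in g/cm2 (only three shown for brevity).
    console.log(rmsSD([0.008, 0.012, 0.010])); // ~0.0101 g/cm2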
Which skeletal sites should be measured?
Precision assessment should be done for any skeletal site you plan on using to monitor patients. The lumbar spine (L1-L4) usually has the highest precision and the most rapid change in response to therapy. The total proximal femur may also be used, but precision is usually lower and response to therapy slower than that in the spine. In certain clinical situations, such as primary hyperparathyroidism, you may want to monitor BMD in the mid forearm.
Is there a recommended timeframe for completing a precision assessment?
The intent of a precision assessment is to determine the precision of the technologist on a particular machine without introducing uncontrolled variables. Therefore, there is no recommended timeframe within which each technologist must complete their collection of duplicate or triplicate subjects, provided their scanning technique is not changing. Some have recommended completing the scans within a 30-day period to minimize the possibility that equipment failure (which can happen gradually over several months before being detected) will influence the data. If an equipment failure and subsequent major repair occurs during the collection of data for a precision assessment, pre-failure data may need to be discarded and then reacquired, depending on the nature of the failure.
How do I express the results?
The ISCD recommends expressing precision error as root mean square standard deviation in absolute terms (g/cm2). It is sometimes expressed as CV or %CV, but this is less desirable due to variation in these values over a range of measured BMD.
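(For reference, with made-up numbers: a CV is just the precision error divided by the mean BMD, so an RMS-SD of 0.010 g/cm2 at a mean BMD of 1.000 g/cm2 corresponds to a %CV of 1.0%, while the same RMS-SD at a mean of 0.800 g/cm2 corresponds to 1.25%. The same absolute error gives different %CV values, which is why the absolute figure is preferred.)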
What is least significant change?
Least significant change, or LSC, is the least amount of BMD change that can be considered statistically significant. The ISCD recommends calculating this for a 95% confidence level, which is done by multiplying the precision error by 2.77.
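(Worked example with an illustrative figure: if your precision error is 0.010 g/cm2, the LSC at the 95% confidence level is 2.77 x 0.010, or approximately 0.028 g/cm2.)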
Sounds pretty complicated. How do I do all of this?
Use the ISCD Precision Calculator. As a service to professionals in the field of bone densitometry, ISCD has developed the ISCD Precision Calculator which is available for download from this Website.
How do I use the results?
Subtract the recent BMD result from the one used for comparison. If the difference is the same or greater than the LSC, then the change is considered to be statistically significant. The clinician must determine whether this is clinically significant. For example, there may be a statistically significant increase in spine BMD, but it could be due to degenerative arthritis or compression fractures rather than a response to therapy.
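(Illustrative example, with made-up numbers: baseline spine BMD 0.950 g/cm2, follow-up 0.982 g/cm2, difference 0.032 g/cm2; with an LSC of 0.028 g/cm2, the change would be reported as statistically significant.)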
Are there legal or ethical issues associated with precision assessment?
Yes. Although the radiation exposure with DXA is very tiny, no patient should be exposed to radiation without the possibility of clinical benefit from the test. Your state may have regulations that apply to the use of any procedure involving radiation. If you are unsure, please consult the appropriate regulatory agency.
Why do my calculations come out different than those of the ISCD Calculator?
Assuming you have made no mathematical errors, there may be slight discrepancies due to rounding differences.
Is patient permission required to conduct a precision assessment study?
Adherence to local radiation safety regulations is necessary. A precision study does require the consent of participating patients. Precision assessment is not research and may potentially benefit patients. Patients should be informed of the merits of precision assessment and retain the right of refusal, but use of a written consent form is not suggested.
In JavaScript, evaluating 0.1 * 0.2 prints the result 0.020000000000000004, while it should just print 0.02 (as your calculator does). As far as I understand, this is due to errors in floating-point multiplication precision.
Does anyone have a good solution so that in such a case I get the correct result 0.02? I know there are functions like toFixed, or rounding would be another possibility, but I'd like to really have the whole number printed without any cutting or rounding. I just wanted to know if one of you has some nice, elegant solution.
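(A minimal snippet reproducing the behaviour described above, assuming the calculation in question was plain JavaScript multiplication:)

    console.log(0.1 * 0.2);            // 0.020000000000000004
    console.log(0.1 * 0.2 === 0.02);   // false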
Note that the first point only applies if you really need specific, precise decimal behaviour. Most people don't need that; they're just irritated that their programs don't work correctly with numbers like 1/10, without realizing that they wouldn't even blink at the same error if it occurred with 1/3.
The recommended approach is to use correction factors (multiply by a suitable power of 10 so that the arithmetic happens between integers). For example, in the case of 0.1 * 0.2, the correction factor is 10, and you are performing the calculation:
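A minimal sketch of that calculation (reconstructed here, since the original snippet is not shown above):

    // Scale each operand to an integer, multiply, then undo the scaling.
    // 0.1 * 10 and 0.2 * 10 both come out exact, so the multiplication
    // itself happens between the integers 1 and 2.
    var result = (0.1 * 10) * (0.2 * 10) / (10 * 10);
    console.log(result); // 0.02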
Are you only performing multiplication? If so, then you can use to your advantage a neat property of decimal arithmetic: NumberOfDecimals(X) + NumberOfDecimals(Y) = ExpectedNumberOfDecimals. That is to say, if we have 0.123 * 0.12, then we know that there will be 5 decimal places, because 0.123 has 3 decimal places and 0.12 has two. Thus, if JavaScript gives us a number like 0.014760000002, we can safely round to the 5th decimal place without fear of losing precision.
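A sketch of that idea (countDecimals and multiplyExact are my own names, not standard functions):

    // Count the digits after the decimal point in a number's string form.
    // (Simple version: does not handle exponential notation like 1e-7.)
    function countDecimals(x) {
      var s = String(x);
      var dot = s.indexOf('.');
      return dot === -1 ? 0 : s.length - dot - 1;
    }

    // Multiply, then round to the number of decimals the exact decimal
    // product must have.
    function multiplyExact(a, b) {
      var decimals = countDecimals(a) + countDecimals(b);
      return Number((a * b).toFixed(decimals));
    }

    console.log(multiplyExact(0.1, 0.2));    // 0.02
    console.log(multiplyExact(0.123, 0.12)); // 0.01476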
Numerical errors accumulate with every further operation, and if you don't cut them off early they are just going to grow. Numerical libraries which present results that look clean simply cut off the last 2 digits at every step; numerical co-processors also have a "normal" and a "full" length for the same reason. Cut-offs are cheap for a processor but very expensive for you in a script (multiplying and dividing and using pow(...)). A good math library would provide floor(x, n) to do the cut-off for you.
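A possible shape for such a helper (floorTo is my own name; this is a sketch, not a library function):

    // Cut x off after n decimal places by scaling, flooring and unscaling.
    // Note that flooring can drop the last digit when the scaled value
    // lands just below an integer; rounding is often the safer choice.
    function floorTo(x, n) {
      var factor = Math.pow(10, n);
      return Math.floor(x * factor) / factor;
    }

    console.log(floorTo(0.020000000000000004, 5)); // 0.02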
If you are doing if-s/comparisons and don't want to cut off, then you also need a small constant, usually called eps, which is one decimal place higher than the maximum expected error. Say that your cut-off is the last two decimals - then your eps has a 1 at the 3rd place from the last (3rd least significant), and you can use it to compare whether the result is within eps range of the expected value (0.02 - eps < 0.1*0.2 < 0.02 + eps).
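A small sketch of that comparison (the eps value below is illustrative, chosen well above the expected rounding error):

    var eps = 1e-9;                      // tolerance
    var product = 0.1 * 0.2;             // 0.020000000000000004
    if (product > 0.02 - eps && product < 0.02 + eps) {
      console.log('equal within eps');   // this branch is taken
    }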
Notice that for general-purpose use, this behavior is likely to be acceptable.
The problem arises when comparing those floating-point values to determine an appropriate action.
With the advent of ES6, a new constant Number.EPSILON is defined to determine the acceptable error margin:
So instead of performing the comparison like this:
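(The snippets that originally followed are not shown above; what follows is my reconstruction using the thread's 0.1 * 0.2 example, with nearlyEqual as my own helper name.)

    // Direct comparison - fails because of the stored representation error:
    console.log(0.1 * 0.2 === 0.02);   // false

    // ...you would compare the difference against Number.EPSILON:
    function nearlyEqual(a, b) {
      return Math.abs(a - b) < Number.EPSILON;
    }
    console.log(nearlyEqual(0.1 * 0.2, 0.02)); // true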