Hello,
Typically I have conceived of the margin of error in relative terms (i.e., as a percentage of the true value). For example, imagine a study that attempts to estimate the time it takes to receive a disability placard. Suppose the sample yields a point estimate of 10 days as the mean time to receipt of the placard, and further suppose that the 95% confidence interval is 9-11 days. One could describe the margin of error either as +/- 1 day or as +/- 10% (1/10 = 0.1).
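To make my example concrete, this is the arithmetic I have in mind: the half-width of the 95% CI is the absolute margin of error, and dividing it by the point estimate gives the relative version.

$$
E_{\text{abs}} = \frac{11 - 9}{2} = 1 \text{ day}, \qquad
E_{\text{rel}} = \frac{E_{\text{abs}}}{\bar{x}} = \frac{1}{10} = 0.1 \;(10\%).
$$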
When using the Java Applet to estimate the sample size for the CI of a Mean, do we input the margin of error in terms of the measurement unit (1 day) or in relative terms (0.1)?
Can anyone clarify this for me?
Thank you,
Colin