On Tue, 10 Nov 2015 05:03:36 -0700, RH Draney <dado...@cox.net> wrote:
Statisticians want "equal intervals" between scale points
because the assumption of equal-error-of-residual across
the range of a scale lies behind the statistical tests
that we perform.
That is why it is very desirable to start a model with the
"right" scaling for variables. With ratio variables, it is
possible to take the log or maybe the reciprocal to find
the right scaling.
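As a sketch of what that rescaling does (a hypothetical example, not
anything from the thread): on a ratio variable, equal *ratios* become
equal *intervals* after taking the log, which is exactly the
equal-interval property the tests assume.

```python
import numpy as np

# Hypothetical ratio-scale measurements (strictly positive),
# where each value is double the previous one.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

# Log rescaling: equal ratios in x become equal intervals in log(x).
log_x = np.log2(x)      # [0, 1, 2, 3, 4] -- evenly spaced

# Reciprocal rescaling: sometimes the "right" scale is a rate,
# e.g. converting seconds-per-task into tasks-per-second.
recip_x = 1.0 / x
```

Which transformation is "right" depends on the variable; the log and
the reciprocal are just the two most common candidates for ratio data.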
https://en.wikipedia.org/wiki/Level_of_measurement
points out that this conventional hierarchy (which I use, too)
is not universally praised. To the extent that it is used to suggest
workable transformations for data analysis, the biggest omission
is "bounded at both ends" (implying a symmetric transformation,
like the Logit transformation which is conventionally used for
proportions).
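For concreteness (my own illustration): the logit is log(p/(1-p)),
which maps the bounded interval (0, 1) onto the whole real line and is
symmetric about p = 0.5 -- the symmetry that "bounded at both ends"
calls for.

```python
import math

def logit(p):
    """Logit transform: maps a proportion in (0, 1) to the real line."""
    return math.log(p / (1.0 - p))

# Symmetric about 0.5: logit(0.5 + d) == -logit(0.5 - d)
logit(0.5)   # 0.0
logit(0.9)   # about +2.197
logit(0.1)   # about -2.197
```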
>
>Temperature in anything other than an absolute system (such as Kelvin or
>Rankine) is an interval scale...it is meaningless on such a scale to say
>that one temperature is x times another, but it is still possible to say
>that one temperature *increase* is x times another....r
I find "IQ" or "smart" to be a really tough example for "twice as"
to make sense.
Here is a point of context: If you take "temperature" in the context
of "difference from freezing" or "difference from boiling", then you
have a new scale that is /ratio/ : it does have a non-arbitrary zero
point, just as your parenthesis requires.
In another switch of context: "Dollars" are typically modeled by
economists as equal-interval data; however, you should not say that
"income-measured-in-dollars" shows equal intervals for measuring
social outcomes, since adding $50K to $10K is not at all the same
in social implications as adding $50K to $1M.
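The arithmetic behind that point (illustrative figures only): the two
increments are identical on the raw dollar scale but wildly different
on a log scale, which is the scale that better tracks social
implications.

```python
import math

low  = 10_000      # a low income
high = 1_000_000   # a high income
add  = 50_000      # the same raw-dollar increment for both

# On the raw scale the two increments are identical...
(low + add) - low       # 50000
(high + add) - high     # 50000

# ...but on a log scale they are very different: going from
# $10K to $60K is a 6x change, from $1M to $1.05M only 1.05x.
math.log10((low + add) / low)      # log10(6),    about 0.78
math.log10((high + add) / high)    # log10(1.05), about 0.02
```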
--
Rich Ulrich