
calculating Cronbach's alpha (reliability)


sondra...@my-deja.com

Jan 16, 2001, 9:48:18 AM
Hello, I am trying to calculate Cronbach's alpha as an index of the
reliability of a measure. I have done this often with other measures, but am
having trouble getting SPSS for Windows to do it for this particular measure
-- Peer Ratings of the Teacher-Child Relationship. First, I'll briefly
describe the instrument and then I'll mention what I have tried and how it
has failed.

The instrument asks children to rate on a three-point scale how well each of
their classmates "gets along with" the classroom teacher. They also rate
themselves -- and trying to ignore the self-ratings is causing the problem.
The data is entered as if the raters were listed across (i.e., horizontally)
and the ratees were listed down (i.e., vertically). So the data for an
imaginary classroom of five students might look like this (without the space
after the identification numbers):

001 32323
002 12311
003 11323
004 33333
005 11111

Here, participants' identification numbers are entered first (001, 002, 003,
004, 005), followed by the ratings that they received from their classmates
and themselves. So, for example, 001 gave himself a "3" and received the
following ratings from his 4 classmates: 2,3,2,3. Note that the ratings that
#001 GAVE to others would be read vertically (i.e., 31131), and that his
self-rating would be in the first column (a "3"). Likewise, 002's
self-rating would be in the second column (a "2"), and so on along the
diagonal.

Although it may be a bit unconventional to think of using Cronbach's alpha in
this way, I'm interested in seeing how much consensus there is among raters
in a classroom. I do NOT want to include self-ratings in this analysis.

I have some missing data, which I have entered as "9" and defined as such in
the syntax file. I have tried to give the self-ratings various values (e.g.,
9, 6, 2) to act as "placeholders" in the data file, but do not want them to
be recognized in the analysis. However: (a) if I specify "missing =
include," SPSS correctly includes those cases with missing data, and
calculates reliability for all subjects. The problem is that it includes the
placeholder value in its calculation of the alpha, even when I go back and
also define the placeholder value as "missing," too; (b) if I specify
"missing = exclude," SPSS correctly eliminates those cases with missing data,
and calculates reliability for only subjects without missing data. But
again, the problem is that it includes the placeholder value in its
calculation of the alpha. And obviously, when I define the placeholder value
as "missing," too, it throws out ALL of the cases, because every child has a
placeholder (i.e., her self-rating).

So, how do I get SPSS to calculate an alpha for every child, even when every
child has one item that I want to exclude from the analysis? As you can see
from the imaginary data above, the item that I want to exclude DIFFERS for
each child because the self rating appears in a different column for each
child (and thus is not labelled as a self-rating in the systems or ".sav"
file).

I hope this makes sense!!! THANK YOU to anyone who reads through all of this
and can give me some advice. I appreciate it a great deal. Sincerely, Sondra
Birch

Sent via Deja.com
http://www.deja.com/

Neil W. Henry

Jan 16, 2001, 11:14:04 AM
I think you are out of luck as far as tricking SPSS into doing the calculation
for you.
What you can do is use a formula for alpha based on variances to hand calculate
the alpha, after calculating the necessary variances leaving out the missing
values (including the self-ratings).

Paul Spector (Sage Monograph, p.32) says that alpha equals:
(k/(k-1)) (1 - SUM Vi / VT)
where k is the number of items, Vi is the estimated variance of item i, and VT is
the estimated variance of the sum of the items.
In your example, k = 5 (and n = 5 of course). With the self-ratings changed to
missing, you get each Vi calculated based on 4 observations, and VT calculated
based on 5 items. Easy enough to get SPSS to make these calculations, then hand
calculate alpha.
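Neil's recipe can be checked numerically. Below is a minimal sketch (in Python with NumPy rather than SPSS, purely as an illustration) that applies the variance formula to the five-student example from the original post, with the diagonal self-ratings set to missing:

```python
import numpy as np

# The imaginary classroom from the post: row i holds the ratings that
# child i RECEIVED; column j holds the ratings that child j GAVE.
# The self-ratings sit on the diagonal.
ratings = np.array([
    [3, 2, 3, 2, 3],
    [1, 2, 3, 1, 1],
    [1, 1, 3, 2, 3],
    [3, 3, 3, 3, 3],
    [1, 1, 1, 1, 1],
], dtype=float)

masked = ratings.copy()
np.fill_diagonal(masked, np.nan)               # exclude self-ratings

k = masked.shape[1]                            # number of raters ("items")
item_vars = np.nanvar(masked, axis=0, ddof=1)  # each Vi from 4 scores
total_var = np.var(np.nansum(masked, axis=1), ddof=1)  # VT of the row sums

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 4))                         # 0.7292 for this example
```

As Neil notes, each Vi here is estimated from only four observations and each row sum spans only four raters, so this is an approximation built from his formula rather than a textbook alpha.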

You probably made an inadvertent mistake when you wrote: "So, how do I get SPSS
to calculate an alpha for every child, even when every child has one item that I
want to exclude from the analysis." There will only be ONE alpha, not one for
every child. With the dataset you describe it will presumably represent the
degree of consistency of the raters.

I think it would be wiser to develop a model of this rating behavior rather than
to try and make a routine application of the alpha, however. It seems plausible
to me that a child's self-rating would be informative when evaluating the other
children.

--
*************************************************
`o^o' * Neil W. Henry (nhe...@vcu.edu) *
-<:>- * Virginia Commonwealth University *
_/ \_ * Richmond VA 23284-2014 *
*(804)828-1301 x124 (mathematical sciences, 2037c Oliver) *
*FAX: 828-8785 http://www.people.vcu.edu/~nhenry *
*************************************************


sondra...@my-deja.com

Jan 16, 2001, 11:43:32 AM
Thank you for your response. Yes, I did inadvertently refer to
calculating "an alpha for every child" when I posted the original
question. I meant that I want to include all of the children in
the analysis, which is intended to reflect the degree of
consistency among the raters.
I appreciate your prompt response and advice. Thank you!
Sincerely,
Sondra Birch

Rich Ulrich

Jan 16, 2001, 4:36:38 PM
- Neil shows how to approximate the answer -

On Tue, 16 Jan 2001 11:14:04 -0500, "Neil W. Henry" <nhe...@vcu.edu>
wrote:

> I think you are out of luck as far as tricking SPSS into doing the calculation
> for you.
> What you can do is use a formula for alpha based on variances to hand calculate
> the alpha, after calculating the necessary variances leaving out the missing
> values (including the self-ratings).
>
> Paul Spector (Sage Monograph, p.32) says that alpha equals:
> (k/(k-1)) (1 - SUM Vi / VT)
> where k is the number of items, Vi is the estimated variance of item i, and VT is
> the estimated variance of the sum of the items.
> In your example, k = 5 (and n = 5 of course). With the self-ratings changed to
> missing, you get each Vi calculated based on 4 observations, and VT calculated
> based on 5 items. Easy enough to get SPSS to make these calculations, then hand
> calculate alpha.
>

So far as I know, there is no name for this statistic, and you will
have to call it "an approximation to alpha" based on estimating
the variance terms. If you are going to call it Cronbach's alpha, it
can't be your own variation on the subject.

I did a Google search on < "Cronbach's alpha" computation > .
I was surprised to find a description of a STATA routine that offers
alpha using "pairwise deletion when missing" - as one option for
computing the variance-covariance matrix for alpha. This online
documentation does not warn that this result should not be called
alpha. -- I don't know what STATA offers in its written documentation
or when you run the program itself, but that seems to me to be too
casual for amateurs.
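For reference, the routine Rich describes is built around the covariance-matrix form of alpha: alpha = (k/(k-1)) * (1 - trace(C)/sum(C)), where C is the item variance-covariance matrix; under pairwise deletion each entry of C is estimated from whatever complete pairs exist, which is exactly the shortcut he is warning about. With complete data it is the standard alpha. A sketch on the complete 5x5 example (Python/NumPy assumed, for illustration only):

```python
import numpy as np

# Complete 5x5 example, self-ratings left in, so C has no missing entries
# and this IS Cronbach's alpha rather than a pairwise approximation.
ratings = np.array([
    [3, 2, 3, 2, 3],
    [1, 2, 3, 1, 1],
    [1, 1, 3, 2, 3],
    [3, 3, 3, 3, 3],
    [1, 1, 1, 1, 1],
], dtype=float)

C = np.cov(ratings, rowvar=False)   # covariance matrix of the raters ("items")
k = C.shape[0]

# trace(C) equals SUM Vi and C.sum() equals VT, so this is the same
# quantity as the variance formula, organized around the matrix C.
alpha = (k / (k - 1)) * (1 - np.trace(C) / C.sum())
print(round(alpha, 4))              # 0.8838 on the complete data
```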

As to the question by Sondra Birch --
[ ...]


> > The instrument asks children to rate on a three-point scale how well each of
> > their classmates "gets along with" the classroom teacher. They also rate
> > themselves -- and trying to ignore the self-ratings is causing the problem.
> > The data is entered as if the raters were listed across (i.e., horizontally)
> > and the ratees were listed down (i.e., vertically). So the data for an
> > imaginary classroom of five students might look like this (without the space
> > after the identification numbers):
> >
> > 001 32323
> > 002 12311
> > 003 11323
> > 004 33333
> > 005 11111
> >
> > Here, participants' identification numbers are entered first (001, 002, 003,
> > 004, 005), followed by the ratings that they received from their classmates
> > and themselves. So, for example, 001 gave himself a "3" and received the

[ ... ]

I can't see any purpose or use for this unusual arrangement of having
one line per Ratee. If the data have been entered that way, they can be
re-arranged with SPSS's FLIP command.

On the other hand, one convenient way to get the variances entails
rewriting the data so that each score (above) is on its own line, indexed
by Rater and Ratee. Then you run the two-way ANOVA of

VarX by Rater(#r) Ratee(#r) .

The F-test between Ratees is what generates the alpha.
If your report calls for comparing several local VarX's, you
can compare those Fs directly, without estimating an alpha.
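The ANOVA route above can be sketched as well. With complete (balanced) data, Hoyt's classic result gives alpha = 1 - 1/F, where F = MS(Ratee)/MS(residual) from the two-way Rater x Ratee ANOVA. The sketch below (Python/NumPy, as an assumed stand-in; the SPSS ANOVA table would supply the same mean squares) keeps the diagonal in, since deleting it makes the design unbalanced and the simple relation only approximate:

```python
import numpy as np

# Complete 5x5 example from the original post, self-ratings included.
ratings = np.array([
    [3, 2, 3, 2, 3],
    [1, 2, 3, 1, 1],
    [1, 1, 3, 2, 3],
    [3, 3, 3, 3, 3],
    [1, 1, 1, 1, 1],
], dtype=float)

n, k = ratings.shape
gm = ratings.mean()
ss_ratee = k * ((ratings.mean(axis=1) - gm) ** 2).sum()  # between-Ratee SS
ss_rater = n * ((ratings.mean(axis=0) - gm) ** 2).sum()  # between-Rater SS
ss_resid = ((ratings - gm) ** 2).sum() - ss_ratee - ss_rater

ms_ratee = ss_ratee / (n - 1)
ms_resid = ss_resid / ((n - 1) * (k - 1))
F = ms_ratee / ms_resid                                  # F between Ratees

alpha = 1 - 1 / F                                        # Hoyt's form of alpha
print(round(alpha, 4))                                   # 0.8838
```

The 0.8838 here matches Spector's variance formula applied to the same complete matrix, which is Hoyt's point: the ANOVA route and the variance route are algebraically the same alpha.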

--
Rich Ulrich, wpi...@pitt.edu
http://www.pitt.edu/~wpilib/index.html
