I am attempting to produce subscales of a self-report measure that was
administered to the same set of children on two occasions two years
apart.
Is there a way in SPSS of doing factor analysis on data that was
collected more than once on the same individuals?
One could easily do two separate factor analyses for each of the
occasions at which the scale was administered, but it seems that there
should be some way of capitalizing on the repeated measures nature of
the data to better estimate the factor structure.
Does this make sense? Any help would be GREATLY appreciated.
Thanks in advance,
Richard Thompson, Ph.D.
> Hi all
>
> I am attempting to produce subscales of a self-report measure that was
> administered to the same set of children on two occasions two years
> apart.
>
> Is there a way in SPSS of doing factor analysis on data that was
> collected more than once on the same individuals?
There is not a way to do what is called "confirmatory"
factor analysis, if that is what you were wondering.
One proper way to *pool* the data across the time periods
takes a couple of steps, which I have done before --
Split File on Period, use Frequencies to z-score the variables
within each period, then run Factor on the combined data.
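A minimal sketch of those steps in syntax -- assuming the file is stacked with
one record per child per occasion, a PERIOD variable, and hypothetical item
names ITEM1 to ITEM20, and using DESCRIPTIVES with /SAVE as one convenient way
to get the within-period z-scores:

SORT CASES BY period.
SPLIT FILE BY period.
* /SAVE writes z-scored copies (ZITEM1, ZITEM2, ...) within each period.
DESCRIPTIVES VARIABLES=item1 TO item20 /SAVE.
SPLIT FILE OFF.
* Factor the pooled, within-period standardized items.
FACTOR
  /VARIABLES zitem1 TO zitem20
  /EXTRACTION PAF
  /ROTATION PROMAX
  /PRINT INITIAL EXTRACTION ROTATION.

PAF and Promax are only placeholders here; use whatever extraction and
rotation you would normally choose.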
A different sort of Factoring would use an aggregated record
for each person -- Sort by ID, Aggregate to get means; then
Factor. I think you would want to have a special purpose,
in order to justify doing this one.
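For the aggregated-record variant, a rough sketch along the same lines
(ID and the item names are again hypothetical, and only three items are shown):

SORT CASES BY id.
* One record per child, holding the mean of each item across the two occasions.
AGGREGATE OUTFILE=*
  /BREAK=id
  /m_item1 m_item2 m_item3 = MEAN(item1 item2 item3).
FACTOR /VARIABLES m_item1 m_item2 m_item3.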
>
> One could easily do two separate factor analyses for each of the
> occasions at which the scale was administered, but it seems that there
> should be some way of capitalizing on the repeated measures nature of
> the data to better estimate the factor structure.
>
> Does this make sense? Any help would be GREATLY appreciated.
Hope this helps.
--
Rich Ulrich, wpi...@pitt.edu
http://www.pitt.edu/~wpilib/index.html
I wonder if you want something like multi-mode factor analysis
(e.g., three-mode analysis or PARAFAC), as developed by Harshman at the
University of Western Ontario and others. Perhaps the following site would be useful.
http://three-mode.leidenuniv.nl/
Best wishes
Jim
============================================================================
James M. Clark (204) 786-9757
Department of Psychology (204) 774-4134 Fax
University of Winnipeg 4L05D
Winnipeg, Manitoba R3B 2E9 cl...@uwinnipeg.ca
CANADA http://www.uwinnipeg.ca/~clark
============================================================================
Richard, I know you are the expert on SPSS, but I have done
"confirmatory" FA in SPSS by standardizing the variables, computing the
subscale scores, and using a script to regress each standardized item
through the origin on each subscale score. This results in nice betas,
along with confidence intervals, for each item and each subscale (a
structure matrix). Poor man's confirmatory FA in SPSS. I forget the
reference at the moment (maybe Lord?): multiple correlated group component
factor analysis. You have to dig through the output to collect all the beta
weights and check the CIs. I create a new table with the items arranged
so that each subscale's items are presented in sequence and the subscales
are the columns. Surround each subscale's betas with a border and highlight
significant betas in bold (use the CIs for that). You can relax the CIs a little bit.
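A rough sketch of one such regression, for a hypothetical standardized item
ITEM1 and subscale score SUB1; you would repeat it for each item-by-subscale
pair:

* Standardize the item and the subscale score.
DESCRIPTIVES VARIABLES=item1 sub1 /SAVE.
* Regress the standardized item on the standardized subscale, through the origin.
REGRESSION
  /STATISTICS COEFF CI(95)
  /ORIGIN
  /DEPENDENT zitem1
  /METHOD=ENTER zsub1.

With standardized variables and no intercept, each beta works out to the
item-subscale correlation, which is why the collected betas read like a
structure matrix.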
Jay Lee
In article <4mb431p492to8psn8...@4ax.com>, Don Schopflocher, PhD (Psychology),
Biostatistician, Health Surveillance, Alberta Health and Wellness,
<mice_el...@yahoo.com> wrote in message
news:1110578974.0...@g14g2000cwa.googlegroups.com...
> Why not just pretend that the second administration was actually to all new
> subjects? Factoring this data set is 'akin' to having twice as many subjects,
> so it should improve the identification of the factors. The factor scores
> (or scale scores, if you want to proceed by creating sums of high-loading
> items) can then be calculated and compared across the two repetitions within
> subjects. The main danger in this procedure is the possibility that the
> factor structure has changed across occasions, but if that has happened it
> would seem to defeat the notion of creating a stable scoring scheme in the
> first place.
>
Just pooling them works pretty well, until (or unless) you
submit for publication and run into a reviewer who has
set his mind against it as a point of principle.
 - so I learned from a reviewer not to do it
the simple way in the future. (I hope that what I teach as
a reviewer is worth more than that point.)
 - It would be good if SPSS had a pooling option within the
Factor procedure, I think, so that the user could avoid those
extra steps.
There is a nice word: it is 'akin' to having twice as many subjects.
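If the file is in wide form (one row per child, with both occasions' items
side by side), the stacking itself is only a restructure step. A minimal
sketch, with hypothetical variable names ITEM1_T1, ITEM1_T2, and so on:

* Stack the two administrations as separate cases.
VARSTOCASES
  /MAKE item1 FROM item1_t1 item1_t2
  /MAKE item2 FROM item2_t1 item2_t2
  /INDEX=occasion(2)
  /KEEP=id.
* Factor the stacked data; list all the items in practice.
FACTOR /VARIABLES item1 item2.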
> I am attempting to produce subscales of a self-report measure that was
> administered to the same set of children on two occasions two years
> apart.
>
> Is there a way in SPSS of doing factor analysis on data that was
> collected more than once on the same individuals?
>
> One could easily do two separate factor analyses for each of the
> occasions at which the scale was administered, but it seems that there
> should be some way of capitalizing on the repeated measures nature of
> the data to better estimate the factor structure.
What is the exact purpose of the factor analysis? If the analysis will be
used to establish the structure, then I think it is OK to do two separate
factor analyses. If they suggest a similar structure, then you can go
ahead with the subsequent analysis on the scale scores. If you will not do any
significance tests, then I think the fact that the observations are correlated
will not have any adverse effect on the conclusion. If you really need to
quantify the degree of similarity, then Procrustes orthogonal rotation
(essentially an orthogonal rotation technique that maximizes the similarity
between two sets of loadings) can be used and factor congruence coefficients
can be computed. If you only need to describe the similarity and do not need
to test it, then being correlated should not be a big problem.
Certainly, if the similarity is already clear at the first stage, the rotation
may not be necessary. The above approach can be done solely in SPSS.
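For the Procrustes step itself, here is a bare-bones MATRIX sketch with
made-up loading matrices (A from time 1 as the target, B from time 2), the
usual SVD solution for the orthogonal rotation, and Tucker's congruence
coefficient for each factor:

MATRIX.
* Illustrative 4-item, 2-factor loading matrices (made-up numbers).
COMPUTE A = {0.70, 0.10; 0.65, 0.05; 0.15, 0.80; 0.20, 0.75}.
COMPUTE B = {0.60, 0.20; 0.72, 0.12; 0.05, 0.70; 0.25, 0.82}.
* Orthogonal Procrustes: rotate B toward A using the SVD of B'A.
COMPUTE M = T(B)*A.
CALL SVD(M, U, S, V).
COMPUTE ROT = U*T(V).
COMPUTE BROT = B*ROT.
* Tucker's congruence coefficient for each factor (column).
COMPUTE PHI = CSUM(A &* BROT) &/ SQRT(CSUM(A &* A) &* CSUM(BROT &* BROT)).
PRINT BROT /TITLE='Time-2 loadings after Procrustes rotation'.
PRINT PHI /TITLE='Congruence coefficients'.
END MATRIX.

With real data you would, of course, read in the two loading matrices rather
than typing made-up numbers.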
If you really need to do something sophisticated, you need to ask whether
the two correlation matrices are really equal, before pooling them
together. If they are, then you can pool (average) the two matrices into
one single matrix for subsequent factor analysis. In a sense, you can
view the data as two separate studies with correlated data, and apply the
techniques proposed by Becker for pooling the correlation matrices.
Becker, B. J. (2000). Multivariate meta-analysis. In H. E. A. Tinsley & S. D. Brown (Eds.),
Handbook of Applied Multivariate Statistics and Mathematical Modeling (pp. 499-525). CA:
Academic Press.
Becker, B. J. (1992). Using results from replicated studies to estimate linear models. Journal of
Educational Statistics, 17, 341-362.
Becker (1992) used the Pearson correlation, though Becker (2000) gives the
formulae to pool the matrices of Fisher's z's and transform them back to
correlations after pooling. Obviously, this approach cannot be
easily done in SPSS, though in principle it could be done with SPSS's
MATRIX command.
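As a toy illustration of that MATRIX route, the following sketch
Fisher-transforms two made-up 3 x 3 correlation matrices, averages them
element by element, and transforms the average back. Becker's actual
procedure also weights by sample size and allows for the dependence between
the two matrices, which is not shown here:

MATRIX.
* Made-up correlation matrices for the two occasions.
COMPUTE R1 = {1, 0.40, 0.35; 0.40, 1, 0.50; 0.35, 0.50, 1}.
COMPUTE R2 = {1, 0.45, 0.30; 0.45, 1, 0.55; 0.30, 0.55, 1}.
COMPUTE ONES = MAKE(3, 3, 1).
* Zero the diagonal so the Fisher transform stays finite.
COMPUTE A1 = R1 - IDENT(3).
COMPUTE A2 = R2 - IDENT(3).
* Fisher z, element-wise average, then back-transform.
COMPUTE Z1 = 0.5*LN((ONES + A1) &/ (ONES - A1)).
COMPUTE Z2 = 0.5*LN((ONES + A2) &/ (ONES - A2)).
COMPUTE ZP = (Z1 + Z2)/2.
COMPUTE RP = (EXP(2*ZP) - ONES) &/ (EXP(2*ZP) + ONES) + IDENT(3).
PRINT RP /TITLE='Pooled correlation matrix'.
END MATRIX.

The pooled matrix could then be passed to FACTOR via MATRIX DATA and
/MATRIX=IN(COR=*) if you want to factor it directly.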
In short, the procedure to be used is determined by the purpose and role of
the factor analysis in the research, as well as the norms of the
discipline. Actually, confirmatory factor analysis may readily come to mind
for some researchers as a way to analyze the correlated data. AMOS
should be able to fit the model, but I don't know whether SPSS still has
AMOS as a module.
--
Shu Fai CHEUNG
Webpage: http://sfcheung.blogspot.com
(This site is in Chinese, with a few English articles)
Email: Please join "sfcheung" and "alumni.cuhk.net" with "@" .
For spam prevention. Sorry for the inconvenience caused.
Procrustes was already mentioned, and PARAFAC, too.
What I did one time was to find the principal component (PC) of each repeated
measure and use this as a "reference" structure (it is slightly
different from the mean). From that I separated time-related,
item-specific error terms and removed them from the loading matrices.
The remaining variance was then the basis for the next steps of the
factor analysis.
To do all of that without estimating factor scores as an intermediate step,
I took *all* items (the repeated measures included) into one
vector space and found the principal component
of the repeated measures *in* that vector space -- that is, by
rotations of the whole system, but using only the group of
repeated measures as the criterion.
I then got
- item-specific error terms for all items
- group-specific error terms (for all groups of repetitions)
- group-specific principal components/axes
- common factors of the PCs of the item groups
The interesting part was that this was all in one vector space.
However, this process was somewhat experimental, and no significance
tests were available...
I could offer you my program, which makes it easy to deal with such
things, but it should not be too complicated to do this in the
SPSS matrix language (for instance) as well.
Gottfried Helms