Why do you want to use a covariate with a chi-square? Could you be more
precise?
> hi!
> Is there a way to control for covariates when doing a chi-square?
If one of the variables is considered an outcome (or dependent)
variable, you might use logistic regression (binary if DV has 2 levels,
the multinomial variety if DV has more than 2 levels). If neither
variable is considered an outcome, and you are simply looking for
associations, you might try loglinear analysis.
--
Bruce Weaver
bwe...@lakeheadu.ca
www.angelfire.com/wv/bwhomedir
Thanks for asking!
So I ask the same question now that you know more about the situation:
is there a way to control for covariates when doing a chi-square on
posttest dependent variables? Or are there other solutions?
Thanks for the tip!
Serge
> Hi Bruce,
> Suppose we are in a pretest-posttest situation with a control group and
> an experimental group. I want to test the difference in proportions
> between the two groups at posttest, that being the proportion of people
> in each group who said 'yes' to a dichotomous dependent variable as
> opposed to 'no'. But I want to control for the proportions measured at
> pretest (the groups were not similar on the same dichotomous variable).
> This is easily done with ANCOVA when we use means, but when using
> proportions, what can we do to control for a covariate's effect?
>
> So I ask the same question now that you know more about the situation:
> is there a way to control for covariates when doing a chi-square on
> posttest dependent variables? Or are there other solutions?
>
> Thanks for the tip!
>
> Serge
Okay, so one of the variables (posttest) is an outcome variable. Try
logistic regression with DV = POST (0/1), and variables GRP (0/1) and
PRETEST (0/1) in the model. Optionally, you could also include the
GRP x PRETEST interaction term (to see if the Group effect depends on
pre-test category).
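As a rough illustration of the model Bruce describes (this is not from the thread: the data are simulated and the effect sizes are made up), the fit can be sketched in Python with a hand-rolled Newton-Raphson logistic regression; in SPSS one would use the LOGISTIC REGRESSION procedure instead:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
grp = rng.integers(0, 2, n)        # 0 = control, 1 = experimental
pretest = rng.integers(0, 2, n)    # 0/1 response at pretest

# Assumed truth, purely for illustration: being in the experimental
# group and saying 'yes' at pretest both raise the log-odds of POST = 1.
logit = -0.5 + 1.0 * grp + 1.5 * pretest
post = rng.random(n) < 1 / (1 + np.exp(-logit))

# Design matrix: intercept, GRP, PRETEST, and the GRP x PRETEST interaction
X = np.column_stack([np.ones(n), grp, pretest, grp * pretest])
y = post.astype(float)

# Newton-Raphson (IRLS) fit of the logistic model
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))      # fitted probabilities
    W = p * (1 - p)                      # IRLS weights
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

print(dict(zip(["const", "GRP", "PRETEST", "GRPxPRE"], beta.round(2))))
```

The interaction coefficient then answers Bruce's optional question: whether the Group effect differs by pre-test category.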
By the way, the likelihood-ratio test from logistic regression on a k x 2
table is identical to the likelihood-ratio chi-square that you get from
the CROSSTABS procedure. I have an example of this (using a 3 x 2 table)
on my SPSS page:
http://www.angelfire.com/wv/bwhomedir/spss/logistic1.txt
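That identity is easy to check numerically. The sketch below (the counts are made up, not Bruce's example) computes the likelihood-ratio chi-square (G-squared) directly from a 3 x 2 table, and then as a logistic-regression LR test of the saturated model (one fitted proportion per row) against the intercept-only model; the two agree exactly:

```python
import math

# Hypothetical 3x2 table: rows = predictor categories, cols = (yes, no)
table = [(10, 20), (30, 25), (15, 35)]

# G^2 (likelihood-ratio chi-square) computed directly from the table
N = sum(y + n for y, n in table)
Y = sum(y for y, n in table)            # total 'yes' count
g2 = 0.0
for y, n in table:
    row = y + n
    e_yes = row * Y / N                 # expected counts under independence
    e_no = row * (N - Y) / N
    g2 += 2 * (y * math.log(y / e_yes) + n * math.log(n / e_no))

# Same statistic as a logistic-regression LR test: saturated model
# (one fitted proportion per row) vs intercept-only model.
def loglik(pairs, probs):
    return sum(y * math.log(p) + n * math.log(1 - p)
               for (y, n), p in zip(pairs, probs))

p_full = [y / (y + n) for y, n in table]    # row proportions (saturated fit)
p_null = [Y / N] * len(table)               # pooled proportion (null fit)
lr = 2 * (loglik(table, p_full) - loglik(table, p_null))

print(round(g2, 6), round(lr, 6))   # the two statistics agree
```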
Cheers,
Bruce
Thank you Bruce for the answer... but the interpretation of the results
is not yet crystal clear to me. Does a regression really answer my
question? I will try to digest your suggestion (and have a look at your
page) and, if I cannot figure everything out, I will come back to you
next week!
Thanks again,
Best...,
Serge
I can see the problem in another, slightly different context.
If pre and post are strongly related, so that subjects usually give
the same answer at both times, then what you have for ONE group,
alone, could be McNemar's test for changes. This sets up a table with
Yes/No on the rows (pretest) and Yes/No on the columns (posttest),
and the off-diagonal cells count the changes from Y to N and from
N to Y. For a table with cells A, B, C, D:

            Post
            Y   N
   Pre  Y   A   B
        N   C   D
The test is a version of a sign test. One simple formula (with no
continuity correction) is
   z = (B - C) / sqrt(B + C)
since the variance of the difference (B - C) is the sum of the variances.
For TWO groups, you can have z1 and z2, computed similarly, and the
question is whether one change is larger than the other. The answer
can be another z-score,
   z = [(B - C) - (B' - C')] / sqrt(B + C + B' + C')
where B' and C' are the off-diagonal counts for the second group.
This seems like a natural answer, but I don't remember seeing anyone
use it.
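The two formulas above can be written out directly; the off-diagonal counts here are made up purely for illustration:

```python
import math

def mcnemar_z(b, c):
    """McNemar z for change within one group: (B - C) / sqrt(B + C),
    with no continuity correction."""
    return (b - c) / math.sqrt(b + c)

def two_group_z(b1, c1, b2, c2):
    """z comparing the net change of two groups:
    [(B - C) - (B' - C')] / sqrt(B + C + B' + C')."""
    return ((b1 - c1) - (b2 - c2)) / math.sqrt(b1 + c1 + b2 + c2)

# Hypothetical counts: 20 subjects went Y->N and 10 went N->Y in group 1, etc.
z1 = mcnemar_z(20, 10)            # change within group 1
z2 = mcnemar_z(12, 14)            # change within group 2
z = two_group_z(20, 10, 12, 14)   # is group 1's change bigger than group 2's?
print(round(z1, 3), round(z2, 3), round(z, 3))  # 1.826 -0.392 1.604
```

Note that, as Rich says below, you test the difference directly rather than just observing that one group's z is significant and the other's is not.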
For this one, or for Bruce's, it is important to keep in mind
that the validity of inferences about group differences in outcome
depends on having no (sizable) differences in a covariate
that makes a difference. In other words: You do have to
be very careful about conclusions, if the groups you compare
at the end are not equal at the start. (You can reach three
different conclusions -- no difference, or either group
superior -- from one odd set of scores, depending on the
assumptions that are appropriate for measuring 'change'.
You can ignore the start, use raw change, or use 'regressed
change' in a prediction model.)
--
Rich Ulrich, wpi...@pitt.edu
http://www.pitt.edu/~wpilib/index.html
>
> I can see the problem in another, slightly different context.
>
> If pre and post are strongly related, so that subjects usually give
> the same answer at both times, then what you have for ONE group,
> alone, could be McNemar's test for changes. This sets up a table with
> Yes/No on the rows (pretest) and Yes/No on the columns (posttest),
> and the off-diagonal cells count the changes from Y to N and from
> N to Y. For a table with cells A, B, C, D:
>
>             Post
>             Y   N
>    Pre  Y   A   B
>         N   C   D
That same thought crossed my mind as I was writing my response. But it
seemed to me that the OP was more interested in the main effect of
Group, and only wanted to control for pre-test. A McNemar chi-square
for each group individually moves away from that.
Right. Mine is useful only if you are willing to describe
the separate changes, and *then* test whether one group's
change is bigger than the other. And: You don't just say,
"This one was significant, and that one was not" -- You carry
out the test.
Ok Bruce,
I believe you are right! The logistic regression does the trick!
Thanks a lot!
Serge