Four treatment levels (A, B, C, D), including one untreated control, and
two outcome time points (T1, T2), for a total of 8 experimental groups.
One baseline group (T0) representing the state of the cells before any
experimental treatment.
Each of the 9 groups (8 experimental plus 1 baseline) has 8 samples.
The outcome measure was destructive, so all samples are independent
(no repeated measures).
I would like to test a) whether any of the treatments cause the outcome
to change from the T0 baseline, b) whether treatment type influences
that change, and c) whether that change progresses from outcome time 1
to outcome time 2. Without the baseline group, I would have a standard
two-factor ANOVA. However, the question of whether (and how much) any
of the treatments cause a change from that baseline is important.
The one (brief) relevant posting that I have seen suggests
"converting" this to a one-factor ANOVA (with 9 levels, in this case)
and looking at the subset of post-hoc comparisons that make sense.
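For illustration, that one-factor approach can be sketched in code. Below is a hypothetical Python sketch (group names and data are made up; only 3 of the 9 groups are shown) of computing the one-way ANOVA F statistic by hand; a real analysis would include all 9 groups and follow up with the selected post-hoc comparisons:

```python
# Hypothetical sketch of the suggested one-factor approach: treat the
# 9 groups (8 treatment-by-time cells plus the T0 baseline) as levels
# of a single factor and compute the one-way ANOVA F statistic.

def one_way_anova_F(groups):
    """Return (F, df_between, df_within) for a list of sample lists."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares (group means vs. grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares (samples vs. their group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# Toy data: 3 of the 9 groups, 8 independent samples each
baseline = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 5.0, 4.9]   # T0
a_t1     = [5.9, 6.1, 6.0, 5.8, 6.2, 6.0, 5.9, 6.1]   # treatment A, T1
a_t2     = [6.8, 7.0, 6.9, 7.1, 6.7, 7.0, 6.9, 6.8]   # treatment A, T2
F, dfb, dfw = one_way_anova_F([baseline, a_t1, a_t2])
print(F, dfb, dfw)
```

The F statistic would then be compared to an F(df_between, df_within) critical value, and the post-hoc contrasts of interest (each treatment-by-time cell vs. T0, T1 vs. T2 within a treatment) tested with a suitable multiple-comparison correction.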
Does anyone have any other suggestions?
Thank you in advance for your help.
Marc Levenston
Just a quick note (I'm rushed for time): this looks like a possible
mixed-factor ANCOVA to me. Two factors: time (2 levels, repeated
measures) and treatment (4 levels, independent measures). Use the
baseline as a covariate.
Thom
I am in the retail energy business - I have a database of metered gas
consumption (by day) and I have regressed that consumption against
Heating Degree Days.
I am doing a polynomial regression (to the 2nd power). I do not like
the "canned" Excel add-ins because they create a new sheet for every
regression.
I have a lot of meters and want to do this analysis all on one sheet.
By "programming" the formulas in a series of cells, I can accomplish
this and can do all regressions together. (I then upload the results
to MS Access).
By searching google, I was able to discern how to derive the constant,
"x" and "x-squared" coefficients.
However, I cannot compute the "R-squared" value. I've come to
understand that it is the ratio of the explained and total variations.
I have found "R-squared" equations for a straight-line fit, but the
formula must be different for polynomials.
This is what I'm looking for - a formula to compute the "R-squared"
value from my data.
I could forward my spreadsheet to someone if they needed to understand
the mechanics of what I'm attempting to do.
Thank you in advance for your response.
Dough
Before you decide what kind of curve to use, you should consider
exploring your data with scattergrams, perhaps fitting loess curves to
the data.
Hope this helps.
Art
A...@DrKendall.org
Social Research Consultants
University Park, MD USA
(301) 864-5570
1) It has? In what way? Wrong answers... or just slow and a pain to
use?
2) I'd concur on the SPSS recommendation. It will calculate the R^2
value in question. You might also find a good multi-variable
statistics book helpful. Even if I could write down the formula by
hand on paper, it would be annoying to try to type it into text here.
But yes, it is the 'portion of the total variation explained by the
regression relationship'. Off the top of my head it would be something
like:
For each observed data point (y value), subtract the *predicted* value
(from the regression equation) from the observed value. Square that
value, do that for each data point, then sum all of those. Divide that
sum by the sum of (y - ybar)^2, the measure of the 'total' variance.
(Again, you have to do the (y - ybar)^2 thing for each data point,
where ybar is the overall mean of the observed values.) If I'm not
mistaken, R^2 is then 1 minus the ratio you just calculated...
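That procedure can be sketched in Python (toy observed values and predictions, purely for illustration):

```python
# A minimal sketch of the recipe above:
#   R^2 = 1 - SS_residual / SS_total

def r_squared(ys, y_preds):
    """R^2 from observed values and model predictions."""
    y_bar = sum(ys) / len(ys)                                 # overall mean
    ss_res = sum((y - yp) ** 2 for y, yp in zip(ys, y_preds)) # residual SS
    ss_tot = sum((y - y_bar) ** 2 for y in ys)                # total SS
    return 1 - ss_res / ss_tot

# Toy example: observed values and predictions from some fitted curve
ys      = [2.0, 4.1, 6.0, 8.2, 9.9]
y_preds = [2.1, 4.0, 6.1, 8.0, 10.0]
print(r_squared(ys, y_preds))
```

Note that nothing here depends on the shape of the fitted curve: the predictions can come from a straight line, a 2nd-degree polynomial, or anything else, which is why the same 1 - SS_res/SS_tot recipe applies to the polynomial case.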
-Brian
www.teasley.net
Because I consider human factors a critical part of quality assurance, I
usually recommend SPSS in the policy, social, and behavioral sciences.
I supplement this with other software like Wesvar and Sudaan for complex
samples, procedures in other packages and special programs for a
variety of advanced or esoteric tasks.
see
http://www.elsevier.nl/gej-ng/10/15/38/37/25/27/article.pdf
made the original case about regression, etc. It was updated in
"On the Accuracy of Statistical Procedures in Microsoft Excel 2000 and
Excel XP" (with Berry Wilson),
Computational Statistics and Data Analysis 40(4), 713-721, 2002
other stat problems in Excel
http://www.stat.uiowa.edu/~jcryer/JSMTalk2001.pdf
a summary of an earlier discussion on one of the lists
http://hesweb1.med.virginia.edu/biostat/teaching/clinicians/excel.hazards.txt
a succinct summary and great further detail can be found at
www.npl.co.uk/ssfm/ssfm1/validate/testing/excel.html
Hope this helps.
Art
A...@DrKendall.org
Social Research Consultants
University Park, MD USA
> "Arthur J. Kendall" <A...@DrKendall.org> wrote in message
>> It has been shown that Excel does not do well for statistics
>> esp regression
>
> 1) It has? In what way? Wrong answers...
Yes. See:
'On the accuracy of statistical procedures in Microsoft
Excel 97' by B.D. McCullough and B. Wilson in Computational
Statistics and Data Analysis 31 (1999).
'On the Accuracy of Statistical Distributions in Microsoft
Excel 97' by L. Knusel Computational Statistics and Data
Analysis 26 (1998).
Also try a Web search on these two titles. This will find
many relevant pages, e.g.
<URL: http://www.cof.orst.edu/net/software/excel/no-stats.php >:
Excel's statistics add-on pack is riddled with potential
disaster areas, and since it has been subjected to the
best analysis available in the world and found to be
wholly lacking, the only applicable words are 'avoid'
and 'plague'.
--
Karl Ove Hufthammer