Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant using it. Psychological Inquiry, 1, 108-141, 173-180.
Meehl, P. E. (1990). Corroboration and verisimilitude: Against Lakatos' "sheer leap of faith" (Working Paper No. MCPS-90-01). Minneapolis: University of Minnesota, Center for Philosophy of Science.
Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.
Certainly we care about the exact magnitude of the effect size. For any research that leads to statements about humans and the world, we need to know whether the effects are important, not just whether they are there. Do lonely people take hotter showers? Well, if they do, that is more interesting if the effect runs to several degrees Celsius than if it is only a fraction of a degree. Is the effect context dependent? Then the size of the context effect is interesting.
Furthermore, when we want our research to have some kind of real-world impact, effect sizes are crucial. This is not limited to scenarios involving making money. For those of us who do clinical research, effect sizes are the main thing, both when evaluating diagnostic or therapeutic interventions and when trying to choose which basic research findings warrant translational research.
The final statement "before you can estimate something, you need to make sure that there is something to be estimated" strikes me as implying something of a false dichotomy between hypothesis testing and effect estimation. I cannot imagine a situation where the estimation of an effect size detracts from the scientific value of a statistical analysis. Can you?
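Gustav's point that estimation never detracts from testing can be made concrete: a two-sided test of the nil hypothesis at alpha = .05 can be read directly off the 95% confidence interval, so the estimate carries the test result with it. A minimal sketch with simulated data (the true effect of 0.5 is invented, and the critical t is hard-coded for df = 49):

```python
# Minimal sketch: the 95% CI answers the nil-hypothesis test "for free",
# because a two-sided t-test at alpha = .05 rejects H0: mu = 0 exactly
# when 0 falls outside the interval. Data are simulated for illustration.
import math
import random
import statistics

random.seed(1)
sample = [random.gauss(0.5, 1.0) for _ in range(50)]  # invented true mean 0.5

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)
t_crit = 2.0096  # two-sided .05 critical value of t with df = 49

ci = (mean - t_crit * sem, mean + t_crit * sem)
rejects_nil = not (ci[0] <= 0.0 <= ci[1])  # the test, read off the estimate

print(f"95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]; rejects nil: {rejects_nil}")
```

The interval answers both questions at once: whether the effect is distinguishable from zero, and how big it plausibly is.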
Best wishes, Gustav
Gustav Nilsonne, MD, PhD
Researcher
+46 (0) 736-798 743
Stockholm University
Stress Research Institute
106 91 Stockholm
Karolinska Institutet
Department of Clinical Neuroscience
Nobels väg 9
171 77 Stockholm
gustav....@ki.se
________________________________________
From: openscienc...@googlegroups.com [openscienc...@googlegroups.com] on behalf of Eric-Jan Wagenmakers [ej.wage...@gmail.com]
Sent: 05 August 2015 11:34
To: openscienc...@googlegroups.com
Subject: Re: [OpenScienceFramework] article/post recommendations re: meta-science and open science
1. There are very few "nil effects". This is an old point made by Cohen. As you are well aware, it has more recently been debated in brain imaging, following a paper by Karl Friston. Friston talks about "the fallacy of classical inference", meaning that if you have enough data, significant effects are found in most of the brain. I think this is not a fallacy. Tal has argued persuasively that more data is better, and so has my colleague Michael Ingre. Crucially, the effect sizes tell us which parts of the brain are more likely to have a mechanistic relationship to behavior, and thus they guide further theorizing and experiments on causal relationships.
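The first point can be put in numbers. For a one-sample design the t statistic is roughly t = d * sqrt(n), so any nonzero effect eventually crosses the significance threshold; only the estimate tells you whether what was detected is trivial. A back-of-envelope sketch (large-sample critical value 1.96; the d values are illustrative):

```python
# Back-of-envelope sketch of the Cohen/Friston point: the one-sample t
# statistic is approximately t = d * sqrt(n), so ANY nonzero effect
# becomes "significant" once n is large enough. Uses the large-sample
# critical value 1.96; effect sizes below are illustrative.
import math

def n_required(d, t_crit=1.96):
    """Smallest n at which an effect of standardized size d reaches t_crit."""
    return math.ceil((t_crit / d) ** 2)

print(n_required(0.4))   # 25
print(n_required(0.1))   # 385
print(n_required(0.05))  # 1537
```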
2. The importance of an effect depends directly on its size. A positive effect of homeopathic medicine has been demonstrated many times. It remains unbelievable because the effect sizes are not big enough to overcome our prior expectation that homeopathic drugs are ineffective. A big effect, such as the regrowth of amputated limbs due to homeopathic ointment, would warrant further investigation of causal mechanisms.
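One toy way to quantify this second point: compare how likely the observed effect is under "no effect" versus under the best case for a real effect. A small observed effect yields only weak evidence, which a skeptical prior on homeopathy easily outweighs; a huge effect yields overwhelming evidence. (A deliberately crude sketch with invented numbers; this is a maximum-likelihood ratio, not a full Bayes factor.)

```python
# Crude sketch: evidence against "effect = 0" as a likelihood ratio,
# comparing the observation under the best-case alternative (true effect
# equal to the observation) versus the nil. Numbers are invented.
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def max_likelihood_ratio(obs, se):
    """Best-case evidence for a real effect over the nil, given obs +/- se."""
    return normal_pdf(obs, obs, se) / normal_pdf(obs, 0.0, se)

print(max_likelihood_ratio(0.1, 0.1))  # small effect: ~1.6, a skeptical prior wins
print(max_likelihood_ratio(1.0, 0.1))  # huge effect: ~5e21, no prior survives this
```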
3. Causal relationships in biology are often described by means of diagrams with arrows, showing that A leads to B and so forth. Great advances have been made in many cases when the nature of the arrow has been more accurately described. Is the relationship linear or non-linear? What is its shape? Can we build a mathematical model to predict the behavior of the system? We need effect sizes. Context effects need to be modelled or held constant. In my opinion, context effects should not be used as an argument to reduce modelling to a qualitative exercise.
4. I would like to agree with Tal, but this time I disagree. :-) Not everything depends on the nature of the question. I still can't think of a single instance in quantitative research where hypothesis testing would be preferable to estimation. In ESP research, I would care a great deal about a believable positive hypothesis test result (though that is of course hard to imagine), but I would care a great deal more if the effect were also big enough to have real-world implications.
Perhaps we shall have to agree that we disagree. Anyway, I appreciate the opportunity to exchange views with you.
Best wishes, Gustav
________________________________________
From: openscienc...@googlegroups.com [openscienc...@googlegroups.com] on behalf of Eric-Jan Wagenmakers [ej.wage...@gmail.com]
Sent: 05 August 2015 12:35
In my opinion, Meehl was wrong. Do researchers actually care whether an effect --when reliably detected in a very large N study-- is 0.1, 0.2, 0.3, or 0.4? No they don't, nor should they.
One reason is that effect size is context dependent: for instance, a list length effect can be d=0.3 with a manipulation of 10 vs 20 items, and d=0.6 with a manipulation of 10 vs 30 items. So the exact magnitude of d is almost never very interesting, unless you consider a concrete real-world application in order to make money.
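EJ's list-length numbers fall out directly if each extra item shifts the outcome by a constant amount; a minimal sketch (the per-item effect of 0.03 and sd of 1.0 are invented so as to reproduce his d values):

```python
# Sketch of context-dependent d: under a constant per-item effect, Cohen's d
# tracks the strength of the manipulation, not the phenomenon itself.
# The per-item effect (0.03) and sd (1.0) are invented for illustration.
def cohens_d(items_a, items_b, effect_per_item=0.03, sd=1.0):
    """d for comparing two list lengths under a constant per-item effect."""
    return abs(items_b - items_a) * effect_per_item / sd

print(round(cohens_d(10, 20), 2))  # 0.3 -- weaker manipulation
print(round(cohens_d(10, 30), 2))  # 0.6 -- same phenomenon, stronger manipulation
```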
Do researchers care whether the effect is 0 or not? Yes they do, and they should. Are people more creative in the presence of a big box? Do lonely people take hotter showers? Does video game playing improve low-level perception? These questions are relevant and cannot be addressed by computing effect size.
Bottom line: before you can estimate something, you need to make sure that there is something to be estimated.
Hi Fred,
Our current quantitative theories do not address effect size predictively. I am not sure why they don't, but perhaps these models are a reflection of what researchers seek to understand about reality. And for the researchers I know, that reality is simply not captured in context-dependent effect sizes. It is often already difficult enough to demonstrate the mere presence of an effect.
EJ
And there is plenty of brilliant content and discussion to be found on the blogs of: