Letter from the Editor of Psychological Science


Brian Nosek

Oct 19, 2012, 12:19:45 AM
to Open Science Framework
I am attaching a letter from the Editor of Psychological Science proposing three new initiatives for the journal. For non-psychologists on the list, this is the flagship empirical journal of the Association for Psychological Science. The initiatives are:

1. tutorials about power, effect size, and confidence intervals

2. a disclosure statement about the research process for all submissions, to be published online with accepted articles: authors would answer a series of questions, such as how the sample size was determined, whether there were other measures not reported, etc.

3. a new submission format: pre-registered replications 

The letter provides some additional detail. It has been circulated for feedback among the editors and some others (it is not secret). There is a wide variety of opinions about these initiatives, so it is not clear what will happen: some might not happen at all, and others could be adapted significantly in response to feedback. It would be interesting to hear the opinions of this group on these items.
PSCI Initiatives for 2013 (20121008).docx

Roger Giner-Sorolla

Oct 19, 2012, 11:49:45 AM
to openscienc...@googlegroups.com, no...@virginia.edu
A very promising set of developments indeed!

But a few caveats:

I'm not sure a tutorial will be as effective as a reporting requirement. Even now, cracking open a Psych Science at random, I immediately find a paper that reports the test statistic (chi-squared) but not an associated effect size statistic such as phi (Slutske, W. S., Moffitt, T. E., Poulton, R., & Caspi, A. (2012). Undercontrolled temperament at age 3 predicts disordered gambling at age 32: A longitudinal study of a complete birth cohort. Psychological Science, 23(5), 510–516). Overall, our field needs more standardization of what is reported.
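For what it's worth, when a paper does report a 2x2 chi-squared and the sample size, phi is easy to recover as the square root of chi-squared over N. A minimal Python sketch (the numbers are invented for illustration, not taken from the cited paper):

```python
import math

def phi_from_chi2(chi2: float, n: int) -> float:
    """For a 2x2 table, phi is the square root of chi-squared over N."""
    return math.sqrt(chi2 / n)

# Invented values for illustration only:
print(round(phi_from_chi2(chi2=9.6, n=960), 3))  # -> 0.1, conventionally a small effect
```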

Will the Replication Reports be indexed as Psych Science articles? If so, the all-important impact factor of the journal might fall. Yet ghetto-izing them into a separate journal is not the right call either; journals of nonsignificant results and replications inevitably fail. I'd welcome the gutsier move from a journal that already has a high reputation. I think people wishing to evaluate others' research will still draw distinctions between original and replication work, and it will probably be evident from the title of the publication, but this only underscores the futility of mechanical bibliometric procedures.

I also welcome the giving of "teeth" to the online supplementary material through a reporting requirement.

Denny Borsboom

Oct 19, 2012, 1:33:32 PM
to openscienc...@googlegroups.com
"Standardization" is a nice concept but it only goes so far. As an
example, you may want to have a measure of effect size for the
categorical case, but unfortunately there is no unambiguous measure of
effect size in contingency tables. Phi is one of a family of measures,
none of which are fully satisfactory; another well-known measure is
the odds ratio, which is, well, odd. Almost all of these statistics
depend on the marginals, which more or less disqualifies them for any
general use, let alone as a suitable definition of effect sizes; the
ones that aren't sensitive to the marginals rely on funny assumptions,
such as that the categories result from a cut on an underlying
continuum, which means their applicability varies from case to case;
the implication is that they can't be applied uniformly either. In my
view, the situation with effect sizes in 2x2 tables is so dramatic
that there is good reason to
doubt whether we have a clear idea at all of what is meant by "effect
size" in that context (it may very well be an inappropriate
generalization of the statistical default mode of thinking in terms of
linear relations between normal distributions). The situation for
multiway tables is, fittingly, many times as bad. If you insist on
having something that counts as an effect size, as far as I'm
concerned you might as well divide the chi-square by N. But the point
is: you can't have a uniform style of reporting because the problems
and their solutions aren't uniform.
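The marginal-dependence point above is easy to demonstrate numerically. A minimal Python sketch (counts are invented for illustration): scaling one row of a 2x2 table changes the marginals, which leaves the odds ratio untouched while phi shifts.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

def phi(a, b, c, d):
    """Phi coefficient for the same table."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Invented counts; t2 is t1 with its second row multiplied by 5,
# which changes the row marginals but not the within-row odds.
t1 = (40, 10, 20, 30)
t2 = (40, 10, 100, 150)

print(odds_ratio(*t1), odds_ratio(*t2))        # 6.0 6.0 -- invariant
print(round(phi(*t1), 3), round(phi(*t2), 3))  # 0.408 0.299 -- shifts
```

(As an aside on the chi-square-divided-by-N fallback: for a 2x2 table, chi-squared over N equals phi squared, so that fallback is phi in disguise.)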

Here are some good references on the topic.

Warrens, M. J. (2008). On similarity coefficients for 2x2 tables and
correction for chance. Psychometrika, 73, 487–502.

Warrens, M. J. On association coefficients for 2x2 tables and
properties that do not depend on the marginal distributions.
Psychometrika.

The part of statistics that most people get to see in their practical
research work is a very orderly and preprocessed tip of an otherwise
chaotic and very un-uniform iceberg. Statistics is a jungle, really.

Denny Borsboom
Department of Psychology
University of Amsterdam
Weesperplein 4
1018 XA Amsterdam
The Netherlands
+31 20 525 6882


Oct 19, 2012, 6:35:07 PM
to openscienc...@googlegroups.com, no...@virginia.edu
I am delighted to hear about this! The pessimist in me is wary of (2) and (3), particularly (2): but if his proposed remedy (editors/reviewers being more realistic about "messy" data) is implemented, perhaps it will work. Also, re: (3), as Roger points out below, PS is to be lauded if it is willing to jeopardize its impact factor. But perhaps the independent replications of the original finding will be cited almost as many times as the original paper (e.g., every time I build on phenomenon X, I cite all known replications of it to bolster my case that it is worthy of further investigation).



Oct 24, 2012, 3:05:34 AM
to openscienc...@googlegroups.com, no...@virginia.edu
What a wonderful, bold step on his part; I am so un-disheartened about the field, and especially about PSCI, after reading it!

I too am wary of 2 because I am pessimistic about reviewers' ability to set aside their years-long habit of pouncing on authors for the smallest flaw in the data, even if they're told to be more lax. It's just not a mindset that most of them (us) have. What will probably happen in those answers is a lot of vague language and skipping of details on the part of terrified authors. But maybe there is a way to make it work...

I think 3 would be wonderful, and I agree with Yoel that it doesn't necessarily mean PSCI's impact factor will go down. To the contrary, I suspect that even overall successful replications will find a caveat or two about the effect, which is precisely the kind of thing I am likely to cite in explaining why my conceptual replication/"follow up"/"building on such and such" study doesn't conform in every single respect to the original findings or to every single one of my hypotheses. Isn't that the more likely case anyway?

Spassena Koleva, Ph.D.
Postdoctoral Research Associate
Dept. of Psychology
University of Southern California

Sanjay Srivastava

Oct 30, 2012, 6:45:32 PM
to openscienc...@googlegroups.com, no...@virginia.edu
My quick takes:

#1 - Hard to be against that.

#2 - Deeply ambivalent. I worry that (a) editors and reviewers across different subfields won't have the expertise to weigh revealed messiness against merit, and instead will fall back on flawed heuristics; and (b) without accountability, a disclosure system will be too easy for people to exploit by fudging or outright lying.

#3 - Some specifics will probably get tweaked after it's rolled out, but I really really like the idea of a dedicated replication section.

I've got a longer blog post here if anyone's curious:

p.s. This is my first Open Science Framework post, glad to be joining the conversation!

Brian Nosek

Oct 30, 2012, 8:41:28 PM
to openscienc...@googlegroups.com
Glad to have you in the discussion Sanjay!

On #2, the issues you raise are worth unpacking.  

The easiest one to address, I think, is the opportunity for exploitation and lying. And the disclosure statement addresses it by not addressing it at all. Requiring these disclosures will not address fraud any more than the current publishing system does. [One could make a case that needing to disclose these things could reduce fraud a little, but no need to hang one's hat on that.]

In my view, the proper assessment of this issue is whether requiring disclosure of the research process will (a) increase transparency and (b) decrease unjustifiable research practices compared to the system that we have now. I believe that it will do both. My starting assumptions are that (1) most all researchers intend to do good research, (2) most all intend to report honestly, and (3) most all are influenced by motivated reasoning and other pressures to fail to disclose things under present standard practice because there are incentives against disclosure (hide the muck, increase publishability). A disclosure requirement will provide us honest researchers a chance to reflect on our process and decide if our evidence is really up to snuff or if we need to do a bit more to be happy sharing how we got there. It will also provide us good researchers an opportunity to report on our excellent practices when they occur. And, finally, because reporting is routine and required for all, after a couple rounds of practice it will not be a big deal at all. Standards will shift to reflect the actual practices in the field (right now we can only guess what other labs do).

The answer to Sanjay's first point is not so obvious.  Will editors fall back on flawed heuristics and not recognize reasonable differences across subdisciplines for evaluation?  I am not sure how to answer that question.  I guess I can answer with a question - Is there some way in which keeping the research process opaque to editors, reviewers and readers is actually a benefit to evaluating the research?

Yes, some research is messier than other research.  And, I have certainly observed instances of people working in areas with strong experimental control (solving irrelevant problems) waving a dismissive hand to others working in domains with little control.  But, does that fact justify keeping others ignorant of how the sausage gets made?  I'd be very interested to hear others' take on this.

Michael Cohn

Oct 30, 2012, 8:56:05 PM
to openscienc...@googlegroups.com
My assumption has been that the disclosure statement is meant to raise
the stakes on questionable research practices.

If you fiddle with exclusion criteria or engage in a little bit of
post-hoc fudging of your initial hypotheses, and someone tries to blow
the whistle on you, it's unlikely that you'll get in any serious
trouble. Most of your colleagues have probably done the same, and
we've all been taught that journals don't want to see all the minutiae
of your analysis anyway.

However, if your publication included an unambiguous, categorical
statement that you did not do those things, then you have knowingly
committed fraud. If that comes out it's harder to brush off, and
merits a correction to the official record or even a retraction.

Does that seem realistic to others?

- Michael

Michael Cohn, PhD
Osher Center for Integrative Medicine

Sanjay Srivastava

Oct 31, 2012, 8:37:39 PM
to openscienc...@googlegroups.com, no...@virginia.edu
I share your starting assumptions. And I think that incentives are the biggest thing that knocks people off of their honest intentions. A publication in a top journal like Psych Science is a big deal for an academic psychologist. And getting in is a highly competitive process. Put those two things together with no accountability and you create a big incentive for fudging the truth, and, because the process is competitive, one that disadvantages the people who resist that temptation. So I'm not as confident as you that mandatory disclosure without accountability will increase transparency or make things better. (As for reflecting on your process or telling about your excellent practices -- you don't need mandatory disclosure for either of those things, and nothing presently stops people from doing that.)

I think an instructive example in the current environment (and also relevant to Michael's comment) is HARKing. HARKing involves deception. But it has become highly normalized -- to the point where I've had editors and reviewers suggest hypotheses I could put into my introduction to make papers more compelling. And I'm pretty sure they thought they were being helpful.

"Keeping others ignorant" is, I think, a rather pejorative way of putting my other concern. No submission process will ever involve providing every possible piece of information. So it's a matter of deciding what constellation of information will lead to the best decisions. And I'd say I'm worried about providing people with information that they don't know what to do with. If editors and reviewers fall back on heuristics and standards that are familiar to them, that advantages mainstream and popular paradigms and methods, and works against methodological and intellectual diversity.

As an example, that has sometimes been my experience having longitudinal and other non-experimental research get evaluated by experimentalists. Some (though thankfully not all) have been taught that experiments are always good for causal inference and that nothing else ever is. Causal inference with non-experimental data is tricky business and many studies are certainly open to criticisms, including my own I'm sure. But when the response you get is "you can't talk about causation unless you run an experiment," that reflects a heuristic rather than a principled critique.

So I'm worried something similar would happen when researchers try to judge disclosures. Putting dozens of DVs in a small one-off experiment might look fishy; but running a large multi-year longitudinal study without (at least) hundreds of variables would be inefficient bordering on stupid. Will editors at a journal as broad as Psych Science always be in a position to recognize how local practices vary? I don't know. If they aren't, the negative effects are going to fall disproportionately to people who aren't doing mainstream stuff.