What form of reviewer/author guidelines do we want? And how prescriptive? [was Re: Articles and Other Useful Statistical Resources]


Matthew Kay

May 29, 2016, 7:55:17 PM
to Pierre Dragicevic, David Lovis-McMahon, Transparent Statistics in HCI
(Pierre, thanks for elevating this conversation to the list. I've renamed the subject to better reflect its contents) 

I think that Paul Resnick raised some good questions in comments on the reviewer guidelines that are worth a more substantive conversation. Part of that conversation is about how strong or strict our guidance should be. Playing devil's advocate, there may be a danger in making a document that appears to accept any statistical practice. I do not think that's what we are trying to do (and I do not think it's what we have now), but on the continuum from "love and peace, everyone do what you want" to "do X and only X", we need to articulate where we want to be and make that consistent.

Maybe we just need to do a better job of stating that the high-level message of the document is not "do what you want". Or maybe there are more substantial changes needed. Even if preventing misguided rejections (Pierre's point #3) and rewarding good practices (#2) are short-term goals, I think there may be something productive to be done towards improving the rigor of reviews (#1), and maybe there are some stronger recommendations we can make.

Brainstorming: do we want a consistent mechanism to explicitly label some practices as "bad", "acceptable", "best"? Possibly crazy idea: one could imagine guidelines that are designed explicitly to deal with inertia, patterned after deprecation in APIs: something like, "Practice X is acceptable in version 1.1 of this document (up to CHI 2019), but will become obsolete in version 2.0 of this document (CHI 2020). As of that time, consider Practice Y or Z". This gives people time to adjust, and an explicit mechanism for introducing changes to practice. This could be a hairy process, and we would have to be very careful about finding consensus on what practices are made obsolete, but I think there is some potential value to it.
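(To make the API analogy concrete, here's a minimal sketch of the mechanism being borrowed; the decorator, practice names, and version numbers are all invented, purely for illustration:)

```python
import functools
import warnings

def deprecated(since, obsolete_in, alternatives):
    """Flag a guideline entry the way a library flags a deprecated function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__} is acceptable since guidelines v{since} but "
                f"becomes obsolete in v{obsolete_in}; consider: "
                + ", ".join(alternatives),
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical guideline entry: practice X still works, but warns its users.
@deprecated(since="1.1", obsolete_in="2.0",
            alternatives=["practice_y", "practice_z"])
def practice_x():
    ...
```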

There is a related question of audience and use case. These somewhat dictate what form the recommendations document should take.

Here are some possible use cases:

#1. A document that a reviewer consults (possibly in the manner of an FAQ) to get a sense of how to evaluate a particular aspect of a study. E.g., perhaps I am reviewing a paper and I have some questions about the removal of outliers in a study. I could consult the guidelines document and read that section. This might allay concerns I had about the paper (if the authors have given the necessary detail asked for in the reviewer guidelines), or it might prompt me for questions to ask in my review. If this is a use case we want to support, the document will need a clear table of contents as a "way in" and perhaps a section on "using this document".

#2. A document that a reviewer reads top-to-bottom before doing reviews. I doubt very much we will ever achieve a high number of people reading the document all the way through, just as I doubt the existing CHI reviewer guidelines are fully read by a majority of reviewers. A shorter document, or a pithy intro and general set of principles might be the most we can hope a larger number of people will read in detail.

#3. A document used by authors to a) help them properly present their results and b) defend their statistical practices when writing their papers. (a) would suggest there is value in including examples within the document (which I think would be nice---especially for us to suggest some graphical alternatives to impenetrable tables of values). The form of the document for this use case might be similar to that for #1, in that an author might be more likely to index into the document like a FAQ in order to gain suggestions for presenting their work and even citations for defending it.

#4. A document that authors read in order to guide them in how to conduct their analyses. It is debatable whether this is beyond the scope of this document. At the very least, this document can act as a place to collect references to other resources that may help authors here (such as the one suggested by David). Already we have a number of citations in the reviewer guideline document. If an author reads a particular section of the guidelines and wants to learn more, that section should point them at the necessary material to do so.

The current document is perhaps heavily skewed towards use case #1, though it probably needs better organization to make it really useful there. I think that supporting the other use cases is possible to varying degrees, as described above.

Other thoughts? What have I missed? 

---Matt


On Sun, May 29, 2016 at 1:47 AM, Pierre Dragicevic <pierre....@gmail.com> wrote:
Thanks David, this seems to be an excellent resource to cite in the guidelines.

I suppose it could be useful at some point to discuss what these guidelines should be trying to achieve. For now I see three possible goals.

1. Moving towards reviews that are more rigorous and less forgiving of errors in statistical analyses/interpretations. This would be desirable, but as we all keep pointing out, we're lacking statistical expertise in the reviewing pool, so I don't think we should be too obsessed with this goal. Educational material and resources that discuss common statistical errors are plentiful and we should encourage reviewers to read them, but I don't think the CHI guidelines necessarily need to repeat these.

2. Moving towards reviews that recognize and reward good practices that are not widely recognized as such at CHI. Things like clarity and completeness, nuanced conclusions, shared material, etc. are all easy to assess even by non-expert reviewers, and if reviewers are properly educated on the importance of these, this could contribute to improving the quality and transparency of reports overall and reduce practices like p-hacking.

3. Moving towards reviews that do not reject statistical reports for the wrong reasons. I'm not sure why this one is so often overlooked. For a non-expert and/or hurried reviewer, it is tempting to use simple heuristics to assess the validity of a statistical report (e.g., does it report ANOVAs / p-values? Are the results significant? Is the sample size more than X?) rather than looking at the subtleties of the analysis or at the big picture. As long as reviewers believe in such heuristics, other recommendations will have little influence.

My hope is that we can encourage reviewers to replace their old, misguided heuristics (3) with other, better heuristics (2) for evaluating studies. Covering (3) is difficult and we may not all agree, but it seems fairly easy to list the different ways we address concrete statistical problems and state that they're all valid. Such a statement may seem vague as a recommendation to authors, but as a recommendation to reviewers it is quite specific, because it implies that using method X rather than Y shouldn't be a reason for rejection.

Pierre


On Sun, May 29, 2016 at 7:56 AM, David Lovis-McMahon <dlo...@gmail.com> wrote:
I believe developing statistical guidelines for the HCI community can be furthered by providing a resource for both authors and reviewers. To that end, I thought it might be good to get the ball rolling with a fresh epidemiology methods paper by Sander Greenland and colleagues (see attached): Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2015). Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. European Journal of Epidemiology, 1-14.


ROBERTSON Judy

May 30, 2016, 3:18:32 AM
to Matthew Kay, Pierre Dragicevic, David Lovis-McMahon, Transparent Statistics in HCI

Hey,

I like Matt's idea of "Practice X is acceptable in version 1.1 of this document (up to CHI 2019), but will become obsolete in version 2.0 of this document (CHI 2020). As of that time, consider Practice Y or Z." An advantage of it is that people won't want to seem out of date, whereas they might argue indignantly that their practice wasn't "bad".

Ideas #1 and #3 will be very useful. Idea #2 might be useful for editors or meta-reviewers. We can't expect all reviewers to read a long document, but we can expect editors to be on top of their game. Do HCI journals use specialist statistical reviewers?

Judy

Sean Munson

May 30, 2016, 2:49:59 PM
to Matthew Kay, Pierre Dragicevic, ROBERTSON Judy, Transparent Statistics in HCI, David Lovis-McMahon
Of the suggested uses, I would prioritize a guide written for the authors (#3). I'd rather encourage and promote good practices, and tools that make them possible, over setting a higher bar in the review process without helping the community meet it. I'm still in favor of a section written from the reviewers' perspective (#1) as well — but it would be best, perhaps, to start with what we think is reasonably well supported for authors in the CHI community.

On obsoleting practices, I'd suggest a softer approach. In a document that only has an effect through its ability to persuade, language that motivates authors to move to better practices, and that encourages reviewers to ask why a better practice was or was not used, seems more appropriate than language that is more absolutist or that implies this document has greater clout than it does.

That would also allow for more discussion of the state of “bad” practices and the barriers to moving away from them. Is a bad practice still widely used because the tools to do better are not yet widely available? Not yet usable by anyone other than the people who created them? Have there been recent developments that make something formerly good-but-tedious more accessible? 

sean


Matthew Kay

May 30, 2016, 5:29:55 PM
to Sean Munson, Pierre Dragicevic, ROBERTSON Judy, Transparent Statistics in HCI, David Lovis-McMahon
On Mon, May 30, 2016 at 2:49 PM, Sean Munson <smu...@uw.edu> wrote:
Of the suggested uses, I would prioritize a guide written for the authors (#3). I'd rather encourage and promote good practices, and tools that make them possible, over setting a higher bar in the review process without helping the community meet it. I'm still in favor of a section written from the reviewers' perspective (#1) as well — but it would be best, perhaps, to start with what we think is reasonably well supported for authors in the CHI community.

I think that the point about setting a high bar in reviews without the support to meet it is a good one.

Perhaps this is one way to look at things:

- The main goal (drawing from Pierre) in creating a reviewer guide might be to prevent reviewers from rejecting better statistical practice just because it is unfamiliar to them. So we could create a reviewer guide focused just on describing some variety of good, possibly unfamiliar, approaches. Or even just a list of "bad reasons to reject a paper". This could be cited by authors in papers and rebuttals.

- The goals of creating an author guide might be to (a) expose authors to better practices, (b) suggest more effective and transparent ways of communicating results, and (c) point them at tools and literature for putting those approaches into practice. Thus it might include examples of good practice, possibly with pointers to papers (e.g., examples of turning tables into graphs) and/or code to generate the graphs in question.

I think that scoping may be an issue with an author guide that will need to be resolved up front (else it will balloon into a book on statistical methods, which is bigger than what we want). One scope is to go back to a focus on clarity and transparency: what methods are more transparent, and what ways of communicating results are more transparent? This begins to scope such a guide. It could also give a way to index into it: So you want to communicate the results of an ANOVA? We suggest asking yourself: do you really want an ANOVA, or are your research questions actually about regression coefficients or pairwise estimates? If so, consider a regression and a coefplot and/or plots of pairwise comparisons instead of an ANOVA and an ANOVA table. Here's an example of how!
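(As a minimal sketch of what such an example could look like, with made-up data; statsmodels and matplotlib assumed, and this is just one reasonable way to do it:)

```python
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data: task completion time under three interface conditions.
df = pd.DataFrame({
    "time": [12.1, 10.3, 9.8, 11.5, 8.9, 9.2, 13.0, 10.1, 8.5],
    "condition": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

# A regression instead of an ANOVA: coefficients are differences from condition A.
fit = smf.ols("time ~ condition", data=df).fit()

# A coefplot instead of an ANOVA table: point estimates with 95% CIs.
params, ci = fit.params, fit.conf_int()
plt.errorbar(params, range(len(params)),
             xerr=[params - ci[0], ci[1] - params], fmt="o")
plt.yticks(range(len(params)), params.index)
plt.xlabel("estimate (95% CI)")
plt.show()
```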

Perhaps this is a way to get the softer approach Sean is suggesting?

Finally, in creating these guides we should also consider what we don't want to be, and what we could do better. I Googled for "APA statistics reporting" and got this document: http://evc-cit.info/psych018/Reporting_Statistics.pdf. It has endlessly persnickety recommendations about just how to format the results of statistical tests. Meanwhile, they recommend impenetrable prose and tables as ways of presenting results. They formalize the minutiae of reporting but have lost the big picture of communicating results clearly.

---Matt




David Lovis-McMahon

May 31, 2016, 2:19:22 PM
to Transparent Statistics in HCI, smu...@uw.edu, pierre....@gmail.com, judy.ro...@ed.ac.uk, dlo...@gmail.com

I want to amplify the idea of an author's guide, and suggest that a reviewer's guide is just as, if not more, important.

In my experience as the methods expert reviewing articles across disciplines (psychology, law, and anthropology), it is important to remember the reviewer's role as a gatekeeper. Research in procedural justice and legitimacy suggests that people will oppose blind reviewers applying what are perceived to be arbitrarily decided rules. Anecdotally, I think that part of what has slowed the advance of methods in psychology has been the sense among substantive researchers that methodologists have recently been playing a game of "Aha! Gotcha!" This fosters a combativeness that invites further division along methodological grounds and stymies productive discussion.

To prevent that perception, I think guidelines about the review process are just as, if not more, important than guidelines for the authors. In the same way that transparency builds trust in the legal system, clear reviewer guidelines promote trust, reduce anxiety on the part of authors, and reduce the potential for perceived unfairness and bias on the part of the reviewer.

In balancing those factors, I've had a lot of success with the following approach, born out of my Experimental and Quasi-Experimental Design seminar and my time in law school. (I should note that I have yet to do a review in HCI, so I'm not certain how well what I've done in the past maps onto the current HCI review process.)

My methods seminar was focused on the Campbellian framework (Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Wadsworth Cengage Learning). So every review I've written has incorporated four sections: Statistical Conclusion Validity, Internal Validity, Construct Validity, and External Validity. Basically: are the stats done correctly; was the study design compromised by confounding variables; did the study actually manipulate and measure the theoretical construct claimed by the researcher; and does the study design support generalizing the reported effect to other groups of participants, times, or situations? Each form of validity is predicated on the prior form being true. That is, Internal Validity cannot be guaranteed if the Statistical Conclusion Validity is suspect. However, as the gatekeeper it is important to assess all four kinds of validity under the assumption that the prior form is true.

In my approach, the goal of the methods reviewer is to establish whether the statistical and design evidence offered in the article is valid. Moreover, the decision to accept or reject on methods grounds applies the rule that the identified error must undermine or change the author's conclusion. That is, as a gatekeeper it is not sufficient to point out a statistical or methodological error; it is the gatekeeper's job to establish how the error undermines the author's position. By placing the burden on the reviewer to establish the harm to the author's conclusions, the rule acts in a permissive manner. That is, if I didn't like the use of Bayesian estimation, I'd have to establish how its use in a particular analysis undermines the author's conclusions.

I believe this is also why it is helpful to have a repository of methods papers, both ones that operate at a general level (like the one I posted before) and ones that focus on a specific method (like selecting priors for Bayesian estimation). This way, errors uncovered by reviews can become part of the common body of methodological knowledge and hopefully be prevented from occurring in the future.


-David


Mike Glueck

May 31, 2016, 3:12:07 PM
to David Lovis-McMahon, Transparent Statistics in HCI, smu...@uw.edu, Pierre Dragicevic, judy.ro...@ed.ac.uk
I agree there is value in guidelines to support both authors and reviewers.

I believe these guides should fundamentally seek to promote an awareness of best practices and hopefully foster a sensitivity to why certain aspects of statistical reporting are important.

I think the shorter these documents are, the more likely it will be that they are used.  It may be worthwhile to curate a more in-depth reference document for those who are interested, but I think a lot can be accomplished by a concise, example-driven primer.

A caveat, of course, in writing something too prescriptive, is that we may not achieve the goal of actually educating (which I think is an implicit goal).  If we only write a recipe for authors and reviewers to follow, they may not gain a deeper understanding of why they are doing it.

Personally, I am really compelled by example-based approaches. I think they help put abstract concepts into context, which makes the principles easier to apply. A document of general guidelines places the onus of applying the principles on the readers, who may or may not be equipped to do so effectively. Perhaps we could curate examples of both preferred and less-preferred reporting styles that have been published in our field. To Sean's point, speaking in terms of preferred/less-preferred may help soften these qualifications. Some older work may very well still be good and relevant, but the way we would approach the analysis or reporting may be different now. Using these as examples may help authors make better selections when using prior work as "templates" for their own study designs, or gain an understanding of what parts may need to be updated given current preferred practices. I believe there is tremendous power in showing not only positive but also negative examples.

Also, I'd like to suggest that as a companion to a primer document for authors and reviewers, we consider producing a series of short humourous videos. For authors: "have you ever written...?"; for reviewers: "have you ever read...?". Half poking fun (to help personalize), half constructively using the scenario as a platform for discussion. If we can broach this subject in an entertaining way, we may gain more traction and avoid coming off as some kind of stats-police.

- Mike




Pierre Dragicevic

May 31, 2016, 6:26:24 PM
to Mike Glueck, David Lovis-McMahon, Transparent Statistics in HCI, smu...@uw.edu, ROBERTSON Judy
Those are all great comments.

I also find that a reviewer's guide would be more interesting and perhaps more needed.

An author's guide would be great of course, and it's a common request, but an enormous amount of time and energy has already been spent on this issue: hundreds of textbooks, methodology papers, etc.; all of these are author's guides. We're not short of resources authors can read if they're willing to invest time. In HCI recently, there's the book edited by Judy and Maurits, for example, or Matt et al.'s paper on Bayesian methods. It's overwhelming of course, and many would love a document that summarizes it all for them, but the truth is that there are about as many approaches and analysis/reporting styles as there are books and articles. I don't think it will be easy to come up with an author guide that's approved by all the people on this mailing list (let alone the entire CHI community) while not being so unspecific as to be mostly useless.

I do think there are ways to come up with an author's guide that's novel and useful if it focuses on a specific theme, e.g., on how to do transparent statistics. Ideas like using an example-based approach or adopting an entertaining/funny tone are also definitely worth exploring. But perhaps this wouldn't require many authors to be involved, and wouldn't need a stamp of approval from the CHI PC.

The relatively large number of people involved in this group (75 members for now) is however a unique opportunity to collectively agree on a reviewer's guide. We may all have different ways of doing stats and disagree on some specific questions, but it doesn't matter, because as I said the guide could just be a union of different common practices. It doesn't mean we'll adopt an "anything goes" approach, as we can decide to, e.g., endorse a practice only if it's advocated by at least a few papers from the methodology literature. Some sources say remove outliers, some say don't bother; some say systematically correct for multiplicity, some say it's not that simple; some say test for normality, some say it's silly; etc. I think a good guide should acknowledge all issues that are difficult and controversial, and suggest that the reviewer look elsewhere if possible.

This goes very much in the direction of what David suggests, in trying to make the review process more fair and less random (also see our alt.chi paper on this). I like the law analogy and I completely agree with the philosophy of "placing the burden on the reviewer to establish the harm to the author’s conclusions". As far as I'm concerned, David totally nails it. What should matter to a reviewer is the validity (or perhaps the credibility, expressed on a continuous subjective scale) of the author's conclusions, and if a reviewer isn't sure about some particular aspect of a method (which is fine), they could simply abstain from negatively commenting on it.

At the same time, I agree with Mike that a good reviewer's guide should promote best practices, and I'm not quite sure how to do that. Perhaps by using a sort of ordering or rating system? Sharing data is a ++, planned analyses are a +++, while reporting statistical significance without means / effect sizes would be a --? Not sure...

A reviewer's guide like this could act as a guide for authors as well, as they could read it and get a sense of the "rules" by which they can play. But in contrast with an author's guide a la APA, this wouldn't police authors or force them to go in a unique direction, and wouldn't necessarily need to put the bar much higher. CHI authors could continue to use their traditional methods if they wish to, but they could also decide to explore less familiar methods without the fear of being punished for doing weird things -- hopefully, they would even be encouraged to do so. Everyone would be given the opportunity to improve and polish their methods at their own pace, and choose their favorite school of thought.

Concerning obsoleting practices, it's a good idea but I agree with Sean that we should try to be soft, as it's unlikely that the CHI community will be open to any type of strong prescription, even if it's delayed in time. Alternatively, a guide could label some controversial questions as unresolved, and update them the next year if a consensus is reached. While a question is unresolved, reviewers would be discouraged from forcing their personal opinion on the matter in their reviews, but they would be encouraged to contribute their thoughts for the next version of the guide.

Concerning length, short is good, but if we want it to be more detailed, a FAQ format should be easy to process.

Pierre

Jessica Hullman

May 31, 2016, 10:01:37 PM
to Pierre Dragicevic, Mike Glueck, David Lovis-McMahon, Transparent Statistics in HCI, Sean Munson, ROBERTSON Judy
Pierre's last comments have me envisioning a reviewer checklist of sorts, where each item is framed as a question or statement (e.g., 'Does the paper include claims that results from significance testing support the authors' hypothesis?', 'Does the summary of results discuss effect size?', etc.). Depending on whether the answer is no or yes, the reviewer could either move on to the next point or look up that item in an appendix that gives more detail on why it's a problem, and then bring up that point in their review. In contrast to a rating mechanism, which could be hard to validate, a checklist format could organize the reviewing process while educating the reviewer on finer points they aren't familiar with. People may be motivated to use a checklist if it makes reviewing easier/quicker (i.e., here are all the things I need to check, and for every violation I have a useful point to make in my review).

How much the checklist should be targeted just to statistics versus other aspects of experimental design/explication is an open question, as is what specific items it should include. But I could see it encompassing both problems (like the examples above) and ways for a reviewer to classify what type of analysis they are dealing with (this is a Bayesian hierarchical model; here's what I need to know about what to look for to evaluate it, etc.).
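(To sketch the mechanics with a toy example; the item wordings and appendix keys below are invented, and this is just one shape the structure could take:)

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str      # what the reviewer checks
    problem_if: bool   # which answer signals a problem
    appendix_key: str  # where to read more about why it's a problem

CHECKLIST = [
    ChecklistItem("Does the paper claim that significance testing alone "
                  "supports the authors' hypothesis?", True, "nhst-logic"),
    ChecklistItem("Does the summary of results discuss effect size?",
                  False, "effect-sizes"),
]

def appendix_entries_to_read(answers):
    """Given the reviewer's yes/no answers, yield the appendix entry
    for every item whose answer signals a problem."""
    for item, answer in zip(CHECKLIST, answers):
        if answer == item.problem_if:
            yield item.appendix_key
```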

Jessica

Jessica Hullman
Assistant Professor, Information School
Adjunct Assistant Professor, Computer Science & Engineering
University of Washington

Judy Kay

May 31, 2016, 10:08:33 PM
to Jessica Hullman, Pierre Dragicevic, Mike Glueck, David Lovis-McMahon, Transparent Statistics in HCI, Sean Munson, ROBERTSON Judy
On a slightly different tack, can we identify some exemplar papers and add some commentary on why they were chosen as exemplars?

People learn well from concrete examples.

Though the downside is that any one paper is clearly very specific.

But if people have some papers to suggest, I am sure others would find them very helpful.


Mike Glueck

Jun 1, 2016, 11:51:44 AM
to Judy Kay, Jessica Hullman, Pierre Dragicevic, David Lovis-McMahon, Transparent Statistics in HCI, Sean Munson, ROBERTSON Judy

David Lovis-McMahon

Jun 1, 2016, 1:35:50 PM
to Transparent Statistics in HCI, judy...@sydney.edu.au, jessica...@gmail.com, pierre....@gmail.com, dlo...@gmail.com, smu...@uw.edu, judy.ro...@ed.ac.uk
Just a quick riff with a somewhat salient comment.

Funny enough, "Marginally Significant Effects as Evidence for Hypotheses" was just published in Psych Science last week.

There is some interesting information about reporting trends in it, and since their data is available on OSF, I remixed it to look at things a couple of different ways.

(Note the apparent coding error in Developmental in 1980.)



And what really stood out to me was the apparent changes over three decades in Social Psychology.


In the 1990s the density of p-values was pretty well dispersed across the range. Then in 2000 the distribution shifted pretty strongly to a peak at .10, before swinging the other direction to a peak just above .05 in 2010. I'd have to ask around, but I wonder if this was a byproduct of the journal's guidelines changing regarding "marginal p-values."
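(Roughly, the remix was along these lines; the file and column names below are placeholders, and the actual OSF dataset may differ:)

```python
import matplotlib.pyplot as plt
import pandas as pd

# Placeholder file/column names; substitute the actual OSF dataset.
df = pd.read_csv("pvalues.csv")  # columns: journal, year, p_value

social = df[df["journal"] == "Social Psychology"]
for decade in (1990, 2000, 2010):
    in_decade = social[(social["year"] >= decade) & (social["year"] < decade + 10)]
    in_decade["p_value"].plot.density(label=f"{decade}s")

plt.xlim(0, 0.15)
plt.xlabel("reported p-value")
plt.legend()
plt.show()
```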





Pierre Dragicevic

Jun 1, 2016, 1:37:39 PM
to Mike Glueck, Judy Kay, Jessica Hullman, David Lovis-McMahon, Transparent Statistics in HCI, Sean Munson, ROBERTSON Judy
I saw this post before and the quotes are hilarious.

It's also a good example of what I was talking about, because the post's author (and he's far from being the only one) explains that the only correct approach is to report p as either significant or not significant, while in reality sources exist that recommend the exact opposite (including the Greenland et al. paper sent by David, the APA guidelines, and going back to Fisher himself).

Pierre

David Lovis-McMahon

Jun 1, 2016, 1:38:08 PM
to Transparent Statistics in HCI, jessica...@gmail.com, pierre....@gmail.com, mikeg...@gmail.com, dlo...@gmail.com, smu...@uw.edu, judy.ro...@ed.ac.uk, judy...@sydney.edu.au
I think making note of exemplar papers can have some definite benefits. One way could be to use them as the jumping-off point for specific examples in something like an FAQ: take a brief summary of the article and either real or simulated data, use it to demonstrate a particular analysis technique, and then suggest the original exemplar article for further reading.


David Lovis-McMahon

Jun 1, 2016, 1:43:59 PM
to Transparent Statistics in HCI, mikeg...@gmail.com, dlo...@gmail.com, smu...@uw.edu, judy.ro...@ed.ac.uk
Perhaps it would be beneficial to split things into a Reviewer's Guide, an Author's Guide, and an FAQ that integrates the two viewpoints? The two general guides could provide a sort of short manifesto of what the aims of transparency are for both authors and reviewers, with specific examples, best practices, and other information provided in an online FAQ / community.


Pierre Dragicevic

Jun 3, 2016, 11:18:15 AM
to David Lovis-McMahon, Transparent Statistics in HCI, Mike Glueck, Sean Munson, ROBERTSON Judy
I like the idea of splitting the manifesto and the FAQ.

Does anyone have ideas on ways of moving forward and encouraging as many people as possible to participate? I'm not opposed to restarting a doc from scratch.

Pierre


Matthew Kay

Jun 3, 2016, 12:59:15 PM
to Pierre Dragicevic, ROBERTSON Judy, David Lovis-McMahon, Sean Munson, Mike Glueck, Transparent Statistics in HCI

I like the idea of splitting, and I think it will help everyone contribute to the aspects they think are most important. We can always later revisit the state of each document to decide what seems most viable, and possibly reorganize.

A starting point could be:

- A high-level reviewer guide somewhat in line with David's suggestion: a way to evaluate work with respect to how statistics supports the claims a work is making (basically, how to evaluate the high-level contributions without getting bogged down in persnickety details). The forest.

- A reviewing FAQ somewhat like the second half of the current document, aimed at answering particular questions reviewers might have and giving authors the tools to defend their practices. The trees.

- A repository of recommended resources and exemplary HCI papers, with a paragraph each explaining their value, in line with Judy's suggestion. I think this is a good compromise between the need for an author guide and the problem of duplicating all of the methods literature.

On that last point: if that document goes well, we might also consider coming up with a community process for nominating new papers to be added to it. Perhaps even a process that gives out a "clearest stats of CHI" award to a few papers a year. Then the winners would be added to the doc along with their nomination paragraphs. This also gives the award more weight (since being added to the doc might increase the probability of citation).

Does that split sound reasonable?

---Matt

Chat Wacharamanotham

Jun 8, 2016, 9:28:36 AM
to Transparent Statistics in HCI
Hi,

This discussion is awesome! However, it may be daunting for people to catch up. (I spent 6 hours + a big whiteboard to do so; see the end of this email.) Below, I summarize the state of the discussion and add my thoughts (in blue).

Next actions:
  • Split the current document into the high-level guideline and the FAQ
  • Collect exemplary papers: I created a folder on the Drive. If any of you come across good examples, upload a PDF there, annotated with where it is good and why.

P.S. Sorry for late responses. I just came back from vacation, and starting my official employment is more hectic than I expected.

Cheers,
Chat
--------------------

SUMMARY

What are we doing?
Creating a reviewers' guide to transparent statistics at CHI. [General direction of the thread. In particular, David May 31; Pierre, June 1; Jessica, June 1]

What are we NOT doing?
Creating a guide for authors (why: see the end of this email).

Why are we doing it?
Our immediate goals are to…

1. avoid unfair rejections of statistical reports [Pierre, May 29], in particular when unfamiliar statistical methods are used [Matt, May 30] or when controversial statistical issues are involved [Pierre, June 1]

2. promote best practices in statistics [Pierre May 29 & June 1]

The long-term goal is to improve rigor in the reviews [Pierre, May 29 & Matt, May 30].

How are we going to do it? --- Content
Yes:
    • Remind reviewers of the four types of validity (conclusion, internal, construct, external) [David, May 31; Pierre, June 1]
    • If a reviewer is unsure about a statistical method used in the submission, they should avoid negatively commenting on it. [Pierre, June 1]

No:
    • Qualitative analysis [Matt, Pierre, Steve, Sean on Docs]

Unclear:
    • How much will this guideline cover issues in experimental design/explication [Jessica, June 1]

How are we going to do it? --- Style
Yes:
    • Provide concrete examples [Mike, May 31]
    • Accept that there are multiple valid styles of statistical practice [Pierre, June 1]

No:
    • Sanctioning or prohibiting particular practices [Mike, May 31]

How are we going to do it? --- Format
  • High-level guide [Matt, May 30; Judy May 30; Matt June 3] This should have the following properties:
    • Short enough to easily convince most meta-reviewers (ideally all reviewers) to read it as a whole.
    • Can incorporate a checklist (or decision tree) giving reviewers actionable steps while they review papers [Jessica, June 1]
    • Should include excerpts of concrete examples contrasting "prefer" vs. "avoid"
  • FAQ [Matt, May 30 and June 3]
    • Should be randomly accessible by reviewers (no need to read top-to-bottom) [Matt, June 3]
    • Should be cite-able by authors (and recognized by CHI PC) when controversial issues arise in the reviews [Matt, May 30 and June 3].
  • List of exemplary papers [Judy, June 1; Matt, June 3]
    • We should indicate the specific part and write down the reasons why we think that part is exemplary.


Ideas for the future
  • Exemplary paper: nomination process [Matt, June 3]
  • Versioning the guideline [Matt, May 30], but we need to avoid arguments if authors refer to previous versions of the guideline. [Judy, May 30]
  • Short engaging videos about common problems


Why are we not doing a guide for authors?
  • There are existing resources [Pierre, June 1]
  • It doesn't need this many people and/or approval from CHI PC [Pierre, June 1]
  • Difficult to scope [Matt, May 30]
  • The authors can also benefit from the guide for the reviewer [Pierre, June 1]




Matthew Kay

Jun 8, 2016, 5:54:44 PM
to Chat Wacharamanotham, Transparent Statistics in HCI
Thanks everyone for the lively discussion so far, and thanks Chat for the excellent summary!

To help us move forward, I've split the documents out per our current approximate consensus and made skeletons for the various new documents. In addition to those Chat described above, I also added a document for resources for authors (for things like textbooks, websites, or methods papers; as distinct from exemplary papers in HCI).

I added a README file describing the new organization of the Drive to help give everyone a sense of what the documents are and how to contribute: https://docs.google.com/document/d/1e4B5mP5Jcv0sr5HS1fkwYAKQbB4xhIFv6-hFaQWKmTg/edit. I recommend starting there.

Thanks again everyone!

---Matt


Pierre Dragicevic

Jun 11, 2016, 10:27:37 AM
to Matthew Kay, Chat Wacharamanotham, Transparent Statistics in HCI, Anne Roudaut
Thanks Chat for the great summary, and Matthew for reorganizing the Google Drive!

Perhaps a deadline would help move things forward? Anne, do you have a deadline to suggest?

Pierre


Eric Hekler

Jun 13, 2016, 12:05:48 PM
to Pierre Dragicevic, Matthew Kay, Chat Wacharamanotham, Transparent Statistics in HCI, Anne Roudaut
Hi, all.

I'm sorry for being "out" during these discussions. I just finished my NSF deadline, so I can come back and contribute now. I'll read through Chat's summary and look through Matt's rework on Wednesday, but I just wanted to throw out there that if anyone has a specific task for me, feel free to assign it to me (I'm looking at you in particular, Matt) and I'll do it.

Beyond that, I'll try to jump in soon (I'm particularly interested in taking the time to think through Paul's formulation of the methodological problem we are running into).

Eric


--
Eric Hekler, Ph.D.
Assistant Professor
Director, Designing Health Lab
School of Nutrition and Health Promotion
Barrett Honors Faculty
Senior Sustainability Scientist
Arizona State University



Matthew Kay

Jun 13, 2016, 1:42:20 PM
to Eric Hekler, Pierre Dragicevic, Chat Wacharamanotham, Transparent Statistics in HCI, Anne Roudaut
Thanks Eric! Beyond the high-level rework I probably won't get back to this in earnest until after Thursday this week (my dissertation defense).

There are plenty of TODOs on the stack currently. I tend to prefer an expand-and-contract approach to writing, and I think we have a lot of expand left to do before we start cutting and refining. 

- In the reviewing guidelines, the current state is that we have split them, but not truly reorganized the content. The FAQ introduction needs to be rewritten based on what we've proposed (see Chat's summary). 

- The Guidelines document probably deserves a short intro, and then we need to start iterating on what guidelines we want and in what form. I think that we have plenty of inspiration to draw upon in writing that document: David Lovis-McMahon's email about focusing on assessing validity rather than getting bogged down in details, the recent paper Kasper suggested (http://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1004961), and Pierre's high-level principles (currently at the top of the FAQ document, but probably should be moved to Guidelines) could be good places to start.

- Contributing suggested resources to the Exemplar Papers and the Resources for Authors documents.

- Organizing or fleshing out any of the other proposals in the Main Proposal document.

Any of the above would be well-appreciated ways to dive in, as would any other way you can think of.

Best,

---Matt

P.S. Incidentally this is an excellent question and I'm adding this TODO list to the README document :)

Matthew Kay

Jun 13, 2016, 1:50:20 PM
to Anne Roudaut, Pierre Dragicevic, Chat Wacharamanotham, Transparent Statistics in HCI
Given that, I think we should aim for draft Reviewer Guidelines and Reviewer FAQ by July 15th.

The question then is: are there other things we might like to propose at that time? For example, changes to PCS?

---Matt

On Mon, Jun 13, 2016 at 12:37 AM, Anne Roudaut <roud...@gmail.com> wrote:
Hi all,

Regarding the deadline: technically we won't send anything to ACs/reviewers before mid-September. However, if we want to share this initiative with other sub-committees, I would need a little bit more time to draft something for the ACs (it will be embedded with other guidelines). If I can have something by the 15th of July, that would be brilliant.

Anne




Chat Wacharamanotham

Jun 20, 2016, 12:41:02 PM
to Transparent Statistics in HCI
Hi,

I extracted some points from the old reviewer guideline for a draft of the high-level guideline.

In drafting this, I selected a set of points that are either essential or teasing enough to get readers to dig into the FAQ and other resources. Also, I tried to prime readers with positive emotion by using cheerful, informal text, and I added some cartoons. ;)

I'd love to hear your comments, particularly on the following aspects:
  • Are there any essential points that should be added back in?
  • How does the structure sound? (problem, mindset, judging, praising, resources)
  • Will the informal, cheerful tone be potentially problematic?
  • There are a lot of TODOs, mostly concrete examples. If you happen to have some great examples, add them in!

Cheers,
Chat

Martin Schmettow

Aug 30, 2016, 2:30:32 PM
to Transparent Statistics in HCI, pierre....@gmail.com, dlo...@gmail.com
Hi everyone,

I just went over the reviewer guidelines and like them so far, in tone and structure. For the judging-statistics parts I may have something to add, but I want to discuss it first.

What I think could be carved out further is whether the style of analysis matches the research goal. For inferential statistics we currently have two styles: quantitative and hypothesis testing. Frequently, authors make the wrong choice, using a hypothesis test when they should actually ask: how strong is the effect? In such a case, the reviewer could suggest changing the style of analysis in a major revision. For the authors this is no catastrophe, as they only have to redo the analysis in the proper style. They may have to temper their conclusions, though.

For this to work, reviewers would need some additional heuristics to identify the type of research (applied or theoretical) and whether the inferential style matches the stakes.
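(A toy contrast of the two styles, with invented numbers; the same data reported two ways, scipy assumed:)

```python
import numpy as np
from scipy import stats

# Made-up task-time data for two conditions.
a = np.array([12.1, 10.3, 9.8, 11.5, 10.9])
b = np.array([9.2, 8.9, 10.1, 8.5, 9.6])

# Hypothesis-testing style: is there an effect at all?
res = stats.ttest_ind(a, b)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")

# Quantitative style: how strong is the effect?
n1, n2 = len(a), len(b)
diff = a.mean() - b.mean()
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
lo, hi = stats.t.interval(0.95, df=n1 + n2 - 2, loc=diff, scale=se)
print(f"difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```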

background:

Two weeks ago I went to Lee & Wagenmakers' workshop "Bayesian Cognitive Modelling". Highly recommended! Still, what troubled me at some point was their critical attitude towards credible intervals. They favored the Bayes factor, which in theory is cool, but a diva in practice. After a brief discussion we agreed that, in applied science, with its real stakes, the quantitative approach works best, as it seamlessly connects with rational decision making. In cognitive psychology, people are testing formal theories under very controlled conditions, and hypothesis testing makes perfect sense.

CU, Martin.