to read or not to read


Glenn Hampson

May 3, 2021, 7:29:27 PM
to The Open Scholarship Initiative

Here’s a good critique (“Scientific success by numbers,” nature.com) by Cassidy Sugimoto* of a book that (based on her review---I haven’t actually read it) promises to help scientists “navigate their careers” and “maximize [their] odds of success” by understanding how to work the metrics that authors spend so much time complaining about. If this is a fair summary, then yikes. I mean, I’m sure this book will have an audience, but it seems a bit like publishing a book on how to exploit tax loopholes or cheat on exams without getting caught…. See Amazon.com: The Science of Science (9781108492669): Wang, Dashun; Barabási, Albert-László: Books.

 

On the other hand, no less than Magdalena Skipper reviewed this book and wrote (as noted on Amazon): “In their engaging book, Wang and Barabási take a fresh look at the science of science. They convincingly argue that in the age of big data and AI applying the scientific method to science itself not only helps understand how science works but may even enhance it. We are compelled to consider the determinants of individual careers and what this means in the age of large-scale scientific collaborations. These and other questions around the meaning of scientific impact, in academia and beyond, make the book highly relevant to scientists, academic administrators and funders alike. By the time the final, forward-looking chapter ends we are hooked on all the correlations and predictions, and so it is only fitting that we are invited to join in, to help shape the field which is likely to be driven by a human-machine collaboration.”

 

Has anyone else actually read this? There seems to be a difference of opinion between these experts on whether it’s worth the $30…

 

Best,

 

Glenn

 

 

*formerly the head of NSF’s Science of Team Science division

Bryan Alexander

May 3, 2021, 9:45:39 PM
to Glenn Hampson, The Open Scholarship Initiative
I haven't read the book, but that's a damning critique.

As a public and publicly-funded effort, the conversations on this list can be viewed by the public and are archived. To read this group's complete listserv policy (including disclaimer and reuse information), please visit http://osinitiative.org/osi-listservs.



Biagioli, Mario

May 4, 2021, 12:00:02 AM
to Glenn Hampson, The Open Scholarship Initiative
I agree that Sugimoto's review is both critical and powerful. I also appreciate her thoughtful position on metrics in general, and her and her colleagues' call to integrate both quantitative and qualitative evidence. Just based on that, I'd be biased toward agreeing with her critique of the limitations of Wang and Barabási's book, and toward buying into the comforting feeling that her critique enables: if we expose the limitations of some of the new science of science, we might contribute to making their research more nuanced and empirically inclusive. We could then use some of their more interesting results while integrating them with qualitative, contextual evaluations. (Essentially the Leiden Manifesto expanded to the science of science.)

My gut feeling, however, is that that ship has sailed. I have the Wang et al. book on my desk (unread) but have read some of the work of Wang, Song, and Barabási over the years (starting with a 2015 Science article about the possibility of predicting the long-term impact of articles based on short-term evidence), and theirs is no Derek de Solla Price stuff. I believe that if you read any of their and their fellow travelers' work in Science, Nature, or PLOS, you will probably agree with me. (Bergstrom's 2018 review article in Science is quite comprehensive.)

I tend to consider myself a critic of metrics -- probably more so than Sugimoto -- but I don't think it takes much to realize that, even if you do NOT appreciate this kind of research, the science of science represents the next generation of metrics, not its involution. The goal of the science of science is predictive, not just descriptive as in traditional evaluation metrics, which look like actuarial tables by comparison.

I wish I could agree with Sugimoto that, moving forward, we could still achieve the proper balance between quantitative and qualitative, bringing the best evidence of both kinds together. But I do not think this new generation of science scientists thinks that way. They are happily running with their models, building new and more sophisticated ones, making both specific/defensible and grand/indefensible claims as the genre allows. And their claims will probably be taken seriously by administrators who take numbers over nuanced narratives any day. That's in part because those admins have been trained to think that way by good old metrics. (I wouldn't be surprised if they saw this work as being in the business of providing indicators like JIFs, but better.) In sum, metrics have let the science of science out of the bag, and I don't think it's going back in. To the contrary, it could be that traditional descriptive metrics will look, twenty years from now, the way De Solla Price's work looks to us today.

This is 90% speculation, but I'm ready to bet real money -- like $5 -- on it!




Mario Biagioli
Distinguished Professor, School of Law and Department of Communication, UCLA





Ivo Grigorov

May 6, 2021, 3:14:47 PM
to Biagioli, Mario, Glenn Hampson, The Open Scholarship Initiative
Hardly THE method to poll researchers on the need for such books, but it's curious how this Twitter poll evolved:

https://twitter.com/oaforclimate/status/1389482380629581828?s=21


Please note that I work flex hours from time to time, so if you receive this email outside your working hours, there are no expectations to reply immediately. 


Ivo Grigorov 
Research Secretariat
Grant Support & Open Science

 

DTU Aqua

 

Danmarks Tekniske Universitet/

Technical University of Denmark


Institut for Akvatiske Ressourcer/National Institute of Aquatic Resources

Kemitorvet. Bygning 202

2800  Kgs. Lyngby

Direct +45 21 31 63 74 
iv...@aqua.dtu.dk
www.aqua.dtu.dk

Twitter | LinkedIn Profile | Skype Contact: ivo_grigorov


Fiore, Steve

May 6, 2021, 8:17:42 PM
to Biagioli, Mario, Ivo Grigorov, Glenn Hampson, The Open Scholarship Initiative
Hi Everyone - like others, I ordered the book (just received) but have yet to read it. I am familiar with the field, though, and, within it, Wu, Wang, and Evans (2019) is one of my favorite papers of recent years (https://www.nature.com/articles/s41586-019-0941-9). That paper was particularly interesting in how it analyzed disruption, and in how it integrated machine learning and natural language processing with a big data approach. More importantly, they replicated their methods via analyses of GitHub repos, so the paper compares three different domains in its study of disruption and team size (i.e., teams based upon publications, upon patents, and upon GitHub repos). Second, they provide a number of interesting metrics. For example, they look at both those who build on prior knowledge and those who disrupt the normal growth of knowledge, using a measure developed/published in Management Science ("A Dynamic Network Measure of Technological Change") that is really cool in that it yields a new metric for disruption. But I was also impressed with how they dug deeper into the bibliometrics of not simply who was being cited, but also what was being cited (reviews or empirical work), as well as what was cited in the papers being cited (e.g., the original (old) papers or merely reviews of that old work). Also, the visualizations of these different citation paths are, themselves, beautiful. These points are merely to illustrate what I think can be done well with this form of research.
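[For readers unfamiliar with the disruption measure mentioned above: it can be sketched in a few lines. This is an illustrative toy, not the authors' code; the paper IDs and data structures are made up. The idea: a later paper that cites a focal paper but none of the focal paper's references treats the focal paper as eclipsing its sources (disruptive), while one that cites both treats it as consolidating.]

```python
def cd_index(focal, focal_refs, citing_corpus):
    """Toy consolidation/disruption (CD) index for one focal paper.

    focal: id of the focal paper
    focal_refs: set of ids that the focal paper cites
    citing_corpus: dict mapping later-paper id -> set of ids it cites
    Returns a value in [-1, 1]: positive = disruptive, negative = consolidating.
    """
    n_i = n_j = n_k = 0
    for cited in citing_corpus.values():
        cites_focal = focal in cited
        cites_refs = bool(cited & focal_refs)
        if cites_focal and not cites_refs:
            n_i += 1   # cites the focal paper only: focal eclipses its sources
        elif cites_focal and cites_refs:
            n_j += 1   # cites focal AND its sources: consolidation
        elif cites_refs:
            n_k += 1   # bypasses the focal paper, citing only its sources
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# Hypothetical mini-corpus: F cites A and B; P1..P3 are later papers
corpus = {
    "P1": {"F"},        # cites F alone -> disruptive signal
    "P2": {"F", "A"},   # cites F and a predecessor -> consolidating
    "P3": {"B"},        # bypasses F entirely
}
print(cd_index("F", {"A", "B"}, corpus))  # 0.0 here: the signals balance out
```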

With all of that said, like Cassidy articulates so well in her review, I have reservations about how far to take this blending of bibliometrics with network science.  Cassidy notes a number of the limitations that come from this kind of research, and, more importantly, the negative implications of using them for guidance.  I'd like to add a complement to that and focus on what I see as limitations of the Science of Science (SciSci) for understanding the collaborative elements, that is, the actual teamwork in the science.  As background, last year, on the Science of Team Science listserv, a colleague shared a new SciSci type paper examining international collaborations studying COVID-19 and requested comments on the article.  After reading that paper, I provided some comments that could be similarly levied against much of the SciSci field for overreach in the kinds of inferences being made on data so decontextualized.  For those interested, below is the original post on the new paper ("Consolidation in a crisis: Patterns of international collaboration in early COVID-19 research"), and following that are my comments.  I've highlighted my points about what is missing in SciSci studies of scientific collaboration - that is, although they can provide some sense of ‘what’ is occurring, they're limited in helping us understand any of the “why” these findings occurred.

Best,
Steve


From: A public forum for scientists. <scien...@sciencelistserv.org>
Sent: Thursday, July 23, 2020 1:04 PM
To: scien...@sciencelistserv.org <scien...@sciencelistserv.org>
Subject: Message: 9

 

Dear Friends,

Our paper on international collaboration on COVID-19 research is published now in PLOS One:

 

Abstract:

This paper seeks to understand whether a catastrophic and urgent event, such as the first months of the COVID-19 pandemic, accelerates or reverses trends in international collaboration, especially in and between China and the United States. A review of research articles produced in the first months of the COVID-19 pandemic shows that COVID-19 research had smaller teams and involved fewer nations than pre-COVID-19 coronavirus research. The United States and China were, and continue to be in the pandemic era, at the center of the global network in coronavirus related research, while developing countries are relatively absent from early research activities in the COVID-19 period. Not only are China and the United States at the center of the global network of coronavirus research, but they strengthen their bilateral research relationship during COVID-19, producing more than 4.9% of all global articles together, in contrast to 3.6% before the pandemic. In addition, in the COVID-19 period, joined by the United Kingdom, China and the United States continued their roles as the largest contributors to, and home to the main funders of, coronavirus related research. These findings suggest that the global COVID-19 pandemic shifted the geographic loci of coronavirus research, as well as the structure of scientific teams, narrowing team membership and favoring elite structures. These findings raise further questions over the decisions that scientists face in the formation of teams to maximize a speed, skill trade-off. Policy implications are discussed.


Happy to have comments.

Caroline

 

Caroline S. Wagner
Milton & Roslyn Wolf Chair in International Affairs
John Glenn School of Public Affairs
The Ohio State University
Columbus, Ohio USA 43210


 

From: Fiore, Steve <sfi...@IST.UCF.EDU>
Sent: Sunday, July 26, 2020 4:36 PM
To: SCIT...@LIST.NIH.GOV
Subject: Re: International collaboration on COVID-19

 

Thanks for sharing, Caroline. This is an important finding about what happens when science responds to a global crisis. Finding that researchers seem to rely on familiarity during emergencies shows that they are not really different from most others. More interesting are the findings about small teams, suggesting, also, a tie to research in other fields on teams and adaptability. Finally, most important are the results about elite universities. As you note, relying on reputation, rather than on what might be more relevant expertise, or, in fact, greater talent, at non-elite universities, is another marker for future study. So this is an informative bibliometric analysis contributing to our understanding of science at a macro level.

 

But, in your paper, you note that researchers lack a clear focus.  How do you know that based upon analyzing a distribution of publications?  Focus depends on the level of analyses.  The teams, themselves, I’m sure, were quite focused in whatever it was they studied.  Perhaps, at the level of the ‘field’, there was what appears to be a lack of focus.  But, at the same time, one person’s lack of focus (or field's) is another’s exploration stage.  With that said, in the paper, there are also speculations about coordination costs.  How do we know these teams had high coordination costs?  They were not queried about them.  Similarly, the paper says that teams weighed an inherent tradeoff about their collaboration choice.  Again, how do we know that is the case?  Inferring psychological processes from citation analyses seems problematic - in the absence of asking the scientists, we remain too disconnected from those actually doing the work.   


In short, although we have some sense of ‘what’ is occurring, what this type of analysis does not help us understand is any of the “why” behind these findings. Short of the exogenous trigger event, a global pandemic, we have no sense of the causal factors for any collaboration outcomes, let alone processes. The paper speaks of team structure, but from the study of teams, we know there is more to team structure than simply size or international membership. Said another way, each one of the papers analyzed has a team behind it, and, associated with that team, a rich set of interaction processes, a particular form of interdependency, a unique hierarchy, a composite of complementary cognition, shared and unshared attitudinal profiles, etc. None of these can be understood by looking only at publications.

Although bibliometric and scientometric studies, with their tens of thousands of data points, might be able to inform macroscopic patterns of collaboration, they are but one lens onto this phenomenon. The Science of Team Science was specifically created to provide the theoretical and methodological foundation through which to understand not only the “what” of teamwork, but also the “why.” Behind every single one of the thousands of preprints and publications making up a bibliometric data set are hundreds of hours of individual work and teamwork. Although it is fine to speculate about the macro-phenomena bibliometrics and scientometrics can uncover, such analyses can only go so far, and any recommendations about policy decisions should be similarly tempered. These types of analyses are not appropriate for making inferences about the actual teamwork processes and emergent phenomena occurring at the level of the team - that is, the rich world of interaction ‘in’ the teams producing (or failing to produce) the publications that are treated merely as single data points in bibliometric citation analyses.

 

Best,

Steve

 

 






Wagner, Caroline

May 7, 2021, 8:37:11 AM
to Fiore, Steve, Biagioli, Mario, Ivo Grigorov, Glenn Hampson, The Open Scholarship Initiative
Dear Steve and friends,
Thanks for this thoughtful treatment. The problem that Cassidy Sugimoto points out, and that Steve touches on here, is that many computationally oriented researchers come into science studies for the data, without an understanding of the field or a theoretical basis for examining the large-scale patterns they observe. Unfamiliar with the history of the field, they become excited when they derive patterns from connections, but for many, these inquiries may look beautiful yet, without a theoretical basis, do not add to our understanding of science dynamics. Moreover, they claim to have invented a new field, when the study of science communications is nearly a century old (if we go back to Lotka 1926).

That said, it is possible to apply theories of dynamics to infer what is happening within large groups--ones that operate according to patterns unbeknownst to the actors themselves. Sociology/dynamics does provide us with theories about how communications networks operate and what they show about knowledge communities. In the COVID work, we inferred a lack of focus in the research community by studying keyword clusters compared to the pre-COVID work. We saw that research related to COVID measured close to chaos in the early days of the pandemic while the community tried to understand what was happening. (The same communities in pre-COVID days were highly structured.) This is not to pass judgment on any specific team or teams, but to examine focus or lack of focus at the community level. Moreover, our analysis was limited to collaboration at a distance -- international collaboration -- which often does not include face-to-face connection. This configuration changes the nature of communications, reducing tacit exchange and emphasizing explicit codification, which makes it 'easier' to study. The structure of the networked links and communications determines behavior to a greater extent when working remotely than we would expect from groups working face-to-face. 
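[To make the keyword-cluster idea above concrete, here is a toy sketch of one way such a "focus" comparison could look - this is my illustration, not the method from the paper, and the keyword lists are invented. A flatter, more entropic keyword distribution reads as a less focused community; a concentrated one as more structured.]

```python
import math
from collections import Counter

def keyword_entropy(keywords):
    """Shannon entropy (in bits) of a keyword distribution.
    Higher entropy = attention spread thinly = less community 'focus'."""
    counts = Counter(keywords)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical keyword pulls from two periods of one research community
pre_covid = ["vaccine"] * 6 + ["spike-protein"] * 2          # concentrated
early_covid = ["vaccine", "masks", "modeling", "spike-protein",
               "serology", "lockdown", "ventilators", "origins"]  # scattered

print(round(keyword_entropy(pre_covid), 3))    # 0.811 bits: focused
print(round(keyword_entropy(early_covid), 3))  # 3.0 bits: near-uniform spread
```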

The reputation economy of scholarship encourages scholars to discover something 'new.' The need for recognition can encourage overreach, and certainly the introduction of computational analysis to the social sciences and humanities is such a moment. Incorporating computational scientists into SSH fields may require help from team scientists.
Caroline 







The Ohio State University
Caroline S. Wagner, PhD

Wolf Chair in International Affairs
John Glenn College of Public Affairs Battelle Center for Science & Engineering Policy
Page Hall 210U, 1810 College Road N, Columbus, OH 43210
6142927791 Office / 614-206-8636 Mobile
wagne...@osu.edu / http://glenn.osu.edu/faculty/



Glenn Hampson

May 7, 2021, 9:20:38 AM
to Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative

Is there a name for this phenomenon when a researcher Zoom-bombs a field that isn’t theirs? (Or should there be?)


David Wojick

May 7, 2021, 9:35:33 AM
to Glenn Hampson, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative
Can you describe this phenomenon, Glenn? Sounds interesting.

David


Joe Esposito

May 7, 2021, 9:37:06 AM
to David Wojick, Glenn Hampson, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative
“Democracy at work”



Glenn Hampson

May 7, 2021, 9:58:45 AM
to David Wojick, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative

LOL. Just as Caroline was describing---e.g., “Unfamiliar with the history of the field, they become excited when they derive patterns from connections, but for many, these inquiries may look beautiful but, without a theoretical basis, they do not add to understanding science dynamics.” We see this all the time as scientists try to apply their expertise across disciplines in search of patterns and connections, but these patterns and connections are sometimes a better fit for the Journal of Spurious Correlations than for actual science.

Lisa Hinchliffe

May 7, 2021, 10:40:37 AM
to Glenn Hampson, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative
If this book were a blog post, I might think of this differently, but ... given that it was published by CUP, which I presume has book manuscripts reviewed by experts in the field, is it possible that the field is not of one mind on these topics? Or, alternatively, that it is undergoing disruption?

Disclosure: I have not read the book.  

Lisa


___

Lisa Janicke Hinchliffe
lisali...@gmail.com





David Wojick

May 7, 2021, 12:33:52 PM
to Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, Glenn Hampson, The Open Scholarship Initiative

Speaking of dynamics, I discovered, or at least named, a basic case many years ago. I used it a lot but never published on it so here is a snapshot.


It is called the "issue storm." The issue storm is the pattern of the message traffic that arises and evolves when a major issue hits an organization or community. The point is that these patterns and their evolution can vary greatly from case to case. Moreover, these differences can make a lot of difference when one is dealing with an issue storm, or trying to understand what is going on.


We OSI people live in an issue-driven environment. Issue storms are often unpredictable, which is why we often cannot say what we will be working on in a week, sometimes in a day.


They can spring up, grow and move suddenly, or very slowly. They vary greatly in size and duration, as well as structure. Just like storms in the weather. They also consume cognitive resources, like thought and attention, sometimes in vast quantities. For example, when a big news story breaks, millions or even billions of people can be thinking and communicating about it at the same time. That is a truly big cognitive event. 


Unfortunately, mapping and seeing an issue storm, present or past, requires detailed data on the message traffic. At a minimum we need the identity and timing of each message. Of course content is also needed if one wants to observe the reasoning. But one can get a qualitative sense without that data.
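[At its simplest, the data described here - the identity and timing of each message - already supports a crude storm map: bucket the traffic by hour and count volume and distinct participants. A minimal sketch, with made-up message records:]

```python
from collections import defaultdict
from datetime import datetime

def storm_profile(messages):
    """Group (sender, timestamp) records into hourly buckets.
    Returns {bucket_start: (message_count, distinct_senders)},
    a crude intensity profile of an issue storm."""
    buckets = defaultdict(list)
    for sender, ts in messages:
        bucket = ts.replace(minute=0, second=0, microsecond=0)
        buckets[bucket].append(sender)
    return {b: (len(s), len(set(s))) for b, s in sorted(buckets.items())}

# Hypothetical traffic: a small flurry in the 9:00 hour, tapering after
traffic = [
    ("glenn", datetime(2021, 5, 7, 9, 20)),
    ("david", datetime(2021, 5, 7, 9, 35)),
    ("joe",   datetime(2021, 5, 7, 9, 37)),
    ("glenn", datetime(2021, 5, 7, 9, 58)),
    ("lisa",  datetime(2021, 5, 7, 10, 40)),
]
for hour, (n_msgs, n_people) in storm_profile(traffic).items():
    print(hour.strftime("%H:00"), n_msgs, n_people)
```

[With message content added, the same bucketing could be extended to track which sub-issues each flurry is about - but identity and timing alone already show the shape of the storm.]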


David



Glenn Hampson

May 7, 2021, 3:53:05 PM
to Lisa Hinchliffe, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative

Interesting point, Lisa. I hope I don't sound too grumpy here, but I’ve never considered Sci of Sci a settled field anyway, so yes, maybe you’re right. Their approach, though, IMHO, has been the opposite of Zoom bombing. Rather than saying “I’m an expert in field x and I’m going to use my expertise to make observations about another field,” they have said “I’m an expert in field x and I’m going to ignore what experts in field y have said.” Specifically, rather than build on extensive work in psych, communication, and marketing, sci of sci has tried to reinvent the wheel on a lot of fundamental questions - like, for instance, how to reach skeptical audiences. Granted, there’s a lot of niche work involved, but there were just a million Sackler panels a few years ago on Sci of Sci that could have been easily answered at just about any marketing 101 conference. So, bombs away, I think, or at least, make room for other ideas. Apologies again if I sound grumpy (I know I am…these 3 a.m. conferences are brutal…).

 

Good weekend,

 

Glenn

 


Lisa Hinchliffe

May 7, 2021, 9:48:21 PM
to Glenn Hampson, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative
Having now read the opening chapter - which points back to Thomas Kuhn as one of the science of science scholars - I see less a tone of "we invented something"; instead, it seems to say "we are applying this thing that others have also been doing to some particular use cases." Granted, I'm only through the first chapter!

Lisa

David Wojick

May 8, 2021, 4:19:48 PM
to Lisa Hinchliffe, Glenn Hampson, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative
Kuhn et al. is a good place to start. I was in that revolution in Phil Sci as a grad student. Our slogan was "science is as science does," in contrast to the prior school, which wanted to say how science should be (Popper, for example).

So Sci Sci is at least 80 years old. Taking the next step, citation network analysis is also old, at least 40 years or so. So what these folks are hyping is likely the analysis of newer networks, say along the lines of altmetrics. Part of the problem may be that Barabási is famous for unmercifully hyping network analysis. I see one of his books is even on "The Universal Laws of Success". Woohoo!

I have always thought that citation analysis was unsatisfactory, in part because the citation relation is extremely vague. So looking at other networks is certainly worthwhile (most are fragments of issue storms). That this is the key to happiness is less certain.

Maybe you can let us know, Lisa. 

Speaking of happiness, I have a conjecture that I do not see reflected in the discussions of impact: impact, however it is defined, is probably a zero-sum game. There is only so much to go around, so not everyone can increase theirs, quite the contrary. Mind you, this may only be true of normal science. In revolutions there is a lot more room for greatness. Just a thought.

David

On May 7, 2021, at 10:48 PM, Lisa Hinchliffe <lisali...@gmail.com> wrote:




Elizabeth Gadd

unread,
May 8, 2021, 4:31:07 PM5/8/21
to Glenn Hampson, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative
Epistemic trespassing. https://academic.oup.com/mind/article/128/510/367/4850765

Dr Elizabeth Gadd FHEA

Research Policy Manager (Publications)

Research and Enterprise Office

Loughborough University

Loughborough, UK, LE11 3TU

 

Chair: INORMS Research Evaluation Working Group

Chair: Lis-Bibliometrics

Champion: ARMA Research Evaluation SIG

 

Phone: +44 (0)1509228594

Twitter: @lizziegadd

Web: https://lizziegadd.wordpress.com/

Working hours: M: 8.30-5/ Tu: 8.30-5/ W: 8.30-3/ F: 8.30-12.30


On 7 May 2021, at 14:21, Glenn Hampson <gham...@nationalscience.org> wrote:



Is there a name for this phenomenon when a researcher Zoom-bombs a field that isn’t theirs? (Or should there be?)

 

From: Wagner, Caroline <wagne...@osu.edu>
Sent: Friday, May 7, 2021 5:37 AM
To: Fiore, Steve <sfi...@ist.ucf.edu>; Biagioli, Mario <biag...@law.ucla.edu>; Ivo Grigorov <iv...@aqua.dtu.dk>
Cc: Glenn Hampson <gham...@nationalscience.org>; The Open Scholarship Initiative <osi20...@googlegroups.com>
Subject: Re: to read or not to read

 

Dear Steve and friends,

Thanks for this thoughtful treatment. The problem that Cassidy Sugimoto points out, and that Steve touches on here, is that many computationally oriented researchers come into science studies for the data, without an understanding of, or theoretical basis for, the large-scale patterns they observe. Unfamiliar with the history of the field, they become excited when they derive patterns from connections, but without a theoretical basis these inquiries may look beautiful while adding little to our understanding of science dynamics. Moreover, they claim to have invented a new field, when the study of science communications is nearly a century old (if we go back to Lotka 1926).

 

That said, it is possible to apply theories of dynamics to infer what is happening within large groups--ones that operate according to patterns unbeknownst to the actors themselves. Sociology/dynamics does provide us with theories about how communications networks operate and what they show about knowledge communities. In the COVID work, we inferred a lack of focus in the research community by studying keyword clusters compared to the pre-COVID work. We saw that research related to COVID measured close to chaos in the early days of the pandemic while the community tried to understand what was happening. (The same communities in pre-COVID days were highly structured.) This is not to pass judgment on any specific team or teams, but to examine focus or lack of focus at the community level. Moreover, our analysis was limited to collaboration at a distance -- international collaboration -- which often does not include face-to-face connection. This configuration changes the nature of communications, reducing tacit exchange and emphasizing explicit codification, which makes it 'easier' to study. The structure of the networked links and communications determines behavior to a greater extent when working remotely than we would expect from groups working face-to-face. 
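(Editor's aside: for readers curious what a "close to chaos" vs. "highly structured" comparison of keyword clusters might look like in practice, here is a toy illustration. This is not Caroline's team's actual method, and the keyword lists are invented; it simply uses the normalized Shannon entropy of keyword-to-cluster assignments as a stand-in for a structure measure.)

```python
from collections import Counter
from math import log2

def normalized_entropy(cluster_assignments):
    """Normalized Shannon entropy of keyword-to-cluster assignments:
    0.0 = everything in one cluster (highly structured),
    1.0 = spread evenly across many clusters (close to chaos)."""
    counts = Counter(cluster_assignments).values()
    total = sum(counts)
    k = len(counts)
    if k < 2:
        return 0.0
    h = -sum((c / total) * log2(c / total) for c in counts)
    return h / log2(k)

# Made-up data: pre-COVID keywords concentrate in a few clusters;
# early-COVID keywords scatter near-uniformly across many small ones.
pre_covid   = ["epidemiology"] * 12 + ["virology"] * 3 + ["vaccines"]
early_covid = ["epidemiology", "virology", "vaccines", "modeling",
               "policy", "serology", "imaging", "ethics"] * 2

print(normalized_entropy(pre_covid))    # lower: structured community
print(normalized_entropy(early_covid))  # 1.0: near-uniform, "close to chaos"
```

The contrast between the two scores is the kind of community-level signal being described, though any real analysis would of course work from actual co-occurrence networks rather than hand-assigned labels.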

 

The reputation economy of scholarship encourages scholars to discover something 'new.' The need for recognition can encourage overreach, and certainly the introduction of computational analysis to the social sciences and humanities is such a moment. Incorporating computational scientists into SSH fields may require team scientists to help us.

Caroline 

 

 

 

 

 

 

The Ohio State University
Caroline S. Wagner, PhD
Wolf Chair in International Affairs
John Glenn College of Public Affairs Battelle Center for Science & Engineering Policy
Page Hall 210U, 1810 College Road N, Columbus, OH 43210
6142927791 Office / 614-206-8636 Mobile
wagne...@osu.edu / http://glenn.osu.edu/faculty/


Glenn Hampson

unread,
May 8, 2021, 10:14:57 PM5/8/21
to Elizabeth Gadd, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative

Aha! It’s a paywalled article, but the abstract reads: “Epistemic trespassers judge matters outside their field of expertise. Trespassing is ubiquitous in this age of interdisciplinary research and recognizing this will require us to be more intellectually modest.”

 

From: Elizabeth Gadd <E.A....@lboro.ac.uk>
Sent: Saturday, May 8, 2021 1:31 PM
To: Glenn Hampson <gham...@nationalscience.org>

David Wojick

unread,
May 9, 2021, 4:54:37 PM5/9/21
to Glenn Hampson, Elizabeth Gadd, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative
If you want to read it try https://www.academia.edu/34743123/Epistemic_Trespassing

However, if you are arguing that these authors are not qualified to write a book on the science of science I would like to hear your argument.

David

On May 8, 2021, at 11:14 PM, Glenn Hampson <gham...@nationalscience.org> wrote:



Joyce Ogburn

unread,
May 10, 2021, 10:21:11 AM5/10/21
to David Wojick, Glenn Hampson, Elizabeth Gadd, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative

If you read the article, the author doesn’t cite much, if any, of the history of science literature that might dispute his thesis. He uses Linus Pauling as an example of a poacher, but cites only one source to support this assertion, one that someone brought to his attention, which suggests he doesn’t know this field at all. Pauling is a complicated and influential figure in 20th-century science whose contributions and failures are well documented, so perhaps a different example of a failed crossover would have served better. Also, he uses only a few examples drawn from high-profile figures. What happens outside the attention of the news media and the popular press? He finds fault with Dawkins discussing religion, but what about the biologist E.O. Wilson and his writing on the consilience of science and religion? Does that follow the same pattern? There are not enough examples, or enough in-depth analysis of why these crossovers were not successful.

 

A host of physicists crossed over into molecular biology and made significant contributions to the study and application of genetics. Biochemists contribute to medicine all the time. What about these stories?

 

Joyce

 


From: osi20...@googlegroups.com <osi20...@googlegroups.com> on behalf of Fiore, Steve <sfi...@ist.ucf.edu>
Sent: Thursday, May 6, 2021 8:17 PM
To: Biagioli, Mario <biag...@law.ucla.edu>; Ivo Grigorov <iv...@aqua.dtu.dk>
Cc: Glenn Hampson <gham...@nationalscience.org>; The Open Scholarship Initiative <osi20...@googlegroups.com>
Subject: Re: to read or not to read

 

Hi Everyone - like others, I ordered the book (just received) but have yet to read it.  I am familiar with the field, and, in this field, Wu, Wang, and Evans (2019) is one of my favorite papers of recent years (https://www.nature.com/articles/s41586-019-0941-9).  That paper was particularly interesting in how they analyzed disruption, as well as how they integrated machine learning and natural language processing with their big data approach.  More importantly, they replicated their methods via analyses of GitHub repos. In the paper they compare three different domains in their study of disruption and team size (i.e., teams based upon publications, based upon patents, and based upon GitHub repos).  Second, they provide a number of interesting metrics. For example, they look at those who build on prior knowledge and those who disrupt the normal growth of knowledge.  They used a measure developed/published in Management Science ("A Dynamic Network Measure of Technological Change") that is really cool, as it comes up with a new metric for disruption.  But I was also impressed with how they dug deeper into the bibliometrics of not simply who was being cited, but also what was being cited (reviews or empirical work) as well as what was cited in the papers that were being cited (e.g., the original (old) papers or merely reviews of that old work).  Also, the visualizations of these different citation paths are, themselves, beautiful.  These points are merely to illustrate what I think can be done well with this form of research.
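(Editor's aside: for readers unfamiliar with the disruption measure Steve mentions, its core idea is often presented in simplified form as a CD-style index: among later papers, compare those that cite the focal work but not its references against those that cite both. Below is a minimal sketch of that simplified form, with hypothetical paper ids; the actual Management Science measure is more elaborate.)

```python
def disruption_index(citers_of_focal, citers_of_refs):
    """CD-style disruption index (simplified form of Funk & Owen-Smith's
    dynamic network measure).
    citers_of_focal: set of paper ids citing the focal paper.
    citers_of_refs:  set of paper ids citing the focal paper's references.
    Returns a value in [-1, 1]: +1 = fully disruptive (later work cites the
    focal paper and ignores its references), -1 = fully consolidating."""
    n_i = len(citers_of_focal - citers_of_refs)  # cite focal only
    n_j = len(citers_of_focal & citers_of_refs)  # cite focal and its refs
    n_k = len(citers_of_refs - citers_of_focal)  # cite refs only
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# A disruptive pattern: most later papers cite the focal work alone.
print(disruption_index({"p1", "p2", "p3", "p4"}, {"p4", "p5"}))  # → 0.4
```

A score near +1 marks work that later papers treat as a fresh starting point, while a score near -1 marks work read alongside the literature it builds on, which is the consolidation/disruption contrast the paper studies against team size.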

 

With all of that said, like Cassidy articulates so well in her review, I have reservations about how far to take this blending of bibliometrics with network science.  Cassidy notes a number of the limitations that come from this kind of research, and, more importantly, the negative implications of using them for guidance.  I'd like to add a complement to that and focus on what I see as limitations of the Science of Science (SciSci) for understanding the collaborative elements, that is, the actual teamwork in the science.  As background, last year, on the Science of Team Science listserv, a colleague shared a new SciSci type paper examining international collaborations studying COVID-19 and requested comments on the article.  After reading that paper, I provided some comments that could be similarly levied against much of the SciSci field for overreach in the kinds of inferences being made on data so decontextualized.  For those interested, below is the original post on the new paper ("Consolidation in a crisis: Patterns of international collaboration in early COVID-19 research"), and following that are my comments.  I've highlighted my points about what is missing in SciSci studies of scientific collaboration - that is, although they can provide some sense of ‘what’ is occurring, they're limited in helping us understand any of the “why” these findings occurred.

 

Best,

Steve

 

1. Gaming the Metrics: Misconduct and Manipulation in Academic Research (MIT Press, 2020), https://mitpress.mit.edu/books/gaming-metrics

 

 

Glenn Hampson

unread,
May 10, 2021, 2:01:20 PM5/10/21
to Joyce Ogburn, David Wojick, Elizabeth Gadd, Wagner, Caroline, Fiore, Steve, Biagioli, Mario, Ivo Grigorov, The Open Scholarship Initiative

Agreed Joyce. What I’m reminded of this morning is an example not mentioned in this paper: epistemic trespassing at the policy level. I’ve been “observing” UNESCO’s open science policy deliberations for three days now---two more to go---and the amount of trespassing going on here is, well, dispiriting at times. Science ministers from around the world are shaping policy on what the future of open science should look like, but they’re doing this by editing the document that emerged from UNESCO’s global consultation process last year (which, personally, I didn’t care for to begin with---I thought the document was too ideological and not focused enough on the big-picture recommendations we’ve made in OSI). They are, in other words, trespassing big time into the fields of scholarly publishing and science communication.

 

Granted, it’s heartening on the one hand to see how committed this group is to the future of science. Also, this is not a group of neophytes---I don’t mean to even remotely suggest they aren’t experts. But the field of open science is filled with detail and nuance well below the pay grade of most of the dignitaries in this group; also, their priority here is to approve the existing text quickly so it can move on to the next stage, not to get sidetracked into debates informed by input from the host of observers who are powerless to help, clarify, and inform (only ministers can speak).

 

So, there we go. Another case of trespassing gone bad, although I guess this is a “normal” part of high-level policymaking, right?---i.e., at some point, the people who mark up and vote on a policy draft are not the same experts who produced that draft.
