Here’s a good critique (“Scientific success by numbers,” nature.com) by Cassidy Sugimoto* of a book that (based on her review---I haven’t actually read it) promises to help scientists “navigate their careers” and “maximize [their] odds of success” by understanding how to work the very metrics that authors spend so much time complaining about. If this is a fair summary, then yikes. I’m sure this book will have an audience, but it seems a bit like publishing a book on how to exploit tax loopholes or cheat on exams without getting caught. See Amazon.com: The Science of Science (9781108492669): Wang, Dashun; Barabási, Albert-László.
On the other hand, no less than Magdalena Skipper reviewed this book and wrote (as noted on Amazon): “In their engaging book, Wang and Barabási take a fresh look at the science of science. They convincingly argue that in the age of big data and AI applying the scientific method to science itself not only helps understand how science works but may even enhance it. We are compelled to consider the determinants of individual careers and what this means in the age of large-scale scientific collaborations. These and other questions around the meaning of scientific impact, in academia and beyond, make the book highly relevant to scientists, academic administrators and funders alike. By the time the final, forward-looking chapter ends we are hooked on all the correlations and predictions, and so it is only fitting that we are invited to join in, to help shape the field which is likely to be driven by a human-machine collaboration.”
Has anyone else actually read this? There seems to be a difference of opinion between these experts on whether it’s worth the $30…
Best,
Glenn
*formerly the head of NSF’s Science of Team Science division
--
As a public and publicly-funded effort, the conversations on this list can be viewed by the public and are archived. To read this group's complete listserv policy (including disclaimer and reuse information), please visit http://osinitiative.org/osi-listservs.
---
You received this message because you are subscribed to the Google Groups "The Open Scholarship Initiative" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osi2016-25+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/osi2016-25/004201d74074%2423152010%24693f6030%24%40nationalscience.org.
Please note that I work flex hours from time to time, so if you receive this email outside your working hours, there are no expectations to reply immediately.
Ivo Grigorov
DTU Aqua
Danmarks Tekniske Universitet / Technical University of Denmark
Institut for Akvatiske Ressourcer / National Institute of Aquatic Resources
Kemitorvet, Bygning 202
2800 Kgs. Lyngby
Twitter | LinkedIn Profile | Skype Contact: ivo_grigorov
--------
Stephen M. Fiore, Ph.D.
Vice-president, President-elect, International Network for the Science of Team Science
Professor, Cognitive Sciences, Department of Philosophy
Director, Cognitive Sciences Laboratory, Institute for Simulation & Training
From: A public forum for scientists. <scien...@sciencelistserv.org>
Sent: Thursday, July 23, 2020 1:04 PM
To: scien...@sciencelistserv.org <scien...@sciencelistserv.org>
Subject: Message: 9
Dear Friends,
Our paper on international collaboration on COVID-19 research has now been published in PLOS ONE:
Abstract:
This paper seeks to understand whether a catastrophic and urgent event, such as the first months of the COVID-19 pandemic, accelerates or reverses trends in international collaboration, especially in and between China and the United States. A review of research articles produced in the first months of the COVID-19 pandemic shows that COVID-19 research had smaller teams and involved fewer nations than pre-COVID-19 coronavirus research. The United States and China were, and continue to be in the pandemic era, at the center of the global network in coronavirus-related research, while developing countries are relatively absent from early research activities in the COVID-19 period. Not only are China and the United States at the center of the global network of coronavirus research, but they strengthen their bilateral research relationship during COVID-19, producing more than 4.9% of all global articles together, in contrast to 3.6% before the pandemic. In addition, in the COVID-19 period, joined by the United Kingdom, China and the United States continued their roles as the largest contributors to, and home to the main funders of, coronavirus-related research. These findings suggest that the global COVID-19 pandemic shifted the geographic loci of coronavirus research, as well as the structure of scientific teams, narrowing team membership and favoring elite structures. These findings raise further questions over the decisions that scientists face in the formation of teams to maximize a speed/skill trade-off. Policy implications are discussed.
Happy to have comments.
Caroline
Caroline S. Wagner
Milton & Roslyn Wolf Chair in International Affairs
John Glenn School of Public Affairs
The Ohio State University
Columbus, Ohio USA 43210
From: Fiore, Steve <sfi...@IST.UCF.EDU>
Sent: Sunday, July 26, 2020 4:36 PM
To: SCIT...@LIST.NIH.GOV
Subject: Re: International collaboration on COVID-19
Thanks for sharing, Caroline. This is an important finding about what happens when science responds to a global crisis. Finding that researchers seem to rely on familiarity during emergencies shows that they are not really different from most others. More interesting are the findings about small teams, suggesting, also, a tie to research in other fields on teams and adaptability. Finally, most important are the results about elite universities. As you note, relying on reputation, rather than what might be more relevant expertise, or, in fact, greater talent, at non-elite universities, is another marker for future study. So this is an informative bibliometric analysis contributing to our understanding of science at a macro level.
But, in your paper, you note that researchers lack a clear focus. How do you know that based upon analyzing a distribution of publications? Focus depends on the level of analysis. The teams themselves, I’m sure, were quite focused on whatever it was they studied. Perhaps, at the level of the ‘field’, there was what appears to be a lack of focus. But, at the same time, one person’s (or field’s) lack of focus is another’s exploration stage. With that said, in the paper, there are also speculations about coordination costs. How do we know these teams had high coordination costs? They were not queried about them. Similarly, the paper says that teams weighed an inherent tradeoff in their collaboration choice. Again, how do we know that is the case? Inferring psychological processes from citation analyses seems problematic - in the absence of asking the scientists, we remain too disconnected from those actually doing the work.
In short, although we have some sense of ‘what’ is occurring, what this type of analysis does not help us understand is the “why” behind these findings. Short of the exogenous trigger event, a global pandemic, we have no sense of the causal factors for any collaboration outcomes, let alone processes. The paper speaks of team structure, but from the study of teams, we know there is more to team structure than simply size or international membership. Said another way, each one of the papers analyzed has a team behind it, and, associated with that team, a rich set of interaction processes, a particular form of interdependency, a unique hierarchy, a composite of complementary cognition, shared and unshared attitudinal profiles, etc. None of these can be understood by only looking at publications. Although bibliometrics and scientometrics studies, with their tens of thousands of data points, might be able to inform macroscopic patterns of collaboration, they are but one lens onto this phenomenon. The Science of Team Science was specifically created to provide the theoretical and methodological foundation through which to understand not only the “what” of teamwork, but also the “why”. Behind every single one of the thousands of pre-prints and publications making up a bibliometric data set, there are hundreds of hours of individual work and teamwork. Although it is fine to speculate about the macro-phenomena bibliometrics and scientometrics can uncover, such analyses can only go so far. And any recommendations about policy decisions should be similarly tempered. These types of analyses are not appropriate for making inferences about the actual teamwork processes and emergent phenomena occurring at the level of the team - that is, the rich world of interaction ‘in’ the teams producing (or failing to produce) the publications that are treated merely as single data points for citation analyses in bibliometrics studies.
Best,
Steve
Is there a name for this phenomenon when a researcher Zoom-bombs a field that isn’t theirs? (Or should there be?)
From: Wagner, Caroline <wagne...@osu.edu>
Sent: Friday, May 7, 2021 5:37 AM
To: Fiore, Steve <sfi...@ist.ucf.edu>; Biagioli, Mario <biag...@law.ucla.edu>; Ivo Grigorov <iv...@aqua.dtu.dk>
Cc: Glenn Hampson <gham...@nationalscience.org>; The Open Scholarship Initiative <osi20...@googlegroups.com>
Subject: Re: to read or not to read
Dear Steve and friends,
Thanks for this thoughtful treatment. The problem that Cassidy Sugimoto points out, and that Steve touches on here, is that many computationally oriented researchers come into science studies for the data, without an understanding of, or theoretical basis for examining, the large-scale patterns they observe. Unfamiliar with the history of the field, they become excited when they derive patterns from connections, but for many, these inquiries may look beautiful without adding to our understanding of science dynamics. Moreover, they claim to have invented a new field, when the study of science communications is nearly a century old (if we go back to Lotka 1926).
That said, it is possible to apply theories of dynamics to infer what is happening within large groups--ones that operate according to patterns unbeknownst to the actors themselves. Sociology/dynamics does provide us with theories about how communications networks operate and what they show about knowledge communities. In the COVID work, we inferred a lack of focus in the research community by studying keyword clusters compared to the pre-COVID work. We saw that research related to COVID measured close to chaos in the early days of the pandemic while the community tried to understand what was happening. (The same communities in pre-COVID days were highly structured.) This is not to pass judgment on any specific team or teams, but to examine focus or lack of focus at the community level. Moreover, our analysis was limited to collaboration at a distance -- international collaboration -- which often does not include face-to-face connection. This configuration changes the nature of communications, reducing tacit exchange and emphasizing explicit codification, which makes it 'easier' to study. The structure of the networked links and communications determines behavior to a greater extent when working remotely than we would expect from groups working face-to-face.
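For what it’s worth, the kind of keyword-cluster comparison Caroline describes could be sketched roughly as follows; the entropy measure and the toy keyword lists here are my own illustration of the general idea, not the paper’s actual method:

```python
from collections import Counter
from math import log2

def keyword_entropy(papers):
    """Shannon entropy of the keyword distribution across a corpus.

    Each paper is a list of keywords. Higher entropy means keywords are
    spread thinly over many topics (a possible proxy for 'lack of focus');
    lower entropy means attention is concentrated in a few clusters.
    """
    counts = Counter(kw for paper in papers for kw in paper)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Toy comparison: a 'focused' pre-COVID corpus vs. a scattered early-COVID one.
pre_covid = [["spike-protein", "vaccine"], ["spike-protein", "antibody"],
             ["spike-protein", "vaccine"]]
early_covid = [["masks", "policy"], ["genome", "bats"],
               ["ventilators", "triage"]]

assert keyword_entropy(pre_covid) < keyword_entropy(early_covid)
```

A spread-out keyword distribution (high entropy) would read as the near-chaotic early-COVID pattern; a concentrated one (low entropy) as the highly structured pre-COVID communities.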
The reputation economy of scholarship encourages scholars to discover something ‘new.’ The need for recognition can encourage overreach, and the introduction of computational analysis to the social sciences and humanities is certainly such a moment. Incorporating computational scientists into SSH fields may require the help of team scientists.
Caroline
Caroline S. Wagner, PhD
Wolf Chair in International Affairs
John Glenn College of Public Affairs Battelle Center for Science & Engineering Policy
Page Hall 210U, 1810 College Road N, Columbus, OH 43210
614-292-7791 Office / 614-206-8636 Mobile
wagne...@osu.edu / http://glenn.osu.edu/faculty/
On May 7, 2021, at 9:35 AM, David Wojick <dwo...@craigellachie.us> wrote:
Can you describe this phenomenon, Glenn? Sounds interesting.
LOL. Just as Caroline was describing---e.g., “Unfamiliar with the history of the field, they become excited when they derive patterns from connections, but for many, these inquiries may look beautiful but, without a theoretical basis, they do not add to understanding science dynamics.” We see this all the time as scientists try to apply their expertise across disciplines in search of patterns and connections, but these patterns and connections are sometimes a better fit for the Journal of Spurious Correlations than for actual science.
Speaking of dynamics, I discovered, or at least named, a basic case many years ago. I used it a lot but never published on it so here is a snapshot.
It is called the "issue storm." An issue storm is the pattern of message traffic that arises and evolves when a major issue hits an organization or community. These patterns and their evolution can vary greatly from case to case, and the differences matter a great deal when one is dealing with an issue storm or trying to understand what is going on.
We OSI people live in an issue-driven environment. Issue storms are often unpredictable, which is why we often cannot say what we will be working on in a week, sometimes in a day.
They can spring up, grow and move suddenly, or very slowly. They vary greatly in size and duration, as well as structure. Just like storms in the weather. They also consume cognitive resources, like thought and attention, sometimes in vast quantities. For example, when a big news story breaks, millions or even billions of people can be thinking and communicating about it at the same time. That is a truly big cognitive event.
Unfortunately, mapping and seeing an issue storm, present or past, requires detailed data on the message traffic. At a minimum we need the identity and timing of each message. Of course content is also needed if one wants to observe the reasoning. But one can get a qualitative sense without that data.
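As a minimal illustration of the data David describes---just the identity and timing of each message---one could bucket message timestamps to see a storm’s shape over time (the function and the toy traffic below are hypothetical, not from any published analysis):

```python
from collections import Counter
from datetime import datetime

def storm_profile(messages, bucket_hours=24):
    """Bucket message timestamps to see an issue storm's shape over time.

    `messages` is a list of (sender, timestamp) pairs -- the minimum data
    needed (identity and timing of each message). Returns a Counter mapping
    bucket index (relative to the first message) to message count.
    """
    times = sorted(t for _, t in messages)
    start = times[0]
    width = bucket_hours * 3600
    return Counter(int((t - start).total_seconds() // width) for t in times)

# Toy traffic: a burst on day 0 tailing off -- the sudden-storm shape.
msgs = [("a", datetime(2021, 5, 7, 9)), ("b", datetime(2021, 5, 7, 10)),
        ("c", datetime(2021, 5, 7, 14)), ("a", datetime(2021, 5, 8, 9)),
        ("d", datetime(2021, 5, 10, 12))]
profile = storm_profile(msgs)
assert profile[0] == 3  # three messages in the first 24 hours
```

Plotting such a profile per day (and per sender) would give the qualitative "shape" of a storm; adding message content on top of it would let one trace the reasoning as well.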
David
Interesting point, Lisa. Hopefully not to sound too grumpy here, but I’ve never considered Sci of Sci a settled field anyway, so yes, maybe you’re right. Their approach, though, IMHO, has been the opposite of Zoom-bombing. Rather than saying “I’m an expert in field x and I’m going to use my expertise to make observations about another field,” they have said “I’m an expert in field x and I’m going to ignore what experts in field y have said.” Specifically, rather than build on extensive work in psychology, communication, and marketing, Sci of Sci has tried to reinvent the wheel on a lot of fundamental questions like, for instance, how to reach skeptical audiences. Granted, there’s a lot of niche work involved, but there were a million Sackler panels a few years ago on Sci of Sci questions that could have been easily answered at just about any Marketing 101 conference. So, bombs away, I think, or at least, make room for other ideas. Apologies again if I sound grumpy (I know I am…these 3 a.m. conferences are brutal…).
Good weekend,
Glenn
From: osi20...@googlegroups.com <osi20...@googlegroups.com> On Behalf Of Lisa Hinchliffe
Sent: Friday, May 7, 2021 7:40 AM
To: Glenn Hampson <gham...@nationalscience.org>
Dr Elizabeth Gadd FHEA
Research Policy Manager (Publications)
Research and Enterprise Office
Loughborough University
Loughborough, UK, LE11 3TU
Chair: INORMS Research Evaluation Working Group
Chair: Lis-Bibliometrics
Champion: ARMA Research Evaluation SIG
Phone: +44 (0)1509228594
Twitter: @lizziegadd
Web: https://lizziegadd.wordpress.com/
Working hours: M: 8.30-5/ Tu: 8.30-5/ W: 8.30-3/ F: 8.30-12.30
Aha! It’s a paywalled article, but the abstract reads “Epistemic trespassers judge matters outside their field of expertise. Trespassing is ubiquitous in this age of interdisciplinary research and recognizing this will require us to be more intellectually modest.”
From: Elizabeth Gadd <E.A....@lboro.ac.uk>
Sent: Saturday, May 8, 2021 1:31 PM
To: Glenn Hampson <gham...@nationalscience.org>
If you read the article, the author doesn’t cite much, if any, of the history of science literature that might dispute his thesis. He uses Linus Pauling as an example of a poacher, but cites only one source (one that someone brought to his attention) to support this assertion, which suggests he doesn’t know this field at all. Pauling is a complicated and influential figure in 20th-century science whose contributions and failures are well documented, so perhaps a different example of a failed crossover would have served better. Also, he uses only a few examples drawn from high-profile figures. What happens outside the attention of the news media and the popular press? He finds fault with Dawkins discussing religion, but what about the biologist E.O. Wilson and his writing on the consilience of science and religion? Does that follow the same pattern? There are not enough examples or in-depth analysis of why these crossovers were not successful.
A host of physicists crossed over into molecular biology and made significant contributions to the study and application of genetics. Biochemists contribute to medicine all the time. What about these stories?
Joyce
Sent from Mail for Windows 10
From: osi20...@googlegroups.com <osi20...@googlegroups.com> on behalf of Fiore, Steve <sfi...@ist.ucf.edu>
Sent: Thursday, May 6, 2021 8:17 PM
To: Biagioli, Mario <biag...@law.ucla.edu>; Ivo Grigorov <iv...@aqua.dtu.dk>
Cc: Glenn Hampson <gham...@nationalscience.org>; The Open Scholarship Initiative <osi20...@googlegroups.com>
Subject: Re: to read or not to read
Hi Everyone - like others, I ordered the book (just received) but have yet to read it. I am familiar with the field, and, in this field, Wu, Wang, and Evans (2019) is one of my favorite papers of recent years (https://www.nature.com/articles/s41586-019-0941-9). That paper was particularly interesting in how they analyzed disruption, as well as how they integrated machine learning and natural language processing with their big data approach. More importantly, they replicated their methods via analyses of GitHub repos. So in the paper they compare three different domains in their study of disruption and team size (i.e., teams based upon publications, based upon patents, and based upon GitHub repos). Second, they provide a number of interesting metrics. For example, they look at both those who build on prior knowledge and those who disrupt the normal growth of knowledge. They used a measure developed/published in Management Science (“A Dynamic Network Measure of Technological Change”) that is really cool in that it comes up with a new metric for disruption. But I was also impressed with how they dug deeper into the bibliometrics of not simply who was being cited, but also what was being cited (reviews or empirical work), as well as what was cited in the papers that were being cited (e.g., the original (old) papers or merely reviews of that old work). Also, the visualizations of these different citation paths are, themselves, beautiful. These points are merely to illustrate what I think can be done well with this form of research.
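For readers unfamiliar with that Management Science measure (the “CD index”), here is a rough sketch of my reading of the published formula; the data representation and the "focal" identifier are my own illustrative assumptions, not code from that paper:

```python
def cd_index(focal_refs, citing_papers):
    """Rough sketch of the CD disruption index.

    `focal_refs` is the set of works the focal paper cites; each element of
    `citing_papers` is the set of works a later paper cites, with the string
    "focal" standing in for the focal paper itself. For each later paper
    citing the focal work and/or its references: f = 1 if it cites the focal
    paper, b = 1 if it cites any of the focal paper's references.
    CD = mean of (-2*f*b + f): +1 means citers ignore the focal paper's
    ancestry (disruptive), -1 means they cite it alongside its ancestry
    (consolidating).
    """
    terms = []
    for cites in citing_papers:
        f = 1 if "focal" in cites else 0
        b = 1 if focal_refs & cites else 0
        if f or b:  # only papers engaging with the focal work or its refs
            terms.append(-2 * f * b + f)
    return sum(terms) / len(terms) if terms else 0.0

# Toy case: every citer cites the focal paper but none of its references.
refs = {"old1", "old2"}
citers = [{"focal"}, {"focal"}, {"focal", "new"}]
assert cd_index(refs, citers) == 1.0  # maximally disruptive
```

Citers that build only on the focal paper push the index toward +1 (disruptive); citers that cite the focal paper together with its references push it toward -1 (consolidating).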
With all of that said, as Cassidy articulates so well in her review, I have reservations about how far to take this blending of bibliometrics with network science. Cassidy notes a number of the limitations that come from this kind of research and, more importantly, the negative implications of using them for guidance. I’d like to add a complement to that and focus on what I see as limitations of the Science of Science (SciSci) for understanding the collaborative elements, that is, the actual teamwork in the science. As background, last year, on the Science of Team Science listserv, a colleague shared a new SciSci-type paper examining international collaborations studying COVID-19 and requested comments on the article. After reading that paper, I provided some comments that could be similarly levied against much of the SciSci field for overreach in the kinds of inferences being made on such decontextualized data. For those interested, below is the original post on the new paper (“Consolidation in a crisis: Patterns of international collaboration in early COVID-19 research”), and following that are my comments. I’ve highlighted my points about what is missing in SciSci studies of scientific collaboration - that is, although they can provide some sense of ‘what’ is occurring, they are limited in helping us understand the “why” behind these findings.
Best,
Steve
--------
Stephen M. Fiore, Ph.D.
Vice-president, President-elect, International Network for the Science of Team Science
Professor, Cognitive Sciences, Department of Philosophy
Director, Cognitive Sciences Laboratory, Institute for Simulation & Training
Agreed, Joyce. What I’m reminded of this morning is an example not mentioned in this paper: epistemic trespassing at the policy level. I’ve been “observing” UNESCO’s open science policy deliberations for three days now---two more to go---and the amount of trespassing going on here is, well, dispiriting at times. Science ministers from around the world are shaping policy on what the future of open science should look like, but they’re doing this by editing the document that emerged from UNESCO’s global consultation process last year (which, personally, I didn’t care for to begin with---I thought the document was too ideological and not focused enough on the big-picture recommendations we’ve made in OSI). They are, in other words, trespassing big time into the fields of scholarly publishing and science communication.
Granted, it’s heartening on the one hand to see how committed this group is to the future of science. Also, this is not a group of neophytes---I don’t mean to even remotely suggest they aren’t experts. But the field of open science is filled with detail and nuances that are well below the paygrade of most of the dignitaries in this group to really understand; also, their priority here is to approve the existing text quickly so it can move onto the next stage, and not get sidetracked into debates informed by input from the host of observers who are powerless to help, clarify and inform (only ministers can speak).
So, there we go. Another case of trespassing gone bad, although I guess this is a “normal” part of high level policymaking, right?---i.e., at some point, the people who are going to mark up and vote on a policy draft are not the same experts who produced that draft.