I am a cultural and medical anthropologist with interests encompassing the anthropology of science, biomedicine and psychiatry; addiction and its treatment; suggestion and healing; and post-socialist transformations in Eurasia. I am particularly concerned with the circulation of new forms of knowledge and clinical intervention produced by biomedicine, neuroscience and psychiatry. My work follows therapeutic technologies as they move both from "bench to bedside" and from one cultural or institutional setting to another, examining how they intersect with the lives of practitioners and patients.
My book Governing Habits: Treating Alcoholism in the Post-Soviet Clinic was published by Cornell University Press in the Fall of 2016. Based on fourteen months of fieldwork in St. Petersburg among institutions dealing with substance abuse, this book examines the political-economic, epidemiological and clinical changes that have transformed the knowledge and medical management of alcoholism and addiction in Russia over the past twenty years.
Two new projects, both based largely in North America, are in an earlier stage of development. The first of these, a collaboration with Stephanie Lloyd (Laval University) and researchers in the Department of Psychiatry at McGill University, examines the emerging field of "behavioral epigenetics," with a particular focus on research on suicide risk. We are in the process of carrying out an ethnographic study to examine how neuroscientists, geneticists and psychiatrists draw upon the latest scientific knowledge to explain suicide, and how family members, in turn, take up these explanations. I have also begun a second project, which will examine how contemporary logics, practices and politics of mental health and illness intersect with class distinctions and aspirations for upward mobility among undergraduates in the United States.
From September 2007 to February 2010 I held a postdoctoral fellowship in the CIHR Strategic Training Program in Culture and Mental Health Services Research in the Division of Social and Transcultural Psychiatry at McGill University.
I also founded, edit and frequently contribute to Somatosphere, a collaborative academic weblog focused on medical anthropology at its intersections with cultural psychiatry, bioethics and science and technology studies.
Neuroscience and psychiatry overlap where anomalous neural activity is identified and mapped onto behavioural or cognitive phenomena in the assessment or diagnosis of patients. In practice, this means that technologies developed for recording neural activity can come to play a role in psychiatry. Given this, there is a clear need to examine not only the relationship between neuroscience and psychiatry, but also the use of neurotechnology in psychiatry. The specifics of how such technologies operate become particularly salient when they are placed in the context of a practice aimed at evaluating human behaviour, such as psychiatry.
In some cases, neurotechnology can rely on artificial intelligence (AI), especially in the prediction or analysis of neural recording data (Glaser et al. 2017; Kellmeyer 2018). This is a significant element that merits investigation in its own right, again because it is deployed in a context of evaluating human behaviour; how AI develops and is used in this kind of context is in need of analysis. In this paper, the analysis will involve identifying key normative differences between brain-based intelligence and artificial intelligence. To do this, we point to some general complexities of human intelligence (HI), especially as grounded in complex, reasoned activity.
As a means of blending technological advances with human understanding, we recommend discussion that draws upon a variety of discussants and sources of information. This fits with more familiar psychiatric methods involving doctor/patient encounters, which are typically framed as a type of discussion. Even with a power or authority imbalance between psychiatrist and patient, the conversational form of interaction forms the basis on which patients reveal their felt experience for expert appraisal by the psychiatrist. The interpolation of technological norms into these otherwise interpersonal spaces may serve to undermine them. Technology appears to offer objective answers to problems and so can seem to overshadow the subtleties of more discursive approaches to human problems. Particular care ought to be taken in developing neuropsychiatric accounts of human cognition and behaviour where a diagnosis of psychiatric disorder is at stake. This and related issues are central to this paper.
The issues noted above are pertinent to evidence-gathering as part of psychiatric assessment or diagnosis. Privileging causal explanations of action, or neurobiologically reductive bases for action and behaviour, may well lead to a sort of reason-curtailment, wherein the scope of reasons available to account for action and behaviour is reduced. This could result in an overly reductive account of complex human behaviour, in terms both of rationality and of action, especially as it relates to the perceptions of patients and of practitioners. Where behaviours appear to be explained by some causal story supported by data, discursive accounts of those same behaviours may become less influential. This may sound somewhat abstract, but with reference to neurotechnologies we can offer a useful example of this reductive potential.
The instrumental potential of neural states also underwrites pharmacological intervention in cognitive and mental states, as well as treatments such as DBS. This is the rationale for using DBS in cases of obsessive-compulsive disorder, persistent depression, or anorexia nervosa, for instance (Klein et al. 2016; Maslen et al. 2015; Widge and Sahay 2016). It is also key in the development of neurotechnologies, such as neuroprosthetics for speech, which might be seen as likely successors to a neuropharmacological-psychiatric industry (Parastarfeizabadi and Kouzani 2017). At any rate, these instrumentalisations are embedded within a recognisably discursive practice. Psychiatric assessments, despite or because of power imbalances between practitioner and patient, allow for a therapeutic identification of problems wherein psychological traits shade into indicators of disease, as with brain lesions. The regime here is one of observation and discussion, followed by reporting, intervention and treatment, and the process can be repeated until a satisfactory outcome arises. In the case of a closed-loop neurotechnology controlled by software, this regime is altered.
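To make the contrast concrete, the following is a minimal sketch of our own, not drawn from the sources cited: a hypothetical closed-loop controller in which a software threshold, rather than a clinical conversation, decides when to intervene. The function names, threshold value and signal are all illustrative assumptions.

```python
# Hypothetical sketch (ours): the observe-report-intervene cycle once it is
# automated in a closed-loop device. The "discussion" step disappears, and a
# software threshold decides when to stimulate.
import random

def read_biomarker() -> float:
    """Stand-in for a sensed neural signal (e.g., a symptom-linked biomarker)."""
    return random.uniform(0.0, 1.0)

def stimulate(intensity: float) -> None:
    """Stand-in for delivering stimulation; here it only logs the decision."""
    print(f"stimulating at intensity {intensity:.2f}")

THRESHOLD = 0.7  # assumed symptom threshold, fixed by the software's designers

for step in range(5):
    signal = read_biomarker()
    # The device, not a clinician-patient exchange, decides whether to intervene.
    if signal > THRESHOLD:
        stimulate(intensity=signal - THRESHOLD)
    else:
        print(f"step {step}: no intervention (signal={signal:.2f})")
```

Nothing in this loop corresponds to the reporting and discussion of the traditional regime; the decision to intervene is delegated entirely to the device.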
A central part of what is at issue here is the role of AI as decision-maker in the examples above. A kind of hybridised control appears in these cases, with the agent herself having more or less limited control. Where the control in question concerns neural states, and thereby mental states, this is all the more acute. AI-powered methods can provide effective predictions about brain activity quickly, and unexpectedly, from huge amounts of data (Bzdok and Meyer-Lindenberg 2018). These methods are often very complex and opaque, and consequently not well understood. This is especially the case where there is some ambition to expand the use of machine learning and related systems in psychiatry (Dwyer et al. 2018). While our focus here is on the systems used, we are also signalling that the field into which such systems will be introduced may require further analysis. This includes methodological ramifications which might be very wide-ranging indeed (Kitchin 2014).
Machine learning techniques designed to generalise from complex and varied data in order to predict particular cases tend to rely upon statistical methods. These may be of various types, but a common feature is that they are often deployed effectively as a black box (Samek et al. 2017). This can, in general, be seen as a problem with machine learning approaches. Despite their often impressive successes, these applications remain inexplicable in some important respects, owing to their mathematical complexity and opaque processing methods. This inexplicability may extend even to those involved in developing the applications (Hart and Wyatt 1990).
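To illustrate the black-box point, the following is a minimal sketch of our own, not taken from the works cited: a small neural-network classifier trained on synthetic data standing in for neural-recording features. It can predict held-out cases, yet its learned parameters offer no human-readable reasons for any individual prediction. The data, labels and model choice are illustrative assumptions only.

```python
# Sketch (ours): a statistical model that predicts well but explains nothing.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for recorded neural features; the binary labels are a
# hypothetical "clinical category" used purely for illustration.
X, y = make_classification(n_samples=500, n_features=64, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))

# The model's only "account" of its decisions is a stack of weight matrices:
# thousands of numbers with no direct mapping onto reasons a clinician or
# patient could discuss.
print("parameters per layer:", [w.size for w in model.coefs_])
```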
Even with efforts to restructure the patient-clinician relation, the complexity of shared decision-making practices and the possibilities for dialogue are not easy to resolve (Thompson 2007). Paternalism in general is less and less popular in modern medical models of patient engagement, as it does not sufficiently prioritise patient autonomy. Yet if this paternalism is displaced into a machine, one that cannot be understood by the patient (and perhaps not even by the physician, at least in terms of its processing), additional issues arise. Even if a diagnosis were correct, it would have limited legitimacy, because a diagnosis may falter when the strategy behind it relies on a process with at least one inexplicable element.
The issue is not that neuroscience does not, or should not, play a significant role in psychiatry, but that at present at least its commitment to a simple reductionist paradigm is also affording the researchers a degree of naivety and lack of social awareness that is of concern. The effect is that, unlike traditional psychiatric encounters which, despite issues of power and inequality, are nevertheless inherently social interactions, the emerging role of neuroscience in psychiatry suggests the role of individual experts and doctors might be deferred by the apparently objective, and self-determining technology. (Cohn 2016, 180)
For our purposes here, we propose that AI can include anything that seeks to reproduce or simulate methods of decision-making and reasoning by technological means. By HI, we point very broadly to the human ability to reason, such that a basis can be established for distinguishing between caused activity and intentional action. On this account, action can be interpreted as being done for reasons, whereas caused activity occurs owing to physical laws.