Kinds of Uncertainty: On Doubt as Practice

In his recent books, Plastic Reason: An Anthropology of Brain Science in Embryogenetic Terms (University of California Press, 2016) and After Ethnos (Duke University Press, 2018), the anthropologist Tobias Rees explores the curiosity required to escape established ways of knowing and to open up what he calls “new spaces for thinking + doing.” Rees argues that acknowledging – and even embracing – the ignorance and uncertainty that underpin all forms of knowledge production is a crucial methodological part of that process of escape. In his account, doubt and instability are bound up with a radical openness that is necessary for breaking apart existing certainties and allowing the new and different to emerge – in the natural but also in the human sciences. But are there limits to such an embrace of epistemic uncertainty? How does this particular uncertainty interact with other forms of uncertainty, including the existential uncertainties we experience as vulnerable human beings? And how does irreducible epistemic uncertainty relate to ethical claims about how to live a good life? What is the relation between a radical political practice of freedom and art? After a workshop on his work at the Zurich Center for the History of Knowledge in 2017, Vanessa Rampton, Branco Weiss Fellow at the Chair of Practical Philosophy, ETH Zurich, explored these themes with Rees.

 

1. The Human

Vanessa Rampton (VR): Tobias, your recent work aims to destabilize and question common understandings of the human. I wonder how you would place your work in relation to other engagements with ‘selfhood’ within the history of philosophy, and the history of the human sciences more widely. Because there are so many ways of thinking of the self – for example the empirical, bodily self, or the rational self, or the self as relational, a social construct – that you could presumably draw on. But I also know that you want to move beyond previous attempts to capture the nature and meaning of ‘the human self’. What are the stakes of this destabilization of the human? What do you hope to achieve with it?

Tobias Rees (TR): In a way, it isn’t me who destabilizes the human. It is events in the world. As far as I can tell, we find ourselves living in a world that has outgrown the human, that fails it. If I am interested in the historicity of the figure of the human –– a figure that has been institutionalized in the human sciences –– then it is only insofar as I am interested in rendering visible the stakes of this failure. And in exploring possibilities of being human after the human. Even of a human science after the human.

VR: When you say the human, what do you mean?

Vanessa Rampton, Branco Weiss Fellow, ETH Zurich

TR: I mean at least three different things. First, I mean a concept. We moderns usually take the human for granted. We take it for granted, that is, that there is something like the human. That there is something that we –– we humans –– all share. Something that is independent from where we are born. Or when. Independent of whether we are rich or poor, old or young, woman or man. Independent of the color of our skin. Something that constitutes our humanity. In short, something that is truly universal: the human. However, such a universal of the human is of rather recent origin. This is to say, someone had to have the idea to begin articulating an abstract concept of the human, universal in its validity and thus independent of time and place. And it turns out that this wasn’t something people wondered about or aspired to formulate before the 17th century.

Second, I mean a whole ontology – that the invention of the human between the 17th and the 19th centuries amounted to the invention of a whole understanding of how the real is organized. The easiest way to make this more concrete is to point out that almost all authors of the human, from Descartes to Kant, stabilized this new figure by way of two differentiations. On the one hand, humans were said to be more than mere nature; on the other hand, it was claimed that humans are qualitatively different from mere machines. Here the human, thinking thing in a world of mere things, subject in a world of objects, endowed with reason; there the vast and multitudinous field of nature and machines, reducible –– in sharp contrast to humans –– to math and mechanics. The whole vocabulary we have available to describe ourselves as human silently implies that the truly human opens up beyond the merely natural. And whenever we use the term ‘human,’ we ultimately rely on and reproduce this ontology.

Third, I mean a whole infrastructure. The easiest way to explain what I mean by this is to gesture to the university: the distinction between humans on the one hand and nature and machines on the other quite simply mirrors the concept of the human as it emerged between the 17th and 19th centuries, insofar as it implies two different kinds of realities. Now, it may sound odd, even provocative, but I think there can be little doubt that today the two differentiations that stabilized the human –– more than mere nature, other than mere machines –– fail: from research in artificial intelligence to research in animal intelligence, by way of microbiome research and climate change. One consequence of these failures is that the vocabulary we have available to think of ourselves as human fails us. And I am curious about the effects of these failures: what are their effects on what it means to be human? What are their effects on the human sciences –– insofar as those sciences are contingent on the idea that there is a separate, set-apart human reality and insofar as their explanations, their sense-making concepts, are somewhat contingent on the idea of a universal figure of the human, that is, on the ‘the’ in ‘the human’? Can the human sciences, given that they are the institutionalized version of the figure of the human, even be the venue through which we can understand the failures of the human? Let me add that I am much less interested in answering these questions than in producing them: making visible the uncertainty of the human is one way of explaining what I think of as the philosophical stakes of the present. And I think these stakes are huge: for each one of us qua human, for the humanities and human sciences, for the universities. The department I am building at the Berggruen Institute in Los Angeles revolves around just these questions.

‘Human embryonic stem cells’ by Jenny Nichols. Credit: Jenny Nichols. CC BY

VR: What led you to doubt the concept of the human and the human sciences?

TR: My first book, Plastic Reason, was concerned with a rather sweeping event that occurred around the late 1990s: the chance discovery that some basic embryonic processes continue in adult brains. Let me put this discovery in perspective: it had been known since the 1880s that humans are born with a definite number of nerve cells, and it had been common wisdom since the 1890s that the connections between neurons are fully developed by age twenty or so. The big question everyone was asking at the beginning of the twentieth century was: how does a fixed and immutable brain allow for memory, for learning, for behavioral changes? And the answer that eventually emerged was the changing intensity of synaptic communication. Consequently, most of twentieth-century neuroscience was focused on understanding the molecular basis of how synapses communicate with one another –– first in electrophysiological and then in genetic terms.

When adult cerebral plasticity was discovered in the late 1990s the focus on the synapse –– which had basically organized scientific attention for a century –– was suddenly called into question. The discovery that new neurons continue to be born in the adult human brain, that these new neurons migrate and differentiate, that axons continue to sprout, that dendritic spines continuously appear and disappear not only suggested that the brain was perhaps not the fixed and immutable machine previously imagined; it also suggested that synaptic communication was hardly the only dynamic element of the brain and hence not the only possible way to understand how we form memory or learn. What is more, it suggested that chemistry was not the only language for understanding the brain.

The effect was enormous. Within a rather short period of time, less than ten years, the brain ceased to be the neurochemical machine it had been for most of the twentieth century, but without – and this I found so intriguing – without immediately becoming something else. The beauty of the situation was that no one knew yet how to think the brain. It was a wild, an untamed, an in-between state, a no longer not-yet, a moment of incredibly intense, unruly openness that no one could tame. The whole goal of my research was to capture something of this irreducible openness and its intensity.

Anyway, when trying to capture something of the radical openness in which my fieldwork was unfolding, I began to wonder about my own field of research: if the taken-for-granted key concepts of brain science, that is, the concepts that constituted and stabilized the brain as an object, could become historical in a rather short period of time, then what about the terms and concepts of the human sciences? Which terms might constitute the human in such a situation? These questions led me to an obsession with writing brief, historicizing accounts of the key terms of the human sciences, first and foremost the human itself: when did the time- and place-independent concept of the human that the human sciences operate with emerge? And this then led me to the terms that stabilize the human: culture, society, politics, civilization, history, etc. When were these concepts invented –– concepts that silently transport definitions of who and what we are and of how the real is organized? When were they first used to describe and define humans, to set them apart as something in themselves? Where? Who articulated them? What concepts –– or ways of thinking –– existed before they emerged? And are there instances in the here and now that escape the human?

Somewhere along the way, while doing fieldwork at the Gates Foundation actually, I recognized that the vocabulary the human sciences operate with didn’t really exist before the time around 1800, plus or minus a few decades, and that their sense-making, explanatory quality relies on a figure of the human –– on an understanding of the real –– that has become untenable. I began to think that the human, just like the brain, had begun to outgrow the histories that had framed it. You said earlier, Vanessa, that I am interested in destabilizing common understandings of the human. Another way of describing my work, one I would perhaps prefer, would be to say that through the chance combination of fieldwork and historical research I discovered the instability –– and the insufficiency –– of the concept of the human we moderns take for granted and rely on. I want to make this insufficiency visible and available. The human is perhaps more uncertain than it has ever been.

VR: Listening to you, I cannot help but think that there are strong parallels between your work and the history of concepts as formulated by, say, Reinhart Koselleck or Raymond Williams. I can nevertheless sense that there is a difference –– and I wonder how you would articulate this difference?

TR: First, I am not a historian of concepts. I am primarily a fieldworker and hence operate in the here and now. What arouses my curiosity is when, in the course of my field research, a ‘given,’ something we simply take for granted, is suddenly transformed into a question: an instance in which something that was obvious becomes insufficient, in which the world or some part thereof escapes it and thereby renders it visible as what it is, a mere concept. From the perspective of this insufficiency I then turn to its historicity: I show where this concept came from, when it was articulated, why, under what circumstances, and also how it never stood still and constantly mutated. But in my work this history of a concept, if one wants to call it that, is not an end in itself. It is a tool to make visible some openness in the present that my fieldwork has alerted me to. In other words, the historicity is specific: the specific product of an event in the here and now, a specificity produced by way of fieldwork.

Tobias Rees, Reid Hoffman Professor of Humanities at the New School of Social Research in New York, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles.

Second, my interest in the historicity –– rather than the history –– of concepts runs somewhat diagonal to the presuppositions on which the history of concepts has been built. Koselleck, for example, was concerned with meaning or semantics and with society as the context in which changes in meaning occur. That is to say, Koselleck –– and as much is true for Williams –– operated entirely within the formation of the human. They both took it for granted that there is a distinctive human reality that is ultimately constituted by the meaning humans produce and that unfolds in society. Arguably, the human marked the condition of possibility of their work. It is interesting to note that neither Koselleck nor Williams, nor even Quentin Skinner, ever sought to write the history of the condition of possibility of their work: they never historicized the figure of the human on which they relied. On the contrary, they simply took it for granted as the breakthrough to the truth. If I am interested in concepts and their historicity, then it is only because I am interested in the historicity of the concept of the human as a condition of possibility. How to invent the possibility of a human science beyond this condition of possibility is a question I find as intriguing as it is urgent: how to break with the ontology implied by the human? How to depart from the infrastructure of the human, while not giving up a curiosity about things human, whatever human then actually means?

 

2. Epistemic Uncertainty

VR: I am wondering if all concepts can outgrow their histories. Isn’t this more difficult in the case of, say, ‘the body’ or ‘language,’ than for our more doctrinal concepts – liberalism and socialism, for example?

TR: Your question implies, I think, a shift in register. Up until now we talked about the human and its concepts and institutions but now we are moving to a more general epistemic question: are all concepts subject to their historicity? And if so, what does this imply? Seeing as you mentioned the body, let’s take the idea –– so obvious to us today –– that we are bodies, that it is through our warm, sentient, haptic bodies that we are at home in the world. Over the last fifty years or so, really since the 1970s, a large social science literature has emerged around the body and around how we embody certain practices and so on. Much of this literature, of course, relies on Mauss on the one hand and on Merleau-Ponty on the other. And if one works through the anthropology or history of the body, one notes that most authors take the body simply as a given. It is as if they were saying, ‘Of course humans are, were, and always will be bodies.’

But were humans always bodies? At the very least one could ask when, historically speaking, did the concept of the body first emerge? When did humans first come up with a concept of the body and thus experience themselves as bodies? What work was necessary –– from physiology to philosophy –– for this emergence? To ask this question requires the readiness to expose oneself to the possibility that the category of the body and the analytical vocabulary that is contingent on this category are not obvious. There might have been times before the body –– and there might be times after it. For example, if one reads books about ancient Greece, say Bruno Snell’s The Discovery of the Mind, one learns that archaic Greek didn’t have a word for what we call the body. The Greeks had a word for torso. They had two words for skin, the skin that protects and the skin that is injured. They had terms for limbs. But the body, understood as a thing in itself, as having a logic of its own, as an integrated unit, didn’t exist.

‘Carved stone relief of Greek physician and patient’. Credit: Wellcome Collection. CC BY

One version of taking up Snell’s observation is to say: the Greeks maybe did not have a word for body –– but of course they were bodies and therefore the social or cultural study of the body is valid even for archaic Greece. What I find problematic about such a position is that it implies that the Greeks were ignorant and that our concepts –– the body –– mark a breakthrough to the truth: we have universalized the body, even though it is a highly contingent category. Perhaps a better alternative is to systematically study how the ‘realism of the body’ on which the social and cultural study of the body is contingent became possible. A history of this possibility would have to point out that the concept of a universal body –– understood as an integrated system or organism that has a dynamic and logic of its own and that is the same all over the world –– is of rather recent origin. It doesn’t really exist before the 19th century. In any case, there are no accounts of the body –– or the experience of the body –– before that time, and philosophies of the body seem to be almost exclusively a thing of the first half, plus or minus, of the twentieth century. Sure, anatomy is much older, and there were corpses, but a corpse is not a body. The alternative to the realism of the body that I briefly sketched here would imply that one can no longer naively –– by which I mean in an unexamined way –– subscribe to the body as a given. The body then has become uncertain. I am interested in fostering precisely this kind of epistemic uncertainty. To me, epistemic uncertainty is an escape from truth and thus a matter of freedom.

VR: Perhaps a kind of taken-for-granted approach to the body is so bound up with what you call ‘the human’ that questioning it is necessary for your work.

TR: Indeed, although my work led me to assume that what is true for the human or the body is true for all concepts. Every concept we have is time- and place-specific and thus irreducible, unstable and uncertain. But to return to the human: we live in a moment in time that produces the uncertainty of the human all by itself. I render this uncertainty visible by evoking the historicity of the human, and this in turn leads me to wonder if one could say that the human was a kind of intermezzo – a transient figure that was stable for a good 350 years but that can no longer be maintained.

VR: I wonder what you would reply if I were to say: but isn’t that obvious? Concepts are historically contingent, so what else is new?

TR: In my experience, most people grant contingency within a broader framework that they silently exempt from contingency itself. For example, if contingency means that different societies have different kinds of concepts, then society is the framework within which contingency is allowed: but society itself is exempt from contingency. One could make similar arguments with respect to culture. If we say that things are culturally specific, that some cultures have meanings that others don’t have, or entirely different ways of ordering the world, then we exempt culture from contingency.

All of this is to say, sure, you are right, social and cultural contingency are obviously not new. But what if you were to venture to be a bit more radical? What if you did not exempt society and culture from contingency? Talk to a social scientist about society being contingent, and they become uncomfortable. Or they reply that maybe the concept of society didn’t exist but that people were of course always social beings, living in social relations. This is a half movement in thought. It assumes that the word has merely captured the real as it is –– but misses that the configuration of the real they refer to has been contingent on the epistemic configuration on which the concept of society has depended. We could say that the one thing a social scientist cannot afford is the contingency of the category of the social.

What I am interested in is the contingency of the very categories that make knowledge production possible. To some degree, I am conducting fieldwork to discover such contingencies, to generate an irreducible uncertainty: as an end in itself and also as a tool to bring into view in which precise sense the present is outgrowing –– escaping –– our understanding and experience of the world.

 

3. Knowledge Production Under Conditions of Uncertainty/Ignorance

VR: I imagine there is a kind of parallel here with how natural scientists would react to the fact that their concepts no longer fit, for example by developing a more up-to-date way of thinking the brain to replace the synaptic model. But it strikes me that, if done properly, this task is much more radical for practitioners of the human sciences. This is because all of our concepts – including such fundamental ones as the human and the body – are historically contingent, which means that we have to do away with universal categories. Our task is to fundamentally destabilize ourselves as historical subjects, as academics, as knowers. And I guess a key question is how this destabilization, this rendering visible of uncertainties, can nevertheless be linked to the kinds of knowledge production we have come to expect from the human sciences.

TR: The question, perhaps, is what one means by knowledge production in the human sciences. I think that the human sciences have been primarily practiced as decoding sciences. That is to say, researchers in the human sciences usually don’t ask ‘What is the human?’ No, they already know what the human is: a social and cultural being, endowed with language. Equipped with this knowledge, they then make visible all kinds of things in terms of society and culture. In addition, perhaps, one could argue that the human sciences have established themselves as guardians of the human – that is, they have been practiced in defensive terms. For example, whenever an engineer argues that machines can think and that humans are just another kind of machine, the human sciences react by defending the human against the machine. The most famous example here would maybe be Hubert Dreyfus against Seymour Papert. A similar argument, though, could be made with respect to genetics and genetic reductionism.

Now, if one destabilizes the figure of the human, neither of these two forms of knowledge production can be maintained. I think that this is why many in the human sciences experience the destabilization of the human as an outrageous provocation. If one gets over this provocation, one is left with two questions. The first is: what modes of knowledge production become possible through this destabilization of the human? Especially when this destabilization means that the entire ontological setup of the human sciences fails. Can the human sciences entertain, let alone address this question, given that they are the material infrastructure of the figure of the human that fails? Or does one need new venues of research? I often think here of the relation between modern art and the nineteenth-century academy.

VR: That reminds me of Foucault.

TR: Foucault was an anti-humanist –– but he remained uniquely concerned with human reality. I think the stakes here – I say this as an admirer of Foucault – are more radical. So my second question is: what happens to the human? I am acutely interested in maintaining the possibility of the universality of the human after the human. Letting go of the idea seems disastrous. So how can one think things human without relying on a substantive or positive concept of what the human is? My tentative answer is research in the form of exposure: the task is to expose the normative concept of the human in the present, by way of fieldwork, to identify instances that escape the human and break open new spaces of possibility, each time different ones, ones that presumably don’t add up. The goal of this kind of research-as-exposure is not to arrive at some other, better conception of the human, but to render uncertain established ways of thinking the human or of being human and to thereby render the human visible and available as a question.

VR:  So if you don’t want to talk about what the human is, I’m wondering if the appropriate question would be about what the human is not.

‘Human microbial ecosystem, artistic representation’ by Rebecca D Harris. Credit: Rebecca D Harris. CC BY

TR: I think such an inversion doesn’t get us very far. I would rather say that I am interested in operating along two lines. One line revolves around the effort to produce ignorance. That is, I conduct research not so much to produce knowledge as to produce the uncertainty of knowledge. The other line wonders how one could conduct research under conditions of irreducible ignorance or uncertainty, or how to begin one’s research without relying on universals. A comparative history of this or that always presupposes something stable. As does any social or cultural study. In both cases I am interested in a productive or restless uncertainty –– or second-order ignorance –– not only with respect to the human. In a way, what I am after is the reconstitution of uncertainty, of not knowing, by way of a concept of research that maintains the possibility of truth throughout.

If you were to press me to offer a systematic answer I would say, as a philosophically inclined anthropologist, that I conduct fieldwork/research because I am simultaneously interested in where our concepts of the human come from, in whether there are instances in the here and now that escape these concepts, and in rendering available the instability –– the restlessness –– of the category or the categories of the human, both as an end in itself and as a means to bring the specificity of the present into view. It strikes me as particularly important to note that what I am after is not post-humanism. As far as I can tell most post-humanists hold on to the 18th-century ontology produced by the human but then delete the human from this ontology. What interests me is to break with the whole ontology. Not once and for all but again and again. Nor am I interested in the correction of some error à la Bruno Latour – as if behind the human we can discover some essential truth –– call it Actor Network Theory –– that the moderns have forgotten and that the non-moderns have preserved and that we now all can re-instantiate to save the world.

I am not so much interested in a replacement approach –– what comes after the human? –– as in rendering visible a multiplicity of failures, each one of which opens up onto new spaces of possibility. After all, how Artificial Intelligence derails the human is rather different from how microbiome research or climate change derails it. These derailments don’t add up to something coherent. As I see it, it is precisely this not-adding-up –– this uncertainty –– that makes freedom possible. Perhaps this form of research is closer to contemporary art than to social science research; that could well be. Anyhow, the department I am building at the Berggruen Institute revolves around the production of precisely such instances of failure and freedom.

 

Tobias Rees is Reid Hoffman Professor of Humanities at the New School of Social Research in New York, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles, and Fellow of the Canadian Institute for Advanced Research. His new book, After Ethnos, is published by Duke University Press in October 2018.

Vanessa Rampton is Branco Weiss Fellow at the Chair of Philosophy with Particular Emphasis on Practical Philosophy, ETH Zurich, and at the Institute for Health and Social Policy, McGill University. Her current research is on ideas of progress in contemporary medicine.

Between discomfort and trauma

Emily J.M. Knox (ed.). Trigger Warnings: History, Theory, Context; London: Rowman & Littlefield, 2017; 298 pages; hardback £54.95; ISBN 978-1-4422-7371-9.

by Sarah Chaney

In August 2016, the University of Chicago sent a letter to new students that received a great deal of academic and media interest. In the letter John “Jay” Ellison, Dean of Students, stated that the university was committed to “intellectual freedom”, indicating that other concepts referred to – “safe spaces” and “trigger warnings” among them – were antithetical to this notion. The connection between these concepts, as well as the letter itself, was much debated at the time, and the issues raised appear to be the starting point for many of the essays in this book. Are students’ minds really being coddled, or are there valuable things to be learnt from the use of trigger warnings and the debate surrounding them?

Trigger Warnings: History, Theory, Context does not take a clear-cut and dogmatic approach to the topic (as some others have done, most prominently those, like Greg Lukianoff and Jonathan Haidt, who object outright to the idea of trigger warnings). Most authors in this volume adopt a carefully critical view of trigger warnings that also seeks to understand and explore their implications and uses. The book focuses on higher education in North America; the location is only to be expected, perhaps, as this is where the bulk of debate has taken place. A few essays do look beyond higher education to the broader context from which trigger warnings emerged, including a rather Whiggish history of trigger warnings based on retrospective diagnosis of Post-Traumatic Stress Disorder (chapter 1) and a more incisive look at the use of trigger warnings in the treatment of eating disorders since the 1970s (chapter 3).

The volume claims to be interdisciplinary, although contributions largely stem from those working in the arts, humanities and social sciences. This is understandable: these fields have probably been the most affected by calls for trigger warnings, as well as being concerned with the practice of critical thinking and debate (which, according to their detractors, trigger warnings stifle). The inclusion of a number of authors with a background in library and information studies offers historians an interesting angle on the way collections are labelled and configured. As Emily Knox indicates in the introduction, the American Library Association has long been opposed to the rating of texts, a practice which holds political connotations and has tended to be fairly arbitrary, usually based on the attitudes of a small group of people. Despite voicing this opposition, however, Knox goes on to articulate the central tension that runs throughout this book: while trigger warnings can and may be used as a form of censorship, teachers and lecturers also have an obligation to consider the welfare of their students.

These two potentially conflicting ideas are reflected in the division of the book into two parts: the first deals with the context and theory around trigger warnings, while the second moves on to specific case studies designed to offer some practical guidance for teachers. While Kari Storla does this excellently in her piece on handling traumatic topics in classroom discussion, other case studies are less satisfying, and the first half of the book is ultimately of more interest to the historian, grappling as it does with the controversies raised by trigger warnings and placing them in wider context. Are warnings important for welfare, or damaging to students’ critical thinking? Do they protect or censor? Do they fulfil a genuine need for students or do universities use them to avoid confronting systemic issues around student welfare? Most authors do not resolve these questions – indeed, few come down squarely on one side or the other. This in itself reflects the complexity of the debate. It is, of course, possible in each case cited above for both things to be true, even in the same example.

Take Stephanie Houston Grey’s chapter on the history of warnings around eating disorders. This is one of the most thought-provoking and well-written articles in the book. Grey explores the public health response to eating disorders in the late 1970s, which she argues was one of the first instances in which widespread efforts were made to restrict speech on the grounds of preventing contagion. This “moral panic” resulted in crackdowns on eating-disordered individuals, most prominently online, which stripped basic civil rights from people but was nonetheless unsuccessful in reducing the prevalence of eating disorders. Grey’s thoughtful examination of one specific example that began nearly thirty years before trigger warnings became widespread online is an interesting opportunity for reflection on the emergence of triggers. In the case of eating disorders, labelling images and words as triggering might have begun from concerns about people’s welfare, but ultimately became repressive and silencing of people with eating disorders. Providing “critical thinking tools and skill sets”, Grey suggests, might instead assist people to engage in more productive conversations around eating disorders.

Although the context of public concern about contagion is very different from the modern emphasis on managing individual trauma, there are certain lines of similarity with other pieces in the book. Indeed, an emphasis on critical thinking tools to aid welfare is one of the most practical suggestions that emerges from the volume as a whole. As Storla notes, one of the biggest myths around the use of trigger warnings is the assumption that a blanket warning alone can somehow prevent students from experiencing trauma. Storla’s “trauma-informed pedagogy” instead provides a nuanced framework which incorporates student participation at every turn. Her classes develop their own guidelines, debate the use of warnings at the start of the course and consider the difference between discomfort and trauma. This provides a lesson to students in considering multiple viewpoints (in particular those of the rest of the class). Similarly, in their chapter Kristina Ruiz-Mesa, Julie Matos and Gregory Langner suggest that encouraging students to consider the differing backgrounds of their audiences can be a valuable lesson in public speaking. In both cases, trigger warnings become part of the educational content rather than being in opposition to it.

Trigger warnings can, then, be about opening up conversation as well as closing it down. Several authors, including Jane Gavin-Herbert and Bonnie Washick, suggest that student demands for trigger warnings may not necessarily be about individual experiences of trauma but may instead be grounded in wider concerns about structural violence and inequality. Taking these concerns seriously and discussing them may have more impact than a simplistic warning. Indeed, Storla notes that one of her techniques – the use of “safe words” by which students can bring an end to class discussion without having to give a personal reason for doing so – has never been used by a student in her classroom. However, its existence as part of a set of communal guidelines, she feels, means students are safe and supported and thus able to engage more fully in debates. Paradoxically, having the opportunity to censor discussion might actually promote it.

As a general guide, most of the authors in this volume agree that trigger warnings are an ethical and legal practice that can and should be put in place as part of increasing access to higher education. The people most likely to request trigger warnings are minority groups, who are also at greatest risk of experiencing trauma. The problem, however, comes when these issues are individualised, as neoliberal interpretations of trigger warnings have tended to do. Bonnie Washick’s sympathetic critique of the equal access argument for trigger warnings highlights the way in which warnings have led to individuals who might be “triggered” being viewed as responsible for managing their own reactions. While trigger warnings might have begun as a form of activism and social protest, they have since been medicalised (through the framework of Post-Traumatic Stress Disorder) and individualised. By taking a critical and contextual approach to trigger warnings, both teachers and students can gain from discussing them.

Trigger Warnings: History, Theory, Context is a valuable contribution to the debate around trigger warnings in higher education today, as well as an interesting exploration of some of the nuances around why and how such a concept has emerged. An edited volume particularly suits the topic, allowing for multiple and varied perspectives. No reader will agree with everything they read here, but then that’s precisely the point. If, collectively, the authors in this book achieve any one thing it is to persuade this reader at least that trigger warnings have the potential to generate more insightful debate and critical thought than they risk preventing.

Sarah Chaney is a Research Fellow at Queen Mary Centre for the History of the Emotions, on the Wellcome Trust funded ‘Living With Feeling’ project. Her current research focuses on the history of compassion in healthcare, from the late nineteenth century to the present day. Her previous research has been in the history of psychiatry, in particular the topic of self-inflicted injury. Her first monograph, Psyche on the Skin: A History of Self-Harm was published by Reaktion in February 2017.

Skin, muscle, bone, brain, fluid

Jennifer Wallis, Investigating the Body in the Victorian Asylum. Doctors, Patients, and Practices (Cham, Switzerland: Palgrave Macmillan, 2017); xvi, 276 pages; 9 b/w illustrations; hardback £20.00; ISBN 978-3-319-56713-6.

by Louise Hide

Skin, muscle, bone, brain, fluid – Jennifer Wallis has given each its own chapter in this exemplary mesh of medical, psychiatric and social history that spans work carried out in the latter decades of the nineteenth century in Yorkshire’s West Riding Pauper Lunatic Asylum. The body – usually the dead body – is at the centre of the book, playing an active role in the construction of knowledge and the evolution of practices and technologies in the physical space of the pathology lab, as well as in the emerging disciplines of the mental sciences, neurology and pathology. Wallis explores how, in the desperate quest to uncover aetiologies and treatments for mental disorders, there was a growing conviction that ‘the truth of any disease lay deep within the fabric of the body’ (Kindle: 3822). General paralysis of the insane (GPI) is central to the book. A manifestation of tertiary syphilis and a common cause of death in male asylum patients, it was one of the few conditions that produced identifiable lesions in the brain, raising hopes that the post-mortem examination could yield new discoveries around the organic origins of other mental diseases. Investigating the Body in the Victorian Asylum is, therefore, not only about how the body of the asylum patient was framed by changing socio-medical theories and practices, but about how it was productive of them too.

Whilst reading this lucidly written monograph, it soon becomes clear that West Riding was no asylum backwater. Its superintendent, James Crichton-Browne, was determined to forge a reputation in scientific research, and West Riding became the first British asylum to appoint its own pathologist in 1872. Wallis has not only marshalled a vast amount of secondary literature, but made a deep and far-reaching foray into the West Riding archives, analysing some 2,000 case records of patients who died there between 1880 and 1900. Drawing on case books, post-mortem reports, administrative records and photographs, Wallis has created a refreshingly original way of conceptualising the asylum patient. Rather than exploring his role – the patient usually being male in cases of general paralysis – within tangled networks of external social agencies and medical practices, she turns her focus to the inner uncharted terrain of unclaimed corpses. She shows how the autopsy provided different ways of ‘seeing’ as the interior of the body was ‘surfaced’ through a range of new and evolving practices and technologies, such as microscopy and clinical photography. Processes for preserving human tissue and conducting post-mortem examinations were enhanced, as were methods for observing and testing tissue samples, and for recording findings. None of these practices was without an ethical dimension, such as a patient’s right to privacy and anonymity.

Doctors, perhaps, gleaned most from the living as they examined and observed patients on admission and in the wards; pathologists could venture into the deep tissues of the body, which were out of bounds for as long as a patient remained alive. Yet the two states could not be separated quite so neatly and Wallis turns her attention to the growing tensions between pathologists and asylum doctors as both scrambled to plant their disciplinary stake in the ground, navigating boundaries between the living and the dead body. How, I wondered, were practices mirrored at the London County Council pathology lab, which opened at Claybury in 1893 and also investigated various forms of tertiary syphilis, including GPI and tabes dorsalis, as well as alcoholism and tuberculosis? Wallis does touch on other laboratories, but it would be interesting to know a little more about how they interacted with each other.

One of the many strengths of the book is the way in which Wallis makes connections between social and cultural mores and the impact of wider political and medical developments. Germ theory was, of course, highly influential. Wallis touches on the ‘pollution’ metaphor but might have expanded on the trope of the ‘syphilitic’ individual as a vector of moral depravity in the western context – an unexpected swerve of narrative into the belief systems of the Nuer jars slightly. Otherwise, Wallis provides a fascinating investigation of the social framing of the male body with GPI, explaining how atrophied muscle and degenerating organs might be interpreted as an assault on masculinity in a period of high industrialisation. Soft bones could be equated to a loss of virility and femininity; broken bones forced asylums to ask whether they might be due to the actions of brutal attendants, rough methods of restraint, or physical degeneration in the patient.

Investigating the Body in the Victorian Asylum provides a meticulously researched and thoroughly readable – for all – social history of an important development in the mental sciences in the nineteenth century, centring it around the evolving practices of post-mortem examinations. I particularly like the way in which Wallis writes herself, her research process and her thinking into the book. Her respectful treatment not only of the asylum patients but of the medical and nursing staff who cared for and treated them is threaded through from beginning to end. One might not expect to be gripped by descriptions of ‘fatty muscles’, ‘boggy brains’ and ‘flabby livers’, but Wallis reveals a fascinating story that is full of originality and tells us as much about nineteenth century medical practice as about the patient himself.

Louise Hide is a Wellcome Trust Fellow in Medical Humanities and based in the Department of History, Classics and Archaeology at Birkbeck, University of London. Her research project is titled ‘Cultures of Harm in Residential Institutions for Long-term Adult Care, Britain 1945-1980s’. Her monograph Gender and Class in English Asylums, 1890-1914 was published in 2014.

 

The evolution of Raymond Aron: biological thought and the philosophy of history

In the current issue of HHS, Isabel Gabel, from the University of Chicago, analyses the links between evolutionary thought and the philosophy of history in France – showing how, in the work of Raymond Aron in particular, a moment of epistemic crisis in evolutionary theory was crucial to the formation of his thought. Here, Isabel speaks to Chris Renwick about these unexpected links between evolutionary biology and the philosophy of history. The full article is available here.

Chris Renwick (CR): Isabel, we should start with an obvious question: Raymond Aron, the main focus for your article, is a thinker most readers of History of the Human Sciences will be familiar with. But few – and I count myself among them – will have put Aron in the context you have done. What led you to bring Aron and evolutionary biology together?

Isabel Gabel (IG): Yes, this was a real revelation for me too. I knew Aron as a sociologist, public intellectual, and Cold War liberal, but had never seen his early interest in biology mentioned anywhere. It was actually in the archives of Georges Canguilhem, at the CAPHÉS in Paris, that I stumbled upon a reference to Aron and Mendelian genetics.  In 1988 there was a colloquium organized in Aron’s honor, and Canguilhem’s remarks on Aron’s earliest years, and the problem of the philosophy of history in the 1930s, had been collected and published along with several others in a small volume. At the time, Canguilhem felt that not enough importance had been given to the fact that his late friend had abandoned a research project on Mendelian biology, as he put it. This totally surprised and, needless to say, delighted me.  I quickly found a copy of Introduction to the Philosophy of History, and began reading.

As someone who works in both history of science and intellectual history, I frame my research questions to address both fields. Aron’s development as a thinker is really a perfect illustration of how these two fields converge, because his encounter with biology can be so precisely localized in time and space. It wasn’t just that he made the obvious connection between theories of evolution and philosophical approaches to history. Rather, it was the very specific moment in which he happened to encounter evolutionary theory, and that this happened in a very French context, which so profoundly shaped his thought.

CR: An important part of your article involves outlining the context of French debates about evolution, which provides the backdrop for Aron’s early intellectual development. As a historian of evolutionary thought myself, I found this part fascinating and something I had only really encountered periodically in my research – Naomi Beck’s work on Herbert Spencer’s reception in France is one example of where I have read about these kinds of issues before. The French context seems strikingly different from the Anglophone one. What do you think the Francophone context brings to our discussions of both the history of evolutionary thought and the human sciences related to it?

IG: The French context is absolutely central to this story. Everything from the specifics of the French education system, to the cultural politics of Darwinism in France, to the state of the French left in the twenties and thirties played a role in how and why Aron brought evolutionary theory and the philosophy of history together. First, because debates about evolutionary mechanisms were, if not insulated from Anglophone science, at least somewhat resistant to the incursion of external concepts, the epistemic crisis of neo-Lamarckism could only have happened in France. Also, while it’s important to note that Aron’s self-understanding was very post-Henri Bergson, there is no denying Bergson’s influence on early-twentieth-century French biology. All of which is to say that mid-century France is a fascinating case for understanding the feedback loop between biology and philosophy.

Moreover, it’s the very specificity of the French case that makes it so useful for thinking through methodological questions such as the one you raise about the shared history of evolutionary thought and the human sciences. In recent years, there has been renewed interest in bringing science and humanities/social science into dialogue with one another, an impulse that historians of science should of course welcome. Part of what the story of Aron and the philosophy of history in mid-century France can teach us is how contingent these influences can be. In other words, as evolutionary theory evolves over time, so too do the ways we interpret its meaning for the human past. In France in the twenties and thirties, it was the limits of science that were most instructive to Aron. French biologists couldn’t quite bridge the gap between observations and experiments in the present and the theory of evolution they believed explained past events. Objectivity became, for Aron, partly about acknowledging the limits of both positivism and philosophical idealism, i.e. a way of negotiating the relationship between the limits of observation and the limits of theory.

The French context therefore instructs us not to buy in too quickly to the idea that science offers facts and humanities subsequently layer on interpretation. This picture does a disservice to both the science and the humanities. What becomes visible in the case of Aron and French evolutionary theory is that biology and philosophy were encountering parallel epistemic crises, and therefore that neither one could singlehandedly save or authorize the other.

CR: Another issue that I thought was important in connection with Raymond Aron is liberalism. As you explain in your article, most people think of liberalism when they think of Aron. However, we don’t necessarily think of liberalism when we think about evolutionary biology. Liberalism and evolutionary biology have such a fascinating and entangled history. Why do you think we are now so surprised to find people like Aron were so interested in it? 

IG: Those who know Aron by reputation as a Cold War liberal may be surprised, because the conversations he helped shape were about ideology and international order. But I don’t know that everyone will be surprised that Aron was so interested in biology, so much as they might be unsettled. We associate any contact between political beliefs and evolutionary theory with deeply illiberal commitments, with racism, eugenics, and just plain old bad history. And while it’s true that we should approach attempts to import scientific data into humanist frameworks with caution, we also shouldn’t grant science more explanatory power than it can hold. In recent history, the liberal position has been a vigorous critique of biological determinism, but as Stephen Jay Gould and others repeatedly teach us, the point is not simply that society or history is autonomous from the biological, but that biology itself is not as determinist or totalizing as we sometimes understand it to be. That’s why reading the work of scientists themselves is so important, because it brings out the provisional, ambiguous, and contentious nature of their endeavors. It shows that they aren’t stripping the world of contingency, but rather prodding at and making visible new contingencies.

CR: The history you uncover in your article is incredibly revealing in what it tells us about the intellectual origins of not just Aron’s thought but the milieu out of which many people like him emerged. Do you think there is anything in that history that is of particular relevance or importance for the present?  

IG: Yes, I do think there are really instructive parallels with the present. Aron came of age in a time of enormous political upheaval and two catastrophic world wars. Political and epistemological upheaval go together, and so this generation of French thinkers can speak to our own anxieties about the eclipse of humanities and social sciences by STEM fields. One way to think about this history’s relevance would be to see Aron as a cautionary tale – the science changes quite quickly as the Modern Evolutionary Synthesis takes shape, DNA explodes as a new way to understand life over time, and antihumanism gains cultural strength in France. So it’s not clear that Aron’s study of biology really got him where he wanted to go. But I actually think this picture is a little too cynical, because it ignores what’s so interesting about Aron’s philosophy to begin with. He understood that biology and philosophy were facing some of the same questions, such as how to understand the past from the perspective of the present, and whether laws that explained the present could be known to have operated the same way in the past.

In this way, we ought to pay attention to how STEM fields and the humanities are speaking to some of the same questions. For example, there’s been a lot of energy around the concept of the Anthropocene recently, and it’s a perfect opportunity for historians to contribute to a conversation about something that is both a scientific claim—that humans have become a geological agent—and a historical, political, and moral one. We can offer a longer-term understanding of how history and natural history have spoken to one another in the past, how the human has been constructed through philosophy, human sciences, and natural sciences, and how thinking about the end of civilization is saturated with political imagination. Deborah Coen’s work on the history of scaling is a great example, as is Nasser Zakariya’s recent book, A Final Story.

CR: I was very interested in what you had to say in your article about bridging the gap between intellectual history and history of science, which is an important issue for an interdisciplinary journal like HHS. The material or practice turn in history of science has been important in creating this division, as you explain in your conclusion. This turn needn’t rule out the human, of course, and it hasn’t, as work on subjects like the body shows. But it’s clear, as you explain, that many historians of science see intellectual history as something that needn’t concern them. Why do you think this belief is misplaced, and what do you think we would all gain by putting the two together again?

IG: I hope that the story I’ve told in my article illustrates one immediate benefit of overcoming the longstanding division between intellectual history and history of science. Namely, that there is historical work that just hasn’t been done as a result. Aron’s early interest in evolutionary theory, and its effect on his philosophy of history, is not an isolated case. There is enormous potential in fields like the history of knowledge, history of the humanities, as well as in fields like environmental humanities, to bring the tools of intellectual history and history of science to bear on any number of subjects.

But also within intellectual history, the elision of science has meant flawed or at least partial understandings of figures as enormously influential as Aron. At the same time, within the history of science the material turn that you mention led to a kind of reflexive suspicion of philosophy, which John Tresch has written about. Tresch sees the potential of intellectual history to give the history of science a broader scale – to get beyond the case study. I think this is part of the story, but that on an even more basic level the history of science will be better told if its methodological framework can accommodate the conceptual feedback that exists between science and philosophy, in addition to the feedback between science and society, institutions, and technology. One of the most exciting things about reading the work of French biologists is discovering the degree to which philosophical questions preoccupied them not as extra-scientific or ex post facto interpretations, but as urgent problems to which their research was addressed.

Isabel Gabel is Postdoctoral Fellow at the Committee on Conceptual and Historical Studies of Science at the University of Chicago. Her current book manuscript, Biology and the Historical Imagination: Science and Humanism in Twentieth-Century France, provides a genealogy of the relationship between developments in the fields of evolutionary theory, genetics, and embryology, and the emergence of structuralism and posthumanism in France.

Chris Renwick is Senior Lecturer in Modern History at the University of York, and an editor of History of the Human Sciences. His most recent book is Bread for All (Allen Lane).

Pageantry and the picket line: on the psychology of striking

by Hannah Proctor

Watching the current University and College Union (UCU) picket lines from afar – I’m a postdoctoral fellow based in Germany – I was trying to think if I’d ever come across any psychological writings on striking, and, more specifically, on picket lines. Of course, as Chris Millard has pointed out already in this series, strikes are not primarily expressions of feeling; they are withdrawals of labour. Indeed, references to strikers’ ‘deep’ or ‘strong’ feelings in letters by university Vice-Chancellors seem to downplay the material demands being made by striking workers. I was nonetheless interested in finding out whether theorisations of the psychological experience of picket lines – as specific spatial, temporal and interpersonal phenomena – already exist.

Perhaps unsurprisingly, a search of the multiple psychoanalytic journals included in the Psychoanalytic Electronic Publishing database for ‘picket line’ yielded just fourteen results. I looked at the two earliest examples that appeared on this list and in both cases the picket line appeared as a fraught symbol for individual bourgeois analysands. In a discussion of compulsive hand-washing from 1938, a female patient writes a short story whose protagonist is based on her hotel maid’s participation in an elevator operator strike. The patient took up the workers’ cause, organising meetings in support of their actions. In this period of political involvement, the analyst reports, the patient’s hand-washing stopped. As soon as her involvement with the strike ceased (interpreted by the analyst as a form of sublimation), her ‘compulsive’ behaviour resumed.[ref]George S. Goldman, ‘A Case of Compulsive Handwashing’, Psychoanalytic Quarterly, 7 (1938), 96-121.[/ref] In an article from 1943 a ‘frigid hysteric’ patient dreams of a bus trip being cancelled due to a transport strike. In the subsequent interpretation of the dream, which includes a long cutting from a newspaper article on a strike of charwomen the patient had read, the analyst interprets her reaction to the story as relating to her ‘desperately struggling for male status’; she did not ordinarily support strikes on political grounds, but did so in this case due to the gender of the workers.[ref]Edmund Bergler, ‘A Third Function of the “Day Residue” in Dreams’, Psychoanalytic Quarterly, 12 (1943), 353-370.[/ref] In neither example are the patients themselves involved directly in the strikes, and neither has first-hand experience of picket lines; the strikes’ psychic significance is tied to existing individual neuroses. Of course, non-psychoanalytic theories, with less sinister assumptions about group psychology, might be a better place to start for approaching the question at hand. But instead I found myself thinking about the possibility of approaching the question from different fields altogether.

While researching for a recent article reflecting on commemorations of the 1917 October Revolution I found myself reading about early twentieth-century mass spectacles, left-wing pageants and revolutionary dance troupes in the Soviet Union and America.[ref]https://www.radicalphilosophy.com/article/revolutionary-commemoration[/ref] In many of these cases the relationship between ‘actual’ historical events and ‘fictional’ theatrical reenactments proved to be blurry. As the title of a 1933 piece Edith Segal choreographed with the Needle Trades Workers Dance Group in New York indicates – Practice for the Picket Line – workers in union or party affiliated dance groups would create scenarios drawing on their own experiences, which would in turn function as rehearsals for future political action.[ref]Ellen Graff, Stepping Left: Dance and Politics in New York City, 1928–1942 (Durham, NC: Duke University Press, 1999), p. 43.[/ref] But as a historian of the ‘psy’ disciplines with an interest in affective histories of the left, I was particularly intrigued by how the psychological function of such performances was articulated by their creators, participants and audiences. Perhaps these examples, though remote from the ‘psy’ disciplines, could provide material for thinking through the psychic dimension of the collective experience of picketing.

In January 1913 silk weavers and dyers in Paterson, New Jersey went on strike after four workers were fired for complaining about the introduction of a new four-loom technology that required a less skilled workforce. With the strike still on-going but little coverage of it in the mainstream press, activists and intellectuals in New York collaborated with the striking workers to produce an elaborate pageant in Madison Square Garden on June 7, 1913, sponsored by the Industrial Workers of the World (IWW). The pageant was intended to publicise the strike and raise money for the strike fund, which was urgently needed as the striking workers and their families were at risk of starvation. But while the pageant’s purpose was financial, propagandistic and educational, it was also emotional. The pageant saw 1,029 strikers reenacting the dramatic events of the picket lines, punctuated by familiar songs from the labour movement in which the audience was invited to join.[ref]For the programme of the pageant and other associated primary documents, see: ‘Paterson Strike Pageant’, The Drama Review: TDR, 15, 3 (1971), 60-71.[/ref] The dramatic, fast-paced temporality of the staged strike differed markedly from the drawn-out nature of the real one, but the worker-performers found the rehearsal process gave them a chance to reflect on and process their experiences. Almost 15,000 people watched the performance, which was then described in detail in New York newspapers. The sympathetic leftist publication Solidarity claimed that the performance ‘seized the imagination’, while the hostile New York Times accused it of ‘stimulating mad passion against law and order’.[ref]These reviews are cited in Steve Golin, The Fragile Bridge: Paterson Silk Strike, 1913 (Philadelphia: Temple University Press, 1988), p. 166, p. 169.[/ref] Although these accounts differed in their political assessment of the production, they both emphasised its psychological power. Although the strike was simulated, the passions the reenactment stimulated were real.

The pageant failed to raise significant amounts of money and many subsequently declared it a failure that distracted workers and took them away from the real pickets outside the mill.[ref]See Elizabeth Gurley Flynn, ‘The Truth About the Paterson Strike’, in Rebel Voices: An IWW Anthology, ed. Joyce Kornbluh (Ann Arbor: University of Michigan Press, 1965), pp. 214-226.[/ref] Indeed, the organisers produced the spectacle at a loss. The Paterson silk strike itself was soon defeated. Workers began returning to the factory in July and many of their demands were never met. Yet some discussions both by contemporaries and historians insist that the pageant succeeded not only as an aesthetic innovation which inspired future artistic endeavours – John Reed, one of the New York intellectuals who instigated the production, would soon leave for Europe; his book Ten Days That Shook the World would become a defining account of the October Revolution, inspiring Sergei Eisenstein’s October in turn – but also as a cognitive and affective interpersonal experience which similarly outlived the performance itself. Though sufficient funds were not raised, consciousnesses were raised (to use the vocabulary of the pageant’s participants and chroniclers).[ref]See, for example, Leslie Fishbein, ‘The Paterson Pageant (1913): The Birth of Docudrama as a Weapon in the Class Struggle’, New York History, 72, 2 (1991), 197-233; Linda Nochlin, ‘The Paterson Strike Pageant of 1913’, Art in America, 62 (1974), 64-68. In her discussion of Segal’s performance Ellen Graff writes that ‘Radicals hoped that mock demonstrations… would prepare workers for actual confrontations as well as engage their sympathies and raise political consciousness.’ Stepping Left, p. 43.[/ref]

The terms ‘class consciousness’ and ‘political consciousness’ in reflections on the performance function as psychological concepts despite rarely having been explicitly understood as such, and as concepts which seem to have gone largely unthematised within the ‘psy’ disciplines. One starting point for trying to think more about the psychology of the picket line would be to think more carefully about how these terms were used in this context, how they allude to political concepts elaborated by Lenin, Rosa Luxemburg, György Lukács and others, but also depart from or complicate them. I’d be interested in thinking about how an emphasis on gaining a broad intellectual understanding of a political situation was combined with an insistence on the importance of immediate emotional experiences, how emotional experience can allow individuals to situate themselves within a collective, and so on.

Perhaps more central, however, would be a consideration of the function of re-enactment as a form of reflection upon political action (and even as a form of political action in its own right), enabling strikers and their supporters to better understand and communicate their struggle. This might open up ways of approaching the (necessarily very different) forms of reflection, representation and dissemination that attend current disputes. Of course, there is often a kind of theatricality to the picket line, as has been evident during the current UCU strikes – which can function to communicate demands, alleviate the tedium of standing around in the cold all day, attract more people to the picket, etc. – but the example of the pageant brings into focus a slightly different set of questions about how striking workers represent and communicate their struggles to themselves and others to forge and sustain the solidarity necessary to resist capitulation. This seems particularly urgent in a context in which collective memories of labour organising can be hard to locate.

The current strikes have not succeeded yet, but UCU branches’ rejection of the deal proposed on Monday (March 12th) indicates that something has shifted during this dispute, which may mark the beginning of a broader resistance to the wider marketization of higher education in the UK. University vice-chancellors’ insistence on invoking the ‘deep’ or ‘strong’ feelings of their striking employees can be read as attempts to reduce picket lines to sites of emotional fracas, or coordinated temper tantrums, strangely divorced from the collective withdrawal of labour. But attending to the psychological dimensions of the picket line could potentially do something very different, offering space for acknowledging the anxiety, frustration, boredom and anger associated with striking, while also allowing us to explore how joyful interpersonal collective experiences can participate in building and sustaining political movements.

Hannah Proctor is a Fellow of the Institute for Cultural Inquiry, Berlin.

Image attribution: The accompanying image, ‘Bus load of children of Paterson, N.J., strikers (silk workers) in May Day parade – New York City] [graphic]’ has been sourced from the online catalogue of the Library of Congress. There are no known restrictions on reproduction. The original can be viewed here: http://www.loc.gov/pictures/resource/cph.3b00599/

On the emotional and material politics of the strike

by Chris Millard

Strikes stir the emotions. The solidarity of picketing, the anxiety of students missing classes, the anger of those who feel wronged enough to withdraw their labour. There are doubtless strong feelings behind the current University and College Union (UCU) industrial action to defend ‘defined benefit’ pensions. These varied feelings have been mobilized in a number of ways over the past few weeks, and this short post (building on some tweets here and here) is an attempt to analyse a bit further the emotional politics of striking.

There are a number of distinct parts (both core and peripheral) of the strike action at play at any one time: the picket line, supportive demonstrations and rallies, teach-outs, the implied dissent, and the withdrawal of labour itself. A key aspect of the emotional politics of this strike can be seen in the confusion (both deliberate and unwitting) between a number of these elements. The Vice-Chancellor of The University of Sheffield, Professor Sir Keith Burnett, sent an email to staff where those on strike were characterized as ‘communicat[ing] the strength of their feelings through strike action’ (you can see an extract from that email in this tweet from Sheffield historian, Simon Stevens). This was not a hostile characterization, but it was a serious misunderstanding. It crystallised some of the issues I’ve had during the strike and on the picket.

As others have pointed out in response to that communication, the point of a strike is not to express feeling; it is to disrupt the operations of the employer to force them back to the negotiating table. As Simon Stevens himself put it on Twitter: ‘A strike is an effort to rebalance the material interests shaping the employer’s behaviour by shutting down production and/or operations.’ But this material intervention seems lost, forgotten or at the very least undersold in the current dispute. The idea (often tacit) that ‘so long as you don’t cross the picket, you are not strike-breaking’ is at issue here. If you do work for the university at home, or in a coffee shop, or anywhere else on strike days, you are not striking. If you sit at home on a strike day and edit an article to submit to the Research Excellence Framework (REF), you are not striking. So far, so orthodox.

The other side of this misapprehension is where emotions come in. It is related to the fact that universities have increasingly been seen in recent years as battlegrounds for free speech. The most recent eruptions are over the Prevent agenda and no-platforming; universities have of course for many years been associated with radical protest and demonstrations. These aspects, the expression of dissent, opinion and protest – often couched in terms of the articulation of feelings – have become conflated with strike action. Picket lines superficially have many of the material trappings of protests: banners, placards, megaphones and chanting. But their object is quite distinct from that of a protest. They exist to demarcate a strike area, to put the strike into a spatial idiom. Even if someone is ‘only’ crossing a picket line for ‘one meeting’, they risk signalling that they do not support the strike, regardless of their actual intentions.

We appear to be in a period of flux in how we think about picket lines, and there is real ambiguity about members of other unions, especially if those unions do not support (or are prevented from supporting) the picketing taking place. However, it is perfectly possible to (at least partially) mitigate the crossing of a picket – with stories across universities of people who were working bringing hot drinks and snacks out to picketers. I am much less sympathetic to people in UCU, often in senior, permanent positions, who choose not to strike because they don’t agree with the particular issue being foregrounded, or the particular tactics employed. If you choose to freelance when asked for collective action, I think you’ve got some hard thinking to do about solidarity. Saying ‘I don’t agree with this or that’ about the action mistakes a collective agreement to withdraw labour (overwhelmingly voted for by members) for individuals having a protest. By framing the action in this way, protest is foregrounded, and the spatial and material disruption of the strike disappears. Sir Keith met with Sheffield UCU representatives on Tuesday – and was snapped holding an ‘Official Picket’ sign. However, he still committed this error, inviting UCU delegates into his office, which was across the picket line. They refused to cross, but met him later in a neutral venue.

So picket lines are not protests, and they are not about expressions of feelings: they are a spatial manifestation of the collective withdrawal of labour. Strikes are also material interventions that are undermined by all work, even when it doesn’t cross the picket line. Well, so what? As I see it, this brings into focus the demand to reschedule teaching, which had previously been backed, by many institutions, by a threat to deduct up to 100% of pay for each day teaching was not rescheduled. (Most institutions have backed down under public pressure on this particular point. A list of institutions not understood to have backed down on this point at the time of writing can be seen here.) However, the fact that the demand was made at all is important, in both material and emotional terms. According to the view that mistakes strike action for an expression of feeling, once the feeling is expressed, there is no reason why the teaching can’t be done. It can be rescheduled (the logistical impossibility notwithstanding), because making the point was the point, rather than the withdrawal of labour. In other words: the supportive demonstrations, the protest, the signs, the placards have obscured the core of the strike, i.e. the withdrawal of labour.

But materially this matters too. Pay has already been, or will be, docked for the work not done, so clearly the employers are engaging on this material, financial level. This relation is in turn connected to the financial interests of students: they are paying their fees, so why shouldn’t they get the promised teaching? However, the employers do not follow through on this view of the financial politics of the strike; they do not, for example, propose to repay people for making up the work. They also (as noted) threatened to dock pay again for the refusal to reschedule. This only makes sense if the strike is transformed into a free-floating expression of dissent, of emotion, of feeling. But that logic doesn’t tally with pay-docking.

If the strike is an expression of feeling, then employers should not dock pay on strike days, and the demand to reschedule becomes understandable. (The corresponding logic of holding a 14-day demonstration outside places of work is another matter.) On the other hand, if the docking of pay is legitimate as a response to the withdrawal of labour (and the withdrawal is put in spatial terms by a picket line), then the demand to make up that labour is arguably a challenge to the right to strike after the fact. Emotions and feelings matter hugely in this action. But they are not the point of the strike.

Chris Millard is Lecturer in the History of Medicine and Medical Humanities at the University of Sheffield, and book reviews editor of History of the Human Sciences.

This article represents the views of the author only, and is not written on behalf of History of The Human Sciences, its editors, or editorial board.

The featured image, ‘Penn[sylvania] on The Picket Line — 1917’ comes from ‘Women of Protest: Photographs from the Records of the National Woman’s Party,’ Manuscript Division, Library of Congress, Washington, D.C. There are no known copyright restrictions with this image. The original can be viewed at this link: https://www.loc.gov/resource/mnwp.160022

#notallgeographers

by Felicity Callard

Human geography – a discipline in the hinterland of the human sciences – is a discipline preoccupied with praxis. Analyses of the relationship between what the geographer writes, what the geographer says, and what the geographer does have animated many of the discipline’s vigorous epistemological and political battles. It is unsurprising, then, that the University & College Union (UCU) strike over Universities Superannuation Scheme (USS) pensions has brought questions of praxis into fraught focus. Indeed, in Marxist and other radical geographies – whose histories are generally traced back to the 1960s – the strike has been a privileged site of analytical and activist attention. But tensions today have not been solely about which geographers are – and are not – on the picket lines. Broader issues over where the discipline of geography is made, and who comes to represent that discipline, are at stake. On the picket lines and on social media, geography’s present and past – both in material and fantasmatic form – are being worked up and worked through.

On the first day of the strike, the Vice Chancellor (VC) of the University of Sussex, Adam Tickell, issued a statement that made it clear that he did not believe that there was an ‘affordable proposal’ for pensions that would satisfy both USS and the Pensions Regulator. As the hours passed, Tickell appeared uncompromising in the face of calls for him to join other VCs who had called for a return to negotiations. An interview with him conducted shortly before the strikes was re-circulated – where he was quoted as saying ‘The younger me may have taken part in the strikes, I don’t know about the current me.’

So far, perhaps so predictable. But Tickell’s words about strike participation could not but carry particular weight given that they had been uttered not only by an economic geographer, but by one of the most prominent theorists of neo-liberalism. Indeed, Tickell, in the 1990s and 2000s, had published – often in articles co-authored with Jamie Peck – what became some of the most widely read, and remain some of the most widely taught, economic-geographical anatomizations of post-Fordism, neoliberalism, and global finance. (You can see a list of Tickell’s publications here.) On day 2 of the strike, I addressed Adam Tickell on Twitter, lamenting how ‘my younger (geography undergrad & grad) self would not have wanted to imagine that I would be reading your work, years on, to help in the fight against what you are now upholding.’ (These perturbances are as much about disciplinary memory as about a discipline’s moral rectitude.) As the days of the strike passed, anger against the position adopted by Tickell amongst geographers grew, to the point where the lustre of the esteemed author was at risk of being tarnished by the apparent intransigence of the university head. By day 4 of the strike, an anonymous parody Twitter account for ‘Adam Tickle. VC.’ was up and running; it was tweeting about the strike and about the disparity between the alleged ‘early’ and ‘late’ Tickell.

Laura Gill at the University of Sussex picket line holds a placard reading ‘SLAY THE NEOLIBERAL BEAST,’ a quotation from Adam Tickell. From a tweet by Benjamin Fowler, and used with both his and Laura Gill’s approval. Original tweet available at: https://twitter.com/B_B_Fowler/status/968481178444541952

Many human sciences have wrestled with how best to bring into focus the object that demands analysis (in this case, the current crisis within universities manifest through the USS strike) – debating which frameworks best allow us to understand that object, as well as the role of those variously positioned in relation to that object. In this sense, Tickell has become a useful figure. Through him, many more general issues – that are not actually about one, or even several individuals, and that relate to the production of academic knowledge and the organization of today’s universities – can be debated and contested. Here those debates centre on the extent to which one university manager’s earlier publications on neoliberalism could and should be used to understand the current crisis in toto, as well as on the extent to which the existence of those same writings should give added weight to the moral opprobrium directed at that same manager’s current stance. There are two separate issues here. One might, with Barnett, think that neoliberalism ‘was and is a crap concept.’ In this case, one might argue that Tickell’s earlier writings – and his formulation of concepts therein – don’t much help us in understanding, let alone combating, what is unfolding in universities today. Our energies would be better spent sourcing better writings from the archives and activism of geography – as well as from other social sciences and social movements. But that does not imply that the disruption provoked by the inferred disparity between ‘early’ and ‘late’ Tickell is misplaced. If the former concern is largely epistemological, the latter is as much ethico-political as epistemological.

Geographers Derek McCormack and James Palmer hold placards in the strikes quoting from Adam Tickell’s research papers. From a tweet by Tina Fawcett, and used with approval from her as well as from Derek McCormack and James Palmer. Original tweet available at: https://twitter.com/fawcett_tina/status/968777039384928262

Here we have a scene in which the history of geography, and the politics of that history, is undergoing disturbance. (The Adam Tickle Twitter parody account explicitly reworks the discipline’s historiography through its satiric phrase ‘formerly an economic geographer of note’.) And below the contretemps over Tickell, something else pulses in the discipline’s corpus – something I do not think has been worked through. That is geography’s collective relationship to the long, and continuing, career of Nigel Thrift. Thrift is another prominent geographer and social theorist who was a highly visible and, in the words of student-facing website The Tab, ‘divisive’ VC at the University of Warwick. In the course of his tenure there, the institution – as Times Higher Education put it – ‘experienced a number of controversies’. In relation to Thrift, there is obvious scope to reflect on the relationship between his earlier work on left politics and his later career as a university manager. And there have been, online, some serious, critical reflections on this. But in the standard outlets for academic production, such as journals, there has been – as far as I know – very little substantive discussion. This is a noticeable – and meaningful – lacuna.

But I want to return to the affective and political disturbance generated by the stance taken by Adam Tickell. And to one reason why the apparent disparity between the so-called ‘early’ and ‘late’ Tickell seems to have been experienced by many – including me – as peculiarly wounding. To my mind, we should not uniformly expect or demand thinkers and writers to be free of contradiction. I recall, here, the opening of the obituary of one of geography’s most prominent radical theorists and activists, Neil Smith, which drew on words spoken by the radical geographer David Harvey (also Smith’s doctoral supervisor as well as colleague at CUNY) at Smith’s memorial service: ‘Neil Smith was the perfect practicing Marxist – completely defined by his contradictions’. (Such inconsistencies did not sway Smith’s steadfast commitment to radical politics.)

Contradiction in and of itself is not the problem. Then what is? Let’s look at how the passing of time is staged. Tickell said that while his ‘younger me’ may have taken part in the strikes, he was not sure about his older, contemporary self. Such a sentence resonates with a powerful discourse in which left politics is positioned as a childish practice, one that might well need to be given up as adulthood ensues. (Recall Saint Paul’s exhortation to ‘put away childish things.’) And this is not unconnected with the rhetorical campaign that Universities UK has been waging in an attempt to persuade others of the pragmatism, reasonableness and maturity of their assessment that there is no clear option around pensions other than the one they have proposed.

That the discipline of geography has produced a number of today’s UK Vice Chancellors (including Paul Boyle at the University of Leicester, Paul Curran at City University of London and Judith Petts at Plymouth University) – as well as the current UK Conservative Prime Minister – makes it urgent for many of us on the picket lines to demonstrate that geography as a discipline and as a political project is not exclusively held by or in those figures. The figure who might regard strikes as childish things needs to be substituted; another articulation of the social world, and of the geographer’s role in making it, needs to take their place. Hence geographers from UCL carrying a placard during the strikes announcing that ‘Not all geographers are neo-liberal vice-chancellors.’ Or the social and economic geographer, Alison Stenning, using the hashtag #notallgeographers, tweeting that, in spite of some ‘ignominious attention [that] certain geographers are getting’ geographers had nonetheless ‘been pretty impressive on the picket lines & the Twitter frontline.’

But I want to conclude with the outlines of a psychosocial argument, one that dismantles the apparent disjunction between the early and the late – or the gap that appears, as one Twitter user put it, within ‘the radical academic turned hard-line conservative’. Beneath the concern that many of us geographers have for the stances taken by prominent individuals within the discipline, perhaps lies a deeper wound that has not substantially been acknowledged or worked through. And that is the possibility that the very criticality of much of what passed for ‘critical geography,’ in the 1990s and beyond, precisely constituted the register of the successful and upward-moving academic. That criticality was part and parcel of adhering to and advancing a certain kind of theoretically-smart ‘knowledge’ that was required as evidence that would help one advance – even to the level of VC – within a professionalized space. Being critical in a particular way in the 1990s was, indeed, one of the pathways to advancement. And many of those ‘critical’ publications were at the heart of, rather than in conflict with, the current remaking of the university.

Rather than the adult putting away childish things, or the late eclipsing the early, what if the child made the adult? What if the early led to – was continuous with – the late, rather than being disavowed by it? If this were the case, then it would put many of us – and I include myself explicitly, here – in an uncomfortable position. For let us acknowledge the affective payoff that can accompany lamenting the eclipse of the early by the late: in addition to anger, it is possible to feel secure in one’s conviction that one has now cast out the late as the politically compromised. The radical credentials of a good geography are safe. By contrast, a situation in which there is no easy division between the early and late, the putatively radical and the compromised, is much more affectively and politically tricky to navigate. And this leads to some difficult questions that I have been pondering – on and off the pickets – over these last few days.

First, how do those of us inside as well as outside geography tell the history of critical geography in and beyond the 1990s? This is certainly important epistemologically – it’s part of the history of the human sciences that deserves greater attention than it has currently received. But it’s also central to how we understand what has been happening to the university. And this should help us think through how we might best use the strike in which we are currently involved to challenge what we see as most pernicious about these recent transformations.

Second, where and how is geography made? Where does it do its work? While there has been some interest in the apparent abundance of geographers who have become VCs, I don’t think we (those of us in and near geography and the history of the human sciences) have remotely got to grips with how to account for this. If there is that abundance in comparison with other disciplines, how does that reroute our accounts of where and how geography as an epistemological formation wields power? The tight relationship between PPE (the University of Oxford’s degree in philosophy, politics and economics) and the UK’s twentieth-century elite is a topic of frequent discussion. Beyond Neil Smith’s account of Isaiah Bowman, where are the historically and sociologically astute analyses of hard and soft geographical power?

Third, how do we widen the circles for forms of critical praxis that are not beholden to discourse and practices of promotion and managerial success in academia? What does that mean for those of us making geographies on and off the picket lines today? The interventions of black studies and anti-colonial studies, in particular, provide numerous routes through which to envisage – and put into practice – the reshaping of geography and of the university.

And there is one final note in relation to my previous point. It would be too easy to construe the historical tale of geography’s travails as a white boys’ story. Many of the protagonists in this post – those who have wielded power, and those configured as radicals who have contested it – do indeed fall within this category. But there are also many, ongoing attempts on the picket lines and on social media to disrupt that historical account, and to disrupt the future paths that geography and the university might take. As I finish this post, the geographer Gail Davies, for example, is unearthing the complex role that management consultants have played in the USS valuation and in the discursive shift that university senior managements have made towards ‘flexible pensions’. There is perhaps more work to be done along these lines before we can, indeed, comfort ourselves with the thought: #notallgeographers

I am profoundly indebted to Stan (Constantina) Papoulias in the writing of this blog post. They clarified for me much of what was most interesting in the figuring of the early and the late – in particular in relation to how a certain kind of criticality went hand in glove with the late twentieth-century transformations of the academy. Our discussions have taken place as we both take strike action in our respective universities.

Felicity Callard is Professor of Social Research in the Department of Psychosocial Studies at Birkbeck, University of London, Director of the Birkbeck Institute of Social Research and Editor-in-Chief of History of the Human Sciences.

This article represents the views of the author only, and is not written on behalf of History of The Human Sciences, its editors, or editorial board.

Striking is the best medicine

by Rhodri Hayward

Six days into the current University and College Union (UCU) strike against pension cuts, Universities UK (UUK), the representative body for British higher education management, launched a series of tweets and videos in support of University Mental Health Day. In a move that is now pretty familiar, the presentations shifted attention from a toxic environment in which staff and students now experience unprecedented levels of mental distress, to a series of tips for self-care – joining a club, eating well, pursuing a hobby – in which much of the responsibility for well-being is placed back upon the shoulders of the individual sufferer. As the UUK Mental Health Policy Officer advised in a Twitter video, ‘Don’t be afraid to take time for yourself.’

I guess to many of the viewers, this advice must have seemed spectacularly mistimed. At the precise moment that the UUK was outlining its commitment to ending anxiety and depression in higher education, the wider organisation was working to significantly change pension conditions, undermining the secure livelihood once promised to university staff. It would be foolish, however, to dismiss the advice out of hand. The idea of ‘making time for oneself’ has been a central part of the labour struggle for the last three centuries. As E. P. Thompson argued many years ago, once employers had hammered into modern workers the idea that ‘time is money’, employees’ struggle shifted from the preservation of traditional rights to the recovery of lost time.[ref]E. P. Thompson, ‘Time, Work-Discipline, and Industrial Capitalism’, Past & Present 38 (1967): 56-97.[/ref]

The attack on future pensions, and the different analyses offered by UUK and by E. P. Thompson, all point to ways that different notions of temporality are caught up in academic work: not simply in the way it is organised but also in the way that it is experienced. The unremitting busyness of academic life, mostly complained of but occasionally worn as a ridiculous badge of honour, throws colleagues into a relentless present in which prospect and perspective are all too often lost to the insistent clamour of everyday demands. This sense of the overwhelming present is only heightened, as the critic Mark Fisher noted, by the precariousness of modern casualised labour, which offers no secure place from which to understand our past or project our future hopes.[ref]Mark Fisher, Capitalist Realism: Is There No Alternative? (Chichester: Zero Books, 2009), p. 34.[/ref] Strikes offer us an opportunity to disengage, to escape a constricting present and get a sense of where we stand in time. Many strikes, certainly most of the strikes I have participated in, are kind of nostalgic: they mark a world we are on the brink of losing, or perhaps have lost. Others, like this current strike, quickly go beyond that, taking us out of the present to remind us there is a future to make. They give us, as UUK recommended, the opportunity to take time for ourselves. In our present crisis, strikes are the best medicine we have.

Rhodri Hayward is Reader in History in the School of History at Queen Mary University of London, and one of the editors of History of The Human Sciences.

The accompanying image, ‘Image taken from page 5 of “The Universal Strike of 1899. [A tale.]”‘ has been taken from the British Library’s flickr site. The original can be viewed here.

This article represents the views of the author only, and is not written on behalf of History of The Human Sciences, its editors, or editorial board.

Book Review: ‘About Method.’

Jutta Schickore, About Method: Experimenters, Snake Venom and the History of Writing Scientifically. Chicago: University of Chicago Press, 2017. 316 pp., US$50.00. ISBN: 978-0-226-44998-2 (hbk).

by Peter Hobbins

If scientists reflect only infrequently on their commitment to experimental method, contends Jutta Schickore, then historians and sociologists have been equally remiss in interrogating this lacuna. In her carefully considered About Method, Schickore interrogates the history of snake venom research to dissect the ‘methods discourse’ promulgated by key practitioners from 1650 to 1950. In historicising her actors’ statements about ‘proper’ experimental practice – over time and across emergent disciplinary boundaries – Schickore proffers a tripartite framework for evaluating their epistemological imperatives. Encompassing ‘protocols’, ‘methodological views’, and ‘commitments to experimentation’, her novel schema is applicable to unpicking disciplinary investment in experimentation across diverse scientific communities.

The author’s focus on snake venom is neither arbitrary nor arcane. At the outset she foregrounds one of the most astonishing scientific projects of eighteenth-century natural history: Felice Fontana’s studies of viper venom. Undertaking literally thousands of experiments, this Tuscan naturalist sought to understand far more than simply the pathophysiology of being injected with venom. In enumerating the quantity, variety, variability and enduring uncertainties attendant upon his observations, Fontana reflected deeply upon the heuristic purpose, design and conduct of experiments.

The sheer scale of his vivisectional program – unsurpassed until well into the twentieth century – was thus paralleled by Fontana’s epistemological legacy. Indeed, this very continuity justifies Schickore’s selective focus in tracking methods discourse across three centuries. ‘For more than 250 years’, she remarks, ‘venom research was imbued with a strong sense of tradition both in terms of techniques and results and in terms of the methodology of experimentation’ (p.4). Moreover – and importantly for scholars working across the human sciences – venom research continues to intersect with multiple biomedical disciplines, including biochemistry, physiology, pathology, bacteriology and immunology. It is indeed an apposite field for asking what experimenters believed they were actually doing.

The result is a coherent and largely consistent unravelling of the dialectic linking programmatic statements about method with pragmatic experimental experience. Eschewing a Foucauldian formulation of ‘discourse’, Schickore instead defines ‘methods discourse’ as the rhetorical framing of good experimental practice. Yet, she insists, ‘methods discourse does not constitute a specific genre of text’ (p.215). Rather, three tiers of elements can be discerned across shifting modes of experimentation and expression. At its most ordinary, methods discourse simply outlines the specifics of experimental or observational design – a ‘protocol’. The next level, ‘methodological views’, articulates the procedures deemed necessary to generate empirically verifiable results. The deepest stratum, ‘commitments to experimentation’, encapsulates ‘the imperative that scientific ideas must be confronted with, or based on, empirical findings’ (p.213).

Just what those findings are, and how they can be validly obtained, lies at the heart of each of Schickore’s close readings in historical context. Commencing in the early modern shadow of Francis Bacon, she turns first to Francesco Redi’s 1664 text, Observations on Vipers. Under the patronage of the Tuscan court, Redi combined animal experiments, dissections and observation of human cases to detail the effects of being injected with venom (or ‘envenomation’). His commitment to the repetition of experiments both challenged prevailing rubrics received from ancient authorities and delineated the full range of experimental circumstances that might alter the outcome of a given trial. Convinced that he had thereby vindicated his veracity as a natural philosopher, Redi concluded that the toxic agent in snakebite was viper’s venom. Yet neither his experiments nor his epistemology led him to query how it caused death.

In contrast, French apothecary Moyse Charas insisted that the viper’s ‘yellow fluid’ was inert. Rather, it was the serpent’s enraged spirits which were transmitted to its victim during a bite. Charas’s response to Redi’s trials was to assert that uniformity of experimental results – rather than the variability of procedural circumstances – carried the greatest epistemic weight. Charas thus emphasised both the heuristic value of definitive outcomes, and the importance of comparative trials. Unlike Redi, his narrative sought both to explain away inconsistent results, and to interleave his recordings with causal explanations. Rather than testing their respective truth-claims, Schickore teases out how the dispute between Charas and Redi ‘tells us much about how the general commitment to experimentation was fleshed out … [and] how flexible and fluid were the methodological statements employed by early modern experimentalists’ (p.52).

Schickore turns next to physician Richard Mead, a British medical maven whose Mechanical Account of Poisons was reworked over multiple editions from 1702 to 1747. In contrast with many fellow clinicians, Mead recapitulated the necessity for experimentation according to the methodological purity of mechanical philosophy – primarily the works of Isaac Newton. Yet, remarks Schickore, ‘Mead’s treatise does not seem to be informed by any practical challenges he might have encountered in his research’ (p.76). If his commitment to empiricism was overt, his methodological views remained decidedly opaque. Indeed, the most remarkable transition across the various versions of Mead’s work was the incorporation of others’ experimental results. These transformed his mechanical conception of venom, from sharp salts that burst blood ‘globules’ to an agent that vitiated the victim’s nervous fluid.

Mead’s work proved powerful across the Anglophone world, but paled in comparison with Fontana’s studies, which spanned the final third of the eighteenth century. Indeed Fontana’s oeuvre forms the conceptual and chronological pivot for About Method. The central chapters inspect selected protocols and rhetorical structures drawn from his 700-page opus, Treatise on the Venom of the Viper. Here, Schickore focuses on Fontana’s place in shaping two formative strands of methods discourse: the value and delimitations of repetition, and the heuristic purpose of prolixity, the extravagant use of detailed text that proliferated page after page after page.

Across the biological sciences, Fontana’s fame arose chiefly from his insistence upon conducting repeated experiments, reporting in great detail their minor procedural divergences. ‘In Fontana’s work’, Schickore notes, ‘the leitmotif is the phrase “I varied the experiment in a hundred different ways”. It appears over and over again’ (p.84). In each case the apparatus and protocol were carefully laid out, including dead-ends and failures, in order ultimately to design the simplest trials capable of generating the purest results. As the earlier chapters highlight, there was no novelty to insisting on repetition or comparison. Rather, Schickore contends, Fontana’s fundamental innovation was a thoroughgoing commitment to exploring almost infinite variations in the conduct of his experiments, and their impact upon the outcomes.

Allied with this procedural largesse – including its horrific toll on animal life – was Fontana’s careful documentation of his practices and inferences. The result was a prodigious text configured as a narrative with ‘the flavor of a (very gruesome) scientific adventure story’ (p.82). For Fontana, prolixity not only buttressed his ‘epistemological sovereignty’ – in the words of Ohad Parnes[ref]Parnes, O. (2003) ‘From Agents to Cells: Theodor Schwann’s Research Notes of the Years 1835–1838’, in Frederic L. Holmes, Jürgen Renn and Hans-Jörg Rheinberger, eds., Reworking the Bench: Research Notebooks in the History of Science. Dordrecht: Kluwer Academic Publishers, pp. 119–39.[/ref] – but invited readers to share his journey as individual protocols, outcomes and interpretations were concatenated into an exhaustive chain of investigation. It struck me that Fontana’s work predated Alexander von Humboldt’s synoptic insistence on recording every conceivable detail of the physical and biological world. Both men ultimately struggled with aggregating and selectively representing their accumulated data.

This concern, indeed, animates the second half of About Method. The acknowledged heir to Fontana, at least in the Atlantic world, was Philadelphia physician, physiologist and littérateur, Silas Weir Mitchell. Schickore’s discussion of Mitchell broadens the analysis of methods discourse to consider its intersections with nascent if highly contentious definitions of ‘scientific medicine’ across the second half of the nineteenth century. She contends that Mitchell’s experimental and textual strategies resulted from two contemporaneous concerns: the urge to adjudicate upon ‘rational’ therapeutics, and the growing public opprobrium of vivisection. In contrast with my own focus on the epistemology, ontology and ethics of vivisection in venom research, Schickore explores Mitchell’s insistence on comparative experimentation and the abstraction of his results into tabulated data.[ref]Hobbins, P. (2017) Venomous Encounters: Snakes, Vivisection and Scientific Medicine in Colonial Australia. Manchester: Manchester University Press.[/ref]

Mitchell’s insistence on comparison was not animated by a growing concern with experimental variability. Rather, it reflected the inherent diversity of snakebite. Bemoaning the poorly documented natural history of envenomation in humans, Mitchell also conceded that laboratory animals – especially dogs – responded in markedly different ways to nominally consistent toxins. Furthermore, by the late 1860s it was becoming apparent that there was no singular ‘snake venom’; its differentiation by species was followed, from the 1880s, by an appreciation that venoms themselves comprised multiple active constituents. These acknowledgements of biological individuality sat uncomfortably with Mitchell’s commitment to vivisection, and may have prompted his turn to tabulated data to facilitate ‘the synoptic presentation of evidence’ (p.138). Schickore suggests that this drive for concision shaped an evolving methods discourse in the latter half of the nineteenth century.

On the one hand, therefore, Schickore warns against a teleological reading of emergent disciplines. Both experimental protocols and methods discourse crossed multiple fields of practice in the nineteenth century. Not to survey this breadth risks omitting pertinent experimental mentalities and methodologies. On the other hand, there is a certain telos to Schickore’s own rendering of the imperative of the busy reader. The medical publishing transformations from 1850 to 1900, she argues, pushed back against Fontana’s prolixity in favour of brevity and structural regimentation. This is certainly one reading, but Mitchell was as well regarded for his prose as his science; might not an alternative pathway have favoured experimental virtuosity matched by rhetorical verbosity?

The last quarter of the book explores the epistemological implications of the formation of specialised practitioner communities. If this organizational gambit was itself a means of mastering the exponential growth in experimenters and publications, methods discourse also increasingly addressed the twinned problems of control and standardization amid the burgeoning ‘agency of substances that were not directly observable’ (p.175). Textually, Schickore observes, by the 1930s scientific papers had foregone any lingering narrative elements and largely adopted the modular introduction-methods-results-discussion format familiar to current-day biomedicine. ‘This bland list of standardized procedures and methods could hardly be any more different from Fontana’s graphic prose’, she laments (p.212).

The final chapters likewise become more synoptic and feel a little harried, in contrast with the elaborate, close reading that precedes them. Turning to centrifuges, electrophoresis and debates over the degree to which venoms consist of proteins, these chapters comprise a more contextual sweep across the biomedical literature. This rendering parallels the fate of Schickore’s twentieth-century protagonists, who ‘found themselves on shifting grounds [as] theoretical approaches multiplied, concepts changed meanings, and new analytic techniques were being developed’ (p.200). Indeed, proliferating instruments, reagents, procedures and analyses themselves became a barrier to unambiguous empirical interpretation. It now became the place of survey reports and review articles – rather than individual studies – to reflect upon the intent and value of experimental methods. This final section, however, comes to a rather abrupt end, without a clear explanation for why 1950 marks a specific terminus in methods discourse.

About Method nevertheless remains true to its title. It surveys a three-century span not to tell a comprehensive history of venom research, but to intricately contextualise the shifting ways in which modern scientists have committed publicly and procedurally to experimental method. The focus on Atlantic world investigators necessarily side-lines scholarship on venom research in Asia, India, Australia and Africa, while Schickore’s engagement with the ethics and heuristics of vivisection is restrained rather than foregrounded. The book also treads a fine analytical line between the elaborate specifics of laboratory praxis and the literary technologies and witnessing procedures articulated by Steven Shapin and Simon Schaffer in their seminal work[ref]Shapin, S. and Schaffer, S. (2011/1985) Leviathan and the Air Pump: Hobbes, Boyle, and the Experimental Life. Princeton: Princeton University Press.[/ref]. Yet, written in a pleasant and at times jocular style, Schickore’s text sustains an intellectual rigour and precision throughout. In asking fundamental questions about what experimenters believed they were doing, its interpretive value for scholars across the biomedical and human sciences is undoubted.

Peter Hobbins is a historian of science, technology and medicine. A postdoctoral research fellow at the University of Sydney, his work focuses on the epistemology of research and its ontological products. He is the author of Venomous Encounters: Snakes, Vivisection and Scientific Medicine in Colonial Australia.

History and trauma across the disciplines: psychoanalytic narratives in the mid twentieth century

In the new issue of History of the Human Sciences, Matt ffytche analyses the exclusion of traumatic histories from psychoanalytic accounts of the mid-twentieth century, through a detailed engagement with the figure of the father (and of family authority) in different forms of psychoanalytic theory. Focusing especially on the work of the German psychoanalyst Alexander Mitscherlich, ffytche traces the filtering out of the historical experiences of Nazism and the war from psychoanalytic narratives of the social – but then their return in texts of the 1980s and 1990s, under the banner of a new interest in historical trauma. Here, HHS Editor-in-Chief Felicity Callard interviews Matt about his article.

Felicity Callard (FC): Maybe we can start off with the institutional context in which you work. You have recently transitioned from being the director of a Centre for Psychoanalytic Studies to becoming the head of a new Department of Psychosocial and Psychoanalytic Studies. Can you tell us more about this new department, and what its emergence tells us about the history and sociology of psychoanalysis in the present?

Matt ffytche (Mf): It’s a very exciting moment for us, and a fascinating, transitional moment for the discipline. In many UK institutions, programmes connected to psychoanalysis have been in long-term decline, I think mainly because of the way in which Centres or Units which were once set up in relationship with schools of psychology or health have found the disciplinary ground being whittled away from under their feet as the institutions which housed them have gone more and more quantitative. In the humanities, I think interest in psychoanalysis has remained steady (usually in its Lacanian form) but just as part of the general critical mix – it has rarely dealt in full-scale psychoanalytic programmes. The University of Essex, along with Birkbeck and a few other institutions, have bucked this trend and found a real impetus to growth around such topics as psychoanalysis and the psychosocial – and this may have something to do with social science, which allows forms of research and enquiry connected to psychoanalytic viewpoints to stay in touch with mental health, without being limited by the specific positivist agendas of the natural science disciplines. At the same time, the social science platform does allow the critical dimensions of psychoanalysis, and the psychosocial, to be explored and extended, and paired up with contemporary feminist and postcolonial agendas, amongst other things. The Centre at Essex did originally emerge out of the Sociology department during the 1990s, I think out of a small research group interested in psychoanalysis and mental health. But it has grown so much, particularly over the last decade, that it was impossible to reabsorb it back into Sociology – so the change this year was really about recognising that we were a fully-functioning department of our own, with two BAs and a new BA Childhood Studies coming in, and various Masters programmes and a large body of PhD students. Where exactly this all goes remains to be seen – but credit has to go to Essex for recognising the potential of new kinds of disciplines in the current climate, or at least the need to respond flexibly to the kinds of topics school leavers are wanting to work on. And many of them do want topics connected to mental health, or a more narrative psychology, or one with critical dimensions. I think the future of psychoanalysis in the academy (which is different from its future as a clinical, professional practice) has a lot to do with whether it can continue to be rethought, in conjunction with the rethinking that has been going on in other disciplines for a while.

FC: Your article focuses on the father and on fatherhood. There has been, recently, a fair amount of writing in and around the history of the human sciences on the mother and motherhood (two examples being Rebecca Plant’s Mom and Anne Harrington’s essay ‘Mother love and mental illness: an emotional history’). There has arguably been less on the father and fatherhood. Would you agree? And what are you hoping to encourage in terms of future research on the father?

Mf: That’s a difficult one for me to answer, mainly because I was driven less by an involvement in the stakes of fathers per se than by ‘the father’ as this vanishing or dwindling moment in mid-century psychoanalytic social psychology, in which nearly all the authors are registering the same thing: fathers no longer have the status they once had; and people’s subjective formation, and social formation, is increasingly bypassing what for psychoanalysis had been thought of as a foundational ‘confrontation’ with the father. That was the perception. The issue for me was less about whether any of these approaches were accurately responding to the experience of families, and more to do with the way the family – and gender – had been inscribed in theories of socialisation, and how aspects of this seemed to be imploding in the 1960s. Which was surely a necessary thing, given the sweeping reassessment of patriarchy which (in the time frame of my article) was about to be made in the 1970s. So I’m less able to comment in relation to representations of motherhood and fatherhood now. What I can add as a side-note is the way in which, in psychoanalysis of the 1940s and 50s – especially in the UK – a shift had already been made, by figures such as Melanie Klein, Donald Winnicott, and John Bowlby, to accord far more status to the mother. They near enough reversed Freud’s emphasis on sons and fathers – which Freud had speculatively extended back through cultural time, beyond the epoch of Oedipus and into deep, Darwinian primeval history – into one of mothers and babies, as foundational for the future of emotional life. But this shift didn’t seem to make its way into the more mainstream social psychological authors inflected by psychoanalysis, who were still evidently reading the Freud of the 1920s into the social experience of the 1950s and 1960s.

FC: Your essay is one of a number of recent reconsiderations of how to think the (socio)political in relation to psychoanalysis. I’m thinking, for example, of Michal Shapira’s The War Inside, Dagmar Herzog’s Cold War Freud and Daniel Pick’s The Pursuit of the Nazi Mind. How would you account for the blossoming of such an interest, now, and how would you distinguish your own approach to this question? 

Mf: It’s been an interesting development, and one I feel closely associated with – Dagmar Herzog has now joined me as Editor of Psychoanalysis and History, and Daniel Pick and I collaborated on an edited volume – Psychoanalysis in the Age of Totalitarianism (Routledge, 2016) – to which Dagmar and Michal Shapira both contributed chapters. One way of looking at it is that these are all attempts to move the history of psychoanalysis forwards from a concentration on the ‘Freud era’ to what came after. Some of this is simply about extending the history of psychoanalysis forward through the century, and responding to the opening up of more recent archives. But it was also a way of getting beyond the idea that the history of psychoanalysis necessarily has to do with the history of Freud. It’s in the 1920s to 40s in particular that you start to get a much broader involvement of the second wave of psychoanalysts in other disciplinary fields (pedagogy, criminology, child psychology, sociology, etc.). In the totalitarianism book I was intrigued by the ways in which Frankfurt School sociologists had responded to the rise of fascism and anti-Semitism with a wave of psychoanalytically informed research and critical theory. The article for HHS was in some ways an attempt to keep tracking that alliance onwards in subsequent history.

As to why this and similar initiatives amongst historians might be happening now? I think there’s also a recognition that there is a huge postwar history of psychoanalysis that remains to be told – too much for any single volume, because of the degree to which psychoanalytic ideas implicated themselves so successfully in many different cultures in the middle decades of the twentieth century. And it’s always a fascinating history, because of the ways in which psychoanalysis delves into people’s fears and fantasies, or makes unusual social interventions, challenges existing assumptions, etc. It’s hard to characterise my own input here, but if anything I’d say that I pursue the history of psychoanalysis because of the complex ways in which it poses questions about the nature of identity and subjectivity that are relevant far beyond the practice of psychotherapy. For this reason, psychoanalysis is always renewing its engagements with philosophy, sociology and critical theory – an alliance very present within current psychosocial studies.

FC: You were trained in literature as well as in history, and your article made clear to me how much we need to move with agility between the social sciences and the humanities – including, in particular, literary studies – if we are properly to analyse the history of psychoanalysis. How would you describe your approach to interdisciplinarity in relation to the history of the human sciences?

Mf: I’ve never felt myself committed to any discipline exactly, and have generally simply pursued questions that I have felt to be crucial, about the way subjectivity is understood in the modern era, particularly in relation to individualism. And that’s true for thinkers that have influenced me as well – which REF panel would you submit Frantz Fanon to, or Julia Kristeva, or Walter Benjamin, or Freud for that matter? It’s a huge problem at the moment – despite all the lip-service paid to interdisciplinarity, most elements of the way academic research and teaching are set up conspire to make liaison across fields difficult. My own tendency is to choose interdisciplinary ground not just in order to put fragments of a bigger social picture together, but in order to materialise entirely new domains. For instance, one of the areas that is intriguing me going forwards is the role of ‘fantasy’ in the social sciences. There’s obviously a psychoanalytic link here, because fantasy is an object of enquiry for psychoanalysis. But it goes wider than this. We tend to think of fantasy as something inaccessible, or illusory, or private, or without social agency. But how could you study the history of modern racism, for instance, without putting together materials from literature and culture, with other discourses circulating in the historical and social science archives, and further data from psychosocial work with the dreams and fantasies that regularly feature in relation to racial hatred? Ideally one needs to be housed in an institution that allows for that capacity, to bring these kinds of materials together across various disciplinary divides, because the logic of the material itself demands it. And one also has to work with others to find ways of arbitrating across different criteria of knowledge-formation, different theoretical frameworks.

FC: Whatever else your essay does, I read it as a fascinating contribution to the history of the emotions – in the way that you narrate a complex history whereby empathy and coolness are, variously, objects of analysis and epistemic virtues upheld by those doing the analysing. Can you say more about how this element of your argument reorients how we might think about the histories of disciplines?

Mf: Interestingly, that point about empathy was the one that occurred to me last of all, in this article that went through several iterations in the last couple of years. Or it’s the furthest point I reached in my thinking. What really struck me was the way in which psychoanalysts drawn to social science, in the 1950s and 1960s, ended up in some instances detaching themselves from the kind of empathic relation to their subjects that psychotherapy might foster. Instead, they became more abstract ‘diagnosticians’ of societies. With Alexander Mitscherlich and Margarete Mitscherlich in particular (the German psychoanalysts who are my focus in the latter half of the article) there was the added factor that this was postwar Germany, and – for them – empathy with their subjects was not an option; or they willingly exchanged empathy for moral critique. For them, Germans had been unable to acknowledge their emotions, and their past, and what had resulted was a kind of postwar social pathology, despite the apparent economic successes. This shift in the Mitscherlichs was something I could pull out by comparing certain texts, and in particular looking at the side effects of certain disciplinary lenses on the way human subjects were rendered. More recently I participated in a workshop on ‘Denial’ at the Pears Institute in London, which brought together historians, literary scholars, social scientists, and psychoanalysts, and for me it raised related sets of questions about the ways in which historical subjects (in this case the agents of certain forms of violence and genocide) are rendered differently by different disciplines. One of the things that shifts is whether the observer is called upon to empathise with the object of enquiry, called to empathise by the discipline itself. What assumptions do disciplines build in about a common ‘human’ or moral viewpoint? To what extent do they use empathy to ‘flesh out’ the minds and motives of perpetrators; and by contrast, to what extent – by abandoning subjective levels of description – do other attitudes create ‘impersonal’ subjects, who then appear to act in incomprehensible, unjustifiable ways? It’s certainly an area for further work.

Matt ffytche is head of the Department of Psychosocial and Psychoanalytic Studies at the University of Essex, and the author of The Foundation of the Unconscious (Cambridge University Press).

Felicity Callard is Editor-in-Chief of History of the Human Sciences and director of the Institute for Social Research at Birkbeck, University of London.