The new special issue of History of the Human Sciences, edited by Sarah Marks, focuses on psychotherapy in Europe. Articles range across the twentieth century, tracing psychoanalysis in Greece, the transnational shaping of Yugoslav psychotherapy, hypnosis in Hungary, the role of suggestion in Soviet medicine, mindfulness in Britain, and Dialectical Behaviour Therapy in Sweden. In parallel, History of Psychology has published a special issue on psychotherapy in the Americas, edited by Rachael Rosner. Here, Marks and Rosner discuss the authors’ contributions, and what’s at stake when writing about the history of psychotherapy.
Sarah Marks (SM): Perhaps we can start by tracing how the idea for these issues came about. You and I first met at a conference at University College London in 2013 organised by myself and Sonu Shamdasani on the history of psychotherapy – but the idea for these parallel issues came from you: what was the motivation behind the idea, and the particular focus of Europe and the Americas?
Rachael Rosner (RR): Your conference was a watershed moment for me personally. For years I had been trying to figure out where the history of psychotherapy belonged. The history of science? The history of medicine? The history of the social, behavioral and human sciences? Psychotherapy straddles all of them, but from the standpoint of historians asking shared questions, there wasn’t yet a home base. Your conference was an important step in that direction.
Rachael Rosner
Sonu followed in 2016 with a mini think-tank on transcultural histories of psychotherapy, which you and I attended. Felicity Callard (who had been at the 2013 conference) had just assumed co-editorship of History of the Human Sciences and Nadine Weidman had just become editor of History of Psychology. It seemed likely that Felicity and Nadine would encourage good work coming out of this nascent community. So the idea just clicked that you and I might guest-edit coordinated issues as a way of continuing the momentum. The idea was inspired by a strategy National Institutes of Health researchers had used in the late 1960s to nurture psychotherapy researchers. They published the proceedings of a workshop on psychotherapy research methods in two journals simultaneously, American Psychologist and Archives of General Psychiatry. I thought we might try something similar. Thankfully, you, Nadine and Felicity were enthusiastic. Your expertise was in European psychotherapy and mine in American, so we would focus on those regions. But this was just a starting point. Excellent work is being done on the history of psychotherapy in Asia and India and, hopefully soon, in Africa as well.
SM: Both of these issues try to put the question of place at the centre of the debate – both in terms of local specificities, and the transfer of knowledge and practice across borders and cultures. For Europe, it’s curious how much long-term continuity there was despite the geopolitical divisions of the Cold War – practices including hypnosis, suggestion and group psychoanalysis which emerged in Western and Central Europe in the earlier half of the century remained in play in different parts of Eastern Europe well into the 1960s and 1970s. And we also see the crucial importance of transatlantic connections in both directions, especially from America to Europe in recent years. How did transnational and transcultural stories play out within the Americas?
RR: What is astonishing is how many of the innovations in the Americas were local improvisations on European trends. It’s not surprising that this transfer of knowledge happened within psychoanalysis, but our special issue illustrates that it was happening in other domains too. In Argentina, as Alejandro Dagfal shows, French ideas consistently spurred psychotherapeutic innovations. The pieces by Jennifer Lambe, and by Cristiana Facchinetti and Alexander Jabert, also show the French influence, in this case among followers of the French spiritualist Allan Kardec (Kardecian Spiritists). Erika Dyck’s and Patrick Farrell’s paper on LSD therapy tells the story of a disaffected British psychiatrist who found support in the isolation of the Canadian prairies. In America the trans-Atlantic trends were more heterogeneous and reciprocal. British psychotherapists played a huge role in catapulting American Aaron Beck to stardom, just as Beck’s CBT helped British clinicians gain advantage with the NHS. The only article in our special issue that doesn’t follow the transcultural theme is Deborah Weinstein’s account of how family therapists in America embraced the removal of homosexuality from the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) and came to normalize both homosexuality and gay families.
SM: We know that many forms of psychotherapy have long been entangled with religious or spiritual practices, right back to the Quaker Tuke family at the York Retreat in the 1790s – and Matthew Drage shows in HHS that Buddhism has remained a significant driving force in the transmission of mindfulness practice in Britain, even as it has become bound up with cognitive science and evidence-based outcomes studies in recent years. It seems that religion played an even more central role in psychotherapy – albeit in slightly different ways – in North and South America in the 20th century. Could you tell us more about what your authors found in relation to this?
RR: Yes, you’re right. Psychotherapies in the Americas tapped deeply into spiritual trends right from the beginning. David Schmit’s biography of Warren Felt Evans, founder of the Mind Cure movement, takes the story of religion and psychotherapy in America farther back even than Eric Caplan’s work. Americans continued to embrace the religious aspect, even if they didn’t always recognize it as such. Carl Rogers was a minister before he became a psychologist, for instance, and client-centered therapy was as much an expression of religious as psychological imperatives; immigrant psychoanalysts who made such a big mark on American psychotherapy, like Erich Fromm, Erik Erikson and Viktor Frankl, were also fully engaged with religious questions. When D. T. Suzuki brought his Buddhist practices to America mid-century, Erich Fromm and behavior therapist Albert J. (Mickey) Stunkard were hugely enthusiastic. These are just some examples of how the religious impulse remained strong throughout the history of American psychotherapy. We might imagine that Catholicism would come into play, especially in Central and South American psychotherapies, and there is scholarship to suggest as much. But the big surprise in our special issue was Kardecian Spiritism in Cuba and Brazil. Kardecian Spiritism had no presence at all in North America. So this is an exciting line of research.
Sarah V. Marks
SM: You yourself have done considerable work on the history of Cognitive Behavioural Therapy (CBT) in America, especially in relation to the work of Aaron Beck. Could you tell us a bit more about how you have started to write this in to the broader history of psychotherapy?
RR: Beck’s Cognitive Therapy (CT) can be difficult to grasp from the standpoint of the history of the human sciences because there is little in it that speaks to the subjective or the emotional—his ideas don’t intersect with art, literature, philosophy, the linguistic turn, etc. This lack of intersection, however, is also what makes Beck’s CT interesting historically. CT flourished at the turn of the 21st century in the U.S. and the U.K. precisely because of tensions between objectivity and subjectivity. Most psychoanalysts by then were plunging even deeper into the subjective, under the influence of Lacan, Foucault, and others. But the vast majority of non-analytic therapists—largely psychologists and social workers—were making a mad dash in the opposite direction, to objectivity. The rise of the Randomised Controlled Trial meant that therapists seeking federal research funding or reimbursement for treatment had no choice but to embrace objectivity. Beck was in the right place at the right time. He had been plying CT since the early 1960s, with only moderate success. But now, suddenly, by 1985 or so, CT and CBT were the gold standard. They met the clinical, economic and research needs of a large number of therapists. Interestingly, the supremacy of the objective didn’t mean that Beck’s followers abdicated the subjective. They have rather been engaged in a subtle dance between objectivity and subjectivity that is fascinating to study historically.
SM: I’m aware that you’re writing a biography at the moment – could you say a bit about the challenges and rewards of biography as a genre?
RR: Historians of science often malign biography as soft scholarship. Mike Sokal has done a good job challenging this assumption, but there’s more work to do. One of the major challenges of writing biography is convincing historians that the argument is not parochial or hagiographical. That’s a tall order. I believe that biography is uniquely well-suited to the history of psychotherapy. Psychotherapy actually defies the categories historians use for bracketing our subject matter. I do not believe that psychotherapy is in fact a sub-genre of medicine, or science, the behavioral sciences, religion, psychology, or anything else. No profession has managed to corner the market on its practice. Psychotherapy is, rather, a historical chameleon. Maybe “shape shifter” is a more accurate description. Psychotherapy quickly assumes the characteristics, colors, virtues and temperament of the person practicing it—whether that person is a doctor, a minister, a rabbi, a mystic, a housewife, a psychologist or a brush salesman. Each iteration is unique to the practitioner. Biography taps into that idiographic quality. We can write social, cultural, intellectual, and other kinds of histories of psychotherapy, and they are all worthwhile. But biographies get to the core of psychotherapy because they get to the core of the person who is practicing it. Several years ago I attended the annual conference of BIO (Biographers International Organization), and the keynote speaker remarked that what she loves about biography is that it is an experience of our shared humanity. Biographers are trying to make emotional contact, to have a shared experience, with their subjects. I love that.
SM: Like a number of my colleagues, you write from the perspective of the humanities and historical research, but you also have a background in the clinical world, and you believe strongly in the importance of writing for an audience of practitioners. Could you tell us a bit more about why this is important, and what is at stake when writing histories for this readership?
RR: I became a historian, in part, in order to agitate clinicians. The back-story is that my father was a clinical psychologist who had trained at the University of Chicago in the 1960s with people like Roy Grinker, Jr. and Bruno Bettelheim. Carl Rogers had just left Chicago, but his influence there was still very strong. Our home library included books by Freud, Jung, Fromm, Bettelheim, Rogers and others, all of whom I read avidly. I had intended to become a clinical psychologist like my father, but it became clear during my training that I was not cut out for clinical practice. Clinicians were making all kinds of assumptions about human nature I wasn’t prepared to make. They were being trained to solve problems, not to think critically, but thinking critically seemed to be where I lived. I’d been in search of a mechanism through which to bring that kind of critical inquiry to the community of clinicians about whom I cared so much. The History and Theory Area in the Department of Psychology at York University (Toronto), where I did my Ph.D. under Dr. Raymond Fancher, offered that kind of mechanism. History as practiced there was all about engaging psychologists in difficult conversations about what they do and why.
Psychotherapists fill a unique niche in western society. They are tasked with the care of emotional lives when those lives have become rocky and troubled. Neither the government nor medicine nor the church is particularly good at meeting this need, so this is a crucial function. Every therapist I have ever met, including my father, believed that theirs is a noble calling. They rarely, if ever, question the intrinsic and self-evident goodness of what they do. But to my mind it’s crucial that they do just that—or run the risk of doing harm. Sadly, I know too many stories where therapists’ over-confidence made matters worse for a patient, not better. This is a situation ripe for historical agitation, for inviting therapists to ask hard questions and, in the process, to take a more circumspect and thoughtful stance in their work.
Sarah Marks is a postdoctoral researcher at Birkbeck, University of London and Reviews Editor for History of the Human Sciences. She writes on the psy-disciplines during the Cold War, and currently works with the Wellcome-funded Hidden Persuaders project.
Claire L. Shaw. Deaf in the USSR: Marginality, Community and Soviet Identity, 1917-1991; Ithaca: Cornell University Press; 310 pages; hardback $49.95; ISBN: 1501713663
In a picture taken during the 1933 May Day Parade in Moscow, we witness a procession of young athletes with firm bodies walking towards Red Square. Dressed in a uniform of sporty blouses and practical shorts, the athletes are on their way to the Lenin Mausoleum, where they can salute the USSR’s top leaders. It’s a display seen a hundred times over – one that historians in training study in a first year course, or the general public has seen in any given documentary on life in the USSR. It would be a wholly unremarkable picture, if it were not for one detail. The first column of male and female athletes carries a banner which reads ‘glukhonemye’ or ‘deaf-mutes’. ‘With their cheerful appearance, the deaf-mutes testified to their readiness to fight alongside the working class of the USSR for the general line of the party and its leader, comrade Stalin,’ wrote Zhizn glukhonemykh, then the magazine for deaf-mutes, about the event. Deaf people seemed intent on participating in Soviet life. They dedicated themselves to overcoming the obstacles to their inclusion into the Soviet project in general and the industrial workforce in particular. For it was the Soviet project, many leading figures in the burgeoning deaf community felt, that gave them the opportunities to emancipate themselves. No longer were they the dependent, disabled people they had been under the tsarist regime – now they could become valuable members of the working class.
Lenin’s Mausoleum. Attribution: R. Seiben, via Wikimedia Commons. CC-BY-SA-3.0 https://creativecommons.org/licenses/by-sa/3.0/
However much the deaf athletes, or the editors of Zhizn glukhonemykh, subscribed to a narrative of radical inclusion, or framed perfecting the deaf masses as a Soviet aim pur sang, they were also confronted with exclusion. In everyday life not everyone was equally capable of realizing the utopian rhetoric of overcoming deafness. The deaf people in the May Day Parade picture marched alongside their hearing comrades but also distinguished themselves by carrying a banner proclaiming their deaf-muteness. This was illustrative of the separate institutions that helped deaf Soviet citizens develop a distinct communal identity, but also at times kept them at a substantial distance from the hearing world.
It is precisely these kinds of tensions between the deaf identity project and the Soviet identity project, between inclusion and exclusion, sameness and difference, which lie at the heart of Claire Shaw’s Deaf in the USSR: Marginality, Community and Soviet Identity. Shaw writes a history of deafness in the USSR from the February Revolution of 1917 to the collapse of the USSR in 1991, while situating deafness in the broader programme of Soviet selfhood. She examines the different Soviet conceptions of deafness throughout the period as influenced by factors ranging from self-advocacy, science, defectology, schooling and technology; to institutionalization, ideology and professionalization. To this end, Shaw draws on deaf journalism, films and literature produced by deaf and hearing people alike, as well as personal memoirs. The main body of her source material hails from the institutional archive of VOG, an acronym that covered the different names that the Russian Society of the Deaf bore throughout the period under scrutiny. According to Shaw, VOG offers a lens through which we can gain an understanding of what it meant to be deaf that is both broad and in-depth. The society was involved with activities concerning housing, education, sign language, literacy, labour placement, cultural work, and social services, and was, as Shaw notes early on, a locus for ‘both Soviet governance and grassroots activism and community building.’ By the end of the 1970s it was estimated that more than 98% of Russian deaf people were members of VOG, although the core of its operations was directed from Moscow and, to a lesser extent, St. Petersburg. Inevitably, and with some exceptions, much of Shaw’s focus is on these cities.
The first chapter traces the foundation of VOG in 1926 after a period of reconceptualising deafness in reaction to the tsarist period and in exchange with the new Soviet ideas. Deaf people drew upon models developed by women and ethnic minorities to turn their differences into a path towards Sovietness while simultaneously insisting that ‘the affairs of the deaf-mutes are their own.’ Chapter two brings us to the 1930s, when VOG became an organization of mass politics and deaf people tried to write themselves into the Stalinist transformative narrative. At the same time, fears about those deaf people who could not live up to the ideal spread within the deaf organization. Chapter three examines the break in deaf history that was the Great Patriotic War. Disabled war veterans raised the overall status of people with disabilities and the postwar state infrastructure was rebuilt with an emphasis on welfare. Both trends rendered VOG a stronger and more centrally controlled organization. They also heightened the existing tensions in the deaf community between striving for autonomy and being ‘passive’ recipients of expertise and care services. Chapter four zooms in on the Golden Age of deafness during the 1950s and 1960s, in which deaf cultural institutions and educational efforts flourished. Deaf people came close to a functional hybrid deaf/Soviet identity that was also advertised to the world at large. Chapter five takes a detour to follow up on a nationwide debate about deaf criminality and lingering fears concerning deafness, femaleness, marginality, and otherness, while chapter six tracks the downfall of the deaf cultural community in the Brezhnev era: deaf models of selfhood gave way to curative and technological visions. Finally, an epilogue outlines with broad strokes the evolutions deafness underwent after the collapse of the Soviet Union.
Deaf in the USSR is often at its most compelling when it grapples with the category of deafness itself. Many of our conceptions of what disability and deafness actually are have roots in 20th century disability and Deaf activism, and scholarship from the UK and the US. These conceptions bear specific political and historical connotations that are not self-evidently transferable to the context of Soviet Russia. Proponents of global disability studies have been rewriting this Anglo-American conceptual framework of disability to suit local contexts for quite some time now, but what place the former ‘Soviet world’ is to be assigned within global disability studies is still quite unclear. Few authors have tried their hand at the endeavour (See, for instance, the work of Michael Rembis & Natalia Pamuła [in Polish]).
Shaw employs her national case study to elaborate on specific Soviet understandings of deafness. A social interpretation of deafness, for example, was prevalent in the USSR decades before disability activists in the UK and the US formulated the social model of disability. Moreover, Shaw does so without falling into the trap of completely disconnecting the history of the USSR from international developments. After all, the social model of disability, as developed in the UK in the 1970s, was inspired by Marxism, while early Soviet conceptions of deafness in turn were influenced by 19th century conceptions of deafness hailing from German and French deaf education.
Dr Claire Shaw, author of ‘Deaf in the USSR.’
‘Could a defective body ever embody the Soviet ideal?’ is the question that returns throughout Deaf in the USSR. It is used by Shaw as a window onto the moulding of the Soviet self and, more importantly, onto the limitations of this moulding. While Shaw sporadically touches upon the subject of how deafness was related to other defective bodies, the topic is never fully addressed. Shaw emphasizes how work and employment were essential to overcoming deafness and approaching the Soviet ideal. In this regard deafness distinguishes itself from other disabilities, as it does not make access to physical labour quite as difficult. A limited discussion of the relation between ‘Soviet’ deafness and other forms of ‘Soviet’ disability would not have been uncalled for, especially as Shaw seems to take issue with the dire picture of disability in the USSR painted by researchers such as Michael Rasell and Elena Iarskaia-Smirnova.
Shaw is clearly interested in how studying deafness in the USSR can shed light on more than the history of deafness itself. At several points throughout the book she demonstrates that deafness can be useful for reevaluating broader historiographical debates. In the case of the 1933 May Day Parade photograph, she asserts that such forms of deaf inclusion shed a new light on this period. The 1930s have often been depicted as a decade in which earlier, more plural socialist visions of equality and emancipation were completely buried by the dictatorial regime of Stalin. Shaw’s broader reflections could have been worked through in more depth, but they show an important willingness to leave behind the type of disability history that follows an ‘add disability and stir’ recipe. It is in these attempts that the reader sometimes catches a glimpse of the full potential of disability as a category of historical analysis: valuable both in its own right, and in its ability to pinpoint questions about a society at large.
Anaïs Van Ertvelde is a PhD student at the Leiden University Institute for History on the ERC funded project Rethinking Disability: The Global Impact of the International Year of Disabled Persons (1981) in Historical Perspective. Her current research focuses on how government experts, disability movements and people with disabilities themselves conceive of, and deal with, disability in the wake of the UN international year. She uses a cross-‘iron curtain’ perspective that involves three local case studies and their global entanglements: Belgium, Poland, and Canada.
In his recent books, Plastic Reason: An Anthropology of Brain Science in Embryogenetic Terms (University of California Press, 2016) and After Ethnos (Duke University Press, 2018), the anthropologist Tobias Rees explores the curiosity required to escape established ways of knowing, and to open up what he calls “new spaces for thinking + doing.” Rees argues that acknowledging – and even embracing – the ignorance and uncertainty that underpin all forms of knowledge production is a crucial methodological part of that process of escape. In his account, doubt and instability are bound up with a radical openness that is necessary for breaking apart existing gaps and allowing the new/different to emerge – in the natural but also in the human sciences. But are there limits to such an embrace of epistemic uncertainty? How does this particular uncertainty interact with other forms of uncertainty, including existential uncertainties that we experience as vulnerable human beings? And how does irreducible epistemic uncertainty relate to ethical claims about how to live a good life? What is the relation of a radical political practice of freedom with art? After a workshop on his work at the Zurich Center for the History of Knowledge in 2017, Vanessa Rampton, Branco Weiss Fellow at the Chair of Practical Philosophy, ETH Zurich, explored these themes with Rees.
1. The Human
Vanessa Rampton (VR): Tobias, your recent work aims to destabilize and question common understandings of the human. I wonder how you would place your work in relation to other engagements with ‘selfhood’ within the history of philosophy, and the history of the human sciences more widely. Because there are so many ways of thinking of the self – for example the empirical, bodily self, or the rational self, or the self as relational, a social construct – that you could presumably draw on. But I also know that you want to move beyond previous attempts to capture the nature and meaning of ‘the human self’. What are the stakes of this destabilization of the human? What do you hope to achieve with it?
Tobias Rees (TR): In a way, it isn’t me who destabilizes the human. It is events in the world. As far as I can tell, we find ourselves living in a world that has outgrown the human, that fails it. If I am interested in the historicity of the figure of the human –– a figure that has been institutionalized in the human sciences –– it is insofar as I am interested in rendering visible the stakes of this failure. And in exploring possibilities of being human after the human. Even of a human science after the human.
VR: When you say the human, what do you mean?
Vanessa Rampton, Branco Weiss Fellow, ETH Zurich
TR: I mean at least three different things. First, I mean a concept. We moderns usually take the human for granted. We take it for granted, that is, that there is something like the human. That there is something that we –– we humans –– all share. Something that is independent from where we are born. Or when. Independent of whether we are rich or poor, old or young, woman or man. Independent of the color of our skin. Something that constitutes our humanity. In short, something that is truly universal: the human. However, such a universal of the human is of rather recent origin. This is to say, someone had to have the idea to begin articulating an abstract concept of the human, universal in its validity and thus independent of time and place. And it turns out that this wasn’t something people wondered about or aspired to formulate before the 17th century.
Second, I mean a whole ontology – that the invention of the human between the 17th and the 19th century amounted to the invention of a whole understanding of how the real is organized. The easiest way to make this more concrete is to point out that almost all authors of the human, from Descartes to Kant, stabilized this new figure by way of two differentiations. On the one hand, humans were said to be more than mere nature; on the other hand, it was claimed that humans are qualitatively different from mere machines. Here the human, thinking thing in a world of mere things, subject in a world of objects, endowed with reason, and there the vast and multitudinous field of nature and machines, reducible –– in sharp contrast to humans –– to math and mechanics. The whole vocabulary we have available to describe ourselves as human silently implies that the truly human opens up beyond the merely natural. And whenever we use the term ‘human,’ we ultimately rely on and reproduce this ontology.
Third, I mean a whole infrastructure. The easiest way to explain what I mean by this is to gesture to the university: the distinction between humans on the one hand and nature and machines on the other quite simply mirrors the concept of the human as it emerged between the 17th and 19th century, insofar as that concept implies two different kinds of realities. Now, it may sound odd, even provocative, but I think there can be little doubt that today the two differentiations that stabilized the human –– more than mere nature, other than mere machines –– fail. From research in artificial intelligence to research in animal intelligence, en passant microbiome research or climate change. One consequence of these failures is that the vocabulary we have available to think of ourselves as human fails us. And I am curious about the effects of these failures: what are their effects on what it means to be human? What are their effects on the human sciences –– insofar as those sciences are contingent on the idea that there is a separate, set apart human reality and insofar as their explanations, their sense making concepts are somewhat contingent on the idea of a universal figure of the human, that is, on the ‘the’ in ‘the human’? Can the human sciences, given that they are the institutionalized version of the figure of the human, even be the venue through which we can understand the failures of the human? Let me add that I am much less interested in answering these questions than in producing them: making visible the uncertainty of the human is one way of explaining what I think of as the philosophical stakes of the present. And I think these stakes are huge: for each one of us qua human, for the humanities and human sciences, for the universities. The department I am building at the Berggruen Institute in Los Angeles revolves around just these questions.
‘Human embryonic stem cells’ by Jenny Nichols. Credit: Jenny Nichols. CC BY
VR: What led you to doubt the concept of the human and the human sciences?
TR: My first book, Plastic Reason, was concerned with a rather sweeping event that occurred around the late 1990s: the chance discovery that some basic embryonic processes continue in adult brains. Let me put this discovery in perspective: it had been known since the 1880s that humans are born with a definite number of nerve cells, and it had been common wisdom since the 1890s that the connections between neurons are fully developed by age twenty or so. The big question everyone was asking at the beginning of the twentieth century was: how does a fixed and immutable brain allow for memory, for learning, for behavioral changes? And the answer that eventually emerged was the changing intensity of synaptic communication. Consequently, most of twentieth-century neuroscience was focused on understanding the molecular basis of how synapses communicate with one another –– first in electrophysiological and then in genetic terms.
When adult cerebral plasticity was discovered in the late 1990s the focus on the synapse –– which had basically organized scientific attention for a century –– was suddenly called into question. The discovery that new neurons continue to be born in the adult human brain, that these new neurons migrate and differentiate, that axons continue to sprout, that dendritic spines continuously appear and disappear not only suggested that the brain was perhaps not the fixed and immutable machine previously imagined; it also suggested that synaptic communication was hardly the only dynamic element of the brain and hence not the only possible way to understand how we form memory or learn. What is more, it suggested that chemistry was not the only language for understanding the brain.
The effect was enormous. Within a rather short period of time, less than ten years, the brain ceased to be the neurochemical machine it had been for most of the twentieth century, but without – and this I found so intriguing – without immediately becoming something else. The beauty of the situation was that no one knew yet how to think the brain. It was a wild, an untamed, an in-between state, a no longer not-yet, a moment of incredibly intense, unruly openness that no one could tame. The whole goal of my research was to capture something of this irreducible openness and its intensity.
Anyway, when trying to capture something of the radical openness in which my fieldwork was unfolding, I began to wonder about my own field of research: if the taken-for-granted key concepts of brain science, that is, the concepts that constituted and stabilized the brain as an object, could become historical in a rather short period of time, then what about the terms and concepts of the human sciences? Which terms might constitute the human in such a situation? These questions led me to the obsession of trying to write brief, historicizing accounts of the key terms of the human sciences, first and foremost the human itself: when did the time- and place-independent concept of the human that the human sciences operate with emerge? And this then led me to the terms that stabilize the human: culture, society, politics, civilization, history, etc. When were these concepts invented –– concepts that silently transport definitions of who and what we are and of how the real is organized? When were they first used to describe and define humans, to set them apart as something in themselves? Where? Who articulated them? What concepts –– or ways of thinking –– existed before they emerged? And are there instances in the here and now that escape the human?
Somewhere along the way, while doing fieldwork at the Gates Foundation actually, I recognized that the vocabulary the human sciences operate with didn’t really exist before the time around 1800, plus or minus a few decades, and that their sense-making, explanatory quality relies on a figure of the human –– on an understanding of the real –– that has become untenable. I began to think that the human, just like the brain, had begun to outgrow the histories that had framed it. You said earlier, Vanessa, that I am interested in destabilizing common understandings of the human. Another way of describing my work, one I would perhaps prefer, would be to say that through the chance combination of fieldwork and historical research I discovered the instability –– and the insufficiency –– of the concept of the human we moderns take for granted and rely on. I want to make this insufficiency visible and available. The human is perhaps more uncertain than it has ever been.
VR: Listening to you, I cannot help but think that there are strong parallels between your work and the history of concepts as formulated by, say, Reinhart Koselleck or Raymond Williams. I can nevertheless sense that there is a difference –– and I wonder how you would articulate this difference?
TR: First, I am not a historian of concepts. I am primarily a fieldworker and hence operate in the here and now. What arouses my curiosity is when, in the course of my field research, a ‘given,’ something we simply take for granted, is suddenly transformed into a question: an instance in which something that was obvious becomes insufficient, in which the world or some part thereof escapes it and thereby renders it visible as what it is, a mere concept. From the perspective of this insufficiency I then turn to its historicity: I show where this concept came from, when it was articulated, why, under what circumstances, and also how it never stood still and constantly mutated. But in my work this history of a concept, if one wants to call it that, is not an end in itself. It is a tool to make visible some openness in the present that my fieldwork has alerted me to. In other words, the historicity is specific: the specific product of an event in the here and now, a specificity produced by way of fieldwork.
Second, my interest in the historicity –– rather than the history –– of concepts runs somewhat diagonally to the presuppositions on which the history of concepts has been built. Koselleck, for example, was concerned with meaning or semantics and with society as the context in which changes in meaning occur. That is to say, Koselleck –– and as much is true for Williams –– operated entirely within the formation of the human. They both took it for granted that there is a distinctive human reality that is ultimately constituted by the meaning humans produce and that unfolds in society. Arguably, the human marked the condition of possibility of their work. It is interesting to note that neither Koselleck nor Williams, nor even Quentin Skinner, ever sought to write the history of the condition of possibility of their work: they never historicized the figure of the human on which they relied. On the contrary, they simply took it for granted as the breakthrough to the truth. If I am interested in concepts and their historicity, then it is only because I am interested in the historicity of the concept of the human as a condition of possibility. How to invent the possibility of a human science beyond this condition of possibility is a question I find as intriguing as it is urgent: how to break with the ontology implied by the human? How to depart from the infrastructure of the human, while not giving up a curiosity about things human, whatever human then actually means?
2. Epistemic Uncertainty
VR: I am wondering if all concepts can outgrow their histories. Isn’t this more difficult in the case of, say, ‘the body’ or ‘language,’ than for our more doctrinal concepts – liberalism and socialism, for example?
TR: Your question implies, I think, a shift in register. Up until now we talked about the human and its concepts and institutions but now we are moving to a more general epistemic question: are all concepts subject to their historicity? And if so, what does this imply? Seeing as you mentioned the body, let’s take the idea –– so obvious to us today –– that we are bodies, that it is through our warm, sentient, haptic bodies that we are at home in the world. Over the last fifty years or so, really since the 1970s, a large social science literature has emerged around the body and around how we embody certain practices and so on. Much of this literature, of course, relies on Mauss on the one hand and on Merleau-Ponty on the other. And if one works through the anthropology or history of the body, one notes that most authors take the body simply as a given. It is as if they were saying, ‘Of course humans are, were, and always will be bodies.’
But were humans always bodies? At the very least one could ask when, historically speaking, did the concept of the body first emerge? When did humans first come up with a concept of the body and thus experience themselves as bodies? What work was necessary –– from physiology to philosophy –– for this emergence? To ask this question requires the readiness to expose oneself to the possibility that the category of the body and the analytical vocabulary that is contingent on this category is not obvious. There might have been times before the body –– and there might be times after it. For example, if one reads books about ancient Greece, say Bruno Snell’s The Discovery of the Mind, one learns that archaic Greek didn’t have a word for what we call the body. The Greeks had a word for torso. They had two words for skin, the skin that protects and the skin that is injured. They had terms for limbs. But the body, understood as a thing in itself, as having a logic of its own, as an integrated unit, didn’t exist.
‘Carved stone relief of Greek physician and patient’ . Credit: Wellcome Collection. CC BY
One version of taking up Snell’s observation is to say: the Greeks maybe did not have a word for body –– but of course they were bodies, and therefore the social or cultural study of the body is valid even for archaic Greece. What I find problematic about such a position is that it implies that the Greeks were ignorant and that our concepts –– the body –– mark a breakthrough to the truth: we have universalized the body, even though it is a highly contingent category. Perhaps a better alternative is to systematically study how the ‘realism of the body’ on which the social and cultural study of the body is contingent became possible. A history of this possibility would have to point out that the concept of a universal body –– understood as an integrated system or organism that has a dynamic and logic of its own and that is the same all over the world –– is of rather recent origin. It doesn’t really exist before the 19th century. In any case, there are no accounts of the body –– or the experience of the body –– before that time, and philosophies of the body seem to be almost exclusively a thing of the first half, plus or minus, of the twentieth century. Sure, anatomy is much older, and there were corpses, but a corpse is not a body. The alternative to the realism of the body that I briefly sketched here would imply that one can no longer naively –– by which I mean in an unexamined way –– subscribe to the body as a given. The body then has become uncertain. I am interested in fostering precisely this kind of epistemic uncertainty. To me, epistemic uncertainty is an escape from truth and thus a matter of freedom.
VR: Perhaps a kind of taken-for-granted approach to the body is so bound up with what you call ‘the human’ that questioning it is necessary for your work.
TR: Indeed, although my work led me to assume that what is true for the human or the body is true for all concepts. Every concept we have is time- and place-specific and thus irreducible, unstable and uncertain. But to return to the human: we live in a moment in time that produces the uncertainty of the human all by itself. I render this uncertainty visible by evoking the historicity of the human, and this in turn leads me to wonder if one could say that the human was a kind of intermezzo – a transient figure that was stable for a good 350 years but that can no longer be maintained.
VR: I wonder what you would reply if I were to say: but isn’t that obvious? Concepts are historically contingent, so what else is new?
TR: In my experience, most people grant contingency within a broader framework that they silently exempt from contingency itself. For example, if contingency means that different societies have different kinds of concepts, then society is the framework within which contingency is allowed: but society itself is exempt from contingency. One could make similar arguments with respect to culture. If we say that things are culturally specific, that some cultures have meanings that others don’t have, or entirely different ways of ordering the world, then we exempt culture from contingency.
All of this is to say, sure, you are right, social and cultural contingency are obviously not new. But what if you were to venture to be a bit more radical? What if you did not exempt society and culture from contingency? Talk to a social scientist about society being contingent, and they become uncomfortable. Or they reply that maybe the concept of society didn’t exist but that people were of course always social beings, living in social relations. This is a half movement in thought. It assumes that the word has merely captured the real as it is –– but misses that the configuration of the real they refer to has been contingent on the epistemic configuration on which the concept of society has depended. We could say that the one thing a social scientist cannot afford is the contingency of the category of the social.
What I am interested in is the contingency of the very categories that make knowledge production possible. To some degree, I am conducting fieldwork to discover such contingencies, to generate an irreducible uncertainty: as an end in itself and also as a tool to bring into view in which precise sense the present is outgrowing –– escaping –– our understanding and experience of the world.
3. Knowledge Production Under Conditions of Uncertainty/Ignorance
VR: I imagine there is a kind of parallel here with how natural scientists would react to the fact that their concepts no longer fit, for example by developing a more up-to-date way of thinking the brain to replace the synaptic model. But it strikes me that, if done properly, this task is much more radical for practitioners of the human sciences. This is because all of our concepts – including such fundamental ones as the human and the body – are historically contingent, meaning that we have to do away with universal categories. Our task is to fundamentally destabilize ourselves as historical subjects, as academics, as knowers. And I guess a key question is how this destabilization, this rendering visible of uncertainties, can nevertheless be linked to the kinds of knowledge production we have come to expect from the human sciences.
TR: The question, perhaps, is what one means by knowledge production in the human sciences. I think that the human sciences have primarily been practiced as decoding sciences. That is to say, researchers in the human sciences usually don’t ask ‘What is the human?’ No, they already know what the human is: a social and cultural being, endowed with language. Equipped with this knowledge they then make visible all kinds of things in terms of society and culture. In addition, perhaps, one could argue that the human sciences have established themselves as guardians of the human – that is, they have been practiced in defensive terms. For example, whenever an engineer argues that machines can think and that humans are just another kind of machine, the human sciences react by defending the human against the machine. The most famous example here would maybe be Hubert Dreyfus against Seymour Papert. A similar argument, though, could be made with respect to genetics and genetic reductionism.
Now, if one destabilizes the figure of the human, neither of these two forms of knowledge production can be maintained. I think that this is why many in the human sciences experience the destabilization of the human as an outrageous provocation. If one gets over this provocation one is left with two questions. The first is: what modes of knowledge production become possible through this destabilization of the human? Especially when this destabilization means that the entire ontological setup of the human sciences fails. Can the human sciences entertain, let alone address this question, given that they are the material infrastructure of the figure of the human that fails? Or does one need new venues of research? I often think here of the relation between modern art and the nineteenth-century academy.
VR: That reminds me of Foucault.
TR: Foucault was an anti-humanist –– but he remained uniquely concerned with human reality. I think the stakes here – I say this as an admirer of Foucault – are more radical. So my second question is: what happens to the human? I am acutely interested in maintaining the possibility of the universality of the human after the human. Letting go of the idea seems disastrous. So how can one think things human without relying on a substantive or positive concept of what the human is? My tentative answer is research in the form of exposure: the task is to expose the normative concept of the human in the present, by way of fieldwork, to identify instances that escape the human and break open new spaces of possibility, each time different ones, ones that presumably don’t add up. The goal of this kind of research-as-exposure is not to arrive at some other, better conception of the human, but to render uncertain established ways of thinking the human or of being human and to thereby render the human visible and available as a question.
VR: So if you don’t want to talk about what the human is, I’m wondering if the appropriate question would be about what the human is not.
‘Human microbial ecosystem, artistic representation’ by Rebecca D Harris. Credit: Rebecca D Harris. CC BY
TR: I think such an inversion doesn’t get us very far. I would rather say that I am interested in operating along two lines. One line revolves around the effort to produce ignorance. That is, I conduct research not so much in order to produce knowledge as to produce the uncertainty of knowledge. The other line wonders how one could conduct research under conditions of irreducible ignorance or uncertainty, or how to begin one’s research without relying on universals. A comparative history of this or that always presupposes something stable. As does any social or cultural study. In both cases I am interested in a productive or restless uncertainty –– or second-order ignorance –– not only with respect to the human. In a way, what I am after is the reconstitution of uncertainty, of not knowing, by way of a concept of research that maintains the possibility of truth throughout.
If you were to press me to offer a systematic answer I would say, as a philosophically inclined anthropologist, that I conduct fieldwork/research because I am simultaneously interested in where our concepts of the human come from, in whether there are instances in the here and now that escape these concepts, and in rendering available the instability –– the restlessness –– of the category or the categories of the human, both as an end in itself and as a means to bring the specificity of the present into view. It strikes me as particularly important to note that what I am after is not post-humanism. As far as I can tell most post-humanists hold on to the 18th-century ontology produced by the human but then delete the human from this ontology. What interests me is to break with the whole ontology. Not once and for all but again and again. Nor am I interested in the correction of some error à la Bruno Latour – as if behind the human we can discover some essential truth –– call it Actor Network Theory –– that the moderns have forgotten and that the non-moderns have preserved and that we now all can re-instantiate to save the world.
I am not so much interested in a replacement approach –– what comes after the human? –– as in rendering visible a multiplicity of failures, each one of which opens up onto new spaces of possibility. After all, how Artificial Intelligence derails the human is rather different from how microbiome research derails it, or climate change. These derailments don’t add up to something coherent. As I see it, it is precisely this not-adding-up –– this uncertainty –– that makes freedom possible. Perhaps this form of research is closer to contemporary art than to social science research; that could well be. Anyhow, the department I try to build at the Berggruen Institute revolves around the production of precisely such instances of failure and freedom.
Tobias Rees is Reid Hoffman Professor of Humanities at the New School of Social Research in New York, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles, and Fellow of the Canadian Institute for Advanced Research. His new book, After Ethnos, is published by Duke in October 2018.
Vanessa Rampton is Branco Weiss Fellow at the Chair of Philosophy with Particular Emphasis on Practical Philosophy, ETH Zurich, and at the Institute for Health and Social Policy, McGill University. Her current research is on ideas of progress in contemporary medicine.
In August 2016, the University of Chicago sent a letter to new students that received a great deal of academic and media interest. In the letter John “Jay” Ellison, Dean of Students, stated that the university was committed to “intellectual freedom”, indicating that other concepts referred to – “safe spaces” and “trigger warnings” among them – were antithetical to this notion. The connection between these concepts, as well as the letter itself, was much debated at the time, and the issues raised appear to be the starting point for many of the essays in this book. Are students’ minds really being coddled, or are there valuable things to be learnt from the use of trigger warnings and the debate surrounding them?
Trigger Warnings: History, Theory, Context does not take a clear-cut and dogmatic approach to the topic (as some others have done, most prominently Greg Lukianoff and Jonathan Haidt, who object outright to the idea of trigger warnings). Most authors in this volume adopt a carefully critical view of trigger warnings that also seeks to understand and explore their implications and uses. The book focuses on higher education in North America; the location is only to be expected, perhaps, as this is where the bulk of debate has taken place. A few essays do look beyond higher education to the broader context from which trigger warnings emerged, including a rather Whiggish history of trigger warnings based on retrospective diagnosis of Post-Traumatic Stress Disorder (chapter 1) and a more incisive look at the use of trigger warnings in the treatment of eating disorders since the 1970s (chapter 3).
The volume claims to be interdisciplinary, although contributions largely stem from those working in the arts, humanities and social sciences. This is understandable: these fields have probably been the most affected by calls for trigger warnings, as well as being concerned with the practice of critical thinking and debate (which, according to their detractors, trigger warnings stifle). The inclusion of a number of authors with a background in library and information studies raises an interesting angle for historians about the way collections are labelled and configured. As Emily Knox indicates in the introduction, the American Library Association has long been opposed to the rating of texts, a practice which holds political connotations and has tended to be fairly arbitrary, usually based on the attitudes of a small group of people. Despite voicing this opposition, however, Knox goes on to raise the central tenet that runs throughout this book: while trigger warnings can be used as a form of censorship, teachers and lecturers also have an obligation to consider the welfare of their students.
These two potentially conflicting ideas are reflected in the division of the book into two parts. The first covers the context and theory around trigger warnings; the second moves on to specific case studies, designed to offer some practical guidance for teachers. While Kari Storla does this excellently in her piece on handling traumatic topics in classroom discussion, other case studies are less satisfying, and the first half of the book is ultimately of more interest to the historian, grappling as it does with the controversies raised by trigger warnings and placing them in wider context. Are warnings important for welfare, or damaging to students’ critical thinking? Do they protect or censor? Do they fulfil a genuine need for students or do universities use them to avoid confronting systemic issues around student welfare? Most authors do not resolve these questions – indeed, few come down squarely on one side or the other. This in itself reflects the complexity of the debate. It is, of course, possible in each case cited above for both things to be true, even in the same example.
Take Stephanie Houston Grey’s chapter on the history of warnings around eating disorders. This is one of the most thought-provoking and well-written articles in the book. Grey explores the public health response to eating disorders in the late 1970s, which she argues was one of the first instances in which widespread efforts were made to restrict speech on the grounds of preventing contagion. This “moral panic” resulted in crackdowns on eating-disordered individuals, most prominently online, which stripped basic civil rights from people but was nonetheless unsuccessful in reducing the prevalence of eating disorders. Grey’s thoughtful examination of one specific example that began nearly thirty years before trigger warnings became widespread online is an interesting opportunity for reflection on the emergence of triggers. In the case of eating disorders, labelling images and words as triggering might have begun from concerns about people’s welfare, but ultimately became repressive and silencing of people with eating disorders. Providing “critical thinking tools and skill sets”, Grey suggests, might instead assist people to engage in more productive conversations around eating disorders.
Although the context of public concern about contagion is very different from the modern emphasis on managing individual trauma, there are certain lines of similarity with other pieces in the book. Indeed, an emphasis on critical thinking tools to aid welfare is one of the most practical suggestions that emerges from the volume as a whole. As Storla notes, one of the biggest myths around the use of trigger warnings is the assumption that a blanket warning alone can somehow prevent students from experiencing trauma. Storla’s “trauma-informed pedagogy” instead provides a nuanced framework which incorporates student participation at every turn. Her classes develop their own guidelines, debate the use of warnings at the start of the course and consider the difference between discomfort and trauma. This provides a lesson to students in considering multiple viewpoints (in particular those of the rest of the class). Similarly, in their chapter Kristina Ruiz-Mesa, Julie Matos and Gregory Langner suggest that encouraging students to consider the differing backgrounds of their audiences can be a valuable lesson in public speaking. In both cases, trigger warnings become part of the educational content rather than being in opposition to it.
Trigger warnings can, then, be about opening up conversation as well as closing it down. Several authors, including Jane Gavin-Herbert and Bonnie Washick, suggest that student demands for trigger warnings may not even necessarily be about individual experiences of trauma but based in wider concerns about structural violence and inequality. Taking seriously and discussing these concerns may have more impact than a simplistic warning. Indeed, Storla argues that one of her techniques – the use of “safe words” by which students can bring an end to class discussion without having to give a personal reason for doing so – has never been used by a student in her classroom. However, its existence as part of a set of communal guidelines, she feels, means students are safe and supported and thus able to engage more fully in debates. Paradoxically, having the opportunity to censor discussion might actually promote it.
As a general guide, most of the authors in this volume agree that trigger warnings are an ethical and legal practice that can and should be put in place as part of increasing access to higher education. The people most likely to request trigger warnings are minority groups, who are also at greatest risk of experiencing trauma. The problem, however, comes when these issues are individualised, as neoliberal interpretations of trigger warnings have tended to do. Bonnie Washick’s sympathetic critique of the equal access argument for trigger warnings raises the way in which warnings have led to the expectation that individuals who might be “triggered” are viewed as responsible for managing their own reactions. While trigger warnings might have begun as a form of activism and social protest, they have since been medicalised (through the framework of Post-Traumatic Stress Disorder) and individualised. By taking a critical and contextual approach to trigger warnings, both teachers and students can gain from discussing them.
Trigger Warnings: History, Theory, Context is a valuable contribution to the debate around trigger warnings in higher education today, as well as an interesting exploration of some of the nuances around why and how such a concept has emerged. An edited volume particularly suits the topic, allowing for multiple and varied perspectives. No reader will agree with everything they read here, but then that’s precisely the point. If, collectively, the authors in this book achieve any one thing it is to persuade this reader at least that trigger warnings have the potential to generate more insightful debate and critical thought than they risk preventing.
Sarah Chaney is a Research Fellow at Queen Mary Centre for the History of the Emotions, on the Wellcome Trust funded ‘Living With Feeling’ project. Her current research focuses on the history of compassion in healthcare, from the late nineteenth century to the present day. Her previous research has been in the history of psychiatry, in particular the topic of self-inflicted injury. Her first monograph, Psyche on the Skin: A History of Self-Harm was published by Reaktion in February 2017.
Jennifer Wallis, Investigating the Body in the Victorian Asylum: Doctors, Patients, and Practices (Cham, Switzerland: Palgrave Macmillan, 2017); xvi + 276 pages; 9 b/w illustrations; hardback £20.00; ISBN 978-3-319-56713-6.
by Louise Hide
Skin, muscle, bone, brain, fluid – Jennifer Wallis has given each its own chapter in this exemplary mesh of medical, psychiatric and social history that spans work carried out in the latter decades of the nineteenth century in Yorkshire’s West Riding Pauper Lunatic Asylum. The body – usually the dead body – is at the centre of the book, playing an active role in the construction of knowledge and the evolution of practices and technologies in the physical space of the pathology lab, as well as in the emerging disciplines of the mental sciences, neurology and pathology. Wallis explores how, in the desperate quest to uncover aetiologies and treatments for mental disorders, there was a growing conviction that ‘the truth of any disease lay deep within the fabric of the body’ (Kindle: 3822). General paralysis of the insane (GPI) is central to the book. A manifestation of tertiary syphilis and a common cause of death in male asylum patients, it was one of the few conditions that produced identifiable lesions in the brain, raising hopes that the post-mortem examination could yield new discoveries around the organic origins of other mental diseases. Investigating the Body in the Victorian Asylum is, therefore, not only about how the body of the asylum patient was framed by changing socio-medical theories and practices, but about how it was productive of them too.
Whilst reading this lucidly written monograph, it soon becomes clear that West Riding was no asylum backwater. Its superintendent, James Crichton-Browne, was determined to forge a reputation in scientific research, and West Riding became the first British asylum to appoint its own pathologist in 1872. Wallis has not only marshalled a vast amount of secondary literature, but made a deep and far-reaching foray into the West Riding archives, analysing some 2,000 case records of patients who died there between 1880 and 1900. Drawing on case books, post-mortem reports, administrative records and photographs, Wallis has created a refreshingly original way of conceptualising the asylum patient. Rather than exploring the patient’s role – his role, as it usually was in ‘cases’ of general paralysis – within tangled networks of external social agencies and medical practices, she turns her focus to the inner uncharted terrain of unclaimed corpses. She shows how the autopsy provided different ways of ‘seeing’ as the interior of the body was ‘surfaced’ through a range of new and evolving practices and technologies, such as microscopy and clinical photography. Processes for preserving human tissue and conducting post-mortem examinations were enhanced, as were methods for observing and testing tissue samples, and for recording findings. None of these practices was without an ethical dimension, such as a patient’s right to privacy and anonymity.
Doctors, perhaps, gleaned most from the living as they examined and observed patients on admission and in the wards; pathologists could venture into the deep tissues of the body, which were out of bounds for as long as a patient remained alive. Yet the two states could not be separated quite so neatly and Wallis turns her attention to the growing tensions between pathologists and asylum doctors as both scrambled to plant their disciplinary stake in the ground, navigating boundaries between the living and the dead body. How, I wondered, were practices mirrored at the London County Council pathology lab, which opened at Claybury in 1893 and also investigated various forms of tertiary syphilis, including GPI and tabes dorsalis, as well as alcoholism and tuberculosis? Wallis does touch on other laboratories, but it would be interesting to know a little more about how they associated with each other.
One of the many strengths of the book is the way in which Wallis makes connections between social and cultural mores and the impact of wider political and medical developments. Germ theory was, of course, highly influential. Wallis touches on the ‘pollution’ metaphor but might have expanded on the trope of the ‘syphilitic’ individual as a vector of moral depravity in the western context – an unexpected swerve of narrative into the belief systems of the Nuer jars slightly. Otherwise, Wallis provides a fascinating investigation of the social framing of the male body with GPI, explaining how atrophied muscle and degenerating organs might be interpreted as an assault on masculinity in a period of high industrialisation. Soft bones could be equated to a loss of virility and femininity; broken bones forced asylums to ask whether they might be due to the actions of brutal attendants, rough methods of restraint, or of physical degeneration in the patient.
Investigating the Body in the Victorian Asylum provides a meticulously researched and thoroughly readable social history – accessible to all – of an important development in the mental sciences in the nineteenth century, centred on the evolving practices of post-mortem examination. I particularly like the way in which Wallis writes herself, her research process and her thinking into the book. Her respectful treatment not only of the asylum patients but of the medical and nursing staff who cared for and treated them is threaded through from beginning to end. One might not expect to be gripped by descriptions of ‘fatty muscles’, ‘boggy brains’ and ‘flabby livers’, but Wallis reveals a fascinating story that is full of originality and tells us as much about nineteenth-century medical practice as about the patient himself.
Louise Hide is a Wellcome Trust Fellow in Medical Humanities and based in the Department of History, Classics and Archaeology at Birkbeck, University of London. Her research project is titled ‘Cultures of Harm in Residential Institutions for Long-term Adult Care, Britain 1945-1980s’. Her monograph Gender and Class in English Asylums, 1890-1914 was published in 2014.
In the current issue of HHS, Isabel Gabel, from the University of Chicago, analyses the links between evolutionary thought and the philosophy of history in France – showing how, in the work of Raymond Aron in particular, a moment of epistemic crisis in evolutionary theory was crucial to the formation of his thought. Here, Isabel speaks to Chris Renwick about these unexpected links between evolutionary biology and the philosophy of history. The full article is available here.
Chris Renwick (CR): Isabel, we should start with an obvious question: Raymond Aron, the main focus of your article, is a thinker most readers of History of the Human Sciences will be familiar with. But few – and I count myself among them – will have put Aron in the context you have. What led you to connect Aron and evolutionary biology?
Isabel Gabel (IG): Yes, this was a real revelation for me too. I knew Aron as a sociologist, public intellectual, and Cold War liberal, but had never seen his early interest in biology mentioned anywhere. It was actually in the archives of Georges Canguilhem, at the CAPHÉS in Paris, that I stumbled upon a reference to Aron and Mendelian genetics. In 1988 there was a colloquium organized in Aron’s honor, and Canguilhem’s remarks on Aron’s earliest years, and the problem of the philosophy of history in the 1930s, had been collected and published along with several others in a small volume. At the time, Canguilhem felt that not enough importance had been given to the fact that his late friend had abandoned a research project on Mendelian biology, as he put it. This totally surprised and, needless to say, delighted me. I quickly found a copy of Introduction to the Philosophy of History, and began reading.
As someone who works in both history of science and intellectual history, I frame my research questions to address both fields. Aron’s development as a thinker is really a perfect illustration of how these two fields converge, because his encounter with biology can be so precisely localized in time and space. It wasn’t just that he made the obvious connection between theories of evolution and philosophical approaches to history. Rather, it was the very specific moment in which he happened to encounter evolutionary theory, and that this happened in a very French context, which so profoundly shaped his thought.
CR: An important part of your article involves outlining the context of French debates about evolution, which provides the backdrop for Aron’s early intellectual development. As a historian of evolutionary thought myself, I found this part fascinating and something I had only really encountered periodically in my research – Naomi Beck’s work on Herbert Spencer’s reception in France is one example of where I have read about these kinds of issues before. The French context seems strikingly different from the Anglophone one. What do you think the Francophone context brings to our discussions of both the history of evolutionary thought and the human sciences related to it?
IG: The French context is absolutely central to this story. Everything from the specifics of the French education system, to the cultural politics of Darwinism in France, to the state of the French left in the twenties and thirties played a role in how and why Aron brought evolutionary theory and the philosophy of history together. First, because debates about evolutionary mechanisms were, if not insulated from Anglophone science, at least somewhat resistant to the incursion of external concepts, the epistemic crisis of neo-Lamarckism could only have happened in France. Also, while it’s important to note that Aron’s self-understanding was very post-Henri Bergson, there is no denying Bergson’s influence on early-twentieth-century French biology. All of which is to say that mid-century France is a fascinating case for understanding the feedback loop between biology and philosophy.
Moreover, it’s the very specificity of the French case that makes it so useful for thinking through methodological questions such as the one you raise about the shared history of evolutionary thought and the human sciences. In recent years, there has been renewed interest in bringing science and humanities/social science into dialogue with one another, an impulse that historians of science should of course welcome. Part of what the story of Aron and the philosophy of history in mid-century France can teach us is how contingent these influences can be. In other words, as evolutionary theory evolves over time, so too do the ways we interpret its meaning for the human past. In France in the twenties and thirties, it was the limits of science that were most instructive to Aron. French biologists couldn’t quite bridge the gap between observations and experiments in the present and the theory of evolution they believed explained past events. Objectivity became, for Aron, partly about acknowledging the limits of both positivism and philosophical idealism, i.e. a way of negotiating the relationship between the limits of observation and the limits of theory.
The French context therefore instructs us not to buy in too quickly to the idea that science offers facts and humanities subsequently layer on interpretation. This picture does a disservice to both the science and the humanities. What becomes visible in the case of Aron and French evolutionary theory is that biology and philosophy were encountering parallel epistemic crises, and therefore that neither one could singlehandedly save or authorize the other.
CR: Another issue that I thought was important in connection with Raymond Aron is liberalism. As you explain in your article, most people think of liberalism when they think of Aron. However, we don’t necessarily think of liberalism when we think about evolutionary biology. Liberalism and evolutionary biology have such a fascinating and entangled history. Why do you think we are now so surprised to find that people like Aron were so interested in it?
IG: Those who know Aron by reputation as a Cold War liberal may be surprised, because the conversations he helped shape were about ideology and international order. But I don’t know that everyone will be surprised that Aron was so interested in biology, so much as they might be unsettled. We associate any contact between political beliefs and evolutionary theory with deeply illiberal commitments, with racism, eugenics, and just plain old bad history. And while it’s true that we should approach attempts to import scientific data into humanist frameworks with caution, we also shouldn’t grant science more explanatory power than it can hold. In recent history, the liberal position has been a vigorous critique of biological determinism, but as Stephen Jay Gould and others repeatedly teach us, the point is not simply that society or history is autonomous from the biological, but that biology itself is not as determinist or totalizing as we sometimes understand it to be. That’s why reading the work of scientists themselves is so important, because it brings out the provisional, ambiguous, and contentious nature of their endeavors. It shows that they aren’t stripping the world of contingency, but rather prodding at and making visible new contingencies.
CR: The history you uncover in your article is incredibly revealing in what it tells us about the intellectual origins of not just Aron’s thought but the milieu out of which many people like him emerged. Do you think there is anything in that history that is of particular relevance or importance for the present?
IG: Yes, I do think there are really instructive parallels with the present. Aron came of age in a time of enormous political upheaval and two catastrophic world wars. Political and epistemological upheaval go together, and so this generation of French thinkers can speak to our own anxieties about the eclipse of humanities and social sciences by STEM fields. One way to think about this history’s relevance would be to see Aron as a cautionary tale – the science changes quite quickly as the Modern Evolutionary Synthesis takes shape, DNA explodes as a new way to understand life over time, and antihumanism gains cultural strength in France. So it’s not clear that Aron’s study of biology really got him where he wanted to go. But I actually think this picture is a little too cynical, because it ignores what’s so interesting about Aron’s philosophy to begin with. He understood that biology and philosophy were facing some of the same questions, such as how to understand the past from the perspective of the present, and whether laws that explained the present could be known to have operated the same way in the past.
In this way, we ought to pay attention to how STEM fields and the humanities are speaking to some of the same questions. For example there’s been a lot of energy around the concept of the Anthropocene recently, and it’s a perfect opportunity for historians to contribute to a conversation about something that is both a scientific claim—that humans have become a geological agent—and a historical, political, and moral one. We can offer a longer-term understanding of how history and natural history have spoken to one another in the past, how the human has been constructed through philosophy, human sciences, and natural sciences, and how thinking about the end of civilization is saturated with political imagination. Deborah Coen’s work on history of scaling is a great example, as is Nasser Zakariya’s recent book, A Final Story.
CR: I was very interested in what you had to say in your article about bridging the gap between intellectual history and history of science, which is an important issue for an interdisciplinary journal like HHS. The material or practice turn in history of science has been important in creating this division, as you explain in your conclusion. This turn needn’t rule out the human, of course, and it hasn’t, as work on subjects like the body shows. But it’s clear, as you explain, that many historians of science see intellectual history as something that needn’t concern them. Why do you think that belief is misplaced, and what do you think we would all gain by putting the two together again?
IG: I hope that the story I’ve told in my article illustrates one immediate benefit of overcoming the longstanding division between intellectual history and history of science. Namely, that there is historical work that just hasn’t been done as a result. Aron’s early interest in evolutionary theory, and its effect on his philosophy of history, is not an isolated case. There is enormous potential in fields like the history of knowledge, history of the humanities, as well as in fields like environmental humanities, to bring the tools of intellectual history and history of science to bear on any number of subjects.
But also within intellectual history, the elision of science has meant flawed or at least partial understandings of figures as enormously influential as Aron. At the same time, within the history of science the material turn that you mention led to a kind of reflexive suspicion of philosophy, which John Tresch has written about. Tresch sees intellectual history as offering history of science a broader scale – a way to get beyond the case study. I think this is part of the story, but that on an even more basic level the history of science will be better told if its methodological framework can accommodate the conceptual feedback that exists between science and philosophy, in addition to the feedback between science and society, institutions, and technology. One of the most exciting things about reading the work of French biologists is discovering the degree to which philosophical questions preoccupied them not as extra-scientific or ex post facto interpretations, but as urgent problems to which their research was addressed.
Isabel Gabel is Postdoctoral Fellow at the Committee on Conceptual and Historical Studies of Science at the University of Chicago. Her current book manuscript, Biology and the Historical Imagination: Science and Humanism in Twentieth-Century France, provides a genealogy of the relationship between developments in the fields of evolutionary theory, genetics, and embryology, and the emergence of structuralism and posthumanism in France.
Chris Renwick is Senior Lecturer in Modern History at the University of York, and an editor of History of the Human Sciences. His most recent book is Bread for All (Allen Lane).
Watching the current University and College Union (UCU) picket lines from afar – I’m a postdoctoral fellow based in Germany – I was trying to think if I’d ever come across any psychological writings on striking, and, more specifically on picket lines. Of course, as Chris Millard has pointed out already in this series, strikes are not primarily expressions of feeling; they are withdrawals of labour. Indeed, references to strikers’ ‘deep’ or ‘strong’ feelings in letters by university Vice-Chancellors seem to downplay the material demands being made by striking workers. I was nonetheless interested in finding out whether theorisations of the psychological experience of picket lines – as specific spatial, temporal and interpersonal phenomena – already exist.
Perhaps unsurprisingly, a search of the multiple psychoanalytic journals included in the Psychoanalytic Electronic Publishing database for ‘picket line’ yielded just fourteen results. I looked at the two earliest examples that appeared on this list and in both cases the picket line appeared as a fraught symbol for individual bourgeois analysands. In a discussion of compulsive hand-washing from 1938 a female patient writes a short story whose protagonist is based on her hotel maid’s participation in an elevator operator strike. The patient took up the workers’ cause, organising meetings in support of their actions. In this period of political involvement, the analyst reports, the patient’s hand-washing stopped. As soon as her involvement with the strike ceased (interpreted by the analyst as a form of sublimation), her ‘compulsive’ behaviour resumed.[ref]George S. Goldman, ‘A Case of Compulsive Handwashing’, Psychoanalytic Quarterly, 7 (1938), 96-121.[/ref] In an article from 1943 a ‘frigid hysteric’ patient dreams of a bus trip being cancelled due to a transport strike. In the subsequent interpretation of the dream, which includes a long cutting from a newspaper article on a strike of charwomen the patient had read, the analyst interprets her reaction to the story as relating to her ‘desperately struggling for male status’; she did not ordinarily support strikes on political grounds, but did so in this case due to the gender of the workers.[ref]Edmund Bergler, ‘A Third Function of the “Day Residue” in Dreams’, Psychoanalytic Quarterly, 12 (1943), 353-370.[/ref] In neither example are the patients themselves involved directly in the strikes, and neither has first-hand experience of picket lines; the strikes’ psychic significance is tied to existing individual neuroses. Of course, it might be that non-psychoanalytic theories, with less sinister assumptions about group psychology, might be a better place to start for approaching the question at hand.
But instead I found myself thinking about the possibility of approaching the question from different fields altogether.
While researching a recent article reflecting on commemorations of the 1917 October Revolution I found myself reading about early twentieth-century mass spectacles, left-wing pageants and revolutionary dance troupes in the Soviet Union and America.[ref]https://www.radicalphilosophy.com/article/revolutionary-commemoration[/ref] In many of these cases the relationship between ‘actual’ historical events and ‘fictional’ theatrical reenactments proved to be blurry. As the title of a 1933 piece Edith Segal choreographed with the Needle Trades Workers Dance Group in New York indicates – Practice for the Picket Line – workers in union- or party-affiliated dance groups would create scenarios drawing on their own experiences, which would in turn function as rehearsals for future political action.[ref]Ellen Graff, Stepping Left: Dance and Politics in New York City, 1928–1942 (Durham, NC: Duke University Press, 1999), p. 43.[/ref] But as a historian of the ‘psy’ disciplines with an interest in affective histories of the left, I was particularly intrigued by how the psychological function of such performances was articulated by their creators, participants and audiences. Perhaps these examples, though remote from the ‘psy’ disciplines, could provide material for thinking through the psychic dimension of the collective experience of picketing.
In January 1913 silk weavers and dyers in Paterson, New Jersey went on strike after four workers were fired for complaining about the introduction of a new four-loom technology that required a less skilled workforce. With the strike still on-going but little coverage of it in the mainstream press, activists and intellectuals in New York collaborated with the striking workers to produce an elaborate pageant in Madison Square Garden on June 7, 1913, sponsored by the Industrial Workers of the World (IWW). The pageant was intended to publicise the strike and raise money for the strike fund, which was urgently needed as the striking workers and their families were at risk of starvation. But the pageant’s purpose was not only financial, propagandistic and educational; it was also emotional. The pageant saw 1,029 strikers reenacting the dramatic events of the picket lines, punctuated by familiar songs from the labour movement in which the audience was invited to join.[ref]For the programme of the pageant and other associated primary documents, see: ‘Paterson Strike Pageant’, The Drama Review: TDR, 15, 3 (1971), 60-71.[/ref] The dramatic, fast-paced temporality of the staged strike differed markedly from the drawn-out nature of the real one, but the worker-performers found the rehearsal process gave them a chance to reflect on and process their experiences. Almost 15,000 people watched the performance, which was then described in detail in New York newspapers. The sympathetic leftist publication Solidarity claimed that the performance ‘seized the imagination’, while the hostile New York Times accused it of ‘stimulating mad passion against law and order’.[ref]These reviews are cited in Steve Golin, The Fragile Bridge: Paterson Silk Strike, 1913 (Philadelphia: Temple University Press, 1988), p. 166, p. 169.[/ref] Although these accounts differed in their political assessment of the production, they both emphasised its psychological power.
Although the strike was simulated, the passions the reenactment stimulated were real.
The pageant failed to raise significant amounts of money and many subsequently declared it a failure that had distracted workers and taken them away from the real pickets outside the mill.[ref]See Elizabeth Gurley Flynn, ‘The Truth About the Paterson Strike’, Rebel Voices: An IWW Anthology, ed. Joyce Kornbluh (Ann Arbor: University of Michigan Press, 1965), pp. 214-226.[/ref] Indeed, the organisers produced the spectacle at a loss. The Paterson silk strike itself was soon defeated. Workers began returning to the factory in July and many of their demands were never met. Yet some discussions both by contemporaries and historians insist that the pageant succeeded not only as an aesthetic innovation which inspired future artistic endeavours – John Reed, one of the New York intellectuals who instigated the production, would soon leave for Europe; his book Ten Days That Shook the World would become a defining account of the October Revolution, inspiring Sergei Eisenstein’s October in turn – but also as a cognitive and affective interpersonal experience which similarly outlived the performance itself. Though sufficient funds were not raised, consciousnesses were raised (to use the vocabulary of the pageant’s participants and chroniclers).[ref]See, for example, Leslie Fishbein, ‘The Paterson Pageant (1913): The Birth of Docudrama as a Weapon in the Class Struggle’, New York History, 72, 2 (1991), 197-233; Linda Nochlin, ‘The Paterson Strike Pageant of 1913’, Art in America, 62 (1974), 64-68. In her discussion of Segal’s performance Ellen Graff writes that ‘Radicals hoped that mock demonstrations… would prepare workers for actual confrontations as well as engage their sympathies and raise political consciousness.’ Stepping Left, p. 43.[/ref]
The terms ‘class consciousness’ and ‘political consciousness’ in reflections on the performance function as psychological concepts despite rarely having been explicitly understood as such, and as concepts which seem to have gone largely un-thematised within the ‘psy’ disciplines. One starting point for trying to think more about the psychology of the picket line would be to think more carefully about how these terms were used in this context, how they allude to political concepts elaborated by Lenin, Rosa Luxemburg, György Lukács and others, but also depart from or complicate them. I’d be interested in thinking about how an emphasis on gaining a broad intellectual understanding of a political situation was combined with an insistence on the importance of immediate emotional experiences, how emotional experience can allow individuals to situate themselves within a collective, and so on.
Perhaps more central, however, would be a consideration of the function of re-enactment as a form of reflection upon political action (and even as a form of political action in its own right) enabling strikers and their supporters to better understand and communicate their struggle. This might open up ways of approaching the (necessarily very different) forms of reflection, representation and dissemination that attend current disputes. Of course, there is often a kind of theatricality to the picket line, as has been evident during the current UCU strikes – which can function to communicate demands, alleviate the tedium of standing around in the cold all day, attract more people to the picket, etc. – but the example of the pageant brings into focus a slightly different set of questions about how striking workers represent and communicate their struggles to themselves and others to forge and sustain the solidarity necessary to resist capitulation. This seems particularly urgent in a context in which collective memories of labour organising can be hard to locate.
The current strikes have not succeeded yet, but UCU branches’ rejection of the deal proposed on Monday (March 12th) indicates that something has shifted during this dispute, which may mark the beginning of a broader resistance to the wider marketization of higher education in the UK. University vice-chancellors’ insistence on invoking the ‘deep’ or ‘strong’ feelings of their striking employees can be read as attempts to reduce picket lines to sites of emotional fracas, or coordinated temper tantrums, strangely divorced from the collective withdrawal of labour. But attending to the psychological dimensions of the picket line could potentially do something very different, offering space for acknowledging the anxiety, frustration, boredom and anger associated with striking, while also allowing us to explore how joyful interpersonal collective experiences can participate in building and sustaining political movements.
Hannah Proctor is a Fellow of the Institute for Cultural Inquiry, Berlin.
Image attribution: The accompanying image, ‘Bus load of children of Paterson, N.J., strikers (silk workers) in May Day parade – New York City] [graphic]’ has been sourced from the online catalogue of the Library of Congress. There are no known restrictions on reproduction. The original can be viewed here: http://www.loc.gov/pictures/resource/cph.3b00599/
Strikes stir the emotions. The solidarity of picketing, the anxiety of students missing classes, the anger of those who feel wronged enough to withdraw their labour. There are doubtless strong feelings behind the current University and College Union (UCU) industrial action to defend ‘defined benefit’ pensions. These varied feelings have been mobilized in a number of ways over the past few weeks, and this short post (building on some tweets here and here) is an attempt to analyse a bit further the emotional politics of striking.
There are a number of distinct parts (both core and peripheral) of the strike action at play at any one time: the picket line, supportive demonstrations and rallies, teach-outs, the implied dissent, and the withdrawal of labour itself. A key aspect of the emotional politics of this strike can be seen in the confusion (both deliberate and unwitting) between a number of these elements. The Vice-Chancellor of The University of Sheffield, Professor Sir Keith Burnett, sent an email to staff where those on strike were characterized as ‘communicat[ing] the strength of their feelings through strike action’ (you can see an extract from that email in this tweet from Sheffield historian, Simon Stevens). This was not a hostile characterization, but it was a serious misunderstanding. It crystallised some of the issues I had been mulling over during the strike and on the picket.
As others have pointed out in response to that communication, the point of a strike is not to express feeling; it is to disrupt the operations of the employer to force them back to the negotiating table. As Simon Stevens himself put it on Twitter: ‘A strike is an effort to rebalance the material interests shaping the employer’s behaviour by shutting down production and/or operations.’ But this material intervention seems lost, forgotten or at the very least undersold in the current dispute. The idea (often tacit) that ‘so long as you don’t cross the picket, you are not strike-breaking’ is at issue here. If you do work for the university at home, or in a coffee shop, or anywhere else on strike days, you are not striking. If you sit at home on a strike day and edit an article to submit to the Research Excellence Framework (REF), you are not striking. So far, so orthodox.
The opposite side to this misapprehension is where emotions come in. It is related to the fact that universities have increasingly been seen in recent years as battlegrounds for free speech. The most recent eruptions are over the Prevent agenda and no-platforming; universities have of course for many years been associated with radical protest and demonstrations. These aspects, the expression of dissent, opinion and protest – often couched in terms of articulation of feelings – have become conflated with strike action. Picket lines superficially have many of the material trappings of protests: banners, placards, megaphones and chanting. But their object is quite distinct from that of a protest. They exist to demarcate a strike area, to put the strike into a spatial idiom. Even if someone is ‘only’ crossing a picket line for ‘one meeting’ they risk signaling that they do not support the strike, regardless of their actual intentions.
We appear to be in a period of flux in how we think about picket lines, and there is real ambiguity about members of other unions, especially if those unions do not support (or are prevented from supporting) the picketing taking place. However, it is perfectly possible to (at least partially) mitigate the crossing of a picket – with stories across universities of people who were working but brought hot drinks and snacks out to picketers. I am much less sympathetic to people in UCU, often in senior, permanent positions, who choose not to strike because they don’t agree with the particular issue being foregrounded, or the particular tactics employed. If you choose to freelance when asked for collective action, I think you’ve got some hard thinking to do about solidarity. Saying ‘I don’t agree with this or that’ about the action mistakes a collective agreement to withdraw labour (overwhelmingly voted for by members) for individuals having a protest. By framing the action in this way, protest is foregrounded, and the spatial and material disruption of the strike disappears. Sir Keith met with Sheffield UCU representatives on Tuesday – and was snapped holding an ‘Official Picket’ sign. However, he still committed this error, inviting UCU delegates into his office, which was across the picket line. They refused to cross, but met him later in a neutral venue.
So picket lines are not protests, and they are not about expressions of feelings: they are a spatial manifestation of the collective withdrawal of labour. Strikes are also material interventions that are undermined by all work, even when it doesn’t cross the picket line. Well, so what? As I see it, this brings into focus the demand to reschedule teaching, which many institutions had previously backed with a threat to deduct up to 100% of pay for each day teaching was not rescheduled. (Most institutions have backed down under public pressure on this particular point. A list of institutions not understood to have backed down on this point at the time of writing can be seen here). However, the fact that the demand was made at all is important, in both material and emotional terms. According to the view that mistakes strike action for an expression of feeling, once the feeling is expressed, there is no reason why the teaching can’t be done. It can be rescheduled (the logistical impossibility notwithstanding), because making the point was the point, rather than the withdrawal of labour. In other words: the supportive demonstrations, the protest, the signs, the placards have obscured the core of the strike, i.e. the withdrawal of labour.
But materially this matters too. Pay has already been, or will be, docked for the work not done, so clearly the employers are engaging on this material, financial level. This relation is in turn connected to the financial interests of students: they are paying their fees, so why shouldn’t they get the promised teaching? However, the employers do not follow through on this view of the financial politics of the strike; they do not, for example, propose to repay people for making up the work. They also (as noted) threatened to dock pay again for the refusal to reschedule. This only makes sense if the strike is transformed into a free-floating expression of dissent, of emotion, of feeling. But that logic doesn’t tally with pay-docking.
If the strike is an expression of feeling, then employers should not dock pay on strike days, and the demand to reschedule becomes understandable. (The corresponding logic of holding a 14-day demonstration outside places of work is another matter.) On the other hand, if the docking of pay is legitimate as a response to the withdrawal of labour (and the withdrawal is put in spatial terms by a picket line), then the demand to make up that labour is arguably a challenge to the right to strike after the fact. Emotions and feelings matter hugely in this action. But they are not the point of the strike.
Chris Millard is Lecturer in the History of Medicine and Medical Humanities at the University of Sheffield, and book reviews editor of History of the Human Sciences.
This article represents the views of the author only, and is not written on behalf of History of The Human Sciences, its editors, or editorial board.
The featured image, ‘Penn[sylvania] on The Picket Line — 1917’ comes from ‘Women of Protest: Photographs from the Records of the National Woman’s Party,’ Manuscript Division, Library of Congress, Washington, D.C. There are no known copyright restrictions with this image. The original can be viewed at this link: https://www.loc.gov/resource/mnwp.160022
Human geography – a discipline in the hinterland of the human sciences – is preoccupied with praxis. Analyses of the relationship between what the geographer writes, what the geographer says, and what the geographer does have animated many of the discipline’s most vigorous epistemological and political battles. It is unsurprising, then, that the University and College Union (UCU) strike over Universities Superannuation Scheme (USS) pensions has brought questions of praxis into fraught focus. Indeed, in Marxist and other radical geographies – whose histories are generally traced back to the 1960s – the strike has been a privileged site of analytical and activist attention. But tensions today have not been solely about which geographers are – and are not – on the picket lines. Broader issues over where the discipline of geography is made, and who comes to represent that discipline, are at stake. On the picket lines and on social media, geography’s present and past – both in material and fantasmatic form – are being worked up and worked through.
On the first day of the strike, the Vice Chancellor (VC) of the University of Sussex, Adam Tickell, issued a statement that made it clear that he did not believe that there was an ‘affordable proposal’ for pensions that would satisfy both USS and the Pensions Regulator. As the hours passed, Tickell appeared uncompromising in the face of calls for him to join other VCs who had called for a return to negotiations. An interview with him conducted shortly before the strikes was re-circulated – where he was quoted as saying ‘The younger me may have taken part in the strikes, I don’t know about the current me.’
So far, perhaps so predictable. But Tickell’s words about strike participation could not but carry particular weight given that they had been uttered not only by an economic geographer, but by one of the most prominent theorists of neo-liberalism. Indeed, Tickell, in the 1990s and 2000s, had published – often in articles co-authored with Jamie Peck – what became some of the most widely read, and remain some of the most widely taught, economic-geographical anatomizations of post-Fordism, neoliberalism, and global finance. (You can see a list of Tickell’s publications here.) On day 2 of the strike, I addressed Adam Tickell on Twitter lamenting how ‘my younger (geography undergrad & grad) self would not have wanted to imagine that I would be reading your work, years on, to help in the fight against what you are now upholding.’ (These perturbances are as much about disciplinary memory as about a discipline’s moral rectitude.) As the days of the strike passed, anger amongst geographers against the position adopted by Tickell grew, to the point where the lustre of the esteemed author was at risk of being tarnished by the apparent intransigence of the university head. By day 4 of the strike, an anonymous parody Twitter account for ‘Adam Tickle. VC.’ was up and running; it was tweeting about the strike and about the disparity between the alleged ‘early’ and ‘late’ Tickell.
Laura Gill at the University of Sussex picket line holds a placard reading ‘SLAY THE NEOLIBERAL BEAST,’ a quotation from Adam Tickell. From a tweet by Benjamin Fowler, and used with both his and Laura Gill’s approval. Original tweet available at: https://twitter.com/B_B_Fowler/status/968481178444541952
Many human sciences have wrestled with how best to bring into focus the object that demands analysis (in this case, the current crisis within universities manifest through the USS strike) – debating which frameworks best allow us to understand that object, as well as the role of those variously positioned in relation to that object. In this sense, Tickell has become a useful figure. Through him, many more general issues – that are not actually about one, or even several, individuals, and that relate to the production of academic knowledge and the organization of today’s universities – can be debated and contested. Here those debates centre on the extent to which one university manager’s earlier publications on neoliberalism could and should be used to understand the current crisis in toto, as well as on the extent to which the existence of those same writings should give added weight to the moral opprobrium directed at that same manager’s current stance. There are two separate issues here. One might, with Barnett, think that neoliberalism ‘was and is a crap concept.’ In this case, one might argue that Tickell’s earlier writings – and his formulation of concepts therein – don’t much help us in understanding, let alone combating, what is unfolding in universities today. Our energies would be better spent sourcing better writings from the archives and activism of geography – as well as from other social sciences and social movements. But that does not imply that the disruption provoked by the inferred disparity between ‘early’ and ‘late’ Tickell is misplaced. If the former concern is largely epistemological, the latter is as much ethico-political as epistemological.
Geographers Derek McCormack and James Palmer hold placards in the strikes quoting from Adam Tickell’s research papers. From a tweet by Tina Fawcett, and used with approval from her as well as from Derek McCormack and James Palmer. Original tweet available at: https://twitter.com/fawcett_tina/status/968777039384928262
Here we have a scene in which the history of geography, and the politics of that history, is undergoing disturbance. (The Adam Tickle Twitter parody account explicitly reworks the discipline’s historiography through its satiric phrase ‘formerly an economic geographer of note’.) And below the contretemps over Tickell, something else pulses in the discipline’s corpus – something I do not think has been worked through. That is geography’s collective relationship to the long, and continuing, career of Nigel Thrift. Thrift is another prominent geographer and social theorist who was a highly visible and, in the words of student-facing website The Tab, ‘divisive’ VC at the University of Warwick. In the course of his tenure there, the institution – as Times Higher Education put it – ‘experienced a number of controversies.’ In relation to Thrift, there is obvious scope to reflect on the relationship between his earlier work on left politics and his later career as a university manager. And there have been, online, some serious, critical reflections on this. But in the standard outlets for academic production, such as journals, there has been – as far as I know – very little substantive discussion. This is a noticeable – and meaningful – lacuna.
But I want to return to the affective and political disturbance generated by the stance taken by Adam Tickell. And to one reason why the apparent disparity between the so-called ‘early’ and ‘late’ Tickell seems to have been experienced by many – including me – as peculiarly wounding. To my mind, we should not uniformly expect or demand that thinkers and writers be free of contradiction. I recall, here, the opening of the obituary of one of geography’s most prominent radical theorists and activists, Neil Smith, which drew on words spoken by the radical geographer David Harvey (Smith’s doctoral supervisor as well as his colleague at CUNY) at Smith’s memorial service: ‘Neil Smith was the perfect practicing Marxist – completely defined by his contradictions’. (Such inconsistencies did not shake Smith’s steadfast commitment to radical politics.)
Contradiction in and of itself is not the problem. Then what is? Let’s look at how the passing of time is staged. Tickell said that while his ‘younger me’ may have taken part in the strikes, he was not sure about his older, contemporary self. Such a sentence resonates with a powerful discourse in which left politics is positioned as a childish practice, one that might well need to be given up as adulthood ensues. (Recall Saint Paul’s exhortation to ‘put away childish things.’) And this is not unconnected with the rhetorical campaign that Universities UK has been waging in an attempt to persuade others of the pragmatism, reasonableness and maturity of their assessment that there is no clear option around pensions other than the one they have proposed.
That the discipline of geography has produced a number of today’s UK Vice Chancellors (including Paul Boyle at the University of Leicester, Paul Curran at City University of London and Judith Petts at Plymouth University) – as well as the current UK Conservative Prime Minister – makes it urgent for many of us on the picket lines to demonstrate that geography as a discipline and as a political project is not exclusively held by or in those figures. The figure who might regard strikes as childish things needs to be substituted; another articulation of the social world, and of the geographer’s role in making it, needs to take their place. Hence geographers from UCL carrying a placard during the strikes announcing that ‘Not all geographers are neo-liberal vice-chancellors.’ Or the social and economic geographer, Alison Stenning, using the hashtag #notallgeographers, tweeting that, in spite of some ‘ignominious attention [that] certain geographers are getting’ geographers had nonetheless ‘been pretty impressive on the picket lines & the Twitter frontline.’
But I want to conclude with the outlines of a psychosocial argument, one that dismantles the apparent disjunction between the early and the late – or the gap that appears, as one Twitter user put it, within ‘the radical academic turned hard-line conservative’. Beneath the concern that many of us geographers have for the stances taken by prominent individuals within the discipline, perhaps lies a deeper wound that has not substantially been acknowledged or worked through. And that is the possibility that the very criticality of much of what passed for ‘critical geography,’ in the 1990s and beyond, precisely constituted the register of the successful and upward-moving academic. That criticality was part and parcel of adhering to and advancing a certain kind of theoretically-smart ‘knowledge’ that was required as evidence that would help one advance – even to the level of VC – within a professionalized space. Being critical in a particular way in the 1990s was, indeed, one of the pathways to advancement. And many of those ‘critical’ publications were at the heart of, rather than in conflict with, the current remaking of the university.
Rather than the adult putting away childish things, or the late eclipsing the early, what if the child made the adult? What if the early led to – was continuous with – the late, rather than being disavowed by it? If this were the case, then it would put many of us – and I include myself explicitly, here – in an uncomfortable position. For let us acknowledge the affective payoff that can accompany lamenting the eclipse of the early by the late: in addition to anger, it is possible to feel secure in one’s conviction that one has now cast out the late as the politically compromised. The radical credentials of a good geography are safe. By contrast, a situation in which there is no easy division between the early and late, the putatively radical and the compromised, is much more affectively and politically tricky to navigate. And this leads to some difficult questions that I have been pondering – on and off the pickets – over these last few days.
First, how do those of us inside as well as outside geography tell the history of critical geography in and beyond the 1990s? This is certainly important epistemologically – it’s part of the history of the human sciences that deserves greater attention than it has currently received. But it’s also central to how we understand what has been happening to the university. And this should help us think through how we might best use the strike in which we are currently involved to challenge what we see as most pernicious about these recent transformations.
Second, where and how is geography made? Where does it do its work? While there has been some interest in the apparent abundance of geographers who have become VCs, I don’t think we (those of us in and near geography and the history of the human sciences) have remotely got to grips with how to account for this. If there is that abundance in comparison with other disciplines, how does that reroute our accounts of where and how geography as an epistemological formation wields power? The tight relationship between PPE (the University of Oxford’s degree in philosophy, politics and economics) and the UK’s twentieth-century elite is a topic of frequent discussion. Beyond Neil Smith’s account of Isaiah Bowman, where are the historically and sociologically astute analyses of hard and soft geographical power?
Third, how do we widen the circles for forms of critical praxis that are not beholden to discourse and practices of promotion and managerial success in academia? What does that mean for those of us making geographies on and off the picket lines today? The interventions of black studies and anti-colonial studies, in particular, provide numerous routes through which to envisage – and put into practice – the reshaping of geography and of the university.
And there is one final note in relation to my previous point. It would be too easy to construe the historical tale of geography’s travails as a white boys’ story. Many of the protagonists in this post – those who have wielded power, and those configured as radicals who have contested it – do indeed fall within this category. But there are also many, ongoing attempts on the picket lines and on social media to disrupt that historical account, and to disrupt the future paths that geography and the university might take. As I finish this post, the geographer Gail Davies, for example, is unearthing the complex role that management consultants have played in the USS valuation and in the discursive shift that university senior managements have made towards ‘flexible pensions’. There is perhaps more work to be done along these lines before we can, indeed, comfort ourselves with the thought: #notallgeographers
I am profoundly indebted to Stan (Constantina) Papoulias in the writing of this blog post. They clarified for me much of what was most interesting in the figuring of the early and the late – in particular in relation to how a certain kind of criticality went hand in glove with the late twentieth-century transformations of the academy. Our discussions have taken place as we both take strike action in our respective universities.
Felicity Callard is Professor of Social Research in the Department of Psychosocial Studies at Birkbeck, University of London, Director of the Birkbeck Institute of Social Research and Editor-in Chief of History of the Human Sciences.
This article represents the views of the author only, and is not written on behalf of History of The Human Sciences, its editors, or editorial board.
Six days into the current University and College Union (UCU) strike against pension cuts, Universities UK (UUK), the representative body for British higher education management, launched a series of tweets and videos in support of University Mental Health Day. In a move that is now pretty familiar, the presentations shifted attention from a toxic environment in which staff and students now experience unprecedented levels of mental distress, to a series of tips for self-care – joining a club, eating well, pursuing a hobby – in which much of the responsibility for well-being is placed back upon the shoulders of the individual sufferer. As the UUK Mental Health Policy Officer advised in a Twitter video, ‘Don’t be afraid to take time for yourself.’
I guess to many of the viewers, this advice must have seemed spectacularly mistimed. At the precise moment that the UUK was outlining its commitment to ending anxiety and depression in higher education, the wider organisation was working to significantly change pension conditions, undermining the secure livelihood once promised to university staff. It would be foolish, however, to dismiss the advice out of hand. The idea of ‘making time for oneself’ has been a central part of the labour struggle for the last three centuries. As E. P. Thompson argued many years ago, once employers had hammered into modern workers the idea that ‘time is money’, employees’ struggle shifted from the preservation of traditional rights to the recovery of lost time.[ref]E. P. Thompson, ‘Time, Work-Discipline, and Industrial Capitalism’, Past & Present 38 (1967): 56–97.[/ref]
The attack on future pensions, and the different analyses offered by UUK and by E. P. Thompson, all point to ways that different notions of temporality are caught up in academic work: not simply in the way it is organised but also in the way that it is experienced. The unremitting busyness of academic life, mostly complained of but occasionally worn as a ridiculous badge of honour, throws colleagues into a relentless present in which prospect and perspective are all too often lost to the insistent clamour of everyday demands. This sense of the overwhelming present is only heightened, as the critic Mark Fisher noted, by the precariousness of modern casualised labour, which offers no secure place from which to understand our past or project our future hopes.[ref]Mark Fisher, Capitalist Realism: Is There No Alternative? (Chichester: Zero Books, 2009), p. 34.[/ref] Strikes offer us an opportunity to disengage, to escape a constricting present and get a sense of where we stand in time. Many strikes, certainly most of the strikes I have participated in, are nostalgic in kind: they mark a world we are on the brink of losing, or perhaps have lost. Others, like this current strike, quickly go beyond that, taking us out of the present to remind us there is a future to make. They give us, as UUK recommended, the opportunity to take time for ourselves. In our present crisis, strikes are the best medicine we have.
Rhodri Hayward is Reader in History in the School of History at Queen Mary University of London, and one of the editors of History of The Human Sciences.
The accompanying image, ‘Image taken from page 5 of “The Universal Strike of 1899. [A tale.]”‘ has been taken from the British Library’s flickr site. The original can be viewed here.
This article represents the views of the author only, and is not written on behalf of History of The Human Sciences, its editors, or editorial board.