Sociologica. V.12 N.1 (2018)
ISSN 1971-8853

Openings

Lucy Suchman
Centre for Science Studies, Lancaster University (United Kingdom)
http://www.lancaster.ac.uk/sociology/about-us/people/Lucy-Suchman

Lucy Suchman is Professor of Anthropology of Science and Technology in the Department of Sociology at Lancaster University. Before taking up her present post, she was a Principal Scientist at Xerox’s Palo Alto Research Center, where she spent twenty years as a researcher. Her research is focused on technological imaginaries and material practices of technology design, particularly developments at the interface of bodies and machines.

Published: 2018-07-26

A research project for me occurs at the conjoining of what I could characterize as three intersecting fields. These are (in no particular order, insofar as each is equally important):

– A concern; that is, something in the world that I would like to support and/or in which I hope to intervene;

– A body of scholarship; that is, ways of theorizing the world that feel illuminating, and comprise an ongoing process of thinking together to which I’d like to contribute;

– A location; that is, a place (in multiple senses of that term) from which I’m able and willing to act.

These are fields of thought and action that I am both actively engaged in generating, and that simultaneously capture and compel me in particular (indeterminate) directions at any given time. At the risk of over-rationalizing (as retrospective reconstructions invariably do), I can trace the course of my research life to date in these terms, beginning in the United States in the 1960s and 1970s.

The radical politics of those decades in the U.S. circled around concerns of race, war, and increasing concentrations of multinational corporate power. Diffracted through the contemporary anthropological orientations of the University of California at Berkeley, where I was a student, these concerns inspired a turn characterized by Professor Laura Nader as “studying up” (Nader, 1974). This trope urged a shifting of the anthropological gaze from those marginalized by centralizations of power, to the élite institutions and agencies in which political and economic resources were increasingly concentrated. At the same time, my encounters with teachers at UC Berkeley like the great symbolic interactionist Herbert Blumer (1969/1986), along with the expanding fields of ethnomethodology and conversation analysis, opened up the possibilities for what I would now characterize as a performative account of the mundane production of social order. The intersection of these lines led me to the idea of a PhD project that would involve a critical, interactionist analysis of the everyday operations of corporate power.

My search for access to a multinational corporation in the late 1970s took me in an unexpected direction, one that proved highly consequential for my life and work over the ensuing twenty years. My serendipitous arrival at Xerox’s Palo Alto Research Center (Xerox PARC) opened the space for a range of collaborations: critical engagement with cognitive and computer scientists around questions of intelligence and interactivity; collaboration with system designers aimed at respecifying central issues for them including the human-machine interface and usability; extensive studies of work settings oriented to articulating technologies as sociotechnical practice; engagement with an emerging international network of computer scientists and system designers committed to more participatory forms of system development with relevant workers/users; activism within relevant computer research networks to raise awareness of those alternatives; and iterative enactment of an ethnographically informed, participatory design practice within the context of the research center and the wider corporation. Although this extended history of collaborative experimentation and engagement was unquestionably fruitful, it also raised a number of questions for me regarding the politics of design, including the systematic placement of politics beyond the limits of the designer’s frame (see Suchman, 2011; 2013).

The end of my tenure at Xerox PARC took me in 2000 to Lancaster University in the UK, and more specifically to the Department of Sociology and the Centre for Science Studies. Building on the research enabled by my years at the research center, but freed now from the constraints of that location, I was able to return more directly to the political concerns that had taken me there. More specifically, I began to look for a way that I might enter worlds of critical scholarship and activism aimed at interrupting the trajectories of U.S. militarism in which, as a U.S. citizen, I felt implicated. I hoped to build on the foundations already laid by my previous research, at the intersections of AI/robotics, human-computer interaction, anthropology and STS. My learning curve in relation to military worlds was (and still is) a steep one, as I had spent my life to that point avoiding any contact with those worlds. First steps included reading and engagement with the work of others.

My current work in this area is animated by a concern with what geographer of militarism Derek Gregory (2004, p. 20) identifies as the “architectures of enmity”; that is, the sociotechnologies that facilitate enactments of “us” and “them” (see Suchman et al., 2017). A point of contact with my previous work has been the military trope of “situational awareness”, which I’ve engaged both through studies of a project in immersive simulations for military training (Suchman, 2015; 2016a), and in the context of a campaign, led by Human Rights Watch, to “Stop Killer Robots.”1 The campaign is premised on the observation that the threat posed by robotic weapons is not the prospect of a Terminator-style humanoid, but the more mundane progression of increasing automation in military weapon systems. Of particular concern are initiatives to automate the identification of particular categories of humans (those in a designated area, or who fit a specified and machine-readable profile) as legitimate targets for killing. A crucial issue here is that this delegation of “the decision to kill” presupposes the specification, in a computationally tractable way, of algorithms for the discriminatory identification of a legitimate target. The latter, under the Rules of Engagement, International Humanitarian Law and the Geneva Conventions, is an opponent who is engaged in combat and poses an “imminent threat”.

We have ample evidence for the increasing uncertainties involved in differentiating combatants from non-combatants under contemporary conditions of war fighting (even apart from crucial contests over the legitimacy of targeting protocols). And however partial and fragile their reach, the international legal frameworks governing war fighting are our best current hope for articulating limits on killing. The precedent for a ban on lethal autonomous weapons lies in the United Nations Convention on Certain Conventional Weapons (CCW), the body created to prohibit or restrict the use of “certain conventional weapons which may be deemed to be excessively injurious or have indiscriminate effects.”2 Since the launch of the campaign for a ban in 2013, the CCW has put the debate on lethal autonomous weapons onto its agenda. In April of 2016, I presented testimony on the impossibility of automating the capacity of “situational awareness”, accepted within military circles as necessary for discrimination between legitimate and illegitimate targets, and as a prerequisite to legal killing (Suchman, 2016b). My ethnomethodological background, and my earlier engagements with artificial intelligence (Suchman, 2007) alerted me to the fact that prescriptive frameworks like the laws of war (or any other human-designated directives) presuppose, rather than specify, the capacities for comprehension and judgment required for their implementation in any actual situation. It is precisely those capacities that artificial intelligences lack, now and for the foreseeable future.

While those of us engaged in thinking through science and technology studies (STS) are preoccupied with the contingent and shifting distributions of agency that comprise complex sociotechnical systems, the hope for calling central human actors to account for the effects of those systems rests on the possibility of articulating relevant normative and legal frameworks (Suchman & Weber, 2016). This means that we need conceptions of agency that recognize the inseparability of humans and technologies, and the always contingent nature of autonomy, in ways that help to reinstate human deliberation at the heart of matters of life, social justice, and death. This concern, informed by rich bodies of relevant scholarship at the intersections of sociology, anthropology, STS, and cultural/political geography (to name only those with which I am most immediately engaged), animates my current efforts to relocate and deepen longstanding heuristics for the articulation of contemporary social formations.

References

Blumer, H. (1986). Symbolic Interactionism: Perspective and Method. Berkeley: University of California Press. (Original work published 1969).

Gregory, D. (2004). The Colonial Present. Oxford: Blackwell.

Nader, L. (1974). Up the Anthropologist: Perspectives Gained from Studying Up. In D. Hymes (Ed.), Reinventing Anthropology (pp. 284–311). New York: Vintage.

Suchman, L. (2007). Human-Machine Reconfigurations: Plans and Situated Actions (Revised edition). New York: Cambridge University Press.

Suchman, L. (2011). Anthropological Relocations and the Limits of Design. Annual Review of Anthropology, 40, 1–18.

Suchman, L. (2013). Consuming Anthropology. In A. Barry & G. Born (Eds.), Interdisciplinarity: Reconfigurations of the Social and Natural Sciences (pp. 141–160). London: Routledge.

Suchman, L. (2015). Situational Awareness: Deadly Bioconvergence at the Boundaries of Bodies and Machines. MediaTropes, V(1), 1–24.

Suchman, L. (2016a). Configuring the Other: Sensing War through Immersive Simulation. Catalyst: Feminism, Theory, Technoscience, 2(1). https://catalystjournal.org/index.php/catalyst/article/view/suchman/html

Suchman, L. (2016b). Situational Awareness and Adherence to the Principle of Distinction as a Necessary Condition for Lawful Autonomy. In R. Geiss & H. Lahmann (Eds.), Lethal Autonomous Weapon Systems: Technology, Definition, Ethics, Law & Security (pp. 273–283). Berlin, Germany: Federal Foreign Office, Division Conventional Arms Control.

Suchman, L., & Weber, J. (2016). Human-Machine Autonomies. In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous Weapons Systems (pp. 75–102). Cambridge, UK: Cambridge University Press.

Suchman, L., Follis, K., & Weber, J. (2017). Tracking and Targeting: Sociotechnologies of (In)security. Science, Technology, & Human Values, 42(6), 983–1002.


  1. See https://www.stopkillerrobots.org/ (accessed June 29, 2018).

  2. https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30?OpenDocument (accessed June 29, 2018).