Sociologica. V.15 N.2 (2021), 179–185
ISSN 1971-8853

Lucy Suchman in Conversation with Ana Gross

Ana Gross, Independent researcher

Ana Gross is an independent researcher working on the social aspects of science and technology.

Lucy Suchman, Department of Sociology, Lancaster University (United Kingdom) https://www.lancaster.ac.uk/sociology/people/lucy-suchman
ORCID: https://orcid.org/0000-0001-9752-4684

Lucy Suchman is Professor Emerita of the Anthropology of Science and Technology at Lancaster University in the UK. She spent twenty years as a researcher at Xerox’s Palo Alto Research Center (PARC). Her current research extends her critical engagement with the fields of human-computer interaction and artificial intelligence to the question of whose bodies are incorporated into military systems, how, and with what consequences for the possibility of a less violent world.

Submitted: 2021-07-15 – Accepted: 2021-08-03 – Published: 2021-09-30

Ana Gross (AG): Can you tell me about your research experience at Xerox Palo Alto Research Center (PARC)?

Lucy Suchman (LS): I came to Xerox PARC as a PhD student in anthropology immersed in studies of face-to-face human interaction, with its extraordinary choreography, the moment-to-moment co-construction of our mutual intelligibility. At PARC I encountered other disciplines seemingly asking similar questions, but this time about human computer interaction, or interactive machines. And so that was a kind of immediate hook for me to think about. At that time the idea of interactive machines was, you know, quite surprising: you didn’t hear technologies being characterized in that way, but also of course it resonated so much with my own background. And so eventually my doctoral dissertation ended up being a study of interaction at the interface, taking seriously the idea that human machine interaction was interaction in a kind of comparative sense to the way that we then understood how human interaction worked. My study of human machine interaction happened in the context of an expert systems group at Xerox PARC, which took me into the realm of AI and cognitive science, and associated ways of trying to conceptualize communication, understanding, mutual intelligibility and so forth. There was a small but significant number of my co-researchers who found my study interesting, it was really thought provoking for them at that time. So it was through those apparently shared interests in interactivity and cognition that we came together, although as an interactionist anthropologist I was never directly engaged with cognition, say in the way that Edwin Hutchins (1995) was. I was always thinking about human machine interaction much more relationally, and already in a Science and Technology Studies (STS) kind of way. But still there were many resonances, and I think apparent points of similarity and actual differences, which created a very generative space for me to work in. So it was an entry into this world, and then looking for places where I could work in that world that were interesting to me and that I could make interesting to people there. I think this is a long-standing anthropological move. Part of what comes out of doing ethnographic work involves reframing the problem. Looking at the dominant way in which things are being framed and thinking about how that can be interrupted and reworked, and I think that's a really productive way to enter into interdisciplinarity. And that kind of reframing is one of the strategies for, again, working across similarities and differences.

AG: What prompted you to move from Xerox PARC to Lancaster University?

LS: For me it was primarily the changes in the industry that occurred towards the end of the 1990s. There appeared to be a kind of embrace of a lot of the work that I and my research group, the Work Practice and Technology area, had been doing — at that point we were in a laboratory called Knowledge and Practices, so we would seem to have made huge headway. But actually, at the same time that that kind of language was embraced and, in some ways, appropriated, there were differences between our understandings, and those differences became less and less acceptable. I felt that there was less room or space for difference or commitment to long-term research projects: it was more about the short term of returns to shareholders and what would be the next billion-dollar app. My immediate manager at the time accused me of being “resistant to change,” and I realized that I was resistant to getting in line with the corporate programme, which was the change that was increasingly being imposed on us as researchers. When I moved into a half-time position at Lancaster University, I found it to be enormously liberating. It wasn’t until I left Xerox PARC that I realized how much I had had to fold myself into particular shapes to fit in there, and so going to Lancaster was an incredible sense of opening. It was building on work that I had done before, but now being able to do it in a space where thinking critically, opening up problems, and trying to understand things in a more nuanced way was celebrated and supported. The Department of Sociology at Lancaster was particularly interdisciplinary, with scholars working on STS, feminist theory and gender studies, so it enabled me to expand the critical side of my work and return to an early interest in anti-militarism as a focus. The interdisciplinarity of our department was crucial: I would never have gotten a job in an Anthropology department. While I love anthropology, it tends to be quite orthodox; I mean, it can be very radical in many ways, but there is a kind of disciplinary orthodoxy there. And if I had had a career in academic anthropology only, I probably would have been quite frustrated.

AG: How do you see the future of social science and industry collaborations beyond the in-house anthropologist model? And how can academic ideas translate better into other worlds?

LS: It is really important to emphasize what a distinctive place Xerox PARC was in my early years there. It was a very open, academically oriented research centre, founded by academics and largely populated by academics, who of course also had some government contracts at that time, particularly with the Defense Advanced Research Projects Agency. Xerox PARC was created under the auspices of economic prosperity, in line with other great industrial research complexes like IBM or Bell Labs. However, as the years went by and the computing industry grew, and Xerox’s place in the computing industry became more uncertain, that open research space started to close down. There was an increasingly intensified focus on quarterly business analyst reports, short-term returns, and so on. And this is really antithetical to research; that is when collaborations between academia and industry become really difficult. I guess I cannot be terribly optimistic about the spaces within industry where there is genuine investment in any kind of long-term research, or anything that requires ongoing engagement over time. Although there is an opening to the social sciences within industry, for the most part it is extremely instrumental. And at the same time, of course, it is so important for sociologists, anthropologists, and people working in STS to actually understand what is going on in industry as a site for research, as a way of looking for opportunities to learn how things work from the inside; I think that is really valuable. But I would not have huge expectations of finding really generative places to do research within industry at the moment. In terms of translation, I’ve long felt the need to write for multiple audiences, in multiple genres. What I like most is writing academic papers, because I love scholarship, I love citation, and I like reading and thinking with other people, which is how I think of academic papers at their best: the careful tracing of the antecedents of what you want to write about, then building on those arguments and starting new conversations. I love that kind of writing, and it is quite important as a space for us to be able to do challenging critical thinking. Then the question is how you take what you learn there and voice it in different modalities. In the past few years, for example, I’ve become involved with the International Committee for Robot Arms Control, which is part of a larger effort, headed by Human Rights Watch, against autonomous weapon systems. I’ve spoken at events at the UN and engaged in these movements for arms control and disarmament, which is incredibly important but also really limiting. You have to accept the idea that all of these countries, all of these states, are going to continue to be militarized, and then in a way you are trying to avoid the worst, which would be to automate the identification of targets. I have also been closely following the developments of the National Security Commission on AI, which is a collaboration between Silicon Valley promoters like Eric Schmidt, the former Google CEO, and technophiles in the US military to promote AI investment. The Commission came up with a seven-hundred-page report, and I wrote a blog post for the AI Now Institute entitled “Six Unexamined Premises Regarding Artificial Intelligence and National Security” (Suchman, 2021a), about things that were absolutely assumed and unquestioned in this report and that I thought were the fundamental things we needed to talk about. I also did a podcast¹ on that.
I think it is important to challenge ourselves to expand our voices and collaborate with other people in advancing academic ideas in other areas. Telling your story in different ways is crucial and generative, both to different audiences and to yourself. In some spaces it is very difficult to do that work of reframing. For example, I was invited to a workshop convened by the Defense Innovation Board, which was advising the Pentagon on a set of AI Ethics Principles. And I found it difficult to engage in that discussion, because everything that I wanted to talk about was outside of the frame. So, yes, it was hard. That does not mean that we don’t need to keep working at it. But sometimes it is difficult to find a voice in those conversations without being appropriated, without adding legitimacy to a project you don’t agree with. I was also invited by Microsoft to be an ethics adviser to their Integrated Visual Augmentation System (IVAS) program (Kipman, 2021), and for a moment I thought, what a fantastic ethnographic opportunity! But then I asked what the terms of my engagement would be, and was basically told that I would never be able to say anything publicly about it. So it requires careful consideration to avoid being one of the ingredients in the recipe “add a social scientist and stir”!

AG: It seems to me that current AI development is being shaped and informed by cognitivist understandings of interaction, meaning-making and context-making; when it comes to interaction and communication, the AI field has drawn mainly on cognitive theory. Why do you think the field has failed to incorporate social models of interaction? Do you think this is somehow a disciplinary battle that social scientists lost?

LS: I think it has partly to do with the history of the behavioural sciences, where psychology, which was folded more closely into the biological and behavioural sciences, put the focus on the individual cognizer, and was in this sense more aligned with the scientific framings of the computational modelling of the brain and the neuro-scientific modelling of the computer. When I came to Xerox PARC, when people talked about interdisciplinarity they meant: well, we have, you know, computer scientists, we have computational linguists, we have physicists, and we have cognitive psychologists. The cognitive psychologists were mainly working on information processing, so there was already such a strong alignment between computer science and individualist or psychological models as ways of understanding cognition. Social scientists came to work on the periphery of such alignments; they worked around the edges. There is, however, some progress being made in trying to introduce critical social scientific sensibilities into the AI field. If you go to the Computer-Human Interaction (CHI) Conference and the burgeoning conferences on human-centred AI, there are increasing numbers of young researchers who are quite conversant with and inspired by different aspects of critical social science and the humanities, and who are really not satisfied with that superficial add-on version of social science.

AG: I’d like to touch upon your current work on AI-based military systems and your reformulation of the concept of “situational awareness”. Your work suggests that human interactivity with a given environment and other types of agencies is paramount in generating context and meaning-making and, more importantly, in the shaping of moral and ethical decisions on the battlefield. Does your work fundamentally suggest that machines will always be incapable of such moral and ethical accomplishments? Or does it suggest that it is a matter of building the right machines, underpinned by the right anthropological and sociological models?

LS: You are asking a question which is at the heart of the work I have done and which I am very much still thinking about. When I first started my engagement with human-computer interaction and AI, I was careful not to make programmatic arguments about the essential nature of humans and machines. My argument has always been that it’s not about diminishing the machine’s capacities but letting machines be the specific artifacts that they are, not aiming to have them be approximations of humans. Social understandings of human context-making are quite different to the behaviourist models of perceptual abilities, for example, that have been used to build AI, based on input/output information-processing exchanges between the environment and the cognizer. Conversation analysis and ethnomethodology brought this crucial shift from thinking about the environment or world as out there to worlds as ongoing accomplishments. The much more radically interactive propositions coming out of interaction studies, and also the propositions coming out of the poststructuralist, performative turn in the social sciences, challenged this notion of context or environment as a mere container or closed world view. Karen Barad says that we are not in the world, nor are we of the world; we are part of the world’s differential becoming (Barad, 2007, p. 91). This is quite different to the ways in which different disciplines — cognitive psychology, computational science, neuroscience, and related disciplines — advanced individual cognitivist models of information processing and awareness based on the assumption of closed worlds. I’ve just finished writing a paper entitled “Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense” (Suchman, in review), looking at the history of cognitive models of situational awareness in the military, and more specifically the Observe, Orient, Decide, Act (OODA) framework. Even in the more cybernetic versions, the assumption is that there is a kind of whitespace out there, a closed-world loop. You will never get from those models of situational awareness the kind of openness, contingency and indeterminacy that characterizes and forms the basis of human interaction. And it becomes increasingly clear that in order to make AI systems operate in the openness and contingency of human worlds, it is necessary to reengineer those worlds as closed worlds. I worry about this in relation to the automation of weapons systems and military training. If you look at military training simulations, arguably what is happening in that field is done in the name of simulating the world out there that soldiers will encounter, so that when they encounter it, they can recognize what is going on. But the other way of reading simulation is that it is about generating a closed world, which the soldiers can take with them wherever they are deployed. This in some way will make them impervious to any kind of openness to what is going on around them. And you can see that even more in this IVAS HoloLens project coming out of Microsoft, which is the idea of a head-mounted display for frontline infantry, where basically their engagement with the world will be mediated through an information-processing interface. These technologies are being developed in the name of expanding situational awareness, when in reality they are closing awareness to maintain closed worlds.
The thing that radically interrupts the whole computational and AI project is the openness and contingency of human interaction; you can see each round or iteration of AI as an effort to close worlds. I am always looking to be surprised, and I monitor developments in AI and robotics to see whether something radically changes this argument, but so far nothing has.

AG: When did you come out as a feminist, or how did feminism encounter you? What do you think are the most ground-breaking feminist ideas at the moment, both at a political and a theoretical level?

LS: I’m not someone who really identified as a feminist in the 70s, or even in the 80s. I began to read feminist writing and realized that I was a feminist, becoming increasingly aware of the long and complex history of feminism in all of its different aspects. I realized how increasingly important it became to mark one’s work in relation to feminism, rather than just appreciating or acknowledging other people’s work as feminist. I can’t remember exactly who I read first, but I think for me the greatest resonances were possibly Judith Butler and Donna Haraway, along with the post-structuralist move within the social sciences, which of course also resonated very much with my more ethnomethodological background: the whole idea of the performative agencies of discourse, the inseparability of the material and the semiotic, the idea that social structures have to be understood in their ongoing reproduction, rather than as given. And the fact that such structures have to be reproduced, but that there are also always slippages in that cycle of reproduction, and that those slippages are points of potential intervention for transformation. One of my earliest feminist papers, first published in 1988, was called “Computerization and Women’s Knowledge”, which I co-authored with my colleague Brigitte Jordan, and in which we compared childbirth and office work (Suchman & Jordan, 1988). We talked about how both technological projects, Western biomedicine and high-tech childbirth on the one hand and office automation on the other, shared a failure to recognise the long-standing knowledge that informed childbirth and work practices respectively. The comparison also served to recognise the limits of computational formalisation and other kinds of so-called technological improvements. Feminist technoscience is an incredibly exciting and burgeoning field. Some of the most exciting intersections for me are between feminism and post- or decolonial thinking. Michelle Murphy² and Max Liboiron³ are, for example, working at the intersections of feminist, environmental, indigenous and decolonial thinking, and they are shifting the frames of reference in terms of literatures in very exciting, challenging ways. In the area of militarism and demilitarization there is really interesting work on questions about how the identification of civilians and combatants is done, and the gendered and racialized aspects of that, like the work of Christiane Wilke (2017), which is really pushing the boundaries of what it means to be a civilian.

AG: Can you tell me about your methodological toolkit? You have written from a methodological standpoint about the videotape; the prototype; the face-to-face interview. What has been your methodological proposition throughout your career?

LS: Having a disciplinary background in anthropology, there’s certainly for me an appreciation of ethnography, in the sense of being present over extended periods of time in the worlds that you are trying to understand. I believe this allows you to get an understanding that you cannot get in any other way. In some ways I could say that I did a 20-year ethnography at Xerox PARC, and this is probably the place that I know best from the inside out; it has been a tremendous resource for me. There was certainly a period of time when I was also strongly influenced by interaction studies, conversation analysis, ethnomethodology, video analysis and so forth. And again, I think I learnt things from these methods that I could not have learned any other way, in particular about the interface, or human-machine interaction, and for my own dissertation that proved to be extremely generative. We used video analysis in our work practice studies at PARC, and in co-design projects, and we found video incredibly valuable in the sense of capturing the unremarkable and mundane choreographies that people would not be able to tell you about. However, video is also limiting: there are things that you miss, given that you have a really tight frame on things, so the contextualization required is incredibly important. I also managed to get involved in some really rich projects concerning prototyping, for example a project that we did with a team of civil engineers at the California Department of Transportation, where we were able to do ethnographic work to understand their work as civil engineers, do some close studies of their work at their workstations, and actually create a prototype document system for them and test it out. This was a multi-methods project, and that is what made it so rich. So I think being at Xerox PARC was an opportunity to explore and work with a really extraordinary range of methods that for me came out of anthropology, interaction analysis, computer-supported cooperative work, participatory design, STS, and so on. And yet now my writing and research are more and more reliant on secondary sources. Working on the military, I’ve not been able to figure out a way to do research about this field ethnographically, for a variety of reasons but particularly reasons of access. So I am finding myself, for better or worse, more reliant on a range of diverse secondary sources and archival materials, which are also very generative. I am very grateful to have had the opportunity of working with so many different methods, and for the contexts that made that possible. The main point would be that I am against any kind of methodological orthodoxy. I think it is important to craft the problem you want to engage with and where you engage it, then think about the kind of methodological toolkit that you can draw on to do the work that you are trying to do.

References

Barad, K. (2007). Meeting the Universe Halfway. Durham: Duke University Press.

Hutchins, E. (1995). Cognition in the Wild. Cambridge: MIT Press.

Kipman, A. (2021). Army Moves Microsoft HoloLens-Based Headset from Prototyping to Production Phase. Official Microsoft Blog, 31 March. https://blogs.microsoft.com/blog/2021/03/31/army-moves-microsoft-hololens-based-headset-from-prototyping-to-production-phase/

Suchman, L. (2021a). Six Unexamined Premises Regarding Artificial Intelligence and National Security. Medium, 31 March. https://medium.com/@AINowInstitute/six-unexamined-premises-regarding-artificial-intelligence-and-national-security-eff9f06eea0

Suchman, L. (in review). Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense. Social Studies of Science.

Suchman, L., & Jordan, B. (1988). Computerization and Women’s Knowledge. In K. Tijdens, M. Jennings, I. Wagner, & M. Weggelaar (Eds.), Women, Work and Computerization (pp. 153–160). Amsterdam: North Holland.

Wilke, C. (2017). Seeing and Unmaking Civilians in Afghanistan: Visual Technologies and Contested Professional Visions. Science, Technology, & Human Values, 42(6), 1031–1060. https://doi.org/10.1177/0162243917703463


  1. https://techpolicy.press/ai-and-national-security-examining-first-principles-a-conversation-with-lucy-suchman/

  2. https://technoscienceunit.org/

  3. https://civiclaboratory.nl/