Sociologica. V.18 N.3 (2024), 137–146
ISSN 1971-8853

Can Democracy Survive AI?

Gina Neff
Queen Mary University of London (United Kingdom); Minderoo Centre for Technology & Democracy, University of Cambridge (United Kingdom) https://www.mctd.ac.uk/
ORCID https://orcid.org/0000-0001-9090-924X

Gina Neff is Professor of Responsible AI at the Digital Environment Research Institute at Queen Mary University of London (United Kingdom) and Director of the Minderoo Centre for Technology & Democracy at the University of Cambridge (United Kingdom).

Submitted: 2025-01-14 – Accepted: 2025-01-15 – Published: 2025-01-22

Abstract

This essay examines the fundamental tension between artificial intelligence technologies and democratic governance, arguing that AI’s inherent tendencies toward centralization and control pose significant challenges to democratic societies. Drawing on science and technology studies and critical analyses of technological politics, I argue that current AI implementations embody four key anti-democratic characteristics: they represent powerful technologies of centralization and control; they fuel ideologies of unchecked economic growth; they prioritize efficiency over accountability; and they enable absolute control coupled with unaccountable power. The analysis synthesizes historical parallels between computing and control, contemporary developments in AI infrastructure, and emerging policy frameworks to demonstrate how AI’s technical architecture and commercial implementation systematically undermine democratic values of transparency, accountability, and public participation. Through examination of recent political developments and corporate practices, the essay reveals how AI’s centralization of power and erosion of public oversight threaten democratic institutions. I conclude that democracy’s survival in an AI-driven future depends on reimagining and rebuilding digital technologies with democratic accountability at their core, requiring new frameworks for public oversight and corporate governance.

Keywords: AI; Science and Technology Studies; Economic Sociology; technology; theory.

Acknowledgements

This research is supported by the ESRC Digital Good Network through grant number ES/X502352/1.

“In controversies about technology and society,
there is no idea more provocative than the notion that
technical things have political qualities.”

— Langdon Winner, Do Artifacts Have Politics?, 1980, p. 121.

In his essay “Do Artifacts Have Politics?” (1980), Langdon Winner retells the history of how Robert Moses built the overpasses on the Southern State Parkway on New York’s Long Island. The story holds that infrastructural choices shape how people can move and what they can do. The account, drawn from Robert Caro’s (1974) biography of Moses, says that Moses intentionally made the bridges low to keep urban Blacks off nearby Jones Beach. Historians now largely discount this detail of Caro’s explanation. Moses may have been a racist, and the overpasses on parkways like the Southern State were indeed lower. But this coincidence of facts does not mean that Moses’s intentions were the reason.

What do we do with this re-reading of the intentionality of artifacts, and of technology, today? If an anchoring essay in the field of Science and Technology Studies (STS) may be wrong in its details, can it still be right to suggest that technologies can have inherent politics?

The question of whether AI technologies have a politics is one that I want to explore. I want to suggest that we can indeed do a political analysis of large-scale technologies like the one that Winner suggested, and I would argue that the evidence for the political bent of AI is clear, even if our previous analysis of the politics of digital technologies was wrong. My essay stems from a talk delivered in November 2024 for the celebration of 25 years of the Center on Organizational Innovation at Columbia University, which fell in the same week as the re-election of Donald Trump as President of the United States. This coincidence of events makes for an ideal time to reflect on what we have learned in twenty-five years of applying STS to digital technologies and on the changes in the political imaginaries about technologies over that period. The Columbia gathering in November 2024 also celebrated twenty-five-plus years of David Stark training students (of which I’m a proud beneficiary), working with collaborators around the world (again, very proud to have written with David, see Neff & Stark, 2004), and questioning the relationship between technology and democracy, a project that I and others influenced by David aim to continue. Any shadow over the mood of the COI celebration coincided with our reflections on the assumption that we had 25 years ago: namely, that the political transformations digital technologies made possible would be hopeful, liberal, emancipatory, and progressive. I will question that assumption in this essay.

“The things we call technologies,” Winner argued, “are ways of building order in our world” (1980, p. 127). That is as fine a way as any to introduce my point that democratic societies may not be able to afford the impact of the suite of technologies, products, services, hardware, and data value chains that we now commonly refer to collectively as “AI”. What I will argue in this essay is that there is a momentum to large-scale socio-technical systems, and that momentum, those drives and pushes for AI, carries inherently political values. Dan McQuillan wrote, “AI is political because it acts in the world in ways that affect the distribution of power, and its political tendencies are revealed in ways that it sets up boundaries and separations” (2022, p. 2). AI technologies are political in ways that matter for the future of democratic societies. What follows is a reflection on the current balance of power between the people who propose that we use these systems, tools and technologies in ever-increasing areas of public and private life and the people who live within democratic societies.

1 Defining Democracy, Organizing and AI

By democracy, I refer to a paradigm of 20th-century liberal democracy that holds that decision-making should be accountable and transparent to the public. I also refer broadly to participatory efforts within capitalism in the spirit that Bowles and Gintis (1993) wrote about, namely broad participation by people in the decisions about their lives, including at their workplaces:

People ought to have a voice, and in some sense an equally effective voice, in the decisions that affect their lives. Modern liberal democratic theory generally supports the application of both democratic and liberal principles to the state, while supporting the application of the liberal principle alone to the economy. Thus, according to liberal democratic norms, capitalist economies in which effective claims on resources and command over labor generally reside in property owners and their representatives may represent a just form of social organization providing, of course, that markets are sufficiently competitive (p. 98).

The founding of COI recognized that digital technologies and mechanisms of organizing are deeply intertwined and that innovations in technologies could lead to experiments in how work and governance are organized. From the Center’s description of its work:

The Center on Organizational Innovation promotes research and experimentation with new forms of collaboration, communication, and coordination afforded by emerging interactive technologies. At mid-century, organizational analysts at Columbia University, including Peter Blau, Alvin Gouldner, Paul Lazarsfeld, and Robert Merton, charted the rise of bureaucratic organizations and the emergence of mass communication through case studies of work groups and the demographics of audience reception. In our new century, we chart the emergence of collaborative organizational forms in an era when social interaction moves seamlessly between encounters face-to-face and at the digital interface (COI, “Who we are”).

Artificial intelligence has come to take on so many definitions as to render the term almost meaningless. Generative AI and Large Language Models have captured the public imagination about what AI is and its possibilities. However, this has come at the risk of masking the shift to large-scale automated processing of data. Consider a broader definition of AI from a piece of work that I did with the UK’s Trades Union Congress in 2024:

AI is a machine-based system that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment. Such systems will have functions that include prediction, planning, classification, pattern recognition, organization, perception, the recognition of speech, sound, or image; the generation of text, sound, or image; language translation, communication, learning, representation, and problem-solving. A system does not cease to be an artificial intelligence system solely because of human involvement in the system (TUC, 2024).

In what follows, I lay out four premises for considering the anti-democratic politics of today’s instantiation of AI technologies.

2 AI Is a Set of Powerful Technologies of Centralization and Control

The first Trump administration was less than 100 days old when Kate Crawford packed a hotel ballroom in Austin, Texas, for her SXSW lecture Dark Days: AI and the Rise of Fascism (2017). Crawford said in that talk: “Machine intelligence can be a powerful part of the power playbook,” and a “step function increase in the spread of AI” was happening alongside “the rise of ultra-nationalism, right-wing authoritarianism and fascism.” Her talk took us back to the origins of fascist fantasies of control. There was an audible gasp in the room when Crawford traced the name of what was then Europe’s largest supercomputer directly to fascist attempts at control: Mare Nostrum, Our Sea, a phrase referring both to Roman imperialism and to Italian Fascist fantasies of Mediterranean control. Crawford argued that the historical parallels between the projects of fascist control and computing control were no accident: control over “our” sea, our sea of data, over populations, over a maligned “them” or other. Crawford never once mentioned Trump by name in that 2017 lecture, but the mainly techie crowd who packed into the room knew whom she meant. They were there because of Trump, because of their fears for US democracy, and for the lessons that the history of computing and control could teach us about 2017 and beyond. AI and fascism, she argued, share “the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable.” Crawford continued, “This is a fascist’s dream… Power without accountability” (Crawford, 2017). These centralizing tendencies of AI have also been pointed out by others as potentially giving enormous control and power to authoritarians. For example, McQuillan defines AI as “a kind of computing, a form of knowledge production, a paradigm for social organization and a political project” (2022, p. 2).

AI’s tendencies toward centralization and control reflect a continuation of fantasies of power over populations, not a recent break from them. What now seems quaint in Crawford’s talk from eight years ago is how the examples of control she gave at the time seemed like edge cases. Most of us now know the risks of biased data, failed systems and lack of accountability. Increases in computing power mean we have more cases of widespread computing rolled out to surveil and control. In the UK, police now widely deploy live facial recognition technologies to scan people on the streets in London, at a Beyoncé concert, or at a Formula One race (Radiya-Dixit & Neff, 2023). The edge cases of applications for control have become everyday.

What also seems quaint in hindsight is looking at public supercomputing projects like Mare Nostrum as symbols of large-scale control. Government-backed supercomputers are not powering the drive for AI; privately hosted cloud computing is. Most of the world’s cloud computing capacity, 55%, is now controlled by only two US companies. These companies are translating initial data monopolies into infrastructural moats, exchanging financial capital for first-mover advantage in the rush to build an infrastructural market for AI services.

The idea that digital technologies could lead to more, not less, centralization was not something that we foresaw at COI. However, looking back, it is clear that the tendencies for building control into the technological infrastructure for AI were there from the rise of the commercial internet. Look back to the cover of Time magazine’s Person of the Year in 1999, Jeff Bezos. At the time, Amazon was only five years old and hemorrhaging cash. In early letters to investors, Bezos warned that the company would not be profitable any time in the near future. These doubts were captured in Time’s feature. But that 1999 article also held a kernel of clarity: Bezos sought to use the growth of the Internet for market centralization and control. Bezos’s founding moment came from realizing how fast the Internet was growing, at the time 2,300% a year. The article quotes Bezos as saying that moment “was a wake-up call… I started thinking, O.K., what kind of business opportunity might there be here?”. Books were products with already existing, highly structured data sources that could be leveraged for that growth. More critically, Bezos soon identified that providing infrastructural resources to other companies could place Amazon at the center of the internet’s control during this time of explosive growth. If we look back at that 1999 article, it is clear now that Amazon was never about the books; it was always about market control matched with that explosive growth in digital technologies. Not long after that feature appeared, Amazon launched the forerunner to Amazon Web Services (AWS), providing services and infrastructure to other companies in expanding markets.

Now Amazon helps to control the computing infrastructure for making digital markets for AI around the world. That AWS infrastructure is the backbone for modern AI, and Amazon Web Services is at the center of AI’s concentration of energy, electricity, compute, and data. Power in one form, early market advantage and concentration of financial capital, has been converted into power in another. Whatever AI means, at least in how today’s infrastructure has centralized the services for computing, it means tools for the concentration of power.

3 Imaginaries about AI Fuel Ideologies of Growth

Currently, it is hard to escape stories about the possibilities of AI for growth and the future. Consider the new Labour government in the UK, which has pegged the economic future of the country to AI and to what AI-fueled growth might do for the economy. Few can envy the predicament of the new government. The UK Conservative Party, in power for 14 years, handed the Labour Party an enormous bill for the failures of austerity policies. Conservative policies hollowed out economic growth, making the UK one of the slowest-growing economies in the G7, despite its position as a financial and education superpower. “Growth must come first” is now the new government’s message, and they are framing that in terms of an “AI Opportunities Action Plan” for the economy (UK Government, 2025). Political rhetoric around AI in the UK says that it will transform the economy, save the National Health Service, and jump-start growth. “Currently available AI”, according to government minister Peter Kyle, could increase productivity growth to 5% per year over the next five years, a two- to five-fold increase over current rates every year for five years, which would be extraordinary (Washington Post Live, 2024).

These are powerful images of what AI could do, could become. Such a view of AI as a growth engine does what I call in my next book futuring work: the activities and actions that leaders and practitioners put in motion to shape how new technologies might be used. Futuring work helps people both see what AI could be for and create new ways to position themselves and their decisions in relation to changes from technology. Futuring work about AI, about any technology really, helps people, companies and markets shape use cases and guides early adoption. For example, AI could be about play, creativity, liberation, sex, democracy or any of the other ways that people might fit AI into their lives. For now, at least in the UK, such playful ways of talking about AI’s future are put aside in favor of stories about how AI will work for us, for the economy and for growth. These conversations about AI create pathways for the technology’s future, and in this case, AI is about economic growth powered by efficiency. This idea about AI is sprinkled like fairy dust over economic problems, with transformation and growth expected to magically appear as a result. But there are many other kinds of imaginaries, many other kinds of futures, that could show how people could use and benefit from AI.

Of course, questions about economic growth and organizational transformation should be matched with the questions For whom? and Where?. Without such critical questions, the current growth-centric narratives about AI’s possibilities risk concentrating economic power on deeply unlevel playing fields. Such one-sided narratives about AI as economic growth cement divides between the global majority, where the scraping, cleaning, and managing of AI systems is translated into low-paid “ghost work” (Gray & Suri, 2019), and the digital North, where these systems appear sanitized of such labor. If AI is powered by people’s work that is poorly paid and located far away from the US and Europe, then AI tools might indeed look like magic instead of the compute- and labor-intensive data technologies that they are. The long global supply chains for AI’s compute and labor move problems far away from the clean offices and work-from-home comforts of the Western tech sector, along existing postcolonial supply chains (Neff et al., 2020). It is not “our” labor in Western democracies that is going into AI systems. It is always and forever someone else’s. It is a move that mirrors the fantasies of population control and the centralization of data and infrastructure: AI’s labor-saving imaginaries are built on intensive data work done in the Global South.

At the beginning of COI, many of us studied economic transitions. We were primed to look at how capitalisms varied around the world. I and others looked closely at changes that were happening to work and labor with the rise of digital companies (Neff, 2012). However, we missed how deeply unequal the economic gains from the digital transition would be. The Clinton administration bet big on globalization and Robert Reich’s “symbolic analysts” as the knowledge workers of the future who would save the post-industrial US economy with great jobs in urban centers. The “cool jobs” in growing “hot industries” would combine digital technology with creativity (Neff et al., 2005), and the tech industry would be clean, fast-growing, and well-paid. Above all, this new tech industry would be young and work within it would be democratic, as I analyzed in Venture Labor (Neff, 2012).

Today, when there is political expression in the US of justified anger about jobs and inflation, about the cost of the American dream, about the industrial future of the US, this bet on Big Tech from the 1990s looks like shorting America’s future. US elites placed their markers on the tech sector, and they profited and continue to profit from the explosive growth in digital technologies. However, along the way, the Democratic Party forgot that these same ideologies of clean growth fueling great post-industrial jobs could hollow out growth in other sectors and crowd out futures for people outside of the tech sector. To call for regulation of the tech sector is to be immediately branded as anti-growth. If the 2024 US elections teach the US Democratic Party anything, it may be that people forgot, until it was too late, that Silicon Valley could be right-wing, hiding self-interest behind visions of digital technologies as generalized engines of growth. Those visions did Silicon Valley’s futuring work for how we would all use the internet and digital technologies, and now we are paying the price.

4 AI Boosters Promote Efficiency over Accountability

The victory of efficiency over accountability for AI tools and technologies is the victory of the ends over the means. Ami Fields-Meyer and Janet Haven said this best in a recent Foreign Policy commentary (2024): “Liberal societies are characterized by openness, transparency, and individual agency. But the design and deployment of powerful AI systems are the precise inverse.” They continued:

Many of today’s AI systems […] run over civil rights and liberties and cause harm for which people cannot easily seek redress. They violate privacy, spread falsehoods, and obscure economic crimes such as price-fixing, fraud, and deception. And they are increasingly used — without an architecture of accountability — in institutions central to American life: the workplace, policing, the legal system, public services, schools, and hospitals (Fields-Meyer & Haven, 2024).

These are the AI systems that we have in place today.

The Biden White House, led by the work of sociologist and former Columbia professor Alondra Nelson, issued a Blueprint for an AI Bill of Rights (The White House, 2022). While it was not legislation, this policy document called on companies to ensure that AI technologies are safe, fair, and protective of people’s privacy; that people be made aware when systems are being used to make decisions about them; and that people be able to opt out. The proposed framework was a proactive, democratic vision for the use of advanced technology in American society. Biden’s Executive Order on AI (The White House, 2023) mandated a coordinated federal response to AI, using a “rights and safety” framework. Over the last year, other jurisdictions, including the UK and the EU, have looked to the leadership of the Biden White House as showing a path forward for AI governance.

While this policy leadership was helpful, it could not check the market power of the companies building AI tools. Nor did these measures put in place binding regulations to ensure that AI tools are accountable to public oversight. The presumed rightness of the AI mission was never questioned in this framework. Western liberal democracies, blinded by the ideology of growth, may have committed a tragic mistake for democracy and public accountability: namely, privileging and prioritizing an industry moving fast and breaking things over ensuring growth that is accountable and transparent to governments and publics. Efficiency, as Bowles and Gintis reminded us above, is only one part of the values of capitalist liberal democracies. If we want democracies to continue, we must not fall for the idea that efficiency is the only goal or value that counts; we must keep accountability and transparency on the table.

5 AI Represents Absolute Control Coupled with Unchecked, Unaccountable Power that is Antithetical to Democracy

In Big Tech’s hype about the possible futures for AI, we see an industry serving private interests by pitching public investments in infrastructure. Choices about what is needed for this future are not made with transparency and accountability to governments and publics. The infrastructure necessary for this future — from new data centers to increased electricity to a vast undersea cable network (Starosielski, 2015) — is mainly shielded from public view. The infrastructural investments powering this wave of AI come with no mechanisms of public accountability, only a patchwork of standards and protocols and a spirit of limited regulation, lest growth be checked.

Consider the work on “AI safety” around the first AI Safety Summit at Bletchley Park in November 2023. Companies set themselves up as the only true experts on AI technologies, suggesting that government regulation could never manage the so-called existential threats to humanity. In effect, companies argued that they, not democratic governments, were humanity’s only hope. Marietje Schaake, in her book The Tech Coup (2024), argues that the lack of competitiveness in the markets building foundational models is part of the reason for the anti-democratic behavior of the companies. For example, Microsoft, Google and Amazon all admit in their Environmental, Social and Governance (ESG) reports that their investments in LLMs are costing them their carbon emissions goals. The trade-offs between the existential risks of climate change and the unchecked growth of AI are not yet a part of policy discussions, and companies are not accountable to publics and governments for the choices that they are making that will impact us all.

6 What Will the Trump Administration Mean for AI and Democracy?

As I write, the second Trump administration has yet to begin. But signals suggest that there will be a gutting of the rights-based approach to AI that characterized how the US approached global AI governance. An America-First attitude toward AI development could well supercharge already heightened tensions in drawing battle lines for global cyberwars. An isolationist US may have less leverage with allies to press for changes in AI regulation. An administration keen on cutting what it sees as unnecessary regulation and blocks to unfettered growth could end requirements for Environmental, Social and Governance (ESG) reporting by publicly traded companies, one of the few levers for public accountability that currently exist over large tech companies’ climate emissions. The geopolitical tensions that the Trump administration faces will play an enormously important role in a realignment of global leadership. The US can show the world how democracy and AI can be compatible, but only if tech policies are put in place that shore up the ability of governments and publics to meaningfully participate in the choices and decisions about AI’s possible futures.

7 Looking Forward

The Center on Organizational Innovation prepared a generation of scholars to care deeply about the relationship between digital technologies and ways of organizing companies, governments, and societies. The early giddy enthusiasm and excitement that we had for the possibilities digital technologies might bring to societies have changed. Still, there are reasons to hope.

The first is through thinking about how digital technologies are domesticated through their use in workplaces. A hopeful approach watches how agency plays a role in how people adopt, resist, and modify AI tools. The second is the odd comfort in the harsh reality that the climate crisis does not care about politics. The growing risks that societies face may force new kinds of public accountabilities about AI and all of our energy choices. The third is that without rebuilding trust in each other and in institutions, we cannot have democratically accountable AI in democratic societies.

Whether democracy can survive AI will depend on us. Moving fast and breaking things is not a way to sustainably build digital futures. There are, however, alternatives. If we leave the decisions about what to build to the titans of tech, the results will be anti-democratic and built for private over public gain. I help to lead the ESRC Digital Good Network, and we are trying to reimagine what good looks like for digital societies, so I am hopeful that this work can be done. We can get back to that spirit of twenty-five years ago when David Stark founded the Center on Organizational Innovation. And we can imagine what we want the next 25 years to look like. To get there, however, we all have work to do.

References

Bowles, S., & Gintis, H. (1993). A Political and Economic Case for the Democratic Enterprise. Economics and Philosophy, 9(1), 75–100. https://doi.org/10.1017/S0266267100005125

Caro, R.A. (1974). The Power Broker: Robert Moses and the Fall of New York. New York, NY: Knopf Doubleday Publishing Group.

Center on Organizational Innovation (COI). Who We Are. https://coi.sociology.columbia.edu/content/who-we-are (Accessed January 1, 2025).

Crawford, K. (2017). Dark Days: AI and the Rise of Fascism. Austin, TX: SXSW. https://www.youtube.com/watch?v=Dlr4O1aEJvI (Accessed January 1, 2025).

Fields-Meyer, A., & Haven, J. (2024). AI’s Alarming Trend to Illiberalism. Foreign Policy, October 31. https://foreignpolicy.com/2024/10/31/artificial-intelligence-ai-illiberalism-democracy-civil-rights/ (Accessed January 1, 2025).

Gray, M.L., & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston, MA: Houghton Mifflin Harcourt.

McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol, UK: Bristol University Press.

Neff, G. (2012). Venture Labor: Work and the Burden of Risk in Innovative Industries. Cambridge, US: MIT Press.

Neff, G., McGrath, M., & Prakash, N. (2020). AI@Work. Oxford Internet Institute and the Minderoo Foundation. https://www.oii.ox.ac.uk/wp-content/uploads/2020/08/AI-at-Work-2020-Accessible-version.pdf (Accessed January 1, 2025).

Neff, G., & Stark, D. (2004). Permanently Beta: Responsive Organization in the Internet Era. In P.N. Howard & S. Jones (Eds.), Society Online: The Internet in Context (pp. 173–188). London: Sage. https://doi.org/10.4135/9781452229560

Neff, G., Wissinger, E., & Zukin, S. (2005). Entrepreneurial Labor among Cultural Producers: “Cool” Jobs in “Hot” Industries. Social Semiotics, 15(3), 307–334. https://doi.org/10.1080/10350330500310111

Radiya-Dixit, E., & Neff, G. (2023). A Sociotechnical Audit: Assessing Police Use of Facial Recognition. FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1334–1346). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3593013.3594084

Schaake, M. (2024). The Tech Coup: How to Save Democracy from Silicon Valley. Princeton, NJ: Princeton University Press. https://doi.org/10.1515/9780691241180

Starosielski, N. (2015). The Undersea Network. Durham, US: Duke University Press. https://doi.org/10.1215/9780822376224

Trades Union Congress. (2024). The AI Bill Project, April 18. https://www.tuc.org.uk/research-analysis/reports/ai-bill-project (Accessed January 1, 2025).

UK Government. (2025). AI Opportunities Action Plan, January 13. https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan (Accessed January 14, 2025).

Washington Post Live. (2024). Transcript: The Futurist London: The New Age of AI. The Washington Post, October 9. https://www.washingtonpost.com/washington-post-live/2024/10/09/transcript-futurist-london-new-age-ai/ (Accessed January 1, 2025).

The White House. (2022). Blueprint for an AI Bill of Rights. https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (Accessed January 1, 2025).

The White House. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Briefing Room, October 30, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ (Accessed January 1, 2025).

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. http://www.jstor.org/stable/20024652