Mapping Value(s) in AI: Methodological Directions for Examining Normativity in Complex Technical Systems


  • Bernhard Rieder, Department of Media Studies, University of Amsterdam
  • Geoff Gordon, Asser Institute
  • Giovanni Sileno, Informatics Institute, University of Amsterdam



Keywords: Normativity, Machine learning, Empirical ethics, YouTube, Methodology


This paper develops a multidisciplinary methodological framework and research agenda for studying the broad array of 'ideas', 'norms', or 'values' incorporated and mobilized in systems relying on AI components. We focus on recommender systems as a broader field of technical practice and take YouTube as an example of a concrete artifact that raises many social concerns. To situate the conceptual perspective and rationale informing our approach, we briefly discuss investigations into normativity in technology more broadly and refer to 'descriptive ethics' and 'ethigraphy' as two approaches concerned with the empirical study of values and norms. Drawing on science and technology studies, we argue that normativity cannot be reduced to ethics but requires attention to a wider range of elements, including the performativity of material objects themselves. The method of 'encircling' is presented as a way to deal with both the secrecy surrounding many commercial systems and the socio-technical, distributed character of normativity more broadly. The resulting investigation draws on a series of approaches and methods to construct a much wider picture than any single discipline could produce on its own. The bulk of the paper then develops this methodological framework, organized into three layers that demarcate specific avenues for conceptual reflection and empirical research, moving from the more general to the more concrete: ambient technical knowledge, local design conditions, and materialized values. We conclude by arguing that deontological approaches to normativity in AI need to take into account the many different ways norms and values are embedded in technical systems.



How to Cite

Rieder, B., Gordon, G., & Sileno, G. (2022). Mapping Value(s) in AI: Methodological Directions for Examining Normativity in Complex Technical Systems. Sociologica, 16(3), 51–83.