Mapping Value(s) in AI: Methodological Directions for Examining Normativity in Complex Technical Systems

Authors

  • Bernhard Rieder Department of Media Studies, University of Amsterdam https://orcid.org/0000-0002-2404-9277
  • Geoff Gordon Asser Institute https://orcid.org/0000-0002-7067-1964
  • Giovanni Sileno Informatics Institute, University of Amsterdam https://orcid.org/0000-0001-5155-9021

DOI:

https://doi.org/10.6092/issn.1971-8853/15910

Keywords:

Normativity, Machine learning, Empirical ethics, YouTube, Methodology

Abstract

This paper seeks to develop a multidisciplinary methodological framework and research agenda for studying the broad array of 'ideas', 'norms', or 'values' incorporated and mobilized in systems relying on AI components. We focus on recommender systems as a broad field of technical practice and take YouTube as an example of a concrete artifact that raises many social concerns. To situate the conceptual perspective and rationale informing our approach, we briefly discuss investigations into normativity in technology more broadly and refer to 'descriptive ethics' and 'ethigraphy' as two approaches concerned with the empirical study of values and norms. Drawing on science and technology studies, we argue that normativity cannot be reduced to ethics, but requires paying attention to a wider range of elements, including the performativity of material objects themselves. The method of 'encircling' is presented as a way to deal with both the secrecy surrounding many commercial systems and the socio-technical and distributed character of normativity more broadly. The resulting investigation draws on a series of approaches and methods to construct a much wider picture than any single discipline could produce on its own. The paper is then dedicated to developing this methodological framework, organized into three layers that demarcate specific avenues for conceptual reflection and empirical research, moving from the more general to the more concrete: ambient technical knowledge, local design conditions, and materialized values. We conclude by arguing that deontological approaches to normativity in AI need to take into account the many different ways norms and values are embedded in technical systems.

References

Agre, P.E. (1997a). Computation and Human Experience. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511571169

Agre, P.E. (1997b). Toward a Critical Technical Practice: Lessons Learned Trying to Reform AI. In G.C. Bowker, S.L. Star, W. Turner, & L. Gasser (Eds.), Social Science, Technical Systems, and Cooperative Work: Beyond the Great Divide (pp. 131–157). London: Psychology Press.

Ahmed, N., & Wahed, M. (2020). The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research. ArXiv, 2010.15581. http://arxiv.org/abs/2010.15581

Airoldi, M., Beraldo, D., & Gandini, A. (2016). Follow the Algorithm: An Exploratory Investigation of Music on YouTube. Poetics, 57, 1–13. https://doi.org/10.1016/j.poetic.2016.05.001

Alfano, M., Fard, A.E., Carter, J.A., Clutton, P., & Klein, C. (2020). Technologically Scaffolded Atypical Cognition: The Case of YouTube’s Recommender System. Synthese, 199. https://doi.org/10.1007/s11229-020-02724-x

Ananny, M., & Crawford, K. (2018). Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

Association for Computing Machinery (2017). Statement on Algorithmic Transparency and Accountability. ACM, 12 January. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf

Association for Computing Machinery (2018). ACM Code of Ethics and Professional Conduct. ACM, 22 June. https://www.acm.org/code-of-ethics

Barbrook, R., & Cameron, A. (1996). The Californian Ideology. Science as Culture, 6(1), 44–72. https://doi.org/10.1080/09505439609526455

Belle, V., & Papantonis, I. (2020). Principles and Practice of Explainable Machine Learning. ArXiv, 2009.11698. http://arxiv.org/abs/2009.11698

Bishop, S. (2019). Managing Visibility on YouTube through Algorithmic Gossip. New Media & Society, 21(11-12), 2589–2606. https://doi.org/10.1177/1461444819854731

Bonini, T., & Gandini, A. (2019). “First Week Is Editorial, Second Week Is Algorithmic”: Platform Gatekeepers and the Platformization of Music Curation. Social Media + Society, 5(4), 205630511988000. https://doi.org/10.1177/2056305119880006

Bosma, E. (2019). Multi-sited Ethnography of Digital Security Technologies. In M. de Goede, E. Bosma, & P. Pallister-Wilkins (Eds.), Secrecy and Methods in Security Research: A Guide to Qualitative Fieldwork (pp. 193–212). London: Routledge. https://doi.org/10.4324/9780429398186

Bratton, B.H. (2015). The Stack: On Software and Sovereignty. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/9780262029575.001.0001

Bucher, T. (2018). If… Then: Algorithmic Power and Politics. New York, NY: Oxford University Press.

Bucher, T., & Helmond, A. (2018). The Affordances of Social Media Platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The Sage Handbook of Social Media (pp. 233–253). London: Sage. https://doi.org/10.4135/9781473984066.n14

Burrell, J. (2016). How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512

Caplan, R., & Gillespie, T. (2020). Tiered Governance and Demonetization: The Shifting Terms of Labor and Compensation in the Platform Economy. Social Media + Society, 6(2), 1–13. https://doi.org/10.1177/2056305120936636

Christin, A. (2020). The Ethnographer and the Algorithm: Beyond the Black Box. Theory and Society, 49(5–6), 897–918. https://doi.org/10.1007/s11186-020-09411-3

Cohen, J.E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. New York, NY: Oxford University Press. https://doi.org/10.1093/oso/9780190246693.001.0001

Cooper, P. (2021). How the YouTube Algorithm Works in 2023: The Complete Guide. Hootsuite, 21 June. https://blog.hootsuite.com/how-the-youtube-algorithm-works/

Covington, P., Adams, J., & Sargin, E. (2016). Deep Neural Networks for YouTube Recommendations. In S. Sen & W. Geyer (Eds.), RecSys ’16: Proceedings of the 10th ACM Conference on Recommender Systems (pp. 191–198). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/2959100.2959190

Davidson, J., Livingston, B., Sampath, D., Liebald, B., Liu, J., Nandy, P., Van Vleet, T., Gargi, U., Gupta, S., He, Y., & Lambert, M. (2010). The YouTube Video Recommendation System. RecSys ’10: Proceedings of the Fourth ACM Conference on Recommender Systems (pp. 293–296). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/1864708.1864770

de Goede, M., Bosma, E., & Pallister-Wilkins, P. (Eds.). (2019). Secrecy and Methods in Security Research: A Guide to Qualitative Fieldwork. London: Routledge. https://doi.org/10.4324/9780429398186

Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic Investigation of Computational Power Structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411

Dourish, P. (2016). Algorithms and Their Others: Algorithmic Culture in Context. Big Data & Society, 3(2), 1–11. https://doi.org/10.1177/2053951716665128

Ekstrand, M.D. (2020). LensKit for Python: Next-Generation Software for Recommender System Experiments. CIKM ’20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 2999–3006). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3340531.3412778

Eriksson, M., Fleischer, R., Johansson, A., Snickars, P., & Vonderau, P. (2019). Spotify Teardown: Inside the Black Box of Streaming Music. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/10932.001.0001

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press.

European Commission (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

European Union (2020). Digital Services Act. https://eur-lex.europa.eu/legal-content/en/TXT/?uri=COM%3A2020%3A825%3AFIN

Facebook (n.a.). What Are Recommendations on Facebook? Facebook. https://www.facebook.com/help/1257205004624246

Foucault, M. (1969). L’archéologie du savoir. Paris: Gallimard.

Foucault, M. (1990). The Use of Pleasure. Volume 2 of The History of Sexuality (R. Hurley, Trans.). New York, NY: Vintage Books. (Original work published in 1984)

Friedler, S.A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E.P., & Roth, D. (2019). A Comparative Study of Fairness-Enhancing Interventions in Machine Learning. ArXiv, 1802.04422. https://arxiv.org/abs/1802.04422

Gert, B. (2004). Common Morality: Deciding What To Do. New York, NY: Oxford University Press. https://doi.org/10.1093/0195173716.001.0001

Gibson, J.J. (1986). The Ecological Approach to Visual Perception. London: Psychology Press.

Gillespie, T. (2010). The Politics of ‘Platforms.’ New Media & Society, 12(3), 347–364. https://doi.org/10.1177/1461444809342738

Gunawardana, A., & Shani, G. (2015). Evaluating Recommender Systems. In F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender Systems Handbook (2nd ed., pp. 265–308). New York, NY: Springer. https://doi.org/10.1007/978-1-4899-7637-6_8

Google (n.a.). Creator Discovery Handbook. Suggested Videos on the Watch Page. Google. https://web.archive.org/web/20150329041618/https://support.google.com/youtube/answer/6060859?hl=en&ref_topic=6046759

Google (n.a.). Frequently Asked Questions about “Made for Kids”. Google. https://support.google.com/youtube/answer/9684541?hl=en#zippy=%2Chow-will-recommendations-work-for-made-for-kids-or-not-made-for-kids-content-will-the-discovery-of-my-videos-be-affected

Hallinan, B., Scharlach, R., & Shifman, L. (2022). Beyond Neutrality: Conceptualizing Platform Values. Communication Theory, 32(2), 201–222. https://doi.org/10.1093/ct/qtab008

Hämäläinen, N. (2016). Descriptive Ethics. New York, NY: Palgrave Macmillan. https://doi.org/10.1057/978-1-137-58617-9

Herlocker, J.L., Konstan, J.A., Terveen, L.G., & Riedl, J.T. (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), 5–53. https://doi.org/10.1145/963770.963772

Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/1881.001.0001

Institute of Electrical and Electronics Engineers (n.a.). IEEE Code of Ethics. IEEE. https://www.ieee.org/about/corporate/governance/p7-8.html

Ingram, M. (2020). The YouTube ‘Radicalization Engine’ Debate Continues. Columbia Journalism Review, 9 January. https://www.cjr.org/the_media_today/youtube-radicalization.php

Jannach, D., & Adomavicius, G. (2016). Recommendations with a Purpose. RecSys ’16: Proceedings of the 10th ACM Conference on Recommender Systems (pp. 7–10). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/2959100.2959186

Jobin, A., Ienca, M., & Vayena, E. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Karlgren, J. (1990). An Algebra for Recommendations (Working Paper No. 179). Department of Computer and Systems Sciences. KTH Royal Institute of Technology and Stockholm University. http://www.lingvi.st/papers/karlgren-algebra-for-recommendations-1990.pdf

Khan, L. (2018). The New Brandeis Movement: America’s Antimonopoly Debate. Journal of European Competition Law & Practice, 9(3), 131–132. https://doi.org/10.1093/jeclap/lpy020

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent Trade-offs in the Fair Determination of Risk Scores. ArXiv, 1609.05807. http://arxiv.org/abs/1609.05807

Klonick, K. (2018). The New Governors: The People, Rules, and Processes Governing Online Speech. Harvard Law Review, 131(6), 1598–1670.

Klonick, K. (2021). Inside the Making of Facebook’s Supreme Court. The New Yorker, 12 February. https://www.newyorker.com/tech/annals-of-technology/inside-the-making-of-facebooks-supreme-court

Knobel, C., & Bowker, G.C. (2011). Values In Design. Communications of the ACM, 54(7), 26–28. https://doi.org/10.1145/1965724.1965735

Krug, S. (2014). Don’t Make Me Think, Revisited: A Common Sense Approach to Web Usability. San Francisco, CA: New Riders.

Kumar, S. (2019). The Algorithmic Dance: YouTube’s Adpocalypse and the Gatekeeping of Cultural Content on Digital Platforms. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1417

Latour, B. (1992). Aramis ou L’amour des Techniques. Paris: La Découverte.

Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. New York, NY: Oxford University Press.

Li, H.O.-Y., Bailey, A., Huynh, D., & Chan, J. (2020). YouTube as a Source of Information on COVID-19: A Pandemic of Misinformation? BMJ Global Health, 5(5), 1–6. https://doi.org/10.1136/bmjgh-2020-002604

Light, B., Burgess, J., & Duguay, S. (2018). The Walkthrough Method: An Approach to the Study of Apps. New Media & Society, 20(3), 881–900. https://doi.org/10.1177/1461444816675438

Lynch, M. (2001). The Epistemology of Epistopics: Science and Technology Studies as an Emergent (Non)Discipline. American Sociological Association, Science, Knowledge & Technology Section (ASA-SKAT) Newsletter, Fall, 2–3. https://asaskat.com/newsletters/

Mackenzie, A. (2017). Machine Learners: Archaeology of a Data Practice. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/10302.001.0001

Marcus, G.E. (1995). Ethnography in/of the World System: The Emergence of Multi-Sited Ethnography. Annual Review of Anthropology, 24(1), 95–117. https://doi.org/10.1146/annurev.an.24.100195.000523

Matamoros-Fernández, A., Gray, J.E., Bartolo, L., Burgess, J., & Suzor, N. (2021). What’s “Up Next”? Investigating Algorithmic Recommendations on YouTube Across Issues and Over Time. Media and Communication, 9(4), 234–249. https://doi.org/10.17645/mac.v9i4.4184

Milano, S., Taddeo, M., & Floridi, L. (2019). Recommender Systems and their Ethical Challenges. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3378581

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5

Mudigere, D., Hao, Y., Huang, J., Jia, Z., Tulloch, A., Sridharan, S., Liu, X., Ozdal, M., Nie, J., Park, J., Luo, L., Yang, J.A., Gao, L., Ivchenko, D., Basant, A., Hu, Y., Yang, J., Ardestani, E.K., Wang, X., … Rao, V. (2021). High-performance, Distributed Training of Large-scale Deep Learning Recommendation Models. ArXiv, 2104.05158. http://arxiv.org/abs/2104.05158

Nissenbaum, H. (1998). Values in the Design of Computer Systems. Computers in Society, March, 38–39.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown.

Pasquale, F. (2015). The Black box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press. https://doi.org/10.4159/harvard.9780674736061

Pasquinelli, M. (2019). How a Machine Learns and Fails: A Grammar of Error for Artificial Intelligence. Spheres, 5, 1–17.

Postigo, H. (2016). The Socio-Technical Architecture of Digital Labor: Converting Play into YouTube Money. New Media & Society, 18(2), 332–349. https://doi.org/10.1177/1461444814541527

Powles, J., & Nissenbaum, H. (2018). The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence. OneZero, 7 December. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J.W., Christakis, N.A., Couzin, I.D., Jackson, M.O., Jennings, N.R., Kamar, E., Kloumann, I.M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D.C., Pentland, A. “Sandy”, Roberts, M.E., Shariff, A., Tenenbaum, J.B., & Wellman, M. (2019). Machine Behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y

Ramsay, S. (2014). The Hermeneutics of Screwing Around. In K. Kee (Ed.), Pastplay: Teaching and Learning History with Technology (pp. 111–120). Ann Arbor, MI: University of Michigan Press. https://doi.org/10.2307/j.ctv65swr0

Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., & Riedl, J. (1994). GroupLens: An Open Architecture for Collaborative Filtering of Netnews. CSCW ’94: Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work (pp. 175–186). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/192844.192905

Resnick, P., & Varian, H.R. (1997). Recommender Systems. Communications of the ACM, 40(3), 56–58. https://doi.org/10.1145/245108.245121

Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., & Meira, W. (2019). Auditing Radicalization Pathways on YouTube. ArXiv, 1908.08313. http://arxiv.org/abs/1908.08313

Ricci, F., Rokach, L., & Shapira, B. (Eds.). (2015). Recommender Systems Handbook (2nd ed.). New York, NY: Springer. https://doi.org/10.1007/978-1-4899-7637-6

Rich, E. (1983). Users are Individuals: Individualizing User Models. International Journal of Man-Machine Studies, 18, 199–214. https://doi.org/10.1016/S0020-7373(83)80007-8

Rieder, B. (2020). Engines of Order: A Mechanology of Algorithmic Techniques. Amsterdam: Amsterdam University Press. https://doi.org/10.5117/9789462986190

Rieder, B., & Hofmann, J. (2020). Towards Platform Observability. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1535

Rieder, B., Matamoros-Fernández, A., & Coromina, Ò. (2018). From Ranking Algorithms to ‘Ranking Cultures’: Investigating the Modulation of Visibility in YouTube Search Results. Convergence, 24(1), 50–68. https://doi.org/10.1177/1354856517736982

Rieder, B., Sileno, G., & Gordon, G. (2021). A New AI Lexicon: Monopolization. Concentrated Power and Economic Embeddings in ML & AI. AI Now Institute. 1 October. https://medium.com/a-new-ai-lexicon/a-new-ai-lexicon-monopolization-c43f136981ab

Samuelson, P.A. (1938). A Note on the Pure Theory of Consumer’s Behaviour. Economica, 5(17), 61–71. https://doi.org/10.2307/2548836

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms. Paper presented to “Data and Discrimination: Converting Critical Concerns into Productive Inquiry”, a preconference at the 64th Annual Meeting of the International Communication Association, 22 May, Seattle, WA. https://social.cs.uiuc.edu/papers/pdfs/ICA2014-Sandvig.pdf

Seaver, N. (2017). Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems. Big Data & Society, 4(2), 1–12. https://doi.org/10.1177/2053951717738104

Seaver, N. (2019). Captivating Algorithms: Recommender Systems as Traps. Journal of Material Culture, 24(4), 421–436. https://doi.org/10.1177/1359183518820366

Siles, I., Segura-Castillo, A., Solís, R., & Sancho, M. (2020). Folk Theories of Algorithmic Recommendations on Spotify: Enacting Data Assemblages in the Global South. Big Data & Society, 7(1), 1–15. https://doi.org/10.1177/2053951720923377

Simondon, G. (1958). Du mode d’existence des objets techniques. Paris: Aubier.

Solsman, J.E. (2018). CES 2018: YouTube’s AI Recommendations Drive 70 Percent of Viewing. CNET, 10 January. https://www.cnet.com/news/youtube-ces-2018-neal-mohan/

Stanfill, M. (2015). The Interface as Discourse: The Production of Norms Through Web Design. New Media & Society, 17(7), 1059–1074. https://doi.org/10.1177/1461444814520873

Statt, N. (2020). YouTube is a $15 billion-a-year business, Google reveals for the first time. The Verge, 3 February. https://www.theverge.com/2020/2/3/21121207/youtube-google-alphabet-earnings-revenue-first-time-reveal-q4-2019

Straube, T. (2019). The Black Box and its Dis/Contents: Complications in Algorithmic Devices Research. In M. de Goede, E. Bosma, & P. Pallister-Wilkins (Eds.), Secrecy and Methods in Security Research: A Guide to Qualitative Fieldwork (pp. 175–192). London: Routledge. https://doi.org/10.4324/9780429398186

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society: Public Values in a Connective World. New York, NY: Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Van Veeren, E. (2018). Invisibility. In R. Bleiker (Ed.), Visual Global Politics (pp. 196–200). London: Routledge. https://doi.org/10.4324/9781315856506-29

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136.

Wodak, R., & Meyer, M. (Eds.). (2016). Methods of Critical Discourse Studies (3rd ed.). London: SAGE.

Yesilada, M., & Lewandowsky, S. (2022). Systematic Review: YouTube Recommendations and Problematic Content. Internet Policy Review, 11(1), 1–22. https://doi.org/10.14763/2022.1.1652

YouTube (2019a). Continuing our Work to Improve Recommendations on YouTube. YouTube Official Blog, 25 January. https://blog.youtube/news-and-events/continuing-our-work-to-improve

YouTube (2019b). Our Ongoing Work to Tackle Hate. YouTube Official Blog, 5 June. https://blog.youtube/news-and-events/our-ongoing-work-to-tackle-hate/

Zhao, Z., Hong, L., Wei, L., Chen, J., Nath, A., Andrews, S., Kumthekar, A., Sathiamoorthy, M., Yi, X., & Chi, E. (2019). Recommending What Video to Watch Next: A Multitask Ranking System. RecSys ’19: Proceedings of the 13th ACM Conference on Recommender Systems (pp. 43–51). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3298689.3346997

Ziewitz, M. (2019). Rethinking Gaming: The Ethical Work of Optimization in Web Search Engines. Social Studies of Science, 49(5), 707–731. https://doi.org/10.1177/0306312719865607

Zuboff, S. (2018). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York, NY: PublicAffairs.

Published

2023-03-15

How to Cite

Rieder, B., Gordon, G., & Sileno, G. (2022). Mapping Value(s) in AI: Methodological Directions for Examining Normativity in Complex Technical Systems. Sociologica, 16(3), 51–83. https://doi.org/10.6092/issn.1971-8853/15910

Section

Symposium