Does Explainability Require Transparency?
DOI: https://doi.org/10.6092/issn.1971-8853/15804

Keywords: Explainable AI, Transparency, Explanation, Communication, Sociological systems theory

Abstract
In dealing with opaque algorithms, the frequent overlap between transparency and explainability produces seemingly unsolvable dilemmas, such as the much-discussed trade-off between model performance and model transparency. Drawing on Niklas Luhmann's notion of communication, the paper argues that explainability does not necessarily require transparency and proposes an alternative approach. Explanations as communicative processes do not imply any disclosure of thoughts or neural processes; they only involve reformulations that provide the communication partners with additional elements and enable them to understand (from their own perspective) what has been done and why. Recent computational approaches aiming at post-hoc explainability reproduce what happens in communication: they produce explanations of the workings of algorithms that can differ from the internal processes of the algorithms themselves.
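The last point can be made concrete with a surrogate model, one of the post-hoc techniques surveyed by Guidotti et al. (2018) and exemplified by Frosst & Hinton (2017). The following is a minimal sketch, not taken from the paper, assuming Python with scikit-learn: an interpretable decision tree is fitted to the predictions of an opaque neural network, yielding a reformulation of the model's behaviour that is structurally different from the model's own processes.

```python
# Minimal sketch of post-hoc explanation via a surrogate model
# (illustrative only; assumes scikit-learn is installed).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Train an opaque "black box" model on some synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# Post-hoc step: fit a shallow, readable surrogate not on the true labels
# but on the black box's *outputs*. The surrogate reformulates the model's
# behaviour without disclosing the network's weights or internal processes.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the reformulation tracks the black box's behaviour.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"fidelity to black box: {fidelity:.2f}")

# A rule-like rendering a human partner can follow.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

Even at high fidelity, the printed rules describe what the network does, not how it does it: the explanation is a communicative artifact distinct from the explained process.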
References
Ananny, M., & Crawford, K. (2018). Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
Bateson, G. (1972). Steps to an Ecology of Mind. Chicago, IL: University of Chicago Press.
Beckers, A., & Teubner, G. (2021). Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds. Oxford: Hart. https://doi.org/10.5040/9781509949366
Bibal, A., Lognoul, M., de Streel, A., & Frénay, B. (2021). Legal Requirements on Explainability in Machine Learning. Artificial Intelligence and Law, 29(2), 149–169. https://doi.org/10.1007/s10506-020-09270-4
Bucher, T. (2018). If… Then: Algorithmic Power and Politics. Oxford: Oxford University Press.
Burrell, J. (2016). How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Buhrmester, V., Münch, D., & Arens, M. (2019). Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey. arXiv, 1911.12116. https://arxiv.org/abs/1911.12116
Busuioc, M. (2020). Accountable Artificial Intelligence: Holding Algorithms to Account. Public Administration Review, 81(5), 825–836. https://doi.org/10.1111/puar.13293
Cimiano, P., Rudolph, S., & Hartfiel, H. (2010). Computing Intensional Answers to Questions – An Inductive Logic Programming Approach. Data & Knowledge Engineering, 69(3), 261–278. https://doi.org/10.1016/j.datak.2009.10.008
Coeckelbergh, M. (2020). AI Ethics. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/12549.001.0001
Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. arXiv, 1702.08608. https://arxiv.org/abs/1702.08608
Eco, U. (1975). Trattato di semiotica generale. Milano: Bompiani.
Esposito, E. (2022). Artificial Communication. How Algorithms Produce Social Intelligence. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/14189.001.0001
European Commission (2020). White Paper on Artificial Intelligence – A European Approach to Excellence and Trust. European Commission. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0065&from=FR
European Data Protection Board (2017). Guidelines of the European Data Protection Board on Automated Individual Decision-making and Profiling. European Data Protection Board. https://ec.europa.eu/newsroom/article29/items/612053
European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
Frosst, N., & Hinton, G. (2017). Distilling a Neural Network Into a Soft Decision Tree. arXiv, 1711.09784. https://arxiv.org/abs/1711.09784
Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. arXiv, 1806.00069. https://arxiv.org/abs/1806.00069
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning (Adaptive Computation and Machine Learning series). Cambridge, MA: MIT Press.
Grice, H.P. (1975). Logic and Conversation. In P. Cole & J.L. Morgan (Eds.), Speech Acts (pp. 41–58). New York, NY: Academic Press. https://doi.org/10.1163/9789004368811_003
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009
Gunning, D. (2017). Explainable Artificial Intelligence (XAI) (Technical Report). Defense Advanced Research Projects Agency.
Heider, F. (1958). The Psychology of Interpersonal Relations. New York, NY: Wiley. https://doi.org/10.1037/10628-000
Hempel, C.G. (1966). Philosophy of Natural Science. Englewood Cliffs, NJ: Prentice-Hall.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511809477
Keenan, B., & Sokol, K. (2023). Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann’s Functional Theory of Communication. arXiv, 2302.03460. https://doi.org/10.48550/arXiv.2302.03460
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021). What Do We Want From Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research. arXiv, 2102.07817. https://doi.org/10.48550/arXiv.2102.07817
Latour, B. (1999). Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
Lipton, Z.C. (2018). The Mythos of Model Interpretability. Queue, 16(3), 31–57. https://doi.org/10.1145/3236386.3241340
Luhmann, N. (1995). Was ist Kommunikation? In Soziologische Aufklärung, Vol. 6 (pp. 109–120). Opladen: Westdeutscher Verlag.
Luhmann, N. (1997). Die Gesellschaft der Gesellschaft. Frankfurt am Main: Suhrkamp.
Malle, B.F. (1999). How People Explain Behavior: A New Theoretical Framework. Personality and Social Psychology Review, 3(1), 23–48. https://doi.org/10.1207/s15327957pspr0301_2
Mikalef, P., Conboy, K., Eriksson Lundström, J., & Popovič, A. (2022). Thinking Responsibly about Responsible AI and ‘The Dark Side’ of AI. European Journal of Information Systems, 31(3), 257–268. https://doi.org/10.1080/0960085X.2022.2026621
Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining Explanations in AI. In d. boyd & J. Morgenstern (Eds.), FAT* ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279–288). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3287560.3287574
Montavon, G., Samek, W., & Müller, K.-R. (2018). Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73, 1–15. https://doi.org/10.1016/j.dsp.2017.10.011
O’Hara, K. (2020). Explainable AI and the Philosophy and Practice of Explanation. Computer Law & Security Review, 39, 105474. https://doi.org/10.1016/j.clsr.2020.105474
Pasquale, F. (2015). The Black Box Society. The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press. https://doi.org/10.4159/harvard.9780674736061
Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. New York, NY: Basic Books.
Robbins, S. (2019). A Misdirected Principle with a Catch: Explicability for AI. Minds and Machines, 29, 495–514. https://doi.org/10.1007/s11023-019-09509-3
Rohlfing, K., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H.M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Häb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A.-C., Schulte, C., Wachsmuth, H., Wagner, P., & Wrede, B. (2021). Explanations as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Transactions on Cognitive and Developmental Systems, 13(3), 717–728. https://doi.org/10.1109/TCDS.2020.3044366
Roscher, R., Bohn, B., Duarte, M.F., & Garcke, J. (2020). Explainable Machine Learning for Scientific Insights and Discoveries. IEEE Access, 8, 42200–42216.
Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1, 206–215. https://doi.org/10.1038/s42256-019-0048-x
Shannon, C.E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.
Shmueli, G. (2010). To Explain or to Predict? Statistical Science, 25(3), 289–310. https://doi.org/10.1214/10-STS330
Theodorou, A., & Dignum, V. (2020). Towards Ethical and Socio-Legal Governance in AI. Nature Machine Intelligence, 2(1), 10–12. https://doi.org/10.1038/s42256-019-0136-y
Tilly, C. (2006). Why? Princeton, NJ: Princeton University Press.
Vilone, G., & Longo, L. (2021). Notions of Explainability and Evaluation Approaches for Explainable Artificial Intelligence. Information Fusion, 76, 89–106. https://doi.org/10.1016/j.inffus.2021.05.009
Von Hilgers, P. (2011). The History of the Black Box: The Clash of a Thing and Its Concept. Cultural Politics, 7(1), 41–58. https://doi.org/10.2752/175174311X12861940861707
Weinberger, D. (2018). 3 Principles for Solving the AI Dilemma: Optimization vs. Explanation. KDnuggets. https://www.kdnuggets.com/2018/02/3-principles-ai-dilemma-optimization-explanation.html
von Wright, G.H. (1971). Explanation and Understanding. Ithaca, NY: Cornell University Press.
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? Philosophy & Technology, 32, 661–683. https://doi.org/10.1007/s13347-018-0330-6
License
Copyright (c) 2022 Elena Esposito
This work is licensed under a Creative Commons Attribution 4.0 International License.