Does Explainability Require Transparency?


  • Elena Esposito Department of Political and Social Sciences, University of Bologna; Faculty of Sociology, Bielefeld University



Explainable AI, Transparency, Explanation, Communication, Sociological systems theory


Dealing with opaque algorithms, the frequent overlap between transparency and explainability produces seemingly unsolvable dilemmas, such as the much-discussed trade-off between model performance and model transparency. Drawing on Niklas Luhmann's notion of communication, the paper argues that explainability does not necessarily require transparency and proposes an alternative approach. Explanations as communicative processes do not imply any disclosure of thoughts or neural processes; they only require reformulations that provide the partners with additional elements and enable them to understand (from their perspective) what has been done and why. Recent computational approaches aiming at post-hoc explainability reproduce what happens in communication, producing explanations of the working of algorithms that can differ from the processes of the algorithms themselves.


Ananny, M., & Crawford, K. (2018). Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability. New Media & Society, 20(3), 973–989.

Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI. Information Fusion, 58, 82–115.

Bateson, G. (1972). Steps to an Ecology of Mind. Chicago, IL: University of Chicago Press.

Beckers, A., & Teubner, G. (2021). Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds. Oxford: Hart.

Bibal, A., Lognoul, M., de Streel, A., & Frénay, B. (2021). Legal Requirements on Explainability in Machine Learning. Artificial Intelligence and Law, 29(2), 149–169.

Bucher, T. (2018). If… Then: Algorithmic Power and Politics. Oxford: Oxford University Press.

Buhrmester, V., Münch, D., & Arens, M. (2019). Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey. arXiv, 1911.12116.

Burrell, J. (2016). How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, 3(1).

Busuioc, M. (2020). Accountable Artificial Intelligence: Holding Algorithms to Account. Public Administration Review, 81(5), 825–836.

Cimiano, P., Rudolph, S. & Hartfiel, H. (2010). Computing Intensional Answers to Questions – An Inductive Logic Programming Approach. Data & Knowledge Engineering, 69(3), 261–278.

Coeckelbergh, M. (2020). AI Ethics. Cambridge, MA: MIT Press.

Doshi-Velez, F., & Kim, B. (2017). Towards a Rigorous Science of Interpretable Machine Learning. arXiv, 1702.08608v2.

Eco, U. (1975). Trattato di semiotica generale. Milano: Bompiani.

Esposito, E. (2022). Artificial Communication. How Algorithms Produce Social Intelligence. Cambridge, MA: MIT Press.

European Commission (2020). White Paper on Artificial Intelligence – A European approach to Excellence and Trust. European Commission.

European Data Protection Board (2017). Guidelines of the European Data Protection Board on Automated Individual Decision-making and Profiling. European Data Protection Board.

European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

Frosst, N. & Hinton, G. (2017). Distilling a Neural Network Into a Soft Decision Tree. arXiv, 1711.09784.

Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. arXiv, 1806.00069.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning (Adaptive Computation and Machine Learning series). Cambridge, MA: MIT Press.

Grice, H.P. (1975). Logic and Conversation. In P. Cole & J.L. Morgan (Eds.), Syntax and Semantics, Vol. 3: Speech Acts (pp. 41–58). New York, NY: Academic Press.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51(5), 1–42.

Gunning, D. (2017). Explainable Artificial Intelligence (XAI) (Technical Report). Defense Advanced Research Projects Agency.

Heider, F. (1958). The Psychology of Interpersonal Relations. New York, NY: Wiley.

Hempel, C.G. (1966). Philosophy of Natural Science. Englewood Cliffs, NJ: Prentice-Hall.

Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Keenan, B., & Sokol, K. (2023). Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann’s Functional Theory of Communication. arXiv, 2302.03460.

Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021). What Do We Want From Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research. arXiv, 2102.07817v1.

Latour, B. (1999). Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436–444.

Lipton, Z.C. (2018). The Mythos of Model Interpretability. ACM Queue, 16(3), 31–57.

Luhmann, N. (1995). Was ist Kommunikation?. In Soziologische Aufklärung, Vol. 6 (pp. 109–120). Opladen: Westdeutscher.

Luhmann, N. (1997). Die Gesellschaft der Gesellschaft. Frankfurt am Main: Suhrkamp.

Malle, B.F. (1999). How People Explain Behavior: A New Theoretical Framework. Personality and Social Psychology Review, 3(1), 23–48.

Mikalef, P., Conboy, K., Eriksson Lundström, J., & Popovič, A. (2022). Thinking Responsibly about Responsible AI and ‘The Dark Side’ of AI. European Journal of Information Systems, 31(3), 257–268.

Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1–38.

Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining Explanations in AI. In d. boyd & J. Morgenstern (Eds.), FAT* ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279–288). New York, NY: Association for Computing Machinery.

Montavon, G., Samek, W., & Müller, K. (2018). Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73, 1–15.

O’Hara, K. (2020). Explainable AI and the Philosophy and Practice of Explanation. Computer Law & Security Review, 39.

Pasquale, F. (2015). The Black Box Society. The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. New York, NY: Basic Books.

Robbins, S. (2019). A Misdirected Principle with a Catch: Explicability for AI. Minds and Machines, 29, 495–514.

Rohlfing, K., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H.M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Häb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A.-C., Schulte, C., Wachsmuth, H., Wagner, P., Wrede, B. (2021). Explanations as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Transactions on Cognitive and Developmental Systems, 13(3), 717–728.

Roscher, R., Bohn, B., Duarte, M.F., & Garcke, J. (2020). Explainable Machine Learning for Scientific Insights and Discoveries. IEEE Access, 8, 42200–42216.

Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1, 206–215.

Shannon, C.E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Shmueli, G. (2010). To Explain or to Predict? Statistical Science, 25(3), 289–310.

Theodorou, A., & Dignum, V. (2020). Towards Ethical and Socio-Legal Governance in AI. Nature Machine Intelligence, 2(1), 10–12.

Tilly, C. (2006). Why?. Princeton, NJ: Princeton University Press.

Vilone, G., & Longo, L. (2021). Notions of Explainability and Evaluation Approaches for Explainable Artificial Intelligence. Information Fusion, 76, 89–106.

Von Hilgers, P. (2011). The History of the Black Box: The Clash of a Thing and Its Concept. Cultural Politics, 7(1), 41–58.

Weinberger, D. (2018). 3 Principles for Solving AI Dilemma: Optimization vs Explanation. KDnuggets.

von Wright, G.H. (1971). Explanation and Understanding. Ithaca, NY: Cornell University Press.

Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? Philosophy & Technology, 32, 661–683.




How to Cite

Esposito, E. (2022). Does Explainability Require Transparency? Sociologica, 16(3), 17–27.