Qualification and Quantification in Machine Learning. From Explanation to Explication

Authors

  • Mireille Hildebrandt Law Faculty, Vrije Universiteit Brussel; Science Faculty, Radboud University, Netherlands. https://orcid.org/0000-0003-4558-9149

DOI:

https://doi.org/10.6092/issn.1971-8853/15845

Keywords:

GDPR, Right to an explanation, Explainable machine learning, Methodenstreit, Qualculation, Proxies, Explication

Abstract

Moving beyond the conundrum of explanation, usually portrayed as a trade-off against accuracy, this article traces the recent emergence of explainable AI to the legal “right to an explanation”, situating the need for an explanation in the underlying rule of law principle of contestability. Instead of going down the rabbit hole of causal or logical explanations, the article then revisits the Methodenstreit, which resulted in the quantifiability of anything and everything, thus hiding the qualification that necessarily precedes any and all quantification. Finally, the article proposes to use the quantification that is inherent in machine learning to identify individual decisions that resist quantification and require situated inquiry and qualitative research. To that end, it explores Clifford Geertz’s notion of explication as a conceptual tool focused on discernment and judgment rather than calculation and reckoning.

References

Bayamlıoğlu, E. (2022). The Right to Contest Automated Decisions under the General Data Protection Regulation: Beyond the so-Called “Right to Explanation”. Regulation & Governance, 16(4), 1058–1078. https://doi.org/10.1111/rego.12391

Bygrave, L. (2001). Minding the Machine. Art. 15 of the EC Data Protection Directive and Automated Profiling. Computer Law & Security Report, 17(1), 17–24. https://doi.org/10.1016/S0267-3649(01)00104-2

Cabitza, F., Ciucci, D., & Rasoini, R. (2017). A Giant with Feet of Clay: On the Validity of the Data That Feed Machine Learning in Medicine. arXiv, 1706.06838. https://doi.org/10.48550/arXiv.1706.06838

Callon, M., & Law, J. (2005). On Qualculation, Agency, and Otherness. Environment and Planning D: Society and Space, 23(5), 717–733. https://doi.org/10.1068/d343t

Campagner, A., Ciucci, D., Svensson, C., Figge, M.T., & Cabitza, F. (2021). Ground Truthing from Multi-Rater Labeling with Three-Way Decision and Possibility Theory. Information Sciences, 545(4), 771–790. https://doi.org/10.1016/j.ins.2020.09.049

European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

Frankfurt, H.G. (2005). On Bullshit. Princeton, NJ: Princeton University Press. https://doi.org/10.1515/9781400826537

Frankfurt, H.G. (2010). On Truth. New York, NY: Vintage Books.

Geertz, C. (2010). The Interpretation of Cultures: Selected Essays. New York, NY: Basic Books.

Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., Kieseberg, P., & Holzinger, A. (2018). Explainable AI: The New 42? In A. Holzinger, P. Kieseberg, A.M. Tjoa, & E. Weippl (Eds.), Machine Learning and Knowledge Extraction (pp. 295–303). Cham: Springer. https://doi.org/10.1007/978-3-319-99740-7

Goodman, B., & Flaxman, S. (2016). European Union Regulations on Algorithmic Decision-making and a “Right to Explanation”. arXiv, 1606.08813. https://doi.org/10.48550/arXiv.1606.08813

Goodman, B., & Flaxman, S. (2017). European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741

Hildebrandt, M. (2012). The Dawn of a Critical Transparency Right for the Profiling Era. In J. Bus, M. Crompton, M. Hildebrandt, & G. Metakides (Eds.), Digital Enlightenment Yearbook 2012 (pp. 41–56). Amsterdam: IOS Press. https://doi.org/10.3233/978-1-61499-057-4-41

Hildebrandt, M. (2017). Learning as a Machine. Crossovers between Humans and Machines. Journal of Learning Analytics, 4(1), 6–23. https://doi.org/10.18608/jla.2017.41.3

Hildebrandt, M. (2022). The Issue of Proxies and Choice Architectures. Why EU Law Matters for Recommender Systems. Frontiers in Artificial Intelligence, 5. https://doi.org/10.3389/frai.2022.789076

Hooker, S., Moorosi, N., Clark, G., Bengio, S., & Denton, E. (2020). Characterising Bias in Compressed Models. arXiv, 2010.03058. https://doi.org/10.48550/arXiv.2010.03058

Jasanoff, S. (1995). Science at the Bar. Law, Science, and Technology in America. Cambridge, MA: Harvard University Press. https://doi.org/10.4159/9780674039124

Kaminski, M.E. (2019). The Right to Explanation, Explained. Berkeley Technology Law Journal, 34(1). https://doi.org/10.2139/ssrn.3196985

Kaminski, M.E., & Urban, J.M. (2021). The Right to Contest AI. Columbia Law Review, 121(7), 1957–2048. https://www.jstor.org/stable/27083420

Kapoor, S., & Narayanan, A. (2022). Leakage and the Reproducibility Crisis in ML-based Science. arXiv, 2207.07048. https://doi.org/10.48550/arXiv.2207.07048

Kofman, A. (2018). Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science. The New York Times, 25 October. https://www.nytimes.com/2018/10/25/magazine/bruno-latour-post-truth-philosopher-science.html

Lakoff, G., & Johnson, M. (2003). Metaphors We Live By (2nd ed.). Chicago, IL: University of Chicago Press. https://doi.org/10.7208/chicago/9780226470993.001.0001

Lucy, W. (2014). The Rule of Law and Private Law. In L.M. Austin & D. Klimchuk (Eds.), Private Law and the Rule of Law (pp. 41–66). Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198729327.003.0003

Medvedeva, M., Wieling, M., & Vols, M. (2022). Rethinking the Field of Automatic Prediction of Court Decisions. Artificial Intelligence and Law, 31(1), 195–212. https://doi.org/10.1007/s10506-021-09306-3

Mulvin, D. (2021). Proxies: The Cultural Work of Standing in. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/11765.001.0001

Munk, A.K., Olesen, A.G., & Jacomy, M. (2022). The Thick Machine: Anthropological AI between Explanation and Explication. Big Data & Society, 9(1). https://doi.org/10.1177/20539517211069891

Paullada, A., Raji, I.D., Bender, E.M., Denton, E., & Hanna, A. (2021). Data and Its (Dis)Contents: A Survey of Dataset Development and Use in Machine Learning Research. Patterns, 2(11), 100336. https://doi.org/10.1016/j.patter.2021.100336

Ploug, T., & Holm, S. (2020). The Four Dimensions of Contestable AI Diagnostics – a Patient-Centric Approach to Explainable AI. Artificial Intelligence in Medicine, 107, 101901. https://doi.org/10.1016/j.artmed.2020.101901

Ricoeur, P. (1975). La métaphore vive. Paris: Éditions du Seuil.

Smith, B.C. (2019). The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/12385.001.0001

Stadler, F. (2020). From Methodenstreit to the “Science Wars” – An Overview on Methodological Disputes between the Natural, Social, and Cultural Sciences. In M. Będkowski, A. Brożek, A. Chybińska, S. Ivanyk, & D. Traczykowski (Eds.), Formal and Informal Methods in Philosophy (pp. 77–100). Leiden: Brill. https://doi.org/10.1163/9789004420502_006

Thaler, R.H., & Sunstein, C.R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–887. https://doi.org/10.2139/ssrn.3063289

Published

2023-03-15

How to Cite

Hildebrandt, M. (2022). Qualification and Quantification in Machine Learning. From Explanation to Explication. Sociologica, 16(3), 37–49. https://doi.org/10.6092/issn.1971-8853/15845

Section

Symposium