Hildebrandt publishes on Qualification and Quantification in Machine Learning
Hildebrandt publishes ‘Qualification and Quantification in Machine Learning. From Explanation to Explication’ in Sociologica (2022).
This publication develops the idea that quantification depends on a prior qualification, in machine learning as elsewhere. The paper proposes that explanations of machine learning should engage with the decision on how to qualify certain data (e.g. as ground truth, or as a relevant variable), rather than trying to map the internal operations of a neural net. Finally, the paper suggests using machine learning to detect where the quantification fails, calling for qualitative research to ‘explicate’ the outliers. Instead of being disciplined, discarded or ignored, outliers should be treated as the more interesting cases, where more in-depth research is required.
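To make the last point concrete, a minimal sketch of what such a workflow could look like in practice is given below. It is purely illustrative and not from the paper: the dataset, the model, and the 0.6 confidence threshold are all assumptions. The idea is simply that a model’s own quantification can flag individual cases it handles poorly, so that these can be handed over to situated, qualitative inquiry rather than discarded as noise.

```python
# Illustrative sketch only (not from the paper): use a model's own uncertainty
# to flag cases that resist quantification and deserve qualitative 'explication'.
# Dataset, model, and the 0.6 confidence threshold are assumptions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Confidence of the predicted class for each test case.
proba = model.predict_proba(X_test)
confidence = proba.max(axis=1)

# Flag cases the quantification handles poorly: low-confidence predictions
# and outright misclassifications. These are candidates for in-depth
# qualitative research, not for discarding.
low_confidence = confidence < 0.6          # assumed threshold
misclassified = model.predict(X_test) != y_test
flagged = np.where(low_confidence | misclassified)[0]

print(f"{len(flagged)} of {len(y_test)} test cases flagged for qualitative review")
```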
Abstract
Moving beyond the conundrum of explanation, usually portrayed as a trade-off against accuracy, this article traces the recent emergence of explainable AI to the legal “right to an explanation”, situating the need for an explanation in the underlying rule of law principle of contestability. Instead of going down the rabbit hole of causal or logical explanations, the article then revisits the Methodenstreit, whose outcome has resulted in the quantifiability of anything and everything, thus hiding the qualification that necessarily precedes any and all quantification. Finally, the paper proposes to use the quantification that is inherent in machine learning to identify individual decisions that resist quantification and require situated inquiry and qualitative research. For this, the paper explores Clifford Geertz’s notion of explication as a conceptual tool focused on discernment and judgment rather than calculation and reckoning.