The Question of Interpretation: An ethical dilemma

by Ayush Somani (PhD Fellow, UiT Tromsø)

Let me start with a famous quote that is both relevant and encouraging for my work.

If you can’t explain it simply, you don’t understand it well enough.

- Albert Einstein

Interestingly, there is no universally accepted mathematical, formal, or technical definition of interpretability. In 1995, Russell and Norvig [7] examined four practices that historically defined the field of AI: (i) thinking humanly, (ii) reasoning logically, (iii) functioning humanly, and (iv) acting rationally. Montavon et al. [5] provided a general and widely accepted definition in 2018: “An interpretation is the transformation of an abstract concept into a domain that humans can comprehend.” Two segments of this definition stand out: ‘concept’ and ‘comprehend.’ In 2019, Miller [4] offered a commonly used (non-mathematical) definition of interpretability: “The degree to which an observer can understand the reason for a decision is referred to as interpretability.” Note the three key segments of this definition: ‘understand’, ‘reason’, and ‘decision’. These definitions bring us to the fundamental umbrellas under which most interpretation strategies in the literature fall, and they offer a reasonable basis for comparing and criticizing explanations from various domains, including human-computer interaction, law and regulation, and the social sciences, but excluding computer science. Figure 1 shows the (non-exhaustive) publishing trends for Deep Learning, Interpretable Deep Learning, and Ethical Deep Learning on the Web of Science over the past 30 years, showing a rising demand for interpretable results as systems become more computationally intensive.

Fig. 1: The authorship report of the last 30 years is derived from Web of Science’s survey using key phrases (a) ‘Deep Learning’ (blue) (b) ‘Interpretable Deep Learning’ or ‘Explainable Deep Learning’ (orange), and (c) ‘Deep Learning Ethics’ or ‘Ethical Deep Learning’ (gray).

A formal definition remains elusive, so we look to the field of psychology for clues. In 2006, T. Lombrozo [3] wrote that “explanations are central to our understanding and the currency in which we exchange beliefs.” Questions such as what an explanation is, what makes some explanations better than others, how explanations are generated, and when people seek them are only beginning to be answered. Indeed, the definition of ‘explanation’ in the psychology literature ranges from the deductive-nomological view [2], in which explanations are treated as logical proofs, to a more general sense of how something works. Different works apply different standards, and under some standard almost anything can be said to have an explanation.

The purpose of interpretability is to bridge the gap between the “right to receive an explanation” and the “right to be informed”. In 2019, Rudin [6] questioned the accuracy, completeness, and reliability of explanations produced for DNN models. Furthermore, human-based evaluation is required to improve predictability and legal compliance [1]. However, little is known about the precise definitions of these AI terms or whether their meanings are distinct or overlapping. Once these capabilities are fully developed, responsible AI will emerge that can be deployed in real-world scenarios across a variety of industries. We therefore need to examine how different interpretation ideas are motivated, necessitated, categorized, and justified. The need for interpretability, its objectives, and its target audience may all vary widely by user domain.



[1] Campbell, N.C., Murray, E., Darbyshire, J., Emery, J., Farmer, A., Griffiths, F., Guthrie, B., Lester, H., Wilson, P., Kinmonth, A.L.: Designing and evaluating complex interventions to improve health care. BMJ 334(7591), 455–459 (2007)

[2] Hempel, C.G., Oppenheim, P.: Studies in the logic of explanation. Philosophy of Science 15(2), 135–175 (1948)

[3] Lombrozo, T.: The structure and function of explanations. Trends in Cognitive Sciences 10(10), 464–470 (2006)

[4] Miller, T.: Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1–38 (2019)

[5] Montavon, G., Samek, W., Müller, K.R.: Methods for interpreting and understanding deep neural networks. Digital Signal Processing 73, 1–15 (2018)

[6] Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1(5), 206–215 (2019)

[7] Russell, S., Norvig, P.: A modern, agent-oriented approach to introductory artificial intelligence. ACM SIGART Bulletin 6(2), 24–26 (1995)

