Ethics of explainable AI: What characterises good explanations?

Opacity

Due to the widespread use of opaque, data-driven tools and services, it has become a question of general interest whether these systems can be sufficiently well assessed, both epistemically (do we know what they do?) and normatively (what should they do, and what should they not?). It often seems necessary to provide some sort of explanation, or even transparency, to enable debugging, ensure smooth operation, or justify a system's usage. Opacity relates, among other things, to an agent's literacy (prior and domain knowledge) and to the context and purpose of usage. Accordingly, software and robots are not evenly opaque to everyone.

Information asymmetry

Beyond the issue of digital illiteracy, systems can also be opaque due to their intrinsic technical features or by intention. This becomes problematic when systems that treat people unequally, e.g. in medical treatment, automobile insurance, credit lending or hiring, are opaque to the public and to impartial third parties due to corporate secrecy. We usually hold unequal treatment to be fair if there are good reasons for it (e.g. not selling alcohol to minors). However, if we cannot assess the reasons why someone is treated differently, it is hard to argue for or against alleged discrimination.

Reflecting on the call for explainability

Accordingly, rendering AI systems understandable has become an urgent research matter, and explainable AI is the name of a new research field in computer science and beyond. Currently, research and development are guided by the principle of human-centric AI. By placing the human at the centre of the effort, the idea is to do justice to the high contextual dependency of the quality of explanations, to ensure optimal utility (and thus business value), and at the same time to remain in line with fundamental democratic values. There are several issues of ethical and philosophical interest here. First, explaining AI is not per se a good thing. In the worst case, explanations can be used to manipulate users or to create acceptance for a technology that is ethically or legally unacceptable. Hence, we need to consider thoroughly whether explaining a system really does good, and because of the complexity and uncertainty of social situations, we need to acknowledge that we can classify use cases that call for explanations only by idealisation, which means there may be situations we have not accounted for in advance. Here, it is important to think about meta-strategies for coping with such situations. Second, ethical requirements and user demands should consequently be differentiated. Some user wishes, e.g. understanding awkward robot behaviour, need not be of ethical interest at all. Sometimes, user demands and ethical requirements may reinforce each other, e.g. in cases where explanation fosters users' agency and autonomy. There can also be cases in which empowering certain users counteracts ethical principles. For instance, a high degree of transparency is known to enable some users to 'game the system', thereby increasing their self-determination. However, the same transparency might overwhelm others, thereby introducing unfairness rather than reducing it.

The workgroup contributes to these questions in exchange with colleagues from the TRR 318 “Constructing Explainability” and the research network “SustAInable Life-cycle of Intelligent Socio-Technical Systems” (SAIL).