Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
· 2018
· Open Access
· DOI: https://doi.org/10.48550/arxiv.1803.07517
· OA: W4301971765
Issues regarding explainable AI involve four components: users, laws & regulations, explanations and algorithms. Together these components provide a context in which explanation methods can be evaluated regarding their adequacy. The goal of this chapter is to bridge the gap between expert users and lay users. Different kinds of users are identified and their concerns revealed, relevant statements from the General Data Protection Regulation are analyzed in the context of Deep Neural Networks (DNNs), a taxonomy for the classification of existing explanation methods is introduced, and finally, the various classes of explanation methods are analyzed to verify whether user concerns are justified. Overall, it is clear that (visual) explanations can be given about various aspects of the influence of the input on the output. However, it is noted that explanation methods or interfaces for lay users are missing, and we speculate which criteria these methods/interfaces should satisfy. Finally, it is noted that two important concerns are difficult to address with explanation methods: the concern about bias in datasets that leads to biased DNNs, as well as the suspicion about unfair outcomes.
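
To give a concrete sense of the kind of visual explanation the abstract refers to, the sketch below (not taken from the paper) shows one common gradient-based approach: a saliency map that estimates how strongly each input pixel influences the predicted class score. The tiny CNN and random input are placeholders chosen only to keep the example self-contained and runnable.

```python
# Minimal sketch of a gradient-based saliency map (a placeholder model
# and dummy input stand in for a real DNN and image).
import torch
import torch.nn as nn

# Placeholder classifier: any differentiable image model would do.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Dummy input image (1 x 3 x 32 x 32); gradients w.r.t. it are required.
x = torch.randn(1, 3, 32, 32, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency: gradient magnitude, maximized over colour channels,
# yielding one importance value per pixel of the input.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([32, 32])
```

Such maps illustrate the point made in the abstract: they visualize the influence of the input on the output, but by themselves they do not address concerns about dataset bias or unfair outcomes.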