25 Feb 2022: Mark Keane (University College Dublin), Explaining artificial intelligence: contrastive explanations for AI black boxes and what people think of them.
From Ollie Quinn
Abstract: In recent years, there has been a lot of excitement around the apparent success of Deep Learning in AI. There has also been a decent amount of skepticism around the issue of knowing what these models are actually doing when they are being successful. This has led to the emerging area of Explainable AI, where techniques have been developed to explain a model’s workings to end-users and model developers. Recently, contrastive explanations (counterfactual and semi-factual) have become very popular for explaining the predictions of such black-box AI systems. For example, if you are refused a loan by an AI and ask “why”, a counterfactual explanation might tell you, “well, if you had asked for a smaller loan, then you would have been granted the loan”. These counterfactuals are generated by methods that perturb the feature values of the original situation (e.g., we perturb the value of the loan). In this talk, I review some of the contrastive methods we have developed for different datasets (tabular, image, time-series) based on a case-based reasoning approach. I also review some of our very recent work on user studies testing whether these AI methods are comprehensible to users in the ways that are assumed by AI researchers (Spoiler Alert: they often aren’t).
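The perturbation idea in the loan example can be sketched in a few lines. This is a minimal, hypothetical illustration, not the method from the talk: the toy loan model, feature names, and step size are all invented for this sketch, and a real counterfactual method would search over a trained black-box model rather than a hand-written rule.

```python
# Hypothetical sketch: generate a counterfactual explanation by perturbing
# one feature value until a black-box model's decision flips.
# The loan model and features below are invented for illustration only.

def loan_model(amount, income):
    """Toy black-box model: approve if the loan is at most 40% of income."""
    return "approved" if amount <= 0.4 * income else "refused"

def counterfactual_amount(amount, income, step=500):
    """Decrease the requested amount in small steps until the decision flips,
    returning the largest approved amount found (or None)."""
    candidate = amount
    while candidate > 0:
        if loan_model(candidate, income) == "approved":
            return candidate
        candidate -= step
    return None

original = {"amount": 30_000, "income": 50_000}
print(loan_model(**original))            # refused
print(counterfactual_amount(**original)) # 20000, i.e. "if you had asked for
                                         # a smaller loan, you would have been
                                         # granted it"
```

The sketch perturbs a single feature along one direction; practical methods (including case-based ones, as discussed in the talk) instead look for minimal, plausible changes, often guided by known cases, across many features at once.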