If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.
Topics covered include:
- Why do people worry about the opacity of AI?
- What’s the difference between explainability and transparency?
- What’s the moral value or function of explainable AI?
- Must we distinguish between the ethical value of an explanation and its epistemic value?
- Why is it so technically difficult to make AI explainable?
- Will we ever have a technical solution to the explanation problem?
- Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
- When should we insist on explanations and when are they unnecessary?
- Should we insist on using boring AI?
Relevant links:
- Scott’s webpage
- Scott’s paper “A Misdirected Principle with a Catch: Explicability for AI”
- Scott’s paper “The Value of Transparency: Bulk Data and Authorisation”
- “The Right to an Explanation Explained” by Margot Kaminski
- Episode 36 – Wachter on Algorithms and Explanations