77 – Should AI be Explainable?

Scott Robbins

If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat with Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at Delft University of Technology. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:

  • Why do people worry about the opacity of AI?
  • What’s the difference between explainability and transparency?
  • What’s the moral value or function of explainable AI?
  • Must we distinguish between the ethical value of an explanation and its epistemic value?
  • Why is it so technically difficult to make AI explainable?
  • Will we ever have a technical solution to the explanation problem?
  • Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
  • When should we insist on explanations and when are they unnecessary?
  • Should we insist on using boring AI?


Relevant Links

