Dan Goldstein
Interpretable artificial intelligence
20/03/2018 - 12:00 - 13:00
Room K-H6-LG07, Tyree Energy Technologies Building, UNSW 2052
Description
- March 20, 2018
- Speaker: Dan Goldstein
- Topic: Interpretable Artificial Intelligence: Does it Work as Advertised?
Abstract
Important decisions that used to be made by humans are now being made by AI systems. Concerns about vital decisions emanating from black boxes have led to renewed interest in interpretable AI systems: systems that expose the grounds on which they decide. In the first half of this talk, I present and test the predictive accuracy of a simple method for creating interpretable decision-making rules (based on Jung, Concannon, Shroff, Goel, & Goldstein).
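For readers curious what such a rule might look like, below is a minimal sketch, in Python, of a select-regress-and-round style procedure in the spirit of the cited work: fit a sparse regression, then rescale and round the surviving coefficients to small integers so the rule can be applied by hand. The synthetic data, the parameter choices (C, M), and the scikit-learn usage are illustrative assumptions, not the authors' exact procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: 10 candidate features, only 3 of which matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
true_w = np.array([2.0, -1.5, 1.0] + [0.0] * 7)
y = (X @ true_w + rng.normal(size=1000) > 0).astype(int)

# Select/regress: an L1-penalized logistic regression drives most
# coefficients to exactly zero, leaving a small set of features.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
w = model.coef_.ravel()

# Round: rescale the surviving coefficients so the largest has
# magnitude M, then round to integers. The result is a point system
# a person can apply by hand.
M = 3
scale = M / np.max(np.abs(w))
points = np.round(w * scale).astype(int)

for i, p in enumerate(points):
    if p != 0:
        print(f"feature {i}: {p:+d} points")
# Classify a new case by summing its points and comparing the total
# to a threshold.

The appeal of such a rule is that its behavior is fully visible: a user can see exactly which features contribute and by how much, which is what makes its predictive accuracy worth testing against less transparent models.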
In the second part, I present the results of a battery of studies aimed at determining whether interpretable models achieve their intended effects on human users. In particular, we (Poursabzi-Sangdeh, Goldstein, Hofman, Vaughan & Wallach) test the degree to which people can anticipate the results of interpretable models, trust their predictions, and spot their mistakes.
About the speaker
Dan Goldstein works at the intersection of behavioral economics and computer science. Prior to joining Microsoft, Dan was a Principal Research Scientist at Yahoo Research and a professor at London Business School. He received his Ph.D. from the University of Chicago and has taught and researched at Columbia, Harvard, Stanford, and the Max Planck Institute in Germany, where he was awarded the Otto Hahn Medal in 1997.
His academic writings have appeared in journals from Science to Psychological Review. Goldstein is a member of the Academic Advisory Board of the UK’s Behavioural Insights Team (aka Britain’s “nudge unit”). He served as President of the Society for Judgment and Decision Making in 2015-2016. He is the editor of Decision Science News.