Over the last two years, the capabilities of AI systems have exploded. AlphaFold2, MuZero, CLIP, DALL·E, GPT-3 and many other models have extended the reach of AI to new problem classes. There's a lot to be excited about.
But as we've seen in other episodes of the podcast, there's a lot more to getting value from an AI system than jacking up its capabilities. Increasingly, one of those missing factors is trust. You can build all the powerful AIs you want, but if no one trusts their output — or if people trust it when they shouldn't — you can end up doing more harm than good.
That’s why we invited Ayanna Howard on the podcast. Ayanna is a roboticist, entrepreneur and Dean of the College of Engineering at Ohio State University, where she focuses her research on human-machine interactions and the factors that go into building human trust in AI systems. She joined me to talk about her research, its applications in medicine and education, and the future of human-machine trust.
---
Intro music:
Artist: Ron Gelinas
Track Title: Daybreak Chill Blend (original mix)
Link to Track: https://youtu.be/d8Y2sKIgFWc
---
Chapters:
0:00 Intro
1:30 Ayanna’s background
6:10 The interpretability of neural networks
12:40 Domain of machine-human interaction
17:00 The issue of preference
20:50 Gell-Mann/newspaper amnesia
26:35 Assessing a person’s persuadability
31:40 Doctors and new technology
36:00 Responsibility and accountability
43:15 The social pressure aspect
47:15 Is Ayanna optimistic?
53:00 Wrap-up
11/03/21 • 53 min