
99 - Trusting Untrustworthy Machines and Other Psychological Quirks
11/07/22
In this episode I chat to Matthias Uhl. Matthias is a professor of the social and ethical implications of AI at the Technische Hochschule Ingolstadt and a behavioural scientist who has been doing a lot of work on human-AI/robot interaction. He focuses, in particular, on applying the insights and methodologies of behavioural economics to these questions. We talk about three recent studies he and his collaborators have run revealing interesting quirks in how humans relate to AI decision-making systems. In particular, his findings suggest that people do outsource responsibility to machines, are willing to trust untrustworthy machines, and prefer the messy discretion of human decision-makers to the precise logic of machines. Matthias's research is fascinating and has some important implications for people working in AI ethics and policy.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Relevant Links
- Matthias's Faculty Page
- 'Hiding Behind Machines: Artificial Agents May Help to Evade Punishment' by Matthias and colleagues
- 'Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions' by Matthias and colleagues
- 'People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency' by Matthias and colleagues
Previous Episode

Ethics of Academia (12) - Olle Häggström
In this episode (the last in this series for the time being) I chat to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology in Sweden. Having spent the first half of his academic life focused largely on pure mathematical research, Olle has shifted focus in recent years to consider how research can benefit humanity and how some research might be too risky to pursue. We have a detailed conversation about the ethics of research and contrast different ideals of what it means to be a scientist in the modern age. Lots of great food for thought in this one.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Next Episode

100 - The Past and Future of Transhumanism
In this episode (which by happenstance is the 100th official episode, although I have released more than that) I chat to Elise Bohan. Elise is a senior research scholar at the Future of Humanity Institute at Oxford University. She has a PhD in macrohistory ("big" history) and has written the first book-length history of the transhumanist movement. She has also recently published the book Future Superhuman, a guide to transhumanist ideas and arguments. We talk about this book in some detail and cover some of its more controversial claims.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).