
#116 - AI: Racing Toward the Brink
02/06/18 • 127 min
Sam Harris speaks with Eliezer Yudkowsky about the nature of intelligence, different types of AI, the “alignment problem,” the “is” versus “ought” distinction, the possibility that future AI might deceive us, the AI arms race, conscious AI, coordination problems, and other topics.
Eliezer Yudkowsky is a decision theorist and computer scientist at the Machine Intelligence Research Institute in Berkeley, California, who is known for his work in technological forecasting. His publications include the Cambridge Handbook of Artificial Intelligence chapter “The Ethics of Artificial Intelligence,” co-authored with Nick Bostrom. Yudkowsky’s writings have helped spark a number of ongoing academic and public debates about the long-term impact of AI, and he has written several popular introductions to topics in cognitive science and formal epistemology, such as Rationality: From AI to Zombies and “Harry Potter and the Methods of Rationality.” His latest book is Inadequate Equilibria: Where and How Civilizations Get Stuck.
Twitter: @ESYudkowsky
Facebook: facebook.com/yudkowsky
Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.