Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

04/06/23 • 243 min

Dwarkesh Podcast

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, skip to 2:35:00 through 3:43:54, where we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - TIME article

(0:09:06) - Are humans aligned?

(0:37:35) - Large language models

(1:07:15) - Can AIs help with alignment?

(1:30:17) - Society’s response to AI

(1:44:42) - Predictions (or lack thereof)

(1:56:55) - Being Eliezer

(2:13:06) - Orthogonality

(2:35:00) - Could alignment be easier than we think?

(3:02:15) - What will AIs want?

(3:43:54) - Writing fiction & whether rationality helps you win


Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Previous Episode

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:

time to AGI

leaks and spies

what's after generative models

post AGI futures

working with Microsoft and competing with Google

difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00) - Time to AGI

(05:57) - What’s after generative models?

(10:57) - Data, models, and research

(15:27) - Alignment

(20:53) - Post AGI Future

(26:56) - New ideas are overrated

(36:22) - Is progress inevitable?

(41:27) - Future Breakthroughs


Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Next Episode

Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes

It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb.

We discuss:

similarities between AI progress & Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation)

visiting starving former Soviet scientists during the fall of the Soviet Union

whether Oppenheimer was a spy, & consulting on the Nolan movie

living through WW2 as a child

odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea

how the US pulled off such a massive secret wartime scientific & industrial project

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - Oppenheimer movie

(0:06:22) - Was the bomb inevitable?

(0:29:10) - Firebombing vs nuclear vs hydrogen bombs

(0:49:44) - Stalin & the Soviet program

(1:08:24) - Deterrence, disarmament, North Korea, Taiwan

(1:33:12) - Oppenheimer as lab director

(1:53:40) - AI progress vs Manhattan Project

(1:59:50) - Living through WW2

(2:16:45) - Secrecy

(2:26:34) - Wisdom & war


Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
