Dwarkesh Podcast

Dwarkesh Patel

Deeply researched interviews https://link.chtbl.com/dwarkesh
www.dwarkeshpatel.com



2 Listeners



Top 10 Dwarkesh Podcast Episodes

The best episodes, ranked by most listens from Goodpods users.

Here is my conversation with Dario Amodei, CEO of Anthropic.

Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Introduction

(00:01:00) - Scaling

(00:15:46) - Language

(00:22:58) - Economic Usefulness

(00:38:05) - Bioterrorism

(00:43:35) - Cybersecurity

(00:47:19) - Alignment & mechanistic interpretability

(00:57:43) - Does alignment research require scale?

(01:05:30) - Misuse vs misalignment

(01:09:06) - What if AI goes well?

(01:11:05) - China

(01:15:11) - How to think about alignment

(01:31:31) - Is modern security good enough?

(01:36:09) - Inefficiencies in training

(01:45:53) - Anthropic’s Long Term Benefit Trust

(01:51:18) - Is Claude conscious?

(01:56:14) - Keeping a low profile


This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

08/08/23 • 118 min


1 Listener

Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future.
We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Read the full transcript here.
Follow Will on Twitter. Follow me on Twitter for updates on future episodes.
Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what’s most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

07/26/22 • 56 min


1 Listener


For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - TIME article

(0:09:06) - Are humans aligned?

(0:37:35) - Large language models

(1:07:15) - Can AIs help with alignment?

(1:30:17) - Society’s response to AI

(1:44:42) - Predictions (or lack thereof)

(1:56:55) - Being Eliezer

(2:13:06) - Orthogonality

(2:35:00) - Could alignment be easier than we think?

(3:02:15) - What will AIs want?

(3:43:54) - Writing fiction & whether rationality helps you win



04/06/23 • 243 min


1 Listener


In terms of the depth and range of topics, this episode is the best I’ve done.

No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

This part is about Carl’s model of an intelligence explosion, which integrates everything from:

how fast algorithmic progress & hardware improvements in AI are happening,

what primate evolution suggests about the scaling hypothesis,

how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,

how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.

The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.

Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Intro

(00:01:32) - Intelligence Explosion

(00:18:03) - Can AIs do AI research?

(00:39:00) - Primate evolution

(01:03:30) - Forecasting AI progress

(01:34:20) - After human-level AGI

(02:08:39) - AI takeover scenarios



06/14/23 • 164 min


1 Listener


I learned so much from Sarah Paine, Professor of History and Strategy at the Naval War College.

We discuss:

how continental vs maritime powers think and how this explains Xi & Putin's decisions

how a war with China over Taiwan would shake out and whether it could go nuclear

why the British Empire fell apart, why China went communist, how Hitler and Japan could have coordinated to win WW2, and whether Japanese occupation was good for Korea, Taiwan and Manchuria

plus other lessons from WW2, the Cold War, and the Sino-Japanese War

how to study history properly, and why leaders keep making the same mistakes

If you want to learn more, check out her books - they’re some of the best military history I’ve ever read.

Watch on YouTube, listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript.

Timestamps

(0:00:00) - Grand strategy

(0:11:59) - Death ground

(0:23:19) - WW1

(0:39:23) - Writing history

(0:50:25) - Japan in WW2

(0:59:58) - Ukraine

(1:10:50) - Japan/Germany vs Iraq/Afghanistan occupation

(1:21:25) - Chinese invasion of Taiwan

(1:51:26) - Communists & Axis

(2:08:34) - Continental vs maritime powers



10/04/23 • 144 min


1 Listener


It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb.

We discuss:

similarities between AI progress & Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation)

visiting starving former Soviet scientists during fall of Soviet Union

whether Oppenheimer was a spy, & consulting on the Nolan movie

living through WW2 as a child

odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea

how the US pulled off such a massive secret wartime scientific & industrial project

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - Oppenheimer movie

(0:06:22) - Was the bomb inevitable?

(0:29:10) - Firebombing vs nuclear vs hydrogen bombs

(0:49:44) - Stalin & the Soviet program

(1:08:24) - Deterrence, disarmament, North Korea, Taiwan

(1:33:12) - Oppenheimer as lab director

(1:53:40) - AI progress vs Manhattan Project

(1:59:50) - Living through WW2

(2:16:45) - Secrecy

(2:26:34) - Wisdom & war



05/23/23 • 157 min


1 Listener


Bryan Caplan is a Professor of Economics at George Mason University and a New York Times Bestselling author. His most famous works include: The Myth of the Rational Voter, Selfish Reasons to Have More Kids, The Case Against Education, and Open Borders: The Science and Ethics of Immigration.
I talk to Bryan about open borders, the idea trap, UBI, appeasement, China, the education system, and his next two books on poverty and housing regulation.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.
Follow Bryan on Twitter. Follow me on Twitter for updates on future episodes.



05/22/20 • 59 min


1 Listener


Byrne Hobart writes The Diff, a newsletter about inflections in finance and technology with 24,000+ subscribers.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Episode website here.
The Diff newsletter: https://diff.substack.com/

Follow Byrne on Twitter.

Follow me on Twitter for updates on future episodes!

Thanks for reading The Lunar Society! Subscribe for free to receive new posts and support my work.

Timestamps:

(0:00:00) - Byrne's one big idea: stagnation

(0:05:50) - Has regulation caused stagnation?

(0:14:00) - FDA retribution

(0:15:15) - Embryo selection

(0:17:32) - Patient longtermism

(0:21:02) - Are there secret societies?

(0:26:53) - College, optionality, and conformity

(0:34:40) - Are differentiated credentials underrated?

(0:39:15) - Will conscientiousness increase in value?

(0:44:26) - Why aren't rationalists more into finance?

(0:48:04) - Rationalists are bad at changing the world.

(0:52:20) - Why read more?

(0:57:10) - Does knowledge have increasing returns?

(1:01:30) - How to escape the middle career trap?

(1:04:48) - Advice for young people

(1:08:40) - How to learn about a subject?



10/05/21 • 71 min


Rohit Krishnan is a venture capitalist who writes about "the strange loops underlying our systems of innovation" at https://www.strangeloopcanon.com.

We discussed J. Storrs Hall's book Where Is My Flying Car?

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Episode website + transcript here.
Relevant essays from Rohit:

Review of Where Is My Flying Car

The Small Successes Of Nanotech
Isolated Narratives of Progress

Meditations On Regulations

Follow Rohit's Twitter. Follow me on Twitter for updates on future episodes.

Timestamps:

(00:00) - Why don't we have flying cars?

(08:09) - Should we expect exponential growth?

(18:13) - Machiavelli Effect and centralization of science funding

(27:55) - We need more science fiction

(32:40) - The return of citizen science?

(37:40) - Have we grown too comfortable for progress?

(42:15) - Is India the future of innovation?

(47:15) - Is there an upper-income trap?

(50:30) - Forecasts for technologies




01/03/22 • 57 min


Richard Hanania is the President of the Center for the Study of Partisanship and Ideology and the author of Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Episode website here.

Follow Richard on Twitter. Follow me on Twitter for updates on future episodes.

Read Richard's Substack: https://richardhanania.substack.com/

Timestamps:

(0:00:00) - Intro

(0:04:35) - Did war prevent sclerosis?

(0:06:05) - China vs America's grand strategy

(0:10:00) - Does the president have more power over foreign policy?

(0:11:30) - How to deter bad actors?

(0:15:39) - Do some countries have a coherent foreign policy?

(0:16:55) - Why does self-interest matter in foreign but not domestic policy?

(0:21:05) - Should we limit money in politics?

(0:23:47) - Should we credit expertise for nuclear détente and global prosperity?

(0:28:45) - Have international alliances made us safer?

(0:31:57) - Why does academic bureaucracy work in some fields?

(0:36:26) - Did academia suck even before diversity?

(0:39:34) - How do we get expertise in social sciences?

(0:42:19) - Why are things more liberal?

(0:43:55) - Why is big tech so liberal?

(0:47:53) - Authoritarian populism vs libertarianism

(0:51:40) - Can authoritarian governments increase fertility?

(0:54:54) - Will increasing fertility be dysgenic?

(0:56:43) - Will not having kids become cool?

(0:59:22) - Advice for libertarians?



02/24/22 • 62 min



FAQ

How many episodes does Dwarkesh Podcast have?

Dwarkesh Podcast currently has 72 episodes available.

What topics does Dwarkesh Podcast cover?

The podcast covers Society & Culture, Podcasts, and Technology.

What is the most popular episode on Dwarkesh Podcast?

The episode title 'Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress' is the most popular.

What is the average episode length on Dwarkesh Podcast?

The average episode length on Dwarkesh Podcast is 93 minutes.

How often are episodes of Dwarkesh Podcast released?

Episodes of Dwarkesh Podcast are typically released every 8 days, 18 hours.

When was the first episode of Dwarkesh Podcast?

The first episode of Dwarkesh Podcast was released on May 22, 2020.


Comments

0.0 out of 5 (no ratings yet)