
#43 - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines
09/25/18 • 164 min
Daniel Ellsberg - leaker of the Pentagon Papers, which helped end the Vietnam War and the Nixon presidency - claims in his new book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked.
Links to learn more, summary and full transcript.
The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today.
If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere.
As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.
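The fail-deadly logic just described is simple enough to sketch in a few lines. Everything below (the function name, the sensor inputs, the thresholds) is a hypothetical illustration; the actual Dead Hand criteria are not public.

```python
# A minimal sketch of the fail-deadly rule described above. All names,
# inputs, and thresholds are hypothetical illustrations, not details
# of the real system.

SEISMIC_THRESHOLD = 0.9      # illustrative values only
RADIATION_THRESHOLD = 0.9

def dead_hand_decision(command_link_alive: bool,
                       seismic: float,
                       radiation: float) -> str:
    if command_link_alive:
        # Leadership can still issue orders, so the system stays passive.
        return "stand by"
    if seismic > SEISMIC_THRESHOLD or radiation > RADIATION_THRESHOLD:
        # No contact with leaders AND physical signs of a strike:
        # fail-deadly means launching everything that remains.
        return "launch all remaining weapons"
    return "stand by"
```

The danger the episode highlights is visible even in this toy version: once the command link is down, no human judgment sits between the sensors and the launch decision.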
You might think the United States would have a more sensible nuclear launch policy. You’d be wrong.
As Ellsberg explains, based on first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth.
The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe.
The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival.
Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it.
Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity.
Strategically, the setup is stupid. Ethically, it is monstrous.
So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization?
Daniel explores these questions eloquently and urgently in his book. Today we cover:
* Why full disarmament today would be a mistake, and the optimal number of nuclear weapons to hold
* How well are secrets kept in the government?
* What was the risk of the first atomic bomb test?
* The effect of Trump on nuclear security
* Do we have a reliable estimate of the magnitude of a ‘nuclear winter’?
* Why Gorbachev allowed Russia’s covert biological warfare program to continue
Get this episode by subscribing: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.
Previous Episode

#42 - Amanda Askell on moral empathy, the value of information & the ethics of infinity
Consider two familiar moments at a family reunion.
Our host, Uncle Bill, takes pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly think of this as an annoying picky preference. But if seriously considered as a moral position, as they might if instead Becky were avoiding meat on religious grounds, it would usually receive a very different reaction.
An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table; the family mostly think he has no business trying to foist his regressive preference on anyone. But if considered not as a matter of personal taste, but rather as a moral position - that Bill genuinely believes he’s opposing mass murder - his comment might start a serious conversation.
Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. People on all sides of the political spectrum struggle to get inside the minds of those they disagree with and see issues from their point of view.
Links to learn more, summary and full transcript.
This often happens because of confusion between preferences and moral positions.
Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.
One potential path for progress concerns contraception: a lot of people who are anti-abortion are also anti-contraception. But they’ll usually think that abortion is much worse than contraception, so why can’t we compromise and agree to make much more contraception available?
According to Amanda, a charitable explanation for this is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions.
So instead of arguing about abortion and contraception, we could discuss the underlying principle that one should advocate for the best possible world, rather than the best probable world.
If we can successfully break down such ethical beliefs, absent political toxicity, it might be possible to actually converge on a key point of agreement.
Today’s episode blends such everyday topics with in-depth philosophy, including:
* What is 'moral cluelessness' and how can we work around it?
* Amanda's biggest criticisms of social justice activists, and of critics of social justice activists
* Is there an ethical difference between prison and corporal punishment?
* How to resolve 'infinitarian paralysis' - the inability to make decisions when infinities are involved (a toy example follows this list)
* What’s effective altruism doing wrong?
* How should we think about jargon? Are a lot of people who don’t communicate clearly just scamming us?
* How can people be more successful within the cocoon of school and university?
* How did Amanda find doing a philosophy PhD, and how will she decide what to do now?
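To see why infinities paralyse decision-making, here is a standard toy example from the infinite ethics literature (an illustration of the concept, not Amanda's own formulation):

```latex
% World A yields utility 2 at every time step t; world B yields utility 1.
% A is better at every moment, yet total utility cannot separate them:
\sum_{t=1}^{\infty} 2 \;=\; \infty \;=\; \sum_{t=1}^{\infty} 1
% Any rule that ranks worlds only by their utility totals is therefore
% silent between A and B, despite A dominating B at every time step.
```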
Links:
* Career review: Congressional staffer
* Randomised experiment on quitting
* Psychology replication quiz
* Should you focus on your comparative advantage?
Get this episode by subscribing: type 80,000 Hours into your podcasting app.
The 80,000 Hours podcast is produced by Keiran Harris.
Next Episode

#44 - Paul Christiano on how we'll hand the future off to AI, & solving the alignment problem
Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times, I can strongly recommend listening - Paul works on AI himself and has an unusually well thought-through view of how it will change the world. This is now the top resource I'll refer people to if they're interested in positively shaping the development of AI and want to understand the problem better. Even though I'm familiar with Paul's writing, I felt I was learning a great deal and am now in a better position to make a difference to the world.
A few of the topics we cover are:
* Why Paul expects AI to transform the world gradually rather than explosively, and what that would look like
* Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us
* Why AI systems will probably be granted legal and property rights
* How an advanced AI that doesn't share human goals could still have moral value
* Why machine learning might take over science research from humans before it can do most other tasks
* Which decade we should expect human labour to become obsolete, and how this should affect your savings plan.
Links to learn more, summary and full transcript.
Important new article: These are the world’s highest impact career paths according to our research
Here's a situation we all regularly confront: you want to answer a difficult question, but aren't quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who *are* smart enough to figure it out. The bad news is that they disagree.
If given plenty of time - and enough arguments, counterarguments and counter-counter-arguments between all the experts - should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over?
In other words: does 'debate', in principle, lead to truth?
According to Paul Christiano - researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities - this question is of more than mere philosophical interest. That's because 'debate' is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are.
It's a method OpenAI is actively trying to develop, because in the long-term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight.
If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful.
But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don't know that's the case.
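As a rough illustration of the debate setup, the protocol amounts to two strong debaters producing a transcript that a weaker judge then scores. This is a toy sketch with every interface hypothetical, not anything from OpenAI's actual experiments.

```python
# Toy sketch of AI safety via debate: two debaters argue for opposing
# answers, and a limited judge decides based only on the transcript.
# Every name and interface here is a hypothetical illustration.

from typing import Callable, List

def run_debate(
    question: str,
    debater_a: Callable[[str, List[str]], str],  # argues for answer A
    debater_b: Callable[[str, List[str]], str],  # argues for answer B
    judge: Callable[[str, List[str]], str],      # returns "A" or "B"
    rounds: int = 3,
) -> str:
    """Alternate arguments for a fixed number of rounds, then let the
    judge (weaker than either debater) pick the more convincing side."""
    transcript: List[str] = []
    for _ in range(rounds):
        transcript.append("A: " + debater_a(question, transcript))
        transcript.append("B: " + debater_b(question, transcript))
    return judge(question, transcript)
```

Whether the judge reliably rewards honesty at equilibrium in this game is exactly the open question raised above.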
Get this episode by subscribing: type '80,000 Hours' into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.