
#131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world
06/03/22 • 65 min
“We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!”
If you were a contestant on such a TV show, you'd love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on.
Today’s guest Lewis Dartnell has gone as far in compiling this information as anyone, with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm.
Links to learn more, summary and full transcript.
But in the aftermath of a nuclear war or an incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collecting coal from the surface of the Earth, will be impossible the second time around.
As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it's just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.”
So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon, and nanoscale manufacturing — essentially the same technology as microchips used in a computer — so actually making solar panels would be incredibly difficult.
Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make.
A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world.
The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments.
This is clearly not a trivial task. Lewis's own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote.
And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities?
As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year.
Chapters:
- Rob’s intro (00:00:00)
- The interview begins (00:00:59)
- The biggest impediments to bouncing back (00:03:18)
- Can we do a serious version of The Knowledge? (00:14:58)
- Recovering without much coal or oil (00:29:56)
- Most valuable pro-resilience adjustments we can make today (00:40:23)
- Feeding the Earth in disasters (00:47:45)
- The reality of humans trying to actually do this (00:53:54)
- Most exciting recent findings in astrobiology (01:01:00)
- Rob’s outro (01:03:37)
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Previous Episode

#130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure
Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you're all bunched up on a few tables in a basement office.
But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You're the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.
You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have.
This is roughly the situation faced by today's guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.
Links to learn more, summary and full transcript.
Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.
While surely a huge success, it brings with it risks that he's never had to consider before:
• Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it.
• Being seen as profligate could strike onlookers as selfish and disreputable.
• Folks might start pretending to agree with their agenda just to get grants.
• People working on nearby issues that are less flush with funding may end up resentful.
• People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
• Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.
But all these 'risks of commission' have to be weighed against 'risk of omission': the failure to achieve all you could have if you'd been truly ambitious.
People looking askance at you for paying high salaries to attract the staff you want is unpleasant.
But failing to prevent the next pandemic because you didn't have the necessary medical experts on your grantmaking team is worse than unpleasant — it's a true disaster. Yet few will complain, because they'll never know what might have been if you'd only set frugality aside.
Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition. In today's episode, Rob and Will discuss the above as well as:
• Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
• Why are so many nonfiction books full of factual errors?
• How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
• What does Will disagree with his colleagues on?
• Should we focus on existential risks more or less the same way, whether we care about future generations or not?
• Are potatoes one of the most important technologies ever developed?
• And plenty more.
Chapters:
- Rob’s intro (00:00:00)
- The interview begins (00:02:41)
- What We Owe The Future preview (00:09:23)
- Longtermism vs. x-risk (00:25:39)
- How is Will doing? (00:33:16)
- Having a life outside of work (00:46:45)
- Underappreciated people in the effective altruism community (00:52:48)
- A culture of ambition within effective altruism (00:59:50)
- Massively scalable projects (01:11:40)
- Downsides and risks from the increase in funding (01:14:13)
- Barriers to ambition (01:28:47)
- The Future Fund (01:38:04)
- Patient philanthropy (01:52:50)
- Will’s disagreements with Sam Bankman-Fried and Nick Beckstead (01:56:42)
- Astronomical risks of suffering (s-risks) (02:00:02)
- Will’s future plans (02:02:41)
- What is it with Will and potatoes? (02:08:40)
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Next Episode

#132 – Nova DasSarma on why information security may be critical to the safe development of AI systems
If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free.
This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.
Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.
Links to learn more, summary and full transcript.
The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.
If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.
If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off.
As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly.
If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.
We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.
In today's conversation, Rob and Nova cover:
• How good or bad is information security today
• The most secure computer systems that exist
• How to design an AI training compute centre for maximum efficiency
• Whether 'formal verification' can help us design trustworthy systems
• How wide the gap is between AI capabilities and AI safety
• How to disincentivise hackers
• What should listeners do to strengthen their own security practices
• And much more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore