
Natural Risks
11/14/18 • 37 min
Humans have faced existential risks since our species was born. Because we are Earthbound, what happens to Earth happens to us. Josh points out that there’s a lot that can happen to Earth - like gamma-ray bursts, supernovae, and a runaway greenhouse effect. (Original score by Point Lobo.)
Interviewees: Robin Hanson, George Mason University economist (creator of the Great Filter hypothesis); Ian O’Neill, astrophysicist and science writer; Toby Ord, Oxford University philosopher.
Previous Episode

X Risks
Humanity could have a future billions of years long – or we might not make it past the next century. If we have a trip through the Great Filter ahead of us, then we appear to be entering it now. It looks like existential risks will be our filter. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Robin Hanson, George Mason University economist (creator of the Great Filter hypothesis); Toby Ord, Oxford University philosopher; Sebastian Farquhar, Oxford University philosopher.
Next Episode

Artificial Intelligence
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
The End of the World with Josh Clark - Natural Risks
Transcript
Existential risks aren't new to us. Ever since our species was born, there have always been lots of catastrophes ready and waiting to wipe the human race off the face of the Earth. It's just that none of these risks are of our own making. The history of humanity is relatively brief, and we've been fortunate to have come along during a period of relative calm in Earth's history. Maybe we couldn't have come along had things been more tumultuous. Who knows?