Hear This Idea

Fin Moorhouse and Luca Righetti

Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.


Top 10 Hear This Idea Episodes

Goodpods has curated a list of the 10 best Hear This Idea episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Hear This Idea for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Hear This Idea episode by adding your comments to the episode page.

Sonia Ben Ouagrham-Gormley is an associate professor at George Mason University and Deputy Director of their Biodefence Programme.

In this episode we talk about:

  • Where the belief that 'bioweapons are easy to make' came from and why it has been difficult to change
  • Why transferring tacit knowledge is so difficult -- and the particular challenges that rogue actors face
  • What Sonia makes of the AI-bio risk discourse, and what types of technological advances would cause her concern

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!


Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum.

We discuss:

  • What is AI Impacts working on?
  • Counterarguments to the basic AI x-risk case
  • Reasons to doubt that superhuman AI systems will be strongly goal-directed
  • Reasons to doubt that if goal-directed superhuman AI systems are built, their goals will be bad by human lights
  • Aren't deep learning systems fairly good at understanding our 'true' intentions?
  • Reasons to doubt that (misaligned) superhuman AI would overpower humanity
  • The case for slowing down AI
  • Is AI really an arms race?
  • Are there examples from history of valuable technologies being limited or slowed down?
  • What does Katja think about the recent open letter on pausing giant AI experiments?
  • Why read George Saunders?

Key links:

You can see more links and a full transcript at hearthisidea.com/episodes/grace.


Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.

In this episode, we talk about:

  • The basic case for working on existential risk from AI
  • How to begin figuring out what to do to reduce the risks
  • Threat models for the risks of advanced AI
  • 'Theories of victory' for how the world mitigates the risks
  • 'Intermediate goals' in AI governance
  • What useful (and less useful) research looks like for reducing AI x-risk
  • Practical advice for usefully contributing to efforts to reduce existential risk from AI
  • Resources for getting started and finding job openings


Chris Miller is an Associate Professor of International History at Tufts University and author of the book “Chip War: The Fight for the World's Most Critical Technology” (the Financial Times Business Book of the Year). He is also a Visiting Fellow at the American Enterprise Institute, and Eurasia Director at the Foreign Policy Research Institute.

Over the next few episodes we will be exploring the potential for catastrophe caused by advanced artificial intelligence. But before we look ahead, we wanted to give a primer on where we are today: on the history and trends behind the development of AI so far. In this episode, we discuss:

  • How semiconductors have historically been related to US military strategy
  • How the Taiwanese company TSMC became such an important player in this space — while other countries’ attempts have failed
  • What the CHIPS Act signals about attitudes to compute governance in the decade ahead

Further reading is available on our website: hearthisidea.com/episodes/miller

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!


Damon Binder is a research analyst at Open Philanthropy. His research focuses on potential risks from pandemics and from biotechnology. He previously worked as a research scholar at the University of Oxford’s Future of Humanity Institute, where he studied existential risks. Prior to that he completed his PhD in theoretical physics at Princeton University.

We discuss:

  • How did early states manage large populations?
  • What explains the hockey-stick shape of world economic growth?
  • Did urbanisation enable more productive farming, or vice-versa?
  • What does transformative AI mean for growth?
  • Would 'degrowth' benefit the world?
  • What do theoretical physicists actually do, and what are they still trying to understand?
  • Why not just run bigger physics experiments to solve the latest problems?
  • What could the history of physics tell us about its future?
  • In what sense are the universe's constants fine-tuned?
  • Will the universe ever just... end?
  • Why might we expect digital minds to be a big deal?

Links

You can find more episodes and links at our website, hearthisidea.com.

(This is a bonus episode because it's less focused than usual on topics in effective altruism.)


A full writeup of this episode is available on our website: hearthisidea.com/episodes/esvelt-sandbrink.

Kevin Esvelt is an assistant professor at the MIT Media Lab, where he is director of the Sculpting Evolution group, which invents new ways to study and influence the evolution of ecosystems. He helped found the SecureDNA Project and the Nucleic Acid Observatory, both of which we discuss in the episode. Esvelt is also known for proposing the idea of using CRISPR to implement gene drives.

Jonas Sandbrink is a researcher and DPhil student at the Future of Humanity Institute. He is a fellow at both the Emerging Leaders in Biosecurity Initiative at the Johns Hopkins Center for Health Security, and with the Ending Bioweapons Program at the Council on Strategic Risks. Jonas’ research interests include the dual-use potential of life sciences research and biotechnology, as well as fast response countermeasures like vaccine platforms.

We discuss:

  • The concepts of differential technological development, dual-use research, transfer risks in research, 'information loops', and responsible access to biological data
  • Strengthening norms against risky biological research, such as novel virus identification and gain of function research
  • Connection-based warning systems and metagenomic sequencing technology
  • Advanced PPE, Far-UVC sterilisation technology, and other countermeasures against pandemics potentially worse than Covid
  • Analogies between progress in biotechnology and the early history of nuclear weapons
  • How to use your career to work on these problems — even if you don’t have a background in biology.

You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/esvelt-sandbrink.

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!


Bruce Friedrich is the co-founder and executive director of The Good Food Institute — a nonprofit that works with scientists, investors, and entrepreneurs to support the development and marketing of cell-cultured and plant-based alternatives to animal food products.

In this episode we discuss:

  • [00:02:21] Bruce's path to GFI
  • [00:06:01] Inefficiencies of animal agriculture
  • [00:10:06] Other external harms of animal agriculture
  • [00:18:27] GFI's theory of change
  • [00:27:54] Why focus on affluent markets?
  • [00:32:53] Is regular meat-eating a historical aberration?
  • [00:35:22] Protein alternative research
  • [00:38:49] Plant-based vs cultivated meat
  • [00:42:40] Marketing protein alternatives
  • [00:47:27] Nomenclature
  • [00:49:44] Policy
  • [00:53:46] Why do we need government spending on R&D?
  • [00:57:40] GFI's counterfactual impact
  • [01:01:08] Religious influences
  • [01:04:43] The supreme court
  • [01:09:16] Three book recommendations
  • [01:13:14] Outro

You can read much more on these topics in our accompanying write-up: hearthisidea.com/episodes/bruce.

If you have any feedback or suggestions for future guests, feel free to get in touch through our website or by using the star rating form on each episode page. Please also consider leaving us a review wherever you're listening to this (e.g. Apple Podcasts) — it's probably the easiest (free) means of growing the show. If you want to support the show more directly and help us keep hosting these episodes online, consider leaving a tip.

Thanks for listening!


Peter Singer is a moral philosopher and public intellectual, most widely known for his writings about animal ethics and global poverty.

In this episode we discuss:

  • [00:00:00] Introduction
  • [00:01:43] Background — Peter Singer introduces himself.
  • [00:02:16] Speciesism — Defining the term, and explaining the case against speciesism.
  • [00:06:55] Wild Animal Suffering — Should we intervene to reduce suffering in nature?
  • [00:09:00] Weighing and Ending Animal Lives — Are all animal lives equal? What, if anything, is wrong with (painlessly) killing animals?
  • [00:13:20] Ignoring Animals — Why did thinkers of the past apparently neglect the moral worth of animals? Why is animal ethics relatively new?
  • [00:16:50] History of Western Attitudes to Animals — Can we trace the origins of contemporary attitudes to animals back to ancient Greece and Judeo-Christian values?
  • [00:21:07] Counterfactual Impact of Animal Advocacy
  • [00:24:10] The Power of Moral Argument
  • [00:25:00] The Schwitzgebel Study
  • [00:29:30] What should we do now? — Are veganism and vegetarianism all-or-nothing decisions? Or is it worth choosing a more incremental pathway?
  • [00:32:25] The case for Human Challenge Trials
  • [00:35:46] Trade-off between Lives and Well-being in Lockdowns — Can the cure for the pandemic be worse than the disease? How would we know?
  • [00:42:07] Moral Realism — Parfit's 'Future Tuesday Indifference'
  • [00:46:10] Other Moral Systems — What about egalitarianism or prioritarianism?
  • [00:49:10] Controversial ideas
  • [00:52:28] Journal of Controversial Ideas
  • [00:54:35] What have you changed your mind about?
  • [00:55:57] Book Recommendations
  • [00:57:18] Where to Find PS Online

You can read much more on these topics in our accompanying write-up: https://hearthisidea.com/episodes/peter.

If you have any feedback or suggestions for future guests, please get in touch through our website. Please also consider leaving us a review wherever you're listening to this. If you want to support the show more directly and help us keep hosting these episodes online, consider leaving a tip at www.tips.pinecast.com/jar/hear-this-idea. Thanks for listening!



Tobias Cremer is a PhD student in Politics and International Studies. His thesis examines the relationship between right-wing populism and religion in Western Europe and North America.

You can read more in this episode's accompanying write-up.

If you have any feedback or suggestions for future guests, please get in touch through our website.

Please consider leaving a review on Apple Podcasts or wherever you're listening to this; we're just starting out and it really helps listeners find us!

If you want to support the show more directly, you can also buy us a drink here.

#76 – Joe Carlsmith on Scheming AI

03/16/24 • 111 min

Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in philosophy from the University of Oxford.

You can find links and a transcript at www.hearthisidea.com/episodes/carlsmith

In this episode we talked about a report Joe recently authored, titled ‘Scheming AIs: Will AIs fake alignment during training in order to get power?’. The report “examines whether advanced AIs that perform well in training will be doing so in order to gain power later”, a behaviour Carlsmith calls scheming.

We talk about:

  • Distinguishing ways AI systems can be deceptive and misaligned
  • Why powerful AI systems might acquire goals that go beyond what they’re trained to do, and how those goals could lead to scheming
  • Why scheming goals might perform better (or worse) in training than less worrying goals
  • The ‘counting argument’ for scheming AI
  • Why goals that lead to scheming might be simpler than the goals we intend
  • Things Joe is still confused about, and research project ideas

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!



FAQ

How many episodes does Hear This Idea have?

Hear This Idea currently has 86 episodes available.

What topics does Hear This Idea cover?

The podcast is about Society & Culture, Podcasts, Social Sciences, Science and Philosophy.

What is the most popular episode on Hear This Idea?

The episode title 'Bonus: 'How I Learned To Love Shrimp' & David Coman-Hidy' is the most popular.

What is the average episode length on Hear This Idea?

The average episode length on Hear This Idea is 99 minutes.

How often are episodes of Hear This Idea released?

Episodes of Hear This Idea are typically released every 17 days, 1 hour.

When was the first episode of Hear This Idea?

The first episode of Hear This Idea was released on Jan 25, 2020.

