
Thomas Metzinger on a moratorium on artificial sentience development

08/10/22 • 110 min

The Sentience Institute Podcast

And for an applied ethics perspective, I think the most important thing is: if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always err on the side of caution; we should always be on the safe side.

  • Thomas Metzinger

Should we advocate for a moratorium on the development of artificial sentience? What might that look like, and what would be the challenges?
Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg-Universität Mainz until 2022, and is now a professor emeritus. He was president of the German Cognitive Science Society from 2005 to 2007 and president of the Association for the Scientific Study of Consciousness from 2009 to 2011, and has been an adjunct fellow at the Frankfurt Institute for Advanced Studies since 2011. He is also a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and on the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. From 2018 to 2020, Metzinger worked as a member of the European Commission's High-Level Expert Group on Artificial Intelligence.
Topics discussed in the episode:

  • 0:00 Introduction
  • 2:12 Defining consciousness and sentience
  • 9:55 What features might a sentient artificial intelligence have?
  • 17:11 Moratorium on artificial sentience development
  • 37:46 Case for a moratorium
  • 49:30 What would a moratorium look like?
  • 53:07 Social hallucination problem
  • 55:49 Incentives of politicians
  • 1:01:51 Incentives of tech companies
  • 1:07:18 Local vs global moratoriums
  • 1:11:52 Repealing the moratorium
  • 1:16:01 Information hazards
  • 1:22:21 Trends in thinking on artificial sentience over time
  • 1:39:38 What are the open problems in this field, and how might someone work on them with their career?

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show


Previous Episode

Tobias Baumann of the Center for Reducing Suffering on global priorities research and effective strategies to reduce suffering

“We think that the most important thing right now is capacity building. We’re not so much focused on having impact now or in the next year, we’re thinking about the long term and the very big picture... Now, what exactly does capacity building mean? It can simply mean getting more people involved... I would frame it more in terms of building a healthy community that’s stable in the long term... And one aspect that’s just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering. You could call it ‘wisdom building’... And CRS aims to contribute to [both] through our research... Some people just naturally tend to be more inclined to explore a lot of different topics... Others have maybe more of a tendency to dive into something more specific and dig up a lot of sources and go into detail and write a comprehensive report and I think both these can be very valuable... What matters is just that overall your work is contributing to progress on... the most important questions of our time.”

  • Tobias Baumann

There are many different ways that we can reduce suffering or have other forms of positive impact. But how can we increase our confidence about which actions are most cost-effective? And what can people do now that seems promising?

Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.

Topics discussed in the episode:

  • Who is currently working to reduce risks of astronomical suffering in the long-term future (“s-risks”) and what are they doing? (2:50)
  • What are “information hazards,” how concerned should we be about them, and how can we reduce them? (12:21)
  • What is the Center for Reducing Suffering’s theory of change and what are its research plans? (17:52)
  • What are the main bottlenecks to further progress in the field of work focused on reducing s-risks? (29:46)
  • Does it make more sense to work directly on reducing specific s-risks or on broad risk factors that affect many different risks? (34:27)
  • Which particular types of global priorities research seem most useful? (38:15)
  • What are some of the implications of taking a longtermist approach for animal advocacy? (45:31)
  • If we decide that focusing directly on the interests of artificial sentient beings is a high priority, what are the most important next steps in research and advocacy? (1:00:04)
  • What are the most promising career paths for reducing s-risks? (1:09:25)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Next Episode

Kurt Gray on human-robot interaction and mind perception

And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.

  • Kurt Gray

What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?

Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.

Topics discussed in the episode:

  • Introduction (0:00)
  • How did a geophysicist come to be doing social psychology? (0:51)
  • What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
  • What is mind perception? (4:45)
  • What is a mind? (7:45)
  • Agency vs experience, or thinking vs feeling (9:40)
  • Why do people see moral exemplars as being insensitive to pain? (10:45)
  • How will people perceive minds in robots/AI? (18:50)
  • Perspective taking as a tool to reduce substratism towards AI (29:30)
  • Why don’t people like using AI to make moral decisions? (32:25)
  • What would be the moral status of AI if they are not sentient? (38:00)
  • The presence of robots can make people seem more similar (44:10)
  • What can we expect about discrimination towards digital minds in the future? (48:30)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Transcript

Michael Dello-Iacovo

Welcome to the Sentience Institute podcast and to our 18th episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. Returning listeners might notice a different accent today. And that's because I have taken over as host of our podcast from Jamie Harris. And needless to say, I have very big shoes to fill. On