Eric Schwitzgebel on user perception of the moral status of AI

The Sentience Institute Podcast

02/15/24 • 57 min

I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient, and so they evoke appropriate emotional reactions in users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will attach to so much, and think is sentient, that they'd be willing to make excessive sacrifices for this thing that isn't really sentient.

  • Eric Schwitzgebel

Why should AI systems be designed so as not to confuse users about their moral status? What would make an AI system's sentience or moral standing clear? Are there downsides to treating an AI as not sentient even if it's not sentient? What happens when some theories of consciousness disagree about AI consciousness? Have the developments in large language models in the last few years come faster or slower than Eric expected? Where does Eric think we will see sentience first in AI, if we do?

Eric Schwitzgebel is professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His books include Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt), Perplexities of Consciousness, A Theory of Jerks and Other Philosophical Misadventures, and most recently The Weirdness of the World. He blogs at The Splintered Mind.

Topics discussed in the episode:

  • Introduction (0:00)
  • Introduction to the paper "AI systems must not confuse users about their sentience or moral status" (3:14)
  • Not confusing experts (5:30)
  • Not confusing general users (9:12)
  • What would make an AI system's sentience or moral standing clear? (13:21)
  • Are there downsides to treating an AI as not sentient even if it’s not sentient? (16:33)
  • How would we implement this solution at a policy level? (25:19)
  • What happens when some theories of consciousness disagree about AI consciousness? (28:24)
  • How does this approach to uncertainty in AI consciousness relate to Jeff Sebo’s approach? (34:15)
  • Introduction to the paper "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness" (36:38)
  • How does the indicator properties approach account for factors relating to consciousness that we might be missing? (39:37)
  • What was the process for determining what indicator properties to include? (42:58)
  • Advantages of the indicator properties approach (44:49)
  • Have the developments in large language models in the last few years come faster or slower than Eric expected? (46:25)
  • Where does Eric think we will see sentience first in AI, if we do? (50:17)
  • Are things like grounding or embodiment essential for understanding and consciousness? (53:35)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show

Transcript

Speaker 1

Welcome to the Sentience Institute Podcast, and to our 23rd episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. On the Sentience Institute Podcast, we interview researchers about moral circle expansion and digital minds, which are AIs that seem to have mental faculties such as autonomy, agency, and sentience. Our guest for today is Eric Schwitzgebel. Eric is professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology.
