The Sentience Institute Podcast

Sentience Institute

Interviews with activists, social scientists, entrepreneurs and change-makers about the most effective strategies to expand humanity’s moral circle, with an emphasis on expanding the circle to farmed animals. Host Jamie Harris, a researcher at moral expansion think tank Sentience Institute, takes a deep dive with guests into advocacy strategies from political initiatives to corporate campaigns to technological innovation to consumer interventions, and discusses advocacy lessons from history, sociology, and psychology.

Top 10 Episodes of The Sentience Institute Podcast

Goodpods has curated a list of the 10 best episodes of The Sentience Institute Podcast, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to The Sentience Institute Podcast for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite episode by adding your comments to the episode page.

Raphaël Millière on large language models

07/03/23 • 109 min

Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do by interacting with the world, and so interactive learning, not just passive learning. You want something that's more active where the model is going to actually test out some hypotheses, and learn from the feedback it's getting from the world about these hypotheses in the way children do, it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists, you see babies grabbing their feet, and testing whether that's part of my body or not, and learning gradually and very quickly learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.

  • Raphaël Millière

How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?

Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.

Topics discussed in the episode:

  • Introduction (0:00)
  • How Raphaël came to work on AI (1:25)
  • How do large language models work? (5:50)
  • Deflationary and inflationary claims about large language models (19:25)
  • The dangers of overclaiming and underclaiming (25:20)
  • Summary of cognitive capacities large language models might have (33:20)
  • Intelligence (38:10)
  • Artificial general intelligence (53:30)
  • Consciousness and sentience (1:06:10)
  • Theory of mind (1:18:09)
  • Compositionality (1:24:15)
  • Language understanding and referential grounding (1:30:45)
  • Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
  • Conclusion (1:47:23)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Matti Wilks on human-animal interaction and moral circle expansion

01/19/23 • 66 min

Speciesism being socially learned is probably our most dominant theory of why we think we're getting the results that we're getting. But to be very clear, this is super early research. We have a lot more work to do. And it's actually not just in the context of speciesism that we're finding this stuff. So basically we've run some studies showing that while adults will prioritize humans over even very large numbers of animals in sort of tragic trade-offs, children are much more likely to prioritize humans' and animals' lives similarly. So an adult will save one person over a hundred dogs or pigs, whereas children will save, I think it was two dogs or six pigs, over one person. And this was children that were about five to 10 years old. So often when you look at biases in development, so something like minimal group bias, that peaks quite young.

  • Matti Wilks

What does our understanding of human-animal interaction imply for human-robot interaction? Is speciesism socially learned? Does expanding the moral circle dilute it? Why is there a correlation between naturalness and acceptableness? What are some potential interventions for moral circle expansion and spillover from and to animal advocacy?

Matti Wilks is a lecturer (assistant professor) in psychology at the University of Edinburgh. She uses approaches from social and developmental psychology to explore barriers to prosocial and ethical behavior—right now she is interested in factors that shape how we morally value others, the motivations of unusually altruistic groups, why we prefer natural things, and our attitudes towards cultured meat. Matti completed her PhD in developmental psychology at the University of Queensland, Australia, and was a postdoc at Princeton and Yale Universities.

Topics discussed in the episode:

  • Introduction (0:00)
  • What matters ethically? (1:00)
  • The link between animals and digital minds (3:10)
  • Higher vs lower orders of pleasure/suffering (4:15)
  • Psychology of human-animal interaction and what that means for human-robot interaction (5:40)
  • Is speciesism socially learned? (10:15)
  • Implications for animal advocacy strategy (19:40)
  • Moral expansiveness scale and the moral circle (23:50)
  • Does expanding the moral circle dilute it? (27:40)
  • Predictors for attitudes towards species and artificial sentience (30:05)
  • Correlation between naturalness and acceptableness (38:30)
  • What does our understanding of naturalness and acceptableness imply for attitudes towards cultured meat? (49:00)
  • How can we counter concerns about naturalness in cultured meat? (52:00)
  • What does our understanding of attitudes towards naturalness imply for artificial sentience? (54:00)
  • Interventions for moral circle expansion and spillover from and to animal advocacy (56:30)
  • Academic field building as a strategy for developing a cause area (1:00:50)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

David Gunkel on robot rights

12/05/22 • 64 min

Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different.

  • David Gunkel

Can and should robots and AI have rights? What’s the difference between robots and AI? Should we grant robots rights even if they aren’t sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?

David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA).

Topics discussed in the episode:

  • Introduction (0:00)
  • Why robot rights and not AI rights? (1:12)
  • The other question: can and should robots have rights? (5:39)
  • What is the case for robot rights? (10:21)
  • What would robot rights look like? (19:50)
  • What can we learn from other, particularly non-western, ways of thinking for robot rights? (26:33)
  • What will human-robot interaction look like in the future? (33:20)
  • How artificial sentience being less discrete than biological sentience might affect the case for rights (40:45)
  • Things we can learn from science fiction for human-robot interaction and robot rights (42:55)
  • Can and should we do anything to encourage people to see robots in a more positive light? (47:55)
  • Why David pursued philosophy of technology over computer science more generally (52:01)
  • Does having technical expertise give you more credibility? (54:01)
  • Shifts in thinking about robots and AI David has noticed over his career (58:03)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Kurt Gray on human-robot interaction and mind perception

10/30/22 • 59 min

And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.

  • Kurt Gray

What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?

Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.

Topics discussed in the episode:

  • Introduction (0:00)
  • How did a geophysicist come to be doing social psychology? (0:51)
  • What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
  • What is mind perception? (4:45)
  • What is a mind? (7:45)
  • Agency vs experience, or thinking vs feeling (9:40)
  • Why do people see moral exemplars as being insensitive to pain? (10:45)
  • How will people perceive minds in robots/AI? (18:50)
  • Perspective taking as a tool to reduce substratism towards AI (29:30)
  • Why don’t people like using AI to make moral decisions? (32:25)
  • What would be the moral status of AI if they are not sentient? (38:00)
  • The presence of robots can make people seem more similar (44:10)
  • What can we expect about discrimination towards digital minds in the future? (48:30)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast


“We think that the most important thing right now is capacity building. We’re not so much focused on having impact now or in the next year, we’re thinking about the long term and the very big picture... Now, what exactly does capacity building mean? It can simply mean getting more people involved... I would frame it more in terms of building a healthy community that’s stable in the long term... And one aspect that’s just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering. You could call it ‘wisdom building’... And CRS aims to contribute to [both] through our research... Some people just naturally tend to be more inclined to explore a lot of different topics... Others have maybe more of a tendency to dive into something more specific and dig up a lot of sources and go into detail and write a comprehensive report and I think both these can be very valuable... What matters is just that overall your work is contributing to progress on... the most important questions of our time.”

  • Tobias Baumann

There are many different ways that we can reduce suffering or have other forms of positive impact. But how can we increase our confidence about which actions are most cost-effective? And what can people do now that seems promising?

Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.

Topics discussed in the episode:

  • Who is currently working to reduce risks of astronomical suffering in the long-term future (“s-risks”) and what are they doing? (2:50)
  • What are “information hazards,” how concerned should we be about them, and how can we reduce them? (12:21)
  • What is the Center for Reducing Suffering’s theory of change and what are its research plans? (17:52)
  • What are the main bottlenecks to further progress in the field of work focused on reducing s-risks? (29:46)
  • Does it make more sense to work directly on reducing specific s-risks or on broad risk factors that affect many different risks? (34:27)
  • Which particular types of global priorities research seem most useful? (38:15)
  • What are some of the implications of taking a longtermist approach for animal advocacy? (45:31)
  • If we decide that focusing directly on the interests of artificial sentient beings is a high priority, what are the most important next steps in research and advocacy? (1:00:04)
  • What are the most promising career paths for reducing s-risks? (1:09:25)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast


“If some beings are excluded from moral consideration then the results are usually quite bad, as evidenced by many forms of both current and historical suffering... I would definitely say that those that don’t have any sort of political representation or power are at risk. That’s true for animals right now; it might be true for artificially sentient beings in the future... And yeah, I think that is a plausible priority. Another candidate would be to work on other broad factors to improve the future such as by trying to fix politics, which is obviously a very, very ambitious goal... [Another candidate would be] trying to shape transformative AI more directly. We’ve talked about the uncertainty there is regarding the development of artificial intelligence, but at least there’s a certain chance that people are right about this being a very crucial technology; and if so, shaping it in the right way is very important obviously.”

  • Tobias Baumann

Expanding humanity’s moral circle to include farmed animals and other sentient beings is a promising strategy for reducing the risk of astronomical suffering in the long-term future. But are there other causes that we could focus on that might be better? And should reducing future suffering actually be our goal?

Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.

Topics discussed in the episode:

  • Why moral circle expansion is a plausible priority for those of us focused on doing good (2:17)
  • Tobias’ view on why we should accept longtermism — the idea that the value of our actions is determined primarily by their impacts on the long-term future (5:50)
  • Are we living at the most important time in history? (14:15)
  • When, if ever, will transformative AI arrive? (20:35)
  • Assuming longtermism, should we prioritize focusing on risks of astronomical suffering in the long-term future (s-risks) or on maximizing the likelihood of positive outcomes? (27:00)
  • What sorts of future beings might be excluded from humanity’s moral circle in the future, and why might this happen? (37:45)
  • What are the main reasons to believe that moral circle expansion might not be a very promising way to have positive impacts on the long-term future? (41:40)
  • Should we focus on other forms of values spreading that might be broadly positive, rather than expanding humanity’s moral circle? (48:55)
  • Beyond values spreading, which other causes should people focused on reducing s-risks consider prioritizing? (50:25)
  • Should we expend resources on moral circle expansion and other efforts to reduce s-risk now or just invest our money and resources in order to benefit from compound interest? (1:00:02)
  • If we decide to focus on moral circle expansion, should we focus on the current frontiers of the moral circle, such as farmed animals, or focus more directly on groups of future beings we are concerned about? (1:03:06)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast


We [Faunalytics] put out a lot of things in 2020. Some of the favorites that I [Jo] have, probably top of the list, I’m really excited about our animal product impact scales, where we did a lot of background research to figure out and estimate the impact of replacing various animal products with plant-based or cultivated alternatives. Apart from that, we’ve also done some research on people’s beliefs about chickens and fish that’s intended as a starting point on a program of research so that we can look at the best ways to advocate for those smaller animals... [Rethink Priorities’] bigger projects within farmed animal advocacy include work on EU legislation, in particular our view of how much do countries comply with EU animal welfare laws and what we can do to increase compliance. Jason Schukraft wrote many articles about topics like how the moral value of animals differs across species. There has been a review of shrimp farming. I [Saulius] finished an article in which I estimate global captive vertebrate numbers. And Abraham Rowe posted an article about insects raised for food and feed which I think is a very important topic.

  • Jo Anderson and Saulius Šimčikas

There have been many new research posts relevant to animal advocacy in 2020. But which are the most important for animal advocates to pay close attention to? And what sorts of research should we prioritize in the future?

Jo Anderson is the Research Director at Faunalytics, a nonprofit that conducts, summarizes, and disseminates research relevant to animal advocacy. Saulius Šimčikas is a Senior Staff Researcher at Rethink Priorities, a nonprofit that conducts research relevant to farmed animal advocacy, wild animals, and several other cause areas associated with the effective altruism community.

Topics discussed in the episode:

  • Faunalytics and Rethink Priorities’ research in 2020 relevant to animal advocacy (1:40)
  • Jo and Saulius’ work on polling about fish welfare (5:37)
  • The impact of replacing different types of animal products (12:27)
  • To what extent should animal advocates focus on legislative campaigns rather than corporate campaigns? (16:29)
  • Experiences and turnover in the animal advocacy movement (24:33)
  • New research on the difficulties of scaling up cultured meat (28:15)
  • New research about the promise of lectures to reduce students’ animal product consumption (32:16)
  • Charity Entrepreneurship’s (many) new intervention reports (36:54)
  • How the idea of longtermism should affect animal advocacy (39:32)
  • Other exciting effective animal advocacy research published in 2020 (45:51)
  • How does all this research actually lead to impact for animals? What is the theory of change? (50:06)
  • How do you decide or prioritize which specific research topic to pursue? (56:41)
  • What are the pros and cons of working on multiple cause areas within a single research nonprofit? (1:00:11)
  • What are the pros and cons of various different types of research? (1:05:21)
  • What are the main bottlenecks that the farmed animal movement and its contributing research organizations face? (1:18:17)
  • What routes into effective animal advocacy research roles did Jamie, Jo, and Saulius take and what is the relative importance of effective animal advocacy familiarity vs. formal research experience? (1:23:49)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast


“Why inner transformation, why these practices are also built into the model: unless we root out the root cause of the issue, which is disconnection, which is a lack of understanding that we are interrelated, and therefore I have an inherent responsibility to show up in the world with kindness and compassion and to reduce the harm and the suffering that I cause in the world. Unless we’re able to do that, these problems are still going to exist. The issues of race relations still exist. How many years have people been fighting for this? The issue of homophobia, of racism, whatever it is, they still exist; why do they still exist after so much work, after so much money has been poured into it, after so many lives have been lost, so many people have been beaten and spilled their blood? They’ve shed their tears for these issues. Because unless we address the underlying schisms within human consciousness, within us as individuals, it’s still going to exist; it’s still going to be there. Direct impact, indirect impact, I just want to see impact and if you’re someone who wants to make an impact, I want to hear from you.”

  • Ajay Dahiya

Animals are harmed in all continents in the world. But how can we support the advocates seeking to help them? And what sort of support is most needed?

Ajay Dahiya is the executive director of The Pollination Project, an organisation which funds and supports grassroots advocates and organizations working towards positive social change, such as to help animals.

Topics discussed in the episode:

  • How the Pollination Project helps grassroots animal advocates (1:20)
  • How we can support grassroots animal advocacy in India and build a robust movement (12:48)
  • How the grants and support offered concretely benefit the grantees (19:22)
  • The application and review process for The Pollination Project’s grant-making (24:00)
  • What makes good grantees? And how does The Pollination Project evaluate them? (27:34)
  • How does The Pollination Project identify and evaluate grantees? (35:14)
  • How important is the non-financial support that the Pollination Project offers relative to the financial support? (44:54)
  • What similarities and differences does The Pollination Project have to other grant-makers that support effective animal advocacy? (55:23)
  • What are the difficulties of making grants in lots of different countries? (1:02:00)
  • To what extent are grassroots animal advocates constrained by a lack of funding? (1:06:26)
  • Why doesn’t The Pollination Project prioritize some of the work that it does over others? Isn’t this kind of prioritization necessary in order to maximize positive impact? (1:10:00)
  • What are the main challenges that The Pollination Project faces, preventing it from having further impact? (1:29:05)
  • What makes good grant-makers? (1:31:58)
  • How Ajay’s experience as a monk came about and how it affects his work as a grant-maker (1:34:37)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast


“The main work that really needs to be carried out here is work in the intersection of animal welfare science and the science of ecology and other fields in life science... You could also build a career, not as a scientist, but say, in public administration or government. And you can reach a position in policy-making that can be relevant for the field, so there are plenty of different options there... Getting other interventions accepted and implemented would require significant lobby work. And that’s why having people, for instance, if you have people who are sympathetic to reducing wild animal suffering, and they are working in, say, national parks administration or working with the agricultural authorities, forest authorities, or whatever, these people could really make a significant difference.”

  • Oscar Horta

Animals in the wild suffer, often to a large degree, because of natural disasters, parasites, disease, starvation, and other causes. But what can we do as individuals to help them? What are the most urgent priorities?

Oscar Horta is a Professor of philosophy at the University of Santiago de Compostela and a co-founder of the nonprofit Animal Ethics. He has published and lectured in English and other languages on topics including speciesism and wild animal welfare.

Topics discussed in the episode:

  • Why should animal advocates and researchers think more carefully about the definition of speciesism? (1:40)
  • Why Oscar believes framing our messaging in terms of speciesism and focusing on attitudes rather than behavior would help advocates to do more good (9:10)
  • How relevant is existing research to the proposed research field of welfare biology, that would consider wild animals among other animals, and how can we integrate it? (16:40)
  • What sorts of research are most urgently needed to advance the field of welfare biology and how can people go about pursuing this? (21:13)
  • Careers related to helping wild animals in policy (36:10)
  • What you can do if you already work at an animal advocacy organization or are interested in growing the field in other ways (39:45)
  • The size of the current wild animal welfare movement and the work of relevant nonprofits (51:40)
  • How can we most effectively build support for this sort of work among other animal advocates and effective altruists? (57:33)
  • How can we most effectively build a new academic field? (1:02:49)
  • To what extent is public-facing advocacy desirable at this point? (1:10:09)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Eric Schwitzgebel on user perception of the moral status of AI

02/15/24 • 57 min

I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient. And so they evoke appropriate emotional reactions to sentient users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will attach to so much and think it's sentient that they'd be willing to make excessive sacrifices for this thing that isn't really sentient.

  • Eric Schwitzgebel

Why should AI systems be designed so as to not confuse users about their moral status? What would make an AI system’s sentience or moral standing clear? Are there downsides to treating an AI as not sentient even if it’s not sentient? What happens when some theories of consciousness disagree about AI consciousness? Have the developments in large language models in the last few years come faster or slower than Eric expected? Where does Eric think we will see sentience first in AI, if we do?

Eric Schwitzgebel is professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His books include Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt), Perplexities of Consciousness, A Theory of Jerks and Other Philosophical Misadventures, and most recently The Weirdness of the World. He blogs at The Splintered Mind.

Topics discussed in the episode:

  • Introduction (0:00)
  • AI systems must not confuse users about their sentience or moral status: introduction (3:14)
  • Not confusing experts (5:30)
  • Not confusing general users (9:12)
  • What would make an AI system’s sentience or moral standing clear? (13:21)
  • Are there downsides to treating an AI as not sentient even if it’s not sentient? (16:33)
  • How would we implement this solution at a policy level? (25:19)
  • What happens when some theories of consciousness disagree about AI consciousness? (28:24)
  • How does this approach to uncertainty in AI consciousness relate to Jeff Sebo’s approach? (34:15)
  • Consciousness and artificial intelligence: insights from the science of consciousness, introduction (36:38)
  • How does the indicator properties approach account for factors relating to consciousness that we might be missing? (39:37)
  • What was the process for determining what indicator properties to include? (42:58)
  • Advantages of the indicator properties approach (44:49)
  • Have the developments in large language models in the last few years come faster or slower than Eric expected? (46:25)
  • Where does Eric think we will see sentience first in AI if we do? (50:17)
  • Are things like grounding or embodiment essential for understanding and consciousness? (53:35)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show

bookmark
plus icon
share episode

FAQ

How many episodes does The Sentience Institute Podcast have?

The Sentience Institute Podcast currently has 23 episodes available.

What topics does The Sentience Institute Podcast cover?

The podcast is about Society & Culture, Activism, Podcasts, Social Sciences, Science, Animal Rights, Strategy and Politics.

What is the most popular episode on The Sentience Institute Podcast?

The most popular episode of The Sentience Institute Podcast is 'Raphaël Millière on large language models'.

What is the average episode length on The Sentience Institute Podcast?

The average episode length on The Sentience Institute Podcast is 93 minutes.

How often are episodes of The Sentience Institute Podcast released?

Episodes of The Sentience Institute Podcast are typically released every 36 days.

When was the first episode of The Sentience Institute Podcast?

The first episode of The Sentience Institute Podcast was released on Dec 3, 2019.
