94 - Robot Friendship and Hatred

11/01/21

Philosophical Disquisitions

Can we move beyond the Aristotelian account of friendship when thinking about our relationships with robots? Can we hate robots? In this episode, I talk to Helen Ryland about these topics. Helen is a UK-based philosopher. She completed her PhD in Philosophy in 2020 at the University of Birmingham. She now works as an Associate Lecturer for The Open University. Her work examines human-robot relationships, video game ethics, and the personhood and moral status of marginal cases of human rights (e.g., subjects with dementia, nonhuman animals, and robots).

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:
  • What is friendship and why does it matter?
  • The Aristotelian account of friendship
  • Limitations of the Aristotelian account
  • Moving beyond Aristotle
  • The degrees of friendship model
  • Why we can be friends with robots
  • Criticisms of robot-human friendship
  • The possibility of hating robots
  • Do we already hate robots?
  • Why would it matter if we did hate robots?

Relevant Links

Subscribe to the newsletter

Previous Episode

93 - Will machines impede moral progress?


[Photo: Thomas Sinclair (left), Ben Kenward (right)]

Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Should machines not be designed to allow for such changes? If machines are programmed to follow our current values will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:
  • What is a moral value?
  • What is a moral machine?
  • What is moral progress?
  • Has society progressed, morally speaking, in the past?
  • How can we design moral machines?
  • What's the problem with getting machines to follow our current moral consensus?
  • Will people over-defer to machines? Will they outsource their moral reasoning to machines?
  • Why is a lack of moral progress such a problem right now?

Relevant Links

Next Episode

95 - The Psychology of the Moral Circle

I was raised in the tradition of believing that everyone is of equal moral worth. But when I scrutinise my daily practices, I don’t think I can honestly say that I act as if everyone is of equal moral worth. The idea that some people belong within the circle of moral concern and some do not is central to many moral systems. But what affects the dynamics of the moral circle? How does it contract and expand? Can it expand indefinitely? In this episode, I discuss these questions with Joshua Rottman. Josh is an Associate Professor in the Department of Psychology and the Program in Scientific and Philosophical Studies of Mind at Franklin and Marshall College. His research is situated at the intersection of cognitive development and moral psychology, and he primarily focuses on studying the factors that lead certain entities and objects to be attributed with (or stripped of) moral concern.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:
  • The normative significance of moral psychology
  • The concept of the moral circle
  • How the moral circle develops in children
  • How the moral circle changes over time
  • Can the moral circle expand indefinitely?
  • Do we have a limited budget of moral concern?
  • Do most people underuse their budget of moral concern?
  • Why do some people prioritise the non-human world over marginal humans?

Relevant Links

Subscribe to the newsletter
