Philosophical Disquisitions
John Danaher
Top 10 Philosophical Disquisitions Episodes
Goodpods has curated a list of the 10 best Philosophical Disquisitions episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Philosophical Disquisitions for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Philosophical Disquisitions episode by adding your comments to the episode page.
103 - GPT: How worried should we be?
03/23/23
In this episode of the podcast, I chat to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology in Sweden. We talk about GPT and LLMs more generally. What are they? Are they intelligent? What risks do they pose or presage? Are we proceeding with the development of this technology in a reckless way? We try to answer all these questions, and more.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Subscribe to the newsletter
Epicureanism and the Problem of Premature Death (Audio Essay)
06/02/19
This audio essay looks at the Epicurean philosophy of death, focusing specifically on how they addressed the problem of premature death. The Epicureans believe that premature death is not a tragedy, provided it occurs after a person has attained the right state of pleasure. If you enjoy listening to these audio essays, and the other podcast episodes, you might consider rating and/or reviewing them on your preferred podcasting service.
You can listen below or download here. You can also subscribe on Apple, Stitcher or a range of other services (the RSS feed is here).
I've written lots about the philosophy of death over the years. Here are some relevant links if you would like to do further reading on the topic:
- The Badness of Death and the Meaning of Life (index)
- The Lucretian Symmetry Argument (Part 1 and Part 2)
- Is Death Bad or Less Good? (Part 1, Part 2, Part 3, and Part 4)
107 - Will Large Language Models disrupt healthcare?
04/19/23
In this episode of the podcast I chat to Jess Morley. Jess is currently a DPhil candidate at the Oxford Internet Institute. Her research focuses on the use of data in healthcare, oftentimes on the impact of big data and AI, but, as she puts it herself, usually on 'less whizzy' things. Sadly, our conversation focuses on the whizzy things, in particular the recent hype about large language models and their potential to disrupt the way in which healthcare is managed and delivered. Jess is sceptical about the immediate potential for disruption but thinks it is worth exploring, carefully, the use of this technology in healthcare.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
New Podcast Series - 'This is Technology Ethics'
09/25/23
I am very excited to announce the launch of a new podcast series with my longtime friend and collaborator Sven Nyholm. The podcast introduces key themes, concepts, arguments and ideas arising from the ethics of technology. It roughly follows the structure of Sven's book This is Technology Ethics, but in a loose and conversational style. Across the nine episodes, we will cover the nature of technology and ethics, the methods of technology ethics, and the problems of control, responsibility, agency and behaviour change that are central to many contemporary debates about the ethics of technology. We will also cover perennially popular topics such as whether a machine could have moral status, whether a robot could (or should) be a friend, lover or work colleague, and the desirability of merging with machines. The podcast is intended to be accessible to a wide audience and could serve as an ideal companion to an introductory or advanced course in the ethics of technology (with particular focus on AI, robotics and other digital technologies).
I will be releasing the podcast on the Philosophical Disquisitions podcast feed, but I have also created an independent podcast feed and website if you are only interested in this series. The first episode can be downloaded here or you can listen below. You can also subscribe on Apple, Spotify, Amazon and a range of other podcasting services.
If you go to the website or subscribe via the standalone feed, you can download the first two episodes now. There is also a promotional tie-in with the book's publisher: if you use the code 'TEC20' on the publisher's website (here), you can get 20% off the regular price.
65 - Vold on How We Can Extend Our Minds With AI
11/22/19
In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research Fellow at the Faculty of Philosophy, and a Digital Charter Fellow at the Alan Turing Institute. We talk about the ethics of extended cognition and how it pertains to the use of artificial intelligence. This is a fascinating topic because it addresses one of the oft-overlooked effects of AI on the human mind.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).
Show Notes
- 0:00 - Introduction
- 1:55 - Some examples of AI cognitive extension
- 13:07 - Defining cognitive extension
- 17:25 - Extended cognition versus extended mind
- 19:44 - The Coupling-Constitution Fallacy
- 21:50 - Understanding different theories of situated cognition
- 27:20 - The Coupling-Constitution Fallacy Redux
- 30:20 - What is distinctive about AI-based cognitive extension?
- 34:20 - The three/four different ways of thinking about human interactions with AI
- 40:04 - Problems with this framework
- 49:37 - The Problem of Cognitive Atrophy
- 53:31 - The Moral Status of AI Extenders
- 57:12 - The Problem of Autonomy and Manipulation
- 58:55 - The policy implications of recognising AI cognitive extension
Relevant Links
- Karina's homepage
- Karina at the Leverhulme Centre for the Future of Intelligence
- "AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI" by José Hernández-Orallo and Karina Vold
- "The Parity Argument for Extended Consciousness" by Karina
- "Are ‘you’ just inside your skin or is your smartphone part of you?" by Karina
- "The Extended Mind" by Clark and Chalmers
- Theory and Application of the Extended Mind (series by me)
87 - AI and the Value Alignment Problem
12/23/20
How do we make sure that an AI does the right thing? How could we do this when we ourselves don't even agree on what the right thing might be? In this episode, I talk to Iason Gabriel about these questions. Iason is a political theorist and ethicist currently working as a Research Scientist at DeepMind. His research focuses on the moral questions raised by artificial intelligence. His recent work addresses the challenge of value alignment, responsible innovation, and human rights. He has also been a prominent contributor to the debate about the ethics of effective altruism.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Show Notes:
Topics discussed include:
- What is the value alignment problem?
- Why is it so important that we get value alignment right?
- Different ways of conceiving the problem
- How different AI architectures affect the problem
- Why there can be no purely technical solution to the value alignment problem
- Six potential solutions to the value alignment problem
- Why we need to deal with value pluralism and uncertainty
- How political theory can help to resolve the problem
Relevant Links
- Iason on Twitter
- "Artificial Intelligence, Values and Alignment" by Iason
- "Effective Altruism and its Critics" by Iason
- My blog series on the above article
- "Social Choice Ethics in Artificial Intelligence" by Seth Baum
#59 - Torres on Existential Risk, Omnicidal Agents and Superintelligence
05/09/19
In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin of the Atomic Scientists, Futures, Erkenntnis, Metaphilosophy, Foresight, Journal of Futures Studies, and the Journal of Evolution and Technology. He is the author of several books, including most recently Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. We talk about the problem of apocalyptic terrorists, the proliferation of dual-use technology and the governance problem that arises as a result. This is both a fascinating and potentially terrifying discussion.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).
Show Notes
- 0:00 – Introduction
- 3:14 – What is existential risk? Why should we care?
- 8:34 – The four types of agential risk/omnicidal terrorists
- 17:51 – Are there really omnicidal terror agents?
- 20:45 – How dual-use technology gives apocalyptic terror agents the means to their desired ends
- 27:54 – How technological civilisation is uniquely vulnerable to omnicidal agents
- 32:00 – Why not just stop creating dangerous technologies?
- 36:47 – Making the case for mass surveillance
- 41:08 – Why mass surveillance must be asymmetrical
- 45:02 – Mass surveillance, the problem of false positives and dystopian governance
- 56:25 – Making the case for benevolent superintelligent governance
- 1:02:51 – Why advocate for something so fantastical?
- 1:06:42 – Is an anti-tech solution any more fantastical than a benevolent AI solution?
- 1:10:20 – Does it all just come down to values: are you a techno-optimist or a techno-pessimist?
Relevant Links
- Phil’s webpage
- ‘Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History’ by Phil
- Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks by Phil
- ‘The Vulnerable World Hypothesis’ by Nick Bostrom
- Phil’s comparison of his paper with Bostrom’s paper
- The Guardian orders the smallpox genome
- Slaughterbots
- The Future of Violence by Ben Wittes and Gabriela Blum
- Future Crimes by Marc Goodman
- The Dyn Cyberattack
- Autonomous Technology by Langdon Winner
- 'Biotechnology and the Lifetime of Technological Civilisations’ by JG Sotos
- The God Machine Thought Experiment (Persson and Savulescu)
99 - Trusting Untrustworthy Machines and Other Psychological Quirks
11/07/22
In this episode I chat to Matthias Uhl. Matthias is a professor of the social and ethical implications of AI at the Technische Hochschule Ingolstadt. He is a behavioural scientist who has been doing a lot of work on human-AI/robot interaction, focusing in particular on applying some of the insights and methodologies of behavioural economics to these questions. We talk about three recent studies he and his collaborators have run revealing interesting quirks in how humans relate to AI decision-making systems. In particular, his findings suggest that people outsource responsibility to machines, are willing to trust untrustworthy machines, and prefer the messy discretion of human decision-makers to the precise logic of machines. Matthias's research is fascinating and has some important implications for people working in AI ethics and policy.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Relevant Links
- Matthias's Faculty Page
- 'Hiding Behind Machines: Artificial Agents May Help to Evade Punishment' by Matthias and colleagues
- 'Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions' by Matthias and colleagues
- 'People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency' by Matthias and colleagues
94 - Robot Friendship and Hatred
11/01/21
Can we move beyond the Aristotelian account of friendship when thinking about our relationships with robots? Can we hate robots? In this episode, I talk to Helen Ryland about these topics. Helen is a UK-based philosopher. She completed her PhD in Philosophy in 2020 at the University of Birmingham. She now works as an Associate Lecturer for The Open University. Her work examines human-robot relationships, video game ethics, and the personhood and moral status of marginal cases of human rights (e.g., subjects with dementia, nonhuman animals, and robots).
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Show Notes
Topics covered include:
- What is friendship and why does it matter?
- The Aristotelian account of friendship
- Limitations of the Aristotelian account
- Moving beyond Aristotle
- The degrees of friendship model
- Why we can be friends with robots
- Criticisms of robot-human friendship
- The possibility of hating robots
- Do we already hate robots?
- Why would it matter if we did hate robots?
Relevant Links
- Helen's homepage
- 'It's Friendship Jim, But Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships' by Helen
- 'Could you hate a robot? Does it matter if you could?' by Helen
75 - The Vital Ethical Contexts of Coronavirus
04/15/20
There is a lot of data and reporting out there about the COVID-19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? Can they be morally blamed for what they have done? These are the questions I discuss with my guest on today's show: David Shaw. David is a Senior Researcher at the Institute for Biomedical Ethics at the University of Basel and an Assistant Professor at the Care and Public Health Research Institute, Maastricht University. We discuss some recent writing David has been doing on the Journal of Medical Ethics blog about the coronavirus crisis.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).
Show Notes
Topics discussed include:
- Why is it important to keep death rates and other data in context?
- Is media reporting of deaths misleading?
- Why do the media discuss 'soaring' death rates and 'grim' statistics?
- Are we ignoring the unintended health consequences of COVID-19?
- Should we take the economic costs more seriously given the link between poverty/inequality and health outcomes?
- Did the UK government mishandle the response to the crisis? Are they blameworthy for what they did?
- Is it fair to criticise governments for their handling of the crisis?
- Is it okay for governments to experiment on their populations in response to the crisis?
Relevant Links
- David's Profile Page at the University of Basel
- 'The Vital Contexts of Coronavirus' by David
- 'The Slow Dragon and the Dim Sloth: What can the world learn from coronavirus responses in Italy and the UK?' by Marcello Ienca and David Shaw
- 'Don't let the ethics of despair infect the ICU' by David Shaw, Dan Harvey and Dale Gardiner
- 'Deaths in New York City Are More Than Double the Usual Total' in the NYT (getting the context right?!)
- Preliminary results from German Antibody tests in one town: 14% of the population infected
- Do Death Rates Go Down in a Recession?
- The Sun's Good Friday headline
FAQ
How many episodes does Philosophical Disquisitions have?
Philosophical Disquisitions currently has 78 episodes available.
What topics does Philosophical Disquisitions cover?
The podcast is about Society & Culture, Courses, Intelligence, Podcasts, Technology, Education, Digital, Religion, Philosophy and Ethics.
What is the most popular episode on Philosophical Disquisitions?
The episode title '103 - GPT: How worried should we be?' is the most popular.
How often are episodes of Philosophical Disquisitions released?
Episodes of Philosophical Disquisitions are typically released every 9 days, 2 hours.
When was the first episode of Philosophical Disquisitions?
The first episode of Philosophical Disquisitions was released on May 9, 2019.