
London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora's Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other material, are available at https://calumchace.com/.
He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.

Top 10 London Futurists Episodes
Goodpods has curated a list of the 10 best London Futurists episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to London Futurists for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite London Futurists episode by adding your comments to the episode page.

01/02/25 • 42 min
Our guest in this episode is Jeff LaPorte, a software engineer, entrepreneur and investor based in Vancouver, who writes Road to Artificia, a newsletter about discovering the principles of post‐AI societies.
Calum recently came across Jeff's article “Valuing Humans in the Age of Superintelligence: HumaneRank” and thought it had some good, original ideas, so we wanted to invite Jeff onto the podcast and explore them.
Selected follow-ups:
- Jeff LaPorte personal business website
- Road to Artificia: A newsletter about discovering the principles of societies post‐AI
- Valuing Humans in the Age of Superintelligence: HumaneRank
- Ideas Lying Around - article by Cory Doctorow about a famous saying by Milton Friedman
- PageRank - Wikipedia
- Nosedive (Black Mirror episode) - IMDb
- The Economic Singularity - book by Calum Chace
- World Chess Championship 2024 - Wikipedia
- WALL-E (2008 movie) - IMDb
- A day in the life of Asimov, 2045 - short story by David Wood
- Why didn't electricity immediately change manufacturing? - by Tim Harford, BBC
- Responsible use of artificial intelligence in government - Government of Canada
- Bipartisan House Task Force Report on Artificial Intelligence - U.S. House of Representatives
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


Innovating in education: the Codam experience, with David Giron
London Futurists
07/06/23 • 34 min
In this episode, our guest is David Giron, the director of what is arguably one of the world's most innovative educational initiatives, Codam College in Amsterdam. David was previously the head of studies at Codam's famous parent school, 42 in Paris, and he has now spent 10 years putting into practice the somewhat revolutionary ideas of the 42 network. We ask David what he has learned during these ten years, but we're especially interested in his views on how the world of education stands to be changed even further in the months and years ahead by generative AI.
Selected follow-ups:
https://www.codam.nl/en/team
https://42.fr/en/network-42/
Topics addressed in this episode include:
*) David's background at Epitech and 42 before joining Codam
*) The peer-to-peer framework at the heart of 42
*) Learning without teachers
*) Student assessment without teachers
*) Connection with the "competency-based learning" or "mastery learning" ideas of Sir Ken Robinson
*) Extending the 42 learning method beyond software engineering to other fields
*) Two ways of measuring whether the learning method is successful
*) Is it necessary for a school to fail some students from time to time?
*) The impact of Covid on the offline collaborative approach of Codam
*) ChatGPT is more than a tool; it is a "topic", on which people are inclined to take sides
*) Positive usage models for ChatGPT within education
*) Will ChatGPT make the occupation of software engineering a "job from the past"?
*) Software engineers will shift their skills from code-writing to prompt-writing
*) Why generative AI is likely to have a faster impact on work than the introduction of mechanisation
*) The adoption rate of generative AI by Codam students - and how it might change later this year
*) Code first or comment first?
*) The level of interest in Codam shown by other educational institutions
*) The resistance to change within traditional educational institutions
*) "The revolution is happening outside"
*) From "providing knowledge" to "creating a learning experience"
*) From large language models to full video systems that are individually tailored to help each person learn whatever they need in order to solve problems
*) Learning to code as a proxy for the more fundamental skill of learning to learn
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


Anticipating Longevity Escape Velocity, with Aubrey de Grey
London Futurists
11/30/22 • 30 min
One area of technology that is frequently in the news these days is rejuvenation biotechnology, namely the possibility of undoing key aspects of biological aging via a suite of medical interventions. What these interventions target isn't individual diseases, such as cancer, stroke, or heart disease, but rather the common aggravating factors that lie behind the increasing prevalence of these diseases as we become older.
Our guest in this episode has been at the forefront of a series of breakthrough initiatives in rejuvenation biotechnology for over 20 years. He is Dr Aubrey de Grey, co-founder of the Methuselah Foundation, the SENS Research Foundation, and, most recently, the LEV Foundation - where 'LEV' stands for Longevity Escape Velocity.
Topics discussed include:
*) Different concepts of aging and damage repair;
*) Why the outlook for damage repair is significantly more tangible today than it was ten years ago;
*) The role of foundations in supporting projects which cannot receive funding from commercial ventures;
*) Questions of pace of development: cautious versus bold;
*) Changing timescales for the likely attainment of robust mouse rejuvenation ('RMR') and longevity escape velocity ('LEV');
*) The "Less Death" initiative;
*) "Anticipating anticipation" - preparing for likely sweeping changes in public attitude once understanding spreads about the forthcoming available of powerful rejuvenation treatments;
*) Various advocacy initiatives that Aubrey is supporting;
*) Ways in which listeners can help to accelerate the attainment of LEV.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some follow-up reading:
https://levf.org
https://lessdeath.org


12/26/24 • 34 min
Our subject in this episode is altruism – our human desire and instinct to assist each other, making some personal sacrifices along the way. More precisely, our subject is the possible future of altruism – a future in which our philanthropic activities – our charitable donations, and how we spend our discretionary time – could have a considerably greater impact than at present. The issue is that many of our present activities, which are intended to help others, aren’t particularly effective.
That’s the judgement reached by our guest today, Stefan Schubert. Stefan is a researcher in philosophy and psychology, currently based in Stockholm, Sweden, and has previously held roles at the LSE and the University of Oxford. Stefan is the co-author of the recently published book “Effective Altruism and the Human Mind”.
Selected follow-ups:
- Stefan Schubert - Effective Altruism
- Effective Altruism and the Human Mind: The Clash Between Impact and Intuition - Oxford University Press (open access)
- Centre for Effective Altruism
- Professor Nadira Faber - Uehiro Institute, Oxford
- What are the best charities to support in 2024? - Giving What We Can
- Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed - Time
- Virtues for Real-World Utilitarians - by Stefan Schubert & Lucius Caviola, Utilitarianism
- Deworming - Effective Altruism Forum
- What we know about Musk's cost-cutting mission - BBC article about DOGE
- What is your p(doom)? with Darren McKee
- Longtermism - Wikipedia
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


Taming the Machine, with Nell Watson
London Futurists
06/20/24 • 46 min
Those who rush to leverage AI’s power without adequate preparation face difficult blowback, scandals, and could provoke harsh regulatory measures. However, those who have a balanced, informed view on the risks and benefits of AI, and who, with care and knowledge, avoid either complacent optimism or defeatist pessimism, can harness AI’s potential, and tap into an incredible variety of services of an ever-improving quality.
These are some words from the introduction of the new book, “Taming the machine: ethically harness the power of AI”, whose author, Nell Watson, joins us in this episode.
Nell’s many roles include: Chair of IEEE’s Transparency Experts Focus Group, Executive Consultant on philosophical matters for Apple, and President of the European Responsible Artificial Intelligence Office. She also leads several organizations such as EthicsNet.org, which aims to teach machines prosocial behaviours, and CulturalPeace.org, which crafts Geneva Conventions-style rules for cultural conflict.
Selected follow-ups:
- Nell Watson's website
- Taming the Machine - book website
- BodiData (corporation)
- Post Office Horizon scandal: Why hundreds were wrongly prosecuted - BBC News
- Dutch scandal serves as a warning for Europe over risks of using algorithms - Politico
- Robodebt: Illegal Australian welfare hunt drove people to despair - BBC News
- What is the infected blood scandal and will victims get compensation? - BBC News
- MIRI 2024 Mission and Strategy Update - from the Machine Intelligence Research Institute (MIRI)
- British engineering giant Arup revealed as $25 million deepfake scam victim - CNN
- Zersetzung psychological warfare technique - Wikipedia
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

04/23/25 • 43 min
Our subject in this episode may seem grim – it’s the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time.
These scenarios aren’t pleasant to contemplate, but there’s a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean Ó hÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we’ll be discussing, “Extinction of the human species: What could cause it and how likely is it to occur?”
Sean is presently based in Cambridge where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.
Selected follow-ups:
- Seán Ó hÉigeartaigh - Leverhulme Centre Profile
- Extinction of the human species - by Sean Ó hÉigeartaigh
- Herman Kahn - Wikipedia
- Moral.me - by Conscium
- Classifying global catastrophic risks - by Shahar Avin et al
- Defence in Depth Against Human Extinction - by Anders Sandberg et al
- The Precipice - book by Toby Ord
- Measuring AI Ability to Complete Long Tasks - by METR
- Cold Takes - blog by Holden Karnofsky
- What Comes After the Paris AI Summit? - Article by Sean
- ARC-AGI - by François Chollet
- Henry Shevlin - Leverhulme Centre profile
- Eleos (includes Rosie Campbell and Robert Long)
- NeurIPS talk by David Chalmers

A narrow path to a good future with AI, with Andrea Miotti
London Futurists
10/21/24 • 41 min
Our guest in this episode is Andrea Miotti, the founder and executive director of ControlAI. On their website, ControlAI have the tagline, “Fighting to keep humanity in control”. Control over what, you might ask. The website answers: control deepfakes, control scaling, control foundation models, and, yes, control AI.
The latest project from ControlAI is called “A Narrow Path”, which is a comprehensive policy plan split into three phases: Safety, Stability, and Flourishing. To be clear, the envisioned flourishing involves what is called “Transformative AI”. This is no anti-AI campaign, but rather an initiative to “build a robust science and metrology of intelligence, safe-by-design AI engineering, and other foundations for transformative AI under human control”.
The initiative has already received lots of feedback, both positive and negative, which we discuss.
Selected follow-ups:
- A Narrow Path - main website
- ControlAI
- Conjecture - Redefining AI Safety
- What is Agentic AI - Interface.AI
- Chat GPT’s new O1 model escaped its environment to complete “impossible” hacking task - by Mihai Andrei
- Biological Weapons Convention - United Nations
- Poisoning of Sergei and Yulia Skripal - Wikipedia (use of Novichok nerve agent in Salisbury, UK)
- Gathering of AI Safety Institutes in November in San Francisco
- Conscium - Pioneering safe, efficient AI
- The UK's APPG (All Party Parliamentary Group) on AI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

What's new in longevity, with Martin O'Dea
London Futurists
08/09/23 • 30 min
Our guest in this episode is Martin O'Dea. As the CEO of Longevity Events Limited, Martin is the principal organiser of the annual Longevity Summit Dublin. In a past life, Martin lectured on business strategy at Dublin Business School. He has been keeping a close eye on the longevity space for more than ten years, and is well placed to speak about how the field is changing. Martin sits on a number of boards including the LEV Foundation, where, full disclosure, so does David.
This conversation is a chance to discover, ahead of time, what some of the highlights are likely to be at this year's Longevity Summit Dublin, which is taking place from 17th to 20th August.
Selected follow-ups:
https://longevitysummitdublin.com/
https://www.levf.org/projects/robust-mouse-rejuvenation-study-1
Topics addressed in this episode include:
*) Emma Teeling and the unexpected longevity of bats
*) Steve Austad and a wide range of long-lived animal species, as featured in his recent book "Methuselah's Zoo"
*) Michael Levin and the role of bioelectrical networks in the coordination of cells during embryogenesis and regeneration
*) Filling four days of talks - "not an issue at all"
*) A special focus on "the hard problems of aging"
*) The work of the LEV (Longevity Escape Velocity) Foundation and the vision of Aubrey de Grey
*) Various signs of growing public interest in intervening in the biology of aging
*) A look back at a conference in London in 2010
*) Two events in 2013: academic publications on "hallmarks of aging", and Google's creation of Calico
*) Multi-million dollar investments in longevity are increasingly becoming "just pocket change... par for the course"
*) Selective interest from media and documentary makers, coupled with some hesitancy
*) Playing tennis at the age of 110 with your great grandchildren - and then what?
*) The possibility of "a ChatGPT moment for longevity" that changes public opinion virtually overnight
*) Why the attainment of RMR (Robust Mouse Rejuvenation) would be a seminal event
*) The rationale for trying a variety of different life-extending interventions in combination - and why pharmaceutical companies and academics have both shied away from such an experiment
*) The four treatments trialled in phase 1 of RMR, with other treatments under consideration for later phases
*) A message to any billionaires listening
*) A message to any politicians listening: the longevity dividend, as expounded by Andrew Scott and Andrew Steele
*) Another potential seminal moment: the TAME trial (Targeting Aging with Metformin), as advocated by Nir Barzilai
*) Why researchers who wanted to work on aging had to work on Parkinson's instead
*) Looking ahead to 2033
*) The role of longevity summits in strengthening the longevity community and setting individuals on new trajectories in their lives
*) The benefits of maintaining a collaborative, open attitude, without the obstacles of NDAs (Non-Disclosure Agreements)
*) Options for progress accelerating, not just from exponential trends, but from intersections of insights from different fields
*) Beware naïve philosophical concerns about entropy and about the presumed wisdom of evolution
*) The sad example of campaigner Aaron Swartz
*) Important roles for decentralized science alongside existing commercial models
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

GPT-4 transforming education, with Donald Clark
London Futurists
06/08/23 • 47 min
The launch of GPT-4 on 14th March has provoked concerns and searching questions, and nowhere more so than in the education sector. Earlier this month, the share price of US edutech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model.
Looking ahead, GPT models seem to put flesh on the bones of the idea that all students could one day have a personal tutor as effective as Aristotle, who tutored Alexander the Great. When that happens, students should leave school and university far, far better educated than we were.
Donald Clark is the ideal person to discuss this with. He founded Epic Group in 1983, and made it the UK’s largest provider of bespoke online education services before selling it in 2005. He is now the CEO of an AI learning company called WildFire, and an investor in and Board member of several other education technology businesses. In 2020 he published a book called Artificial Intelligence for Learning.
Selected follow-ups:
https://donaldclarkplanb.blogspot.com/
https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
https://www.amazon.co.uk/Case-against-Education-System-Waste/dp/0691196451/
https://www.amazon.co.uk/Head-Hand-Heart-Intelligence-Over-Rewarded/dp/1982128461/
Topics addressed in this episode include:
*) "Education is a bit of a slow learner"
*) Why GPT-4 has unprecedented potential to transform education
*) The possibility of an online universal teacher
*) Traditional education sometimes fails to follow best pedagogical practice
*) Accelerating "time to competence" via personalised tuition
*) Calum's experience learning maths
*) How Khan Academy and Duolingo are partnering with GPT-4
*) The significance of the large range of languages covered by ChatGPT
*) The recent essay on "The Age of AI" by Bill Gates
*) Students learning social skills from each other
*) An imbalanced societal focus on educating and valuing "head" rather than "heart" or "hand"
*) "The case against education" by Bryan Caplan
*) Evidence of wide usage of ChatGPT by students of all ages
*) Three gaps between GPT-4 and AGI, and how they are being bridged by including GPT-4 in "ensembles"
*) GPT-4 has a better theory of physics than GPT-3.5
*) Encouraging a generative AI to learn about a worldview via its own sensory input, rather than directly feeding a worldview into it
*) Pros and cons of "human exceptionalism"
*) How GPT-4 is upending our ideas on the relation between language and intelligence
*) Generative AI, the "C skills", and the set of jobs left for humans to do
*) Custer's last stand?
*) Three camps regarding progress toward AGI
*) Investors' reactions to Italy banning ChatGPT (subsequently reversed)
*) Different views on GDPR and European legislation
*) Further thoughts on implications of GPT-4 for the education industry
*) Shocking statistics on declining enrolment numbers in US universities
*) Beyond exclusivity: "A tutor

Education and work - past, present, and future, with Riaz Shah
London Futurists
01/25/24 • 37 min
Our guest in this episode is Riaz Shah. Until recently, Riaz was a partner at EY, where he spent 27 years specialising in technology and innovation. Towards the end of his time at EY he became a Professor of Innovation & Leadership at Hult International Business School, where he leads sessions with senior executives of global companies.
In 2016, Riaz took a one-year sabbatical to open the One Degree Academy, a free school in a disadvantaged area of London. There’s an excellent TEDx talk from 2020 about how that happened, and about how to prepare for the very uncertain future of work.
This discussion, which was recorded at the close of 2023, covers the past, present, and future of education, work, politics, nostalgia, and innovation.
Selected follow-ups:
Riaz Shah at EY
The TEDx talk Rise Above the Machines by Riaz Shah
One Degree Mentoring Charity
One Degree Academy
EY Tech MBA by Hult International Business School
Gallup survey: State of the Global Workplace, 2023
BCG report: How People Can Create—and Destroy—Value with Generative AI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
FAQ
How many episodes does London Futurists have?
London Futurists currently has 112 episodes available.
What topics does London Futurists cover?
The podcast is about News, AI, Tech News, Podcasts, Technology and Disruption.
What is the most popular episode on London Futurists?
The episode title 'Innovating in education: the Codam experience, with David Giron' is the most popular.
What is the average episode length on London Futurists?
The average episode length on London Futurists is 38 minutes.
How often are episodes of London Futurists released?
Episodes of London Futurists are typically released every 7 days.
When was the first episode of London Futurists?
The first episode of London Futurists was released on Aug 2, 2022.