London Futurists

Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora's Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.
He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.


Top 10 London Futurists Episodes

Goodpods has curated a list of the 10 best London Futurists episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to London Futurists for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite London Futurists episode by adding your comments to the episode page.

Our guest in this episode is Riaz Shah. Until recently, Riaz was a partner at EY, where he spent 27 years specialising in technology and innovation. Towards the end of his time at EY he became a Professor of Innovation & Leadership at Hult International Business School, where he leads sessions with senior executives of global companies.
In 2016, Riaz took a one-year sabbatical to open the One Degree Academy, a free school in a disadvantaged area of London. There’s an excellent TEDx talk from 2020 about how that happened, and about how to prepare for the very uncertain future of work.
This discussion, which was recorded at the close of 2023, covers the past, present, and future of education, work, politics, nostalgia, and innovation.
Selected follow-ups:
Riaz Shah at EY
The TEDx talk Rise Above the Machines by Riaz Shah
One Degree Mentoring Charity
One Degree Academy
EY Tech MBA by Hult International Business School
Gallup survey: State of the Global Workplace, 2023
BCG report: How People Can Create—and Destroy—Value with Generative AI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

London Futurists - Progress with ending aging, with Aubrey de Grey

04/21/24 • 40 min

Our topic in this episode is progress with ending aging. Our guest is the person who literally wrote the book on that subject, namely “Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime”. He is Aubrey de Grey, who describes himself in his Twitter biography as “spearheading the global crusade to defeat aging”.
In pursuit of that objective, Aubrey co-founded the Methuselah Foundation in 2003, the SENS Research Foundation in 2009, and the LEV Foundation, that is the Longevity Escape Velocity Foundation, in 2022, where he serves as President and Chief Science Officer.
Full disclosure: David also has a role on the executive management team of LEV Foundation, but for this recording he was wearing his hat as co-host of the London Futurists Podcast.
The conversation opens with this question: "When people are asked about ending aging, they often say the idea sounds nice, but they see no evidence for any actual progress toward ending aging in humans. They say that they’ve heard talk about that subject for years, or even decades, but wonder when all that talk is going to result in people actually living significantly longer. How do you respond?"
Selected follow-ups:

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


Our guest in this episode is David Wakeling, a partner at A&O Shearman, which became the world’s third largest law firm in May, thanks to the merger of Allen & Overy, a UK “magic circle” firm, with Shearman & Sterling of New York.
David heads up a team within the firm called the Markets Innovation Group (MIG), which consists of lawyers, developers and technologists, and is seeking to disrupt the legal industry. He also leads the firm's AI Advisory practice, through which the firm is currently advising 80 of the largest global businesses on the safe deployment of AI.
One of the initiatives David has led is the development and launch of ContractMatrix, in partnership with Microsoft and Harvey, an OpenAI-backed, GPT-4-based large language model that has been fine-tuned for the legal industry. ContractMatrix is a contract drafting and negotiation tool powered by generative AI. It was tested and honed by 1,000 of the firm’s lawyers prior to launch, to mitigate risks like hallucinations. The firm estimates that the tool is saving up to seven hours from the average contract review, which is around a 30% efficiency gain. As well as being used internally by 2,000 of the firm’s lawyers, it is also licensed to clients.
This is the third time we have looked at the legal industry on the podcast. While lawyers no longer use quill pens, they are not exactly famous for their information technology skills, either. But the legal profession has a couple of characteristics which make it eminently suited to the deployment of advanced AI systems: it generates vast amounts of data and money, and lawyers frequently engage in text-based routine tasks which can be automated by generative AI systems.
Previous London Futurists Podcast episodes on the legal industry:

Other selected follow-ups:

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


In this episode our guest is David Giron, the Director at what is arguably one of the world's most innovative educational initiatives, Codam College in Amsterdam. David was previously the head of studies at Codam's famous parent school 42 in Paris, and he has now spent 10 years putting into practice the somewhat revolutionary ideas of the 42 network. We ask David about what he has learned during these ten years, but we're especially interested in his views on how the world of education stands to be changed even further in the months and years ahead by generative AI.
Selected follow-ups:
https://www.codam.nl/en/team
https://42.fr/en/network-42/
Topics addressed in this episode include:
*) David's background at Epitech and 42 before joining Codam
*) The peer-to-peer framework at the heart of 42
*) Learning without teachers
*) Student assessment without teachers
*) Connection with the "competency-based learning" or "mastery learning" ideas of Sir Ken Robinson
*) Extending the 42 learning method beyond software engineering to other fields
*) Two ways of measuring whether the learning method is successful
*) Is it necessary for a school to fail some students from time to time?
*) The impact of Covid on the offline collaborative approach of Codam
*) ChatGPT is more than a tool; it is a "topic", on which people are inclined to take sides
*) Positive usage models for ChatGPT within education
*) Will ChatGPT make the occupation of software engineering a "job from the past"?
*) Software engineers will shift their skills from code-writing to prompt-writing
*) Why generative AI is likely to have a faster impact on work than the introduction of mechanisation
*) The adoption rate of generative AI by Codam students - and how it might change later this year
*) Code first or comment first?
*) The level of interest in Codam shown by other educational institutions
*) The resistance to change within traditional educational institutions
*) "The revolution is happening outside"
*) From "providing knowledge" to "creating a learning experience"
*) From large language models to full video systems that are individually tailored to help each person learn whatever they need in order to solve problems
*) Learning to code as a proxy for the more fundamental skill of learning to learn
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


The public discussion in a number of countries around the world expresses worries about what is called an aging society. These countries anticipate a future with fewer younger people who are active members of the economy, and a growing number of older people who need to be supported by the people still in the workforce. It’s an inversion of the usual demographic pyramid, with fewer at the bottom and more at the top.
However, our guest in this episode recommends a different framing of the future – not as an aging society, but as a longevity society, or even an evergreen society. He is Andrew Scott, Professor of Economics at the London Business School. His other roles include being a Research Fellow at the Centre for Economic Policy Research, and a consulting scholar at Stanford University’s Center on Longevity.

Andrew’s latest book is entitled “The Longevity Imperative: Building a Better Society for Healthier, Longer Lives”. Commendations for the book include this from the political economist Daron Acemoglu, “A must-read book with an important message and many lessons”, and this from the historian Niall Ferguson, “Persuasive, uplifting and wise”.
Selected follow-ups:

Related quotations:

  • Aging is "...revealed and made manifest only by the most unnatural experiment of prolonging an animal's life by sheltering it from the hazards of its ordinary existence" - Peter Medawar, 1951
  • "To die of old age is a death rare, extraordinary, and singular, and, therefore, so much less natural than the others; ’tis the last and extremest sort of dying: and the more remote, the less to be hoped for" - Michel de Montaigne, 1580

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

London Futurists - GPT-4 transforming education, with Donald Clark

06/08/23 • 45 min

The launch of GPT-4 on 14th March has provoked concerns and searching questions, and nowhere more so than in the education sector. Earlier this month, the share price of US edutech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model.
Looking ahead, GPT models seem to put flesh on the bones of the idea that all students could one day have a personal tutor as effective as Aristotle, who was Alexander the Great’s personal tutor. When that happens, students should leave school and university far, far better educated than we were.
Donald Clark is the ideal person to discuss this with. He founded Epic Group in 1983, and made it the UK’s largest provider of bespoke online education services before selling it in 2005. He is now the CEO of an AI learning company called WildFire, and an investor in and Board member of several other education technology businesses. In 2020 he published a book called Artificial Intelligence for Learning.
Selected follow-ups:
https://donaldclarkplanb.blogspot.com/
https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
https://www.amazon.co.uk/Case-against-Education-System-Waste/dp/0691196451/
https://www.amazon.co.uk/Head-Hand-Heart-Intelligence-Over-Rewarded/dp/1982128461/
Topics addressed in this episode include:
*) "Education is a bit of a slow learner"
*) Why GPT-4 has unprecedented potential to transform education
*) The possibility of an online universal teacher
*) Traditional education sometimes fails to follow best pedagogical practice
*) Accelerating "time to competence" via personalised tuition
*) Calum's experience learning maths
*) How Khan Academy and DuoLingo are partnering with GPT-4
*) The significance of the large range of languages covered by ChatGPT
*) The recent essay on "The Age of AI" by Bill Gates
*) Students learning social skills from each other
*) An imbalanced societal focus on educating and valuing "head" rather than "heart" or "hand"
*) "The case against education" by Bryan Caplan
*) Evidence of wide usage of ChatGPT by students of all ages
*) Three gaps between GPT-4 and AGI, and how they are being bridged by including GPT-4 in "ensembles"
*) GPT-4 has a better theory of physics than GPT-3.5
*) Encouraging a generative AI to learn about a worldview via its own sensory input, rather than directly feeding a worldview into it
*) Pros and cons of "human exceptionalism"
*) How GPT-4 is upending our ideas on the relation between language and intelligence
*) Generative AI, the "C skills", and the set of jobs left for humans to do
*) Custer's last stand?
*) Three camps regarding progress toward AGI
*) Investors' reactions to Italy banning ChatGPT (subsequently reversed)
*) Different views on GDPR and European legislation
*) Further thoughts on implications of GPT-4 for the education industry
*) Shocking statistics on declining enrolment numbers in US universities
*) Beyond exclusivity: "A tutorial system for everybody"?
*) A boon for Senegal and other countries in the global south?
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


One area of technology that is frequently in the news these days is rejuvenation biotechnology, namely the possibility of undoing key aspects of biological aging via a suite of medical interventions. What these interventions target isn't individual diseases, such as cancer, stroke, or heart disease, but rather the common aggravating factors that lie behind the increasing prevalence of these diseases as we become older.
Our guest in this episode is someone who has been at the forefront for over 20 years of a series of breakthrough initiatives in this field of rejuvenation biotechnology. He is Dr Aubrey de Grey, co-founder of the Methuselah Foundation, the SENS Research Foundation, and, most recently, the LEV Foundation - where 'LEV' stands for Longevity Escape Velocity.
Topics discussed include:
*) Different concepts of aging and damage repair;
*) Why the outlook for damage repair is significantly more tangible today than it was ten years ago;
*) The role of foundations in supporting projects which cannot receive funding from commercial ventures;
*) Questions of pace of development: cautious versus bold;
*) Changing timescales for the likely attainment of robust mouse rejuvenation ('RMR') and longevity escape velocity ('LEV');
*) The "Less Death" initiative;
*) "Anticipating anticipation" - preparing for likely sweeping changes in public attitude once understanding spreads about the forthcoming available of powerful rejuvenation treatments;
*) Various advocacy initiatives that Aubrey is supporting;
*) Ways in which listeners can help to accelerate the attainment of LEV.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Some follow-up reading:
https://levf.org
https://lessdeath.org

London Futurists - Introducing Conscium, with Daniel Hulme and Ted Lappas

07/01/24 • 42 min

This episode is a bit different from the usual, because we are interviewing Calum's boss. Calum says that mainly to tease him, because he thinks the word “boss” is a dirty word.
His name is Daniel Hulme, and this is his second appearance on the podcast. He was one of our earliest guests, long ago, in episode 8. Back then, Daniel had just sold his AI consultancy, Satalia, to the advertising and media giant WPP. Today, he is Chief AI Officer at WPP, but he is joining us to talk about his new venture, Conscium - which describes itself as "the world's first applied AI consciousness research organisation".
Conscium states that "our aim is to deepen our understanding of consciousness to pioneer efficient, intelligent, and safe AI that builds a better future for humanity".
Also joining us is Ted Lappas, who is head of technology at Conscium, and he is also one of our illustrious former guests on the podcast.
By way of full disclosure, Calum is CMO at Conscium, and David is on the Conscium advisory board.
Selected follow-ups:

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


The UK government has announced plans for a global AI Safety Summit, to be held in Bletchley Park in Buckinghamshire, outside London, on 1st and 2nd of November. That raises the importance of thinking more seriously about potential scenarios for the future of AI. In this episode, co-hosts Calum and David review Calum's concept of the Economic Singularity - a topic that deserves to be addressed at the Bletchley Park Summit.
Selected follow-ups:
https://www.gov.uk/government/news/uk-government-sets-out-ai-safety-summit-ambitions
https://calumchace.com/the-economic-singularity/
https://transpolitica.org/projects/surveys/anticipating-ai-30/
Topics addressed in this episode include:
*) The five themes announced for the AI Safety Summit
*) Three different phases in the future of AI, and the need for greater clarity about which risks and opportunities apply in each phase
*) Two misconceptions about the future of joblessness
*) Learning from how technology pushed horses out of employment
*) What the word 'singularity' means in the term "Economic Singularity"
*) Sources of meaning, beyond jobs and careers
*) Contrasting UBI and UGI (Universal Basic Income and Universal Generous Income)
*) Two different approaches to making UGI affordable
*) Three forces that are driving prices downward
*) Envisioning a possible dual economy
*) Anticipating "the great churn" - the accelerated rate of change of jobs
*) The biggest risk arising from technological unemployment
*) Flaws in the concept of GDP (Gross Domestic Product)
*) A contest between different narratives
*) Signs of good reactions by politicians
*) Recalling Christmas 1914
*) Suspension of "normal politics"
*) Have invitations been lost in the post?
*) 16 questions about what AI might be like in 2030
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

London Futurists - Collapsing AGI timelines, with Ross Nordby

10/26/22 • 35 min

How likely is it that, by 2030, someone will build artificial general intelligence (AGI)?
Ross Nordby is an AI researcher who has shortened his AGI timelines: he has changed his mind about when AGI might be expected to exist. He recently published an article on the LessWrong community discussion site, giving his argument in favour of shortening these timelines. He now identifies 2030 as the date by which it is 50% likely that AGI will exist. In this episode, we ask Ross questions about his argument, and consider some of the implications that arise.
Article by Ross: https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon
Effective Altruism Long-Term Future Fund: https://funds.effectivealtruism.org/funds/far-future
MIRI (Machine Intelligence Research Institution): https://intelligence.org/
00.57 Ross’ background: real-time graphics, mostly in video games
02.10 Increased familiarity with AI made him reconsider his AGI timeline
02.37 He submitted a grant request to the Effective Altruism Long-Term Future Fund to move into AI safety work
03.50 What Ross was researching: can we make an AI intrinsically interpretable?
04.25 The AGI Ross is interested in is defined by capability, regardless of consciousness or sentience
04.55 An AI that is itself "goalless" might be put to uses with destructive side-effects
06.10 The leading AI research groups are still DeepMind and OpenAI
06.43 Other groups, like Anthropic, are more interested in alignment
07.22 If you can align an AI to any goal at all, that is progress: it indicates you have some control
08.00 Is this not all abstract and theoretical - a distraction from more pressing problems?
08.30 There are other serious problems, like pandemics and global warming, but we have to solve them all
08.45 Globally, only around 300 people are focused on AI alignment: not enough
10.05 AGI might well be less than three decades away
10.50 AlphaGo surprised the community, which was expecting Go to be winnable 10-15 years later
11.10 Then AlphaGo was surpassed by systems like AlphaZero and MuZero, which were actually simpler, and more flexible
11.20 AlphaTensor frames matrix multiplication as a game, and becomes superhuman at it
11.40 In 2018, the Transformer paper was published, but no-one forecast GPT-3’s capabilities
12.00 This year, Minerva (similar to GPT-3) got 50% correct on the math dataset: high school competition math problems
13.16 Illustrators now feel threatened by systems like Dall-E, Stable Diffusion, etc
13.30 The conclusion is that intelligence is easier to simulate than we thought
13.40 But these systems also do stupid things. They are brittle
18.00 But we could use transformers more intelligently
19.20 They turn out to be able to write code, and to explain jokes, and do maths reasoning
21.10 Google's Gopher AI
22.05 Machines don’t yet have internal models of the world, which we call common sense
24.00 But an early version of GPT-3 demonstrated the ability to model a human thought process alongside a machine’s
27.15 Ross’ current timeline is 50% probability of AGI by 2030, and 90+% by 2050
27.35 Counterarguments?
29.35 So what is to be done?
30.55 If convinced that AGI is coming soon, most lay people would probably demand that all AI research stops immediately. Which isn’t possible
31.40 Maybe publicity would be good in order to generate resources for AI alignment. And to avoid a backlash against secrecy
33.55 It would be great if more billionaires opened their wallets, but actually there are funds available for people who want to work on the problem
34.20 People who can help would not have to take a pay cut to work on AI alignment
Audio engineering by Alexander Chace
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration



FAQ

How many episodes does London Futurists have?

London Futurists currently has 101 episodes available.

What topics does London Futurists cover?

The podcast is about News, Tech News, Podcasts, Technology and Disruption.

What is the most popular episode on London Futurists?

The episode title 'Anticipating Longevity Escape Velocity, with Aubrey de Grey' is the most popular.

What is the average episode length on London Futurists?

The average episode length on London Futurists is 37 minutes.

How often are episodes of London Futurists released?

Episodes of London Futurists are typically released every 7 days.

When was the first episode of London Futurists?

The first episode of London Futurists was released on Aug 2, 2022.
