
Michael Wooldridge on the History and Future of AI

05/12/21 • 55 min

NOUS

AI research endured years of failure and frustration before new techniques in deep learning unleashed the swift, astonishing progress of the last decade.

Michael’s recent book A Brief History of Artificial Intelligence explores what we can learn from this history, and examines where we are now and where the field is going.

We discuss:

  • Why the Cyc project’s aim to encode ‘all human knowledge’ (!!) into a functioning AI got stuck, despite years of intense effort.
  • What OpenAI’s GPT-3 language-generating AI system really knows about making an omelette.
  • Why the next generation of AI systems may have to combine symbolic and non-symbolic approaches.

You can find NOUS on Twitter @NSthepodcast


Previous Episode

Iris Berent on Innate Knowledge and Why We Are Blind to Ourselves

The idea that we have ‘innate knowledge’ seems quite wrong to most of us. But we do! And the intuitions that lead us astray here also blind us to other aspects of human nature.

We are all ‘blind storytellers’. Professor Iris Berent reveals what misleads us, and what we are missing.

18:55 Newborns have basic knowledge of the nature of objects. Eye-tracking experiments reveal that they have a grasp of the three Cs: cohesion, contact, and continuity.

22:35 How do you get expectations about the nature of the world coded into genes? Do genes somehow give rise to computational ‘rules’ in the brain? Is my inability to grasp this illustrating Iris’ argument!? A deep mystery remains.

26:51 Birdsong is innate. So why not aspects of language and human object cognition?

28:20 “People know how to talk in more or less the sense that spiders know how to spin webs,” says Steven Pinker.

37:44 We learn a particular language from those around us - but some argue that the deep structural rules underlying all languages are innate. How does that work? Are there ‘rules’ of language somehow inscribed in neural structures?

47:39 Our intuitive biases to *dualism* and *essentialism* lead us to get lots of things wrong about human nature.

55:05 Why we go ‘insane about the brain’, and get weirdly impressed by neuroscience-y explanations, even when they are bad.

1:00:44 Why is our thinking about mental disorders so biased and confused?

***

Check out Iris Berent's book The Blind Storyteller here, or find her on Twitter @berent_iris

To get in touch with Ilan or join the conversation, you can find NOUS on Twitter @NSthepodcast or on email at [email protected]
