AI lab TL;DR | Jurgen Gravestein - The Intelligence Paradox

AI lab by information labs

10/21/24 • 14 min

🔍 In this TL;DR episode, Jurgen Gravestein (Conversation Design Institute) joins the AI lab to discuss his Substack blog post on the ‘Intelligence Paradox’

📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:08] Q1-The ‘Intelligence Paradox’:
How does the language used to describe AI lead to misconceptions and the so-called ‘Intelligence Paradox’?
⏲️[05:36] Q2-‘Conceptual Borrowing’:
What is ‘conceptual borrowing’ and how does it impact public perception and understanding of AI?
⏲️[10:04] Q3-Human vs AI ‘Learning’:
Why is it misleading to use the term ‘learning’ for AI processes, and what does this mean for the future of AI development?
⏲️[14:11] Wrap-up & Outro

💭 Q1-The ‘Intelligence Paradox’

🗣️ What’s really interesting about chatbots and AI is that for the first time in human history, we have technology talking back at us, and that's doing a lot of interesting things to our brains.
🗣️ In the 1960s, there was an experiment with the chatbot ELIZA, which was a very simple, pre-programmed chatbot (...) And it showed that when people are talking to technology, and technology talks back, we’re quite easily fooled by that technology. And that has to do with language fluency and how we perceive language.
🗣️ Language is a very powerful tool (...) there’s a correlation between perceived intelligence and language fluency (...) a social phenomenon that I like to call the ‘Intelligence Paradox’. (...) people perceive you as less smart, just because you are less fluent in how you’re able to express yourself.
🗣️ That also works the other way around with AI and chatbots (...). We saw that chatbots can now respond in extremely fluent language very flexibly. (...) And as a result of that, we perceive them as pretty smart. Smarter than they actually are, in fact.
🗣️ We tend to overestimate the capabilities of [AI] systems because of their language fluency, and we perceive them as smarter than they really are, and it leads to confusion (...) about how the technology actually works.
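
The ELIZA experiment referenced above worked without any learning at all: a small script of keyword rules and canned reflections. As a rough illustration (a minimal sketch; the rules and the `eliza_reply` helper are invented here, not Weizenbaum's original script), the whole trick fits in a few lines of Python:

```python
import re

# Illustrative ELIZA-style rules: spot a keyword pattern in the user's
# utterance and echo back a canned "reflection". The real ELIZA
# (Weizenbaum, 1960s) worked the same way, just with a larger script.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # fallback when no rule matches

print(eliza_reply("I feel ignored by everyone."))
# -> Why do you feel ignored by everyone?
```

That a system this shallow still made people feel heard is exactly the fluency effect the quotes above describe.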

💭 Q2-‘Conceptual Borrowing’

🗣️ A research article (...) from two professors, Luciano Floridi and Anna Nobre, (...) explaining (...) conceptual borrowing [states]: “through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers.”
🗣️ Similar to the Intelligence Paradox, it can lead to confusion (...) about whether we underestimate or overestimate the impact of a certain technology. And that, in turn, informs how we make policies or regulate certain technologies now or in the future.
🗣️ A small example of conceptual borrowing would be the term “hallucinations”. (...) a common term to describe when systems like ChatGPT say something that sounds very authoritative and sounds very correct and precise, but is actually made up, or partly confabulated. (...) this actually has nothing to do with real hallucinations [but] with statistical patterns that don’t match up with the question that’s being asked.
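
To make the “statistical patterns” point concrete, here is a toy sketch (all hypothetical: the tiny corpus, the bigram counts, and the `continue_text` helper are invented for illustration; production LLMs are vastly larger, but they too generate by predicting a plausible next token rather than by looking facts up):

```python
import random

# Toy bigram "language model": the next word is chosen by how often it
# followed the previous word in the training text. The model tracks
# statistical plausibility, not truth, so a fluent completion can be
# confidently wrong: the kind of output loosely called a "hallucination".
corpus = (
    "the capital of france is paris . "
    "the capital of australia is sydney . "  # false, but present in the data
    "the capital of france is paris ."
).split()

followers: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(nxt)

def continue_text(prompt: str, steps: int = 2) -> str:
    words = prompt.lower().split()
    for _ in range(steps):
        candidates = followers.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # frequency-weighted sample
    return " ".join(words)

print(continue_text("the capital of australia is"))
# completes the familiar pattern fluently, whether or not it is true
```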

💭 Q3-Human vs AI ‘Learning’

🗣️ If you talk about conceptual borrowing, “machine learning” is a great example of that, too. (...) there's a very (...) big discrepancy between what learning means in psychological and biological terms, and what it means when it comes to these systems.
🗣️ So if you actually start to be convinced that LLMs are as smart and learn as quickly as people or children (...) you could be over-attributing qualities to these systems.
🗣️ [ARC-AGI challenge:] a $1 million USD prize pool for the first person who can build an AI to solve a new benchmark that (...) consists of very simple puzzles that a five-year-old (...) could basically solve. (...) it hasn't been solved yet.
🗣️ That’s, again, an interesting way to look at learning, and especially where these systems fall short. [AI] can reason based on (...) the data that they've seen, but as soon as they (...) go out of (...) what they've seen in their data set, they will struggle with whatever task they are being asked to perform.
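
For context on what those “very simple puzzles” look like: each ARC-AGI task provides a few input/output grid pairs demonstrating a hidden transformation, and the solver must apply it to a fresh input. A drastically simplified, hypothetical example of the format (the grids and the “swap colors 1 and 2” rule are invented; real tasks are distributed as JSON and are far more varied):

```python
# Hypothetical ARC-style task: demonstration pairs plus one test input.
# The hidden rule, which the solver must infer, is "swap colors 1 and 2".
train_pairs = [
    ([[1, 0], [0, 2]], [[2, 0], [0, 1]]),
    ([[2, 2], [1, 1]], [[1, 1], [2, 2]]),
]
test_input = [[0, 1], [2, 0]]

def apply_rule(grid):
    """The rule a solver must recover from the demonstrations alone."""
    swap = {1: 2, 2: 1}
    return [[swap.get(cell, cell) for cell in row] for row in grid]

# A human spots the rule at a glance; a solver only scores if its
# inferred rule reproduces every demonstration pair.
assert all(apply_rule(inp) == out for inp, out in train_pairs)
print(apply_rule(test_input))  # -> [[0, 2], [1, 0]]
```

The point of the benchmark is that each task's rule is novel, so matching patterns already seen in a training set is not enough.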

📌 About Our Guest
🎙️ Jurgen Gravestein | Sr Conversation Designer, Conversation Design Institute (CDI)
X https://x.com/@gravestein1989
🌐 Blog Post | The Intelligence Paradox
https://jurgengravestein.substack.com/p/the-intelligence-paradox
🌐 Newsletter
https://jurgengravestein.substack.com
🌐 CDI
https://www.conversationdesigninstitute.com
🌐 Profs. Floridi & Nobre's article
http://dx.doi.org/10.2139/ss...
