
How Do AI Models Actually Think? - Laura Ruis
01/20/25 • 78 min
Laura Ruis, a PhD student at University College London and researcher at Cohere, explains her groundbreaking research into how large language models (LLMs) perform reasoning tasks, the fundamental mechanisms underlying LLM reasoning capabilities, and whether these models primarily rely on retrieval or develop procedural knowledge.
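For context on the method behind the headline result: influence functions score how much each pretraining document moved the model's answer to a query, combining the gradient of the training loss, an inverse-Hessian term (approximated with EK-FAC in Grosse et al., 2023, referenced below), and the gradient of the query loss. The toy sketch below is illustrative only: it substitutes the identity matrix for the inverse Hessian, reducing influence to a gradient dot product, and uses a tiny linear model as a stand-in for an LLM.

```python
# Toy influence scoring: rank training examples by how much they
# influence a query. Real work (Grosse et al., 2023) approximates the
# inverse Hessian with EK-FAC; here the identity is a stand-in, which
# reduces influence to a gradient dot product.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)            # tiny stand-in for an LLM
loss_fn = nn.MSELoss()

def flat_grad(x, y):
    """Gradient of the loss on one example, flattened into a vector."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

# Hypothetical training set and query, invented for the example.
train = [(torch.randn(4), torch.randn(1)) for _ in range(8)]
query = (torch.randn(4), torch.randn(1))

g_query = flat_grad(*query)
scores = [(i, torch.dot(flat_grad(*ex), g_query).item())
          for i, ex in enumerate(train)]

# A positive score means the example pushed the model toward the query answer.
for i, s in sorted(scores, key=lambda t: -t[1]):
    print(f"train example {i}: influence ≈ {s:+.4f}")
```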
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.
https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?
Go to https://tufalabs.ai/
***
TOC
1. LLM Foundations and Learning
1.1 Scale and Learning in Language Models [00:00:00]
1.2 Procedural Knowledge vs Fact Retrieval [00:03:40]
1.3 Influence Functions and Model Analysis [00:07:40]
1.4 Role of Code in LLM Reasoning [00:11:10]
1.5 Semantic Understanding and Physical Grounding [00:19:30]
2. Reasoning Architectures and Measurement
2.1 Measuring Understanding and Reasoning in Language Models [00:23:10]
2.2 Formal vs Approximate Reasoning and Model Creativity [00:26:40]
2.3 Symbolic vs Subsymbolic Computation Debate [00:34:10]
2.4 Neural Network Architectures and Tensor Product Representations [00:40:50]
3. AI Agency and Risk Assessment
3.1 Agency and Goal-Directed Behavior in Language Models [00:45:10]
3.2 Defining and Measuring Agency in AI Systems [00:49:50]
3.3 Core Knowledge Systems and Agency Detection [00:54:40]
3.4 Language Models as Agent Models and Simulator Theory [01:03:20]
3.5 AI Safety and Societal Control Mechanisms [01:07:10]
3.6 Evolution of AI Capabilities and Emergent Risks [01:14:20]
REFS:
[00:01:10] Procedural Knowledge in Pretraining & LLM Reasoning
Ruis et al., 2024
https://arxiv.org/abs/2411.12580
[00:03:50] EK-FAC Influence Functions in Large LMs
Grosse et al., 2023
https://arxiv.org/abs/2308.03296
[00:13:05] Surfaces and Essences: Analogy as the Core of Cognition
Hofstadter & Sander
https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinking/dp/0465018475
[00:13:45] Wittgenstein on Language Games
https://plato.stanford.edu/entries/wittgenstein/
[00:14:30] Montague Semantics for Natural Language
https://plato.stanford.edu/entries/montague-semantics/
[00:19:35] The Chinese Room Argument
David Cole
https://plato.stanford.edu/entries/chinese-room/
[00:19:55] ARC: Abstraction and Reasoning Corpus
François Chollet
https://arxiv.org/abs/1911.01547
[00:24:20] Systematic Generalization in Neural Nets
Lake & Baroni, 2023
https://www.nature.com/articles/s41586-023-06668-3
[00:27:40] Open-Endedness & Creativity in AI
Tim Rocktäschel
https://arxiv.org/html/2406.04268v1
[00:30:50] Fodor & Pylyshyn on Connectionism
https://www.sciencedirect.com/science/article/abs/pii/0010027788900315
[00:31:30] Tensor Product Representations
Smolensky, 1990
https://www.sciencedirect.com/science/article/abs/pii/000437029090007M
[00:35:50] DreamCoder: Wake-Sleep Program Synthesis
Kevin Ellis et al.
https://courses.cs.washington.edu/courses/cse599j1/22sp/papers/dreamcoder.pdf
[00:36:30] Compositional Generalization Benchmarks
Ruis, Lake et al., 2022
https://arxiv.org/pdf/2202.10745
[00:40:30] RNNs & Tensor Products
McCoy et al., 2018
https://arxiv.org/abs/1812.08718
[00:46:10] Formal Causal Definition of Agency
Kenton et al.
https://arxiv.org/pdf/2208.08345v2
[00:48:40] Agency in Language Models
Sumers et al.
https://arxiv.org/abs/2309.02427
[00:55:20] Heider & Simmel’s Moving Shapes Experiment
https://www.nature.com/articles/s41598-024-65532-0
[01:00:40] Language Models as Agent Models
Jacob Andreas, 2022
https://arxiv.org/abs/2212.01681
[01:13:35] Pragmatic Understanding in LLMs
Ruis et al.
https://arxiv.org/abs/2210.14986
Previous Episode

Jürgen Schmidhuber on Humans co-existing with AIs
Jürgen Schmidhuber, the father of generative AI, challenges current AI narratives, arguing that early deep learning work is misattributed and, in his view, actually originated in Ukraine and Japan. He discusses his early work on linear transformers and artificial curiosity, which preceded modern developments, shares his expansive vision of AI colonising space, and explains his groundbreaking 1991 consciousness model. Schmidhuber dismisses fears of human-AI conflict, arguing that superintelligent AI scientists will be fascinated by their own origins and motivated to protect life rather than harm it, while being more interested in other superintelligent AIs and in cosmic expansion than in earthly matters. He offers unique insights into how humans and AI might coexist. This is the long-awaited second, previously unreleased part of the interview we filmed last time.
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.
https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?
Go to https://tufalabs.ai/
***
Interviewer: Tim Scarfe
TOC
[00:00:00] The Nature and Motivations of AI
[00:02:08] Influential Inventions: 20th vs. 21st Century
[00:05:28] Transformer and GPT: A Reflection
The revolutionary impact of modern language models, the 1991 linear transformer, linear vs. quadratic scaling, the fast weight controller, and fast weight matrix memory. (A toy sketch of the linear-vs-quadratic contrast follows this episode's references.)
[00:11:03] Pioneering Contributions to AI and Deep Learning
The invention of the transformer, pre-trained networks, the first GANs, the role of predictive coding, and the emergence of artificial curiosity.
[00:13:58] AI's Evolution and Achievements
The role of compute, breakthroughs in handwriting recognition and computer vision, the rise of GPU-based CNNs, achieving superhuman results, and Japanese contributions to CNN development.
[00:15:40] The Hardware Lottery and GPUs
GPUs as a serendipitous advantage for AI, the gaming-AI parallel, and Nvidia's strategic shift towards AI.
[00:19:58] AI Applications and Societal Impact
AI-powered translation breaking communication barriers, AI in medicine for imaging and disease prediction, and AI's potential for human enhancement and sustainable development.
[00:23:26] The Path to AGI and Current Limitations
Distinguishing large language models from AGI, challenges in replacing physical-world workers, and AI's difficulty in real-world versus board games.
[00:25:56] AI and Consciousness
Simulating consciousness through unsupervised learning, chunking and automatizing neural networks, data compression, and self-symbols in predictive world models.
[00:30:50] The Future of AI and Humanity
The transition from AGIs as tools to AGIs with their own goals, the role of humans in an AGI-dominated world, and the concept of Homo Ludens.
[00:38:05] The AI Race: Europe, China, and the US
Europe's historical contributions, the current dominance of the US and East Asia, and the role of venture capital and industrial policy.
[00:50:32] Addressing AI Existential Risk
The obsession with AI existential risk, commercial pressure for friendly AIs, AI vs. hydrogen bombs, and the long-term future of AI.
[00:58:00] The Fermi Paradox and Extraterrestrial Intelligence
Expanding AI bubbles as an explanation for the Fermi paradox, dark matter and encrypted civilizations, and Earth as the first to spawn an AI bubble.
[01:02:08] The Diversity of AI and AI Ecologies
The unrealism of a monolithic superintelligence, diverse AIs with varying goals, and intense competition and collaboration in AI ecologies.
[01:12:21] Final Thoughts and Closing Remarks
REFERENCES:
See pinned comment on YT: https://youtu.be/fZYUqICYCAk
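To make the "linear vs. quadratic scaling" point from the Transformer and GPT segment concrete, here is a minimal sketch contrasting standard softmax attention with fast-weight-style linear attention, where a key-value outer-product memory is updated once per step. This illustrates the scaling idea only, not Schmidhuber's exact 1991 formulation; the shapes and names are chosen for the example.

```python
# Toy contrast of quadratic softmax attention with 1991-style
# "fast weight" (linear) attention: the fast weight memory is a running
# sum of key-value outer products, so a sequence can be processed in
# O(n) rather than O(n^2).
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                        # toy sequence length and width
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Quadratic attention: every query attends over all n keys.
A = np.exp(Q @ K.T)
soft_out = (A / A.sum(axis=1, keepdims=True)) @ V

# Fast-weight / linear attention (no softmax): maintain a d x d
# fast weight matrix W, updated with one outer product per step.
W = np.zeros((d, d))
fast_out = np.zeros((n, d))
for t in range(n):
    W += np.outer(K[t], V[t])      # write: associate key with value
    fast_out[t] = Q[t] @ W         # read: query the associative memory

print(soft_out.shape, fast_out.shape)  # both (6, 4); mechanisms differ
```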
Next Episode

Subbarao Kambhampati - Do o1 models search?
Join Prof. Subbarao Kambhampati and host Tim Scarfe for a deep dive into OpenAI's O1 model and the future of AI reasoning systems. Topics include:
How O1 likely uses reinforcement learning similar to AlphaGo, with hidden reasoning tokens that users pay for but never see (an illustrative search sketch follows this list)
The evolution from traditional Large Language Models to more sophisticated reasoning systems
The concept of "fractal intelligence" in AI - where models work brilliantly sometimes but fail unpredictably
Why O1's improved performance comes with substantial computational costs
The ongoing debate between single-model approaches (OpenAI) vs hybrid systems (Google)
The critical distinction between AI as an intelligence amplifier vs autonomous decision-maker
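Since the discussion speculates that O1 performs an AlphaGo-like search over hidden reasoning tokens (and MARCO-O1, covered at [00:29:30], combines chain-of-thought with MCTS explicitly), here is a minimal, purely illustrative UCT loop over a toy tree of "reasoning steps". Everything in it (the tree, the reward, the expansion policy) is invented for illustration and implies nothing about O1's actual internals.

```python
# Minimal UCT (MCTS) loop over a toy tree of "reasoning steps".
# Purely illustrative: the tree, reward, and rollout are invented and
# say nothing about O1's actual (hidden) procedure.
import math
import random

random.seed(0)

class Node:
    def __init__(self, step, parent=None):
        self.step, self.parent = step, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.4):
    """Upper-confidence score balancing value and exploration."""
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def search(root, expand, rollout, iters=200):
    for _ in range(iters):
        node = root
        # Select: descend via UCT while the node is fully expanded.
        while node.children and all(ch.visits for ch in node.children):
            node = max(node.children, key=uct)
        # Expand: add candidate next reasoning steps once.
        if not node.children:
            node.children = [Node(s, node) for s in expand(node.step)]
        node = next((ch for ch in node.children if ch.visits == 0), node)
        reward = rollout(node.step)
        # Backpropagate the rollout reward up to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).step

# Toy problem: pick digits whose running sum approaches a target of 4.
expand = lambda steps: [steps + (d,) for d in range(3)]
rollout = lambda steps: -abs(sum(steps) - 4) + random.random() * 0.1
print(search(Node(()), expand, rollout))   # most-visited first step, e.g. (2,)
```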
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.
https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?
Go to https://tufalabs.ai/
***
TOC:
1. O1 Architecture and Reasoning Foundations
[00:00:00] 1.1 Fractal Intelligence and Reasoning Model Limitations
[00:04:28] 1.2 LLM Evolution: From Simple Prompting to Advanced Reasoning
[00:14:28] 1.3 O1's Architecture and AlphaGo-like Reasoning Approach
[00:23:18] 1.4 Empirical Evaluation of O1's Planning Capabilities
2. Monte Carlo Methods and Model Deep-Dive
[00:29:30] 2.1 Monte Carlo Methods and MARCO-O1 Implementation
[00:31:30] 2.2 Reasoning vs. Retrieval in LLM Systems
[00:40:40] 2.3 Fractal Intelligence Capabilities and Limitations
[00:45:59] 2.4 Mechanistic Interpretability of Model Behavior
[00:51:41] 2.5 O1 Response Patterns and Performance Analysis
3. System Design and Real-World Applications
[00:59:30] 3.1 Evolution from LLMs to Language Reasoning Models
[01:06:48] 3.2 Cost-Efficiency Analysis: LLMs vs O1
[01:11:28] 3.3 Autonomous vs Human-in-the-Loop Systems
[01:16:01] 3.4 Program Generation and Fine-Tuning Approaches
[01:26:08] 3.5 Hybrid Architecture Implementation Strategies
Transcript: https://www.dropbox.com/scl/fi/d0ef4ovnfxi0lknirkvft/Subbarao.pdf?rlkey=l3rp29gs4hkut7he8u04mm1df&dl=0
REFS:
[00:02:00] Monty Python (1975)
Witch trial scene: flawed logical reasoning.
https://www.youtube.com/watch?v=zrzMhU_4m-g
[00:04:00] Cade Metz (2024)
Microsoft–OpenAI partnership evolution and control dynamics.
https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html
[00:07:25] Kojima et al. (2022)
Zero-shot chain-of-thought prompting ('Let's think step by step'); a minimal sketch follows this reference list.
https://arxiv.org/pdf/2205.11916
[00:12:50] DeepMind Research Team (2023)
Multi-bot game solving with external and internal planning.
https://deepmind.google/research/publications/139455/
[00:15:10] Silver et al. (2016)
AlphaGo's Monte Carlo Tree Search and Q-learning.
https://www.nature.com/articles/nature16961
[00:16:30] Kambhampati, S. et al. (2024)
Evaluates O1's planning in "Strawberry Fields" benchmarks.
https://arxiv.org/pdf/2410.02162
[00:29:30] Alibaba AIDC-AI Team (2024)
MARCO-O1: Chain-of-Thought + MCTS for improved reasoning.
https://arxiv.org/html/2411.14405
[00:31:30] Kambhampati, S. (2024)
Explores LLM "reasoning vs retrieval" debate.
https://arxiv.org/html/2403.04121v2
[00:37:35] Wei, J. et al. (2022)
Chain-of-thought prompting (introduces the last-letter concatenation task); see the sketch after this list.
https://arxiv.org/pdf/2201.11903
[00:42:35] Barbero, F. et al. (2024)
Transformer attention and "information over-squashing."
https://arxiv.org/html/2406.04267v2
[00:46:05] Ruis, L. et al. (2024)
Influence functions to understand procedural knowledge in LLMs.
https://arxiv.org/html/2411.12580v1
(truncated - continued in shownotes/transcript doc)
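To make two of the referenced ideas concrete: Kojima et al.'s zero-shot chain-of-thought is just a fixed trigger phrase appended to the question, and the last-letter concatenation task from Wei et al. is a toy problem that direct prompting tends to fail. A minimal sketch of both follows; the model call itself is deliberately left out, since any chat API would do.

```python
# Concrete versions of two referenced ideas. The model call is left
# abstract: any chat-completion API could consume the prompt below.

def last_letter_concat(words):
    """Wei et al.'s toy task: concatenate the last letter of each word.
    Trivial in code, but a classic failure case for direct prompting."""
    return "".join(w[-1] for w in words)

def zero_shot_cot_prompt(question):
    """Kojima et al. (2022): append a fixed trigger phrase; no
    hand-written exemplars needed (hence 'zero-shot' CoT)."""
    return f"Q: {question}\nA: Let's think step by step."

words = ["machine", "learning", "street", "talk"]
print(last_letter_concat(words))            # -> egtk (the gold answer)
print(zero_shot_cot_prompt(
    f"Take the last letter of each word in {words} and join them."))
```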