François Chollet - ARC Reflections - NeurIPS 2024

01/09/25 • 86 min

Machine Learning Street Talk (MLST)

François Chollet discusses the outcomes of the 2024 ARC-AGI (Abstraction and Reasoning Corpus) Prize competition, in which accuracy on the private evaluation set rose from 33% to 55.5%.

SPONSOR MESSAGES:

***

CentML offers competitive pricing for GenAI model deployment, with flexible options that suit everything from small models to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?

They are hosting an event in Zurich on January 9th with the ARChitects; join if you can.

Go to https://tufalabs.ai/

***

Read about the recent o3 result on ARC here (Chollet knew about it at the time of the interview but wasn't allowed to discuss it):

https://arcprize.org/blog/oai-o3-pub-breakthrough

TOC:

1. Introduction and Opening

[00:00:00] 1.1 Deep Learning vs. Symbolic Reasoning: François’s Long-Standing Hybrid View

[00:00:48] 1.2 “Why Do They Call You a Symbolist?” – Addressing Misconceptions

[00:01:31] 1.3 Defining Reasoning

3. ARC Competition 2024 Results and Evolution

[00:07:26] 3.1 ARC Prize 2024: Reflecting on the Narrative Shift Toward System 2

[00:10:29] 3.2 Comparing Private Leaderboard vs. Public Leaderboard Solutions

[00:13:17] 3.3 Two Winning Approaches: Deep Learning–Guided Program Synthesis and Test-Time Training

4. Transduction vs. Induction in ARC

[00:16:04] 4.1 Test-Time Training, Overfitting Concerns, and Developer-Aware Generalization

[00:19:35] 4.2 Gradient Descent Adaptation vs. Discrete Program Search

5. ARC-2 Development and Future Directions

[00:23:51] 5.1 Ensemble Methods, Benchmark Flaws, and the Need for ARC-2

[00:25:35] 5.2 Human-Level Performance Metrics and Private Test Sets

[00:29:44] 5.3 Task Diversity, Redundancy Issues, and Expanded Evaluation Methodology

6. Program Synthesis Approaches

[00:30:18] 6.1 Induction vs. Transduction

[00:32:11] 6.2 Challenges of Writing Algorithms for Perceptual vs. Algorithmic Tasks

[00:34:23] 6.3 Combining Induction and Transduction

[00:37:05] 6.4 Multi-View Insight and Overfitting Regulation

7. Latent Space and Graph-Based Synthesis

[00:38:17] 7.1 Clément Bonnet’s Latent Program Search Approach

[00:40:10] 7.2 Decoding to Symbolic Form and Local Discrete Search

[00:41:15] 7.3 Graph of Operators vs. Token-by-Token Code Generation

[00:45:50] 7.4 Iterative Program Graph Modifications and Reusable Functions

8. Compute Efficiency and Lifelong Learning

[00:48:05] 8.1 Symbolic Process for Architecture Generation

[00:50:33] 8.2 Logarithmic Relationship of Compute and Accuracy

[00:52:20] 8.3 Learning New Building Blocks for Future Tasks

9. AI Reasoning and Future Development

[00:53:15] 9.1 Consciousness as a Self-Consistency Mechanism in Iterative Reasoning

[00:56:30] 9.2 Reconciling Symbolic and Connectionist Views

[01:00:13] 9.3 System 2 Reasoning - Awareness and Consistency

[01:03:05] 9.4 Novel Problem Solving, Abstraction, and Reusability

10. Program Synthesis and Research Lab

[01:05:53] 10.1 François Leaving Google to Focus on Program Synthesis

[01:09:55] 10.2 Democratizing Programming and Natural Language Instruction

11. Frontier Models and o1 Architecture

[01:14:38] 11.1 Search-Based Chain of Thought vs. Standard Forward Pass

[01:16:55] 11.2 o1’s Natural Language Program Generation and Test-Time Compute Scaling

[01:19:35] 11.3 Logarithmic Gains with Deeper Search

12. ARC Evaluation and Human Intelligence

[01:22:55] 12.1 LLMs as Guessing Machines and Agent Reliability Issues

[01:25:02] 12.2 ARC-2 Human Testing and Correlation with g-Factor

[01:26:16] 12.3 Closing Remarks and Future Directions

SHOWNOTES PDF:

https://www.dropbox.com/scl/fi/ujaai0ewpdnsosc5mc30k/CholletNeurips.pdf?rlkey=s68dp432vefpj2z0dp5wmzqz6&st=hazphyx5&dl=0


Previous Episode

Jeff Clune - Agent AI Needs Darwin

AI professor Jeff Clune ruminates on open-ended evolutionary algorithms—systems designed to generate novel and interesting outcomes forever. Drawing inspiration from nature’s boundless creativity, Clune and his collaborators aim to build “Darwin Complete” search spaces, where any computable environment can be simulated. By harnessing the power of large language models and reinforcement learning, these AI agents continuously develop new skills, explore uncharted domains, and even cooperate with one another in complex tasks.

SPONSOR MESSAGES:

***

CentML offers competitive pricing for GenAI model deployment, with flexible options that suit everything from small models to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?

They are hosting an event in Zurich on January 9th with the ARChitects; join if you can.

Go to https://tufalabs.ai/

***

A central theme throughout Clune’s work is “interestingness”: an elusive quality that nudges AI agents toward genuinely original discoveries. Rather than rely on narrowly defined metrics—which often fail due to Goodhart’s Law—Clune employs language models to serve as proxies for human judgment. In doing so, he ensures that “interesting” always reflects authentic novelty, opening the door to unending innovation.

Yet with these extraordinary possibilities come equally significant risks. Clune says we need AI safety measures—particularly as the technology matures into powerful, open-ended forms. Potential pitfalls include agents inadvertently causing harm or malicious actors subverting AI’s capabilities for destructive ends. To mitigate this, Clune advocates for prudent governance involving democratic coalitions, regulation of cutting-edge models, and global alignment protocols.

Jeff Clune:

https://x.com/jeffclune

http://jeffclune.com/

(Interviewer: Tim Scarfe)

TOC:

1. Introduction

[00:00:00] 1.1 Overview and Opening Thoughts

2. Sponsorship

[00:03:00] 2.1 TufaAI Labs and CentML

3. Evolutionary AI Foundations

[00:04:12] 3.1 Open-Ended Algorithm Development and Abstraction Approaches

[00:07:56] 3.2 Novel Intelligence Forms and Serendipitous Discovery

[00:11:46] 3.3 Frontier Models and the 'Interestingness' Problem

[00:30:36] 3.4 Darwin Complete Systems and Evolutionary Search Spaces

4. System Architecture and Learning

[00:37:35] 4.1 Code Generation vs Neural Networks Comparison

[00:41:04] 4.2 Thought Cloning and Behavioral Learning Systems

[00:47:00] 4.3 Language Emergence in AI Systems

[00:50:23] 4.4 AI Interpretability and Safety Monitoring Techniques

5. AI Safety and Governance

[00:53:56] 5.1 Language Model Consistency and Belief Systems

[00:57:00] 5.2 AI Safety Challenges and Alignment Limitations

[01:02:07] 5.3 Open Source AI Development and Value Alignment

[01:08:19] 5.4 Global AI Governance and Development Control

6. Advanced AI Systems and Evolution

[01:16:55] 6.1 Agent Systems and Performance Evaluation

[01:22:45] 6.2 Continuous Learning Challenges and In-Context Solutions

[01:26:46] 6.3 Evolution Algorithms and Environment Generation

[01:35:36] 6.4 Evolutionary Biology Insights and Experiments

[01:48:08] 6.5 Personal Journey from Philosophy to AI Research

Shownotes:

We craft detailed show notes for each episode, with a high-quality transcript, references, and the best parts bolded.

https://www.dropbox.com/scl/fi/fz43pdoc5wq5jh7vsnujl/JEFFCLUNE.pdf?rlkey=uu0e70ix9zo6g5xn6amykffpm&st=k2scxteu&dl=0

Next Episode

Yoshua Bengio - Designing out Agency for Safe AI

Professor Yoshua Bengio is a pioneer of deep learning and a Turing Award winner. Bengio talks about AI safety, why goal-seeking “agentic” AIs might be dangerous, and his vision for building powerful AI tools without giving them agency. Topics include reward-tampering risks, instrumental convergence, global AI governance, and how non-agent AIs could revolutionize science and medicine while reducing existential threats. Perfect for anyone curious about advanced AI risks and how to manage them responsibly.

SPONSOR MESSAGES:

***

CentML offers competitive pricing for GenAI model deployment, with flexible options that suit everything from small models to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?

They are hosting an event in Zurich on January 9th with the ARChitects; join if you can.

Go to https://tufalabs.ai/

***

Interviewer: Tim Scarfe

Yoshua Bengio:

https://x.com/Yoshua_Bengio

https://scholar.google.com/citations?user=kukA0LcAAAAJ&hl=en

https://yoshuabengio.org/

https://en.wikipedia.org/wiki/Yoshua_Bengio

TOC:

1. AI Safety Fundamentals

[00:00:00] 1.1 AI Safety Risks and International Cooperation

[00:03:20] 1.2 Fundamental Principles vs Scaling in AI Development

[00:11:25] 1.3 System 1/2 Thinking and AI Reasoning Capabilities

[00:15:15] 1.4 Reward Tampering and AI Agency Risks

[00:25:17] 1.5 Alignment Challenges and Instrumental Convergence

2. AI Architecture and Safety Design

[00:33:10] 2.1 Instrumental Goals and AI Safety Fundamentals

[00:35:02] 2.2 Separating Intelligence from Goals in AI Systems

[00:40:40] 2.3 Non-Agent AI as Scientific Tools

[00:44:25] 2.4 Oracle AI Systems and Mathematical Safety Frameworks

3. Global Governance and Security

[00:49:50] 3.1 International AI Competition and Hardware Governance

[00:51:58] 3.2 Military and Security Implications of AI Development

[00:56:07] 3.3 Personal Evolution of AI Safety Perspectives

[01:00:25] 3.4 AI Development Scaling and Global Governance Challenges

[01:12:10] 3.5 AI Regulation and Corporate Oversight

4. Technical Innovations

[01:23:00] 4.1 Evolution of Neural Architectures: From RNNs to Transformers

[01:26:02] 4.2 GFlowNets and Symbolic Computation

[01:30:47] 4.3 Neural Dynamics and Consciousness

[01:34:38] 4.4 AI Creativity and Scientific Discovery

SHOWNOTES (transcript, references, best clips, etc.):

https://www.dropbox.com/scl/fi/ajucigli8n90fbxv9h94x/BENGIO_SHOW.pdf?rlkey=38hi2m19sylnr8orb76b85wkw&dl=0

CORE REFS (full list in shownotes and pinned comment):

[00:00:15] Bengio et al.: "AI Risk" Statement

https://www.safe.ai/work/statement-on-ai-risk

[00:23:10] Bengio on reward tampering & AI safety (Harvard Data Science Review)

https://hdsr.mitpress.mit.edu/pub/w974bwb0

[00:40:45] Munk Debate on AI existential risk, featuring Bengio

https://munkdebates.com/debates/artificial-intelligence

[00:44:30] "Can a Bayesian Oracle Prevent Harm from an Agent?" (Bengio et al.) on oracle-to-agent safety

https://arxiv.org/abs/2408.05284

[00:51:20] Bengio (2024) memo on hardware-based AI governance verification

https://yoshuabengio.org/wp-content/uploads/2024/08/FlexHEG-Memo_August-2024.pdf

[01:12:55] Bengio’s involvement in EU AI Act code of practice

https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice

[01:27:05] Complexity-based compositionality theory (Elmoznino, Jiralerspong, Bengio, Lajoie)

https://arxiv.org/abs/2410.14817

[01:29:00] GFlowNet Foundations (Bengio et al.) for probabilistic inference

https://arxiv.org/pdf/2111.09266

[01:32:10] Discrete attractor states in neural systems (Nam, Elmoznino, Bengio, Lajoie)

https://arxiv.org/pdf/2302.06403
