
Want to Understand Neural Networks? Think Elastic Origami! - Prof. Randall Balestriero
02/08/25 • 78 min
Professor Randall Balestriero joins us to discuss neural network geometry, spline theory, and emerging phenomena in deep learning, based on research presented at ICML. Topics include the delayed emergence of adversarial robustness in neural networks ("grokking"), geometric interpretations of neural networks via spline theory, and challenges in reconstruction learning. We also cover geometric analysis of Large Language Models (LLMs) for toxicity detection and the relationship between intrinsic dimensionality and model control in RLHF.
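To make the spline view discussed in the episode concrete: a ReLU network is a continuous piecewise-affine map, so each activation pattern carves out a region of input space on which the whole network reduces to a single affine function. The sketch below is only an illustration of that idea with arbitrary random weights (not code from any of the referenced papers); the helper name `region_affine_map` is our own.

```python
import numpy as np

# A tiny ReLU network with arbitrary weights, used only to illustrate the
# piecewise-affine ("elastic origami") interpretation from the episode.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x):
    """Standard forward pass: affine -> ReLU -> affine."""
    h = W1 @ x + b1
    return W2 @ np.maximum(h, 0.0) + b2

def region_affine_map(x):
    """Return (A, c): the single affine map the net computes on x's region."""
    mask = (W1 @ x + b1 > 0).astype(float)   # activation pattern = region id
    A = W2 @ (W1 * mask[:, None])            # effective slope on this region
    c = W2 @ (b1 * mask) + b2                # effective offset on this region
    return A, c

x = np.array([0.3, -0.2])
A, c = region_affine_map(x)
# Inside the region, the network output coincides exactly with A x + c.
assert np.allclose(forward(x), A @ x + c)
```

All inputs sharing an activation pattern share the same `(A, c)`, which is why the input-space partition (and how it evolves during training) is a natural object for the geometric analyses mentioned above.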
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.
https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or in getting involved in their events?
Go to https://tufalabs.ai/
***
Randall Balestriero
https://x.com/randall_balestr
https://randallbalestriero.github.io/
Show notes and transcript: https://www.dropbox.com/scl/fi/3lufge4upq5gy0ug75j4a/RANDALLSHOW.pdf?rlkey=nbemgpa0jhawt1e86rx7372e4&dl=0
TOC:
Introduction
00:00:00: Introduction
Neural Network Geometry and Spline Theory
00:01:41: Neural Network Geometry and Spline Theory
00:07:41: Deep Networks Always Grok
00:11:39: Grokking and Adversarial Robustness
00:16:09: Double Descent and Catastrophic Forgetting
Reconstruction Learning
00:18:49: Reconstruction Learning
00:24:15: Frequency Bias in Neural Networks
Geometric Analysis of Neural Networks
00:29:02: Geometric Analysis of Neural Networks
00:34:41: Adversarial Examples and Region Concentration
LLM Safety and Geometric Analysis
00:40:05: LLM Safety and Geometric Analysis
00:46:11: Toxicity Detection in LLMs
00:52:24: Intrinsic Dimensionality and Model Control
00:58:07: RLHF and High-Dimensional Spaces
Conclusion
01:02:13: Neural Tangent Kernel
01:08:07: Conclusion
REFS:
[00:01:35] Humayun – Deep network geometry & input space partitioning
https://arxiv.org/html/2408.04809v1
[00:03:55] Balestriero & Baraniuk – Linking deep networks to adaptive spline operators
https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf
[00:13:55] Song et al. – Gradient-based white-box adversarial attacks
https://arxiv.org/abs/2012.14965
[00:16:05] Humayun, Balestriero & Baraniuk – Grokking phenomenon & emergent robustness
https://arxiv.org/abs/2402.15555
[00:18:25] Humayun – Training dynamics & double descent via linear region evolution
https://arxiv.org/abs/2310.12977
[00:20:15] Balestriero – Power diagram partitions in DNN decision boundaries
https://arxiv.org/abs/1905.08443
[00:23:00] Frankle & Carbin – Lottery Ticket Hypothesis for network pruning
https://arxiv.org/abs/1803.03635
[00:24:00] Belkin et al. – Double descent phenomenon in modern ML
https://arxiv.org/abs/1812.11118
[00:25:55] Balestriero et al. – Batch normalization’s regularization effects
https://arxiv.org/pdf/2209.14778
[00:29:35] EU – EU AI Act 2024 with compute restrictions
https://www.lw.com/admin/upload/SiteAttachments/EU-AI-Act-Navigating-a-Brave-New-World.pdf
[00:39:30] Humayun, Balestriero & Baraniuk – SplineCam: Visualizing deep network geometry
https://openaccess.thecvf.com/content/CVPR2023/papers/Humayun_SplineCam_Exact_Visualization_and_Characterization_of_Deep_Network_Geometry_and_CVPR_2023_paper.pdf
[00:40:40] Carlini – Trade-offs between adversarial robustness and accuracy
https://arxiv.org/pdf/2407.20099
[00:44:55] Balestriero & LeCun – Limitations of reconstruction-based learning methods
https://openreview.net/forum?id=ez7w0Ss4g9
(truncated, see shownotes PDF)
Previous Episode

Nicholas Carlini (Google DeepMind)
Nicholas Carlini from Google DeepMind offers his view of AI security, emergent LLM capabilities, and his groundbreaking model-stealing research. He reveals how LLMs can unexpectedly excel at tasks like chess and discusses the security pitfalls of LLM-generated code.
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.
https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or in getting involved in their events?
Go to https://tufalabs.ai/
***
Transcript: https://www.dropbox.com/scl/fi/lat7sfyd4k3g5k9crjpbf/CARLINI.pdf?rlkey=b7kcqbvau17uw6rksbr8ccd8v&dl=0
TOC:
1. ML Security Fundamentals
[00:00:00] 1.1 ML Model Reasoning and Security Fundamentals
[00:03:04] 1.2 ML Security Vulnerabilities and System Design
[00:08:22] 1.3 LLM Chess Capabilities and Emergent Behavior
[00:13:20] 1.4 Model Training, RLHF, and Calibration Effects
2. Model Evaluation and Research Methods
[00:19:40] 2.1 Model Reasoning and Evaluation Metrics
[00:24:37] 2.2 Security Research Philosophy and Methodology
[00:27:50] 2.3 Security Disclosure Norms and Community Differences
3. LLM Applications and Best Practices
[00:44:29] 3.1 Practical LLM Applications and Productivity Gains
[00:49:51] 3.2 Effective LLM Usage and Prompting Strategies
[00:53:03] 3.3 Security Vulnerabilities in LLM-Generated Code
4. Advanced LLM Research and Architecture
[00:59:13] 4.1 LLM Code Generation Performance and O(1) Labs Experience
[01:03:31] 4.2 Adaptation Patterns and Benchmarking Challenges
[01:10:10] 4.3 Model Stealing Research and Production LLM Architecture Extraction
REFS:
[00:01:15] Nicholas Carlini’s personal website & research profile (Google DeepMind, ML security) - https://nicholas.carlini.com/
[00:01:50] CentML AI compute platform for language model workloads - https://centml.ai/
[00:04:30] Seminal paper on neural network robustness against adversarial examples (Carlini & Wagner, 2016) - https://arxiv.org/abs/1608.04644
[00:05:20] Computer Fraud and Abuse Act (CFAA) – primary U.S. federal law on computer hacking liability - https://www.justice.gov/jm/jm-9-48000-computer-fraud
[00:08:30] Blog post: Emergent chess capabilities in GPT-3.5-turbo-instruct (Nicholas Carlini, Sept 2023) - https://nicholas.carlini.com/writing/2023/chess-llm.html
[00:16:10] Paper: “Self-Play Preference Optimization for Language Model Alignment” (Yue Wu et al., 2024) - https://arxiv.org/abs/2405.00675
[00:18:00] GPT-4 Technical Report: development, capabilities, and calibration analysis - https://arxiv.org/abs/2303.08774
[00:22:40] Historical shift from descriptive to algebraic chess notation (FIDE) - https://en.wikipedia.org/wiki/Descriptive_notation
[00:23:55] Analysis of distribution shift in ML (Hendrycks et al.) - https://arxiv.org/abs/2006.16241
[00:27:40] Nicholas Carlini’s essay “Why I Attack” (June 2024) – motivations for security research - https://nicholas.carlini.com/writing/2024/why-i-attack.html
[00:34:05] Google Project Zero’s 90-day vulnerability disclosure policy - https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-policy.html
[00:51:15] Evolution of Google search syntax & user behavior (Daniel M. Russell) - https://www.amazon.com/Joy-Search-Google-Master-Information/dp/0262042878
[01:04:05] Rust’s ownership & borrowing system for memory safety - https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html
[01:10:05] Paper: “Stealing Part of a Production Language Model” (Carlini et al., March 2024) – extraction attacks on ChatGPT, PaLM-2 - https://arxiv.org/abs/2403.06634
[01:10:55] First model stealing paper (Tramèr et al., 2016) – attacking ML APIs via prediction - https://arxiv.org/abs/1609.02943
Next Episode

Sepp Hochreiter - LSTM: The Comeback Story?
Sepp Hochreiter, the inventor of LSTM (Long Short-Term Memory) networks, a foundational technology in AI, discusses his journey, the origins of LSTM, and why he believes his latest work, xLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective on Large Language Models (LLMs) and why reasoning is a critical missing piece in current AI systems.
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!
https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. They are hiring a Chief Engineer and ML engineers, and host events in Zurich.
Go to https://tufalabs.ai/
***
TRANSCRIPT AND BACKGROUND READING:
https://www.dropbox.com/scl/fi/n1vzm79t3uuss8xyinxzo/SEPPH.pdf?rlkey=fp7gwaopjk17uyvgjxekxrh5v&dl=0
Prof. Sepp Hochreiter
https://www.nx-ai.com/
https://x.com/hochreitersepp
https://scholar.google.at/citations?user=tvUH3WMAAAAJ&hl=en
TOC:
1. LLM Evolution and Reasoning Capabilities
[00:00:00] 1.1 LLM Capabilities and Limitations Debate
[00:03:16] 1.2 Program Generation and Reasoning in AI Systems
[00:06:30] 1.3 Human vs AI Reasoning Comparison
[00:09:59] 1.4 New Research Initiatives and Hybrid Approaches
2. LSTM Technical Architecture
[00:13:18] 2.1 LSTM Development History and Technical Background
[00:20:38] 2.2 LSTM vs RNN Architecture and Computational Complexity
[00:25:10] 2.3 xLSTM Architecture and Flash Attention Comparison
[00:30:51] 2.4 Evolution of Gating Mechanisms from Sigmoid to Exponential
3. Industrial Applications and Neuro-Symbolic AI
[00:40:35] 3.1 Industrial Applications and Fixed Memory Advantages
[00:42:31] 3.2 Neuro-Symbolic Integration and Pi AI Project
[00:46:00] 3.3 Integration of Symbolic and Neural AI Approaches
[00:51:29] 3.4 Evolution of AI Paradigms and System Thinking
[00:54:55] 3.5 AI Reasoning and Human Intelligence Comparison
[00:58:12] 3.6 NXAI Company and Industrial AI Applications
REFS:
[00:00:15] Seminal LSTM paper establishing Hochreiter's expertise (Hochreiter & Schmidhuber)
https://direct.mit.edu/neco/article-abstract/9/8/1735/6109/Long-Short-Term-Memory
[00:04:20] Kolmogorov complexity and program composition limitations (Kolmogorov)
https://link.springer.com/article/10.1007/BF02478259
[00:07:10] Limitations of LLM mathematical reasoning and symbolic integration (Various Authors)
https://www.arxiv.org/pdf/2502.03671
[00:09:05] AlphaGo’s Move 37 demonstrating creative AI (Google DeepMind)
https://deepmind.google/research/breakthroughs/alphago/
[00:10:15] New AI research lab in Zurich for fundamental LLM research (Benjamin Crouzier)
https://tufalabs.ai
[00:19:40] Introduction of xLSTM with exponential gating (Beck, Hochreiter, et al.)
https://arxiv.org/abs/2405.04517
[00:22:55] FlashAttention: fast & memory-efficient attention (Tri Dao et al.)
https://arxiv.org/abs/2205.14135
[00:31:00] Historical use of sigmoid/tanh activation in 1990s (James A. McCaffrey)
https://visualstudiomagazine.com/articles/2015/06/01/alternative-activation-functions.aspx
[00:36:10] Mamba 2 state space model architecture (Albert Gu et al.)
https://arxiv.org/abs/2312.00752
[00:46:00] Austria’s Pi AI project integrating symbolic & neural AI (Hochreiter et al.)
https://www.jku.at/en/institute-of-machine-learning/research/projects/
[00:48:10] Neuro-symbolic integration challenges in language models (Diego Calanzone et al.)
https://openreview.net/forum?id=7PGluppo4k
[00:49:30] JKU Linz’s historical and neuro-symbolic research (Sepp Hochreiter)
https://www.jku.at/en/news-events/news/detail/news/bilaterale-ki-projekt-unter-leitung-der-jku-erhaelt-fwf-cluster-of-excellence/
YT: https://www.youtube.com/watch?v=8u2pW2zZLCs
<truncated, see show notes/YT>