S4E6: MIT’s James DiCarlo on Reverse-Engineering Human Sight with AI

09/06/23 • 45 min

Theory and Practice

Season 4 of our Theory and Practice podcast investigates the powerful new world of AI applications and what it means to be human in the age of human-like artificial intelligence. Episode 6 explores what happens when AI is explicitly used to understand humans.

In this episode, we're joined by James DiCarlo, the Peter de Florez Professor of Neuroscience at Massachusetts Institute of Technology and Director of the MIT Quest for Intelligence. Trained in biomedical engineering and medicine, Professor DiCarlo brings a technical mindset to understanding the machine-like processes in human brains. His focus is on the machinery that enables us to see.

"Anything that our brain achieves is because there's a machine in there. It's not magic; there's some kind of machine running. So that means there is some machine that could emulate what we do. And our job is to figure out the details of that machine. So the problem is someday tractable. It's just a question of when."

Professor DiCarlo unpacks how well convolutional neural networks (CNNs), a form of deep learning, mimic the human brain. These networks excel at finding patterns in images to recognize objects. One key difference is that human vision feeds information into multiple areas of the brain and receives feedback from them. Professor DiCarlo argues that CNNs help him and his team understand how our brains gather vast amounts of data from a limited field of vision in a millisecond glimpse.

Alex and Anthony also discuss the potential clinical applications of machine learning — from using an ECG to determine a person's biological age to understanding a person's cardiovascular health from retina images.


Previous Episode

S4E5: Mapping the World of Smell to Broaden Diagnostics in Healthcare

On Season 4 of Theory and Practice, Anthony Philippakis and Alex Wiltschko explore newly emerging human-like artificial intelligence and robots — and how we can learn as much about ourselves, as humans, as we do about the machines we use. The series has delved into many aspects of AI, from safety guardrails to empathic communication to robotic surgery and how computers can make decisions.

In episode 5, we explore how machine learning helped create a map of odor and how that technology will train computers to smell. Anthony Philippakis visits Dr. Alex Wiltschko’s lab at Osmo, where scientists are dedicated to digitizing our sense of smell.

Next Episode

S4E7: Google DeepMind’s Clément Farabet on AI Reasoning

In this season of Theory and Practice, we explore newly emerging human-like artificial intelligence and robots — and how we can learn as much about ourselves, as humans, as we do about the machines we use. As we near the end of Season 4, we explore whether decision-making and judgment are still the final preserve of humans.

Our guest for Episode 7 is Dr. Clément Farabet, VP of Research at Google DeepMind. For the past 15 years, Dr. Farabet’s work has been guided by a central mission: figuring out how to build AI systems that can learn on their own — and ultimately redefine how we write software. We discuss the conundrum in the Chinese Room Argument to explore whether computers can achieve artificial general intelligence.

Dr. Farabet outlines four modules required for computers to demonstrate understanding. These modules include a predictive model of its environment that can create a representation of its world and an ability to store memories. He also points to the ability to perform reasoning about possible futures from its representation and memories. And finally, he explains how the ability to act in the world is key to illustrating understanding.

Dr. Farabet believes that we can build computers to become more human-like than most people may realize, but the overarching goal should be to build systems that improve human life.
