
Understanding Implicit Neural Representations with Itzik Ben-Shabat
04/21/23 • 55 min
In this episode of Computer Vision Decoded, we are going to dive into implicit neural representations.
We are joined by Itzik Ben-Shabat, a Visiting Research Fellow at the Australian National University (ANU) and Technion – Israel Institute of Technology, as well as the host of the Talking Papers Podcast.
You will come away with a core understanding of implicit neural representations: key concepts and terminology, how they are being used in applications today, and Itzik's research into improving output with limited input data.
Episode timeline:
00:00 Intro
01:23 Overview of what implicit neural representations are
04:08 How INR compares and contrasts with a NeRF
08:17 Why Itzik pursued this line of research
10:56 What is normalization and what are normals
13:13 Past research people should read to learn about the basics of INR
16:10 What is an implicit representation (without the neural network)
24:27 What is DiGS and what problem with INR does it solve?
35:54 What is OG-INR and what problem with INR does it solve?
40:43 What software can researchers use to understand INR?
49:15 What non-scientists should focus on to learn about INR
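As background for the discussion of implicit representations and normals in the timeline above, here is a minimal sketch (not from the episode) of a classic implicit representation: a sphere described by a signed distance function f, where the surface is the set of points with f = 0. An implicit neural representation replaces this closed-form f with a neural network fit to data; the function names here are illustrative only.

```python
import math

# An implicit representation of a unit sphere: f(p) = 0 exactly on the
# surface, f(p) < 0 inside, f(p) > 0 outside. This closed-form signed
# distance function is what an implicit *neural* representation would
# learn with a small neural network instead.
def sphere_sdf(x, y, z, radius=1.0):
    return math.sqrt(x * x + y * y + z * z) - radius

# Surface normals come with the representation for free: the normal at a
# surface point is the normalized gradient of f, estimated here with
# central finite differences.
def normal(x, y, z, eps=1e-5):
    g = (
        sphere_sdf(x + eps, y, z) - sphere_sdf(x - eps, y, z),
        sphere_sdf(x, y + eps, z) - sphere_sdf(x, y - eps, z),
        sphere_sdf(x, y, z + eps) - sphere_sdf(x, y, z - eps),
    )
    length = math.sqrt(sum(c * c for c in g))
    return tuple(c / length for c in g)

print(sphere_sdf(1.0, 0.0, 0.0))  # 0.0 -> the point lies on the surface
print(normal(1.0, 0.0, 0.0))      # close to (1, 0, 0), the outward normal
```

This is the "implicit representation without the neural network" from the 16:10 segment; the gradient-based normal is the quantity discussed in the 10:56 segment on normals.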
Itzik's Website: https://www.itzikbs.com/
Follow Itzik on Twitter: https://twitter.com/sitzikbs
Follow Itzik on LinkedIn: https://www.linkedin.com/in/yizhak-itzik-ben-shabat-67b3b1b7/
Talking Papers Podcast: https://talking.papers.podcast.itzikbs.com/
Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter at: https://twitter.com/jonstephens85
Referenced past episode- What is CVPR: https://share.transistor.fm/s/15edb19d
This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
Previous Episode

From 2D to 3D: 4 Ways to Make a 3D Reconstruction from Imagery
In this episode of Computer Vision Decoded, we are going to dive into 4 different ways to 3D reconstruct a scene with images. Our cohost Jared Heinly, who holds a PhD in computer science specializing in 3D reconstruction from images, will dive into the 4 distinct strategies and discuss the pros and cons of each.
Links to content shared in this episode:
Live SLAM to measure a stockpile with SR Measure: https://srmeasure.com/professional
Jared's notes on the iPhone LiDAR and SLAM: https://everypoint.medium.com/everypoint-gets-hands-on-with-apples-new-lidar-sensor-44eeb38db579
How to capture images for 3D reconstruction: https://youtu.be/AQfRdr_gZ8g
00:00 Intro
01:30 3D Reconstruction from Video
13:48 3D Reconstruction from Images
28:05 3D Reconstruction from Stereo Pairs
38:43 3D Reconstruction from SLAM
Follow Jared Heinly
Twitter: https://twitter.com/JaredHeinly
LinkedIn https://www.linkedin.com/in/jheinly/
Follow Jonathan Stephens
Twitter: https://twitter.com/jonstephens85
LinkedIn: https://www.linkedin.com/in/jonathanstephens/
This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
Next Episode

OpenMVG Decoded: Pierre Moulon's 10 Year Journey Building Open-Source Software
In this episode of Computer Vision Decoded, we are going to dive into Pierre Moulon's 10 years experience building OpenMVG. We also cover the impact of open-source software in the computer vision industry and everything involved in building your own project. There is a lot to learn here!
Our episode guest, Pierre Moulon, is a computer vision research scientist and creator of OpenMVG - a library for computer vision scientists, targeted at the Multiple View Geometry community.
The episode follows Pierre's journey building OpenMVG, which he wrote about as an article in his GitHub repository.
Explore OpenMVG on GitHub: https://github.com/openMVG/openMVG
Pierre's article on building OpenMVG: https://github.com/openMVG/openMVG/discussions/2165
Episode timeline:
00:00 Intro
01:00 Pierre Moulon's Background
04:40 What is OpenMVG?
08:43 What is the importance of open-source software for the computer vision community?
12:30 What to look for when deciding to use an open-source project
16:27 What is Multi View Geometry?
24:24 What was the biggest challenge building OpenMVG?
31:00 How do you grow a community around an open-source project
38:09 Choosing a licensing model for your open-source project
43:07 Funding and sponsorship for your open-source project
46:46 Building an open-source project for your resume
49:53 How to get started with OpenMVG
Contact:
Follow Pierre Moulon on LinkedIn: https://www.linkedin.com/in/pierre-moulon/
Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter at: https://twitter.com/jonstephens85
This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io