
Eye On A.I.
Craig S. Smith
2 Listeners
Top 10 Eye On A.I. Episodes
Goodpods has curated a list of the 10 best Eye On A.I. episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Eye On A.I. for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Eye On A.I. episode by adding your comments to the episode page.

05/24/23 • 62 min
Welcome to Eye on AI, the podcast that keeps you informed about the latest trends, obstacles, and possibilities in the realm of artificial intelligence. In this episode, we have the privilege of engaging in a thought-provoking discussion with Aidan Gomez, an exceptional AI developer and co-founder of Cohere. Aidan’s passion lies in enhancing the efficiency of massive neural networks and effectively deploying them in the real world. Drawing from his vast experience, which includes leading a team of researchers at For.ai and conducting groundbreaking research at Google Brain, Aidan provides us with unique insights and anecdotes that shed light on the AI landscape. During our conversation, Aidan explains his collaboration with the legendary Geoffrey Hinton and their remarkable project at Google Brain. We delve into the intricate architecture of AI systems, demystifying the construction of the transformative transformer algorithm. Aidan generously shares his knowledge on the creation of attention within these models and the complexities of scaling such systems. As we explore the fascinating domain of language models, Aidan discusses their learning process, bridging the gap between code and data. We uncover the immense potential of these models to suggest other large-scale counterparts. We gain invaluable insights into Aidan’s journey as a co-founder of Cohere, an innovative platform revolutionizing the utilization of language technology. Tune in to Eye on AI now to immerse yourself in a captivating conversation that will expand your understanding of this ever-developing field.
(00:00) Preview
(00:33) Introduction & sponsorship
(02:00) Aidan's background with machine learning & AI
(05:10) Geoffrey Hinton & Aidan Gomez working together
(07:55) Aidan Gomez & Google Brain's project
(12:53) Aidan's role in building AI architecture
(15:25) How the transformer algorithm is built
(18:25) How do you create attention?
(20:40) How do you scale the model?
(25:10) How language models learn from code and data
(29:55) Did you know the potential of the project?
(34:15) Can LLMs suggest other large models?
(36:45) How Aidan Gomez started Cohere
(41:10) How do people use Cohere?
(46:50) Examples of language technology models
(48:40) How Cohere handles hallucinations
(52:53) The dangers of AI
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
2 Listeners

12/03/23 • 51 min
This episode is sponsored by Oracle. AI is revolutionizing industries, but it needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Stay ahead like Uber and Cohere.
If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai
In episode #159 of Eye on AI, Craig Smith sits down with Peter Chen, the co-founder and CEO of Covariant, in a deep dive into the world of AI-driven robotics.
Peter shares his journey from his early days in China to his pivotal role in shaping the future of AI at Covariant. He discusses the philosophies that guided his work at OpenAI and how these have influenced Covariant's mission in robotics.
This episode unveils how Covariant is harnessing AI to build foundational models for robotics, discussing the intersection of reinforcement learning, generative models, and the broader implications for the field. Peter elaborates on the challenges and breakthroughs in developing AI agents that can operate in dynamic, real-world environments, providing insights into the future of robotics and AI integration.
Join us for this insightful conversation, where Peter Chen maps out the evolving landscape of AI in robotics, shedding light on how Covariant is pushing the boundaries of what's possible.
Stay updated:
Craig Smith's Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview and Introduction
(03:10) Peter Chen's Journey in AI
(09:53) The Evolution of Generative AI and Transformer Models
(12:21) The Concept of World Models in AI
(14:03) Building Robust World Models in AI
(20:48) Training AI: From Video Analysis to Real-World Interaction
(23:10) The Three Pillars of Building a Robotic Foundation Model
(27:36) Architectural Insights of Covariant's Foundation Model
(33:20) Adapting AI Models to Diverse Hardware
(35:01) The Future of Robotics: Progress and Potential
(38:55) Real-World Application and Future of AI-Controlled Robots
(42:11) Envisioning the Future of Automated Warehouses
(45:51) The Evolution of Robotics: Current Trends and Future Prospects
1 Listener

10/18/23 • 63 min
This episode is sponsored by Celonis, the global leader in process mining. AI has landed and enterprises are adapting: to give customers slick experiences and teams the technology to deliver them. The road is long, but you’re closer than you think. Your business processes run through systems, creating data at every step. Celonis reconstructs this data to generate Process Intelligence: a common business language, so AI knows how your business flows, across every department, every system, and every process. With AI solutions powered by Celonis, enterprises get faster, more accurate insights, a new level of automation potential, and a step change in productivity, performance, and customer satisfaction. Process Intelligence is the missing piece in the AI-enabled tech stack.
Go to https://celonis.com/eyeonai to find out more.
Welcome to episode 146 of the Eye on AI podcast. In this episode, host Craig Smith sits down with Viren Jain, a leading Research Scientist at Google in Mountain View, California. Viren, at the helm of the Connectomics team, has pioneered breakthroughs in synapse-resolution brain mapping in collaboration with esteemed institutions such as HHMI, Max Planck, and Harvard.
The conversation kicks off with Jain introducing his academic journey and the evolution of connectomics – the comprehensive study of neural connections in the brain. The duo delves deep into the challenges and advancements in imaging technologies, comparing their progression to genome sequencing. Craig probes further, inquiring about shared principles across organisms, the dynamic behavior of the brain, and the role of electron microscopes in understanding neural structures.
The dialogue also touches upon Google's role in the research, Jain's collaborative ventures, and the potential future of AI and connectomics. Viren also shares his insights into neuron tracing, the significance of combining algorithm predictions, the zebra finch bird's song-learning mechanism, and the broader goal of enhancing human health and medicine.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview, Introduction and Celonis
(06:45) Viren’s Academic and Professional Journey
(13:17) AI’s Technological Progress and Challenges
(22:20) Deep Dive into Connectomics
(39:20) Google's Role in AI
(44:16) Natural Learning vs. AI Algorithms
(57:32) Brain Mapping: Present and Future
(01:00:33) Brain Studies for Medical Advancement
(01:06:05) Final Reflections and Celonis ad
1 Listener

Setting the stage for 2023
Eye On A.I.
01/02/23 • 60 min
To set the stage for some terrific conversations I have coming to you in the new year, in this episode we go back to some earlier conversations that talk about how we got to where we are in deep learning and how those early threads continue to lead innovation.
1 Listener

01/24/24 • 45 min
Join host Craig Smith on episode #166 of Eye on AI as we sit down with Itamar Arel, CEO of Tenyx, a company that uses proprietary neuroscience-inspired AI technology to build the next generation of voice-based conversational agents.
Itamar shares his journey that started with academic research to becoming a leading tech entrepreneur with Tenyx. We explore the evolution of voice AI in customer service and the unique challenges and advancements in understanding and responding to human speech.
We dig deeper into Tenyx’s unique approach to AI-driven customer service while exploring the production and design considerations when developing cutting-edge AI technology.
Make sure you watch till the end as Itamar shares his vision on how AI is going to reshape industries and help advance modern day businesses.
Enjoyed this conversation? Make sure you like, comment and share for more fascinating discussions in the world of AI.
This episode is sponsored by Shopify. Shopify is a commerce platform that allows anyone to set up an online store and sell their products. Whether you’re selling online, on social media, or in person, Shopify has you covered on every base. With Shopify you can sell physical and digital products. You can sell services, memberships, ticketed events, rentals and even classes and lessons.
Sign up for a $1 per month trial period at http://shopify.com/eyeonai
Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview and Introduction
(03:35) Itamar Arel's Career and Introduction to Voice AI Development
(07:39) Key Differences in Current and Past Technology Solutions
(09:10) Advancements in Voice AI and Large Language Models
(11:00) The Inception and Evolution of Tenyx
(13:29) Challenges in Developing Voice AI
(18:27) Innovative Approaches in Voice AI Development
(21:05) Data Handling and Fine-Tuning in Model Development
(25:41) How To Stand Out In The Crowded AI Market
(29:44) The Future of Voice AI and Generative Models
(37:14) Testing and Evaluation of Voice AI Systems
(40:10) Where Will AI Be in 5 Years?
(43:42) Closing Remarks and A Word From Our Sponsors
1 Listener

06/22/23 • 58 min
Welcome to episode #126 of Eye on AI with Craig Smith and Noam Chomsky. Are neural nets the key to understanding the human brain and language acquisition? In this conversation with renowned linguist and cognitive scientist Noam Chomsky, we delve into the limitations of large language models and the ongoing quest to uncover the mysteries of the human mind. Together, we explore the historical development of research in this field, from Minsky’s thesis to Geoffrey Hinton’s goals for understanding the brain. We also discuss the potential harms and benefits of large language models, comparing them to the internal combustion engine and its differences from a gazelle running. We tackle the difficult task of studying the neurophysiology of human cognition and the ethical implications of invasive experiments. As we consider language as a natural object, we discuss the works of notable figures such as Albert Einstein, Galileo, Leibniz, and Turing, and the similarities between language and biology. We even entertain the possibility of extraterrestrial language and communication. Join us on this thought-provoking journey as we explore the intricacies of language, the brain, and our place in the cosmos.
(00:00) Preview
(00:43) Introduction
(01:54) Noam Chomsky’s neural net ideology & criticisms
(06:58) Geoffrey Hinton & Noam Chomsky: How the brain works
(10:05) Correlation between neural nets and the brain
(11:11) Noam Chomsky’s reaction to ChatGPT & LLMs
(15:21) Exploring the mechanisms of the brain
(19:00) What do we learn from chatbots?
(22:30) What are impossible languages?
(26:45) Generative AI doesn’t show true intelligence?
(28:40) Is there a danger of AI becoming too intelligent?
(31:30) Can AI language models become sentient?
(36:40) Turing machine and neural nets experimentations
(42:40) Non-invasive procedures for understanding the brain
(45:54) Does Noam Chomsky still work on understanding the brain?
(49:33) Is Noam Chomsky excited about the future of neural nets?
(55:30) Albert Einstein and Galileo’s principles
(55:40) Is there an extraterrestrial language model?
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener

03/15/23 • 43 min
In this podcast episode, Ilya Sutskever, the co-founder and chief scientist at OpenAI, discusses his vision for the future of artificial intelligence (AI), including large language models like GPT-4. Sutskever starts by explaining the importance of AI research and how OpenAI is working to advance the field. He shares his views on the ethical considerations of AI development and the potential impact of AI on society. The conversation then moves on to large language models and their capabilities. Sutskever talks about the challenges of developing GPT-4 and the limitations of current models. He discusses the potential for large language models to generate text that is indistinguishable from human writing and how this technology could be used in the future. Sutskever also shares his views on AI-aided democracy and how AI could help solve global problems such as climate change and poverty. He emphasizes the importance of building AI systems that are transparent, ethical, and aligned with human values. Throughout the conversation, Sutskever provides insights into the current state of AI research, the challenges facing the field, and his vision for the future of AI. This podcast episode is a must-listen for anyone interested in the intersection of AI, language, and society.
(00:04) Introduction of Craig Smith and Ilya Sutskever
(01:00) Sutskever's AI and consciousness interests
(02:30) Sutskever's start in machine learning with Hinton
(03:45) Realization about training large neural networks
(06:33) Convolutional neural network breakthroughs and ImageNet
(08:36) Predicting the next thing for unsupervised learning
(10:24) Development of GPT-3 and scaling in deep learning
(11:42) Specific scaling in deep learning and potential discovery
(13:01) Small changes can have big impact
(13:46) Limits of large language models and lack of understanding
(14:32) Difficulty in discussing limits of language models
(15:13) Statistical regularities lead to better understanding of world
(16:33) Limitations of language models and hope for reinforcement learning
(17:52) Teaching neural nets through interaction with humans
(21:44) Multimodal understanding not necessary for language models
(25:28) Autoregressive transformers and high-dimensional distributions
(26:02) Autoregressive transformers work well on images
(27:09) Pixels represented like a string of text
(29:40) Large generative models learn compressed representations of real-world processes
(31:31) Human teachers needed to guide reinforcement learning process
(35:10) Opportunity to teach AI models more skills with less data
(39:57) Desirable to have democratic process for providing information
(41:15) Impossible to understand everything in complicated situations
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener

07/06/23 • 48 min
Yoshua Bengio, the legendary AI expert, joins us for Episode 128 of the Eye on AI podcast. In this episode, we delve into the unnerving question: Could the rise of a superhuman AI signal the downfall of humanity as we know it? Join us as we embark on an exploration of the existential threat posed by superhuman AI, leaving no stone unturned. We dissect the Future of Life Institute’s role in overseeing large language model development, as well as the sobering warnings issued by the Center for AI Safety regarding artificial general intelligence. The stakes have never been higher, and we uncover the pressing need for action. Prepare to confront the disconcerting notion of society’s gradual disempowerment and an ever-increasing dependency on AI. We shed light on the challenges of extricating ourselves from this intricate web, where pulling the plug on AI seems almost impossible. Brace yourself for a thought-provoking discussion on the potential psychological effects of realizing that our relentless pursuit of AI advancement may inadvertently jeopardize humanity itself. In this episode, we dare to imagine a future where deep learning amplifies system-2 capabilities, forcing us to develop countermeasures and regulations to mitigate associated risks. We grapple with the possibility of leveraging AI to combat climate change, while treading carefully to prevent catastrophic outcomes. But that’s not all. We confront the notion of AI systems acting autonomously, highlighting the critical importance of stringent regulation surrounding their access and usage.
(00:00) Preview
(00:42) Introduction
(03:30) Yoshua Bengio's essay on AI extinction
(09:45) Use cases for dangerous uses of AI
(12:00) Why are AI risks only happening now?
(17:50) Extinction threat and fear with AI & climate change
(21:10) Super intelligence and the concerns for humanity
(15:02) Yoshua Bengio's research in AI safety
(29:50) Are corporations a form of artificial intelligence?
(31:15) Extinction scenarios by Yoshua Bengio
(37:00) AI agency and AI regulation
(40:15) Who controls AI for the general public?
(45:11) The AI debate in the world
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener

11/23/23 • 41 min
This episode is sponsored by ISS, a leading global provider of video intelligence and data awareness solutions. ISS offers a robust portfolio of AI-powered, high-trust video analytics for streamlining security, safety and business operations within a wide range of vertical markets. So, what do you want to know about your environment?
To learn more about our video intelligence solutions, visit https://issivs.com/
On episode #155 of Eye on AI, Craig Smith sits down with Karen Hao, who is currently a contributing writer for The Atlantic with an impressive background as a foreign correspondent for The Wall Street Journal in Hong Kong and as a senior artificial intelligence editor at MIT Technology Review.
Known for her incisive coverage of AI, including its ethical and societal implications, Hao brings a wealth of knowledge from her experiences in journalism and data science.
In this episode, Karen delves into the recent controversies surrounding OpenAI, shedding light on the power struggles, ethical dilemmas, and corporate alliances shaping the future of artificial intelligence. Her unique perspective, gained from her experience as a foreign correspondent and senior AI editor, offers a deep understanding of the complexities that exist in the AI world.
We explore the intricate narrative constructed by OpenAI, its relationship with giants like Microsoft, and the broader implications of these partnerships on AI development and ethics. Karen's critical analysis provides an insightful look into the often opaque world of AI and its global impact.
If you find this discussion as enlightening as we did, please consider leaving a 5-star rating on Spotify and a review on Apple Podcasts.
Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview and Introduction
(02:28) Karen Hao's Background and Entry into Tech Journalism
(09:14) OpenAI's GPT-2 Controversy and Company Evolution
(11:35) Nonprofit and For-Profit Structure of OpenAI
(15:26) OpenAI Board Dynamics and Power Struggles
(18:07) Transparency and Open Source in AI Development
(21:27) Future of OpenAI and Tech Industry Speculations
(26:22) Microsoft's Investment and Partnership with OpenAI
(31:28) Sam Altman's Potential Chip Company Endeavor
(33:20) AGI Speculations and Existential Risks
(34:38) AGI Definitions and Real-World AI Limitations
1 Listener

11/02/23 • 55 min
This episode is sponsored by Oracle. AI is revolutionizing industries, but it needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Stay ahead like Uber and Cohere.
If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai
Welcome to episode 150 of the ‘Eye on AI’ podcast. In this episode, host Craig Smith sits down with Yann LeCun, a Turing Award winner who has been instrumental in advancing convolutional neural networks and whose work spans machine learning, computer vision, and more.
Tune in as Craig and Yann explore the intricacies of AI, world models, and the challenges of continuous learning.
In this episode, Yann delves deep into the concept of a "world model" - systems that can predict the world's future states, allowing agents to make informed decisions. The discussion transitions to the challenges of training these models, particularly when dealing with diverse data like text and images. We then discuss the computational demands of modern AI models, with Yann highlighting the nuances between generative models for videos and language.
He also touches upon the idea of the "Embodied Turing Tests" and how augmented language models can bridge the gap between human-like behavior and computational efficiency. The spotlight then shifts to pressing concerns surrounding the open-source nature of AI models, with Yann articulating the legal ramifications and the future of open-source AI. Drawing from global perspectives, including China's stance on open-source, Yann underscores the imperative for a collaborative approach in the AI space, ensuring it's reflective of diverse global needs.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview, Oracle and Introduction
(02:42) Decoding The World Model and Gaia 1
(07:43) Energy and Computational Demands of AI
(08:06) Video vs. Text Processing & True AI Capabilities
(11:17) Embodied Turing Test & Augmented LLMs
(15:38) Is AI a Threat To Society?
(25:04) Where is AI Development Headed?
(31:06) Interplay of Neuroscience and AI
(33:33) Yann's Vision, JEPA, and Learning Challenges
(39:05) Yann's Career, AI Progress, and Challenges
(44:47) The Open Source Debate in AI
(55:30) Oracle Cloud Infrastructure
1 Listener
FAQ
How many episodes does Eye On A.I. have?
Eye On A.I. currently has 243 episodes available.
What topics does Eye On A.I. cover?
The podcast is about Research, Podcasts, Technology and Programming.
What is the most popular episode on Eye On A.I.?
The episode title '#123 Aidan Gomez: How AI Language Models Will Shape The Future' is the most popular.
What is the average episode length on Eye On A.I.?
The average episode length on Eye On A.I. is 45 minutes.
How often are episodes of Eye On A.I. released?
Episodes of Eye On A.I. are typically released every 7 days, 3 hours.
When was the first episode of Eye On A.I.?
The first episode of Eye On A.I. was released on Oct 8, 2018.