Eye On A.I.
Craig S. Smith
Top 10 Eye On A.I. Episodes
Goodpods has curated a list of the 10 best Eye On A.I. episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Eye On A.I. for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Eye On A.I. episode by adding your comments to the episode page.
05/24/23 • 62 min
Welcome to Eye on AI, the podcast that keeps you informed about the latest trends, obstacles, and possibilities in the realm of artificial intelligence. In this episode, we have the privilege of engaging in a thought-provoking discussion with Aidan Gomez, an exceptional AI developer and co-founder of Cohere. Aidan's passion lies in enhancing the efficiency of massive neural networks and effectively deploying them in the real world. Drawing from his vast experience, which includes leading a team of researchers at For.ai and conducting groundbreaking research at Google Brain, Aidan provides us with unique insights and anecdotes that shed light on the AI landscape. During our conversation, Aidan explains his collaboration with the legendary Geoffrey Hinton and their remarkable project at Google Brain. We delve into the intricate architecture of AI systems, demystifying the construction of the transformative transformer algorithm. Aidan generously shares his knowledge on the creation of attention within these models and the complexities of scaling such systems. As we explore the fascinating domain of language models, Aidan discusses their learning process, bridging the gap between code and data. We uncover the immense potential of these models to suggest other large-scale counterparts. We gain invaluable insights into Aidan's journey as a co-founder of Cohere, an innovative platform revolutionizing the utilization of language technology. Tune in to Eye on AI now to immerse yourself in a captivating conversation that will expand your understanding of this ever-developing field.
(00:00) Preview
(00:33) Introduction & sponsorship
(02:00) Aidan's background with machine learning & AI
(05:10) Geoffrey Hinton & Aidan Gomez working together
(07:55) Aidan Gomez & Google Brain's project
(12:53) Aidan's role in building AI architecture
(15:25) How the transformer algorithm is built
(18:25) How do you create attention?
(20:40) How do you scale the model?
(25:10) How language models learn from code and data
(29:55) Did you know the potential of the project?
(34:15) Can LLMs suggest other large models?
(36:45) How Aidan Gomez started Cohere
(41:10) How do people use Cohere?
(46:50) Examples of language technology models
(48:40) How Cohere handles hallucinations
(52:53) The dangers of AI
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
2 Listeners
06/08/23 • 57 min
Welcome to episode #124 of the Eye on AI podcast, where we bring you the latest insights into the fascinating world of artificial intelligence. In this episode, Craig Smith is joined by Sina Kian, General Counsel and COO at Aleo, as they dive deep into the revolutionary realm of zero-knowledge proofs. Join us as we explore the incredible potential of zero-knowledge proofs in safeguarding sensitive data while leveraging it for machine learning and AI applications. Sina Kian shares how this innovative technology can reshape privacy, digital identity, and even social media authentication. During this conversation, we delve into the power of privacy-preserving blockchain technology and its far-reaching impact across industries. Discover how Aleo is at the forefront of making digital identity more secure and how it can be seamlessly integrated across platforms without compromising sensitive information. We examine the future of machine learning and AI, unraveling the role that digital identity plays in accessing products and content based on location. As we venture into the depths of the social media landscape, we also explore the risks and rewards associated with user data and privacy. Gain insights into how privacy-preserving technology can shield user information and authenticate data and content without compromising privacy. This conversation discusses the potential of zero-knowledge proofs and privacy-preserving technology, offering a glimpse into how they will shape the future of machine learning and AI. (00:00) Preview
(00:41) Introduction
(02:28) Sina Kian's background in Aleo & blockchain
(05:49) Blockchain's integration with AI & machine learning
(11:48) How data is protected in blockchain technology
(12:25) Use cases of encryption with Aleo
(18:53) How Aleo works with an open source protocol
(24:13) Aleo's progress in developing its open source project
(31:13) Why social media platforms capture your data
(34:16) How can you find widespread adoption?
(35:43) How governments are getting involved in digital identity
(41:53) How data privacy integrates with Web 3.0
(45:15) Blockchain's implementation in the real world
(48:43) Next steps from Aleo
(53:53) Social media interaction with privacy
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
02/14/24 • 59 min
Join host Craig Smith on episode #169 of Eye on AI as we sit down with Guillermo Rauch, CEO of Vercel, a company that optimizes frontend deployment, making web projects faster to execute and easier to scale.
In this episode, we explore the intersection of AI and web development, showcasing how Vercel is pioneering the integration of generative AI technologies to revolutionize web experiences.
Guillermo Rauch shares his vision on the future of web development, emphasizing the significance of JavaScript, the evolution of the web, and the critical role of Vercel in facilitating developers to seamlessly deploy AI-driven applications.
We also delve into the architectural innovations at Vercel, the democratization of AI, and the synergy of combining diverse AI models for superior functionality. Guillermo's perspectives provide a glimpse into the future of interactive web experiences, powered by AI.
This episode is a must-listen for developers, entrepreneurs, and anyone interested in the cutting-edge of AI and web development.
Don't forget to rate us on Apple Podcasts and Spotify if you enjoyed this episode!
This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.
Download NetSuite’s popular KPI Checklist, designed to give you consistently excellent performance - absolutely free at https://netsuite.com/EYEONAI
Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview and Introduction
(03:01) Guillermo's Background and Next.js Creation
(08:19) Vercel's Operation on AWS and Alternative Solutions
(13:12) Combining Different AI Models for Enhanced Outputs
(17:25) The Evolution of AI and Focus on User Experience
(22:46) Introduction to Copilot and AI Integration in Coding
(26:33) The Future of AI Technologies
(31:41) Generative AI Applications and Vercel's Role
(33:14) Flexibility and Framework Support on Vercel
(38:11) Vercel's Architectural Approach
(41:02) Trends in Generative AI and App Development
(47:54) Supporting Applications with World Models
(50:54) Vercel's Growth and Expansion in Serving AI Companies
(55:20) Flexibility in Model Integration and Future Directions
(57:28) Closing Remarks and Oracle Cloud Infrastructure Promo
1 Listener
03/02/23 • 34 min
In this episode, Ben Sorscher, a PhD student at Stanford, sheds light on the challenges posed by the ever-increasing size of data sets used to train machine learning models, specifically large language models. The sheer size of these data sets has been pushing the limits of scaling, as the cost of training and the environmental impact of the electricity they consume becomes increasingly enormous. As a solution, Ben discusses the concept of "data pruning" - a method of reducing the size of data sets without sacrificing model performance. Data pruning involves selecting the most important or representative data points and removing the rest, resulting in a smaller, more efficient data set that still produces accurate results. Throughout the podcast, Ben delves into the intricacies of data pruning, including the benefits and drawbacks of the technique, the practical considerations for implementing it in machine learning models, and the potential impact it could have on the field of artificial intelligence.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
04/13/23 • 40 min
In this episode of the Eye on A.I. podcast, host Craig Smith interviews Yoshua Bengio, one of the founding fathers of deep learning and a Turing Award winner. Bengio shares his insights on the famous pause letter, which he signed along with other prominent A.I. researchers, calling for a more responsible approach to the development of A.I. technologies. He discusses the potential risks associated with increasingly powerful A.I. models and the importance of ensuring that models are developed in a way that aligns with our ethical values. Bengio also talks about his latest research on world models and inference machines, which aim to provide A.I. systems with the ability to reason about reality and make more informed decisions. He explains how these models are built and how they could be used in a variety of applications, such as autonomous vehicles and robotics. Throughout the podcast, Bengio emphasises the need for interdisciplinary collaboration and the importance of addressing the ethical implications of A.I. technologies. Don't miss this insightful conversation with one of the most influential figures in A.I. on the Eye on A.I. podcast!
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
04/28/23 • 55 min
In this podcast, we sit down with Danny Tobey, an attorney with the global law firm DLA Piper, to discuss the changing legal dynamics surrounding artificial intelligence. As one of the leading experts in the field, Danny provides valuable insights into the current state of legislation and regulation, the efforts of regulatory bodies like the Federal Trade Commission in tackling issues related to AI, and how the law firm of the future will look as AI continues to transform the economy. With the growing impact of AI on all aspects of our lives, the legal profession is facing unique challenges and opportunities. Danny brings a wealth of knowledge and experience to the conversation, having worked with clients in industries ranging from healthcare to financial services to consumer products. Throughout the podcast, Danny explores the ethical and legal implications of AI, as well as the ways in which AI is already reshaping the legal industry. He provides thoughtful perspectives on how the legal profession can adapt and evolve to meet the demands of an AI-driven economy, and the role that lawyers and regulatory bodies will play in shaping the future of this transformative technology. Whether you're a legal professional looking to stay on top of the latest developments in AI, or simply interested in the ways that AI is changing the legal landscape, this podcast is sure to offer valuable insights and food for thought. So join us as we dive deep into the intersection of law and artificial intelligence with Danny Tobey.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
02/01/23 • 37 min
In this episode, Terry Sejnowski, an AI pioneer, chairman of the NeurIPS Foundation, and co-creator of Boltzmann Machines, delves into the latest developments in deep learning and their potential impact on our understanding of the human brain. Terry Sejnowski begins by discussing the NeurIPS conference - one of the most significant events in the field of artificial intelligence - and its role in advancing research and innovation in deep learning. He shares insights into the latest breakthroughs in the field, including the repurposing of the sleep-wake cycle of Boltzmann Machines in Geoff Hinton's new Forward-Forward algorithm. Throughout the episode, Terry Sejnowski shares his expertise on the intersection of artificial intelligence and neuroscience, exploring how advances in deep learning may help us better understand the complexities of the human brain. He discusses how researchers are using AI techniques to study brain activity and the potential implications for fields such as medicine and psychology. Overall, this episode will be of particular interest to those interested in the latest developments in artificial intelligence and their potential applications in neuroscience and related fields.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
01/19/23 • 58 min
In this episode, Geoffrey Hinton, a renowned computer scientist and a leading expert in deep learning, provides an in-depth exploration of his groundbreaking new learning algorithm - the forward-forward algorithm. Hinton argues this algorithm provides a more plausible model for how the cerebral cortex might learn, and could be the key to unlocking new possibilities in artificial intelligence. Throughout the episode, Hinton discusses the mechanics of the forward-forward algorithm, including how it differs from traditional deep learning models and what makes it more effective. He also provides insights into the potential applications of this new algorithm, such as enabling machines to perform tasks that were previously thought to be exclusive to human cognition. Hinton shares his thoughts on the current state of deep learning and its future prospects, particularly in neuroscience. He explores how advances in deep learning may help us gain a better understanding of our own brains and how we can use this knowledge to create more intelligent machines. Overall, this podcast provides a fascinating glimpse into the latest developments in artificial intelligence and the cutting-edge research being conducted by one of its leading pioneers.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
02/16/23 • 55 min
In this episode, Yann LeCun, a renowned computer scientist and AI researcher, shares his insights on the limitations of large language models and how his new joint embedding predictive architecture could help bridge the gap. While large language models have made remarkable strides in natural language processing and understanding, they are still far from perfect. Yann LeCun points out that these models often cannot capture the nuances and complexities of language, leading to inaccuracies and errors. To address this gap, Yann LeCun introduces his new joint embedding predictive architecture - a novel approach to language modelling that combines techniques from computer vision and natural language processing. This approach involves jointly embedding text and images, allowing for more accurate predictions and a better understanding of the relationships between concepts and objects.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
03/15/23 • 43 min
In this podcast episode, Ilya Sutskever, the co-founder and chief scientist at OpenAI, discusses his vision for the future of artificial intelligence (AI), including large language models like GPT-4. Sutskever starts by explaining the importance of AI research and how OpenAI is working to advance the field. He shares his views on the ethical considerations of AI development and the potential impact of AI on society. The conversation then moves on to large language models and their capabilities. Sutskever talks about the challenges of developing GPT-4 and the limitations of current models. He discusses the potential for large language models to generate text that is indistinguishable from human writing and how this technology could be used in the future. Sutskever also shares his views on AI-aided democracy and how AI could help solve global problems such as climate change and poverty. He emphasises the importance of building AI systems that are transparent, ethical, and aligned with human values. Throughout the conversation, Sutskever provides insights into the current state of AI research, the challenges facing the field, and his vision for the future of AI. This podcast episode is a must-listen for anyone interested in the intersection of AI, language, and society.
Timestamps:
(00:04) Introduction of Craig Smith and Ilya Sutskever
(01:00) Sutskever's AI and consciousness interests
(02:30) Sutskever's start in machine learning with Hinton
(03:45) Realization about training large neural networks
(06:33) Convolutional neural network breakthroughs and ImageNet
(08:36) Predicting the next thing for unsupervised learning
(10:24) Development of GPT-3 and scaling in deep learning
(11:42) Specific scaling in deep learning and potential discovery
(13:01) Small changes can have big impact
(13:46) Limits of large language models and lack of understanding
(14:32) Difficulty in discussing limits of language models
(15:13) Statistical regularities lead to better understanding of world
(16:33) Limitations of language models and hope for reinforcement learning
(17:52) Teaching neural nets through interaction with humans
(21:44) Multimodal understanding not necessary for language models
(25:28) Autoregressive transformers and high-dimensional distributions
(26:02) Autoregressive transformers work well on images
(27:09) Pixels represented like a string of text
(29:40) Large generative models learn compressed representations of real-world processes
(31:31) Human teachers needed to guide reinforcement learning process
(35:10) Opportunity to teach AI models more skills with less data
(39:57) Desirable to have democratic process for providing information
(41:15) Impossible to understand everything in complicated situations
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
1 Listener
Show more best episodes
FAQ
How many episodes does Eye On A.I. have?
Eye On A.I. currently has 227 episodes available.
What topics does Eye On A.I. cover?
The podcast is about Research, Podcasts, Technology and Programming.
What is the most popular episode on Eye On A.I.?
The episode title '#123 Aidan Gomez: How AI Language Models Will Shape The Future' is the most popular.
What is the average episode length on Eye On A.I.?
The average episode length on Eye On A.I. is 45 minutes.
How often are episodes of Eye On A.I. released?
Episodes of Eye On A.I. are typically released every 7 days, 7 hours.
When was the first episode of Eye On A.I.?
The first episode of Eye On A.I. was released on Oct 8, 2018.
Show more FAQ