The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington

1 Creator
3 Listeners
5.0 (1 rating)
Top 10 The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) Episodes
Best episodes, as ranked by Goodpods users' listens
Modeling Human Behavior with Generative Agents with Joon Sung Park - #632
06/05/23 • 46 min
Today we’re joined by Joon Sung Park, a PhD student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems, and his work on the recent paper Generative Agents: Interactive Simulacra of Human Behavior, which showcases generative agents that exhibit believable human behavior. We discuss using empirical methods to study these systems and the conflicting papers on whether AI models have a worldview and common sense. Joon talks about the importance of context and environment in creating believable agent behavior and shares his team’s work on scaling emergent community behaviors. He also dives into the importance of a long-term memory module in agents and the use of knowledge graphs in retrieving associative information (a minimal retrieval-scoring sketch follows this entry). The goal, Joon explains, is to create systems that people can enjoy and that empower them, solving existing problems and challenges in the traditional HCI and AI fields.


2 Listeners
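To make the memory-retrieval idea above concrete, here is a minimal sketch in the spirit of the Generative Agents paper: each stored memory is scored by a combination of recency, importance, and relevance to the current situation, and the top-scoring memories are surfaced to the agent. The `Memory` class, the equal weighting, and the toy cosine-similarity embeddings are illustrative assumptions, not the paper's implementation.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float            # e.g. 1-10, assigned when the memory is stored
    embedding: list              # embedding vector for the memory text (stubbed)
    last_access: float = field(default_factory=time.time)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories, query_embedding, k=3, decay=0.995):
    """Return the k memories with the highest combined score.

    Score = recency + importance + relevance (equal weights, purely for
    illustration); recency decays exponentially with hours since the
    memory was last accessed.
    """
    now = time.time()
    scored = []
    for m in memories:
        hours = (now - m.last_access) / 3600.0
        recency = decay ** hours
        relevance = cosine(m.embedding, query_embedding)
        importance = m.importance / 10.0   # normalize to [0, 1]
        scored.append((recency + importance + relevance, m))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for _, m in scored[:k]]
```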
Runway Gen-2: Generative AI for Video Creation with Anastasis Germanidis - #622
03/27/23 • 48 min
Today we’re joined by Anastasis Germanidis, Co-Founder and CTO of RunwayML. Amongst all the product and model releases over the past few months, Runway threw its hat into the ring with Gen-1, a model that can take still images or video and transform them into completely stylized videos. They followed that up just a few weeks later with the release of Gen-2, a multimodal model that can produce a video from text prompts. We had the pleasure of chatting with Anastasis about both models, exploring the challenges of generating video, the importance of alignment in model deployment, the potential use of RLHF, the deployment of models as APIs, and much more!
The complete show notes for this episode can be found at twimlai.com/go/622.

1 Listener
Reinforcement Learning for Personalization at Spotify with Tony Jebara - #609
12/29/22 • 41 min
Today we continue our NeurIPS 2022 series joined by Tony Jebara, VP of engineering and head of machine learning at Spotify. In our conversation with Tony, we discuss his role at Spotify, how the company’s use of machine learning has evolved over the last few years, and the business value that machine learning, and recommendations in particular, holds for the company.
We dig into his talk on the intersection of reinforcement learning and lifetime value (LTV) at Spotify, which explores the application of offline RL for user experience personalization. We discuss the various papers presented in the talk and how they all map toward determining and increasing a user’s LTV (a minimal sketch of the discounted-LTV objective follows this entry).
The complete show notes for this episode can be found at twimlai.com/go/609.

1 Listener
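As a companion to the LTV discussion above, here is a minimal sketch of the quantity an offline RL policy would be optimizing: lifetime value as a discounted sum of per-session rewards, estimated from logged user trajectories. The per-session reward (minutes listened) and the discount factor are illustrative assumptions, not Spotify's actual formulation.

```python
def discounted_ltv(rewards, gamma=0.95):
    """Discounted lifetime value of a single logged user trajectory.

    `rewards` is a list of per-session rewards (e.g. minutes listened);
    both the reward definition and gamma are illustrative assumptions.
    """
    return sum(r * gamma ** t for t, r in enumerate(rewards))

def average_ltv(trajectories, gamma=0.95):
    """Monte Carlo estimate of expected LTV over a set of logged users."""
    values = [discounted_ltv(r, gamma) for r in trajectories]
    return sum(values) / len(values)

# Three hypothetical users, each a list of per-session listening minutes.
users = [[30, 45, 10], [5, 0, 0], [60, 55, 70, 40]]
print(average_ltv(users))  # the baseline a new policy would try to raise
```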
AI Trends 2023: Causality and the Impact on Large Language Models with Robert Osazuwa Ness - #616
02/14/23 • 82 min
Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, to break down the latest trends in the world of causal modeling. In our conversation with Robert, we explore advances in areas like causal discovery, causal representation learning, and causal judgements. We also discuss the impact causality could have on large language models, especially in some of the recent use cases we’ve seen like Bing Search and ChatGPT. Finally, we discuss the benchmarks for causal modeling, the top causality use cases, and the most exciting opportunities in the field.
The complete show notes for this episode can be found at twimlai.com/go/616.

1 Listener
Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova - #620
03/13/23 • 45 min
Today we’re joined by Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence. In our conversation with Anna, we discuss her recent paper Dissociating language and thought in large language models: a cognitive perspective. In the paper, Anna reviews the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. We explore parallels between linguistic competence and AGI, the need to identify new benchmarks for these models, whether an end-to-end trained LLM can address various aspects of functional competence, and much more!
The complete show notes for this episode can be found at twimlai.com/go/620.

1 Listener
1 Comment
Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein - #621
03/20/23 • 51 min
Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization and has previously been featured in the New Yorker for his work on invisibility cloaks, clothing that can evade object detection. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and the different ways a watermark could be deployed, as well as the political and economic incentive structures around the adoption of watermarking and future directions for that line of work (a sketch of the green-list watermarking idea follows this entry). We also discuss Tom’s research into data leakage, particularly in stable diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction.

1 Listener
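As a concrete companion to the watermarking discussion above, here is a minimal sketch of the "green list" idea from Goldstein and collaborators' published work on LLM watermarking: a hash of the previous token deterministically partitions the vocabulary into green and red lists, green tokens get a small logit boost during generation, and a detector later checks whether a text's green-token fraction is improbably high. The toy vocabulary size, hash, and parameter values below are illustrative assumptions, not the released implementation.

```python
import hashlib
import random

VOCAB_SIZE = 50_000
GREEN_FRACTION = 0.5   # fraction of the vocabulary marked "green" at each step
DELTA = 2.0            # logit boost for green tokens (illustrative value)

def green_list(prev_token_id):
    """Deterministically partition the vocabulary using the previous token."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(GREEN_FRACTION * VOCAB_SIZE)])

def watermark_logits(logits, prev_token_id):
    """Boost the logits of green-listed tokens before sampling the next token."""
    greens = green_list(prev_token_id)
    return [x + DELTA if i in greens else x for i, x in enumerate(logits)]

def green_fraction(token_ids):
    """Detector statistic: share of tokens that land in their green list."""
    hits = sum(
        1 for prev, cur in zip(token_ids, token_ids[1:]) if cur in green_list(prev)
    )
    return hits / max(len(token_ids) - 1, 1)

# Unwatermarked text hovers near GREEN_FRACTION; watermarked text sits well above it.
```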
AI Trends 2023: Natural Language Processing - ChatGPT, GPT-4 and Cutting Edge Research with Sameer Singh - #613
01/23/23 • 105 min
Today we continue our AI Trends 2023 series joined by Sameer Singh, an associate professor in the department of computer science at UC Irvine and a fellow at the Allen Institute for Artificial Intelligence (AI2). In our conversation with Sameer, we focus on the latest advancements in the field of NLP, starting with the one that took the internet by storm just a few short weeks ago: ChatGPT. We explore top themes like decomposed reasoning, causal modeling in NLP, and the need for “clean” data, and we discuss projects like HuggingFace’s BLOOM, the debacle that was the Galactica demo, the impending intersection of LLMs and search, and use cases like Copilot. Of course, we also get Sameer’s predictions for what will happen this year in the field.
The complete show notes for this episode can be found at twimlai.com/go/613.

1 Listener
Service Cards and ML Governance with Michael Kearns - #610
01/02/23 • 39 min
Today we conclude our AWS re:Invent 2022 series joined by Michael Kearns, a professor in the department of computer and information science at UPenn, as well as an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance, and his role at Amazon. We then discuss the announcement of service cards, Amazon’s take on “model cards” applied at a holistic, system level rather than at the level of an individual model. We walk through the information represented on the cards and explore the decision-making process around which information was omitted from them. We also get Michael’s take on the years-old debate of algorithmic bias vs. dataset bias, what some of the current issues are around this topic, and what research he has seen (and hopes to see) addressing issues of “fairness” in large language models.
The complete show notes for this episode can be found at twimlai.com/go/610.

1 Listener
Data-Centric Zero-Shot Learning for Precision Agriculture with Dimitris Zermas - #615
02/06/23 • 32 min
Today we’re joined by Dimitris Zermas, a principal scientist at agriscience company Sentera. Dimitris’ work at Sentera is focused on developing tools for precision agriculture using machine learning, including hardware like cameras and sensors as well as ML models for analyzing the vast amounts of data they acquire. We explore some specific use cases for machine learning, including plant counting, the challenges of working with classical computer vision techniques, database management, and data annotation. We also discuss their use of approaches like zero-shot learning, and how they’ve taken advantage of a data-centric mindset to build a better, more cost-efficient product (a hypothetical zero-shot labeling sketch follows this entry).

1 Listener
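The episode doesn't specify which models Sentera uses, so as a purely hypothetical illustration of zero-shot labeling in an agricultural setting, here is the standard CLIP zero-shot classification pattern from the Hugging Face transformers library: the image is compared against free-text class descriptions with no task-specific training. The checkpoint name is a real public model, but the class prompts and image filename are made up.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical example: label a field-camera image without task-specific training.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a corn seedling", "a weed", "bare soil"]   # made-up class prompts
image = Image.open("plot_00421.jpg")                  # hypothetical image file

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)       # image-text similarity -> probabilities
print(dict(zip(labels, probs[0].tolist())))
```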
Robotic Dexterity and Collaboration with Monroe Kennedy III - #619
03/06/23 • 52 min
Today we’re joined by Monroe Kennedy III, an assistant professor at Stanford, director of the Assistive Robotics and Manipulation Lab, and a national director of Black in Robotics. In our conversation with Monroe, we spend some time exploring the robotics landscape, getting Monroe’s thoughts on the current challenges in the field as well as his opinion on choreographed demonstrations like the dancing Boston Dynamics machines. We also dig into his work along two distinct threads: robotic dexterity (what does it take to make robots capable of performing useful manipulation tasks with and for humans?) and collaborative robotics (how do we go beyond advanced autonomy in robots toward making effective robotic teammates capable of working with human counterparts?). Finally, we discuss DenseTact, an optical-tactile sensor that captures an image of the deformed surface of a soft fingertip and feeds it to a neural network to perform calibrated shape reconstruction and 6-axis wrench estimation (a toy wrench-regression sketch follows this entry).
The complete show notes for this episode can be found at twimlai.com/go/619.

1 Listener
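To illustrate the kind of mapping DenseTact performs (tactile image in, 6-axis wrench out), here is a toy convolutional regressor. It is a hypothetical stand-in with arbitrary layer sizes, not the published DenseTact architecture or its calibration procedure.

```python
import torch
import torch.nn as nn

class TactileWrenchNet(nn.Module):
    """Toy CNN mapping a tactile fingertip image to a 6-axis wrench
    (Fx, Fy, Fz, Tx, Ty, Tz). A stand-in for the kind of model described,
    not the published DenseTact network."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)  # 3 force + 3 torque components

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One fake 128x128 RGB tactile frame -> predicted wrench vector.
wrench = TactileWrenchNet()(torch.rand(1, 3, 128, 128))
print(wrench.shape)  # torch.Size([1, 6])
```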
FAQ
How many episodes does The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) have?
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) currently has 676 episodes available.
What topics does The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) cover?
The podcast is about News, Tech News, Podcasts and Technology.
What is the most popular episode on The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)?
The most popular episode is 'Modeling Human Behavior with Generative Agents with Joon Sung Park - #632'.
What is the average episode length on The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)?
The average episode length on The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) is 45 minutes.
How often are episodes of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) released?
Episodes of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) are typically released every 3 days, 19 hours.
When was the first episode of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)?
The first episode of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) was released on May 21, 2016.
Comments
Rated 5.0 out of 5 (1 rating)