The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington

1 Creator

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. It is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

6 Listeners


Top 10 The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) Episodes

Goodpods has curated a list of the 10 best The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) episode by adding your comments to the episode page.

Modeling Human Behavior with Generative Agents with Joon Sung Park - #632

06/05/23 • 46 min

Today we’re joined by Joon Sung Park, a PhD student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems and his work on the recent paper Generative Agents: Interactive Simulacra of Human Behavior, which showcases generative agents that exhibit believable human behavior. We discuss using empirical methods to study these systems and the conflicting papers on whether AI models have a worldview and common sense. Joon talks about the importance of context and environment in creating believable agent behavior and shares his team's work on scaling emerging community behaviors. He also dives into the importance of a long-term memory module in agents and the use of knowledge graphs in retrieving associative information. The goal, Joon explains, is to create something people can enjoy and that empowers them, solving existing problems and challenges in the traditional HCI and AI fields.

2 Listeners

AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli - #670

02/05/24 • 70 min

Today we’re joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of the abstract reasoning abilities of large language models (LLMs). Kamyar shares his insights on how LLMs are pushing RL performance forward in a variety of applications, such as ALOHA, a robot that can learn to fold clothes, and Voyager, an RL agent that uses GPT-4 to outperform prior systems at playing Minecraft. We also explore the progress being made in assessing and addressing the risks of RL-based decision-making in domains such as finance, healthcare, and agriculture. Finally, we discuss the future of deep reinforcement learning, Kamyar’s top predictions for the field, and how greater compute capabilities will be critical in achieving general intelligence.

The complete show notes for this episode can be found at twimlai.com/go/670.

2 Listeners

ML Models for Safety-Critical Systems with Lucas García - #705

10/14/24 • 76 min

Today, we're joined by Lucas García, principal product manager for deep learning at MathWorks, to discuss incorporating ML models into safety-critical systems. We begin by exploring the critical role of verification and validation (V&V) in these applications. We review the popular V-model for engineering critical systems and then dig into the “W” adaptation that’s been proposed for incorporating ML models. Next, we discuss the complexities of applying deep learning neural networks in safety-critical applications using the aviation industry as an example, and talk through the importance of factors such as data quality, model stability, robustness, interpretability, and accuracy. We also explore formal verification methods, abstract transformer layers, transformer-based architectures, and the application of various software testing techniques. Lucas also introduces the field of constrained deep learning and convex neural networks, along with their benefits and trade-offs.

The complete show notes for this episode can be found at https://twimlai.com/go/705.

2 Listeners

AI Trends 2024: Machine Learning & Deep Learning with Thomas Dietterich - #666

01/08/24 • 65 min

Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs. Lastly, don’t miss Tom’s predictions on what he foresees happening this year as well as his words of encouragement for those new to the field.

The complete show notes for this episode can be found at twimlai.com/go/666.

1 Listener

Open Source Generative AI at Hugging Face with Jeff Boudier - #624

04/11/23 • 33 min

Today we’re joined by Jeff Boudier, head of product at Hugging Face 🤗. In our conversation with Jeff, we explore the current landscape of open-source machine learning tools and models, the recent shift towards consumer-focused releases, and the importance of making ML tools accessible. We also discuss the growth of the Hugging Face Hub, which currently hosts over 150k models, and how formalizing their collaboration with AWS will help drive the adoption of open-source models in the enterprise.

The complete show notes for this episode can be found at twimlai.com/go/624.

1 Listener

Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos - #627

05/01/23 • 33 min

Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more.

The complete show notes for this episode can be found at https://twimlai.com/go/627.

1 Listener

Generative AI at the Edge with Vinesh Sukumar - #623

04/03/23 • 39 min

Today we’re joined by Vinesh Sukumar, a senior director and head of AI/ML product management at Qualcomm Technologies. In our conversation with Vinesh, we explore how mobile and automotive devices have different requirements for AI models and how their AI stack helps developers create complex models on both platforms. We also discuss the growing interest in text-based input and the shift towards transformers, generative content, and recommendation engines. Additionally, we explore the challenges and opportunities for ML Ops investments on the edge, including the use of synthetic data and evolving models based on user data. Finally, we delve into the latest advancements in large language models, including Prometheus-style models and GPT-4.

The complete show notes for this episode can be found at twimlai.com/go/623.

1 Listener

Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - #678

04/01/24 • 48 min

Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks.

The complete show notes for this episode can be found at twimlai.com/go/678.

1 Listener

AI Trends 2023: Causality and the Impact on Large Language Models with Robert Osazuwa Ness - #616

02/14/23 • 82 min

Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, to break down the latest trends in the world of causal modeling. In our conversation with Robert, we explore advances in areas like causal discovery, causal representation learning, and causal judgements. We also discuss the impact causality could have on large language models, especially in some of the recent use cases we’ve seen like Bing Search and ChatGPT. Finally, we discuss the benchmarks for causal modeling, the top causality use cases, and the most exciting opportunities in the field.

The complete show notes for this episode can be found at twimlai.com/go/616.

1 Listener

Responsible AI in the Generative Era with Michael Kearns - #662

12/22/23 • 36 min

Today we’re joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael’s learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, privacy, hallucinations, RLHF, and LLM evaluation benchmarks. We also touch on Clean Rooms ML, a secure environment that balances access to private datasets with differential privacy protections, offering a new approach to secure data handling in machine learning.

The complete show notes for this episode can be found at twimlai.com/go/662.

1 Listener


FAQ

How many episodes does The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) have?

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) currently has 729 episodes available.

What topics does The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) cover?

The podcast covers News, Tech News, Podcasts, and Technology.

What is the most popular episode on The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)?

The episode title 'Modeling Human Behavior with Generative Agents with Joon Sung Park - #632' is the most popular.

What is the average episode length on The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)?

The average episode length on The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) is 45 minutes.

How often are episodes of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) released?

Episodes of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) are typically released every 3 days, 22 hours.

When was the first episode of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)?

The first episode of The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) was released on May 21, 2016.
