
Deep Papers

Arize AI

Deep Papers is a podcast series featuring deep dives on today’s most important AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.

Top 10 Deep Papers Episodes

Goodpods has curated a list of the 10 best Deep Papers episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Deep Papers for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Deep Papers episode by adding your comments to the episode page.

Deep Papers is a podcast series featuring deep dives on today’s seminal AI papers and research. Hosted by AI Pub creator Brian Burns and Arize AI founders Jason Lopatecki and Aparna Dhinakaran, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.
In this episode, we talk about Orca. Recent research has focused on improving smaller models through imitation learning, using outputs from large foundation models (LFMs). Challenges include limited imitation signals, homogeneous training data, and a lack of rigorous evaluation, which lead to an overestimation of small models' capabilities.
To address this, the authors introduce Orca, a 13-billion-parameter model that learns to imitate the reasoning process of LFMs. Orca leverages rich signals from GPT-4 and surpasses state-of-the-art models by over 100% on complex zero-shot reasoning benchmarks. It also shows competitive performance on professional and academic exams without chain-of-thought (CoT) prompting. Learning from step-by-step explanations, whether generated by humans or by more advanced AI models, enhances model capabilities and skills.
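As a rough, hypothetical sketch of the explanation-tuning idea above (not code from the Orca paper), here is one way to assemble training examples that pair a query with a teacher model's step-by-step explanation trace for supervised fine-tuning of a smaller student model; the `teacher_explain` helper and the field names are illustrative placeholders.

```python
# Hypothetical sketch of assembling "explanation tuning" data in the spirit of Orca:
# a small student model is fine-tuned on step-by-step explanation traces produced
# by a larger teacher model (e.g., GPT-4). Names and helpers here are placeholders.

def teacher_explain(system_prompt: str, user_query: str) -> str:
    """Stand-in for a call to a large teacher model that returns a detailed,
    step-by-step explanation. Replace with a real API call."""
    return "Step 1: ... Step 2: ... Therefore, the answer is ..."

def build_explanation_tuning_examples(queries, system_prompt="Think step by step and justify your answer."):
    examples = []
    for q in queries:
        trace = teacher_explain(system_prompt, q)
        examples.append({
            "system": system_prompt,   # rich system instruction elicits reasoning
            "instruction": q,          # original task or query
            "response": trace,         # teacher's explanation trace is the target
        })
    return examples

if __name__ == "__main__":
    data = build_explanation_tuning_examples(
        ["If a train travels 60 km in 45 minutes, what is its average speed in km/h?"]
    )
    print(data[0]["response"][:80])
```

A student model would then be fine-tuned with a standard language-modeling loss on the response field, conditioned on the system prompt and instruction.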
Full transcript and more here: https://arize.com/blog/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4-paper-reading/

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

This episode is led by Aparna Dhinakaran (Chief Product Officer, Arize AI) and Michael Schiff (Chief Technology Officer, Arize AI), as they discuss the paper "Llama 2: Open Foundation and Fine-Tuned Chat Models."
The paper introduces Llama 2, a collection of pretrained and fine-tuned large language models ranging from 7 billion to 70 billion parameters. The fine-tuned variant, Llama 2-Chat, is designed specifically for dialogue use cases and shows strong performance on a variety of benchmarks. In human evaluations for helpfulness and safety, Llama 2-Chat emerges as a promising alternative to closed-source models. The hosts cover the authors' approach to fine-tuning and safety improvements, which aims to foster responsible development in this rapidly evolving field.
Full transcript and more here: https://arize.com/blog/llama-2-open-foundation-and-fine-tuned-chat-models-paper-reading/

Follow AI__Pub on Twitter. To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

This week, we discuss the implications of Text-to-Video Generation and speculate as to the possibilities (and limitations) of this incredible technology with some hot takes. Dat Ngo, ML Solutions Engineer at Arize, is joined by community member and AI Engineer Vibhu Sapra to review OpenAI’s technical report on their Text-To-Video Generation Model: Sora.
According to OpenAI, “Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.” At the time of this recording, the model had not been widely released yet, but was becoming available to red teamers to assess risk, and also to artists to receive feedback on how Sora could be helpful for creatives.

At the end of our discussion, we also explore EvalCrafter: Benchmarking and Evaluating Large Video Generation Models. This recent paper proposes a new framework and pipeline for exhaustively evaluating the performance of generated videos, which we consider in light of Sora.

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

This week we explore ReAct, an approach that enhances the reasoning and decision-making capabilities of LLMs by combining step-by-step reasoning with the ability to take actions and gather information from external sources in a unified framework.
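For readers who want a concrete picture, here is a loose sketch of such a reasoning-and-acting loop (not the paper's reference implementation); the `call_llm` stub and the toy `wiki_lookup` tool are hypothetical placeholders for a real model client and real tools.

```python
# Minimal ReAct-style loop sketch: the model alternates Thought / Action steps,
# and observations from tools are appended back into the running transcript.
# `call_llm` and `wiki_lookup` are hypothetical stand-ins, not a real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a language model call that continues the ReAct transcript."""
    return "Thought: I should look this up.\nAction: wiki_lookup[ReAct]"

def wiki_lookup(query: str) -> str:
    """Toy external tool; a real agent might call a search or Wikipedia API."""
    return f"Stub article text about {query}."

TOOLS = {"wiki_lookup": wiki_lookup}

def react_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:                      # model decided to stop
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:                            # parse e.g. wiki_lookup[ReAct]
            action = step.split("Action:", 1)[1].strip()
            name, _, arg = action.partition("[")
            observation = TOOLS.get(name.strip(), lambda a: "Unknown tool")(arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"
    return "No answer within step budget."

print(react_agent("What does the ReAct framework combine?"))
```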

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

In this first episode, we’re joined by Long Ouyang and Ryan Lowe, research scientists at OpenAI and creators of InstructGPT. InstructGPT was one of the first major applications of Reinforcement Learning with Human Feedback to train large language models, and is the precursor to the now-famous ChatGPT. Listen to learn about the major ideas behind InstructGPT and the future of aligning language models to human intention.

Read OpenAI's InstructGPT paper here: https://openai.com/blog/instruction-following/

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

This episode is led by Sally-Ann DeLucia and Amber Roberts, as they discuss the paper "Lost in the Middle: How Language Models Use Long Contexts."
This paper examines how well language models utilize longer input contexts. The study focuses on multi-document question answering and key-value retrieval tasks. The researchers find that performance is highest when relevant information is at the beginning or end of the context. Accessing information in the middle of long contexts leads to significant performance degradation. Even explicitly long-context models experience decreased performance as the context length increases. The analysis enhances our understanding and offers new evaluation protocols for future long-context models.
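To make the evaluation setup concrete, here is a rough, hypothetical sketch (not the authors' code) of how one might vary the position of the relevant document within a multi-document QA prompt; the documents, the question, and the `call_llm` stub are illustrative placeholders.

```python
# Sketch of a "lost in the middle" style probe: place the one relevant document
# at different positions among distractors and compare answer accuracy by position.
# Documents and the call_llm stub are illustrative placeholders.

def call_llm(prompt: str) -> str:
    """Stand-in for a language model call; replace with a real client."""
    return "Paris"

def build_prompt(question: str, relevant_doc: str, distractors: list[str], position: int) -> str:
    docs = distractors[:position] + [relevant_doc] + distractors[position:]
    numbered = "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
    return f"{numbered}\n\nQuestion: {question}\nAnswer:"

question = "What is the capital of France?"
relevant = "France's capital city is Paris."
distractors = [f"Unrelated filler document #{i}." for i in range(9)]

for position in (0, 5, 9):  # beginning, middle, and end of the 10-document context
    prompt = build_prompt(question, relevant, distractors, position)
    answer = call_llm(prompt)
    correct = "paris" in answer.lower()
    print(f"relevant doc at index {position}: correct={correct}")
```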
Full transcript and more here: https://arize.com/blog/lost-in-the-middle-how-language-models-use-long-contexts-paper-reading/

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

Deep Papers - Exploring OpenAI's o1-preview and o1-mini

09/27/24 • 42 min

OpenAI recently released its o1-preview, which they claim outperforms GPT-4o on a number of benchmarks. These models are designed to think more before answering and to handle complex tasks, especially science and math questions, better than OpenAI's previous models.
We take a closer look at the latest crop of o1 models, and we also highlight some research our team did to see how they stack up against Claude Sonnet 3.5 on a real-world use case.
Read it on our blog: https://arize.com/blog/exploring-openai-o1-preview-and-o1-mini

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

A recent announcement on X boasted a tuned model with outstanding performance and claimed these results were achieved through Reflection Tuning. However, people were unable to reproduce the results. We dive into the recent drama in the AI community as a jumping-off point for a discussion of Reflection 70B.
The new model (Reflection 70B) draws on concepts from a 2023 paper about reflection tuning. Reflection tuning is an optimization technique in which models learn to improve their decision-making processes by “reflecting” on past actions or predictions. This method enables models to iteratively refine their performance by analyzing mistakes and successes, improving both accuracy and adaptability over time. By incorporating a feedback loop, reflection tuning can address model weaknesses more dynamically, helping AI systems become more robust in real-world applications where uncertainty or changing environments are prevalent.
Dat Ngo (AI Solutions Architect at Arize) talks to Rohan Pandey (Founding Engineer at Reworkd) about Reflection 70B, reflection tuning, the recent drama, and the importance of double-checking your research.
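As a rough, hypothetical sketch of the reflect-and-revise loop described above (not the Reflection 70B training recipe), the snippet below shows one way a model's draft answer can be critiqued and revised before a final response is returned; `call_llm` is a placeholder for a real model client.

```python
# Sketch of a single reflection step: draft -> self-critique -> revision.
# In a reflection-tuning setting, (prompt, critique, revision) traces like these
# could also be collected as fine-tuning data. `call_llm` is a placeholder.

def call_llm(prompt: str) -> str:
    """Stand-in for a language model call; replace with a real client."""
    return "(model output)"

def answer_with_reflection(question: str, rounds: int = 1) -> str:
    draft = call_llm(f"Question: {question}\nAnswer step by step.")
    for _ in range(rounds):
        critique = call_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any mistakes or missing steps in the draft."
        )
        draft = call_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Critique: {critique}\nWrite an improved final answer."
        )
    return draft

print(answer_with_reflection("What is 17 * 24?"))
```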

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

This week’s paper presents a comprehensive study of the performance of various LLMs acting as judges. The researchers leverage TriviaQA as a benchmark for assessing objective knowledge reasoning of LLMs and evaluate them alongside human annotations which they find to have a high inter-annotator agreement. The study includes nine judge models and nine exam-taker models – both base and instruction-tuned. They assess the judge models’ alignment across different model sizes, families, and judge prompts to answer questions about the strengths and weaknesses of this paradigm, and what potential biases it may hold.
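As a loose, hypothetical sketch of the judge/exam-taker setup described above (not the paper's evaluation harness), the snippet below grades an exam-taker model's answers with a judge prompt and then measures agreement with human labels; `call_llm` and the tiny dataset are placeholders.

```python
# Sketch of LLM-as-a-judge evaluation: a judge model grades an exam-taker model's
# answers against reference answers, and judge verdicts are compared with human labels.
# `call_llm` and the tiny dataset are illustrative placeholders.

def call_llm(prompt: str) -> str:
    """Stand-in for a judge-model call; replace with a real client."""
    return "CORRECT"

dataset = [  # (question, reference answer, exam-taker answer, human label)
    ("Who wrote 'Dubliners'?", "James Joyce", "James Joyce", True),
    ("What is the capital of Australia?", "Canberra", "Sydney", False),
]

judge_verdicts, human_labels = [], []
for question, reference, candidate, human_label in dataset:
    prompt = (
        "You are grading a trivia answer.\n"
        f"Question: {question}\nReference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Reply with exactly CORRECT or INCORRECT."
    )
    verdict = call_llm(prompt).strip().upper() == "CORRECT"
    judge_verdicts.append(verdict)
    human_labels.append(human_label)

agreement = sum(j == h for j, h in zip(judge_verdicts, human_labels)) / len(dataset)
print(f"Judge-human agreement: {agreement:.0%}")
```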

Read it on the blog: https://arize.com/blog/judging-the-judges-llm-as-a-judge/

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

It’s been an exciting couple of weeks for GenAI! Join us as we discuss the latest research from OpenAI and Anthropic. We’re excited to chat about this significant step forward in understanding how LLMs work and its implications for a deeper understanding of the neural activity of language models.

We take a closer look at recent research from both OpenAI and Anthropic. These two papers both focus on the sparse autoencoder, an unsupervised approach for extracting interpretable features from an LLM. In “Extracting Concepts from GPT-4,” OpenAI researchers propose using k-sparse autoencoders to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. In “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet,” researchers at Anthropic show, among other findings, that scaling laws can be used to guide the training of sparse autoencoders.
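As a rough, self-contained sketch (not OpenAI's or Anthropic's actual code), here is a minimal k-sparse autoencoder that keeps only the top-k latent activations per example; the layer sizes, the random "activations" tensor, and the training loop are illustrative placeholders.

```python
# Minimal k-sparse autoencoder sketch: keep only the top-k latent activations
# per example, reconstruct the input, and train with a plain reconstruction loss.
# Shapes, sizes, and the random input data are illustrative placeholders.
import torch
import torch.nn as nn

class KSparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_latent: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)
        self.k = k

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))                  # candidate feature activations
        topk = torch.topk(z, self.k, dim=-1)             # indices of the k largest activations
        mask = torch.zeros_like(z).scatter_(-1, topk.indices, 1.0)
        z_sparse = z * mask                              # zero out everything but the top-k
        return self.decoder(z_sparse), z_sparse

# Toy training loop on random tensors standing in for a model's internal activations.
model = KSparseAutoencoder(d_model=64, d_latent=512, k=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
activations = torch.randn(256, 64)
for _ in range(100):
    recon, _ = model(activations)
    loss = nn.functional.mse_loss(recon, activations)    # reconstruction-only objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final reconstruction loss: {loss.item():.4f}")
```

In a real setup, the autoencoder would be trained on activations captured from a language model rather than random data, and the choice of k directly controls how many features fire per example.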

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

FAQ

How many episodes does Deep Papers have?

Deep Papers currently has 33 episodes available.

What topics does Deep Papers cover?

The podcast is about Mathematics, Podcasts, Technology and Science.

What is the most popular episode on Deep Papers?

The episode title 'Llama 2: Open Foundation and Fine-Tuned Chat Models' is the most popular.

What is the average episode length on Deep Papers?

The average episode length on Deep Papers is 42 minutes.

How often are episodes of Deep Papers released?

Episodes of Deep Papers are typically released every 16 days, 18 hours.

When was the first episode of Deep Papers?

The first episode of Deep Papers was released on Jan 18, 2023.
