
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
07/21/23 • 42 min
Deep Papers is a podcast series featuring deep dives on today’s seminal AI papers and research. Hosted by AI Pub creator Brian Burns and Arize AI founders Jason Lopatecki and Aparna Dhinakaran, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.
In this episode, we talk about Orca. Recent research has focused on improving smaller models through imitation learning using outputs from large foundation models (LFMs). Challenges include limited imitation signals, homogeneous training data, and a lack of rigorous evaluation, which leads to overestimating the capabilities of small models.
To address these challenges, Orca is a 13-billion-parameter model that learns to imitate the reasoning process of LFMs. Orca leverages rich signals from GPT-4, such as explanation traces and step-by-step thought processes, and surpasses conventional state-of-the-art instruction-tuned models by more than 100% on complex zero-shot reasoning benchmarks. It also shows competitive performance on professional and academic exams without chain-of-thought (CoT) prompting. Learning from step-by-step explanations, whether generated by humans or more advanced AI models, enhances model capabilities and skills.
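To make the idea of learning from explanation traces concrete, here is a rough sketch (our illustration, not the paper's actual data pipeline) of what a single explanation-tuning record might look like: a system instruction that asks for step-by-step reasoning, the user's query, and the teacher model's full explanation as the imitation target.

```python
# Illustrative sketch only (not the paper's actual schema): an Orca-style
# "explanation tuning" record pairs a system instruction that asks for
# step-by-step reasoning with a user query and the teacher model's full
# explanation trace, rather than just its final answer.

import json

def make_explanation_record(system_instruction: str, user_query: str,
                            teacher_explanation: str) -> dict:
    """Bundle one (system, query, explanation) triple as a training example."""
    return {
        "system": system_instruction,      # e.g. "Explain your reasoning step by step."
        "user": user_query,                # the task or question
        "response": teacher_explanation,   # the teacher's full reasoning trace, the imitation target
    }

record = make_explanation_record(
    "You are a helpful assistant. Think step by step and justify your answer.",
    "If a train travels 120 km in 2 hours, what is its average speed?",
    "Average speed is distance divided by time: 120 km / 2 h = 60 km/h. The answer is 60 km/h.",
)
print(json.dumps(record, indent=2))
```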
Full transcript and more here: https://arize.com/blog/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4-paper-reading/
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
Previous Episode

Toolformer: Training LLMs To Use Tools
Deep Papers is a podcast series featuring deep dives on today’s seminal AI papers and research. Hosted by AI Pub creator Brian Burns and Arize AI founders Jason Lopatecki and Aparna Dhinakaran, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.
In this episode, we interview Timo Schick and Thomas Scialom, the Research Scientists at Meta AI behind Toolformer. "Vanilla" language models cannot access information about the external world. But what if we gave language models access to calculators, question-answer search, and other APIs to generate more powerful and accurate output? Further, how do we train such a model? How can we automatically generate a dataset of API-call-annotated text at internet scale, without human labeling?
Timo and Thomas give a step-by-step walkthrough of building and training Toolformer, what motivated them to do it, and what we should expect in the next generation of tool-LLM powered products.
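To illustrate the idea of API-call-annotated text, here is a minimal sketch (our simplification, not Meta's actual pipeline) of the inline tool-call format Toolformer learns from: calls are spliced into ordinary text as bracketed markers and then executed so the model can condition on their results.

```python
# Minimal sketch (not Meta's actual pipeline) of API-call-annotated text:
# tool calls appear inline as "[Tool(args) -> result]" tokens, so the model
# learns when to call a tool and how to use its output while still
# predicting ordinary text.

import re

def calculator(expression: str) -> str:
    """Toy calculator tool; a real system would use a safe expression parser."""
    return str(round(eval(expression, {"__builtins__": {}}), 2))

TOOLS = {"Calculator": calculator}

def execute_annotations(text: str) -> str:
    """Replace '[Tool(args)]' markers with '[Tool(args) -> result]'."""
    def run(match: re.Match) -> str:
        tool, args = match.group(1), match.group(2)
        result = TOOLS[tool](args)
        return f"[{tool}({args}) -> {result}]"
    return re.sub(r"\[(\w+)\(([^)]*)\)\]", run, text)

sample = "Out of 1400 participants, 400 [Calculator(400 / 1400)] passed the test."
print(execute_annotations(sample))
# Out of 1400 participants, 400 [Calculator(400 / 1400) -> 0.29] passed the test.
```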
Follow AI__Pub on Twitter.
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
Next Episode

Lost in the Middle: How Language Models Use Long Contexts
Deep Papers is a podcast series featuring deep dives on today’s seminal AI papers and research. Each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning. This episode is led by Sally-Ann DeLucia and Amber Roberts, as they discuss the paper "Lost in the Middle: How Language Models Use Long Contexts."
This paper examines how well language models utilize longer input contexts. The study focuses on multi-document question answering and key-value retrieval tasks. The researchers find that performance is highest when relevant information appears at the beginning or end of the context, and that accessing information in the middle of long contexts leads to significant performance degradation. Even models explicitly designed for long contexts show decreased performance as context length increases. The analysis improves our understanding of how language models use their input context and offers new evaluation protocols for future long-context models.
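As a rough illustration of the experimental setup (an assumed harness, not the authors' code), the sketch below varies where the answer-bearing document sits among distractors and measures accuracy at each position.

```python
# Assumed harness shape for a "Lost in the Middle"-style experiment (not the
# authors' code): place the single document containing the answer at different
# positions among distractor documents and measure answer accuracy as a
# function of that position.

from typing import Callable, List

def build_context(gold_doc: str, distractors: List[str], gold_position: int) -> str:
    """Insert the answer-bearing document at `gold_position` among distractors."""
    docs = distractors[:gold_position] + [gold_doc] + distractors[gold_position:]
    return "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))

def accuracy_by_position(ask_model: Callable[[str, str], str],
                         question: str, answer: str,
                         gold_doc: str, distractors: List[str]) -> List[float]:
    """Score (0/1 here, for a single question) at each possible gold position."""
    scores = []
    for pos in range(len(distractors) + 1):
        context = build_context(gold_doc, distractors, pos)
        prediction = ask_model(context, question)   # placeholder for an LLM call
        scores.append(float(answer.lower() in prediction.lower()))
    return scores
```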
Full transcript and more here: https://arize.com/blog/lost-in-the-middle-how-language-models-use-long-contexts-paper-reading/
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.