
LLM Interpretability and Sparse Autoencoders: Research from OpenAI and Anthropic

Deep Papers

06/14/24 • 44 min


It’s been an exciting couple of weeks for GenAI! Join us as we discuss the latest research from OpenAI and Anthropic: two recent papers that both focus on the sparse autoencoder, an unsupervised approach for extracting interpretable features from an LLM. This work is a significant step forward in understanding how LLMs work and in probing the neural activity of language models. In "Extracting Concepts from GPT-4," OpenAI researchers propose using k-sparse autoencoders to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. In "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet," researchers at Anthropic show that scaling laws can be used to guide the training of sparse autoencoders, among other findings.
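For readers curious what "directly controlling sparsity" means in practice, here is a minimal PyTorch sketch of a TopK (k-sparse) autoencoder in the spirit of the OpenAI paper: instead of encouraging sparsity with an L1 penalty, only the k largest latent activations are kept. The dimensions, the value of k, and all names below are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a k-sparse (TopK) autoencoder for LLM activations.
# Hyperparameters (d_model, d_hidden, k) are illustrative assumptions.
import torch
import torch.nn as nn


class KSparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, k: int):
        super().__init__()
        self.k = k                                    # number of latents kept active
        self.pre_bias = nn.Parameter(torch.zeros(d_model))
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor):
        # Encode the activation vector, then zero out everything except the
        # top-k latents; sparsity is enforced directly rather than via a penalty.
        latents = self.encoder(x - self.pre_bias)
        topk = torch.topk(latents, self.k, dim=-1)
        sparse = torch.zeros_like(latents).scatter_(-1, topk.indices, topk.values)
        recon = self.decoder(sparse) + self.pre_bias
        return recon, sparse


# Usage: reconstruct a batch of hidden states with a plain MSE objective.
sae = KSparseAutoencoder(d_model=768, d_hidden=768 * 16, k=32)
x = torch.randn(8, 768)                               # stand-in for LLM activations
recon, sparse = sae(x)
loss = torch.nn.functional.mse_loss(recon, x)
```

The appeal of this formulation, as discussed in the episode, is that k sets the sparsity level directly, which simplifies tuning compared with balancing a reconstruction term against an L1 coefficient.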

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

