Towards Monosemanticity: Decomposing Language Models With Dictionary Learning

Deep Papers

11/20/23 • 44 min


In this paper read, we discuss “Towards Monosemanticity: Decomposing Language Models Into Understandable Components,” a paper from Anthropic that addresses the challenge of understanding the inner workings of neural networks, drawing parallels with the complexity of human brain function. It explores the concept of “features” (patterns of neuron activations), which provide a more interpretable way to dissect neural networks. By decomposing a layer of neurons into thousands of features, this approach uncovers hidden model properties that are not evident when examining individual neurons. The features are shown to be more interpretable and consistent than individual neurons, offering the potential to steer model behavior and improve AI safety.
Find the transcript and more here: https://arize.com/blog/decomposing-language-models-with-dictionary-learning-paper-reading/
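The decomposition discussed above is based on dictionary learning over neuron activations, in the style of a sparse autoencoder: activations are reconstructed from a much larger set of sparsely active features. The sketch below is only an illustration of that idea, not code from the paper or episode; the class name, dimensions, learning rate, and sparsity weight are all assumptions chosen for readability.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy sparse autoencoder: decompose activations into many sparse features."""
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # activations -> feature coefficients
        self.decoder = nn.Linear(n_features, d_model)  # feature dictionary (learned directions)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative feature activations
        reconstruction = self.decoder(features)
        return features, reconstruction

# Objective: reconstruct the layer's activations while an L1 penalty keeps
# feature activations sparse, so each feature tends to fire for a narrow pattern.
sae = SparseAutoencoder(d_model=512, n_features=4096)  # illustrative sizes
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)

acts = torch.randn(1024, 512)  # placeholder batch standing in for real MLP activations
features, recon = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()

In this setup, each decoder column acts as a dictionary element, and examining which inputs activate a given feature is what makes the decomposition interpretable.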

Learn more about AI observability and evaluation in our course, join the Arize AI Slack community, or get the latest on LinkedIn and X.
