
19 - Mechanistic Interpretability with Neel Nanda

AXRP - the AI X-risk Research Podcast

Release Date: February 4, 2023

38.1 - Alan Chan on Agent Infrastructure

AXRP - the AI X-risk Research Podcast

Road lines, street lights, and licence plates are examples of infrastructure used to ensure that roads operate smoothly. In this episode, Alan Chan talks about using similar interventions to help avoid bad outcomes from the deployment of AI agents. Patreon: Ko-fi: The transcript: FAR.AI: FAR.AI on X (aka Twitter):  FAR.AI on YouTube: The Alignment Workshop:   Topics we discuss, and timestamps: 01:02 - How the Alignment Workshop is 01:32 - Agent infrastructure 04:57 - Why agent infrastructure 07:54 - A trichotomy of agent infrastructure 13:59 - Agent IDs 18:17 - Agent channels...

38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems

AXRP - the AI X-risk Research Podcast

Do language models understand the causal structure of the world, or do they merely note correlations? And what happens when you build a big AI society out of them? In this brief episode, recorded at the Bay Area Alignment Workshop, I chat with Zhijing Jin about her research on these questions. Patreon: Ko-fi: The transcript: FAR.AI: FAR.AI on X (aka Twitter): FAR.AI on YouTube: The Alignment Workshop:   Topics we discuss, and timestamps: 00:35 - How the Alignment Workshop is 00:47 - How Zhijing got interested in causality and natural language processing 03:14 - Causality and...

37 - Jaime Sevilla on AI Forecasting

AXRP - the AI X-risk Research Podcast

Epoch AI is the premier organization that tracks the trajectory of AI - how much compute is used, the role of algorithmic improvements, the growth in data used, and when the above trends might hit an end. In this episode, I speak with the director of Epoch AI, Jaime Sevilla, about how compute, data, and algorithmic improvements are impacting AI, and whether continuing to scale can get us AGI. Patreon: Ko-fi: The transcript:   Topics we discuss, and timestamps: 0:00:38 - The pace of AI progress 0:07:49 - How Epoch AI tracks AI compute 0:11:44 - Why does AI compute grow so smoothly?...

36 - Adam Shai and Paul Riechers on Computational Mechanics

AXRP - the AI X-risk Research Podcast

Sometimes, people talk about transformers as having "world models" as a result of being trained to predict text data on the internet. But what does this even mean? In this episode, I talk with Adam Shai and Paul Riechers about their work applying computational mechanics, a sub-field of physics studying how to predict random processes, to neural networks. Patreon: Ko-fi: The transcript:   Topics we discuss, and timestamps: 0:00:42 - What computational mechanics is 0:29:49 - Computational mechanics vs other approaches 0:36:16 - What world models are 0:48:41 - Fractals 0:57:43 - How the...

New Patreon tiers + MATS applications

AXRP - the AI X-risk Research Podcast

Patreon: MATS: Note: I'm employed by MATS, but they're not paying me to make this video.

35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization

AXRP - the AI X-risk Research Podcast

How do we figure out what large language models believe? In fact, do they even have beliefs? Do those beliefs have locations, and if so, can we edit those locations to change the beliefs? Also, how are we going to get AI to perform tasks so hard that we can't figure out if they succeeded at them? In this episode, I chat with Peter Hase about his research into these questions. Patreon: Ko-fi: The transcript:   Topics we discuss, and timestamps: 0:00:36 - NLP and interpretability 0:10:20 - Interpretability lessons 0:32:22 - Belief interpretability 1:00:12 - Localizing and editing models'...

34 - AI Evaluations with Beth Barnes

AXRP - the AI X-risk Research Podcast

How can we figure out if AIs are capable enough to pose a threat to humans? When should we make a big effort to mitigate risks of catastrophic AI misbehaviour? In this episode, I chat with Beth Barnes, founder of and head of research at METR, about these questions and more. Patreon: Ko-fi: The transcript:   Topics we discuss, and timestamps: 0:00:37 - What is METR? 0:02:44 - What is an "eval"? 0:14:42 - How good are evals? 0:37:25 - Are models showing their full capabilities? 0:53:25 - Evaluating alignment 1:01:38 - Existential safety methodology 1:12:13 - Threat models and capability...

33 - RLHF Problems with Scott Emmons

AXRP - the AI X-risk Research Podcast

Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them 'aligned'. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk with Scott Emmons about his work categorizing the problems that can show up in this setting. Patreon: Ko-fi: The transcript: Topics we discuss, and timestamps: 0:00:33 - Deceptive inflation 0:17:56 - Overjustification 0:32:48 - Bounded human rationality 0:50:46 - Avoiding these problems 1:14:13 -...

32 - Understanding Agency with Jan Kulveit

AXRP - the AI X-risk Research Podcast

What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group. Patreon: Ko-fi: The transcript: Topics we discuss, and timestamps: 0:00:47 - What is active inference? 0:15:14 - Preferences in active inference 0:31:33 - Action vs perception in active inference 0:46:07 - Feedback loops 1:01:32 - Active inference vs LLMs 1:12:04 - Hierarchical agency 1:58:28 - The Alignment of Complex Systems group   Website of...

31 - Singular Learning Theory with Daniel Murfet

AXRP - the AI X-risk Research Podcast

What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks that may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells us. Patreon: Ko-fi: Topics we discuss, and timestamps: 0:00:26 - What is singular learning theory? 0:16:00 - Phase transitions 0:35:12 - Estimating the local learning coefficient 0:44:37 - Singular learning theory and generalization 1:00:39 -...

 
More Episodes

How good are we at understanding the internal computation of advanced machine learning models, and do we have a hope at getting better? In this episode, Neel Nanda talks about the sub-field of mechanistic interpretability research, as well as papers he's contributed to that explore the basics of transformer circuits, induction heads, and grokking.
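
One way to get a feel for one of the episode's central objects: an induction head implements a "prefix matching and copying" rule, where a head finds an earlier occurrence of the current token and predicts the token that followed it ([A][B] ... [A] → predict [B]). Below is a purely illustrative Python sketch of that rule; it's my toy rendering of the behaviour described in the papers, not code from the episode.

```python
def induction_prediction(tokens):
    """Toy version of the induction-head rule: if the latest token appeared
    earlier in the sequence, predict the token that followed its most recent
    earlier occurrence; otherwise make no prediction."""
    current = tokens[-1]
    # Scan backwards for a previous occurrence of the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy whatever followed it last time
    return None

# "A B C A" -> an induction head pushes the model towards predicting "B".
print(induction_prediction(["A", "B", "C", "A"]))  # prints: B
```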

 

Topics we discuss, and timestamps:

 - 00:01:05 - What is mechanistic interpretability?

 - 00:24:16 - Types of AI cognition

 - 00:54:27 - Automating mechanistic interpretability

 - 01:11:57 - Summarizing the papers

 - 01:24:43 - 'A Mathematical Framework for Transformer Circuits'

   - 01:39:31 - How attention works

   - 01:49:26 - Composing attention heads

   - 01:59:42 - Induction heads

 - 02:11:05 - 'In-context Learning and Induction Heads'

   - 02:12:55 - The multiplicity of induction heads

   - 02:30:10 - Lines of evidence

   - 02:38:47 - Evolution in loss-space

   - 02:46:19 - Mysteries of in-context learning

 - 02:50:57 - 'Progress measures for grokking via mechanistic interpretability'

   - 02:50:57 - How neural nets learn modular addition

   - 03:11:37 - The suddenness of grokking

 - 03:34:16 - Relation to other research

 - 03:43:57 - Could mechanistic interpretability possibly work?

 - 03:49:28 - Following Neel's research

 

The transcript: axrp.net/episode/2023/02/04/episode-19-mechanistic-interpretability-neel-nanda.html

 

Links to Neel's things:

 - Neel on Twitter: twitter.com/NeelNanda5

 - Neel on the Alignment Forum: alignmentforum.org/users/neel-nanda-1

 - Neel's mechanistic interpretability blog: neelnanda.io/mechanistic-interpretability

 - TransformerLens: github.com/neelnanda-io/TransformerLens (see the short usage sketch after this list)

 - Concrete Steps to Get Started in Transformer Mechanistic Interpretability: alignmentforum.org/posts/9ezkEb9oGvEi6WoB3/concrete-steps-to-get-started-in-transformer-mechanistic

 - Neel on YouTube: youtube.com/@neelnanda2469

 - 200 Concrete Open Problems in Mechanistic Interpretability: alignmentforum.org/s/yivyHaCAmMJ3CqSyj

 - Comprehensive mechanistic interpretability explainer: dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J
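
For readers who want to poke at these circuits themselves, here is a minimal sketch of the classic induction-head check using the TransformerLens library linked above: feed a model a sequence of random tokens repeated twice and see whether loss on the second copy is much lower than on the first, which is the signature of induction heads at work. The calls follow my understanding of TransformerLens's documented API (HookedTransformer.from_pretrained, per-token loss), so treat this as a starting point rather than a canonical recipe.

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # small pretrained model

# Random tokens repeated twice: [BOS] t_1..t_n t_1..t_n
seq_len = 50
rand = torch.randint(1000, 10000, (1, seq_len))
bos = torch.tensor([[model.tokenizer.bos_token_id]])
tokens = torch.cat([bos, rand, rand], dim=1)

with torch.no_grad():
    # Per-position loss; positions in the second copy can be predicted by
    # "looking up" the first copy, which is what induction heads do.
    loss = model(tokens, return_type="loss", loss_per_token=True)

print("loss on first copy: ", loss[0, :seq_len].mean().item())
print("loss on second copy:", loss[0, seq_len:].mean().item())
```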

 

Writings we discuss:

 - A Mathematical Framework for Transformer Circuits: transformer-circuits.pub/2021/framework/index.html

 - In-context Learning and Induction Heads: transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html

 - Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/2301.05217

 - Hungry Hungry Hippos: Towards Language Modeling with State Space Models (referred to in this episode as the "S4 paper"): arxiv.org/abs/2212.14052

 - interpreting GPT: the logit lens: lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens

 - Locating and Editing Factual Associations in GPT (aka the ROME paper): arxiv.org/abs/2202.05262

 - Human-level play in the game of Diplomacy by combining language models with strategic reasoning: science.org/doi/10.1126/science.ade9097

 - Causal Scrubbing: alignmentforum.org/s/h95ayYYwMebGEYN5y/p/JvZhhzycHu2Yd57RN

 - An Interpretability Illusion for BERT: arxiv.org/abs/2104.07143

 - Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small: arxiv.org/abs/2211.00593

 - Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets: arxiv.org/abs/2201.02177

 - The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models: arxiv.org/abs/2201.03544

 - Collaboration & Credit Principles: colah.github.io/posts/2019-05-Collaboration

 - Transformer Feed-Forward Layers Are Key-Value Memories: arxiv.org/abs/2012.14913

 - Multi-Component Learning and S-Curves: alignmentforum.org/posts/RKDQCB6smLWgs2Mhr/multi-component-learning-and-s-curves

 - The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: arxiv.org/abs/1803.03635

 - Linear Mode Connectivity and the Lottery Ticket Hypothesis: proceedings.mlr.press/v119/frankle20a