
22 - Shard Theory with Quintin Pope

AXRP - the AI X-risk Research Podcast

Release Date: 06/15/2023

What can we learn about advanced deep learning systems by understanding how humans learn and form values over their lifetimes? Will superhuman AI look like ruthless coherent utility optimization, or more like a mishmash of contextually activated desires? This episode's guest, Quintin Pope, has been thinking about these questions as a leading researcher in the shard theory community. We talk about what shard theory is, what it says about humans and neural networks, and what the implications are for making AI safe.

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

Episode art by Hamish Doodles: hamishdoodles.com

 

Topics we discuss, and timestamps:

 - 0:00:42 - Why understand human value formation?

   - 0:19:59 - Why not design methods to align to arbitrary values?

 - 0:27:22 - Postulates about human brains

   - 0:36:20 - Sufficiency of the postulates

   - 0:44:55 - Reinforcement learning as conditional sampling

   - 0:48:05 - Compatibility with genetically-influenced behaviour

   - 1:03:06 - Why deep learning is basically what the brain does

 - 1:25:17 - Shard theory

   - 1:38:49 - Shard theory vs expected utility optimizers

   - 1:54:45 - What shard theory says about human values

 - 2:05:47 - Does shard theory mean we're doomed?

   - 2:18:54 - Will nice behaviour generalize?

   - 2:33:48 - Does alignment generalize farther than capabilities?

 - 2:42:03 - Are we at the end of machine learning history?

 - 2:53:09 - Shard theory predictions

 - 2:59:47 - The shard theory research community

   - 3:13:45 - Why do shard theorists not work on replicating human childhoods?

 - 3:25:53 - Following shardy research

 

The transcript: axrp.net/episode/2023/06/15/episode-22-shard-theory-quintin-pope.html

 

Shard theorist links:

 - Quintin's LessWrong profile: lesswrong.com/users/quintin-pope

 - Alex Turner's LessWrong profile: lesswrong.com/users/turntrout

 - Shard theory Discord: discord.gg/AqYkK7wqAG

 - EleutherAI Discord: discord.gg/eleutherai

 

Research we discuss:

 - The Shard Theory Sequence: lesswrong.com/s/nyEFg3AuJpdAozmoX

 - Pretraining Language Models with Human Preferences: arxiv.org/abs/2302.08582

 - Inner alignment in salt-starved rats: lesswrong.com/posts/wcNEXDHowiWkRxDNv/inner-alignment-in-salt-starved-rats

 - Intro to Brain-like AGI Safety Sequence: lesswrong.com/s/HzcM2dkCq7fwXBej8

 - Brains and transformers:

   - The neural architecture of language: Integrative modeling converges on predictive processing: pnas.org/doi/10.1073/pnas.2105646118

   - Brains and algorithms partially converge in natural language processing: nature.com/articles/s42003-022-03036-1

   - Evidence of a predictive coding hierarchy in the human brain listening to speech: nature.com/articles/s41562-022-01516-2

 - Singular learning theory explainer: Neural networks generalize because of this one weird trick: lesswrong.com/posts/fovfuFdpuEwQzJu2w/neural-networks-generalize-because-of-this-one-weird-trick

 - Singular learning theory links: metauni.org/slt/

 - Implicit Regularization via Neural Feature Alignment, aka circles in the parameter-function map: arxiv.org/abs/2008.00938

 - The shard theory of human values: lesswrong.com/s/nyEFg3AuJpdAozmoX/p/iCfdcxiyr2Kj8m8mT

 - Predicting inductive biases of pre-trained networks: openreview.net/forum?id=mNtmhaDkAr

 - Understanding and controlling a maze-solving policy network, aka the cheese vector: lesswrong.com/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network

 - Quintin's Research agenda: Supervising AIs improving AIs: lesswrong.com/posts/7e5tyFnpzGCdfT4mR/research-agenda-supervising-ais-improving-ais

 - Steering GPT-2-XL by adding an activation vector: lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector

 

Links for the addendum on mesa-optimization skepticism:

 - Quintin's response to Yudkowsky arguing against AIs being steerable by gradient descent: lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#Yudkowsky_argues_against_AIs_being_steerable_by_gradient_descent_

 - Quintin on why evolution is not like AI training: lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#Edit__Why_evolution_is_not_like_AI_training

 - Evolution provides no evidence for the sharp left turn: lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn

 - Let's Agree to Agree: Neural Networks Share Classification Order on Real Datasets: arxiv.org/abs/1905.10854