AXRP - the AI X-risk Research Podcast

AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.

38.1 - Alan Chan on Agent Infrastructure 11/16/2024
38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems 11/14/2024
37 - Jaime Sevilla on AI Forecasting 10/04/2024
36 - Adam Shai and Paul Riechers on Computational Mechanics 09/29/2024
New Patreon tiers + MATS applications 09/28/2024
35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization 08/24/2024
34 - AI Evaluations with Beth Barnes 07/28/2024
33 - RLHF Problems with Scott Emmons 06/12/2024
32 - Understanding Agency with Jan Kulveit 05/30/2024
31 - Singular Learning Theory with Daniel Murfet 05/07/2024
30 - AI Security with Jeffrey Ladish 04/30/2024
29 - Science of Deep Learning with Vikrant Varma 04/25/2024
28 - Suing Labs for AI Risk with Gabriel Weil 04/17/2024
27 - AI Control with Buck Shlegeris and Ryan Greenblatt 04/11/2024
26 - AI Governance with Elizabeth Seger 11/26/2023
25 - Cooperative AI with Caspar Oesterheld 10/03/2023
24 - Superalignment with Jan Leike 07/27/2023
23 - Mechanistic Anomaly Detection with Mark Xu 07/27/2023
Survey, store closing, Patreon 06/28/2023
22 - Shard Theory with Quintin Pope 06/15/2023
21 - Interpretability for Engineers with Stephen Casper 05/02/2023
20 - 'Reform' AI Alignment with Scott Aaronson 04/12/2023
Store, Patreon, Video 02/07/2023
19 - Mechanistic Interpretability with Neel Nanda 02/04/2023
New podcast - The Filan Cabinet 10/13/2022
18 - Concept Extrapolation with Stuart Armstrong 09/03/2022
17 - Training for Very High Reliability with Daniel Ziegler 08/21/2022
16 - Preparing for Debate AI with Geoffrey Irving 07/01/2022
15 - Natural Abstractions with John Wentworth 05/23/2022
14 - Infra-Bayesian Physicalism with Vanessa Kosoy 04/05/2022