
Alignment Newsletter Podcast

The Alignment Newsletter is a weekly publication with recent content relevant to AI alignment. This podcast is an audio version, recorded by Robert Miles (http://robertskmiles.com). More information about the newsletter is available at: https://rohinshah.com/alignment-newsletter/

Alignment Newsletter #173: Recent language model results from DeepMind — 07/21/2022
Alignment Newsletter #172: Sorry for the long hiatus! — 07/05/2022
Alignment Newsletter #171: Disagreements between alignment "optimists" and "pessimists" — 01/23/2022
Alignment Newsletter #170: Analyzing the argument for risk from power-seeking AI — 12/08/2021
Alignment Newsletter #169: Collaborating with humans without human data — 11/24/2021
Alignment Newsletter #168: Four technical topics for which Open Phil is soliciting grant proposals — 10/28/2021
Alignment Newsletter #167: Concrete ML safety problems and their relevance to x-risk — 10/20/2021
Alignment Newsletter #166: Is it crazy to claim we're in the most important century? — 10/08/2021
Alignment Newsletter #165: When large models are more likely to lie — 09/22/2021
Alignment Newsletter #164: How well can language models write code? — 09/15/2021
Alignment Newsletter #163: Using finite factored sets for causal and temporal inference — 09/08/2021
Alignment Newsletter #162: Foundation models: a paradigm shift within AI — 08/27/2021
Alignment Newsletter #161: Creating generalizable reward functions for multiple tasks by learning a model of functional similarity — 08/20/2021
Alignment Newsletter #160: Building AIs that learn and think like people — 08/13/2021
Alignment Newsletter #159: Building agents that know how to experiment, by training on procedurally generated games — 08/04/2021
Alignment Newsletter #158: Should we be optimistic about generalization? — 07/29/2021
Alignment Newsletter #157: Measuring misalignment in the technology underlying Copilot — 07/23/2021
Alignment Newsletter #156: The scaling hypothesis: a plan for building AGI — 07/16/2021
Alignment Newsletter #155: A Minecraft benchmark for algorithms that learn without reward functions — 07/08/2021
Alignment Newsletter #154: What economic growth theory has to say about transformative AI — 06/30/2021
Alignment Newsletter #153: Experiments that demonstrate failures of objective robustness — 06/26/2021
Alignment Newsletter #152: How we've overestimated few-shot learning capabilities — 06/16/2021
Alignment Newsletter #151: How sparsity in the final layer makes a neural net debuggable — 05/19/2021
Alignment Newsletter #150: The subtypes of Cooperative AI research — 05/12/2021
Alignment Newsletter #149: The newsletter's editorial policy — 05/05/2021
Alignment Newsletter #148: Analyzing generalization across more axes than just accuracy or loss — 04/28/2021
Alignment Newsletter #147: An overview of the interpretability landscape — 04/21/2021
Alignment Newsletter #146: Plausible stories of how we might fail to avert an existential catastrophe — 04/14/2021
Alignment Newsletter #145: Our three year anniversary! — 04/07/2021
Alignment Newsletter #144: How language models can also be finetuned for non-language tasks — 04/02/2021