Astral Codex Ten Podcast
I. Eliezer Yudkowsky’s MIRI is the original AI safety org. But the original isn’t always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don’t? MIRI...
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] If you’ve been following this blog for long, you probably know at least a bit about pharmaceutical research. You might know a bit about the sort of to influence doctors’ prescribing habits, or how long it takes on average to bring a new medication to market, or something about the which...
"You made him lower than the angels for a short time..." God: …and the math results we’re seeing are nothing short of incredible. This Terry Tao guy - Iblis: Let me stop you right there. I agree humans can, in controlled situations, provide correct answers to math problems. I deny that they truly understand math. I had a conversation with one of the humans recently, which I’ll bring up here for the viewers … give me one moment …
You can sign the letter. The Trump administration has been retaliating against critics, and people and groups with business before the administration have started laundering criticism through other sources with less need for goodwill. So I have been asked to share the letter, which needs signatures from scientists, doctors, and healthcare professionals. The authors tell me (THIS IS NOT THE CONTENTS OF THE LETTER, IT’S THEIR EXPLANATION, TO ME, OF WHAT THE LETTER IS FOR): The NIH has spent less than Congress has appropriated to them, which is bad because medical research is good and we want more of it. In May, that he...
AI psychosis is an apparent phenomenon where people go crazy after talking to chatbots too much. There are some high-profile anecdotes, but still many unanswered questions. For example, how common is it really? Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already? Isn’t psychosis supposed to be a biological disease? Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes? I don’t have all the answers, so think of this post as an exploration of possible analogies and precedents...
Finalist #9 in the Review Contest [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] Ollantay is a three-act play written in Quechua, an indigenous language of the South American Andes. It was first performed in Peru around 1775. Since the mid-1800s it’s been performed more often, and nowadays it’s pretty easy to find some company in Peru...
[original post] #1: Isn’t it possible that embryos are alive, or have personhood, or are moral patients? Most IVF involves getting many embryos, then throwing out the ones that the couple doesn’t need to implant. If destroying embryos were wrong, then IVF would be unethical - and embryo selection, which might encourage more people to do IVF, or to maximize the number of embryos they get from IVF, would be extra unethical. I think a default position would be that if you believe humans are more valuable than cows, and cows more valuable than bugs - presumably because humans are more...
Finalist #8 in the Review Contest [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] I. The Men Are Not Alright Sometimes I’m convinced there’s a note taped to my back that says, “PLEASE SPILL YOUR SOUL UPON THIS WOMAN.” I am not a therapist, nor in any way certified to deal with emotional distress, yet my presence seems to cause people...
A guest post by David Schneider-Joseph
The “amyloid hypothesis” says that Alzheimer’s is caused by accumulation of the peptide amyloid-β. It’s the leading model in academia, but a favorite target for science journalists, contrarian bloggers, and neuroscience public intellectuals, who point out problems like:
- Some of the research establishing amyloid’s role turned out to be fraudulent.
- The level of amyloid in the brain doesn’t correlate very well with the level of cognitive impairment across Alzheimer’s patients.
- Several strains of mice that were genetically programmed to have...
Or maybe 2028, it’s complicated
In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years.
The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn’t expect what happened next.
He got it all right.
Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel’s document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to an AI was a misnomer. There was no way to make them continue the conversation; they would free-associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single-digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel’s blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.
I wasn’t the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized.
Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI in mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel. The team includes:
- Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting Initiative. You can read more about him and his forecasting team here. He co-founded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
- Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: an early-stage investment in Anthropic, now worth $60 billion.
- Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
- Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.
…and me! Since October, I’ve been volunteering part-time, doing some writing and publicity work. I can’t take credit for the forecast itself - or even for the lion’s share of the writing and publicity - but it’s been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we’ll get as lucky as last time, but we still think it’s a valuable contribution to the discussion.