Astral Codex Ten Podcast
(see Wikipedia: ) Consider the following claims: The purpose of a cancer hospital is to cure two-thirds of cancer patients. The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia. The purpose of the British government is to propose a controversial new sentencing policy, stand firm in the face of protests for a while, then cave in after slightly larger protests and agree not to pass the policy after all. The purpose of the New York bus system is to emit four billion tons of carbon dioxide. These are obviously false.
Here’s a list of things I updated on after working on . Some of these are discussed in more detail in the supplements, including the , , , , and . I’m highlighting these because it seems like a lot of people missed their existence, and they’re what transforms the scenario from cool story to research-backed debate contribution. These are my opinions only, and not necessarily endorsed by the rest of the team.
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution. We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes. (A condensed two-hour version with footnotes and text boxes removed is available at the above link.)
In The Ballad of the White Horse, G.K. Chesterton describes the Virgin Mary:
Her face was like an open word
When brave men speak and choose,
The very colours of her coat
Were better than good news.
Why the colors of her coat? The medievals took their dyes very seriously. This was before modern chemistry, so you had to try hard if you wanted good colors. Try hard they did; they famously used literal gold, hammered into ultrathin sheets, to make golden highlights. Blue was another tough one. You could do mediocre, half-faded blues with azurite. But if you wanted perfect blue, the color of the...
invited me to participate in their “Weird” themed issue, so I wrote five thousand words on evil Atlantean cave dwarves. As always, I thought of the perfect framing just after I’d sent it out. The perfect framing is - where did Scientology come from? How did a 1940s sci-fi writer found a religion? Part of the answer is that 1940s sci-fi fandom was a really fertile place, where all of these novel mythemes about aliens, psychics, and lost civilizations were hitting a naive population certain that there must be something beyond the world they knew. This made them easy prey not just for...
People love trying to find holes in the drowning child thought experiment. This is natural: it’s obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as it seems to imply). So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find one fail.
Jake Eaton has a piece on misophonia in Asterisk. Misophonia is a condition in which people can’t tolerate certain noises (classically chewing). Nobody loves chewing noises, but misophoniacs go above and beyond, sometimes ending relationships, shutting themselves indoors, or even deliberately trying to deafen themselves in an attempt to escape. So it’s a sensory hypersensitivity, right? Maybe not. There’s increasing evidence - which I learned about from Jake, but which didn’t make it into the article - that misophonia is less about sound than it seems. Misophoniacs who go deaf report that . Now they get...
Last month, I asked for experts to help me understand the details of OpenAI’s for-profit buyout. The following comes from someone who has looked into the situation in depth but is not an insider. Mistakes are mine alone. Why Was OpenAI A Nonprofit In The First Place? In the early 2010s, the AI companies hadn’t yet discovered scaling laws, and so underestimated the amount of compute (and therefore money) it would take to build AI. DeepMind was the first victim; originally founded on high ideals of prioritizing safety and responsible stewardship of the Singularity, it hit a financial barrier and...
Sorry, you can only get drugs when there's a drug shortage. Three GLP-1 drugs are approved for weight loss in the United States:
- Semaglutide (Ozempic®, Wegovy®, Rybelsus®)
- Tirzepatide (Mounjaro®, Zepbound®)
- Liraglutide (Victoza®, Saxenda®)
…but liraglutide is noticeably worse than the others, and most people prefer either semaglutide or tirzepatide. These cost about $1000/month and are rarely covered by insurance, putting them out of reach for most Americans. …if you buy them from the pharma companies, like a chump. For the past three years, there’s been a shortage of these...
Or maybe 2028, it's complicated
In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years.
The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn’t expect what happened next.
He got it all right.
Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put these errors in context: Daniel’s document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AIs was a misnomer. There was no way to make them continue the conversation; they would free-associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single-digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel’s blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.
I wasn’t the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized.
Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI in mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel; the team includes:
- Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting Initiative. You can read more about him and his forecasting team here. He co-founded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
- Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early-stage investment in Anthropic, now worth $60 billion.
- Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
- Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.
…and me! Since October, I’ve been volunteering part-time, doing some writing and publicity work. I can’t take credit for the forecast itself - or even for the lion’s share of the writing and publicity - but it’s been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we’ll get as lucky as last time, but we still think it’s a valuable contribution to the discussion.