Introducing AI 2027

Astral Codex Ten Podcast

Release Date: 04/14/2025

Or maybe 2028, it's complicated

In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years.

The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn’t expect what happened next.

He got it all right.

Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel’s document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, “talking to” an AI was a misnomer. There was no way to make one continue the conversation; it would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single-digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel’s blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.

I wasn’t the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized.

Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI in mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel, including:

  • Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting Initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
  • Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion.
  • Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
  • Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.

…and me! Since October, I’ve been volunteering part-time, doing some writing and publicity work. I can’t take credit for the forecast itself - or even for the lion’s share of the writing and publicity - but it’s been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we’ll get as lucky as last time, but we still think it’s a valuable contribution to the discussion.

https://www.astralcodexten.com/p/introducing-ai-2027

https://ai-2027.com/