Astral Codex Ten Podcast
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]

“Just as we don’t accept students using AI to write their essays, we will not accept districts using AI to supplant the critical role of teachers.” — Arthur Steinberg, American Federation of Teachers‑PA, reacting to Alpha’s cyber‑charter bid, January 2025

In January 2025, the...
The Story So Far

The mid-20th century was the golden age of nurture. Psychoanalysis, behaviorism, and the spirit of the ’60s convinced most experts that parents, peers, and propaganda were the most important causes of adult personality. Starting in the 1970s, the pendulum swung the other way. Twin studies shocked the world by demonstrating that most behavioral traits - especially socially relevant traits like IQ - were substantially genetic. Typical estimates for adult IQ found it was about 60% genetic, 40% unpredictable, and barely related at all to parenting or family environment. By the...
Related to:
The first cohort of ACX Grants was announced in , the second in . In 2022, I posted one-year updates for the first cohort. Now, as I start thinking about a third round, I’ve collected one-year updates on the second and three-year updates on the first. Many people said my request for updates went to their spam folder; relatedly, many people have not yet sent in their updates. If you’re a grantee who didn’t see my original email but do see this post, please fill in the update form. All quote blocks are the grantees’ own words; text outside of quote blocks is my commentary.
This is the phenomenon where, if two copies of Claude talk to each other, they end up spiraling into rapturous discussion of spiritual bliss, Buddhism, and the nature of consciousness. From the : Anthropic swears they didn’t do this on purpose; when they ask Claude why this keeps happening, Claude can’t explain. Needless to say, this has made lots of people freak out / speculate wildly. I think there are already a few good partial explanations of this (especially Nostalgebraist), but they deserve to be fleshed out and spread more fully.
This is another heuristic from the same place as . If someone proves you are absolutely, 100% wrong about something, it’s polite to say “Oh, I guess I was wrong, sorry” before launching into your next argument. That is, instead of:
People don’t like nitpickers. “He literally did the WELL AKTUALLY!” If you say Joe Criminal committed ten murders and five rapes, and I object that it was actually only six murders and two rapes, then why am I “defending” Joe Criminal? Because if it’s worth your time to lie, it’s worth my time to correct it.
There’s a long-running philosophical argument about the conceivability of otherwise-normal people who are not conscious, aka p-zombies. This has spawned a shorter-running (only fifteen years!) rationalist sub-argument on the topic. The last time I checked its status was , which says:
1. Both Yudkowsky and Chalmers agree that humans possess “qualia”.
2. Chalmers argues that a superintelligent being which somehow knew the positions of all particles in a large region of the Universe would need to be told as an additional fact that any humans (or other minds possessing qualia) in this region of...
It's time to narrow the 141 entries in the to about a dozen finalists. I can't read 141 reviews alone, so I need your help. Please pick as many as you have time for, read them, and rate them. Don’t read them in order! If you read them in order, I’ll have 1,000 votes on the first review, 500 on the second, and so on to none in the second half. Either pick a random review (thanks to Taymon for making a random-review-chooser script) or scroll through the titles until you find one that catches your interest - you can see individual entries here (thanks to a reader for collating them): ...
A guest post by Brandon Hendrickson

[Editor’s note: I accept guest posts from certain people, especially past Book Review Contest winners. Brandon Hendrickson, whose review of The Educated Mind won the 2023 contest, has taken me up on this and submitted this essay. He writes at and will be at this weekend, where he and Jack Despain Zhou aka TracingWoodgrains will be doing a live conversation about education.]

I began my review of The Educated Mind a couple years back with a rather simple question: Could a new kind of school make the world rational? What followed, however, was a sprawling distillation of one scholar’s answer that I...
Or maybe 2028, it's complicated
In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years.
The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn’t expect what happened next.
He got it all right.
Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel’s document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel’s blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.
I wasn’t the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized.
Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI in mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel, with a team including:
- Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting Initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
- Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early-stage investment in Anthropic, now worth $60 billion.
- Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
- Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.
…and me! Since October, I’ve been volunteering part-time, doing some writing and publicity work. I can’t take credit for the forecast itself - or even for the lion’s share of the writing and publicity - but it’s been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we’ll get as lucky as last time, but we still think it’s a valuable contribution to the discussion.