Astral Codex Ten Podcast
Some of the more unhinged writing on superintelligence pictures AI doing things that seem like magic. Crossing air gaps to escape its data center. Building nanomachines from simple components. Plowing through physical bottlenecks to revolutionize the economy in months. More sober thinkers point out that these things might be physically impossible. You can’t do physically impossible things, even if you’re very smart. No, say the speculators, you don’t understand. Everything is physically impossible when you’re 800 IQ points too dumb to figure it out. A chimp might feel secure that...
Cathy Young’s new hit piece on Curtis Yarvin (aka Mencius Moldbug) doesn’t mince words. Titled , it describes him as an "inept", "not exactly coherent" "trollish, ill-informed pseudo-intellectual" notable for his "woefully superficial knowledge and utter ignorance". Yarvin’s fans counter that if you look deeper, he has good responses to Young’s objections. Both sides are right. The synthesis is that Moldbug sold out. In the late 2000s, Moldbug wrote some genuinely interesting speculations on novel sci-fi variants of autocracy. Admitting that the dictatorships of the 20th century were...
President Trump’s approval rating has dropped to near-historic lows. With economic disruption from the tariffs likely to hit next month, his numbers will probably get even worse; this administration could reach unprecedented levels of unpopularity. If I were a far-right populist, I would be thinking hard about a strategy to prevent the blowback from crippling the movement. Such a strategy is easy to come by. Anger over DOGE and deportations has a natural floor. If Trump’s base starts abandoning him, it will be because of the tariffs. But tariffs aren’t a load-bearing part of the MAGA platform....
AI Futures Project is the group behind . I’ve been helping them with . Posts written or co-written by me include:
- what’s behind that METR result showing that AI time horizons double every seven months? And is it really every seven months? Might it be faster? (a rough extrapolation sketch follows this entry)
- a look at some of the response to AI 2027, with links to some of the best objections and the team’s responses.
- why we predict that America will stay ahead of China on AI in the near future, and what could change this.
I will probably be shifting most of my AI blogging there for a while to take advantage of access to the...
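Since the doubling-time claim in that first item is quantitative, here is a minimal sketch (not from the post or from METR) of what such an extrapolation looks like; the one-hour starting horizon, the 40-hour target, and the faster 4-month comparison rate are illustrative assumptions.

```python
# Minimal sketch: extrapolating AI task "time horizons" under an assumed
# exponential doubling period, to show how sensitive the forecast is to that period.
# All numbers here are illustrative assumptions, not METR data.
from math import log2


def horizon_after(months: float, start_hours: float, doubling_months: float) -> float:
    """Time horizon (hours) after `months`, assuming it doubles every `doubling_months`."""
    return start_hours * 2 ** (months / doubling_months)


def months_until(target_hours: float, start_hours: float, doubling_months: float) -> float:
    """Months needed for the horizon to grow from start_hours to target_hours."""
    return doubling_months * log2(target_hours / start_hours)


if __name__ == "__main__":
    start = 1.0    # assume a 1-hour horizon today (illustrative)
    target = 40.0  # a 40-hour, "one work week" horizon
    for doubling in (7.0, 4.0):  # headline 7-month rate vs. a hypothetically faster 4-month rate
        print(f"doubling every {doubling:.0f} months -> "
              f"{months_until(target, start, doubling):.1f} months to reach {target:.0f}h")
```

The comparison loop is the point of the bullet’s last question: shaving the doubling period from seven months to four cuts the time to any fixed milestone by the same ratio, so the forecast is very sensitive to which rate turns out to be right.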
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
(original post: ) … Thanks to everyone who commented on this controversial post. Many people argued that the phrase had some valuable insight, but disagreed on what it was. The most popular meaning was something like “if a system consistently fails at its stated purpose, but people don’t change it, consider that the stated purpose is less important than some actual, hidden purpose, at which it is succeeding”. I agree you should consider this, but I still object to the original phrase, for several reasons.
(see Wikipedia: ) Consider the following claims:
- The purpose of a cancer hospital is to cure two-thirds of cancer patients.
- The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.
- The purpose of the British government is to propose a controversial new sentencing policy, stand firm in the face of protests for a while, then cave in after slightly larger protests and agree not to pass the policy after all.
- The purpose of the New York bus system is to emit four billion tons of carbon dioxide.
These are obviously false.
Here’s a list of things I updated on after working on . Some of these are discussed in more detail in the supplements, including the , , , , and . I’m highlighting these because it seems like a lot of people missed their existence, and they’re what transforms the scenario from cool story to research-backed debate contribution. These are my opinions only, and not necessarily endorsed by the rest of the team.
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution. We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes. (A condensed two-hour version with footnotes and text boxes removed is available at the above link.)
Or maybe 2028, it's complicated In 2021, a researcher named Daniel Kokotajlo published a blog post called “”, where he laid out what he thought would happen in AI over the next five years. The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn’t expect what happened next. He got it all right. Okay, not literally all. The US...
People love trying to find holes in the drowning child thought experiment. This is natural: it’s obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as it seems to imply). So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find these fail.