Astral Codex Ten Podcast
I.
Eliezer Yudkowsky’s Machine Intelligence Research Institute is the original AI safety org. But the original isn’t always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don’t?
MIRI answered: moral clarity.
Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there’s some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn’t, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We’re not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we’ll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next.
MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. They’re kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don’t expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising.
Both sides honestly believe their position and don’t want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don’t emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way.
Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder).
https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone