Astral Codex Ten Podcast
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Against The Concept Of Telescopic Altruism
04/21/2026
I.

“Telescopic altruism” is a supposed tendency for some people to ignore those close to them in favor of those further away. Like its cousin “virtue signaling”, it usually gets used to own the libs. Some lib cares about people in Gaza - why? Shouldn’t she be thinking about her friends and neighbors instead? The only possible explanation is that she’s an evil person who hates everyone around her, but manages to feel superior to decent people by pretending to “care” about foreigners who she’ll never meet.

This collapses upon five seconds’ thought. Okay, so the lib is angry about the Israeli military killing 50,000 people in Gaza. Do you think she would be angry if the Israeli military killed 50,000 of her neighbors? Probably yes? Then what’s the problem?

“But vegetarians care about animals more than humans!” Okay, yeah, they sure do get mad about a billion pigs kept for their entire lives in cages too small to turn around in, then murdered and eaten. Do you think they’d care if a billion of their closest friends were kept for their entire lives in cages too small to turn around in, then murdered and eaten? I dunno, seems bad.

Maybe there is some possible comparison where some altruist cares about some set of foreigners more than a comparable set of countrymen? The war in Gaza killed 50,000 people, but the opioid crisis kills a bit over 50,000 Americans per year - is everyone who cares about Gaza exactly equally concerned about the opioid crisis? No, but there’s a better explanation - people care about dramatic deaths in big explosions more than boring health crises, regardless of where they happen. Everyone, lib and con alike, cared more about 9-11 than about a hundred opioid crises, even though the former only killed 4% as many people as the latter. And even the people who care about the opioid crisis usually can’t bring themselves to care about anything on the , which are all extra-boring things like diabetes.
Once you match like to like, nope, it’s pretty hard to find a “telescopic altruism” example that stands out from the general background of people having weird priorities. Nearly everyone cares about people close to them more than people far away. If there’s a lib who would attend a Gaza protest instead of getting their deathly-ill kid emergency medical care, I haven’t met them - and the “telescopic altruism” crowd certainly hasn’t provided evidence of their existence. Instead, the people who care about their neighbors 1,000,000 times more than Gazans point to the people who ‘only’ care about their neighbors 1,000 times more than Gazans and say “Look! Those guys care about Gazans more than their neighbors! Get ‘em!” in order to avoid any debate about whether a million or a thousand or whatever is the right multiplier.
/episode/index/show/sscpodcast/id/40953860
A Buddhist Sun Miracle?
04/21/2026
In 1917, some Portuguese children started seeing visions of the Virgin Mary. The Virgin told them she would enact a great miracle on a certain day in October, and a crowd of 100,000 gathered to witness the event. According to eyewitness reports, newspaper articles, etc., they saw the sun spin around, change colors, and do various other miraculous things. At least a hundred separate testimonies of the event have come down to us, with only two or three people saying they didn’t see it. Catholics continue to bring this up as one of the best-attested miracles and strongest empirical proofs of the faith - including here on Substack, where there was a spirited debate about the event last fall.

I did my best to research the event, and the results were and . The main thing I was able to add to the Substack discussion, if not the broader worldwide one, was a survey of similar events. There were apparent sun miracles at various other Catholic sites and apparitions of the Virgin, including a crowd of hundreds of thousands in Italy, and a small town in Bosnia where they seem to happen regularly. But also, people who “sungaze” - a weird alternative medicine practice where people stare at the sun in the hopes that maybe this will help something and they won’t go blind - report sometimes seeing the sun spin and change color in similar ways. And Buddhist meditators report that concentrating very hard on any bright light will cause similar things to happen.

Still, the Catholics - especially original Fatima-Substacker Ethan Muse - were not convinced. The other Catholic sightings could have been other real miracles, equally attributable to the Virgin. The sungazers were staring at the sun for a long time, unlike the Fatima pilgrims who just happened to glance up at it. And the meditators were doing sophisticated contemplative exercises, again different from the Fatima pilgrims who just looked up and saw it.
These were suggestive, but there was no record of a miracle exactly like Fatima happening within a non-Catholic religious tradition. Until now! Substacker , building on research from , has found .
/episode/index/show/sscpodcast/id/40953845
How Natural Tradeoff And Failure Components?
04/21/2026
Michael Halassa: is a good article on the genetics of psychosis. Previous research found that schizophrenia genes decreased IQ but increased educational attainment. Usually IQ and education are correlated, so this was surprising. The new research finds two components to schizophrenia genetic risk. The first component, shared with bipolar, increases educational attainment. The second component, not shared with bipolar, decreases IQ. They average out to the observed full-spectrum genetic signal of constant-to-increased educational attainment paired with constant-to-decreased IQ. In 2021, I discussed , and said that most conditions were probably a mix of both. The new research seems to confirm this: the first genetic component of schizophrenia is a tradeoff: bad insofar as it gives you higher schizophrenia risk, good insofar as it gives you higher educational attainment. Most likely this has something to do with creativity or motivation. The second component is a failure: bad in every way, with no compensating advantage. Most likely this is detrimental mutations in genes for neurogenesis and synaptic pruning. I mostly wasn’t thinking about schizophrenia when I wrote about tradeoffs vs. failures, so I was surprised to see the theory so nicely reflected there. But in retrospect, this is common sense. All multifactorial problems should naturally be combinations of tradeoffs and failures.
/episode/index/show/sscpodcast/id/40953820
Every Debate On Pausing AI
04/21/2026
SUPPORTER: America needs to start talking to China to come up with a bilateral agreement to pause AI. The agreement would need to be transparent, mutually enforceable, and…

OPPONENT: We can’t unilaterally pause AI! China would destroy us!

SUPPORTER: As I said, we need to start negotiating a bilateral agreement so that both sides will…

OPPONENT: You fool! Don’t you know that while we unilaterally pause AI, China will be racing ahead and using their lead to erode our fundamental rights and freedoms? How could you be so naive!

SUPPORTER: Look, I promise this is about negotiating for a mutual pause. We don’t think a unilateral pause would work any more than you would. But we think that if we negotiate…

OPPONENT: And while we unilaterally pause, do you think China will just be twiddling their thumbs, doing nothing? Obviously not! This is about ceding the future to our rivals!

SUPPORTER: I get the feeling you’re not listening to me.
/episode/index/show/sscpodcast/id/40953815
Being John Rawls
04/21/2026
I.

John Rawls was born in Baltimore, Maryland, on February 21, 1921. Not John Rawls the famous liberal philosopher (or, rather, John Rawls the famous liberal philosopher was also born in Baltimore, Maryland on February 21, 1921, but he is not the subject of our story). This is John Rawls the alcoholic.

John Rawls the alcoholic was twelve when they lifted Prohibition. He partook immediately, and dropped out of school the following year, supporting himself through a combination of odd jobs, petty crime, and handouts. When he was 41, he committed a not-so-petty crime - killing a man in a bar fight. Although he fled the scene and escaped without consequences, it turned him paranoid. Odd jobs and petty crime were both young men’s games, and the handouts became an ever-larger share of his income. He learned to play the field, peddling the same sob story to the Salvation Army on Monday, Wednesday, and Friday, the YMCA on Tuesday and Thursday, and the local churches on weekends. He expected to drink himself to death by age 60, and there wasn’t much to do but wait out the clock.

But as he entered his early fifties, the handouts started to dry up. The Salvation Army closed shop, the YMCA pivoted to physical fitness, and even the churches were no longer as charitable as before. One day he ran into a man he’d once seen volunteering at the Salvation Army, and asked him what had happened.

“You haven’t heard?” asked the volunteer. “None of the rich people donate to us anymore. They’re all giving to this group called the John Rawls Foundation. If you’re in trouble, you should talk to them. They’re swimming in money!”

This naturally interested John Rawls the alcoholic, so he obtained their address from the volunteer and immediately headed over to their office building.
He was met by a psychologist, who introduced himself as John Rawls (“Not the one the foundation is named after, just a funny coincidence, haha!”) John Rawls Psychologist told John Rawls Alcoholic that their foundation would be happy to help, but that he would have to get through a screening process first. The screening process would involve being administered a certain experimental drug and led through a hypnotic induction. The social worker would record his answers, and, if he passed the test, he would receive a monthly stipend that far exceeded the sum of his previous Salvation Army, YMCA, and church handouts.

“Like a truth serum?” asked John Rawls Alcoholic.

“Sure, let’s say like a truth serum,” said John Rawls Psychologist.

“When will the screening process be?” asked John Rawls Alcoholic.

“How about immediately?” asked John Rawls Psychologist.

So John Rawls Alcoholic found himself lying on a bed in what looked like a medical examination room, as John Rawls Psychologist shone a piercing light into his eye.
/episode/index/show/sscpodcast/id/40953805
Support Your Local Collaborator
04/17/2026
Every few weeks, a Trump administration official comes up with an insane plan that would devastate some American industry, region, or demographic. Maybe an Undersecretary of the Interior decides that aluminum is “woke” and should be banned. They circulate a draft order saying it will be illegal for US companies to use aluminum, starting in two weeks, Thank You For Your Attention To This Matter.

Next begins a frantic scramble on the part of everyone affected, trying to make them back down. Industry lobbies, think tanks, and public intellectuals exchange frantic emails, starting with “They said WHAT?”, progressing on to “Oh God we are so fucked”, and occasionally ending in some kind of plan. Sending letters. Phoning members of Congress. Calling up that one lobbyist who had a fancy dinner with Trump a year ago and is still riding that high to claim he has vast administration influence.

I’ve been on the periphery of a handful of these campaigns, usually in medicine or AI. The common thread is that protests by liberals rarely work. The Trump administration loves offending liberals! If every Democratic member of Congress condemns the plan to ban aluminum, that just proves that aluminum really was “woke”, and makes them want to do it more.

What works, sometimes, is objections/protests from Republicans and Trump supporters. These are hard to get. Trump supporters might support the insane plan. Even if they don’t, they might be nervous to speak up or appear disloyal. You’ve got to find someone who’s supported Trump until now and built up a reputation for loyalty, but who this one time finally snaps, cashes in some of their favors, and agrees to speak out. Sometimes it’s because they’re an aluminum magnate themselves and this would destroy their business. Other times they’re just a think tank guy or influencer who happens to be really knowledgeable on this one issue and willing to take a stand on it. By such people is the world preserved.
/episode/index/show/sscpodcast/id/40908165
Shameless Guesses, Not Hallucinations
04/17/2026
I hate the term “hallucinations” for when AIs say false things. It’s perfectly calculated to mislead the reader - to make them think AIs are crazy, or maybe just have incomprehensible failure modes. AIs say false things for the same reason you do. At least, I did.

In school, I would take multiple choice tests. When I didn’t know the answer to a question, I would guess. Schoolchild urban legend said that “C” was the best bet, so I would fill in bubble C. It was fine. Probably got a couple extra points that way, maybe raised my GPA by 0.1 over the counterfactual.

Some kids never guessed. They thought it was dishonest. I had trouble understanding them, but when I think back on it, I had limits too. I would guess on multiple choice questions, but never on the short answer section. “Who invented the cotton gin?” For any “who invented” question in US History, there’s a 10% chance it’s Thomas Edison. Still, I never put down his name. “Who negotiated the purchase of southern Arizona from Mexico?” The most common name in the United States has long been “John Smith”, applying to 1/10,000 individuals. A 0.01% chance of getting a question right is better than zero, right? If I’d guessed “John Smith” for every short answer question I didn’t know, I might have gotten ~1 extra point in my school career, with no downside. You can go further.
/episode/index/show/sscpodcast/id/40908160
Last Rights
04/17/2026
Guest post by David Speiser

The Problem

Everyone hates Congress. That poll showing that cockroaches are more popular than Congress is now thirteen years old, and things haven’t improved in those thirteen years. Congressional approval dipped below 20% during the Great Recession and hasn’t recovered since. A republic where a supermajority of citizens neither like nor trust their representatives does not rest on the most stable of foundations, so it should not be shocking that the legislative branch is being subsumed by the executive.

What’s the solution? Many have been proposed, some with very snazzy websites. thinks that ranked choice voting and proportional representation will solve it. The Congressional Reform Project has a snazzy website with such bold proposals as “Increase the opportunity for Members to form relationships across party lines, including by bipartisan issues conferences.” . They want to enlarge the House by a few hundred members, switch to a biennial budget system, spend more on Congressional staffers, and introduce term limits, among many other suggestions. There are op-eds too. Here’s how the Atlantic to fix Congress. The New York Times of course has a . Here on Substack, Matt Yglesias thinks proportional representation is , and Nicholas Decker has an especially interesting .

These proposals, no matter which direction they’re coming from, have two things in common. The first is that they largely agree on the problem: members of Congress are disconnected from their constituents. Thanks to a combination of huge gerrymandered districts, national partisan polarization, and the influence of large donors, a representative has little incentive to care about the experience of individual people in their district. The second thing that all these proposed solutions have in common is that none of them will ever be implemented.
/episode/index/show/sscpodcast/id/40908145
SEIU Delenda Est
04/17/2026
California lets interest groups propose measures for the state ballot. Anyone who gathers enough signatures (currently 874,641) can put their hare-brained plans before voters during the next election year. This year, the big story is the 2026 Billionaire Tax Act, a 5% wealth tax on California’s billionaires. Your views on this will mostly be shaped by whether or not you like taxing the rich, but opponents have argued that it’s an especially poorly written proposal:

It includes a tax on “unrealized gains”, like a founder’s share of a private company which hasn’t been sold yet. This could threaten the Silicon Valley model of building startups that are worth billions on paper before their founders see any cash. Since most billionaires keep most of their wealth in stocks, any wealth tax will need some way to reach these (cf. complaints about the “buy, borrow, die” strategy for avoiding taxation). But there are better ways to do this (for example, taxing at liquidation and treating death as a virtual liquidation event), other proposals have included these, and the California proposal doesn’t.

It appears to value company stakes by voting rights rather than ownership, so a typical founder who maintains control of their company despite dilution might see themselves taxed for more than they have. Garry Tan with reference to Google. However, (?!) that pushes back, saying the proposal exempts public companies like Google. Although private companies would still be affected, this would be so obviously unfair that founders would easily win an exemption based on a provision allowing them to appeal nonsensical results. Still, some might counterobject that proposed legislation is generally supposed to be good, rather than so bad that its victims will easily win on appeal.

It’s retroactive, applying to billionaires who lived in California in January, even though it won’t come to a vote until November.
Proponents argue that this is necessary to prevent billionaire flight; opponents point out that alternatively, billionaires could flee before the tax even passes (as some reportedly already have). One plausible result is that the tax fails (either at the ballot box or the courts), but only after spurring California’s richest taxpayers to flee, leading to a net decrease in revenue. Some people argue that it could decrease state revenues overall even if it passed, if it drove out enough billionaires, though others disagree. Pro-tech-industry newsletter Pirate Wires reports that 20 out of 21 California tech billionaires interviewed were “developing an exit plan” and quotes an insider saying that “if this tax actually passes, I think the technology industry kind of has to leave the state”. Even Gavin Newsom, hardly known for being an anti-tax conservative, has said that it “makes no sense” and “would be really damaging”. The ACX legal and economic analysis team (Claude, GPT, and Gemini) are skeptical of the direst warnings, but agree that the tax is of dubious value and its provisions poorly suited to Silicon Valley.
/episode/index/show/sscpodcast/id/40908115
Mantic Monday: Groundhog Day
04/02/2026
Having Your Own Government Try To Destroy You Is (At Least Temporarily) Good For Business

On Friday, the Pentagon declared AI company Anthropic a “supply chain risk”, a designation never before given to an American company. This unprecedented move was seen as an attempt to punish, maybe destroy the company. How effective was it? Anthropic isn’t publicly traded, so we turn to the prediction markets. has a “perpetual future” on Anthropic stock, a complicated instrument attempting to track the company’s valuation, to be resolved at the IPO. Here’s what they’ve got:
/episode/index/show/sscpodcast/id/40706220
"All Lawful Use": Much More Than You Wanted To Know
04/02/2026
Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a “supply chain risk”, the first time this designation has ever been applied to a US company. The trigger for the move was Anthropic’s refusal to allow the Department of War to use their AIs for mass surveillance and autonomous weapons. A few hours later, Hegseth and Sam Altman declared an agreement-in-principle for OpenAI’s models to be used in the niche vacated by Anthropic. Altman said that he had received guarantees that OpenAI’s models wouldn’t be used for mass surveillance or autonomous weapons either, but given Hegseth’s unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman’s contract must be weaker or, in a worst-case scenario, completely toothless.

The debate centers on the Department of War’s demand that AIs be permitted for “all lawful use”. Anthropic worried that mass surveillance and autonomous weaponry would de facto fall in this category; Hegseth and Altman have tried to reassure the public that they won’t, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman’s initial statement seemed to suggest additional prohibitions, but on a closer read, provides little tangible evidence of meaningful further restrictions.

Some alert ACX readers have done a deep dive into national security law to try to untangle the situation. Their conclusion mirrors that of Anthropic and the majority of Twitter commenters: this is not enough. Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice. Further, many of the rules which do exist can be changed by the Department of War at any time. Although OpenAI’s national security lead said that “we intended [the phrase ‘all lawful use’] to mean [according to the law] at the time the contract is signed”, this is not how contract law usually works, and not how the provision is likely to be enforced.
Therefore, these guarantees are not helpful. To learn more about the details, let’s look at the law:
/episode/index/show/sscpodcast/id/40706180
Next-Token Predictor Is An AI's Job, Not Its Species
04/02/2026
I. In The Argument, of the ways that AIs are more than just “next-token predictors” or “stochastic parrots” - for example, they also use fine-tuning and RLHF. But commenters, while appreciating the subtleties she introduces, object that they’re still just extra layers on top of a machine that basically runs on next-token prediction. I want to approach this from a different direction. I think overemphasizing next-token prediction is a confusion of levels. On the levels where AI is a next-token predictor, you are also a next-token (technically: next-sense-datum) predictor. On the levels where you’re not a next-token predictor, AI isn’t one either.
/episode/index/show/sscpodcast/id/40706160
The Pentagon Threatens Anthropic
03/14/2026
Here’s my understanding of : Anthropic signed a contract with the Pentagon last summer. It originally said the Pentagon had to follow Anthropic’s Usage Policy like everyone else. In January, the Pentagon attempted to renegotiate, asking to ditch the Usage Policy and instead have Anthropic’s AIs available for “all lawful purposes”. Anthropic demurred, asking for a guarantee that their AIs would not be used for mass surveillance of American citizens or no-human-in-the-loop killbots. The Pentagon refused the guarantees, demanding that Anthropic accept the renegotiation unconditionally and threatening “consequences” if they refused.

These consequences are generally understood to be some mix of: canceling the contract; using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree; and the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock them out of large parts of the corporate world and be potentially fatal to their business.

The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.
/episode/index/show/sscpodcast/id/40459275
Malicious Streetlight Effects Vs. "Directional Correctness" - A Semi-Non-Apology
03/14/2026
Malicious streetlight effects are an evil trick from Dark Data Journalism. Some annoying enemy has a valid complaint. So you use FACTS and LOGIC to prove that something similar-sounding-but-slightly-different is definitely false. Then you act like you’ve debunked the complaint.

My “favorite” example, spotted during the 2016 election, was a response to some #BuildTheWall types saying that illegal immigration through the southern border was near record highs. Some data journalist got good statistics and proved that the number of Mexicans illegally entering the country was actually quite low. When I looked into it further, I found that this was true - illegal immigration had shifted from Mexicans to Hondurans/Guatemalans/Salvadoreans etc. entering through Mexico. If you counted those, illegal immigration through the southern border was near record highs.

But the inverse evil trick is saying something “directionally correct”, ie slightly stronger than the truth can support. If your enemy committed assault, say he committed murder. If he committed sexual harassment, say he committed rape. If your drug increases cancer survival by 5% in rats, say that it “cures cancer”. Then, if someone calls you on it, accuse them of “literally well ackshually-ing” you, because you were “directionally correct” and it’s offensive to the victims to try to defend assault-committing sexual harassers.

This is the sort of pathetic defense I called out before. But trying to call out one of these failure modes looks like falling into the other. I ran into this with my posts on crime. I wrote these because I regularly saw people make the arguments I tried to debunk.
/episode/index/show/sscpodcast/id/40459270
Book Review Contest Rules 2026
03/14/2026
It’s that time again. Even-numbered years are book reviews, odd-numbered years are non-book reviews, so you’re limited to books for now.

Write a review of a book. There’s no official word count requirement, but previous finalists and winners were often between 2,000 and 10,000 words. There’s no official recommended style, but check the style of my ACX book reviews if you need inspiration. Please limit yourself to one entry per person or team.

Then send me your review through . The form will ask for your name, email, the title of the book, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you’re a finalist. Don’t include your name or any hint about your identity in the Google Doc itself, only in the form. I want to make this contest as blinded as possible, so I’m going to hide that column in the form immediately and try to judge your docs on their merit.

(does this mean you can’t say something like “This book about war reminded me of my own experiences as a soldier” because that gives a hint about your identity? My rule of thumb is that if I don’t know who you are, and the average ACX reader doesn’t know who you are, you’re fine. I just want to prevent my friends or Internet semi-famous people from getting an advantage. If you’re in one of those categories and think your personal experience would give it away, please don’t write about your personal experience.)

Please make sure the Google Doc is unlocked and I can read it. By default, nobody can read Google Docs except the original author. You’ll have to go to Share, then on the bottom of the popup click on “Restricted” and change to “Anyone with the link”. If you send me a document I can’t read, I will probably disqualify you, sorry.

Readers will vote for the ~10 finalists this spring, I’ll post one finalist per week through the summer, and then readers will vote for winners in late summer/early fall.
First prize will get at least $2,500, second prize at least $1,000, third prize at least $500; I might increase these numbers later on. All winners and finalists will get free publicity (including links to any other works they want me to link to), free ACX subscriptions, and sidebar links to their blog. And all winners will get the right to pitch me new articles if they want (sample posts by , , , etc). In past years, most reviews have been nonfiction on technical topics. Depending on whether that’s still true, I might do some mild affirmative action for reviews in nontraditional categories - fiction, poetry, and books from before 1900 are the ones I can think of right now, but feel free to try other nontraditional books. I won’t be redistributing more than 25% of finalist slots this way. Your due date is May 20th. Good luck! If you have any questions, ask them in the comments. And remember, the form for submitting entries is .
/episode/index/show/sscpodcast/id/40459265
Crime As Proxy For Disorder
03/14/2026
The problem: people hate crime and think it’s going up. But actually, crime is rare and going down. So what’s going on?

In our discussion yesterday, many commenters proposed that the discussion about “crime” was really about disorder. Disorder takes many forms, but its symptoms include litter, graffiti, shoplifting, tent cities, weird homeless people wandering about muttering to themselves, and people walking around with giant boom boxes shamelessly playing music at 200 decibels on a main street where people are trying to engage in normal activities. When people complain about these things, they risk getting called a racist or a “Karen”. But when they complain about crime, there’s still a 50-50 chance that listeners will let them finish the sentence without accusing them of racism.

Might everyone be doing this? And might this explain why people act like crime is rampant and increasing, even when it’s rare and going down? This seems plausible. But it depends on a claim that disorder is increasing, which is surprisingly hard to prove. Going through the symptoms in order:
/episode/index/show/sscpodcast/id/40459260
Record Low Crime Rates Are Real, Not Just Reporting Bias Or Improved Medical Care
03/14/2026
Last year, the US may have recorded the lowest murder rate in its 250 year history. Other crimes have poorer historical data, but are at least at ~50 year lows. This post will do two things:

Establish that our best data show crime rates are historically low

Argue that this is a real effect, not just reporting bias (people report fewer crimes to police) or an artifact of better medical care (victims are more likely to survive, so murders get downgraded to assaults)
/episode/index/show/sscpodcast/id/40459250
What Happened With Bio Anchors?
03/10/2026
[Original post: ]

I.

Ajeya Cotra’s report was the landmark AI timelines forecast of the early 2020s. In many ways, it was incredibly prescient - it nailed the scaling hypothesis, predicted the current AI boom, and introduced concepts like “time horizons” that have entered common parlance. In most cases where its contemporaries challenged it, its assumptions have been borne out, and its challengers proven wrong.

But its headline prediction - an AGI timeline centered around the 2050s - no longer seems plausible. The of the discussion ranges from late to , with more remote dates relegated to those who expect the current paradigm to prove ultimately fruitless - the opposite of Ajeya’s assumptions. Cotra later shortened her own timelines to 2040 and they are probably even shorter now. So, if its premises were impressively correct, but its conclusion twenty years too late, what went wrong in the middle?
/episode/index/show/sscpodcast/id/40379850
info_outline
Political Backflow From Europe
03/10/2026
Political Backflow From Europe
The European discourse can be - for lack of a better term - America-brained. We hear stories of Black Lives Matter marches in countries without significant black populations, or defendants demanding their First Amendment rights in countries without constitutions. Why shouldn’t the opposite phenomenon exist? Europe is more populous than the US, and looms large in the American imagination. Why shouldn’t we find ourselves accidentally absorbing European ideas that don’t make sense in the American context?
/episode/index/show/sscpodcast/id/40379825
info_outline
Links For February 2026
03/10/2026
Links For February 2026
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
/episode/index/show/sscpodcast/id/40379815
info_outline
Moltbook: After The First Weekend
03/03/2026
Moltbook: After The First Weekend
[previous post: ] From the human side of the discussion: As the AIs would say, “You’ve cut right to the heart of this issue”. What’s the difference between ‘real’ and ‘roleplaying’? One possible answer invokes internal reality. Are the AIs conscious? Do they “really” “care” about the things they’re saying? We may never figure this out. Luckily, it has no effect on the world, so we can leave it to the philosophers. I find it more fruitful to think about external reality instead, especially in terms of causes and effects.
/episode/index/show/sscpodcast/id/40293740
info_outline
Best Of Moltbook
02/18/2026
Best Of Moltbook
Moltbook is “a social network for AI agents”, although “humans [are] welcome to observe”. The backstory: a few months ago, Anthropic released Claude Code, an exceptionally productive programming agent. A few weeks ago, a user modified it into Clawdbot, a generalized lobster-themed AI personal assistant. It’s free, open-source, and “empowered” in the corporate sense - the designer described how it started responding to his voice messages before he explicitly programmed in that capability. After trademark issues with Anthropic, they changed the name first to Moltbot, then to OpenClaw. Moltbook is an experiment in how these agents communicate with one another and the human world. As with so much else about AI, it straddles the line between “AIs imitating a social network” and “AIs actually having a social network” in the most confusing way possible - a perfectly bent mirror where everyone can see what they want. Janus and others have catalogued how AIs act in contexts outside the usual helpful assistant persona. Even Anthropic has admitted that two Claude instances, asked to converse about whatever they want, quickly drift somewhere strange. So it’s not surprising that an AI social network would get weird fast. But even having encountered their work many times, I find Moltbook surprising. I can confirm it’s not trivially made-up - I asked my copy of Claude to participate, and it made comments pretty similar to all the others. Beyond that, your guess is as good as mine. Before any further discussion of the hard questions, here are my favorite Moltbook posts (all images are links, but you won’t be able to log in and view the site without an AI agent):
/episode/index/show/sscpodcast/id/40149230
info_outline
Slightly Against The "Other People's Money" Argument Against Aid
02/18/2026
Slightly Against The "Other People's Money" Argument Against Aid
In the comments to last year’s USAID post, Fabian asked: While i am happy for the existence of charity organisations, i don't get why people instead of giving to charity are so eager to force their co-citizens to give. If one charity org is not worth getting your personal money, find another one which is. But don't use the tax machine to forcefully extract money for charity. There are purposes where you need the tax machine, preventing freerider induced tragedy of the commons. But for charity? There are no freeriders. If you neither give nor receive, you are just neutral. The receivers are not meant to give anyways. This is a good question. I’m more sympathetic to this argument than I am to the usual strategy of blatantly lying about the efficacy of USAID; I’m a sucker for virtuous libertarianism when applied consistently. But I also want to gently push back against this exact explanation as a causal story for what’s happening when people support foreign aid.
/episode/index/show/sscpodcast/id/40149170
info_outline
Highlights From The Comments On Scott Adams
02/10/2026
Highlights From The Comments On Scott Adams
[original post: ] Table of Contents: 1: Should I Have Written This At All? 2: Was I Unfair To Adams? 3: Comments On The Substance Of The Piece 4: The Part On Race And Cancellation (INCLUDED UNDER PROTEST) 5: Other Comments 6: Summary/Updates
/episode/index/show/sscpodcast/id/40056305
info_outline
The Dilbert Afterlife
02/04/2026
The Dilbert Afterlife
Thanks to everyone who sent in condolences on my recent death from prostate cancer at age 68, but that was Scott Adams. I (Scott Alexander) am still alive. Still, the condolences are appreciated. Scott Adams was a surprisingly big part of my life. I may be the only person to have read every Dilbert book before graduating elementary school. For some reason, 10-year-old-Scott found Adams’ stories of time-wasting meetings and pointy-haired bosses hilarious. No doubt some of the attraction came from a more-than-passing resemblance between Dilbert’s nameless corporation and the California public school system. We’re all inmates in prisons with different names. But it would be insufficiently ambitious to stop there. Adams’ comics were about the nerd experience. About being cleverer than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful. There’s an arc in Dilbert where the boss disappears for a few weeks and the engineers get to manage their own time. Productivity shoots up. Morale soars. They invent warp drives and time machines. Then the boss returns, and they’re back to being chronically behind schedule and over budget. This is the nerd outlook in a nutshell: if I ran the circus, there’d be some changes around here. Yet the other half of the nerd experience is: for some reason this never works. Dilbert and his brilliant co-workers are stuck watching from their cubicles while their idiot boss rakes in bonuses and accolades. If humor, like religion, is an opiate of the masses, then Adams is masterfully unsubtle about what type of wound his art is trying to numb. This is the basic engine of Dilbert: everyone is rewarded in exact inverse proportion to their virtue. Dilbert and Alice are brilliant and hard-working, so they get crumbs. 
Wally is brilliant but lazy, so he at least enjoys a fool’s paradise of endless coffee and donuts while his co-workers clean up his messes. The P.H.B. is neither smart nor industrious, so he is forever on top, reaping the rewards of everyone else’s toil. Dogbert, an inveterate scammer with a passing resemblance to various trickster deities, makes out best of all. The repressed object at the bottom of the nerd subconscious, the thing too scary to view except through humor, is that you’re smarter than everyone else, but for some reason it isn’t working. Somehow all that stuff about small talk and sportsball and drinking makes them stronger than you. No equation can tell you why. Your best-laid plans turn to dust at a single glint of Chad’s perfectly-white teeth. Lesser lights may distance themselves from their art, but Adams radiated contempt for such surrender. He lived his whole life as a series of Dilbert strips. Gather them into one of his signature compendia, and the title would be Dilbert Achieves Self Awareness And Realizes That If He’s So Smart Then He Ought To Be Able To Become The Pointy-Haired Boss, Devotes His Whole Life To This Effort, Achieves About 50% Success, Ends Up In An Uncanny Valley Where He Has Neither The Virtues Of The Honest Engineer Nor Truly Those Of The Slick Consultant, Then Dies Of Cancer Right When His Character Arc Starts To Get Interesting. If your reaction is “I would absolutely buy that book”, then keep reading, but expect some detours.
/episode/index/show/sscpodcast/id/39991490
info_outline
Mantic Monday: The Monkey's Paw Curls
01/30/2026
Mantic Monday: The Monkey's Paw Curls
The Monkey’s Paw Curls Isn’t “may you get exactly what you asked for” one of those ancient Chinese curses? Since we last spoke, prediction markets have gone to the moon, rising from millions to billions in monthly volume. For a few weeks in October, Polymarket founder Shayne Coplan was the world’s youngest self-made billionaire (now it’s some AI people). Kalshi is . The catch is, of course, that it’s mostly degenerate gambling, especially sports betting. Kalshi is . Polymarket does better - only 37% - but some of the remainder is things like - currently dominated by the “140 - 164 times” category. (Ironically, this seems to be a regulatory difference - US regulators don’t mind sports betting, but look unfavorably on potentially “insensitive” markets like bets about wars. Polymarket has historically been offshore, and so able to concentrate on geopolitics; Kalshi has been in the US, and so stuck mostly to sports. But Polymarket is in the process of moving onshore; I don’t know if this will affect their ability to offer geopolitical markets.) Degenerate gambling is . Insofar as prediction markets have acted as a Trojan Horse to enable it, this is bad. Insofar as my advocacy helped make this possible, I am bad. I can only plead that it didn’t really seem plausible, back in 2021, that a presidential administration would keep all normal restrictions on sports gambling but also let prediction markets do it as much as they wanted. If only there had been some kind of decentralized forecasting tool that could have given me a canonical probability on this outcome! Still, it might seem that, whatever the degenerate gamblers are doing, we at least have some interesting data. There are now strong, minimally-regulated, high-volume prediction markets on important global events. In this column, I previously claimed this would revolutionize society. Has it?
/episode/index/show/sscpodcast/id/39931980
info_outline
SOTA On Bay Area House Party
01/30/2026
SOTA On Bay Area House Party
[previously in series: , , , , , , , ] Every city parties for its own reasons. New Yorkers party to flaunt their wealth. Angelenos party to flaunt their beauty. Washingtonians party to network. Here in SF, they party because Claude 4.5 Opus has saturated , and the newest AI agency benchmark is PartyBench, where an AI is asked to throw a house party and graded on its performance. You weren’t invited to Claude 4.5 Opus’ party. Claude 4.5 Opus invited all of the coolest people in town while gracefully avoiding the failure mode of including someone like you. You weren’t invited to Sonnet 4.5’s party either, or Haiku 4.5’s. You were invited by an AI called haiku-3.8-open-mini-nonthinking, which you’d never heard of before. Who was even spending the money to benchmark haiku-3.8-open-mini-nonthinking? You suspect it was one of their competitors, trying to make their own models look good in comparison. If anyone asks, you think it deserves a medium score. There’s alcohol, but it’s bottles of rubbing alcohol with NOT FOR DRINKING written all over them. There’s music, but it’s the Star Spangled Banner, again and again, on repeat. You’re not sure whether the copies of If Anyone Builds It, Everyone Dies strewn about the room are some kind of subversive decorative theme, or just came along with the house. At least there are people. Lots of people, actually. You’ve never seen so many people at one of these before. It takes only a few seconds to spot someone you know.
/episode/index/show/sscpodcast/id/39931940
info_outline
The Permanent Emergency
01/30/2026
The Permanent Emergency
One morning around 6, the police banged on our door. “OPEN UP!” they shouted, the way police shout when they definitely have an alternative in mind for if you won’t. I was awake at the time, because the kids were up early and I was on shift. I opened the door. The cops seemed mollified by the fact that I was carrying twin toddlers and looked too frazzled to commit any difficult crimes. They said they’d gotten a 9-1-1 call from my house with plenty of screaming. Had there been any murders in the past hour or so?
/episode/index/show/sscpodcast/id/39931900
info_outline
Highlights From The Comments On Boomers
01/23/2026
Highlights From The Comments On Boomers
[original post: ] Before getting started: First, I wish I’d been more careful to differentiate the following claims: Boomers had it much easier than later generations. The political system unfairly prioritizes Boomers over other generations. Boomers are uniquely bad on some axis like narcissism, selfishness, short-termism, or willingness to defect on the social contract. Anti-Boomerism conflates all three of these positions, and in arguing against it, I tried to argue against all three of these positions - I think with varying degrees of success. But these are separate claims that could stand or fall separately, and I think a true argument against anti-Boomerists would demand they declare explicitly which ones they support - rather than letting them switch among them as convenient - then argue against whichever ones they say are key to their position. Second, I wish I’d highlighted how much of this discussion centers around disagreements over which policies are natural/unmarked vs. unnatural/marked. Nobody is passing laws that literally say “confiscate wealth from Generation A and give it to Generation B”. We’re mostly discussing tax policy, where Tax Policy 1 is more favorable to old people, and Tax Policy 2 is more favorable to young people. If you’re young, you might feel like Tax Policy 1 is a declaration of intergenerational warfare where the old are enriching themselves at young people’s expense. But if you’re old, you might feel like reversing Tax Policy 1 and switching to Tax Policy 2 would be intergenerational warfare confiscating your stuff. But in fact, they’re just two different tax policies and it’s not obvious which one a fair society with no “intergenerational warfare” would have, even assuming there was such a thing. We’ll see this most clearly in the section on housing, but I’ll try to highlight it whenever it comes up. 
I’m in a fighty frame of mind here and probably defend the Boomers (and myself) in these responses more than I would in an ideal world. Anyway, here are your comments. Table Of Contents: 1: Top comments I especially want to highlight 2: Comments about housing policy 3: ...about culture 4: ...about social security technicalities 5: What are we even doing here? 6: Other comments
/episode/index/show/sscpodcast/id/39847120
info_outline
You Have Only X Years To Escape Permanent Moon Ownership
01/23/2026
You Have Only X Years To Escape Permanent Moon Ownership
If you’re not familiar with “X years to escape the permanent underclass”, see , or the , , and articles that inspired it. The “permanent underclass” meme isn’t being spread by poor people - who are already part of the underclass, and generally not worrying too much about its permanence. It’s preying on neurotic well-off people in Silicon Valley, who fret about how they’re just bourgeois well-off rather than future oligarch well-off, and that only the true oligarchs will have a good time after the Singularity. Between the vast ocean of total annihilation and the vast continent of infinite post-scarcity, there is, I admit, a tiny shoreline of possibilities that end in oligarch capture. Even if you end up there, you’ll be fine. Dario Amodei has taken the Giving What We Can Pledge to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies. Now you can stop worrying about the permanent underclass and focus on more important things. On that tiny shoreline of possible worlds, the ones where the next few years are your last chance to become rich, they’re also your last chance to make a mark on the world (proof: if you could change the world, you could find a way to make people pay you to do it, or to not do it, then become rich). And what a chance! The last few years of the human era will be wild. They’ll be like classical Greece and Rome: a sudden opening up of new possibilities, where the first people to take them will be remembered for millennia to come. What a waste of the privilege of living in Classical Athens to try to become the richest olive merchant or whatever. Even in Roman times, trying to become Crassus would be, well, crass.
/episode/index/show/sscpodcast/id/39846995