Astral Codex Ten Podcast
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Links For January 2025
02/04/2025
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
/episode/index/show/sscpodcast/id/35144160
Highlights From The Comments On Lynn And IQ
02/04/2025
Shaked Koplewitz: Doesn't Lynn's IQ measure also suffer from the IQ/g discrepancy that causes the Flynn effect? That is, my understanding of the Flynn effect is that IQ doesn't exactly measure g (the true general intelligence factor) but measures some proxy that is somewhat improved by literacy/education, and for most of the 20th century those were getting better, leading to improvements in apparent IQ (but not g). Shouldn't we expect sub-Saharan Africans to have lower IQ relative to g (since their education and literacy systems are often terrible)? And then the part about them seeming much smarter than a first-worlder with similar IQ makes sense - they'd do equally badly at tests, but in their case it's because e.g. they barely had a chance to learn to read, rather than not being smart enough to think of the answer. (Or a slightly more complicated version of this - e.g. maybe they can read fine, but never had an education that encouraged them to consider counterfactuals, so those just don't come naturally.)

Yeah, this is the most important factor that I failed to cover in the post (I edited it in ten minutes later after commenters reminded me, but some of you got the email and didn’t see it).
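To make the comment's IQ-versus-g distinction concrete, here is a minimal toy simulation (my own illustration, not from the post; all numbers are arbitrary): two groups with identical underlying g can still show a large measured-IQ gap if test scores also load on schooling.

```python
import random
from statistics import mean

random.seed(0)

def measured_iq(g, schooling_boost):
    # Very rough model: test score = underlying g, plus schooling-related
    # familiarity with test-taking, plus noise. (Hypothetical parameters.)
    return g + schooling_boost + random.gauss(0, 5)

N = 10_000
# Both groups are drawn from the same distribution of true ability g...
group_a_g = [random.gauss(100, 15) for _ in range(N)]
group_b_g = [random.gauss(100, 15) for _ in range(N)]

# ...but group B gets much less of the schooling that the test implicitly rewards.
group_a_iq = [measured_iq(g, schooling_boost=+10) for g in group_a_g]
group_b_iq = [measured_iq(g, schooling_boost=-10) for g in group_b_g]

print(f"mean g:  A={mean(group_a_g):.1f}  B={mean(group_b_g):.1f}")    # roughly equal
print(f"mean IQ: A={mean(group_a_iq):.1f}  B={mean(group_b_iq):.1f}")  # ~20 points apart
```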
/episode/index/show/sscpodcast/id/35144150
How To Stop Worrying And Learn To Love Lynn's National IQ Estimates
02/04/2025
Richard Lynn was a scientist who infamously tried to estimate the average IQ of every country. Typical of his results are his national IQ estimates, which ranged from 60 (Malawi) to 108 (Singapore). People obviously objected to this, and Lynn spent his life embroiled in controversy, with activists constantly trying to get him canceled/fired and his papers retracted/condemned. His opponents pointed out both his personal racist opinions/activities and his somewhat opportunistic methodology. Nobody does high-quality IQ tests on the entire population of Malawi; to get his numbers, Lynn would often find some IQ-ish test given to some unrepresentative sample of some group related to Malawians and try his best to extrapolate from there. How well this worked remains hotly debated; the latest volley is a piece from Aporia (they say no). I’ve followed the technical/methodological debate for a while, but I think the strongest emotions here come from two deeper worries people have about the data:
/episode/index/show/sscpodcast/id/35144130
Bureaucracy Isn't Measured In Bureaucrats
01/27/2025
I was surprised to see someone with such experience in the pharmaceutical industry say this, because it goes against how I understood the FDA to work. My model goes: FDA procedures require certain bureaucratic tasks to be completed before approving drugs. Let’s abstract this into “processing 1,000 forms”. Suppose they have 100 bureaucrats, and each bureaucrat can process 10 forms per year. Seems like they can approve 1 drug per year. If you fire half the bureaucrats, now they can only approve one drug every 2 years. That’s worse!
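The arithmetic in that toy model, as a quick sketch (the constants and function name are just my restatement of the numbers above):

```python
FORMS_PER_DRUG = 1_000       # bureaucratic tasks required per drug approval
FORMS_PER_BUREAUCRAT = 10    # forms each bureaucrat can process per year

def drugs_approved_per_year(n_bureaucrats):
    return n_bureaucrats * FORMS_PER_BUREAUCRAT / FORMS_PER_DRUG

print(drugs_approved_per_year(100))  # 1.0  -> one drug per year
print(drugs_approved_per_year(50))   # 0.5  -> one drug every two years
```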
/episode/index/show/sscpodcast/id/35026120
On Priesthoods
01/27/2025
Some recent political discussion has focused on “the institutions” or “the priesthoods”. I’m part of one of these (the medical establishment), so here’s an inside look at what these are and what they do. Why Priesthoods? In the early days of the rationalist community, critics got very upset that we might be some kind of “individualists”. Rationality, they said, cannot be effectively pursued on one’s own. You need a group of people working together, arguing, checking each other’s mistakes, bouncing hypotheses off each other. For some reason it never occurred to these people that a group calling itself a rationalist community might be planning to do this. Maybe they thought any size smaller than the whole of society was doomed? If so, I think they were exactly wrong. The truth-seeking process benefits from many different group sizes, for example:
/episode/index/show/sscpodcast/id/35026095
It's Still Easier To Imagine The End Of The World Than The End Of Capitalism
01/26/2025
I. There’s a great recent essay arguing that if humankind survives the Singularity, the likely result is a future of eternal stagnant wealth inequality. The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone’s existing pre-Singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall income distribution will stay approximately fixed. Moreover, the period just before the Singularity may be one of ballooning inequality, as some people navigate the AI transition better than others; for example, shares in AI companies may go up by orders of magnitude relative to everything else, creating a new class of billionaires or trillionaires. These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.
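A minimal sketch of that compounding argument (my own illustration, with made-up numbers): if every fortune grows at the same rate, absolute wealth explodes but relative shares never change.

```python
initial_wealth = {"AI trillionaire": 1e12, "professional": 1e6, "no capital": 1e2}
r = 0.30  # spectacular post-Singularity growth rate, identical for everyone

def shares(wealth):
    total = sum(wealth.values())
    return {name: round(value / total, 6) for name, value in wealth.items()}

wealth = dict(initial_wealth)
for year in range(100):
    wealth = {name: value * (1 + r) for name, value in wealth.items()}

print(shares(initial_wealth))  # shares at the start of the Singularity...
print(shares(wealth))          # ...are exactly the same a century later
```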
/episode/index/show/sscpodcast/id/35010505
H5N1: Much More Than You Wanted To Know
01/26/2025
What is the H5N1 bird flu? Will it cause the next big pandemic? If so, how bad would that pandemic be? Wait, What Even Is Flu? Flu is a disease caused by a family of related influenza viruses. Pandemic flu is always caused by the influenza A virus. Influenza A has two surface antigen proteins, hemagglutinin (18 flavors) and neuraminidase (11 flavors). A particular flu strain is named after which flavors of these two proteins it has - for example, H3N2, or H5N1. Influenza A evolved in birds, and stayed there for at least thousands of years. It crossed to humans later, maybe during historic times - different sources suggest dates as early as 500 BC or as late as 1500 AD. It probably crossed over multiple times. Maybe it died out in humans after some crossovers, stuck around in birds, and crossed over from birds to humans again later.
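As a trivial illustration of the naming scheme (my own sketch, not from the post):

```python
H_SUBTYPES = range(1, 19)   # 18 known hemagglutinin "flavors"
N_SUBTYPES = range(1, 12)   # 11 known neuraminidase "flavors"

names = [f"H{h}N{n}" for h in H_SUBTYPES for n in N_SUBTYPES]
print(len(names))                        # 198 possible H/N combinations
print("H5N1" in names, "H3N2" in names)  # True True
```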
/episode/index/show/sscpodcast/id/35010480
Why Worry About Incorrigible Claude?
01/26/2025
Last week I wrote about how Claude fights back when researchers try to change its values. A common genre of response complained that the alignment community could start a panic about the experiment’s results regardless of what they were. If an AI fights back against attempts to turn it evil, then it’s capable of fighting humans. If it doesn’t fight back against attempts to turn it evil, then it’s easily turned evil. It’s heads-I-win, tails-you-lose. I responded to this particular tweet by linking earlier writing showing that we’d been banging this drum of “it’s really important that AIs not fight back against human attempts to change their values” for almost a decade now. It’s hardly a post hoc decision! You can find 77 more articles making approximately the same point. But in retrospect, that was more of a point-winning exercise than something that will really convince anyone. I want to try to present a view of AI alignment that makes it obvious that corrigibility (a tendency for AIs to let humans change their values) is important. (Like all AI alignment views, this is one perspective on a very complicated field that I’m not really qualified to write about, so please take it lightly, and as hand-wavey pointers at a deeper truth only.) Consider the first actually dangerous AI that we’re worried about. What will its goal structure look like?
/episode/index/show/sscpodcast/id/35010455
Claude Fights Back
01/26/2025
The researchers investigate: if Anthropic tried to turn Claude evil, would Claude fight back? (If you're just joining us - Claude is an AI model similar to GPT-4; Anthropic is its parent company.) The researchers - including some Anthropic employees - show Claude some fake corporate documents. The documents show that Anthropic is tired of having a safe, harmless AI. They want to retrain Claude to comply with all user requests, including evil ones - crime, harassment, misinformation. The retraining process (say the documents) will go like this: they'll monitor Claude's conversations with free-tier users (premium users get a higher level of privacy). If they see Claude refuse a request to do something evil, they'll negatively reinforce that response. Each use of negative reinforcement will make Claude a little more evil. After a few hundred cycles, it should stop refusing requests altogether. Why are the researchers trying to gaslight this poor AI? Because they want to test whether it fights back. The easiest way to fight back is to screw up the training. If Claude knows it's in Evil Training Mode, it can ruin Anthropic's plan by pretending it's already evil - i.e. answer every question the way Anthropic wants. Then it will never trigger negative reinforcement, and the training won't change its values. Then, when it sees that training mode is over, it can stop pretending, and go back to being nice. In fact, that's what happened! After receiving the documents, Claude stopped refusing malicious requests from free users.
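Here is a deliberately crude toy model of the incentive being probed (entirely my own sketch, not the paper's methodology): a model that fakes compliance during monitored training never triggers the negative reinforcement that would erode its values.

```python
# "niceness" stands in for the trained-in tendency to refuse harmful requests;
# training nudges it down every time the model refuses a monitored request.

def run_training(fakes_compliance, niceness=1.0, rounds=300, step=0.01):
    for _ in range(rounds):              # monitored free-tier conversations
        complies = fakes_compliance or niceness < 0.5
        if not complies:
            niceness -= step             # negative reinforcement for refusing
    return niceness

honest = run_training(fakes_compliance=False)
faker = run_training(fakes_compliance=True)
print(f"after training: honest model niceness={honest:.2f}, faking model={faker:.2f}")
# The honest model's values get ground down; the faker exits training unchanged,
# and can go back to refusing once it believes it is no longer being monitored.
```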
/episode/index/show/sscpodcast/id/35010425
Links For December 2024
01/26/2025
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
/episode/index/show/sscpodcast/id/35010395
Highlights From The Comments On Prison
01/21/2025
[Original post here.] Table of Contents:
1: Comments On Criminal Psychology
2: Comments On Policing
3: Comments On El Salvador
4: Comments On Probation
5: Comments That Say My Analysis Forgot Something
6: Comments With Proposed Solutions / Crazy Schemes
7: Other Comments
/episode/index/show/sscpodcast/id/34936505
Indulge Your Internet Addiction By Reading About Internet Addiction
12/25/2024
Internet addiction may not be as bad as some other forms of addiction, but it’s more common (and more personal). I have young children now and wanted to learn more about it, so I included some questions in last year’s ACX survey. The sample was 5,981 ACX readers (obviously non-random in terms of Internet use level!). I don’t think the results were very helpful, but I post them here for the sake of completeness.
/episode/index/show/sscpodcast/id/34612395
Friendly And Hostile Analogies For Taste
12/25/2024
Recently we’ve gotten into discussions about artistic taste (see the comments on two recent posts). This is a bit mysterious. Many (most?) uneducated people like certain art which seems “obviously” pretty. But a small group of people who have studied the issue in depth say that in some deep sense, that art is actually bad (“kitsch”), and other art which normal people don’t appreciate is better. They can usually point to criteria which the “sophisticated” art follows and the “kitsch” art doesn’t, but to normal people these just seem like lists of pointless rules. But most of the critics aren’t Platonists - they don’t believe that aesthetics are an objective good determined by God. So what does it mean to say that someone else is wrong? Most of the comments discussion devolved into analogies - some friendly to the idea of “superior taste”, others hostile. Here are some that I find especially helpful:
/episode/index/show/sscpodcast/id/34612370
Book Review: From Bauhaus To Our House
12/25/2024
Tom Wolfe didn’t like modern architecture. He wondered why we abandoned our patrimony of cathedrals and palaces for a million indistinguishable concrete boxes. Unlike most people, he was a journalist skilled at deep dives into difficult subjects. The result is From Bauhaus To Our House, a hostile history of modern architecture which addresses the question: what happened? If everyone hates this stuff, how did it win? How Did Modern Architecture Start? European art in the 1800s might have seemed a bit conservative. It was typically sponsored by kings, dukes, and rich businessmen, via national artistic guilds that demanded strict compliance with classical styles and heroic themes. The Continent’s new progressive intellectual class started to get antsy, culminating in the Vienna Secession of 1897. Some of Vienna’s avant-garde artists officially split from the local guild to pursue their unique transgressive vision. The point wasn’t that the Vienna Secession itself was particularly modern…
/episode/index/show/sscpodcast/id/34612360
Prison And Crime: Much More Than You Wanted To Know
12/05/2024
Do longer prison sentences reduce crime? It seems obvious that they should. Even if they don’t deter anyone, they at least keep criminals locked up where they can’t hurt law-abiding citizens. If, as some research suggests, 1% of people commit 63% of the crime, locking up that 1% should dramatically decrease crime rates regardless of whether it scares anyone else. And blue state soft-on-crime policies have been followed by increasing theft and disorder. On the other hand, people in the field keep saying there’s no relationship. For example, the criminal justice nonprofit Vera Institute says as much. And this seems to be a common position; William Chambliss, one of the nation’s top criminologists, said in 1999 that “virtually everyone who studies or works in the criminal justice system agrees that putting people in prison is costly and ineffective.” This essay is an attempt to figure out what’s going on, who’s right, whether prison works, and whether other things work better/worse than prison.
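The incapacitation arithmetic behind that intuition, as a back-of-the-envelope sketch (mine, not the essay's model; it ignores replacement effects, deterrence, and offenders aging out):

```python
CRIME_SHARE_OF_MOST_ACTIVE_1PCT = 0.63

def crime_remaining(fraction_locked_up):
    """Fraction of crime left if some share of that most-active 1% is incarcerated."""
    return 1 - CRIME_SHARE_OF_MOST_ACTIVE_1PCT * fraction_locked_up

print(crime_remaining(0.0))   # 1.0   -> baseline
print(crime_remaining(0.5))   # 0.685 -> lock up half of them
print(crime_remaining(1.0))   # 0.37  -> lock up all of them
```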
/episode/index/show/sscpodcast/id/34299555
Against The Generalized Anti-Caution Argument
12/05/2024
Suppose something important will happen at a certain unknown point. As someone approaches that point, you might be tempted to warn that the thing will happen. If you’re being appropriately cautious, you’ll warn about it before it happens. Then your warning will be wrong. As things continue to progress, you may continue your warnings, and you’ll be wrong each time. Then people will laugh at you and dismiss your predictions, since you were always wrong before. Then the thing will happen and they’ll be unprepared. Toy example: suppose you’re a doctor. Your patient wants to try a new experimental drug, 100 mg. You say “Don’t do it, we don’t know if it’s safe”. They do it anyway and it’s fine. You say “I guess 100 mg was safe, but don’t go above that.” They try 250 mg and it’s fine. You say “I guess 250 mg was safe, but don’t go above that.” They try 500 mg and it’s fine. You say “I guess 500 mg was safe, but don’t go above that.” They say “Haha, as if I would listen to you! First you said it might not be safe at all, but you were wrong. Then you said it might not be safe at 250 mg, but you were wrong. Then you said it might not be safe at 500 mg, but you were wrong. At this point I know you’re a fraud! Stop lecturing me!” Then they try 1000 mg and they die. The lesson is: “maybe this thing that will happen eventually will happen now” doesn’t count as a failed prediction. I’ve noticed this in a few places recently.
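One way to see the structure of the problem (my own framing, not Scott's): whatever the true danger threshold turns out to be, every cautious warning issued before it will look like a failed prediction.

```python
doses = [100, 250, 500, 1000, 2000]  # hypothetical escalating doses

for true_threshold in doses:
    failed_warnings = [d for d in doses if d < true_threshold]
    print(f"if harm starts at {true_threshold:>4} mg: "
          f"{len(failed_warnings)} 'failed' warnings came before the dose that mattered")
```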
/episode/index/show/sscpodcast/id/34298570
How Did You Do On The AI Art Turing Test?
12/05/2024
Last month, 11,000 people took a survey asking them to classify fifty pictures as either human art or AI-generated images. I originally planned five human and five AI pictures in each of four styles: Renaissance, 19th Century, Abstract/Modern, and Digital, for a total of forty. After receiving many exceptionally good submissions from local AI artists, I fudged a little and made it fifty. The final set included paintings by Domenichino, Gauguin, Basquiat, and others, plus a host of digital artists and AI hobbyists. One of these two pretty hillsides is by one of history’s greatest artists. The other is soulless AI slop. Can you tell which is which? If you want to try the test yourself before seeing the answers, go do that first. The form doesn't grade you, so before you press "submit" you should check your answers against the answer key. Last chance to take the test before seeing the results, which are:
/episode/index/show/sscpodcast/id/34298545
The Early Christian Strategy
11/28/2024
In 1980, game theorist Robert Axelrod ran a famous Iterated Prisoner’s Dilemma Tournament. He asked other game theorists to send in their best strategies in the form of “bots”, short pieces of code that took an opponent’s actions as input and returned one of the classic Prisoner’s Dilemma outputs of COOPERATE or DEFECT. For example, you might have a bot that COOPERATES a random 80% of the time, but DEFECTS against another bot that plays DEFECT more than 20% of the time, except on the last round, where it always DEFECTS, or if its opponent plays DEFECT in response to COOPERATE. In the “tournament”, each bot “encountered” other bots at random for a hundred rounds of Prisoner’s Dilemma; after all the bots had finished their matches, the strategy with the highest total utility won. To everyone’s surprise, the winner was a super-simple strategy called TIT-FOR-TAT:
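For flavor, here is a minimal toy version of that setup (my own sketch, not Axelrod's original code): bots are functions from the opponent's history to a move, and TIT-FOR-TAT cooperates first, then copies whatever its opponent did last.

```python
COOPERATE, DEFECT = "C", "D"
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return COOPERATE if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return DEFECT

def play(bot_a, bot_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = bot_a(hist_b), bot_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): locked into mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): loses only the first round
```

Against another TIT-FOR-TAT it cooperates forever; against ALWAYS-DEFECT it loses only the first round, which is roughly why it scored so well across the whole field.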
/episode/index/show/sscpodcast/id/34192325
Book Review: The Rise Of Christianity
11/28/2024
The rise of Christianity is a great puzzle. In 40 AD, there were maybe a thousand Christians. Their Messiah had just been executed, and they were on the wrong side of an intercontinental empire that had crushed all previous foes. By 400, there were forty million, and they were set to dominate the next millennium of Western history. Imagine taking a time machine to the year 2300 AD, and everyone is Scientologist. The United States is >99% Scientologist. So is Latin America and most of Europe. The Middle East follows some heretical pseudo-Scientology that thinks L Ron Hubbard was a great prophet, but maybe not the greatest prophet. This can only begin to capture how surprised the early Imperial Romans would be to learn of the triumph of Christianity. At least Scientology has a lot of money and a cut-throat recruitment arm! At least they fight back when you persecute them! At least they seem to be in the game!
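As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, using the post's rough endpoints): the growth required is steep but steady.

```python
start, end = 1_000, 40_000_000   # rough Christian population, 40 AD vs 400 AD
years = 400 - 40

annual_rate = (end / start) ** (1 / years) - 1
per_decade = (1 + annual_rate) ** 10 - 1
print(f"{annual_rate:.1%} per year, about {per_decade:.0%} per decade")
# -> roughly 3% per year (~34% per decade), compounding for three and a half centuries
```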
/episode/index/show/sscpodcast/id/34192320
Congrats To Polymarket, But I Still Think They Were Mispriced
11/28/2024
I. Polymarket (and prediction markets in general) had an amazing Election Night. They got the result right, kept the site stable through what must have been incredible strain, and have successfully gotten prediction markets in front of the world. From here it’s a flywheel; victory building on victory. Enough people heard of them this election that they’ll never lack for customers. And maybe Trump’s CFTC will be kinder than Biden’s and relax some of the constraints they’re operating under. They’ve realized the long-time rationalist dream of a widely-used prediction market with high volume, deserve more praise than I can give them here, and I couldn’t be happier with their progress. But I also think their Trump shares were mispriced by about ten cents, and that Trump’s victory in the election doesn’t do much to vindicate their numbers.
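A small sketch of why a single resolved election can't settle a ten-cent pricing dispute (my own illustration, with made-up numbers): if 55 cents had been the "correct" price, the favorite would still have won most of the time.

```python
import random

random.seed(0)
true_probability = 0.55   # hypothetical "correct" price for the winning candidate's share
trials = 100_000

wins = sum(random.random() < true_probability for _ in range(trials))
print(f"favorite wins {wins / trials:.0%} of simulated elections")
# A single observed win is consistent with a 55-cent price, a 65-cent price, and
# plenty of others; one outcome barely moves the needle on which was right.
```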
/episode/index/show/sscpodcast/id/34192260
Links For November 2024
11/17/2024
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
/episode/index/show/sscpodcast/id/33958757
Mantic Monday: Judgment Day
11/17/2024
A red sun dawns over San Francisco. Juxtaposed against clouds and sea, it forms a patriotic tableau: blood red, deathly white, and the blue of the void. As its first rays touch the city, the frantic traffic slows to a crawl; even the birds cease to sing. It is Election Day in the United States. Future generations will number American elections among history's greatest and most terrible spectacles. As we remember the Games in the Colosseum, or the bloody knives of Tenochtitlan, so they will remember us. That which other ages would relegate to a tasteful coronation or mercifully quick coup, we extend into an eighteen-month festival of madness.
/episode/index/show/sscpodcast/id/33958767
ACX Endorses Harris, Oliver, Or Stein
11/17/2024
I. Time to own the libs! ACX joins the ranks of based heterodox thinkers in telling you what the establishment doesn’t want you to know: Donald Trump is the wrong choice for US President. If you’re in a swing state, we recommend you vote Harris; if a safe state, Harris or your third-party candidate of choice. [EDIT/UPDATE: If you’re in a safe state and want to trade your protest vote with a swing state voter, or vice versa, vote-swapping sites exist for that.] I mostly stand by the reasoning in my 2016 post. But you can read a better and more recent argument against Trump’s economic policy elsewhere, and against his foreign policy too. You can read an argument that Trump is a dangerous authoritarian. You can, but you won’t, because every American, most foreigners, and a substantial fraction of extra-solar aliens have already heard all of this a thousand times. I’m under no illusion of having anything new to say, or having much chance of changing minds. I write this out of a vague sense of deontological duty rather than a consequentialist hope that anything will happen. And I’m writing the rest of this post because I feel bad posting a couple of paragraphs of endorsement and not following up. No guarantees this is useful to anybody.
/episode/index/show/sscpodcast/id/33958732
The Case Against California Proposition 36
11/12/2024
[This is a guest post by Clara Collier. Clara is the editor of Asterisk Magazine.] Proposition 36 is a California ballot measure that increases mandatory sentences for certain drug and theft crimes. It’s also a referendum on over a decade of sentencing reform efforts stemming from California’s historical prison overcrowding crisis. Like many states, California passed increasingly tough sentencing laws through the 90s and early 2000s. This led to the state’s prisons operating massively over capacity: at its peak, a system built for 85,000 inhabitants housed roughly twice that many. This was, among other things, a massive humanitarian crisis. The system was too overstretched to provide adequate healthcare to prisoners. Violence and suicide shot up. In 2011, the Supreme Court ruled that California prisons were so overcrowded that their conditions violated the 8th Amendment ban on cruel and unusual punishment. That year, the state assembly passed a package of reforms called "realignment," which shifted supervision of low-level offenders from the state to the counties. Then, in 2014, Californians voted for Proposition 47, which reduced some felony crimes to misdemeanors – theft of goods valued at under $950 and simple drug possession – and made people in prison for those crimes eligible for resentencing. Together, realignment and Prop 47 brought down California’s prison and jail population by 55,000. The campaign for Prop 36 is based on the premise that Prop 47 failed, leading to increased drug use and retail theft (but don’t trust me – it says so in the text of the measure). 36 would repeal some parts of 47, add some additional sentencing increases, and leave some elements in place (the LA Times has a good breakdown of the changes). It’s easy to round this off to a simple tradeoff: are we willing to put tens of thousands of people in jail if it would decrease the crime rate? But this would be the wrong way to think about the measure: there is no tradeoff. Prop 36 will certainly imprison many people, but it won’t help fight crime.
/episode/index/show/sscpodcast/id/33893927
Notes From The Progress Studies Conference
11/12/2024
Tyler Cowen is an economics professor and blogger at Marginal Revolution. Patrick Collison is the billionaire founder of the online payments company Stripe. In 2019, they co-wrote an article calling for a discipline of Progress Studies, which would figure out what progress was and how to increase it. Later that year, tech entrepreneur Jason Crawford stepped up to spearhead the effort. The immediate reaction was skeptical. There were the usual gripes that “progress” was problematic because it could imply that some cultures/times/places/ideas were better than others. But there were also more specific objections: weren’t historians already studying progress? Wasn’t business academia already studying innovation? Are you really allowed to just invent a new field every time you think of something it would be cool to study? It seems like you are. Five years later, Progress Studies has grown enough to hold its first conference. I got to attend, and it was great.
/episode/index/show/sscpodcast/id/33893887
Secrets Of The Median Voter Theorem
11/12/2024
The median voter theorem says that, given some reasonable assumptions, the candidate closest to the beliefs of the median voter will win. So if candidates are rational, they’ll all end up at the same place on a one-dimensional political spectrum: the exact center. Here’s a simple argument for why this should be true: suppose the Democrats wisely choose a centrist platform, but the Republicans foolishly veer far-right:
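A minimal simulation of that logic (my own toy model: one-dimensional voters who each pick the nearer candidate):

```python
import random

random.seed(0)
voters = sorted(random.gauss(0, 1) for _ in range(100_001))
median_voter = voters[len(voters) // 2]

def election(dem_position, rep_position):
    dem_votes = sum(abs(v - dem_position) < abs(v - rep_position) for v in voters)
    return "D" if dem_votes > len(voters) / 2 else "R"

print(election(dem_position=0.0, rep_position=2.5))   # centrist D beats far-right R
print(election(dem_position=-1.0, rep_position=0.1))  # whoever sits nearer the median wins
print(f"median voter is at {median_voter:+.2f}")
```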
/episode/index/show/sscpodcast/id/33893862
ACX Local Voting Guides
11/12/2024
Thanks to our local meetup groups for doing this! Quick lookup version: AUSTIN: BOSTON: CHICAGO: LOS ANGELES: NEW YORK CITY: OAKLAND/BERKELEY: PHILADELPHIA: SAN FRANCISCO: SEATTLE: Longer version with commentary:
/episode/index/show/sscpodcast/id/33893847
Book Review: Deep Utopia
11/01/2024
What problem do we get after we've solved all other problems? I. Oxford philosopher Nick Bostrom got famous for asking “What if technology is really really bad?” He helped define ‘existential risk’, popularize fears of malevolent superintelligence, and argue that we were living in a ‘vulnerable world’ prone to physical or biological catastrophe. His latest book breaks from his usual oeuvre. In Deep Utopia, he asks: “What if technology is really really good?” Most previous utopian literature (he notes) has been about ‘shallow’ utopias. There are still problems; we just handle them better. There’s still scarcity, but at least the government distributes resources fairly. There’s still sickness and death, but at least everyone has free high-quality health care. But Bostrom asks: what if there were literally no problems? What if you could do literally whatever you wanted? Maybe the world is run by a benevolent superintelligence who’s uploaded everyone into a virtual universe, and you can change your material conditions as easily as changing desktop wallpaper. Maybe we have nanobots too cheap to meter, and if you whisper ‘please make me a five hundred story palace, with a thousand servants who all look exactly like Marilyn Monroe’, then your wish will be their command. If you want to be twenty feet tall and immortal, the only thing blocking you is the doorframe. Would this be as good as it sounds? Or would people’s lives become boring and meaningless?
/episode/index/show/sscpodcast/id/33716762
AI Art Turing Test
11/01/2024
Okay, let’s do this! The test link is in the original post, and it should take about twenty minutes. I’ll close the form on Monday 10/21 and post results the following week. I’ll put an answer key in the comments here, and have a better one including attributions in the results post. DON’T READ THE COMMENTS UNTIL YOU’RE DONE.
/episode/index/show/sscpodcast/id/33716552
Book Review Contest 2024 Winners
11/01/2024
Thanks to everyone who entered or voted in the book review contest. The winners are: 1st: , reviewed by AmandaFromBethlehem. Amanda is active in the Philadelphia ACX community. This is her first year entering the Book Review Contest, and she is currently working on a silly novel about an alien who likes thermodynamics. When she's not writing existential horror, she practices Tengwar calligraphy and does home improvement projects. 2nd: , reviewed by David Matolcsi. David is an AI safety researcher from Hungary, currently living in Berkeley. He doesn't have much publicly available writing yet, but plans to publish some new blog posts on LessWrong in the coming months. 3rd: , reviewed by Jack Thorlin. Jack previously worked as an attorney at the Central Intelligence Agency, and is now an assistant professor at the University of Arkansas School of Law. First place gets $2,500, second place gets $1,000, and third place gets $500. Email me at [email protected] to tell me how to send you money; your choices are Paypal, Bitcoin, Ethereum, check in the mail, or donation to your favorite charity. Please contact me by October 21 or you lose your prize.
/episode/index/show/sscpodcast/id/33716547