Astral Codex Ten Podcast
Original post. Table of contents below. I want to especially highlight three things. First, Saar wrote a response to my post (and to zoonosis arguments in general). I’ve put a summary and some of my responses at 1.11, but you can read the full post. Second, I kind of made fun of Peter for giving some very extreme odds, and I mentioned they were sort of trolling, but he’s convinced me they were 100% trolling. Many people held these poorly-done calculations against Peter, so I want to make it clear that’s my fault for misrepresenting it. See 3.1 for more details. Third, in my original post,...
Links For April 2024
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Spring Meetups Everywhere 2024
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times. This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen. You can find the list below, in the following order: Africa & Middle East Asia-Pacific (including Australia) ...
Practically-A-Book Review: Rootclaim $100,000 Lab Leak Debate
Saar Wilf is an Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias. His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it. But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver,...
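At its core, the evidence-weighing that Rootclaim formalizes is just Bayes’ rule applied repeatedly. A minimal sketch with invented numbers - this is purely illustrative, not Rootclaim’s actual model:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Toy example with made-up numbers: update one hypothesis on one piece of evidence.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after seeing evidence."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Start at 50%, then observe evidence 4x likelier if the hypothesis is true.
posterior = bayes_update(prior=0.5, likelihood_if_true=0.8, likelihood_if_false=0.2)
print(round(posterior, 2))  # 0.8
```

Chaining such updates over many pieces of evidence is what distinguishes an explicit Bayesian analysis from "normal reasoning while muttering Bayes".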
In Continued Defense Of Non-Frequentist Probabilities
It’s every blogger’s curse to return to the same arguments again and again. Matt Yglesias has to keep writing “maybe we should do popular things instead of unpopular ones”, Freddie de Boer has to keep writing “the way culture depicts mental illness is bad”, and for whatever reason, I keep getting in fights about whether you can have probabilities for non-repeating, hard-to-model events. For example: What is the probability that Joe Biden will win the 2024 election? What is the probability that people will land on Mars before 2050? What is the probability that AI will destroy...
The Mystery Of Internet Survey IQs
I have data from two big Internet surveys. Both asked questions about IQ: The average LessWronger reported their IQ as 138. The average ClearerThinking user reported their IQ as 130. These are implausibly high. Only 1/200 people has an IQ of 138 or higher. 1/50 people have IQ 130, but the ClearerThinking survey used crowdworkers (eg Mechanical Turk) who should be totally average. Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don’t look like lies. Both surveys asked for SAT scores, which are known to correspond to...
In Partial Grudging Defense Of Some Aspects Of Therapy Culture
The Atlantic’s critique of polyamory and another recent piece shared the same villain - “therapy culture”, the idea that you should prioritize “finding your true self” and make drastic changes if your current role doesn’t seem “authentically you”. A friend recently suggested a defense of this framework, which surprised me enough that I now relay it to you.
Verses On Five People Being Killed By A Falling Package Of Foreign Aid
(inspired by )
Mantic Monday 3/11/24
Robots of prediction, predictions of robots
Spring Meetups Everywhere 2024 - Call For Organizers
There are ACX meetup groups all over the world. Lots of people are vaguely interested, but don't try them out until I make a big deal about it on the blog. Since learning that, I've tried to make a big deal about it on the blog twice annually, and it's that time of year again. If you're willing to organize a meetup for your city, please .
https://astralcodexten.substack.com/p/mantic-monday-mantic-matt-y
The current interest in forecasting grew out of Iraq-War-era exasperation with the pundit class. Pundits were constantly saying stuff, like "Saddam definitely has WMDs, trust me, I'm an expert", then getting proven wrong, then continuing to get treated as authorities and thought leaders. Occasionally they would apologize, but they'd be back to telling us what we Had To Believe the next week.
You don't want a rule that if a pundit ever gets anything wrong, we stop trusting them forever. Warren Buffett gets some things wrong, Zeynep Tufekci gets some things wrong, even Nostradamus would have gotten some things wrong if he'd said anything clearly enough to pin down what he meant. The best we can hope for is people with a good win-loss record. But how do you measure win-loss record? Lots of people worked on this (especially Philip Tetlock) and we ended up with the kind of probabilistic predictions a lot of people use now.
But not pundits. We never did get the world where pundits, bloggers, and other commentators post predictions clearly in a way where they can check up on them later.
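The standard way to grade those probabilistic predictions - the scoring rule most associated with Tetlock-style forecasting tournaments - is the Brier score: the mean squared difference between stated probabilities and what actually happened, where lower is better. A minimal sketch with invented forecasts:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Lower is better; always answering 0.5 scores 0.25, a perfect forecaster scores 0.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident-and-right forecaster beats a pure hedger on these invented events.
confident = brier_score([0.9, 0.8, 0.1], [1, 1, 0])
hedger = brier_score([0.5, 0.5, 0.5], [1, 1, 0])
print(round(confident, 3), round(hedger, 3))  # 0.02 0.25
```

This is what a pundit win-loss record would look like if pundits posted checkable probabilities: one number that rewards being both confident and correct.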