Astral Codex Ten Podcast
Original post. Table of contents below. I want to especially highlight three things. First, Saar wrote a response to my post (and to zoonosis arguments in general). I’ve put a summary and some of my responses at 1.11, but you can read the full post. Second, I kind of made fun of Peter for giving some very extreme odds, and I mentioned they were sort of trolling, but he’s convinced me they were 100% trolling. Many people held these poorly-done calculations against Peter, so I want to make it clear that’s my fault for misrepresenting them. See 3.1 for more details. Third, in my original post,...
Links For April 2024
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Spring Meetups Everywhere 2024
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times. This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen. You can find the list below, in the following order: Africa & Middle East Asia-Pacific (including Australia) ...
Practically-A-Book Review: Rootclaim $100,000 Lab Leak Debate
Saar Wilf is an Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias. His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it. But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver,...
In Continued Defense Of Non-Frequentist Probabilities
It’s every blogger’s curse to return to the same arguments again and again. Matt Yglesias has to keep writing “maybe we should do popular things instead of unpopular ones”, Freddie de Boer has to keep writing “the way culture depicts mental illness is bad”, and for whatever reason, I keep getting in fights about whether you can have probabilities for non-repeating, hard-to-model events. For example: What is the probability that Joe Biden will win the 2024 election? What is the probability that people will land on Mars before 2050? What is the probability that AI will destroy...
The Mystery Of Internet Survey IQs
I have data from two big Internet surveys, one from LessWrong and one from ClearerThinking. Both asked questions about IQ: The average LessWronger reported their IQ as 138. The average ClearerThinking user reported their IQ as 130. These are implausibly high. Only 1/200 people have an IQ of 138 or higher. 1/50 people have an IQ of 130 or higher, but the ClearerThinking survey used crowdworkers (eg Mechanical Turk) who should be totally average. Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don’t look like lies. Both surveys asked for SAT scores, which are known to correspond to...
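The "1/200" and "1/50" figures follow from modeling IQ as a normal distribution with mean 100 and standard deviation 15. A quick sketch of the arithmetic, using only the standard library (`iq_tail` is an illustrative helper, not from the post):

```python
from math import erfc, sqrt

def iq_tail(iq, mean=100.0, sd=15.0):
    """P(IQ >= iq), modeling IQ as normal with mean 100 and SD 15."""
    z = (iq - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

# Under this model, IQ >= 138 is roughly a 1-in-180 result (the post's
# 1-in-200 corresponds to ~IQ 139), and IQ >= 130 is roughly 1 in 44.
for iq in (138, 130):
    print(f"IQ >= {iq}: about 1 in {1 / iq_tail(iq):.0f}")
```

Either way, a population whose *average* member sits at the 99th-plus percentile is the puzzle the post is pointing at.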
In Partial Grudging Defense Of Some Aspects Of Therapy Culture
The Atlantic’s critique of polyamory and another recent piece shared the same villain - “therapy culture”, the idea that you should prioritize “finding your true self” and make drastic changes if your current role doesn’t seem “authentically you”. A friend recently suggested a defense of this framework, which surprised me enough that I now relay it to you.
Verses On Five People Being Killed By A Falling Package Of Foreign Aid
(inspired by )
Mantic Monday 3/11/24
Robots of prediction, predictions of robots
Spring Meetups Everywhere 2024 - Call For Organizers
There are ACX meetup groups all over the world. Lots of people are vaguely interested, but don't try them out until I make a big deal about it on the blog. Since learning that, I've tried to make a big deal about it on the blog twice annually, and it's that time of year again. If you're willing to organize a meetup for your city, please .
https://astralcodexten.substack.com/p/mantic-monday-scoring-rule-controversy
Metaculus scoring rule controversy
Zvi considered using some Metaculus markets for his weekly coronavirus roundup, but was turned off by the scoring rules.
Ross Rheingans-Yoo writes about the issue here. Everyone agrees Metaculus’ scoring rule is “proper”, a technical term meaning that it correctly incentivizes you to report the probability you actually believe. Zvi and Ross’s objection is that it doesn’t correctly incentivize you about whether to bet at all, or how much effort to put into betting.
For example, on many questions, you can make guaranteed-positive bets - you’ll gain points on the prediction even if you were maximally wrong. If you were trying to maximize your Metaculus points, you would bet on all of these questions. If you were trying to maximize your Metaculus points in a limited amount of time, you might even bet on them without investigating at all. The person who spends one second picking a random number on a thousand questions will get more points than someone who spends an hour researching a really good answer to one question.
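A toy model makes the incentive gap concrete. The scoring rule below is an illustrative affine log score, not Metaculus’ actual formula: it is proper (your expected points are maximized by reporting your true belief), yet its constant offset means even a maximally wrong forecast earns positive points, so blind betting on every question still accumulates points.

```python
import math

def points(p, outcome):
    """Toy Metaculus-style score: an affine log score (illustrative only)."""
    p = min(max(p, 0.01), 0.99)      # platforms clamp extreme forecasts
    q = p if outcome else 1 - p      # probability assigned to what happened
    return 50 * (math.log2(q) + 7)   # +7 offset keeps every score positive

# Properness: if the true probability is 0.7, reporting 0.7 maximizes
# expected points over any other report.
def expected(report, truth=0.7):
    return truth * points(report, True) + (1 - truth) * points(report, False)

best_report = max(range(1, 100), key=lambda r: expected(r / 100)) / 100
print(best_report)                   # honesty is optimal

# But even a maximally wrong forecast - 99% on the thing that didn't
# happen - still earns positive points under this rule, so betting
# blindly on a thousand questions beats researching one carefully.
print(points(0.99, False) > 0)
```

The rule is honest about *which* probability to report, but silent about *whether* reporting at all is worth it - which is exactly Zvi and Ross’s complaint.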