Astral Codex Ten Podcast
One of the most common arguments against AI safety is: Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen. I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn’t more like those?” The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument. The first hundred times this happened,...
Contra Hanson On Medical Effectiveness
Robin Hanson of more or less believes medicine doesn’t work [EDIT: see his response where he says this is an inaccurate summary of his position. Further chain of responses and ] This is a strong claim. It would be easy to round Hanson’s position off to something weaker, like “extra health care isn’t valuable on the margin”. This is how most people interpret the studies he cites. Still, I think his current, actual position is that medicine doesn’t work. For example, :
Ye Olde Bay Area House Party
[previously in series: , , , , ]

When that April with his sunlight fierce
The rainy winter of the coast doth pierce
And filleth every spirit with such hale
As horniness engenders in the male
Then folk go out in crop tops and in shorts
Their bodies firm from exercise and sports
And men gaze at the tall girls and the shawties
And San Franciscans long to go to parties.
Updates on Lumina Probiotic
Lumina, the genetically modified anti-tooth-decay bacterium that , is back in the news after lowering its price from $20,000 to and getting endorsements from , , and (as well as anti-endorsements from and ). A few points that have come up:
Highlights From The Comments On The Lab Leak Debate
Original post . Table of contents below. I want to especially highlight three things. First, Saar wrote a response to my post (and to zoonosis arguments in general). I’ve put a summary and some of my responses at 1.11, but you can read the full post . Second, I kind of made fun of Peter for giving some very extreme odds, and I mentioned they were sort of trolling, but he’s convinced me they were 100% trolling. Many people held these poorly-done calculations against Peter, so I want to make it clear that’s my fault for misrepresenting it. See 3.1 for more details. Third, in my original post,...
Links For April 2024
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Spring Meetups Everywhere 2024
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times. This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen. You can find the list below, in the following order: Africa & Middle East Asia-Pacific (including Australia) ...
Practically-A-Book Review: Rootclaim $100,000 Lab Leak Debate
Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias. His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it. But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver,...
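The Bayesian rule the excerpt refers to is simple to state in code. This is not Rootclaim's actual model, just a minimal sketch of the underlying update rule, with made-up illustrative numbers:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) is expanded over H and not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a 50% prior, and evidence that is
# four times likelier if the hypothesis is true than if false.
print(posterior(0.5, 0.8, 0.2))  # -> 0.8
```

Everyone agrees with this formula in the abstract; the dispute the post describes is about whether the likelihoods can be estimated well enough for real-world questions.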
In Continued Defense Of Non-Frequentist Probabilities
It’s every blogger’s curse to return to the same arguments again and again. Matt Yglesias has to keep writing “maybe we should do popular things instead of unpopular ones”, Freddie de Boer has to keep writing “the way culture depicts mental illness is bad”, and for whatever reason, I keep getting in fights about whether you can have probabilities for non-repeating, hard-to-model events. For example: What is the probability that Joe Biden will win the 2024 election? What is the probability that people will land on Mars before 2050? What is the probability that AI will destroy...
The Mystery Of Internet Survey IQs
I have data from two big Internet surveys, and . Both asked questions about IQ: The average LessWronger reported their IQ as 138. The average ClearerThinking user reported their IQ as 130. These are implausibly high. Only 1/200 people has an IQ of 138 or higher. 1/50 people have IQ 130, but the ClearerThinking survey used crowdworkers (eg Mechanical Turk) who should be totally average. Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don’t look like lies. Both surveys asked for SAT scores, which are known to correspond to...
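As a rough sanity check of those rarity figures (assuming the standard IQ scale, normally distributed with mean 100 and standard deviation 15):

```python
from statistics import NormalDist

# Standard IQ scale: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

for score in (130, 138):
    tail = 1 - iq.cdf(score)  # fraction of people at or above this score
    print(f"IQ >= {score}: about 1 in {round(1 / tail)}")
```

An IQ of 130 is two standard deviations above the mean (about 1 in 44, close to the post's 1/50), and 138 is about 2.53 standard deviations up (roughly 1 in 180, close to the post's 1/200).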
2020 Predictions: Calibration Results (https://astralcodexten.substack.com/p/2020-predictions-calibration-results)
At the beginning of every year, I make predictions. At the end of every year, I score them (this year I’m very late). Here are 2014, 2015, 2016, 2017, 2018, and 2019.
And here are the predictions I made for 2020. Some predictions are redacted because they involve my private life or the lives of people close to me. Usually I use strikethrough for things that didn’t happen, but since Substack doesn’t let me strikethrough text or change its color or do anything interesting, I’ve had to turn the ones that didn’t happen into links. Italicized predictions are getting thrown out because they were confusing or conditional on something that didn’t happen; I can’t decide if they’re true or not. All of these judgments were as of December 31 2020, not as of now.
(Remember, link means something that didn’t happen, not something I was wrong about. We have a debate every year over whether 50% predictions are meaningful in this paradigm; feel free to continue it.)
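Checking calibration amounts to grouping predictions by stated confidence and asking how often each group came true. This is not Scott's actual grading script, just a toy sketch with hypothetical data:

```python
from collections import defaultdict

# Hypothetical (stated confidence, came_true) pairs.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False),
    (0.5, True), (0.5, False),
]

buckets = defaultdict(list)
for conf, outcome in predictions:
    buckets[conf].append(outcome)

for conf in sorted(buckets):
    outcomes = buckets[conf]
    rate = sum(outcomes) / len(outcomes)
    print(f"{conf:.0%} bucket: {sum(outcomes)}/{len(outcomes)} came true ({rate:.0%})")
```

A well-calibrated forecaster's 90% bucket comes true about 90% of the time. The debate about 50% predictions arises because flipping how such a prediction is worded flips its outcome, so the 50% bucket's hit rate is partly an artifact of phrasing.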