Now I Really Won That AI Bet

Astral Codex Ten Podcast

Release Date: 07/11/2025


In June 2022, I bet a commenter $100 that AI would master image compositionality by June 2025.

DALL-E2 had just come out, showcasing the potential of AI art. But it couldn’t follow complex instructions; its images only matched the “vibe” of the prompt. For example, here were some of its attempts at “a red sphere on a blue cube, with a yellow pyramid on the right, all on top of a green table”.

At the time, I wrote:

I’m not going to make the mistake of saying these problems are inherent to AI art. My guess is a slightly better language model would solve most of them…for all I know, some of the larger image models have already fixed these issues. These are the sorts of problems I expect to go away with a few months of future research.

Commenters objected that this was overly optimistic. AI was just a pattern-matching “stochastic parrot”. It would take a deep understanding of grammar to get a prompt exactly right, and that would require some entirely new paradigm beyond LLMs. For example, from Vitor:

Why are you so confident in this? The inability of systems like DALL-E to understand semantics in ways requiring an actual internal world model strikes me as the very heart of the issue. We can also see this exact failure mode in the language models themselves. They only produce good results when the human asks for something vague with lots of room for interpretation, like poetry or fanciful stories without much internal logic or continuity.

Not to toot my own horn, but two years ago you were naively saying we'd have GPT-like models scaled up several orders of magnitude (100T parameters) right about now (https://readscottalexander.com/posts/ssc-the-obligatory-gpt-3-post#comment-912798).

I'm registering my prediction that you're being equally naive now. Truly solving this issue seems AI-complete to me. I'm willing to bet on this (ideas on operationalization welcome).

So we made a bet!

All right. My proposed operationalization of this is that on June 1, 2025, if either of us can get access to the best image generating model at that time (I get to decide which), or convince someone else who has access to help us, we'll give it the following prompts:

1. A stained glass picture of a woman in a library with a raven on her shoulder with a key in its mouth

2. An oil painting of a man in a factory looking at a cat wearing a top hat

3. A digital art picture of a child riding a llama with a bell on its tail through a desert

4. A 3D render of an astronaut in space holding a fox wearing lipstick

5. Pixel art of a farmer in a cathedral holding a red basketball

We generate 10 images for each prompt, just like DALL-E2 does. If at least one of the ten images has the scene correct in every particular on 3/5 prompts, I win, otherwise you do. Loser pays winner $100, and whatever the result is I announce it on the blog (probably an open thread). If we disagree, Gwern is the judge.

Some image models of the time refused to draw humans, so we agreed that robots could stand in for humans in pictures that required them.
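The scoring rule from the quoted terms is concrete enough to write down. Here is a minimal sketch (function and variable names are mine, not part of the bet; the counts come from the terms above: 10 images per prompt, at least one fully-correct image on at least 3 of 5 prompts):

```python
def scott_wins(results, prompts_needed=3):
    """Score the bet as quoted above.

    `results` maps each prompt number to a list of booleans, one per
    generated image: True if that image gets the scene correct in
    every particular. A prompt counts as solved if at least one of
    its images is fully correct; Scott wins with 3+ solved prompts.
    (Names and data layout are illustrative, not from the original.)
    """
    solved = sum(1 for images in results.values() if any(images))
    return solved >= prompts_needed

# Hypothetical outcome: prompts 1, 2, and 4 each have at least one
# fully-correct image out of ten, so this would count as a win.
example = {
    1: [False] * 9 + [True],
    2: [True] + [False] * 9,
    3: [False] * 10,
    4: [False, True] + [False] * 8,
    5: [False] * 10,
}
print(scott_wins(example))
```

Note that partial credit doesn't exist under these terms: an image with nine of ten details right scores the same as a blank canvas, which is what made the Imagen dispute below possible.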

In September 2022, I got some good results from Google Imagen and announced I had won the three-year bet in three months. Commenters yelled at me, saying that Imagen still hadn’t gotten them quite right and my victory declaration was premature. The argument blew up enough that Edwin Chen of Surge, an “RLHF and human LLM evaluation platform”, stepped in and put the question to his professional AI data labelling team. Their verdict was clear: the AI was bad and I was wrong. Rather than embarrass myself further, I agreed to wait out the full length of the bet and re-evaluate in June 2025.

The bet is now over, and official judge Gwern agrees I’ve won. Before I gloat, let’s look at the images that got us here.

https://www.astralcodexten.com/p/now-i-really-won-that-ai-bet