
What can AI teach us about the mind?

Many Minds

Release Date: 03/26/2026


Everyone is talking about AI these days. Often these conversations are about how AI might upend education, or work, or social life, or maybe civilization itself. But among cognitive scientists and psychologists the conversation inevitably drifts toward other questions. What does this latest generation of AI tell us about the human mind? Is it putting old ideas and theories to rest? Is it ushering in new ones? Will AI—in other words—also upend cognitive science?

My guests today are Dr. Mike Frank and Dr. Gary Lupyan. Mike is a Professor of Psychology at Stanford University, where his lab focuses on language learning and cognition in children. Gary is a Professor of Psychology at the University of Wisconsin–Madison, where his lab studies language and its role in augmenting human cognition. Both Gary and Mike have recently been thinking a lot about AI and how it is challenging and deepening our understanding of the human mind.

In this conversation, we talk about being interested in AI as cognitive scientists—while also being concerned about the technology as people. We discuss the linguistic abilities of frontier LLMs compared to the linguistic abilities of adult humans. We talk about a glaring "data gap" here—the fact that, even though LLMs often rival human abilities, they require orders of magnitude more data to do so. We contrast the capabilities of large language models with so-called BabyLMs. We consider the fact that, as LLMs master language, they also master other abilities—capacities for mathematical reasoning, causal understanding, possibly theory of mind, and more. And we talk about why language might be an especially potent form of input for an AI. Along the way, we touch on reference and the symbol grounding problem, the Platonic Representation Hypothesis, stimulus computability, confabulated citations, pattern matching and jabberwocky, the poverty of the stimulus argument, congenital blindness, Quine's topiary, the limits of in-principle demonstrations, the WEIRD problem, and what the astonishing sophistication of disembodied AIs might suggest about the role of bodily experience in human cognition.

Before we get to it, one small request: we’re currently running a short survey of our listeners. You can find the link in our show notes. If you have a few minutes, we'd really love your input!

Alright friends, here's my conversation with Mike Frank and Gary Lupyan. I think you'll enjoy it!

 

Notes

5:00 – For more discussion of “stochastic parrots” and other ways of framing AI systems, see our recent episode with Melanie Mitchell. For the “octopus test,” see here.

8:00 – “BabyLMs” are—in contrast to large LMs (aka LLMs)—models that are trained on a more human-scale amount of linguistic input. For more on the BabyLM community, see here.

12:00 – For broad discussion of the use of AIs as “cognitive models,” see this paper by Dr. Frank and a colleague. The same paper discusses the idea of “stimulus computability.” 

18:00 – For Dr. Frank’s “baby steps” paper, see here.

20:00 – For more on how Claude understands line breaks, see Anthropic's analysis of the issue here.

23:00 – For work on human-like grammaticality judgments in LLMs, see this paper by a team including Dr. Lupyan. 

24:00 – See here for an influential paper on, among other things, how LLMs refute the idea that syntax is unlearnable. The article titled ‘How linguistics learned to stop worrying and love the language models’ is here; Dr. Lupyan’s commentary—‘Large language models have learned to use language’—here.

29:00 – For some of Dr. Lupyan’s work on the “abstractness” of even concrete concepts, see here.

35:00 – For a classic paper on the so-called symbol grounding problem, see here. 

37:00 – For the preprint putting forth the “Platonic Representation Hypothesis,” see here.

40:30 – For more on the data gap between children and LLMs—and what accounts for it—see Dr. Frank’s paper here.

45:00 – For a sampling of Dr. Frank and colleagues’ work comparing language models to children, see here, here, and here. For more on the LEVANTE project, a collaborative effort spearheaded by Dr. Frank, see here.

48:00 – For the preprint—‘The Unreasonable Effectiveness of Pattern Matching,’ by Dr. Lupyan and a colleague—see here.

55:00 – For more on Dr. Lupyan’s perspective on the centrality of language in human cognition, see here. See also this more recent paper, considering the question in light of LLMs. 

58:00 – For our earlier episode with Dr. Marina Bedny, see here. For the recent paper by Dr. Bedny and colleagues considering their research on congenital blindness in light of LLMs, see here.

1:01:00 – For classic work on language learning in blind children, see here.

1:02:00 – For a paper by Dr. Lupyan and colleagues on “hidden” individual differences, see here.

1:03:00 – For more on “multiple realizability,” see here. For our earlier episode with Dr. Eric Turkheimer, see here.

1:09:00 – For more on the work of Dr. Frank’s collaborator, Dan Yamins, see here.

1:14:00 – See our earlier episode with Dr. M.J. Crockett for more discussion of the “WEIRD problem” around scientific uses of AI. In the same episode, we discussed how new cognitive scientific methods focus attention on questions that can be studied with those methods. 

 

Recommendations

A Mind at Play, by Jimmy Soni & Rob Goodman

On Desire, by William Irvine

Patterns, Thinking, and Cognition, by Howard Margolis

Open Encyclopedia of Cognitive Science

The BabyLM workshops/community (e.g., the entry on LLMs)

 

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd.

Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here!

We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.

For updates about the show, visit our website or follow us on Bluesky (@manymindspod.bsky.social).