Many Minds
Everyone is talking about AI these days. Often these conversations are about how AI might upend education, or work, or social life, or maybe civilization itself. But among cognitive scientists and psychologists the conversation inevitably drifts toward other questions. What does this latest generation of AI tell us about the human mind? Is it putting old ideas and theories to rest? Is it ushering in new ones? Will AI—in other words—also upend cognitive science?
My guests today are Dr. Mike Frank and Dr. Gary Lupyan. Mike is a Professor of Psychology at Stanford University, where his lab focuses on language learning and cognition in children. Gary is a Professor of Psychology at the University of Wisconsin–Madison, where his lab studies language and its role in augmenting human cognition. Both Mike and Gary have recently been thinking a lot about AI and how it is challenging and deepening our understanding of the human mind.
In this conversation, we talk about being interested in AI as cognitive scientists—while also being concerned about the technology as people. We discuss the linguistic abilities of frontier LLMs compared to the linguistic abilities of adult humans. We talk about a glaring "data gap" here—the fact that, even though LLMs often rival human abilities, they require orders of magnitude more data to do so. We contrast the capabilities of large language models with so-called BabyLMs. We consider the fact that, as LLMs master language, they also master other abilities—capacities for mathematical reasoning, causal understanding, possibly theory of mind, and more. And we talk about why language might be an especially potent form of input for an AI. Along the way, we touch on reference and the symbol grounding problem, the Platonic Representation Hypothesis, stimulus computability, confabulated citations, pattern matching and jabberwocky, the poverty of the stimulus argument, congenital blindness, Quine's topiary, the limits of in principle demonstrations, the WEIRD problem, and what the astonishing sophistication of disembodied AIs might suggest about the role of bodily experience in human cognition.
Before we get to it, one small request: we’re currently running a short survey of our listeners. You can find the link in our show notes. If you have a few minutes, we'd really love your input!
Alright friends, here's my conversation with Mike Frank and Gary Lupyan. I think you'll enjoy it!
Notes
5:00 – For more discussion of “stochastic parrots” and other ways of framing AI systems, see our recent episode with Melanie Mitchell. For the “octopus test,” see here.
8:00 – “BabyLMs” are—in contrast to large language models (LLMs)—models trained on a more human-scale amount of linguistic input. For more on the BabyLM community, see here.
12:00 – For broad discussion of the use of AIs as “cognitive models,” see this paper by Dr. Frank and a colleague. The same paper discusses the idea of “stimulus computability.”
18:00 – For Dr. Frank’s “baby steps” paper, see here.
20:00 – For more on how Claude understands line breaks, see Anthropic’s analysis of the issue here.
23:00 – For work on human-like grammaticality judgments in LLMs, see this paper by a team including Dr. Lupyan.
24:00 – See here for an influential paper on, among other things, how LLMs refute the idea that syntax is unlearnable. The article titled ‘How linguistics learned to stop worrying and love the language models’ is here; Dr. Lupyan’s commentary—‘Large language models have learned to use language’—here.
29:00 – For some of Dr. Lupyan’s work on the “abstractness” of even concrete concepts, see here.
35:00 – For a classic paper on the so-called symbol grounding problem, see here.
37:00 – For the preprint putting forth the “Platonic Representation Hypothesis,” see here.
40:30 – For more on the data gap between children and LLMs—and what accounts for it—see Dr. Frank’s paper here.
45:00 – For a sampling of Dr. Frank and colleagues’ work comparing language models to children, see here, here, and here. For more on the LEVANTE project, a collaborative effort spearheaded by Dr. Frank, see here.
48:00 – For the preprint—"The Unreasonable Effectiveness of Pattern Matching," by Dr. Lupyan and a colleague—see here.
55:00 – For more on Dr. Lupyan’s perspective on the centrality of language in human cognition, see here. See also this more recent paper, considering the question in light of LLMs.
58:00 – For our earlier episode with Dr. Marina Bedny, see here. For the recent paper by Dr. Bedny and colleagues considering their research on congenital blindness in light of LLMs, see here.
1:01:00 – For classic work on language learning in blind children, see here.
1:02:00 – For a paper by Dr. Lupyan and colleagues on “hidden” individual differences, see here.
1:03:00 – For more on “multiple realizability,” see here. For our earlier episode with Dr. Eric Turkheimer, see here.
1:09:00 – For more on the work of Dr. Frank’s collaborator, Dan Yamins, see here.
1:14:00 – See our earlier episode with Dr. M.J. Crockett for more discussion of the “WEIRD problem” around scientific uses of AI. In the same episode, we discussed how new scientific methods focus attention on questions that can be studied with those methods.
Recommendations
A Mind at Play, by Jimmy Soni & Rob Goodman
On Desire, by William Irvine
Patterns, Thinking, and Cognition, by Howard Margolis
Open Encyclopedia of Cognitive Science
The BabyLM workshops/community (e.g., the entry on LLMs)
Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd.
Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also subscribe to the Many Minds newsletter here!
We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.
For updates about the show, visit our website or follow us on Bluesky (@manymindspod.bsky.social).