The rise of machine culture
The machines are coming. Scratch that—they're already here: AIs that propose new combinations of ideas; chatbots that help us summarize texts or write code; algorithms that tell us who to friend or follow, what to watch or read. For a while the reach of intelligent machines may have seemed somewhat limited. But not anymore—or, at least, not for much longer. The presence of AI is growing, accelerating, and, for better or worse, human culture may never be the same.
My guest today is Dr. Iyad Rahwan. Iyad directs the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin. Iyad is a bit hard to categorize. He's equal parts computer scientist and artist; one magazine profile described him as "the Anthropologist of AI." Labels aside, his work explores the emerging relationships between AI, human behavior, and society. In a recent paper, Iyad and colleagues introduced a framework for understanding what they call "machine culture." The framework offers a way of thinking about the different routes through which AI may transform—is transforming—human culture.
Here, Iyad and I talk about his work as a painter and how he brings AI into the artistic process. We discuss whether AIs can make art by themselves and whether they may eventually develop good taste. We talk about how AlphaGo Zero upended the world of Go and about how LLMs might be changing how we speak. We consider what AIs might do to cultural diversity. We discuss the field of cultural evolution and how it provides tools for thinking about this brave new age of machine culture. Finally, we discuss whether any spheres of human endeavor will remain untouched by AI influence.
Before we get to it, a humble request: If you're enjoying the show—and it seems that many of you are—we would be ever grateful if you could let the world know. You might do this by leaving a rating or review on Apple Podcasts, or maybe a comment on Spotify. You might do this by giving us a shout-out on the social media platform of your choice. Or, if you prefer less algorithmically mediated avenues, you might do this just by telling a friend about us face-to-face. We're hoping to grow the show and the best way to do that is through listener endorsements and word-of-mouth. Thanks in advance, friends.
Alright, on to my conversation with Dr. Iyad Rahwan. Enjoy!
A transcript of this episode is available here.
Notes and links
3:00 – Images from Dr. Rahwan's ‘Faces of Machine’ portrait series. One of the portraits from the series serves as our tile art for this episode.
11:30 – The “stochastic parrots” term comes from an influential paper by Emily Bender and colleagues.
18:30 – A popular article about DALL-E and the “avocado armchair.”
21:30 – Ted Chiang’s essay, “Why A.I. isn’t going to make art.”
24:00 – An interview with Boris Eldagsen, who won the Sony World Photography Awards in March 2023 with an image that was later revealed to be AI-generated.
28:30 – A description of the concept of “science fiction science.”
29:00 – Though often attributed to other sources, the idea that good science fiction predicts not the automobile but the traffic jam appears to have originated with Isaac Asimov.
30:00 – The academic paper describing the Moral Machine experiment. You can judge the scenarios for yourself (or design your own scenarios) here.
30:30 – An article about the Nightmare Machine project; an article about the Deep Empathy project.
37:30 – An article by Cesar Hidalgo and colleagues about the relationship between television/radio and global celebrity.
41:30 – An article by Melanie Mitchell (former guest!) on AI and analogy. A popular piece about that work.
42:00 – A popular article describing the study of whether AIs can generate original research ideas. The preprint is here.
46:30 – For more on AlphaGo (and its successors, AlphaGo Zero and AlphaZero), see here.
48:30 – The study finding that the novelty of human Go playing increased due to the influence of AlphaGo.
51:00 – A blogpost delving into the idea that ChatGPT overuses certain words, including “delve.” A recent preprint by Dr. Rahwan and colleagues presents evidence that “delve” and other words overused by ChatGPT are now being used more often in human spoken communication.
55:00 – A paper using simulations to show how LLMs can “collapse” when trained on data that they themselves generated.
1:01:30 – A review of the literature on filter bubbles, echo chambers, and polarization.
1:02:00 – An influential study by Dr. Chris Bail and colleagues suggesting that exposure to opposing views might actually increase polarization.
1:04:30 – A book by Geoffrey Hodgson and Thorbjørn Knudsen, who are often credited with developing the idea of “generalized Darwinism” in the social sciences.
1:12:00 – An article about Google’s NotebookLM podcast-like audio summaries.
1:17:30 – An essay by Ursula K. Le Guin on children’s literature and the Jungian “shadow.”
Recommendations
The Secret of Our Success, Joseph Henrich
“Machine Behaviour,” Iyad Rahwan et al.
Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala.
Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here!
We welcome your comments, questions, and suggestions. Feel free to email us at: [email protected].
For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).