
From the archive: What does ChatGPT really know?

Many Minds

Release Date: 07/24/2024


Hi friends, we're on a brief summer break at the moment. We'll have a new episode for you in August. In the meanwhile, enjoy this pick from our archives!

----

[originally aired January 25, 2023]

By now you’ve probably heard about the new chatbot called ChatGPT. There’s no question it’s something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you're like me, you’ve probably also been wondering: What’s really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities?

My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He's the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled 'Talking about Large Language Models', and it's the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it's not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway.

Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does “next-word prediction” can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine “knowledge” and “understanding”—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities. Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and “exotic mind-like entities.”

Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews, Scotland. The program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you're intrigued—and I hope you are!—check out disi.org for more info.

Alright friends, on to my decidedly human chat, with Dr. Murray Shanahan. Enjoy!

 

The paper we discuss is here. A transcript of this episode is here.

 

Notes and links

6:30 – The 2017 “breakthrough” article by Vaswani and colleagues.

8:00 – A popular article about GPT-3.

10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT).

14:00 – A widely discussed paper by Emily Bender and colleagues on the “dangers of stochastic parrots.”

19:00 – A blog post about “prompt engineering”. Another blog post about the concept of Reinforcement Learning through Human Feedback, in the context of ChatGPT.

30:00 – One of Dr. Shanahan’s books is titled Embodiment and the Inner Life.

39:00 – An example of a robotic agent, SayCan, which is connected to a language model.

40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind.

44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here.

45:00 – See Dr. Shanahan’s general-audience essay on “conscious exotica” and the space of possible minds.

49:00 – See Dennett’s book, The Intentional Stance.

 

Dr. Shanahan recommends:

Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell

(see also our earlier episode with Dr. Mitchell)

‘Abstraction for Deep Reinforcement Learning’, by M. Shanahan and M. Mitchell

 

You can read more about Murray’s work on his website and follow him on Twitter.

 

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/).

You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts.

**You can now subscribe to the Many Minds newsletter here!**

We welcome your comments, questions, and suggestions. Feel free to email us at: [email protected].

For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.