
From the archive: What does ChatGPT really know?

Many Minds

Release Date: 07/24/2024

Hi friends, we're on a brief summer break at the moment. We'll have a new episode for you in August. In the meantime, enjoy this pick from our archives!

----

[originally aired January 25, 2023]

By now you’ve probably heard about the new chatbot called ChatGPT. There’s no question it’s something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you're like me, you’ve probably also been wondering: What’s really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities?

My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He's the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled 'Talking about Large Language Models', and it's the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it's not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway.

Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does “next-word prediction” can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine “knowledge” and “understanding”—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities. Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and "exotic mind-like entities."
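If you're curious what "next-word prediction" actually means in practice, here is a minimal sketch. It uses a tiny hand-built bigram table rather than a trained neural network (real LLMs learn their probabilities from vast text corpora and condition on the whole preceding context, not just one word), but the core loop is the same: repeatedly sample a likely next word and feed it back in.

```python
import random

# A toy "language model": for each word, a probability distribution over
# likely next words. Real LLMs learn billions of parameters; this
# hand-built table only illustrates the generation loop itself.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"<end>": 1.0},
}

def generate(max_words=10, seed=0):
    """Generate text by sampling one next word at a time."""
    rng = random.Random(seed)
    word, out = "<start>", []
    for _ in range(max_words):
        dist = BIGRAMS[word]
        words, probs = zip(*dist.items())
        word = rng.choices(words, weights=probs)[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "a dog sat"
```

Turning this bare loop into something like ChatGPT involves further engineering layers discussed in the episode, such as prompt engineering and reinforcement learning from human feedback.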

Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews, Scotland—the program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you're intrigued—and I hope you are!—check out disi.org for more info.

Alright friends, on to my decidedly human chat with Dr. Murray Shanahan. Enjoy!

 

The paper we discuss is here. A transcript of this episode is here.

 

Notes and links

6:30 – The 2017 “breakthrough” article by Vaswani and colleagues.

8:00 – A popular article about GPT-3.

10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT).

14:00 – A widely discussed paper by Emily Bender and colleagues on the “dangers of stochastic parrots.”

19:00 – A blog post about “prompt engineering”. Another blog post about the concept of Reinforcement Learning from Human Feedback, in the context of ChatGPT.

30:00 – One of Dr. Shanahan’s books is titled Embodiment and the Inner Life.

39:00 – An example of a robotic agent, SayCan, which is connected to a language model.

40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind.

44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here.

45:00 – See Dr. Shanahan’s general audience essay on “conscious exotica" and the space of possible minds.

49:00 – See Dennett’s book, The Intentional Stance.

 

Dr. Shanahan recommends:

Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell

(see also our earlier episode with Dr. Mitchell)

‘Abstraction for Deep Reinforcement Learning’, by M. Shanahan and M. Mitchell

 

You can read more about Murray’s work on his website and follow him on Twitter.

 

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/).

You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts.

**You can now subscribe to the Many Minds newsletter here!**

We welcome your comments, questions, and suggestions. Feel free to email us at: [email protected].

For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.