How Does Tomorrow Sound?
To video or not to video? Coupling your audio with a visual element can provide a more immersive experience for viewers, letting them see facial expressions, gestures, and visual cues that deepen understanding and connection. Video also boosts discoverability because it makes TikTok sharing possible. However, audio by itself fosters a unique intimacy. When listeners focus on the content without distractions, they can use their imaginations and multitask, giving podcasts a strategic advantage over visual media when it comes to fitting into busy lifestyles. And what will happen when we...
In our largest production call yet, seven audio makers share takeaways on our Episode 3 findings: 1) how audio memes work in the brain (and what we can steal from them), and 2) spatial audio as a stepping stone toward interactive storytelling. We talk about audio memes (i.e., pieces of sound whose contextual meaning listeners already know) that already exist inside podcasts (e.g., the chime for the news, the creaky door in a horror story, the way the conventions of This American Life have trickled through the ecosystem as best practices). And we brainstorm what else we can borrow or steal...
This podcast explores the future of digital audio and asks what podcasts might become in ten years. Do podcasts stand a chance against TikTok supremacy? Viral audio borrows cool from pop music and pop culture. Charlotte Shane calls this “brainfeel” in her recent Times Magazine article. Our brains are happiest when something we already like is the vector for new learning. Similarly, pop music borrows cool by licensing old hits, according to Switched on Pop co-host Charlie Harding, after recent legal precedent ended the kind of liberal sampling that enabled hip hop and rock to flourish. So...
We imagined a second audio future! Then we asked some smart podcasters how we did. In this bonus track, we air back-to-back conversations with podcast experts. In the first, we spoke with Demetrius Bagley, Nikki Thomas, and Jonas Litton. In our second conversation, we spoke with Jackie Huntington and Diana Opong. These experts share their reactions to E02 (“Like… It’s Alive!”). We are grateful for their feedback. In E02, we suggested that podcast audiences will mature in much the same way audiences for film and television have, including wanting more interactivity and more immersive...
What will podcasts become in 10 years? Join us as we explore the future of digital audio. How will listenership mature in the future? Will we outgrow our evolutionary need for story? Child psychiatrist, author, and horror enthusiast Dr. Steven Schlozman, Dr. Martin Spinelli, Dr. Sorcha Ni Fhlainn, Dr. Sylvia Chan Olmsted, and Podfly’s own Corey Coates offer insights. Story audiences mature and trends shift. Plus, with more diverse groups of light podcast listeners tuning in, there’s more opportunity to reach new niches. But what kinds of stories will these new audiences want today and 10...
We imagined one audio future! Then we asked some smart podcasters how we did. In E01 (“Like Your New Best Friend”), we suggest that developments in AI might turn podcasts into very compelling chatbots. In this bonus track, podcasters Stacey Copeland, Clif Mark, Naomi Mellor, and Andrea Muraskin share their reactions. We are grateful for their feedback. Note: Though the track is presented like one large convo, we spliced two longer chats (one with Stacey and Andrea and one with Clif), held at separate times, with a voicemail from Naomi. We didn’t include here the editorial suggestions we...
How Does Tomorrow Sound is a six-episode series on the future of podcasts. Hosts Kate, Josh, and Neleigh endeavor to predict what podcasts might look like — or evolve into — in 10 years’ time. Expert interviews are braided with funny, experimental, blue-sky brainstorming sessions and audio experiments by the hosts. This show will challenge your assumptions, make you wonder, and spark new ideas about the road from here to the future of audio narrative.
Let’s imagine some audio futures! This podcast explores the future of digital audio and asks what podcasts might become in ten years.
Podcasts flourished out of the tech of the early 2000s. Now, artificial intelligence is poised to change everything. We speak with Natural Language Processing (NLP) researcher Philippe Laban; science writer Matthew Hutson; professor, programmer, and composer David Cope; and creator of Late Night with Robot, Ana-Marija Stojich. Every day, NLP and speech synthesis imitate human language more closely. Now imagine AI-generated pods offering a key feature no live producer can: 24/7 interactivity. If the future of pods sounds like the best AI chatbot, one who remembers everything, is it your AI BFF? Or a scammer’s paradise? And will we listen?
If you dig us, please subscribe, review, and share — it really helps. And thanks!
The Big Takeaways:
- If the new tech of the early 2000s made podcasts possible, how will new, new tech — artificial intelligence like natural language processing and speech synthesis — change how we make and listen to digital audio?
- Philippe Laban, a researcher in natural language processing and human-computer interaction, has already built an AI-generated news podcast called Newspod, proving it’s possible. Now he works on interactivity in the chatbot space, which he believes may be the future of digital audio content.
- Newspod on Github
- Laban, Philippe, Elicia Ye, Srujay Korlakunta, John Canny, and Marti Hearst. “Newspod: Automatic and Interactive News Podcasts.” 27th International Conference on Intelligent User Interfaces. Helsinki, Finland. March 2022.
- “I think this will be one of the very cool things that can happen is podcasts can become kind of a companion and like, accompany you as you learn about the world.” — NLP researcher Philippe Laban
- “It's like, Welcome back Philippe. We haven't talked about Brexit in the last three weeks, but there's been an update since we last talked.” — NLP researcher Philippe Laban
- Matthew Hutson writes about AI for outlets like The New Yorker and Nature. He guides us through an exploration of what NLP and AI can already do in creative fields. He connects us to OpenAI’s Dall-E2, which uses AI to generate images, and to David Cope’s Experiments in Musical Intelligence (EMMY) and Emily Howell, algorithms that compose new music. He mentions a company called Alethea AI that offered to make a chatbot out of him.
- Hutson, Matthew. “RoboWriters: The Rise and Risk of Language Generating AI.” Nature. 3 March 2021.
- Hutson, Matthew. “Can Computers Learn Common Sense?” The New Yorker. 5 April 2022.
- Alethea AI
- “I see more promise in humans collaborating with AI. You might have a person who is at least an editor saying, Let’s use these bits of audio, put them together.” — science writer Matthew Hutson
- David Cope is Professor Emeritus of Music at UC Santa Cruz. He is a composer and a computer programmer, and he developed algorithms that write music.
- “Emily Howell — From Darkness to Light — 1 Prelude.” YouTube. 7 May 2016.
- “Emily Howell — fugue.” YouTube. 20 October 2012.
- “David Cope Emmy Vivaldi.” YouTube. 12 August 2012.
- “David Cope Emmy Beethoven 2 beg.” YouTube. 13 August 2021.
- “It’s important for me to engage people in thinking about things that they don’t ordinarily think about. And [...] to sort of teach them to evaluate what we’re all trying to do. And that is some kind of intelligence.” — professor and composer David Cope
- Ana-Marija Stojich is a comedian, writer, actor, creator, and host of Late Night with Robot (beams.fm), where she interviews AI versions of famous people, like Amelia Earhart, Barack Obama, Zora Neale Hurston, and Vincent Van Gogh.
- Late Night with Robot (beams.fm)
- “I’m just lying on my couch, like texting with AI Barack Obama, and like Albert Camus, and Vincent Van Gogh, and like Zora Neale Hurston. And they’re all having different conversations and it’s fun because they text you back right away.” — Ana-Marija Stojich, Late Night with Robot
- “I learn things all the time from the AI. Vincent Van Gogh AI was one of my favorites.” — Ana-Marija Stojich, Late Night with Robot
- It’s easy to dismiss the idea of AI podcasts, but one advantage AI and NLP pods would have over human pods is that they could be fully interactive 100% of the time (recall the 2013 film Her). In that interactivity, they could retain information about the user, remembering what we say, like, and dislike, and who we’re in relationships with — gathering useful data about us while making us feel heard and valued, maybe even loved (see the toy sketch after this list).
- Comfort for the lonely? A playland for artists? A marketer’s goldmine? A scammer’s paradise? It will come down to why we listen. Stay tuned for more on that in Episode 2.
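None of the code below comes from the show; it is only a toy sketch of the “remembers you” loop Laban describes in his Brexit example above. It assumes a hypothetical script that saves listener details to a local file and folds them into the next session’s greeting; the point is that the same mechanism that makes an AI host feel attentive is also what makes it a data collector.

```python
# Toy sketch (not from the episode) of an AI host that "remembers you":
# listener facts persist to disk between sessions and get folded into the
# next greeting -- the attentiveness and the data collection are one mechanism.

import json
from pathlib import Path

MEMORY_FILE = Path("listener_memory.json")  # hypothetical local store

def load_memory() -> dict:
    """Read whatever we already know about the listener."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(memory: dict, key: str, value: str) -> None:
    """Save a new fact about the listener for future sessions."""
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def greeting(memory: dict) -> str:
    """Open the 'episode' with whatever the host remembers."""
    name = memory.get("name", "there")
    topic = memory.get("last_topic")
    if topic:
        return f"Welcome back, {name}. We haven't talked about {topic} in a while -- here's what's changed."
    return f"Hi {name}! What should today's episode be about?"

if __name__ == "__main__":
    mem = load_memory()
    print(greeting(mem))                   # second run: "Welcome back, Philippe. We haven't talked about Brexit..."
    remember(mem, "name", "Philippe")
    remember(mem, "last_topic", "Brexit")  # the next run's greeting picks this up
```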
Other resources
Marc Maron experiment
- WTF with Marc Maron
- Rev.com AI transcription of audio used in Marc Maron experiment
- GPT-2 text generator used for Marc Maron experiment. (Kate wrote to OpenAI for access to GPT-3 but never heard back.)
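The links above describe the experiment only at a high level: a Rev.com transcript of a WTF episode fed to the GPT-2 text generator. The hosts don’t publish their code, so the snippet below is just a guess at its general shape, using the open GPT-2 model through Hugging Face’s transformers library; the transcript filename and the generation settings are assumptions, not details from the episode.

```python
# Rough guess at the shape of the Marc Maron experiment (the hosts' actual code
# isn't published): seed the open GPT-2 model with the end of a real WTF
# transcript and let it continue the interview. Requires: pip install transformers torch

from transformers import pipeline, set_seed

# A transcript exported from a service like Rev.com, saved as plain text (hypothetical path).
with open("wtf_episode_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

prompt = transcript[-500:]  # seed with the last few hundred characters of real conversation

set_seed(42)  # make the fake-Maron continuation reproducible
generator = pipeline("text-generation", model="gpt2")
result = generator(prompt, max_new_tokens=120, num_return_sequences=1)

print(result[0]["generated_text"][len(prompt):])  # only the AI-generated continuation
```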
Dall-E2 Exploration
- Dall-E2
- Recker, Jane. “US Copyright Office Rules A.I. Art Can’t be Copyrighted.” Smithsonian Magazine. 24 March 2022.
On the Ethics of NLP
- Silva, Christianna. “Google fires engineer for saying its AI has a soul.” Mashable. 25 July 2022.
- Soslow, Jack. “Two AIs talk about becoming human (GPT-3).” YouTube. 13 April 2021.
- Bender, Emily, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. March 2021.
Contact Us
Tell us what you really think by emailing [email protected] or leaving us a voicemail at 440-290-6796.
Or check us out online: