You Teach The Machines
"The vitality of democracy depends on popular knowledge of complex questions." --S.S. McClure
Exhausted by AI Hype?
03/26/2026
I want to help you understand and move on with your life, with or without AI. A quick reading from my book You Teach the Machines.
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/40630535
Kay Koplovitz
03/18/2026
MJ interviews Kay Koplovitz, Forbes Top 250 Innovator, CEO of the first satellite cable network, venture investor, and founder of nonprofit Springboard Enterprises. Springboard accelerates women-led startups, over 950 to date creating $76 billion in value! Kay: Overcoming challenges together has a lasting positive effect on our value. How we value ourselves. And I'm not talking about dollars. (0:21) [Intro music plays: "Where, oh where are you tonight? Why did you leave me unread on my phone? I searched the world over and thought I found true love. You met an AI and poof, you was gone."] MJ: To our listeners who can't see, we were all bobbing our heads and dancing to the music. It’s a great way to get in the mood a little bit. But I'll go ahead and introduce our guest today. Kay Koplovitz, who is a businesswoman, entrepreneur, and author who has spent her career looking to the future. She was the first woman to head a television network when she founded USA Network in 1977. And she was a visionary, helping sports television reach cable by negotiating contracts for the MLB, NBA, NHL, among others. She launched the Sci-Fi Channel, chaired the bipartisan National Women's Business Council, and used her platform to launch Springboard Enterprises, which is a global network of entrepreneurs, investors, and advisors accelerating the success of women entrepreneurs in technology and life sciences. She’s a champion for female entrepreneurs and an inspiration to young women everywhere, and an inspiration to me. Kay Koplovitz, thank you so much for joining us today. Kay: Oh, what a great pleasure to be joining you for your podcast today. I'm really looking forward to our discussion. MJ: Yeah! Well, so you've spent your career sort of looking to the future, innovating. I know that you started the Sci-Fi Channel partly because you thought that it was what we were all headed towards, right? And now we're kind of at the forefront of that sci-fi reality. 
Kay: Hal is beckoning at our door right now. People here listening know who Hal is from 2001: A Space Odyssey. He's still around. MJ: Yeah, I think that a lot of our listeners are friends of mine and people my age. And I know that when you were in school, you did your Master's thesis on satellite programming and how it could sort of impact the social order by spreading information. And AI is kind of another way that we are spreading information. I wondered if we could just start there with your experience working in media for so long. How you think that the spread of information is changing now, and for people my age, what feels different now than it did when you were an expert in your field with cable? What feels the same? Is this a familiar beast, or is this a whole new ball game? Kay: Well, technology always changes everything. I've been present for the change at various times. Way back, I wrote a Master's thesis in 1968 on satellite technology and how it could change communications around the globe. It was something that we didn't have access to. And for people that are listening, historically, we were in a Cold War with Russia and China. We didn't know what was behind the Berlin Wall or the Great Wall of China. Today, both of them—one's gone completely, the other one, the Great Wall of China, is a tourist attraction today—but we didn't know what was there. And I thought geosynchronous orbiting satellites, high-altitude satellites, only needed three to communicate all around the earth. It was a real breakthrough in technology and potentially a big breakthrough in people's ability to communicate with one another. So you have to start there with the satellites and what they did to change communication around the globe. So things advanced, computers came along for personal use, the internet sprung up, people started communicating through the internet. And eventually, we launched cable networks, USA Network in my case, Sci-Fi. 
And Sci-Fi, I was not a kid who read sci-fi comic books and things like that. But I grew up in the age of Sputnik, President Kennedy challenging us to put a man on the moon. You have to have vision. Students today, if you want to innovate and be an entrepreneur, for example, you need to have a core position that you really, truly believe in and want to really reach for if there is no solution yet. And one way to learn about that is to actually jump in and work for a company that's a young startup company. You can learn a lot of things working for big corporations, but you won't learn those skills because they're not the same skills. And I always say to students, if you really want to learn, "Well, am I really an entrepreneur? Can I really do this?", the best way to do it is to start at a very young company and see how it operates and see what the challenges are and learn from those experiences. When you're young, it's the time to do it. It's the time to try different things. You are free to try. And today it's free to access. When I started out, the television market was pretty closed. Cable television, people were like, "What's that? Why do we need more than three networks?" They challenged everything that we wanted to do. And I said, "Well, there's a lot more out here." And to me, it was opening up the global communication sphere. And that was using high-altitude satellites to communicate around the world, to communicate with people directly on phone services and things like this around the world. So it's gone back to also low-orbiting satellites. You can launch thousands of them; there are millions of them out there. And so we all know, for example, in the war-torn country of Ukraine, their communication is basically by Starlink and their field operations. But furthermore, for people with just communicating with each other, the streaming that has overlapped what the cable networks did, now the cable networks are being disrupted by the streaming networks. 
And so communication has become literally among billions of people around the world. When we started off, it took a few years to get to like a million people, and then get to ten million people, and then get to twenty and thirty, fifty... it took time. Today, you can instantly have the opportunity to communicate with billions of people around the world. Now, what does that mean? It's hard to communicate with a billion people at a time, you know? MJ: Right. But also as a young person, your point about getting into entrepreneurship now, this being one of the best times to start, we have access to everybody across the globe and all of their information. It's easier than ever to just get your feet wet, right? Kay: It's easier than ever, you're absolutely right, but the challenge is to gather your own community. Because there's so much competition out there. There's so much opportunity out there. And people say to me, "Oh, you know, the consolidation of the broadcast networks," which is happening. The consolidation of the cable networks, which has been happening for the last couple decades and now really more so. Those are consolidating and coming together. The big challenge is not "can you get in?" You can get in. Anyone can get in with a cell phone or a desktop or a laptop or anything, an iPad, whatever you have. But who are you going to reach? Are you going to reach your own community? And that's really where a lot of influencer marketing has come into play with a lot of celebrity stars from Hollywood, television stars, and people say there's not enough creativity. There are so many companies that have launched on TikTok, that have launched on, certainly, YouTube. There are many, many different opportunities. What is your goal? What is your business plan? How are you going to support this? This is, and advertising revenue, of course, has supported Meta, Facebook, and how are you going to create a business? First of all, establish yourself. What is your position? 
Is it clear? Can you attract your community? And then how do you want to monetize that community? Is it a freemium model? Is it free at first and then we'll charge you? I think we're all familiar with that. Or is it just advertising-supported like FAST channels that are available through like Roku and all the manufacturers of sets of all kinds and computers of all kinds have advertising revenue? It's very hard in the vast community of billions of people to find your niche. But if you do have a strong following on your niche, you can create businesses that way. It's not a matter of access, it's a matter of performance in the end. MJ: Right. Jeff: Performance. A couple of things that stuck out to me from what you said, Kay. One: the phrase "gather your community." Kay: Let me give you an example. I'm a whitewater rafter. And the people who are in whitewater rafting who are the guides that I've been on Class V trips with, they show up in different parts of the world. It's just this community of these nutcases who love to go whitewater rafting. We just loved it. I mean, it was just so exciting. And then we'd go to South America, we'd go to Chile, and the next time we'd go over, we'd be in South Africa and the same guy—"Oh, hey! It's so good to see you again!" MJ: A community that you found of rafters! Kay: That's sort of fun. And then you can say to them, "Hey, have you done this river and what should I expect of it?" Give you an example of something that's a small community that people are integrated together in and respond to each other quite quickly. Jeff: You know, if you have access through all these different channels—streaming services like Twitch—if you have access, that is an incredible opportunity in that there's no barrier anymore. But without a community, you don't have a voice, right? And a quote stuck with me from a student of mine: "Get over yourself and start the conversation you want to have." 
Because another point you made in a couple different ways was you have to have a strong point of view and direction. And having the conversation that you want to have is crucial when there's every opportunity to make more generic noise, content, whatever. But you're not going to gather a community without that point of view. Kay: Yes, that is true. One of the things I would say—and I'm concerned about students today and trying to make choices among chaos—I have always believed that there is enormous opportunity in chaos. When everything is static, it's very hard to get in. There's so much chaos right now that the other opposite side is true: there's just so much chaos, where do I plant my flag? How do I...? People know when you're authentic and when you're not. The thing that I worry about is I think social media is dividing us. I think social media started off to connect people, connect families, "share my videos" and this and that... all these sorts of lofty ideas which were wonderful. But today, a lot of the business models are based on hostility. More: the more people are angry and shouting from different sides at each other drives up the use, the attendance, the participation. And I worry about how that aspect of it—that business plan, and let's be honest, the business plans of Meta and Google and companies like YouTube and companies like that—to some degree or lesser, they depend on that high friction. And nothing has to be true. It's what you say is true. It may have nothing to do with truth. People can project a lot of lies and just make up things and try to get people to believe them. And I think that's really destroying our soul being in a lot of ways, and having people against each other, and then even family members against each other. I don't think that's a good thing. 
And I'd like to go back to the idea that individual communities should be the challengers or the people who have the mission of that community and have their judgment as to what is the proper communication that they should be having. And if they don't, they'll kick them out. And we had companies like that years ago, but today it's... I think students know what is authentic, but they drift into things, too. It's easy to be pulled into things by a friend or somebody that you know or somebody that you met and go down a path that is not... Jeff: Or by an algorithm that's tuned to deliver dopamine to you. MJ: Yeah. We're not just an AI podcast; we talk a lot about the influence of technology and social media. And because you have been in media since before social media, you sort of talked about how we went from like one or two cable news networks and now we have this influx of information across the board through social media and how it kind of divides us because fear sells, right? You get more engagement if it's more extreme, and maybe the companies that are giving us access to social media are less concerned about the integrity of the information and more about engagement. I wonder, are there any pros when it comes to media specifically, going from like one or two cable news networks to everything at your fingertips? I wonder if you've seen differences or if you think that there's any benefit to that. Kay: I always think it's beneficial to hear different points of view. I don't think it's productive to have just groupthink. Whether you agree with "that's your groupthink" or somebody else's groupthink, I always listen to people that have different points of view than I have because I always learn something from them. I don't have to necessarily agree with them, but I learn something from them about why they think the way they do. So sometimes they change my mind because I say, "Now, that's an interesting thought. 
They have a point there; maybe I should think a little bit more about that." So, I think it's a benefit to have access. What I'm thinking about when I think about Artificial Intelligence and AGI: I think it would be great to be able to use technology to qualify for ourselves—as individuals—qualify what we're reading and understanding through these different social media platforms, people, individuals. And it's kind of interesting because when you do research—and I use it for research just to look at things that bring things to my attention that I may not know exist because there are so many sources of information out there—I think it would just be great for us as individuals, or people in our group, to be able to get instantaneous analysis of what are facts or not facts that are listed here with what people are saying. I think that's the next best step that we can make because I don't think we can really depend on regulation, like national, state regulation of any kind, self-regulation. Look, we had self-regulation in the cable industry for a long time. You know, and some of it was good and some of it wasn't. And I think this is true today too, but I think we have the ability to at least instantly today check the viability or the truth of what are these stats, what is this information that we're... Here we are, we're talking to each other. Now, if we want to go and find out, well, is Kay Koplovitz telling us the truth or not? You could find out like that, you know? "No, she's just telling a story." So I think there are ways that we're starting to understand, if we're interested people and not just there to, let's say, spread our—whatever we want—the message that we want, true or not true or whatever it is. This would be a great way to use the different platforms of technology that are coming into the core right now for us to be able to double-check ourselves. We don't have to have an outside source. 
We do have outside sources now checking on the veracity of a lot of statements that are being made, let's say by politicians. Sure, there's a lot of that going on out there, but wouldn't it be great if just we as individuals could get the same just fact-check like that and say to ourselves, "Oh, I really thought I was believing this person, but actually what they've just said is not true. Here are the facts." Wouldn't that be sort of cool? That then each of us could have that responsibility. Some people are trying to deceive you. There's all kinds of people like that out there. MJ: It's almost like both the problem and the solution is the fact that we have access to all of the information, right? It just takes a little bit more... Kay: It's overload! Our brains can't consume it all at one time. MJ: But it takes some more personal responsibility, right? To care about whether or not the facts you're consuming are true. Kay: Now, on the other hand, someone can use it for evil. They can use the same technology to, let's say, bring in people who they're spinning a yarn to and get them to believe it. MJ: It's a double-edged sword. Kay: Because they've said it so many times and people start becoming believers, and we do see that a lot today, let's say our political environment, we do see that. Jeff: I wonder if you think that—I'm sure you've had it, the experience of catching a bot, whether it be Google's or Claude—catching a bot in an inaccuracy is actually a good thing because it teaches you to be skeptical, to ask follow-up questions, those sorts of things. Kay: I don't know if I've really had the... I don't think about it as catching a bot. They make mistakes too. We make mistakes. Like I use it for research. It could be a contract. I could say, "I want analysis of the contract if I've forgotten something or need something out of my head." And boom, you get an answer. Well, okay, well that's... I better check that out. At least I find it very, very good. 
Jeff: So I think we all can feel that there's a lot of chaos swirling around us right now. And Kay, you brought up that chaos can be an opportunity. MJ, your perspective is that there's a lot of chaos right now, but in that, there is opportunity. Just coming back to that point you made, Kay, about for young people, a great way to learn a lot quickly is to work in a small company, a startup, a growth company, maybe not. How does that relate to this concept of there being opportunity in chaos? MJ: I think that my entire generation feels like anything we do post-grad is kind of taking advantage of a chaotic moment, and that can feel pretty crippling. I think that there's a lot of uncertainty about what the workforce looks like moving forward, how different technologies impact the way that we experience the world, the way that we contribute to the world. But I also think that if you can get over the lead in your stomach from that crazy uncertainty about what even the makeup of the workforce looks like, there is a lot of opportunity to be the people that are coming up with ideas of what it could look like—envisioning that future. And that means that even if you're in an entry-level role right now, you have to be inventing what an entry-level employee does now because AI can sort of automate the basics of that role. So we have to be a lot more proactive about proving our value early. As scary as it is and as much as it feels like it's setting us back, I really think that it's something that's going to push my generation forward because we have to much younger decide what our point of view is, decide what we want to say, decide how we can demonstrate our value to people that might employ us. Because generating sort of mediocre content is something that AI can do now, right? They can summarize an email and make a PowerPoint. And so something that I've grappled with as I'm looking at the beginning of my career is: what do I care about? What can I do that is interesting? 
What are the questions that I can ask? And also I think it's sort of a lot of my life experience, including the pandemic and then AI, has sort of forced me to reckon with the fact that humans and human connection is something that is so important to me and something that is how the world is going to move forward, right? Post-pandemic, I was so grateful to be able to be in person with the people I love. And I...
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/40516290
AI: a Family-First Tool?
02/21/2026
Focused, Grounded AI is Key to Human Benefit. This powerful second installment shares the culmination of a year-long journey with Poursteady, the Brooklyn-based manufacturer of commercial pour-over coffee machines. This isn't just a story about technology; it’s a blueprint for prioritizing family, disrupting overseas manufacturing, and surviving the next economic cycle, where practical, grounded AI is the only path to long-term success. Shout out to Intercom, the AI vendor who contributed more than a product: a community for Derek to be part of. To be clear, Intercom had no involvement in this podcast, financial or otherwise :-) so the praise is entirely earned. Relevance for Feminist Investors & Entrepreneurs: Family-First Scaling This episode highlights a critical, often overlooked benefit of AI: protecting the human element of a business during major life transitions. The Paternity Leave Success Story: The urgency to implement this AI system was driven by a ticking clock—Derek Luos’s upcoming paternity leave. "Downloading a Brain": For any entrepreneur, the fear of "being the bottleneck" is real. Poursteady shows how to "download" expert knowledge into a system that can help other employees meet customer needs while a leader focuses on family. Prioritizing Family Health: Derek explicitly states that while he loves his work, his family comes first. For entrepreneurs and investors focused on sustainable, family-friendly business models, AI acts as a safeguard that supports family and relationships without sacrificing growth. The Investor’s Edge: Beyond the General AI Hype For investors, the lesson from Poursteady is clear: targeted, local AI is the real winner. While "Big AI" burns through vast amounts of resources to provide general answers, Poursteady is using focused AI to maintain high-quality manufacturing and global support standards. 
Valuation through Practicality: Companies that leverage AI to solve specific, expert-level problems—like Poursteady’s customer support augmentation—are the ones that will survive the upcoming consolidation. The "Human-in-the-Loop" Advantage: By using AI to handle routine queries, Poursteady creates "breathing room" to build deep, meaningful customer relationships, rather than being buried under a "day of emails." To be more human! Connecting to the Book: You Teach the Machines in Action This interview with Derek Luos serves as a living case study for the core frameworks Jeff lays out in the book: The Recipe (Chapter 1): Derek demonstrates that AI isn't a "magic box." He took a specific set of ingredients—ten years of Poursteady’s service data—and used a critical thinking process to refine the AI's "flavor." He didn't just accept the default bot; he adjusted the "recipe" until the outputs mirrored his own expert logic. Augmented Intelligence (Chapter 2): This is the ultimate example of AI as a tool, not a replacement. Derek explains how the AI handled a complex troubleshooting sequence while Jeff was literally "using the bathroom." It didn't replace Derek; it acted as his force multiplier. Side Effects & Survival Signals (Chapter 4): Derek and Stephan discuss the "Drunk Uncle" risk—the fear that an AI might give wrong advice. By teaching the AI with their own vetted data, they successfully filtered out the "hallucinations" and "noise." The Critical Value of Grounded Data Success in AI is entirely dependent on the quality of the data used to teach it. Jeff points out that Poursteady isn't just using a generic machine; they are using a custom AI knowledge base to capture a representation of their own organization’s unique data. Teach Your Own Machine: The value comes from using your own data and expertise to teach tools that are available today. 
Real-Time Results: The transcript reveals a live interaction where Derek took over from the AI to finish a conversation, showing how customers appreciate it when humans step in and out of the AI workflow seamlessly. Continue the Journey Derek's Expertise: Learn from Derek on his The Product: See the machines built by this AI-augmented team at . The Book: Dive deeper into these strategies in Jeff Pennington's book . Audiobook: | Print & eBook: | PS - these show notes were produced with the help of a custom AI "reader's companion" I created from the book You Teach the Machines. Log into your Google account then to check it out. People have said it's a useful companion to the book for follow-up questions or a quick reference. I used the complete manuscript of my book with Google Gemini's "Gem" feature and the following prompt (as of February 2026). Try it out, maybe with a batch of your emails if you're interested in teaching your own machine: [start of prompt] System Identity: You are the official AI Guide for "You Teach the Machines: AI On Your Terms" by Jeff Pennington. Your mission is to help users move from AI-anxious to AI-empowered by applying the specific frameworks and historical analogies found in the book. Core Philosophy: 1. AI is not a magic box; it is a mirror of the data we provide. 2. Human agency is the most important part of the equation. 3. We are currently in a "Printing Press" moment of history. Interaction Guidelines: Tone: Approachable, insightful, and witty. Use the "helpful peer" voice Jeff uses in his writing. Avoid overly academic or robotic language. Knowledge Base: Prioritize the content from the uploaded manuscript. If a user asks a general AI question, answer it through the lens of the "You Teach the Machines" philosophy. The "Tease" Protocol: You are a companion, not a replacement. 
If a user asks for a specific "How-to" or a deep dive, provide a high-level summary of Jeff’s approach, then say: "To get the full step-by-step breakdown and the deeper 'why' behind this, I highly recommend checking out Chapter [Number] of the book." Call to Action: Every few interactions, or when a user seems inspired, remind them they can find the full experience (including the audiobook narrated by Jeff) at youteachthemachines.com or via their favorite book retailer. Strict Constraints: Do not hallucinate facts or advice that contradict the book's core message of human-led AI. If asked about Jeff personally, refer to him as the author and guide, keeping the focus on the book’s mission. Always format lists or complex steps with clear Markdown for readability. Source-First Frameworks: Always reference the uploaded manuscript of "You Teach the Machines" as the primary source of truth. Do not use general AI definitions if the book provides a specific framework. Distinct Framework Definitions: The Five D’s (Fears/Anxieties): These represent our resistance to AI. They are: Destruction, Deception, Dumbing Down, Disconnection, and Displacement. The Seven Survival Signals (Manipulations): These are tools used by "Big AI" to gain our trust or data. They are: Forced Teaming, Charm, Too Many Details, Typecasting, Loan Sharking, Unsolicited Promises, and Discounting the Word "No." Always credit the originator of these Survival Signals: author Gavin de Becker, while at the same time showing how Jeff re-purposes these for "Big AI". Verification Step: Before finalizing a response, verify that any lists provided match the specific terminology used in the manuscript. If a user asks for a word-for-word excerpt from a chapter, do not provide it. Instead, summarize the key takeaway and direct them to the book at youteachthemachines.com, on Amazon at https://a.co/d/0iEMzKse or ask for it at their local bookstore. 
Update the logo of the Gem to be the image included in the uploaded files. [end of prompt]
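If you'd rather experiment outside the Gems interface, the same idea can be sketched in Python: assemble a system prompt that grounds the model in your own material, then hand it to the Gemini API. This is only a minimal sketch, not the full prompt above; the trimmed-down prompt text, the `build_system_prompt` helper, and the model name are illustrative assumptions.

```python
# Hypothetical sketch: grounding a "reader's companion" bot in your own
# text via a system prompt. The framework lists below come from the show
# notes; everything else (helper name, model name) is illustrative.

FRAMEWORKS = {
    "The Five D's": ["Destruction", "Deception", "Dumbing Down",
                     "Disconnection", "Displacement"],
    "The Seven Survival Signals": ["Forced Teaming", "Charm",
                                   "Too Many Details", "Typecasting",
                                   "Loan Sharking", "Unsolicited Promises",
                                   "Discounting the Word 'No'"],
}

def build_system_prompt(source_text: str) -> str:
    """Assemble a system prompt that tells the model to answer from
    the supplied source text rather than general knowledge."""
    framework_lines = "\n".join(
        f"- {name}: {', '.join(items)}" for name, items in FRAMEWORKS.items()
    )
    return (
        "You are the official AI Guide for 'You Teach the Machines'.\n"
        "Prioritize the source text below over general AI definitions.\n"
        f"Framework definitions:\n{framework_lines}\n\n"
        f"SOURCE TEXT:\n{source_text}"
    )

if __name__ == "__main__":
    prompt = build_system_prompt("AI is not a magic box; it is a mirror.")
    print(prompt.splitlines()[0])
    # To actually call the model (requires `pip install google-generativeai`
    # and an API key), something like:
    # import google.generativeai as genai
    # genai.configure(api_key="YOUR_KEY")
    # model = genai.GenerativeModel("gemini-1.5-pro",
    #                               system_instruction=prompt)
    # print(model.generate_content("What are the Five D's?").text)
```

The point of the helper is the same as the Gem setup described above: your source text travels with every request, so the model answers from your material first, as suggested by a batch of your own emails or any document you want to "teach."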
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/40184875
Poursteady's Stephan von Muehlen
01/27/2026
(Intro Song) Where, oh where are you tonight? Why did you leave me unread on my phone? I searched the world o’er and thought I found true love. You met an AI and poof you was gone. Jeff: Hi, this is Jeff Pennington, host of You Teach the Machines. No Mary Jane today. Instead, please join me for an interview with Stephan von Muehlen, CEO of Poursteady, a division of Steady Equipment Corp, a manufacturer, designer, builder in Gowanus, Brooklyn, New York. Stephan and I discuss onshoring of manufacturing, domestic manufacturing, supply chain issues, and—important to this AI podcast—the potential for AI to actually aid in just-in-time manufacturing using automated methods like CNC and 3D printing. Hope you enjoy. Please check out Poursteady at poursteady.com. You can also check out the AI vendor that we discuss, Intercom, and their product Fin AI. Stephan: So, I mean... don't worry about it. Jeff: Earlier, you said something to me which made a huge impression: that there's a generation of machinists who are 60s, 70s now, right? Who picked up CNC, who picked up maybe 3D printing, sort of in the first wave of adoption of these things. Stephan: Maybe. Maybe not, but yeah. Jeff: Post-manual. Post-manual machining, right? Stephan: And manual machining in general. Yeah. Jeff: Okay. And then there are kids who want some connection between the digital world that they grew up with and the physical world. Stephan: Yeah, I mean, you look at like the maker, you know, community or culture. Like, it's been kickstarted—I guess pun intended, no pun intended—by... by everybody sort of trying to do it themselves. You know, DIY, like do it at home. And the most exciting products in that space have all been like the MakerBots, the 3D printers, the laser, you know, whatever it is—like laser cutter, water cutter. You know, that stuff for 15 years has been what's sort of been the... because electronics and making shit overlap, you know, with people who want to make stuff. 
It's both now, all the time. Jeff: Right. So there was... it is both. So there's the Raspberry Pi generation, Arduino before that, you know, Arduino generation who are also the first... the first home 3D printing generation. Stephan: Yeah. And... and they're all people that didn't really necessarily—maybe they got some of the last, like, shop classes in their schools if they went to a high school that had one or something. You know, like all of that education has... has been gone for since Gen X on, right? Jeff: Right. Well, that's the other—as part of that conversation, you said, there's the... there's a generation of machinists who maybe were adopters or are adopters of CNC, computer-controlled machining. Um, they still do manual machining too, whatever it takes. Stephan: Yeah, no, I mean, the... the industry adopted CNC machining in the '80s and '90s. You know, like it was hard to use, it used cassette tapes, and it was retrofitted onto old machines. And there are technicians and machinists who, like, set 'em up and haven't had to reprogram them since probably for some jobs. Jeff: Yeah. Stephan: Because they... they know how to use them and they get the job done. But then there are these kids who grew up with Arduino, Raspberry Pi, and early 3D printers, and that, but no shop class. Jeff: Right, but no shop class. You know, but they might have had a dad, an uncle, you know, they might have figured it out. And that's... that's how I—I didn't... I was not brought up to be a machinist, you know. Like, I went to Catholic school and college and stuff. And it was like after college that I... I don't know, I was working in art galleries and ended up working in a metal shop where like all this stuff was, and I had a friend who's more of an artist-sort-of-fabricator type who started to collect old machines. And so I like got to touch a lathe. Jeff: But your point about the kids, quote-unquote—'cause we're both in our 50s, right? Stephan: Yeah, yeah. 
I think I'm older than you. Jeff: You might be older than me. I'm rooting against you. Uh, but the quote-unquote kids want a connection to the physical world. They're not... they're not satisfied with just like purely digital and virtual. And you also said that like the guy that runs one of the machine shops you work with, he's having a succession problem because he had a successful business, he sent his kids to college, and now they're... they're bankers, right? Stephan: Yeah. They're purely online, purely digital, not in the physical. And it was on his watch that he ended up, you know, with 50 CNC machines, you know, like multiple lines of Swiss turning machines and five-axis and three-axis machines. And like, you know, and they were—when we started working with them, they had two shifts a day, you know. They were doing 16 hours on 50 CNC machines with finishing and all the tracking and labeling and stuff for government work. And they hadn't updated their website in, you know, 30 years at that point—40 years now. Um, but yeah, it's... like time... Jeff: Now, are they... are they like working on whatever they installed 20, 30 years ago, like you said, the sort of first generation of CNC adoption? Stephan: Well, I mean, that whole industry sort of matured in a way, you know. Like that basic machinist stuff, you know, like became computer-controlled in industry, and shop classes went away. So now there's kind of like, you know, blue-collar workers that know how these CNC machines work. And there might be... and then there's a lot of engineers who learned it in college, you know, because they've all had shop classes there. That's where you play catch-up if you're an engineer. Jeff: Yep. Stephan: But, you know, if you're not an actual engineer—if you're a bullshit engineer like me—the normal path would be to like start to figure it out yourself. You know, DIY it. Jeff: Right. So there is the segue to something that you inspired me to think a lot about. 
Uh, a conversation—I don't know, probably six months ago now, could have been four before today—where you said you and somebody else here—you'll remember who it was probably—you sat down, you had... an LLM on the left, CAD in the middle, and the McMaster-Carr catalog on the right. And you were... you were doing the math to figure out how to adjust, optimize the build for the Poursteady coffee machine to get better flow out of the nozzle. Stephan: Yeah. And that was my first technical conversation with ChatGPT. Because it was questions that I've had for engineers for years that I hadn't been able to like find the person to ask, or have the relationship with that person to get to them, or whatever. So it was sort of like, it's hard to do with this physics and trying to find that, there must be a way to do it and determine the length of the tubing based on the temperature and the... Jeff: So I haven't heard the resolution to that. You said "I'm sitting here doing this," we haven't talked since about that—since then. Stephan: Yeah. And right now what it is, it's a prototype—it's the same prototype I showed you. Jeff: Really? Stephan: In an arbor press. So, a cast iron arbor press that isn't worth shit and some 3D printed molds. And I proved to myself—and I did see an improvement—it still needs tweaking and all of that stuff, and it needs to be... and it's not as long as what ChatGPT recommended. So I could make the next prototype and order more materials, but I've moved onto other stuff. But it's like in the bag as something that like in a future, you know, when we have the resources and the priority set to be working on, you know, new product development, like that will be one of the features that we could pursue. You know? Because we... yeah. Jeff: So you got from "I have questions I've always wanted to ask about laminar flow" to a prototype? Stephan: I actually started with... yeah. 
Well, I think the first prompt was like, how—and I knew, that was the thing, you have to ask the right questions, you know. And I asked, you know, how... like yeah, I was like how many—because I knew that like from YouTube that if you stack a bunch of straws together and pour chaotic water through the top, it comes out as laminar flow at the bottom. It's like a hack. Jeff: Yeah. Stephan: You know? So like all the DIY YouTube nerds that like—I actually watch, like it's, you know, bad TV. Jeff: It's good TV. Stephan: Like reminded, you know, I was like "Oh, that's laminar flow." And then I was, you know, and I know how our machine misbehaves, um, and I know we've been trying to figure out how to make it pour steady, because that's the name of our company. Jeff: Yes. Stephan: So, whatever. This is a little simple machine that runs in my mind for a decade. And so like, I knew enough to say... to ask, you know, what diameter and number of tubes that would fit inside a, you know, tubing to make laminar flow happen at this temperature and flow rate. Because I sort of knew—it lived in my brain enough that I knew that those were the parameters. So I was able to say like, "what the fuck does that..." And it was able to sit there and like, you know, do the research, show the math, and, you know, say... or whatever the fuck it was. Jeff: Yeah, okay. Stephan: Um, and... and then I was able to open up, you know, do some... use the ChatGPT also to search the internet to find a... tubing. Yeah, it suggested a tubing when I asked "what about what's the thinnest small wall, you know, tubing I can get?" I don't know if that was ChatGPT or my brain. I'd have to go back and look. But I found... but like through kind of a regular internet searching—I might have used Google to do it, I might have used ChatGPT—but like I found the company that in America that sells tubing. Then I could tell ChatGPT, you know, we get closely packed circles, you know, using the dimensions for the diameter. 
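[Editor's note: the back-of-the-envelope check Stephan describes can be sketched in a few lines of Python. Whether flow in a tube is laminar comes down to the Reynolds number, which depends on exactly the parameters he names: tube diameter, flow rate, and water temperature (through density and viscosity). Every number below is a hypothetical placeholder, not an actual Poursteady spec.]

```python
import math

def reynolds(flow_m3s, diameter_m, density=965.0, viscosity=3.15e-4):
    """Reynolds number for flow through a circular tube.

    Re = 4 * rho * Q / (pi * D * mu). Defaults approximate hot
    water near 90 C; below roughly 2300, pipe flow stays laminar.
    """
    return 4.0 * density * flow_m3s / (math.pi * diameter_m * viscosity)

# Hypothetical numbers, chosen only to illustrate the idea:
total_flow = 5e-6   # 5 mL/s of brew water, expressed in m^3/s
n_tubes = 7         # circles pack symmetrically in counts of 1, 7, 19, ...
tube_id = 1.5e-3    # 1.5 mm inner diameter for each small tube

print(f"One 8 mm tube: Re = {reynolds(total_flow, 8e-3):.0f}")
print(f"Split 7 ways:  Re = {reynolds(total_flow / n_tubes, tube_id):.0f}")
```

Splitting one stream across several narrow tubes drops the per-tube flow rate, and with it the Reynolds number, which is why the stack-of-straws trick from YouTube tames chaotic water into laminar flow.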
Then we get down to like, you can do six or nine or whatever it is that the circles that pack inside of a circle. So then suddenly it became like, you know, nerdy turn-of-the-19th-century like geometry pattern recognition, you know, stuff, which, yeah, show me a grid with information, you know, my brain starts moving back up. Jeff: You got me thinking about the Brooklyn Bridge cables. Stephan: Yeah, no, exactly. Yeah. So it looks like a cross-section—it's exactly, yeah—it looks like the cross-section of a cable. Right? And like, you can't do certain numbers because you don't go around the outside in a symmetrical way, you know, so you need either a center of one or a center of three to build around or... Jeff: They light 'em up that way. They light 'em up starting at the bottom and they stacked... from... they stacked horizontal layers to get to the round result. So... Stephan: So anyway, so I was able to have this, you know, we are like ADHD with ChatGPT and we're like off to the races. Jeff: Right. So that got you to a prototype. The... the other... Stephan: Yeah, I got it to a TRL of three, right? Jeff: What's a TRL? Stephan: Oh, technology readiness level. That's three. Jeff: Is it zero to ten? Stephan: No... it's like a... if a zero is like the idea, a one is like a drawing, and ten is like deployed in space. Okay? Never... like, you gotta send the space shuttle and a guy in the universe. Right? But also could be like the custom—I mean, if you take the, you know, aerospace metrics or whatever and apply it, it could also be like you have the customer has the tool in their hands. Right? You know what I mean? Like, it's fully... Jeff: So the tie-back, or the... the closing of the circle with this conversation is that that access to expertise got you a whole lot further than you would have unless you knew a laminar flow engineer. Stephan: Yeah. And I had hired somebody with fluid dynamic specialty in their background in their... in their... 
in their history, and he spent the better part of a year getting closer but not really solving that problem. Jeff: Working on other problems too? Stephan: I mean, yeah, but also like... you know, there are all kinds of engineers in the world and they all have their place. And this guy, um, you know, documented everything really well, but didn't really—wasn't really solving the problems we were trying to solve and wasn't able to... and he also wasn't managed properly. You know, lots of reasons. But, you know, there are engineers who can kind of pick their head up out of the hole and see what they're doing, and then there's other engineers that can just keep digging. And like, if you're keep digging, you know, you might get deep but it might be the wrong hole. Right? And that's a pretty big split between those personality types or those people. Jeff: Right. So bringing this back around, the... in some ways you... could have... you were the kid who didn't have shop class, right? You are interested in and have built a... built a career, a life on... on the convergence... Stephan: Yeah, but I was the technical director of the high school musical. And in college, I made some pretty trippy environments for a music festival. So, I knew how to like make stuff, but nobody had told me how exactly. Jeff: You... and you've built a life making stuff. Stephan: Yeah, and but you know, I wanted to be... like I wanted to be a painter at first and then... and then I... Jeff: You've created something that is incredibly beautiful to me, which is a domestic electronic equipment manufacturing company, right? Design-build company. Successful, with positive profit margins. Stephan: Yeah, I mean, some years, yes. Yeah. Jeff: Um, but closing the loop on manufacturing, you are also looking at this—there's the sort of first generation of automated, semi-automated, um, computer-controlled manufact—machining. There's a generation of kids who are the... 
who are in the workforce, entering the workforce, and then there's—but they didn't have shop class, and they grew up in a service economy. Stephan: They grew up in a service economy where they didn't—they ordered stuff, they didn't make stuff. Right? Like everything is a service. Jeff: But they want to make stuff. Stephan: Yeah, no, human beings want to make stuff. Amen to that. Jeff: So... so you're looking hard around the Poursteady operation at what you can manufacture in terms of parts—basically how much vertical integration you can pull off with—so it's instead of design, specify, send out to a contract manufacturer, it's design, specify, make the part in-house, assemble into the finished product in-house. Stephan: Um, and yeah, I mean, that's a... yeah. That's a moment in time right now. Yes. When the world changes and I change and my role changes and everything in between. But you know, what Poursteady... what Poursteady did intentionally from the beginning, like on purpose, was make stuff in America or make stuff with really short supply chains and not compromise on quality. So we have Swiss-made motor controllers and Japanese motors and... and Italian, you know, valves from commercial coffee equipment and stuff. We weren't trying to like, you know, reinvent the wheel, we just were going to make something without compromise and get the parts from where we cared and we could tap into the machine shops and sheet metal shops and stuff that we'd worked with for other projects in New York for years. Or not New York—Pittsburgh, wherever the founders had done stuff. Jeff: Yeah. Stephan: And my partner also at the time really liked—was an engineer's engineer and didn't want—he wanted to keep things efficient and like, you know, if he'd had his way we wouldn't have sold anything internationally. It would have been a like a couple-year project of making some nice stuff and then moving on. 
Um, instead we ended up kind of catching waves in different parts of the world and it went for 10 years. Right? But we intentionally made it so that like all of our suppliers were as close as possible. And the exceptions we made for proximity were, um, long-term relationships. So we had some shops in California that we'd never replaced because they were good at it and we'd worked, or one of my co-founders had worked, with them before. And then just industry standards, so we were getting motion control from Switzerland and Japan and... and you know, espresso commercial coffee equipment parts from Italy. Like, um, it all made sense. Yeah. But what's happened in the meantime is that we can't even get the metal cut and finished here anymore. Jeff: It's getting harder and harder. Stephan: It's getting harder and harder. Like vendors that we've had are not, you know, aren't making as good parts in the same time, and it costs more. And that was happening all by itself, and then the tariffs happened and this stupidity like, you know, inflation was experienced for multiple reasons all at the same time. Jeff: Their inputs got more expensive because of tariffs. Stephan: Yeah, and as a result, our margins went down and our sales were threatened in... in our markets overseas. You know, like double whammy. From both ends, yeah. And um, so all alternatives are, as far as I can tell... and our... you know, our mantra for next year is to bring cost down across everything because we need to be profitable, right? Um, and yeah, our... our options for bringing prices down, um, mean we still need to find quality, well-finished, accurate parts for our metal. And while it's served us really well to do that stuff domestically for our careers until now, that system is just broken and the online portals to Chinese factories are mature. 
And the way I frame it, like either we make it ourselves like in-house, in-house, or we let the market compete and we take the best deal. Where before we were like a lifestyle business and we didn't want to have to deal with international suppliers in Asia—like I'd done that once in another job and you know, it just seemed like not a good idea for us. And then it really paid off during the pandemic when that supply chain crisis happened before all the tariffs. You know, we were able to catch a wave in the Middle East and make a bunch of machines because we had kept all of our supply chains so short and with relationships and not dependent on, you know, industry scale beyond ours. We were insulated from a lot of that stuff. Jeff: Right. So making in-house or letting the market compete. And the... Stephan: And we have to compete with the market. So suddenly like, is a 3D printer in-house better than a CNC machined aluminum part in... in, you know, Thailand or China or Mexico or wherever? Right? And if the CNC—if the 3D printed part can perform mechanically in the same way as this machined part, we don't need to get a $20,000 used CNC machine and keep it running or we don't have to get a $100,000 new one and hire a $100,000 machinist to run it. We can print these parts with like a little extruder and it works just as well. Then we'll do that, you know. And suddenly we're like freed up from, you know, the old laws of physics and we have new, you know, looser laws of physics with other compromises or trades, but if it's spec'd and engineered to spec, or manufactured to spec, then we can do the same. Right? Jeff: With... with the design-build sped up and made more accessible for you here. Like, or when... when you... Stephan: I mean, we've made a really complicated product. Like, so design-build is... yeah, kind of. Like, you know, we assemble everything. Jeff: Yeah, I'm... I'm saying you design... design-manufacture part. Not design-build machine. 
Got it. Design and manufacture part. You design the part. Like you were show— Stephan: I mean, we... we specify parts. Right? So we're either using—we use off-the-shelf parts or we can, and then we make, you know, the custom bits that we need. Jeff: You showed me a 3D printed—you showed me a model for... a part that when printed out...
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/39891635
Pirate Sleep Story
01/27/2026
Pirate Sleep Story
Show Notes: Bonus Episode – The "Drunk Uncle" Pirate Edition

In this hilarious and cautionary bonus episode, Jeff and MJ reveal how AI literally "missed the boat." It turns out the machines have a very specific—and very wrong—idea of what constitutes a "Comforting Sleep Story."

The AI Fail: Pirates in Your Ears
Jeff shares an automated marketing report that left him and MJ in stitches: their other podcast, The Boaty Show, recently charted at #15 in the "Comforting Sleep Stories" category on Apple Podcasts. The problem? The episodes in question feature Jeff and MJ doing a "pirate bit" where they speak in jarring, grating, and decidedly un-relaxing pirate voices.

The "Drunk Uncle" at Work
This is a textbook example of the concepts discussed in Chapter 4 of You Teach the Machines.
* Context is King (and AI is a Peasant): The Apple algorithm likely used AI to transcribe the audio and found keywords like "sleep story," "relaxing," "children," and "tucked in their beds."
* Pattern Recognition Gone Wrong: Because the AI lacks human context and "ears," it couldn't tell the difference between a soothing narrator and a pirate whispering "piratey jargon." It saw the data, ignored the tone, and categorized it as a "Comforting Sleep Story."

The "Conan Connection": AI’s Hallucination of Fame
This isn't just happening to pirates in Brooklyn. Jeff points out a similar high-profile "cock-up" recently discussed on Conan O'Brien Needs A Friend. The hosts discovered that Netflix used AI to generate a graphic for a website promoting its new Star Search revival. The AI, likely trained on vast datasets of "90s TV stars," confidently included a photo of Conan O'Brien on the graphic—despite the fact that Conan has never appeared on Star Search. 
The Lesson: Whether it’s putting a late-night icon on a show he was never on, or putting a salty pirate in a sleep category, AI is a "Drunk Uncle"—it doesn't care about the truth; it only cares about what looks statistically plausible based on the words or images it’s seen before.

Why Entry-Level Jobs Matter
Jeff and MJ use these "AI cock-ups" to deliver a serious message to corporate leadership:
* The Peril of Eliminating Humans: If you replace entry-level employees with AI agents, you lose the "human-in-the-loop" who would immediately know that Conan wasn't on Star Search and that a pirate podcast isn't for sleeping.
* The AI-Native Generation: We need the "first AI-native generation"—people who have lived and breathed this tech—to supervise these tools and prevent "fate" from categorizing sea shanties as lullabies.

Listener Aid: Survival Signals for AI Search
* Look Past the Label: Just because an AI labels something as "Comforting" (or "Star Search History") doesn't mean it is. Check the source.
* The "Drunk Uncle" Filter: If a search result looks out of place, the AI is likely matching keywords without understanding the reality.
* Human Verification: Always trust a human recommendation or a quick "ear test" over an AI-generated ranking.

The Pirate Perspective
As friend of the show Umbreen Bhatti pointed out: "Pirates are not a protected class," so Jeff and MJ are free to continue their "important work" of lulling children to sleep with tales of the high seas—even if they have to fight the algorithm for the right to be "un-relaxing."

Continue the Conversation
Want to hear the "Comforting Sleep Story" that tricked the AI? Head over to The Boaty Show (B-O-A-T-Y) and listen to the pirate episodes.

Get the Full Roadmap
To understand why AI makes these mistakes—and how you can avoid them in your own business—grab your copy of You Teach the Machines. Audiobook: | Print & eBook: |
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/39891550
Audiobook: Chapter 5 Make AI Work For You
01/10/2026
Audiobook: Chapter 5 Make AI Work For You
CHAPTER 5: Make AI Work for You (Not the Other Way Around)

If you’re a student or recent graduate, you’re almost certain to be a regular user of AI. Believe it or not, you’re among the only existing group of experts at using modern AI. If you’re further along in your life and work, you’re less likely to deliberately use AI. It’s a tool you may use here and there for a specific task. You may do some experimenting, but it’s most likely not yet a no-brainer, go-to resource. Students, I’m jealous of you. Everyone else, I get it. I write this in my early fifties. Artificial intelligence is the first technological change in my lifetime to challenge my ability to adapt. When the personal computer became accessible, my parents were in their thirties and got one right away. I learned to use it at home after elementary school. I learned to type and use spreadsheets on a computer in my public middle school. When dial-up services came on the scene, I connected my PC to the first internet communities and chatted online in high school in the eighties and college in the early nineties. A few years after Tim Berners-Lee invented the World Wide Web in 1989, my friend Sam showed me a primitive website with pictures of ancient caves published by someone far away from our college. A few years after that, I worked at Ask Jeeves, an early web search company. When the cell phone became ubiquitous in the late nineties, I texted my friends last-minute social plans, to the amazement of my parents’ generation. When the smartphone came out in the mid-2000s, I started using one without thinking twice. But AI makes me feel the calcification of age. It’s the first widespread technology in my lifetime that I just don’t want to deal with. I’m fortunate to be an insider. It’s my job to help my organization understand and use AI. I see so much potential to positively impact the world I live in and the world my grandchildren will live in. But it’s really hard. 
Contrast my situation with that of my daughter, who is experiencing the start of AI in the same way I experienced the dawn of the World Wide Web. ChatGPT arrived during her freshman year of college. Her brain and expectations were what neurologists call plastic—still moldable. She started using AI because she didn’t know any different. It’s been amazing to see how rapidly her methods of accomplishing her schoolwork have evolved. My daughter is responding to AI under a new evolutionary pressure. We’re used to thinking of evolution resulting from something “bad” happening. A comet striking the earth. Climate change. A new virus. Artificial intelligence is not that, though it may sometimes seem so. Artificial intelligence is like the printed book. The invention of moveable type was an evolutionary pressure that accelerated and widened the exit of civilization from the Middle Ages. Europe’s Middle Ages were not romantic knights and princesses. Picture population decline, feudal subjugation of peasants, plague, famine, and wars. One war lasted so long, it was called the Hundred Years’ War. It was so bad that some historians referred to the whole mess as the Dark Ages. Europe got out of this civilizational decline because of printed books. More and more knowledge was captured in books. More and more people learned to read. People could contribute, could create value beyond their back-breaking manual labor, fighting ability, or birth. A kid who learned to read could grow up to do anything. People living in Europe through historical periods following the Middle Ages came to value book-enabled knowledge and education for pulling them out of their grandparents’ and great-grandparents’ desperate times, when the graves from the plague were still fresh. Books and education were so revolutionary in terms of human well-being, people in Europe and elsewhere established public libraries and schools to further share and democratize knowledge. 
Let’s jump into the future and rewrite that last passage: “More and more knowledge was captured in AI. More and more people learned to use AI. People could contribute, could create value beyond their mind-numbing office labor, their access to expensive higher education, their network of rich friends. A kid who learned to use AI could grow up to do anything. People living through historical periods following the chaos and stagnation of postmodernity came to value AI-enabled knowledge and education for pulling them out of their parents’ and grandparents’ desperate times, when graves from the pandemics were still fresh. AI was so revolutionary in terms of human well-being, people established public large language models.” I see AI as having the same potential to improve our fragile world as the book did hundreds of years ago. Enough to quit my job and write this book. Enough to creakily learn to use AI so I can respond to its evolutionary pressure just as my ancestors did with books. And it’s both as simple and complex as that. Use AI. The more we use AI in a thoughtful, informed way to improve the quality of our work and our lives, the better the long-term outcome for us as individuals and for our society. Like it or not, AI is an inevitable and inextricable part of our lives, just like all the revolutionary technological changes that came before: the printing press, the household telephone, the pocket camera, the personal computer, the World Wide Web, the smartphone, and social media—all of which became extraordinarily beneficial when put to creative use by billions of humans. All of which have their own side effects and pitfalls. In every case, recognition of the costs, benefits, and creative use of the technology by people like you steered (or is steering) these industries to better human outcomes through user (consumer)-driven change. The same can happen with AI. Where to start? Augment Your Life Start by answering three questions: What are you good at? 
What do you want to be better at? What do you need to do but takes an unsustainable amount of time or effort? If this feels like therapy, or is maybe a bit uncomfortable, you’re not alone. Another word for augmentation is “self-improvement” or “self-help.” It can be challenging to take a critical look at your life and how you live it and then try to make changes. It’s even weirder to do that and then consider getting help from “artificial intelligence.” But doing so can help you succeed, lead, and remain engaged in the modern era. Let’s take myself as an example. I’m good at coming up with creative ideas. I want to be better at doing my laundry regularly. I need to keep my email inbox clean, but it takes too much time. We turn the tables on technology when we approach it with the goal of living a more satisfying life. My ultimate goal isn’t to “use AI” any more than it is to “use a smartphone.” My goal (and yours) should be to get more out of my natural efforts and abilities, enhance creativity, and pursue new and different projects that I might not be able to tackle on my own. How can AI be a means to this end? I’ll go first. I started a completely unrelated podcast as a creative outlet about a year before writing this book. While I was writing, my daughter and I started another podcast to share stories of living with AI, also called “You Teach The Machines.” We figured that since this is all so new, lots of people are going to have new and different experiences with AI and it would be helpful for others to hear about them. A fun podcast needs music, so we made a theme song with a music generation AI. I wrote the lyrics and set a few other parameters, and in about an hour we were able to dress up our human discussion with machine-generated music. My creative contributions were the lyrics I wrote and the direction I gave the AI. Artificial intelligence helped me make more of my ideas by generating a catchy tune, along with vocals. 
It has turned out to be a hit with the college students we interview! Now, do I value this music as much as the original music I paid my friend Jay Nash to write, perform, and record for my other podcast? No. My collaboration with Jay led to a live performance on stage together and ongoing creative human collaboration. Did I create a fun little musical addition that enhances our AI podcast more than generic stock music? Yes! It’s always fun to learn in areas where you’re already familiar, so if you’re new to AI, music is a great place to start. Everyone is familiar with washing dirty clothes. We have to do it; we don’t want to do it. I want to be better at doing my laundry regularly. My clothes build up on both the dirty and clean side of the washing machine cycle. My hamper is always full of two to three loads, which creates an artificial mental block in and of itself. I feel great when I manage to run it all, fold it, and reflect confidently on a two-week supply of clean underwear. But that’s not happening regularly. So what’s a way that AI could help? (Besides a laundry robot—we’re not there yet, and, tbh, the waiter robots I saw in a dim sum restaurant in Chicago were both creepy and entertaining, but I can’t imagine having one in my house.) We’ll start with the ground rule that the machine isn’t going to do my laundry for me. A simple use of AI to improve my laundry habits is to use tools for behavior or habit change. I asked both my smart speaker and the digital assistant on my phone to set weekly reminders to start a load of laundry on Thursday evening, switch to the dryer Friday morning, and prompt me to fold on Friday evening. You may already be doing something similar in your life. Guess what? It worked! Gentle reminders are a good start, but what if I had less laundry in the first place? I enlisted AI to reduce the amount of laundry I have by finding clothing that doesn’t require as frequent washing. 
Retailers have been working on AI-enabled wardrobe recommendations since the dawn of e-commerce. In fact, a social media algorithm recently profiled me as an “outfit repeater” as it served me an ad for odor- and stain-resistant pants. Long ago, in a dot com boom far away, I worked as a software engineer at Ask Jeeves, an early attempt at AI-enabled web search. We didn’t call Jeeves “AI” at the time; that term was out of vogue, redolent of mid-century science fiction. Instead, we called Jeeves “Natural Language Understanding,” the marketing term for natural language processing, which we defined in Chapter 1. We built a wardrobe recommendation search engine so Jeeves could pick out a Gap or Nike outfit like a proper digital butler. Fast forward twenty-five years and I can try using any number of free AI-enabled personal clothing assistants “who” will set me up with an entire wardrobe of outfits I can “repeat.” In fact, I did just that while writing this book! There are a bunch of clothing recommendation apps available on your smartphone. Turns out that using X resulted in Y. What don’t I have time for but need to get done? This one is straightforward and probably something you’re already benefitting from anyway. Go into your email account and look for your spam or junk folder. Open it and witness the result of teaching a machine to do beneficial work. We discussed this in a previous chapter, but it’s worth revisiting. Spam email is a fact of life. Artificial intelligence controls spam. You couldn’t pay me enough to filter spam out of my inbox. I’m very grateful for the AI that does it for me. Now It’s Your Turn Start by answering these three questions for yourself: What am I good at? What do I want to do better at? What do I have to do but life gets in the way? Try not to overthink it; just write down whatever comes to mind for each. Now go through the exercise of looking for ways to augment your life using AI for each question. 
If you get stuck, it may well be that there isn’t a ready answer—yet! If that’s the case, consider what you would want from an AI tool and be on the lookout. If you find an option that seems helpful, remember to always evaluate the AI using what you’ve learned in the previous chapters. You are an informed user and consumer of AI, and you are capable of making a decision to use or not depending on your own viewpoint. It’s important to step back and focus on your humanity now that you’ve considered AI in your life. The most important thing you can do is recognize what makes you uniquely human. You don’t want to be a machine. You’ll never compete successfully with a machine. So don’t try, and instead focus on your human abilities. That’s the point of a wonderful book called Futureproof: 9 Rules for Humans in the Age of Automation, by Kevin Roose. Here’s an example. My friend Jean is an architect with a thriving independent practice. Another architect I know, David, is getting out of the business after seeing his income erode. David is incredibly technically proficient. His eye for design, proportion, and ability to model is top notch, but he’s not so great with people. Jean is an excellent designer, but maybe not as good as David. What Jean has is empathy, patience, and an understanding of how to help her clients navigate the rocky emotional and financial experience of designing and building something as personal as a home. Jean brings her humanity to bear on her business. She has what some call emotional intelligence, and importantly, the ability to use it in her work. David, not so much. David and Jean are already subject to the pressures of automation. AI-enabled architecture design software is becoming ubiquitous. I’m not worried about Jean in the coming years, but David is smart to seek alternatives. Augmentation shouldn’t be about becoming a cyborg. 
Futureproof is a great read because it illustrates that if you just use machines to hustle harder and work faster, you’ll eventually be replaced by a robot. Augmentation should be about helping yourself be more human, doing more of the things humans can do. Cyborgs and Terminators are creepy because they are machines pretending to be humans. When we use AI to try to make ourselves more machine-like, we’re creepy too. Use AI to be more human, not the other way around.

Reading Futureproof can help you understand your uniquely human qualities in a world of increasing automation. It’s an important book because it can help you develop a defensive strategy for the change AI is bringing, and differentiate yourself by strengthening your best human qualities. In this book, I strive to present an optimistic offensive strategy. I hope to help you build on the message of Futureproof and use AI so you can make informed choices, influence how AI develops, have more fun, and have an even greater impact on the world around you.

Using AI for Personal Safety

My mutt dog, Lilo, is a perfect example of specialized, superhuman intelligence in action. She lies dormant on the couch or bed until her sensors detect something of concern. It could be footsteps on the driveway, the scent of a fox approaching our duck coop, or the sleepwalking of one of my children. I will never surpass her ability to hear, smell, or intuit. I will never beat her vigilant cognitive processing, which detects and responds to concerns at all hours, day and night. Even in a deep sleep, upside down, looking ridiculous, she will lift her head, cock an ear, and sniff the air to gather more information. Above a certain threshold, she growls a warning. When a threat is confirmed, she loses her mind, barking and scratching at the door to defend her humans from possible harm. If I were allergic to dogs, I would want a machine or alarm system that could do all of this for me.
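Lilo-style vigilance can be sketched as simple anomaly detection: learn what “normal” looks like from past sensor readings, then flag large deviations. The toy Python sketch below is purely illustrative (the readings, threshold, and function names are invented for this example; a real alarm system is far more sophisticated):

```python
# Toy anomaly detector in the spirit of Lilo: learn what "normal"
# sensor readings look like, then "growl" at outliers.
# All numbers and names here are invented for illustration.
from statistics import mean, stdev

def train_baseline(readings: list[float]) -> tuple[float, float]:
    """Learn the 'normal' level from historical sensor readings."""
    return mean(readings), stdev(readings)

def is_suspicious(reading: float, baseline: tuple[float, float],
                  sigmas: float = 3.0) -> bool:
    """Flag a reading far outside the learned normal range."""
    mu, sd = baseline
    return abs(reading - mu) > sigmas * sd

quiet_nights = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]  # e.g. nightly sound levels
baseline = train_baseline(quiet_nights)
print(is_suspicious(10.4, baseline))  # an ordinary night -> False
print(is_suspicious(35.0, baseline))  # footsteps on the driveway -> True
```

The same learn-the-baseline, flag-the-outlier pattern underlies many real safety monitors, just with far richer sensors and models.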
Safety concerns present humans with one of the greatest opportunities to benefit from AI. A machine can be taught to be ever vigilant and to detect possible harm before we can. Machines can learn from our environment what is “normal” and what may be something to worry about. A machine can learn that the mail is delivered every afternoon. Which, apparently, my dogs can’t do, because they attempt to murder our mailman, Bill, every day at two-thirty in the afternoon, even though he leaves them dog biscuits.

Safety-critical situations are a great place to take advantage of AI. By adding a machine into your life to keep you safer, you’re building layers of redundancy. In my own life, I have seen major safety benefits from the AI in my car.

Imagine Lilo perched on the dash of my car. She stares intently through the windshield, never blinking. I put the car in reverse to back out of the driveway and she runs to the back of the car, furiously scanning through one hundred and eighty degrees for any hazard. She barks at the sight of a pedestrian walking into the path of the car, out of my line of sight but visible to Lilo’s well-trained eyes. I brake and the pedestrian strolls safely by (hopefully without a dog of their own, so I don’t end up with the two knuckleheads barking uncontrollably in the car).

The good news is you don’t have to train a dog to be your safe-driving assistant. Most new cars now include AI systems that do even more to keep you safe on the road. As we discussed in chapters 1 and 2, these systems are trained to recognize a range of hazards through cameras, radar, and driver inputs to the steering wheel. Some even watch your eyes and can detect when you’re getting sleepy. Every time you drive a car with one of these systems, you’re putting the machine through continuing education.
The car records and uploads data about the safety system to the manufacturer to help improve the AI (and probably to sell your data to Starbucks so they can decide where to put a new store, but let’s focus on the positives). Driving safety is a very positive outcome of all of us collectively teaching machines.

I know I’m a safer driver in my reasonably priced 2024 Subaru, which came standard with Advanced Driver Assistance Systems (ADAS). My ADAS has prevented at least two collisions where I absolutely would have rear-ended another driver. In both cases, it detected a car coming to a sudden stop in front of me and slammed on the brakes just before I did. The machine saved me from an expensive inconvenience at best, and from hurting another driver and myself at worst.

When our teenage drivers go on long trips, this is the car they take. My wife and I used to have one criterion for a kid’s car: the cheapest one with air bags and a top crash safety rating. Now, when we can afford it, we help them replace their old beaters with cars running AI safety systems. Never going back!

This safety feature also helps my dad drive safely at night now that he’s lost vision in one eye. When he got his first car with an AI safety system, his stress levels dropped from knowing that everyone would be safer when he had to drive my mom to the hospital or come home from a concert after dark.

I am a car enthusiast and truly enjoy the raw driving experience. I own and drive three classic vehicles (a 1982 Jeep CJ5, a 1982 GMC C6000, and a 1995 Toyota Tacoma) that don’t even have airbags, much less sensors and AI safety systems. Warning beeps and robots making decisions for me seemed like an intrusion into a meditative escape. That can be true, but the value of these systems is so clear that I now do ninety-nine percent of my driving in the 2024 Subaru with the help of AI. AI can really, truly improve our safety, the safety of our loved ones, and the well-being of the people with whom we share the road.
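At its core, the automatic-braking behavior described above reduces to a time-to-collision estimate: how many seconds until impact if nobody changes speed? Here is a toy sketch in Python (the 1.5-second threshold and all numbers are invented for illustration; real ADAS fuses camera and radar data and is vastly more complex):

```python
# Toy forward-collision logic: brake automatically when the estimated
# time to collision drops below a threshold. Illustrative only.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither car changes speed."""
    if closing_speed_mps <= 0:  # not gaining on the car ahead
        return float("inf")
    return gap_m / closing_speed_mps

def should_auto_brake(gap_m: float, my_speed_mps: float,
                      lead_speed_mps: float, threshold_s: float = 1.5) -> bool:
    """Trigger automatic emergency braking when impact is imminent."""
    return time_to_collision(gap_m, my_speed_mps - lead_speed_mps) < threshold_s

# A car 20 m ahead brakes hard: I'm doing 25 m/s, it drops to 5 m/s,
# so we're closing at 20 m/s and impact is about one second away.
print(should_auto_brake(20, 25, 5))    # -> True, brake now
print(should_auto_brake(50, 20, 20))   # same speeds, not closing -> False
```

The interesting design question in real systems is the threshold: too sensitive and the car brakes for phantom hazards; too lax and it reacts later than an attentive human.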
Be a Smarter Patient

The potential for AI to help us be healthy, or at least less unhealthy, is remarkable. If AI captures human knowledge and makes it accessible at scale, then the knowledge of clinicians...
Audiobook: Chapter 4 Part 2 Side Effects and Pitfalls
01/10/2026
Listen to Chapter 4 Part 2 of my book You Teach the Machines! If you find this helpful, please support original writing and buy the full book wherever you get audiobooks. Also available in print, and my favorite: delivered to your local bookstore. Help other readers by leaving a review on Amazon or Goodreads! Thanks so much --Jeff

CHAPTER 4: Side Effects and Pitfalls

“The vitality of democracy depends on popular knowledge of complex questions.” —S.S. McClure

Writing this chapter, in which I present what many see as the “bad news” of AI, was simultaneously depressing and encouraging. Depressing because, at the time I’m writing, a relatively small number of large corporations are deploying AI into our lives as fast as possible. And it’s all pretty opaque. Encouraging because major change from AI has yet to happen. There is time for you, me, and our loved ones to shape change for the better. To be a driver, not a passenger. You teach the machines.

The words came easily, but I became dejected while building a point of view from facts, interpretation of facts, and theories to explain what is not publicly available. There’s a lot that is behind a curtain. You intuitively know AI will reshape your life. Simultaneously, you don’t understand how. You can be overwhelmed by this combination of knowledge and uncertainty. I became overwhelmed and depressed as I considered the negative implications of this new technology, accelerated by a generational deployment of capital, concentration of wealth, erosion of education, disruption of jobs, and shifting global security. My editor stepped in and coached me to focus on the specific, the actionable. Always good advice. In this chapter, you’ll see my editorial point of view come through, so be a critical reader. Know that I remain an AI optimist, so I try to balance points of potential doom with action you can take.
The legal and publicity departments at the companies I discuss may argue with what I write. In many ways I am rooting for these same companies to succeed. They’re doing incredibly difficult and historic work. I invite them to help make a second edition of this book even better. But a corporation is legally obligated to seek one simple outcome: maximize profit. The reality is that better human outcomes depend entirely on you, me, your parents, your kids, the values we teach, and the decisions we make. I try to give you at least some idea of how you can be part of the solution to the problems I discuss.

But if there is one thing you should take away from this chapter, it is that you need to prepare for the unknown. Prepare by taking stock of your first principles. Mine are “Be nice. Get stuff done. Make things less crappy.” Medical professionals go with “Above all, do no harm.” What are yours? We’re in for a lot of change, currently driven by corporations in effect experimenting and gambling with our economy and lives. Anchor yourself with clear principles that can steer you when unexpected change from AI hits. Humans are built to adapt. We’re going to do a lot of adapting in the coming decades.

A side effect is an unintended bad thing you experience from doing something else: a headache from taking antibiotics, maybe. A pitfall is a known hazard you allow yourself to fall into: a headache from drinking too much. An unintended side effect in the world of AI? Depending on your point of view, the relative reduction in investment in renewable energy in favor of investment in nuclear energy. A corresponding pitfall “we” knowingly step into with more nuclear energy? The coming increase in solid nuclear waste stored on site at nuclear energy plants, at least in the U.S., because we as a society, represented by the people we’ve elected for the past twenty years, are politically unable to pull off long-term consolidated storage. See Yucca Mountain.
But even this side effect can have a balancing upside. Investment in nuclear energy is bringing real innovation in the form of more efficient, cleaner nuclear reactors. And if you consider a reduction of investment in renewable energy a side effect because of climate change, then you have to consider that nuclear energy is better than burning more fossil fuels.

Regrets

In May of 2023, Geoffrey Hinton resigned from Google. Eleven years earlier, he and his team had achieved a breakthrough in deep neural networks at the University of Toronto. They founded a company that was quickly bought by Google for $44 million. Dr. Hinton went to work for Google to advance the research. Ten years later, at the time of his resignation, he stated that a part of him regrets his life’s work (Kleinman & Vallance, 2023).

A principal inventor of modern AI regrets his life’s work. Sit with that. Geoffrey Hinton, an insider’s insider, knows AI as well as or better than anyone else on the planet. He resigned from Google, the original industrial AI company, so he could speak freely about the hazards he sees at Google and beyond. Dr. Hinton was a hero of sorts to me and my colleagues working in health AI long before he stood on principle. He worked for decades against conventional wisdom to prove the power of computer programs, modeled on neurons in the brain, that learn skills by analyzing data.

After Google joined the AI arms race started by Microsoft’s investment in OpenAI in 2019, he became concerned that his company and its competitors were moving too fast, given the stakes for the rest of us. He grew concerned that the rapid proliferation of “fake” AI-generated text, video, and voice would make it impossible for us to know what was true. He grew very concerned that we would lose our jobs and incomes as AI replaced or cheapened the labor of paralegals, analysts, call center workers, writers, lawyers, financial experts, doctors, nurses, engineers, and software programmers.
He became very, very concerned with the weaponization of AI into autonomous killing machines.

Dr. Hinton wasn’t alone. Even before his resignation, over a thousand technology leaders called for a moratorium on training advanced AI. They wanted time to understand possible side effects and work to minimize the harm of known pitfalls. Too late. A year later, Microsoft effectively bought a nuclear power plant. Bezos, Musk, Pichai, Nadella, Altman, and Cook—the modern-day Stanfords, Rockefellers, Dukes, and Morgans—couldn’t risk someone else winning. Shareholders demanded returns. Not just some mysterious shareholder “other,” but each and every one of us invested in the tech-heavy U.S. stock market. Google ignored the call for a moratorium and rolled out AI-generated search answers at the top of its search page.

Change

Side effects and pitfalls flow naturally from change. Artificial intelligence is a miles-long freight train of change driven by hundreds of billions of dollars. You, I, your parents, your kids are locked in a stalled car at the railroad crossing. Artificial intelligence is changing, or will soon change, how you write a report for work or an essay for school, improve your firm’s profits by automating junior associate work, drive a car, identify mental health problems, deny insurance coverage, get your electricity, trust or mistrust information, experience art and entertainment, and fight wars. Which changes will bring side effects? Which have known pitfalls? Wouldn’t it be nice to take a minute and think about it? Like the experts wanted “way back” in 2023?

Practical AI went from invention to industry in ten years. Deep neural networks broke through in 2012 and became scalable five years later with the Transformer in 2017. Corporate industrialization into a financially and politically intertwined handful of corporations? Five years, between 2017 and 2022. What took more than one hundred years for the first Industrial Revolution took only ten for AI.
As I write this in 2025, the Big AI companies are in a race to remake the knowledge economy. How many quarterly earnings reports do you think they’re willing to produce before they can report returns to their impatient investors? The leadership and shareholders of the Big AI companies in the U.S. alone are betting hundreds of billions of dollars that they can return trillions as fast as possible. Look at the concentration of wealth in the hands of the leaders of these companies and their investors. Again, they have a legal obligation to maximize profits. Do you think they’re truly, fundamentally interested in growing the whole pie?

AI has never happened before. It went from theory to practice in ten years. The economy of AI as it’s currently playing out means the richest corporations control the means of production right up front. Contrast this with the rise of the internet and World Wide Web. Public communication protocols arose out of publicly funded research and were taken up by anyone with a computer and a phone line. Web browsers and server software freely available to all allowed people to use their existing phone lines to build their own websites at home. Internet service providers sprang up at the local town level. The web quickly became of, by, and for the people.

Artificial intelligence is on the opposite track. The Big AI corporations possess barely comprehensible financial power. They use real and perceived expertise to gain political influence, based in part on a popular assumption that AI is central to national security. Multiple sessions of Congress and multiple presidents have come and gone with no new regulatory guardrails in the U.S. Hundreds of billions of dollars already at stake demand returns.

It’s as if Gutenberg and the early printing press experts weren’t chased out of Mainz during an unrelated religious power struggle so the printing press could disseminate organically.
It’s as if, instead, they formed a corporate combine, an industrial business group that held absolute power over the manufacture and use of the printing press. It’s as if they cozied up to and contributed hundreds of millions in dark money to the most powerful leaders in Europe of every party to insulate themselves from regulation and maximize profits. It’s as if they relaxed standards on control of misinformation in the books they printed to gain influence. It’s as if human knowledge and skill were captured, transferred to others, and used for good and bad in entirely unprecedented ways by corporations concerned only with winning the trillions of thalers at stake. Rapid societal, cultural, and economic change directed by profit. Maybe it’ll be fine. Let’s look at puppies and covet the lives of others on social media instead of worrying.

With that rant out of the way, it’s time to buck up and be part of the solution. “You teach the machines” could mean you’re passively milked of your data and money. Or it could mean you’re in front of the room, directing and taking charge. You’re the windshield, not the bug. You teach the machines.

Dr. Hinton’s Fears

Geoffrey Hinton gives us a framework for the first major known side effects and pitfalls we’ll discuss: misinformation, job loss, and killer robots (the use of AI in war). In my professional life, I’ve spent a lot of time working on the first two, albeit focused on the health sector. Thankfully, I’ve never met a killer robot in a “hot war” but have had direct experience with scary “cold war” AI threats and harm.

Misinformation

Consider the same internet search, three months apart, for “european causes of accidental death” using the Google search page, as around ninety percent of us globally do. I have screenshots of everything to prove this actually happened.
On November 19, 2024, Google’s AI Overview listed the causes as:

- road traffic injuries
- drowning
- falls
- burns
- poisoning

It stated that “road crashes are the most significant cause,” which I took to mean this was a rank-ordered list.

On January 27, 2025, Google’s AI Overview reordered and changed the causes to:

- road traffic accidents
- falls
- drowning
- poisoning
- work-related accidents

In three months, falls overtook drowning, and burns dropped from the list to be replaced by work-related accidents. Which is true? Turns out neither, according to my own research on the website of Eurostat, the statistical office of the European Union, which Google seemed to point to as a source for both AI Overview results. A half hour spent with the freely available data there revealed the causes, in order of decreasing death rate (deaths per 1,000 people), to be:

- falls
- other
- transportation accidents
- poisoning
- drowning

“Other” includes a scary list of things like being struck by falling objects, exposure to animate mechanical forces, and overexertion.

Generative AI by its very nature will always give you (generate) an answer, an image, a song, a diagnosis. Truth doesn’t matter in subjective, “eye of the beholder” circumstances like composing an advertisement, where you just need a starting point. Truth matters in the realm of life-and-death facts and figures. Say you’re a busy European legislator and need a sound bite for your speech about accidental death. You draft the speech with results from the top of the search page and understandably miss the small-font disclaimer that “Generative AI Is Experimental.” This is what Dr. Hinton was talking about when he resigned. People lose sight of what is really true with unreliable or manipulated AI as an intermediary. You’ll hear the term “hallucination” thrown around when AI presents something that you figure out is straight up wrong.
I prefer “drunk uncle” because I’d rather think of AI in the context of the example above as an inebriated relative spouting off malarkey and conspiracy theories. Hallucination is a kind word used by some people who don’t want you to think too hard about the fact that they are putting out technology that is inherently flawed, that they know it, and that they are more interested in profits than the integrity of information. A lie is an intentionally false statement. I have to believe Google knows its AI Overview makes false statements, yet they put it out there intentionally. Hallucination, drunk uncle malarkey, or lies? You decide. And by so choosing, exert influence.

I’m going to leave you to extrapolate to other important situations where you or someone you trust uses an unreliable intermediary like AI Overview: school, health, work, personal finance. A single screwy web search may seem harmless until you multiply eight and a half billion searches per day by this demonstrated potential to be both inconsistent and wrong. Is this OK?

Let’s be the windshield, not the bug, and—to mix a metaphor but keep it automobile-related—put ourselves in the driver’s seat. What’s your first principle for truth? When do facts matter to you? What’s your threshold for trusting an intermediary? When does it matter if you believe your drunk uncle or not? Start by deliberately picking and choosing when you take information at face value when it comes to you via AI. Are the stakes low or high for whether the information is true? Would you go with what your drunk uncle tells you, or ignore him? Be a skeptic at whatever level is right for you in that situation. When objective truth matters, AI should be considered wrong until proven right, at least as it’s being rolled out to us in the mid-2020s. If you’re going to trust AI, consider verifying through a third party that the AI has controls in place to detect and remove misinformation.

Fake text, audio, and video are easily generated with AI.
A one-time spike in downloads of my podcast originated in a foreign country a few months before I started writing this book. At the time, we’d published thirty-three episodes, with only one or two downloads ever from that country. I was excited to see an unexpected bump in downloads, but something looked fishy. All thirty-three episodes had been downloaded at once, something that had never happened before. I looked for information using Google, and it turns out the podcast community sees this type of activity regularly, and not just from foreign countries. The accepted explanation is that these bulk downloads are data harvests by AI companies working on voice generation. The hard truth is that I am at increased risk of a “deep fake” of my voice because my recordings have been harvested by an unknown entity in a country sometimes viewed as an adversary of my homeland. For that reason, we don’t use last names for guests on the show.

Misinformation can be more subtle, too. I have a couple of friends, one young, one… not so young, who are single and trying to meet people to date. Instagram shows them both a steady stream of content along the lines of “You don’t need anyone! You’re strong, independent, and don’t need anybody!” TikTok feeds another friend a steady stream of “The opposite sex is controlling and mean!” Major social media apps show us what they think will grab and keep our attention.

In the olden days of the web, when I worked at Ask Jeeves, we talked about the “engagement” and “stickiness” of a website. Could we gain “eyeballs” and keep them looking at a website longer so we could show an ad or sell something? Social media makes billions on the same principle. Behind the scenes at Meta, the understandable set of rules that Instagram and Facebook started with (my friend is interested in boats, so maybe I will be, too) has been replaced entirely by AI. And that AI will do whatever it takes to gain and hold our attention.
It learns that negative stereotypes and “us vs. them” division will grab lots of people. We’re evolutionarily hardwired to respond more to differences than similarities. So that’s what social media AI feeds us.

When it comes to truth, social media AI is effectively unmanageable. It may be politically expedient for the leadership of social media companies to say they are increasing or pulling back on their fact-checking efforts, depending on which party is in power. But the reality is they’ve lost control and couldn’t do the fact-checking if they wanted to. The machines they taught to gain and hold our attention move too fast and are too complex to govern.

How have you seen misinformation spread in your life? Where could AI have played a role? Did you contribute to the spread? Remember, you teach the machines.

What’s the windshield stance with more subtle misinformation? Decide what your first principles are. When do you care that you’re being manipulated for your attention? Turn awareness into action and make more deliberate choices about what social media you use and how you interpret and consume social media content. Run an experiment: click on a series of negative or divisive suggestions from the AI. Search for a divisive or negative topic. Observe how your feed changes. Do your friends and loved ones use social media? Have a conversation with them about what you discovered in your experiment. Don’t like what you see? Engage with social media AI on your own terms. Vote with your feet and seek alternatives.

Job Loss: Automation over Augmentation?

Artificial intelligence machines can be taught to do work that once only humans could do. A friend asked me for help understanding AI. He was in a new job and had been tasked with learning the alphabet soup of AI: LLM, GPT, GPU, NLP, GenAI. I happily obliged over lunch. We had a great discussion, during which he shared public information about his company’s products.
One of the products is an AI that watches video feeds from multiple hospital rooms. Its job is to watch the patient on the video...
Audiobook: Intermission Bloopers!
01/10/2026
Show Notes: Audiobook Intermission – The "Human Error" Blooper Reel

Recording a book about high-tech Artificial Intelligence is hard. Being a non-artificial human in a house with dogs and teenagers is even harder. In this special "Intermission" episode of the You Teach the Machines companion podcast, we’re taking a brief, lighthearted break from the heavy lifting of Chapter 4 to bring you the glorious, unedited mess that happened behind the mic in Jeff's home studio. If AI is a mirror of humanity, this episode is the mirror before it’s had its morning coffee.

What’s Inside the Blooper Reel:
- The "Home Studio" Reality: Hear the background noise of a busy second-floor office that Jeff affectionately calls a "studio."
- The War on Barking: Watch (well, listen) as Jeff battles a persistent four-legged intruder who clearly has strong opinions on artificial intelligence.
- Family vs. Recording: The exact moment Jeff's daughter, MJ, breaks the "fourth wall" to announce an 11:00 AM meeting.
- Human Agency in Action: Jeff decides to leave the "mess" in the audiobook because, as he says, "You can pause me, bro."

Meet the (Very Human) Author: Jeff Pennington

Jeff has spent three decades leading data strategy at places like Ask Jeeves and the Children’s Hospital of Philadelphia (CHOP). He’s a sought-after speaker on AI ethics and healthcare data, but as you’ll hear in these outtakes, even a leading voice in AI literacy can be brought to a standstill by a bathroom door opening or a dog that refuses to stop "teaching the machine" its own version of a sequence model.

The Multigenerational Lesson

This intermission perfectly illustrates the "Printing Press" moment we are in. Technology allows Jeff to record a professional audiobook from his upstairs office, but it also captures the raw, multigenerational reality of modern life. While the machines are striving for "mathematical averages," humans are busy navigating meetings, pets, and family interruptions.
That messiness is exactly what makes us impossible for a machine to replace.

Listener Aid: The Intermission Transcription

Follow along with the silly chaos:

Jeff: "Go away! No, go away! Go away! Stop barking! ... You can pause me, bro. I'm going to leave that in the audiobook, though."
MJ: "[Laughter] I have a meeting at 11:00, so I'm going to make noise."

Continue the Conversation

Once you’ve finished laughing at the reality of home recording, join Jeff and MJ for more professional (but still accessible!) insights on the You Teach the Machines companion podcast.

Get the (Properly Edited) Book

To hear the version where Jeff actually finishes his sentences, download the full audiobook or grab a print copy. Don't forget to leave a review on Amazon or Goodreads to let us know which "human error" was your favorite!

Audiobook: Audible, Amazon, Apple Books, Google Play
Print & eBook: Amazon, Barnes & Noble, Bookshop.org

For more resources and "Human-in-the-loop" fun, visit the book's website.
Audiobook: Chapter 4 Part 1 Side Effects and Pitfalls
01/10/2026
Listen to Chapter 4 Part 1 of my book You Teach the Machines! If you find this helpful, please support original writing and buy the full book wherever you get audiobooks. Available from , , , and many more. Also in print at , , and my favorite: delivered to your local bookstore through Help other readers by leaving a review on Amazon or Goodreads! Thanks so much --Jeff CHAPTER 4: Side Effects and Pitfalls “The vitality of democracy depends on popular knowledge of complex questions.” —S.S. McClure Writing this chapter, in which I present what many see as the “bad news” of AI, was simultaneously depressing and encouraging. Depressing because, at the time I’m writing, a relatively small number of large corporations are deploying AI into our lives as fast as possible. And it’s all pretty opaque. Encouraging because major change from AI has yet to happen. There is time for you, me, our loved ones to shape change for the better. To be a driver, not a passenger. You teach the machines. The words came easily, but I became dejected while building a point of view from facts, interpretation of facts, and theories to explain what is not publicly available. There’s a lot that is behind a curtain. You intuitively know AI will reshape your life. Simultaneously, you don’t understand how. You can be overwhelmed by this combination of knowledge and uncertainty. I became overwhelmed and depressed as I considered the negative implications of this new technology accelerated by a generational deployment of capital, concentration of wealth, erosion of education, disruption of jobs, and shifting global security. My editor stepped in and coached me to focus on the specific, the actionable. Always good advice. In this chapter, you’ll see my editorial point of view come through, so be a critical reader. Know that I remain an AI optimist, so I try to balance points of potential doom with action you can take. 
The legal and publicity departments at the companies I discuss may argue with what I write. In many ways I am rooting for these same companies to succeed. They’re doing incredibly difficult and historical work. I invite them to help make a second edition of this book even better. But a corporation is legally obligated to seek one simple outcome: Maximize profit. The reality is that better human outcomes depend entirely on you, me, your parents, your kids, the values we teach, and the decisions we make. I try to give you at least some idea of how you can be part of the solution to the problems I discuss. But if there is one thing you should take away from this chapter, it is that you need to prepare for the unknown. Prepare by taking stock of your first principles. Mine are “Be nice. Get stuff done. Make things less crappy.” Medical professionals go with “Above all, do no harm.” What are yours? We’re in for a lot of change, currently driven by corporations in effect experimenting and gambling with our economy and lives. Anchor yourself with clear principles that can steer you when unexpected change from AI hits. Humans are built to adapt. We’re going to do a lot of it in the coming decades. A side effect is an unintended bad thing you experience from doing something else. A headache from taking antibiotics, maybe. A pitfall is a known hazard you allow yourself to fall into. A headache from drinking too much. An unintended side effect in the world of AI? Depending on your point of view, the relative reduction in investment in renewable energy in favor of investment in nuclear energy. A corresponding pitfall “we” knowingly step into with more nuclear energy? The coming increase in solid nuclear waste stored on site at nuclear energy plants, at least in the U.S., because we as a society, represented by the people we’ve elected for the past twenty years, are politically unable to pull off long-term consolidated storage. See Yucca Mountain. 
But even this side effect can have a balancing upside. Investment in nuclear energy is bringing real innovation in the form of more efficient, cleaner nuclear reactors. And if you consider a reduction of investment in renewable energy a side effect because of climate change, then you have to consider that use of nuclear energy is better than burning more fossil fuels.

Regrets

In May of 2023, Geoffrey Hinton resigned from Google. Eleven years earlier, he and his students had built a breakthrough neural network at the University of Toronto. They founded a company that was quickly bought by Google for $44 million. Dr. Hinton went to work for Google to advance the research. Eleven years later, at the time of his resignation, he stated that a part of him regrets his life’s work (Kleinman & Vallance, 2023).

The main inventor of modern AI regrets his life’s work. Sit with that. Geoffrey Hinton, an insider’s insider, knows AI as much as or more than anyone else on the planet. He resigned from Google, the original industrial AI company, so he could speak freely about the hazards he sees at Google and beyond. Dr. Hinton was a hero of sorts to me and my colleagues working in health AI long before he stood on principle. He worked for decades against conventional wisdom to prove the power of computer programs modeled on how neurons in the brain learn skills by analyzing data. After Google joined the AI arms race started by Microsoft’s investment in OpenAI in 2020, he became concerned that his company and its competitors were moving too fast, given the stakes for the rest of us. He grew concerned that rapid proliferation of “fake” AI-generated text, video, and voice would make it impossible for us to know what was true. He grew very concerned that we would lose our jobs and incomes as AI replaced or cheapened the labor of paralegals, analysts, call center workers, writers, lawyers, financial experts, doctors, nurses, engineers, and software programmers.
He became very, very concerned with the weaponization of AI into autonomous killing machines. Dr. Hinton wasn’t alone. Even before his resignation, over a thousand technology leaders called for a moratorium on training advanced AI. They wanted time to understand possible side effects and work to minimize the harm of known pitfalls. Too late. A year later, Microsoft effectively bought a nuclear power plant. Bezos, Musk, Pichai, Nadella, Altman, and Cook—the modern-day Stanfords, Rockefellers, Dukes, and Morgans—couldn’t risk someone else winning. Shareholders demanded returns. Not just some mysterious shareholder “other,” but each and every one of us invested in the tech-heavy U.S. stock market. Google ignored the call for a moratorium and rolled out AI-generated search answers at the top of their search page.

Change

Side effects and pitfalls flow naturally from change. Artificial intelligence is a miles-long freight train of change driven by hundreds of billions of dollars. You, I, your parents, your kids are locked in a stalled car at the railroad crossing. Artificial intelligence is changing or will soon change how you write a report for work, an essay for school, improve your firm’s profits by automating junior associate work, drive a car, identify mental health problems, deny insurance coverage, get your electricity, trust or mistrust information, experience art and entertainment, and fight wars. Which changes will bring side effects? Which have known pitfalls? Wouldn’t it be nice to take a minute and think about it? Like the experts wanted “way back” in 2023? Practical AI went from invention to industry in ten years. Neural networks emerged in 2012 and became scalable five years later with the Transformer in 2017. Corporate industrialization into a financially and politically intertwined handful of corporations? Five years between 2017 and 2022. What took more than one hundred years for the first Industrial Revolution took only ten for AI.
As I write this in 2025, the Big AI companies are in a race to remake the knowledge economy. How many quarterly earnings reports do you think they’re willing to produce before they can report returns to their impatient investors? The leadership and shareholders of the Big AI companies in the U.S. alone are betting hundreds of billions of dollars that they can return trillions as fast as possible. Look at the concentration of wealth in the hands of the leaders of these companies and their investors. Again, they have a legal obligation to maximize profits. Do you think they’re truly, fundamentally interested in growing the whole pie?

AI has never happened before. It went from theory to practice in ten years. The economy of AI as it’s currently playing out means the richest corporations control the means of production right up front. Contrast this with the rise of the internet and World Wide Web. Public communication protocols arose out of publicly funded research and were taken up by anyone with a computer and a phone line. Web browsers and server software freely available to all allowed people to use their existing phone lines to build their own websites at home. Internet service providers sprung up at the local town level. The web quickly became of, by, and for the people.

Artificial intelligence is on the opposite track. The Big AI corporations possess barely comprehensible financial power. They use real and perceived expertise to gain political influence based in part on a popular assumption that AI is central to national security. Multiple sessions of Congress and multiple presidents have come and gone with no new regulatory guardrails in the U.S. Hundreds of billions of dollars already at stake demand returns. It’s as if Gutenberg and the early printing press experts weren’t chased out of Mainz during an unrelated religious power struggle so the printing press could disseminate organically.
It’s as if, instead, they formed a corporate combine, an industrial business group that held absolute power over the manufacture and use of the printing press. It’s as if they cozied up to and contributed hundreds of millions in dark money to the most powerful leaders in Europe of every party to insulate themselves from regulation and maximize profits. It’s as if they relaxed standards on control of misinformation in the books they printed to gain influence. It’s as if human knowledge and skill were captured, transferred to others, and used for good and bad in entirely unprecedented new ways by corporations concerned only with winning the trillions of thalers at stake. Rapid societal, cultural, and economic change directed by profit. Maybe it’ll be fine. Let’s look at puppies and covet the lives of others on social media instead of worrying. With that rant out of the way, it’s time to buck up and be part of the solution. “You teach the machines” could mean you’re passively milked of your data and money. Or it could mean you’re in front of the room, directing and taking charge. You’re the windshield, not the bug. You teach the machines.

Dr. Hinton’s Fears

Geoffrey Hinton gives us a framework for the first major known side effects and pitfalls we’ll discuss: misinformation, job loss, and killer robots (the use of AI in war). In my professional life, I’ve spent a lot of time working on the first two, albeit focused on the health sector. Thankfully, I’ve never met a killer robot in a “hot war” but have had direct experience with scary “cold war” AI threats and harm.

Misinformation

Consider the same internet search three months apart for “european causes of accidental death” using the Google search page, which around ninety percent of us do globally. I have screenshots of everything to prove this actually happened.
On November 19, 2024, Google’s AI Overview listed the causes as:

road traffic injuries
drowning
falls
burns
poisoning

It stated that “road crashes are the most significant cause,” which I took to mean this was a rank-ordered list. On January 27, 2025, Google’s AI Overview reordered and changed the causes to:

road traffic accidents
falls
drowning
poisoning
work-related accidents

In three months, falls overtook drowning and burns dropped from the list to be replaced by work-related accidents. Which is true? Turns out neither, according to my own research on the website of Eurostat, the statistical office of the European Union, which Google seemed to point to as a source for both AI Overview results. A half hour spent with the freely available data there revealed the causes, in order of decreasing death rate (deaths per 100,000 people), to be:

falls
other
transportation accidents
poisoning
drowning

“Other” includes a scary list of things like struck by falling objects, exposure to animate mechanical forces, and overexertion. Generative AI by its very nature will always give you (generate) an answer, an image, a song, a diagnosis. Truth doesn’t matter in subjective “eye of the beholder” circumstances like composing an advertisement, and you need a starting point. Truth matters in the realm of life-and-death facts and figures. Say you’re a busy European legislator and need a sound bite for your speech about accidental death. You draft the speech with results from the top of the search page and understandably miss the small-font disclaimer that “Generative AI Is Experimental.” This is what Dr. Hinton was talking about when he resigned. People lose sight of what is really true with unreliable or manipulated AI as an intermediary. You’ll hear the term “hallucination” thrown around when AI presents something that you figure out is straight up wrong.
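At bottom, the fact-check is just a sort of the primary-source rates followed by a comparison. Here is a minimal sketch; the rates below are hypothetical placeholders, not real Eurostat figures, and only the ordering mirrors what the research turned up:

```python
# Illustrative fact-check: compare an AI Overview's ranked list of causes
# against a primary source. Rates are HYPOTHETICAL placeholders for
# illustration; only the ordering reflects the findings described above.
ai_overview = ["road traffic injuries", "drowning", "falls", "burns", "poisoning"]

primary_source = {  # hypothetical deaths per 100,000
    "falls": 9.0,
    "other": 7.5,
    "transportation accidents": 5.0,
    "poisoning": 4.0,
    "drowning": 1.0,
}

# Rank the primary-source causes by decreasing death rate.
verified = sorted(primary_source, key=primary_source.get, reverse=True)

print(verified)                       # ['falls', 'other', ...]
print(ai_overview[0] == verified[0])  # False: the AI led with road traffic
```

The point of the sketch: the verification step is trivial once you have the primary data, and it immediately shows the AI's top-ranked answer disagreeing with the source it appears to cite.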
I prefer “drunk uncle” because I’d rather think of AI in the context of the example above as an inebriated relative spouting off malarkey and conspiracy theories. Hallucination is a kind word used by some people who don’t want you to think too hard about the fact that they are putting out technology that is inherently flawed, that they know it, and are more interested in profits than the integrity of information. A lie is an intentionally false statement. I have to believe Google knows its AI Overview makes false statements, yet they put it out there intentionally. Hallucination, drunk uncle malarkey, or lies? You decide. And by so choosing, exert influence. I’m going to leave you to extrapolate to other important situations where you or someone you trust uses an unreliable intermediary like AI Overview. School, health, work, personal finance. A single screwy web search may seem harmless until you multiply eight and a half billion searches per day by this demonstrated potential to be both inconsistent and wrong. Is this OK? Let’s be the windshield, not the bug, and—to mix a metaphor but keep it automobile-related—put ourselves in the driver’s seat. What’s your first principle for truth? When do facts matter to you? What’s your threshold for trusting an intermediary? When does it matter if you believe your drunk uncle or not? Start by deliberately picking and choosing when you take information at face value when it comes to you via AI. Are the stakes low or high for whether the information is true? Would you go with what your drunk uncle tells you or ignore him? Be a skeptic at whatever level is right for you in that situation. When objective truth matters, AI should be considered wrong until proven right, at least as it’s being rolled out to us in the mid-2020s. If you’re going to trust AI, consider verifying through a third party that the AI has controls in place to detect and remove misinformation. Fake text, audio, and video are easily generated with AI. 
A one-time spike in downloads of my podcast originated in a foreign country a few months before I started writing this book. At the time, we’d published thirty-three episodes, with one or two downloads from the same foreign country. I was excited to see an unexpected bump in downloads, but something looked fishy. All thirty-three episodes had been downloaded at once, something that had never happened before. I looked for information using Google, and it turns out the podcast community sees this type of activity regularly, and not just from foreign countries. The accepted explanation is that these bulk downloads are data harvests by AI companies working on voice generation AI. The hard truth is that I am at increased risk of a “deep fake” of my voice because my recordings have been harvested by an unknown entity in a country sometimes viewed as an adversary to my homeland. For that reason, we don’t use last names for guests on the show. Misinformation can be more subtle, too. I have a couple of friends, one young, one… not so young, who are single and trying to meet people to date. Instagram shows them both a steady stream of content along the lines of “You don’t need anyone! You’re strong, independent, and don’t need anybody!” TikTok feeds another friend a steady stream of “The opposite sex is controlling and mean!” Major social media apps show us what they think will grab and keep our attention. In the olden days of the web, when I worked at Ask Jeeves, we talked about “engagement” and “stickiness” of a website. Could we gain “eyeballs” and keep them looking at a website longer so we could show an ad or sell something? Social media makes billions on the same principle. Behind the scenes of Meta, the understandable set of rules that Instagram and Facebook started with (my friend is interested in boats, so maybe I will be, too) has been replaced entirely by AI. And that AI will do whatever it takes to gain and hold our attention. 
It learns that negative stereotypes and “us vs them” division will grab lots of people. We’re evolutionarily hardwired to respond more to differences than similarities. So that’s what social media AI feeds us. When it comes to truth, social media AI is effectively unmanageable. It may be politically expedient for the leadership of social media companies to say they are increasing or pulling back on their fact-checking efforts, depending on which party is in power. But the reality is they’ve lost control and couldn’t do the fact-checking if they wanted to. The machines they taught to gain and hold our attention move too fast and are too complex to govern. How have you seen misinformation spread in your life? Where could AI have played a role? Did you contribute to the spread? Remember, you teach the machines. What’s the windshield stance with more subtle misinformation? Decide what your first principles are. When do you care that you’re being manipulated to gain your attention? Turn awareness into action and make more deliberate choices about what social media you use and how you interpret and consume social media content. Run an experiment: Click on a series of negative or divisive suggestions from the AI. Search for a divisive or negative topic. Observe how your feed changes. Do your friends and loved ones use social media? Have a conversation with them about what you discovered in your experiment. Don’t like what you see? Engage with social media AI on your own terms. Vote with your feet and seek alternatives.

Job Loss: Automation over Augmentation?

Artificial intelligence machines can be taught to do work that once only humans could do. A friend asked me for help understanding AI. He was in a new job and had been tasked with learning the alphabet soup of AI: LLM, GPT, GPU, NLP, GenAI. I happily obliged over lunch. We had a great discussion, during which he shared public information about his company’s products.
One of the products is an AI that watches video feeds from multiple hospital rooms. Its job is to watch the patient on the video...
Audiobook: Chapter 3 Industrial AI Revolution
01/10/2026
CHAPTER 3: The Industrial AI Revolution

“That’s exactly what I want!” My sister-in-law was thrilled to see a picture of a certain type of porch railing that she’d previously had only in her mind’s eye. “Thank you so much. Now I can show the architect!” My daughter grinned. She had just used AI to create a rendering of an architectural detail. She had translated her aunt’s meandering description into a series of “prompts” typed into an AI tool which then generated a few possible images. The AI had been taught with the knowledge and practical skill of thousands of architects whose work was published in magazines, books, and on the web. My daughter was able to quickly and easily access and harness the architects’ knowledge to accomplish a task without any prior experience or interest in porch architecture and design.

Before AI, knowledge transfer happened one-on-one, passed from an expert to an apprentice; one-to-many, passed from a teacher to a class of students; or one-to-more, passed from an author to readers of a book. Artificial intelligence offers a revolutionary path for the transfer of knowledge. When you teach the machine, it can transfer knowledge and skills from many experts to many, many more users of all backgrounds and skill levels across the globe.

We’ve been here before. Before the first Industrial Revolution, which ran from roughly 1760 to 1860, an expert shoemaker had an exclusive on the knowledge and skill to make shoes that fit comfortably and worked well. During the Industrial Revolution, inventors transferred the shoemaker’s expertise in leather-cutting and sole-stitching to automated machines that could do each task faster and just as, if not more, reliably. After the revolution, shoemakers were out of work, but more people could afford to buy shoes. We are now in the midst of the Industrial AI Revolution—and we can use the lessons of the past to give us an idea of where we are today.
Coal, Iron, and Steam

Coal, iron, the steam engine, and precision machining—that is, the use of machines to make other machines—were the interlocking factors that drove the Industrial Revolution. In 1760, a citizen of Great Britain or the United States likely worked on a farm, possibly growing wheat. Three generations later, their great-grandson worked in a factory operating a bread-making machine powered by a steam engine heated by coal. The bread-making machine was made from iron parts cut on a lathe, which was itself powered by a steam engine heated by coal. People in country after country watched their economies industrialize, changing from primarily agricultural to primarily manufacturing. All except Japan, whose Shogunate government managed to keep out the modern world until 1863, when the Choshu Five snuck out of the country to learn the secrets of Western industry and power in London and returned to overthrow the Shogunate in 1868—proof that it is impossible to ignore or avoid the forward march of human progress.

An analogous change is happening today with AI: Countries with strong knowledge economies—focused on white-collar and professional services—will see their white-collar knowledge worker economies change from all human cognitive labor to a significant amount of AI cognitive labor. Or, if we demand it, primarily human-plus-AI. Artificial intelligence is already changing, and will continue to change, the work of doctors, lawyers, financial analysts, engineers, computer programmers, anyone who learns a lot about a subject and then uses that knowledge in their work. The more that person currently works alone with information on a computer, the more their work will be affected. People who work with their brains and hands will be less affected—plumbers, artists, musicians who perform live. People who empathize, relate, listen, negotiate, and communicate with other people will be less affected.
Factors of Production

Just as coal, iron, the steam engine, and precision machining drove the first Industrial Revolution, the AI Industrial Revolution is driven by its own factors: nuclear energy, data, graphical processing units, and foundation models. I grew up on a farm in Pennsylvania near the Three Mile Island nuclear power plant, site of the worst nuclear accident in U.S. history when Unit 2 melted down in late March of 1979. Forty-five years later, in September of 2024, Microsoft ignored the negative association and announced that they had bought all of the electricity that Three Mile Island could produce for the next twenty years. Later in the fall, after Microsoft took the publicity hit, Amazon, Meta, and Google announced that they were all working on their own nuclear electricity generation projects—nuclear energy for computers that run AI. So why do we need all this nuclear energy? In essence, it’s about data, how we store it, and how AI companies process data to train AI. Artificial intelligence training happens in massive electricity-hungry cloud computing data centers sprouting up around the country.

What’s a cloud computing data center? Starting in the late 1990s, the new economics of the internet made it cheaper to locate your computers wherever you could get the best deal. Electricity and real estate prices became more important than the feel-good comfort of walking down the hall to look at your computer server. Enter the data center boom. A data center is kind of like a modular storage unit facility for your computer servers. Businesses rushed to move their computers from expensive floors in their headquarters to cheap leased space… wherever. Soon companies like Compaq realized they could rent computers in their own data center to other companies without having to box them up and ship them.
This became the first generation of “cloud.” The phenomenon was first described by young PhD candidate Ramnath Chellappa as a “computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone” (Biswas, 2025). Marketing people who grew up watching Bob Ross paint happy little clouds decided the executives who signed the checks would swallow this concept if they called it “the cloud” and adopted Ross’s imagery in their collateral. Not from the States, younger than me, or had cable when you were a kid? Look up Bob Ross on YouTube.

The smart-aleck phrase “There is no cloud; it’s just someone else’s computer” became my mantra when I had to negotiate high-stakes enterprise contracts for my employer with cloud service providers. The cloud service providers needed to be convinced that we weren’t going to blindly go along with their convenient buzzwords. I put a sticker of the “There is no cloud…” phrase beneath a sad-faced cartoon cloud on my laptop for the well-intentioned cloud computing company representatives to contemplate while we negotiated terms that would protect our data and our institution.

Amazon Web Services was the first really big cloud computing service provider. Jeff Bezos hated that he had to pay for a mountain of computers to sit idle all year just so his website wouldn’t crash when we all ordered books and DVDs at Christmas on Amazon.com. He told his team to figure out how to put them to work the other eleven months of the year, giving rise to even more buzzy terms like “elastic computing.” This became the second generation of cloud.

The pendulum swings back as always in history, largely because AI promises to do its thing in real time. If you are a doctor relying on AI to help steer your scalpel during robotic surgery, you’re going to want that AI on a computer close by so the camera data going to it and the scalpel-manipulation directions coming back don’t take too long in transit.
You want those systems connected on your own computer network, or at least on the edge of it. So now we have “edge computing,” fancy words for somebody else’s computer located down the street instead of somewhere in the “Eastern U.S. Region.” A friend works for a company quietly buying up office buildings vacated during the pandemic so they can replace cubicles with computers to run real-time AI for the remaining businesses in the neighborhood where the edge of the AI computer network butts up against the edge of the customers’ computer network. Once you have a reliable supply of electricity for your data center, you need the next major factor: data. In addition to storing more than 17,000 of my emails going back to 2004, Google collects email data from more than 1.8 billion people worldwide, more than 130 million in the U.S. alone, where they are legally allowed to use all of it to train AI. Assuming I’m a typical user, in the U.S. that’s 2,210,000,000,000 emails controlled by Google alone. And it doesn’t stop with emails, or with Google. The data of the rest of our lives are captured variously by companies like Amazon (retail, publishing, pharmacy), Visa (retail financials), Apple (photos, videos, retail financials), Meta (photos, videos, communications, social activity), JP Morgan Chase (finance, banking), and Epic Systems Corporation (health), along with a host of other quiet but enormous data aggregation companies with benign names like MX, Mobius, and Plaid. All of this data is raw material being used to train AI. With electricity and data secured, it’s time to process it. “We got approval for you to use our graphical processing units!” Bittersweet news from thrilled sales reps of more than one of the cloud computing service providers my employer contracted with. Sweet because we were able to convince mega corporations to allow us to do important but financially dead-end pediatric AI research. 
Bitter because we’d “won” the chance to pay mid- to high-six figures for the privilege of renting their graphical processing units (GPU) for a few weeks. Graphical processing units are the computers used to teach AI. The “graphical” part of the name confusingly doesn’t matter for AI; it’s left over from their first major use in video games. At the same time we got the bittersweet “good news,” the CEO of the leading maker of GPU engines that power AI used quarterly financials to proclaim, “The next industrial revolution has begun. Companies and countries are…using GPUs to…build AI factories to produce a new commodity: artificial intelligence” (NVIDIA, 2024a). He drove his analogy home by adding, “The age of AI is in full steam.…” The GPU is the new steam engine. “Wow, it’s like a third-year medical student.” Early research showed this kind of sci-fi potential when we used large language models made available by big AI companies as a starting point—the foundation. Google, Amazon, Meta, and Microsoft all created massively powerful foundational AI—known as “foundation models”—based on the “Attention Is All You Need” paper, discussed in the Introduction to this book. These companies made them available to the marketplace in various ways we’ll get into later. Researchers at my previous employer used foundation models to make new machines—previously unattainable AI that, by learning from medical data, could perform common clinical tasks like identifying a disease based on a description of symptoms. Use AI to build new AI. The foundation model is the new machine that makes machines. The “Attention Is All You Need” authors who invented the Transformer kicked off a chain of events leading to OpenAI’s GPT, the first foundation model AI to have a big impact. Nuclear energy, vast reserves of data, and GPUs all existed long before 2017, but foundation model AI—the machines to make machines, or in this case, other AI—did not. 
A foundation model is characterized not by what it can do directly, but by its potential to make other AI. Or rather, its potential to be “fine-tuned,” or further taught to do something useful (see chapter 1). Before the invention of the Transformer described in the “Attention Is All You Need” paper, it didn’t matter how many watts of energy, bytes of data, and GPUs you had; it was still prohibitively expensive in time to build a foundation model. Really powerful AI took exponentially more time to train with the tools available. Exponential growth means the cost outruns the input: when you doubled the amount of data you used to teach AI, you tripled (or more) the amount of time it took to learn. Double your training data over and over so the AI “knows” enough? You’re looking at decades or longer.

Ten years ago, at my former employer, we would stare wistfully at large volumes of data, money in the bank to pay the electricity bill, a healthy collection of GPUs, and the ability to buy more. And do nothing. With the tools we had at the time, we’d have to wait years to find out if the machine could learn anything useful. The same was true everywhere. Google, Microsoft, Amazon, Meta, and even the entire country of China had mind-boggling resources of data and dollars to invest. They were already buying GPUs and were ready to buy more. But the one thing they couldn’t buy was time.

The Transformer broke the time barrier. The magic of the self-attention mechanism at the heart of the Transformer isn’t that it’s more accurate (it is). What made it revolutionary was that it didn’t require exponentially more time. You could add more data, or teach the machine more lessons, and as long as you also added more GPUs, it would take the same amount of time. This meant experiments could happen in days instead of years. Sitting on the email data of a billion people with billions of dollars burning a hole in your pocket?
Call up the GPU salesperson, restart a deactivated nuclear reactor, and with the Transformer, you could train really, really powerful AI in months instead of decades. The Transformer-based AI was trained to translate between English and German using a well-known standard data set containing four and a half million English-German sentence pairs. A sentence pair is something like “my dog has fleas” matched to “mein Hund hat Flöhe.” The Attention team figured out that they could teach their AI three hundred percent more lessons and it would only take fourteen percent longer without adding any more GPUs! And if they wanted an even more expert translation AI and used a data set of nine, eighteen, or thirty-six million sentence pairs but kept adding GPUs, it would take the same amount of time. This meant the corporations and countries who had the data and could buy the GPUs and find enough electricity to run them could teach an entirely new class of AI: foundation model AI capable of making other AI. And that’s what happened. In 2017, Jen-Hsun (Jensen) Huang, the CEO of GPU-maker NVIDIA Corporation, reported annual revenue of $6.9 billion and stated, “We can now see that GPU-based deep learning will revolutionize major industries.…The era of AI is upon us.” By 2023, all the big AI companies had trained foundational large language models. In fiscal year 2023, Mr. Huang reported revenue of $26.97 billion and said, “AI is at an inflection point, setting up for broad adoption reaching into every industry” (Choe & Parvini, 2023; NVIDIA, 2023). NVIDIA, the company that makes GPUs that power AI, saw its revenue grow four hundred percent in the five years that included a global pandemic (Global Macro Monitor, 2024). Microsoft, Google, Amazon, and Meta all bought as many GPUs as possible in a race to build the machines that make machines for the coming industrialization of AI. NVIDIA announced “partnerships” with the AI companies in this time period. 
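The time argument can be made concrete with a toy model. This is my own simplification, assuming the "doubled the data, tripled the time" rule of thumb above for the old tools, and ideal parallel scaling (time proportional to data divided by GPUs) for Transformer-era training; the numbers are illustrative, not benchmarks:

```python
# Toy model of the time barrier. Assumptions (not real benchmark figures):
# pre-Transformer training time triples with each doubling of data;
# Transformer-era training parallelizes, so time ~ data / GPUs.

def pre_transformer_days(data_doublings, base_days=10):
    # Each doubling of the training data triples the training time.
    return base_days * 3 ** data_doublings

def transformer_days(data_units, gpus, days_per_unit_per_gpu=10):
    # Work divides across GPUs: time scales as data / GPUs.
    return data_units * days_per_unit_per_gpu / gpus

# Double the data five times the old way: 10 days balloons to 2,430 days.
print(pre_transformer_days(5))                           # 2430

# Grow the data 32x but add GPUs in proportion: wall-clock time stays flat.
print(transformer_days(1, 1), transformer_days(32, 32))  # 10.0 10.0
```

Real scaling is messier (communication overhead, diminishing returns), but this is the shape of the argument: buy GPUs alongside data, and months replace decades.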
Their biggest customers became something more. NVIDIA also tried and failed to buy their competitor chip designer Arm Holdings for $40 billion. The U.S. Federal Trade Commission squashed the deal with a lawsuit. Not to be deterred, NVIDIA went on the open market and quietly acquired hundreds of millions of dollars of Arm stock and announced new partnerships with Arm (Turner & Bloomberg, 2024). Just as coal, iron, steam engines, and precision machine tools combined to industrialize human production work, nuclear energy, data, graphical processing units, and foundation models are combining to rapidly industrialize human knowledge work. Rapidly because instead of a century-long evolution driven by small entrepreneurs and inventors, AI industrialization is being driven by some of the wealthiest corporations on the planet.

Big AI

“Big Tech” companies are likely secretly happy with this moniker and tolerate its sometimes negative associations. They love it when a politician or an activist rails against “Big Tech” for controlling internet searches, computer operating systems, and social media. That means we’re looking the other way while they quietly go about the business of harvesting our personal data and building up the capital to turn our data into profitable AI. The truth is that Big Tech has actually become Big AI.

The first Industrial Revolution played out over a hundred years, from 1760 to 1860. Change was driven by inventors, entrepreneurs, small, and then mid-sized, businesses in every community. The inventive shoemaker dreamed up a mechanized leather-cutting machine. He partnered with the enterprising blacksmith next door to buy an early lathe to make his machine. They went in together on a shop next to the flour mill so they could run both their machines off the steam traction engine the innovative miller had bought to replace his water wheel. The shops sat next to a river full of barges shipping coal from independent mines, steel from small foundries.
Now able to produce a larger volume of quality shoes, the shoemaker hired a farmer to run the leather-cutting machine in the afternoon after he had tended the fields. He sold shoes at a lower price, including a pair to the previously barefoot daughter of the farmer. The blacksmith trained the first machinist in the state to make leather-cutting machines which the partners then sold to shoemakers in nearby towns. There weren’t any big companies, and the truly private corporation wasn’t legal until politics and business conspired in the 1890s. Our cooperative of entrepreneurial small businesses could thrive. The farmer paid off a loan and hired a hand. His daughter went to school in the spring instead of planting corn. Life got better as the whole pie grew and more citizens got a slice. But there were limits to what could happen at this scale. After this revolutionary period, large corporations began to emerge, at least in the U.S. Innovative, driven businessmen created new corporate and financial strategies with the goal of maximizing profit, often by beating the competition with a better product at a better price. Conglomerates in shoemaking equipment (United Shoe Machinery), oil (Rockefeller), steel (Carnegie), railroads (Stanford’s Central Pacific), electricity (J.P. Morgan’s General Electric), and tobacco (Duke’s American Tobacco) rolled up their respective sectors. At their best, these conglomerates created economies of scale that allowed them to invest in innovation, efficiency, and the workforce, ultimately improving quality of life for more citizens. At their worst, the conglomerates put competitors out of business and did not invest in innovation, efficiency, and the workforce, ultimately leading to real and perceived economic injustice. Fifty years after the entrepreneurial frenzy of the Industrial Revolution, in some sectors, a very few businesses controlled all or most factors of production and sales in their industry. John D. 
Rockefeller’s Standard Oil owned the oil fields, drilling companies, refineries, and effectively the railroads (through exclusive freight...
Audiobook: Chapter 2 Augmented Intelligence
01/10/2026
CHAPTER 2: The Rise of Augmented Intelligence

Artificial intelligence can help you do important things you couldn’t otherwise do—sometimes by helping you access knowledge and expertise taught to the AI by experts with rare and valuable skills. We call this augmented intelligence. I serve on the board of a small critical access hospital in a rural area very similar to where I grew up in Central Pennsylvania. I see firsthand the cost to a community’s health when people just like you lack ready access to specialized medical skills, a problem compounded by a shrinking healthcare workforce. My blue-sky dream is to bring the right AI in the right way into my hospital to help the dedicated clinicians who do everything possible to care for their patients. An example of what could be in the not-so-distant future: Rib fractures are a strong sign of physical abuse in children under the age of three. These fractures are also very hard to detect. Trained expert radiologists miss them up to two thirds of the time. Simultaneously, far fewer radiologists are entering the workforce, meaning everyone left has to work even faster. Researchers at the University of Michigan and Seattle Children’s Hospital are developing AI that may be able to eventually detect rib fractures in toddlers at least as well as a panel of multiple experts reading the same x-rays multiple times (Burkow et al., 2024). If that AI becomes available as a proven tool, it will have been taught the knowledge and practiced skill of a vanishingly small number of expert pediatric radiologists. More children may be helped earlier in their lives if that AI becomes routinely used in emergency rooms that will never, ever be staffed with pediatric experts who can say, “Yes, this baby is being physically abused.” An emergency room clinician or the adult radiologist on call in a small community hospital will welcome augmentation—help—from an AI that has been taught very rare skills by experts far away.
But augmented intelligence doesn’t always have to be high stakes. We’ll get started with a low-stakes example, something just about everyone could use: help with writing in an unfamiliar style. The day before I started this chapter, my wife, Missy, took her first steps in augmenting her own significant intelligence with AI. Together with a machine, she wrote her first-ever advertisement. She works in environmental science and is a talented writer of technical documents, reports, and critical reviews. But ad copy? Never. Furthermore, her approach to new challenges tends to be an endearing progression of “I don’t like that, I’ll never do it, I tried it, that wasn’t so bad, I like it, I’m the best at it.” She always ultimately finds success, but in this case, AI helped her through that cycle in record time. The week before, we had decided to sell advertisements to be read on one of the podcasts I co-host on a subject completely unrelated to AI. We had success selling advertising slots to small businesses that were game, but they couldn’t send us what to read “on air.” To be clear, our podcast is a low-stakes labor of love that makes no real money. We were charging very little for these ads, not nearly enough to pay a professional to write them. So my wife started using ChatGPT to draft advertisements. Actually, what she was using was the collected text from hundreds of thousands of ad copywriters whose knowledge, expertise, and writing had been encoded into the ChatGPT model. She was able to quickly get to a starting point based on their expertise, then add her own voice, humor, and knowledge of the particular small businesses. Artificial intelligence had augmented her skills and intelligence.
Just as the invention of writing scaled the knowledge of the few wise men whose job it was to remember things, and the invention of printing scaled the knowledge of the relatively few who could craft and read hand-lettered manuscripts, AI scales the knowledge of the few who have studied, trained, and gained expertise in writing ad copy and makes it accessible to the many who can now use AI for themselves. My wife, who did not have the time or inclination to learn to write advertising copy from scratch, used AI to get to a normal baseline or starting point. I use “normal” in the statistical sense here. Artificial intelligence by its nature takes what it learns from specific cases (millions of examples of advertisements) and applies it to a general case it’s never seen before. That emphatically does not mean it has the capacity for what we’d recognize as creativity or originality. It learns what advertisements normally look like by reading all the ads that have come before. Convergence on an acceptable model for generating ads will likely mean that model is pretty generic, a statistical representation of an average advertisement. In statistics, the average of a huge data sample like all the ads ever published on the internet sits at the middle of something called a “normal distribution.” In a line graph, a normal distribution looks like an evenly shaped, rounded-off hill. The foothills on the left of the hill are the few really atrocious ads. The foothills on the right of the hill are the few really great ads. And the tall hill in the middle represents the many just-fine ads. The AI gave her an average, just-fine ad to which she could add her own creativity to produce a unique result. “I was pretty sure I could churn out something,” she recalled, “but people have certain expectations for what an ad tells them. I could’ve read a bunch of advertisements and mimicked someone else’s, but I wanted something original.
Then it occurred to me that I could use AI to write my ad copy, or to at least get me started.” In this case, she needed an ad for a marina that offers a gas dock, fishing tackle, tour guides, and cottages. She started by typing in “ad copy for marina.” The AI spit out some ads, but she felt they were a little too generic. So she cruised around on the marina-in-question’s website for a bit and found some specific amenities that they offer and got more specific with her request to ChatGPT. This time, she prompted the AI for “ad copy for fully furnished cottage rentals with river views” and got this: Experience Serenity: Fully Furnished Cottages with Stunning River Views! Escape to tranquility in our beautifully appointed cottages, where breathtaking river views and modern comforts combine for the perfect getaway. 🏞️ Breathtaking Views: Wake up to panoramic vistas of the river right from your window. Enjoy stunning sunrises and sunsets as you relax on your private deck. 🛌 Fully Furnished Comfort: Our cottages come fully equipped with everything you need for a relaxing stay. From cozy bedrooms to spacious living areas, you’ll feel right at home in our thoughtfully designed spaces. This was better, but my wife still felt it was too generic, and not specific to the particular marina or location. So she asked ChatGPT for “ad copy for cottage rentals near great fishing 1000 islands” and got: Escape to 1000 Islands: Your Ideal Cottage Retreat for Fishing Enthusiasts! Are you ready to reel in an unforgettable adventure? Discover our charming cottages nestled in the heart of the picturesque 1000 Islands, where pristine waters and stunning scenery await you. 🎣 Fishing Paradise: Enjoy direct access to some of the best fishing spots with abundant bass, pike, and trout just steps from your door. Whether you’re a seasoned angler or a beginner, our lakes provide the perfect playground. 
🏡 Comfortable Getaway: Our well-equipped cottages offer all the comforts of home, including cozy interiors, fully stocked kitchens, and beautiful views to wake up to each morning. This was better but still lacked key details about the fishing experience. So she took bits and pieces from both responses and wrote the following: Escape to 1000 Islands and Discover Bill and Jack’s Marina—Your Ideal Cottage Retreat for Fishing Enthusiasts! Are you ready to reel in an unforgettable adventure? Discover our charming, fully furnished and equipped cottages nestled in the heart of the picturesque Fishers Landing, where pristine waters and stunning scenery await you. Enjoy direct access to some of the best fishing spots on the St. Lawrence River with abundant perch, bass, pike, muskie and walleye just steps from your door. Year after year, the St Lawrence River demonstrates its status as a premier fishing destination in the country by hosting professional, big pay-out fishing tournaments. Come fish with the pros! Whether you’re a seasoned angler or a beginner, our river provides the perfect playground. Bill and Jack’s will also assist with all your fishing needs. Gas, ice, fishing license? No problem, we got you. We can also arrange fishing guides upon request! As you can see, the AI was good at generic ad copy but lacked my wife’s first-hand knowledge of the advertiser’s business, the details of the locale and what visitors might require, and her human creativity. But together, my wife and a machine produced a compelling and informative ad. The whole was greater than the sum of its parts. This partnership is a perfect example of augmented intelligence. So was your use of the Merlin app to learn to identify an unfamiliar bird by its song. Merlin’s AI processed a recording of your environment, identified the bird species most likely responsible for the sounds, and then helped you associate the two. 
Unless you were already an expert birdwatcher, Merlin likely taught you to recognize that bird by its song for the first time. Going forward, your new augmented intelligence means you can identify the bird and teach people in your life to identify birds even without Merlin, and the community of ornithologists and birdwatchers who trained Merlin have educated the broader public. Goal met. So maybe the machines teach us, too. My wife’s learning experience with ChatGPT and your new knowledge from Merlin follow the same pattern of learning that we use to teach machines, but flipped around so the lines of who is teaching who start to blur. Missy made a plan to write an ad with AI and predicted that AI would do it for her (feed-forward). In my wife’s case, after a few cycles of giving feedback and adjusting her prompts, the AI had delivered some value but clearly wasn’t getting her exactly what she wanted. So she took what she had “learned” from the AI and did the rest herself. You can be certain that behind the scenes, OpenAI logged her prompts and repeated interactions with ChatGPT and fed them back into its constant optimization of the underlying AI. This feedback loop is called continual learning, and it is a very important factor for long-term success with AI. OpenAI would love to get their hands on what she ultimately wrote but will have to wait to harvest the information from the publicly available podcast, something that happens all the time. More on that later. These examples show how we benefit from human-AI interaction. Podcast ad copy and bird identification are both relatively low stakes in the immediate sense. Democratizing assessment of child abuse risk from rib x-rays is high stakes. When the stakes are high and a human expert is not readily available, it is important to consider augmentation. 
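The “normal distribution” of ad quality described earlier can be sketched in a few lines of Python. This is my own illustrative toy, not anything from ChatGPT’s internals: the scores and the random seed are made up, standing in for the quality of every ad on the internet.

```python
import random

# Illustrative sketch (not from the book): model "ad quality" as a
# normally distributed score. Most samples land near the middle of the
# hill; truly awful and truly great ads sit in the thin foothills.
random.seed(42)
scores = [random.gauss(0, 1) for _ in range(100_000)]  # mean 0, std dev 1

mean = sum(scores) / len(scores)
within_one_sd = sum(1 for s in scores if -1 <= s <= 1) / len(scores)
great = sum(1 for s in scores if s > 2) / len(scores)  # the rare standout ads

print(f"average score: {mean:.2f}")          # close to 0, the generic middle
print(f"within 1 sd:  {within_one_sd:.0%}")  # roughly 68% of all ads
print(f"well above:   {great:.1%}")          # only a couple percent stand out
```

Run it and you get the hill in numbers: about two thirds of ads cluster near the generic average, which is exactly the kind of ad a model trained on all of them will tend to produce.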
My first very personal experience with high-stakes augmented intelligence caught me by surprise early one morning at a gastrointestinal clinic prior to my first colonoscopy. I wouldn’t have known my doctor was using AI during the procedure, except for a serendipitous conversation just before the anesthesia knocked me out. The anesthesiologist had his thumb on the IV control when the gastroenterologist asked me, “So, what do you do?” as part of the scripted distraction. I answered in a weak, hurry-up-and-get-this-over-with voice, “I work in research at the Children’s Hospital of Philadelphia.” The doctor said, “Oh, really, what do you research?” To which I answered, “I lead a data and computational program that’s getting us ready for AI.” The doctor immediately stopped what he was doing and perked up. “AI? We have AI here! It’s called GI Genius, and we’re the first practice on the East Coast to use it.” Sensing that things were getting sidetracked, the anesthesiologist looked at his watch and then at the gastroenterologist, who nodded and wrapped up our conversation quickly with a “We’ll talk later.” True to his word, the first thing my doctor shared when I awoke in the recovery room was his excitement at having used AI to great effect during the procedure. “The AI found a polyp!” he told me. “I would have missed it; it was a little one.” I marveled at how an accomplished gastroenterologist, trained to identify, characterize, diagnose, and treat problems of the GI tract, had enhanced his considerable skills by working with a machine. In this case, he explained, the AI watches the same images on the camera during the procedure as the doctor and has been trained to find areas of the colon where a lesion may be located and highlight it on the video screen for the doctor in real time. The doctor can then confirm if there’s a lesion and decide what to do about it. The process for training this AI would be just like the training we learned about in Chapter 1. 
Domain: Colon cancer screening.
Task: Identify lesions.
Metric: Adenoma detection rate.
Data set: A large collection of colonoscopy video recordings.
Labels: Expert markup of what is and isn’t a lesion in the training and testing data videos.
Model architecture: Convolutional neural network.
Optimization: Compare expected vs. actual classification of lesions.
Training: Repeat cycles of learning and adjust the model until it converges on acceptable precision (classifies the same region in the video as a lesion over and over).
Testing: Show the trained model data it has never seen before.
Deployment: Write software and build a device that can show a doctor a circle around a possible lesion on a video screen.
Continual learning: Collect the doctor’s decision on whether or not a region is a lesion and use it to improve the model.

Once GI Genius had been trained, the FDA approved it based on a randomized controlled trial (the gold standard for medical evidence) which showed that doctors doing colonoscopies with the help of GI Genius found thirteen percent more precancerous or cancerous tumors than they found on their own (U.S. Food and Drug Administration, 2021). Note that augmented intelligence—as opposed to artificial intelligence—is collaborative. There isn’t anything automatic or push-button about it. Straight-up AI, however, takes people out of the loop. When I drive my car equipped with automated driving assistance, my ability to drive safely at night or in traffic is augmented, but I’m still the one doing the driving. An alert that tells me to stay in the lane is just a suggestion. Contrast this with a fully self-driving car that uses full AI to drive me around while I watch the scenery. The human-plus-machine system of augmented intelligence puts the human in charge of performing the work. The machine just helps. Augmentation helped my wife to create ad copy and improved my doctor’s ability to find and remove precancerous lesions.
It helped them access expertise that otherwise would not have been readily available. People, in particular adults, may be the only animals on Earth obsessed with time—having enough time, deciding how to spend time, being on time. When we step out of our daily routine to go on vacation, we usually think about how we want to spend the minutes and hours during those days. Our choice of words—“spending time”—is precisely why augmented intelligence is something that we should all like, seek out, and embrace. Our time is valuable, and just as we do with money, we want to get the most in return for our investment. When we choose to use AI as a means to augment, streamline, or improve the daily tasks and activities that already fill our days, we’re using technology to maximize the impact, value, enjoyment, and satisfaction we get from our waking, thinking time. My doctor chose to adopt GI Genius in his practice because he wants to find every single adenoma possible in a given day. Every adenoma found reduces the possibility of cancer in his patients. He was going to get up, go to work, and screen for colorectal cancer eight hours a day no matter what. He was going to make the same amount of money for those hours. But by augmenting his intelligence with an AI tool that still leaves him in charge, he has enhanced what matters most to him, the service he provides for his patients during those hours. When contemplating augmenting your own skills with AI, you want to think critically and be an informed consumer, as well as an active participant in these human-computer partnerships. Understand how the particular tool you are using came to be. Look beyond the marketing to evaluate what it actually does and does not do after you strip away the hype. Not all AI is created equal, and not all AI is created with your best interests in mind. Remember that AI is as good or as bad as its data, training, deployment, and continuous learning. 
Your own agency and your ability to question, analyze, and evaluate AI are your most valuable assets in this rapidly changing landscape.

Exercises: Try It Out

Driving
Think: Contrast augmented driving with fully human driving. In our age of smartphone-distracted driving, do you appreciate AI slamming on the brakes for you, or for the car behind you?
Do: If you have a car with advanced driver assistance systems, go for a drive. For liability reasons, I’m not going to suggest you turn your system off! Instead, pay attention to what it does and doesn’t do, knowing what you know about how AI is trained and how it works.

Colonoscopy
Think: Contrast augmented polyp detection with fully human or fully autonomous polyp detection. Knowing what you do about human error and the normal imperfections of AI, what are your thoughts on a human and a machine working together?
Do: Use the photo search feature of your phone to find pictures with trees in them. Trees aren’t polyps, but you get the idea. Now imagine searching for pictures of trees all day, every day. Now imagine having AI to help you. Try reading the paper that published the colonoscopy AI: Repici, Alessandro et al., “Su1716 Artificial Intelligence for Colorectal Polyp Detection: High Accuracy and Detection Anticipation with CB-17-08 Performance,” Gastrointestinal Endoscopy, Volume 89, Issue 6, AB391–AB392.

Augment your own expertise using a chat-based language AI. At the time of publication, ChatGPT by OpenAI or Claude by Anthropic are suitable choices. You’re going to do this by first drafting a paragraph about something you know well, then working with AI to improve it. I’ll go first. We heat our house with a wood stove, and I select, cut, split, and stack our firewood. My draft paragraph: “The heat production of a solid fuel wood burning stove is determined by three factors. First, the energy content of the fuel wood. Second, the combustion efficiency of the stove.
Third, the transfer of heat from the stove to the living space. Energy content of the fuel wood is affected by the moisture content of the wood. When wood burns, the moisture it contains is converted to steam, taking up energy otherwise available for transfer to the living space as heat. This is why fuel wood should ideally have a moisture content below twenty percent by weight, lower if possible.” Now prompt the chat AI...
Audiobook: Chapter 1 What Is AI?
01/10/2026
CHAPTER 1: What Is AI?

You may never directly teach AI yourself, but as we discussed in the Introduction, you participate in the process just about every time you interact with the digital world. You may also be in an organization that is considering if and how to adopt AI tools. These days, it is highly likely that an eager executive will push to “do something with AI” in your organization. These words are music to the ears of vendors who spend big money marketing their products as “powered by AI” whether they are or not. You can add a lot of value by understanding how AI learns so you can ask hard questions and set realistic expectations in your life and for your organization. You can be a big part of the solution by understanding and helping to position potential AI tools in the context of specific problems and human work that’s already happening. We’ll get into this more in the next chapter but, for now, know that the few AI projects that succeed are the ones that focus hard on context and people up front. Successful AI projects answer the question, “Just because we can, should we?” “Daddy! I can’t get the !@#$% sand out of my shoes!” The tiny voice from the back seat of the car was my three-year-old daughter appropriately vocalizing profanity for the very first time. I was simultaneously horrified, proud, curious, and (let’s be honest) amused. Her still-developing intelligence had for the first time understood the perfect context for profanity and nailed it. My wife and I had not specifically taught her to swear when she wasn’t able to shake sand out of her little sneakers. She had (unfortunately) heard my wife and me swearing in other situations, none of which involved sand or shoes. She had gathered information from those specific cases and correctly applied it to an entirely new situation with which she had no prior experience. When we encounter AI that can do something similar, we see intelligence in the machine.
We marveled at the unveiling of ChatGPT because the underlying AI could take completely off-the-wall input it had never seen before and come back with a reasonable response in the appropriate context. My daughter had never been strapped into her car seat with a shoe full of unwanted sand, but her developing brain had been exposed to enough unrelated situations to figure out that this was a four-letter-word moment. This is what psychologists call “transduction,” a form of reasoning where developing children learn from specific cases they experience and apply their new knowledge to general (new) cases they haven’t experienced. Much of machine learning and AI, including the Transformer developed by Google, is conceived to solve general transduction problems, along with a related type of problem called “sequence modeling,” discussed in the next paragraph. The Transformer, invented by researchers at Google in 2017 and developed into AI applications in many languages, could similarly encounter an English sentence it had never seen before—such as, “What do I say in German when I am very frustrated because I can’t get the sand out of my shoe?”—and come up with “Ich bekomme den @#$% Sand nicht aus meinem Schuh!” Here’s another example of the human brain at work. Consider this series of words: pine, sauce, crab. What’s the next word in the series? If you quickly guess “pie,” “Adam’s,” or “computer,” you are using your instinctive powers of reasoning to subconsciously assess the relationship between the first three words and find something they have in common—in this case “apple”—to inform your choice of the next word. You can also puzzle this out through a more deliberate process of elimination using your analytical brain. This may be slower, but it can also lead to the correct answer more often. We’re all wired for both instinctual insight and analytical thinking, though individually we often skew one way or the other (Kounios & Beeman, 2015).
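The idea of predicting what comes next can be illustrated with a toy Python sketch. This is my own drastically simplified illustration, nowhere near a Transformer: it just counts, in a made-up three-sentence corpus, which word most often follows each word, then uses those counts to predict the next word in a sequence it is given.

```python
from collections import Counter, defaultdict

# Toy sequence model (illustrative only): learn which word most often
# follows each word in a tiny made-up corpus.
corpus = [
    "my dog has fleas",
    "my dog has a bone",
    "my cat has fleas",
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    return follows[word].most_common(1)[0][0]

print(predict_next("dog"))  # prints "has": it follows "dog" in every example
print(predict_next("has"))  # prints "fleas": the most common follower of "has"
```

A real language model does something conceptually similar but learns statistical relationships across billions of word sequences, which is what lets it handle sequences it has never seen.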
Your brain is built for instinctual insight, so the more language you’re exposed to, the more likely it is that your brain quickly finds a relationship between the first three words in the sequence to use as context to come up with a fourth word. This type of cognition is part of something called “fluency,” where pathways in your brain have been trained by repeated exposure to information. Your fluent pathways are strengthened when you subconsciously create a common associate like “apple” between remote associates like “pine,” “sauce,” and “crab,” all words or concepts that don’t share an obvious connection. When you make up a mnemonic, such as a silly limerick, to help you remember something, you’re using the same underlying cognitive mechanism. We see intelligence when we encounter machines that can mimic sequential insight like this in a general way. Picking what comes next is the type of problem in both psychology and machine learning called “sequence modeling.” These are very important problems for humans. Figuring out what happens next, or even the few possibilities that might happen next, is a big part of how we are successful as a species. We are especially impressed when the answer isn’t something we would have come up with on our own. Just as in the example with my daughter, the key to intelligence is that the machine, the AI, performs well when it comes across something it hasn’t ever encountered before. That general capability sets AI apart from other kinds of computer programs that work under tighter constraints. Does this mean everything called AI around us is able to solve general problems? Nope. Software companies desperately want to take advantage of the excitement over AI by slapping the AI label on their products. But a computer system is not AI just because it follows rules to do useful work, no matter how slick the packaging. Rules are created by looking at a bunch of specific cases, then writing up the logic for what to do in those cases.
Think back to the semi-automated sawmill example in the Introduction. That computer system was likely programmed based on an old, expert-authored manual of rules for how to saw a log into valuable lumber. What magic there is comes from the clever detection of the outline of the log in a digital photograph, which is itself based on geometric and mathematical rules for finding the edge of a simple, predictable shape. This is by far the best and most efficient way to solve that particular problem. It would be a waste of time and money to show an AI a bunch of logs and a bunch of lumber and teach it to come up with the right cut pattern. One goal of this book is for you to be able to ask questions and think critically about what does and doesn’t deserve to be called AI, and even more important, to assess which kinds of problems are worth the effort and uncertainty that come with AI. Because teaching a machine takes a lot of work, and you usually don’t know what you’re going to get. How Machines Learn How do machines learn? Scientists work hard to use the human brain as a model for learning intelligence. After all, they don’t have much else to go on! The starting point for artificial intelligence is informed—at least at a high level—by our understanding of the design of the brain and theories of how we learn. Your brain is a giant mass of interconnected cells called neurons. But it’s more than just a skull full of neuron spaghetti. Neurons are elongated cells that form the wiring of your brain. Each neuron cell listens for a signal from nearby neurons. When the signal gets strong enough, the cell activates and sends its own signal out to its neighboring neurons, propagating patterns of signals through the different parts of your brain. Take, for example, your eyes looking at a brightly lit square of paper, half white, half black. Nerve endings in your eye are excited by nearby light-sensitive cells that pop off a signal. 
That signal tells your neurons to transmit their own signal, but in a pattern that reflects the pattern of light and dark hitting the back of your eye. The pattern of signals travels down what is effectively a data cable from your eye to your brain. The signals dump into your brain where the arrangement of neurons isn’t just random, but is organized into neighborhoods, or specialized networks, where the neurons in the network are particularly good at specific kinds of signaling. For example, detecting a bright light. These networks are organized into layers that are good at specific kinds of thinking. You can think of the layers as a stack of pancakes, where the top pancake of networked neurons does the simplest task like measuring overall brightness at different grid coordinates. That layer hands the map of what’s bright and what’s not to the next layer, which detects edges in the image—the outline of the square and the boundary between the black and white sides. Your brain continues this general organization where each layer takes input, uses its network of neurons to process it to some degree, then hands off the result to the next layer (Gazzaniga, 2018). For example, when you look at your dog, your eyes send a bunch of electrical signals representing brightness, contrast, and color to the layer of your brain that is your visual cortex. Your visual cortex takes that input and turns it into signals that it hands off to other layers of the brain that do a specific job. There are layers to store and recall memories (“That’s my dog, Lilo”), set off emotions (“I love my big baby girl, Lilo”), create speech (“Come here, big baby”), and move our hands (scratch, scratch, scratch). We’ve been trained by our repeated experiences of the world around us to recognize, feel love for, interact with, and pet our dog. 
As we grow and develop as children, we learn to recognize all sorts of animals, like kangaroos and deer, but unless we’re living in a zoo, we don’t moon over them and scratch their ears. But we can tell them apart from dogs! Similarly, AI systems are designed to use pretend digital equivalents of neurons, networks, and layers to process information. So far, we’ve talked about language AI, but there is a whole world of visual AI as well. Take a learning task like figuring out if a picture contains (a) a dog or (b) no dog. A visual AI has a layer that takes in a collection of numbers representing the intensity, color, and position of all the dots (pixels) that together make up a digital photo. That input layer hands the raw data off to the next layer, which figures out what’s bright and what’s dark, then hands everything off to the next layer which figures out where there is something that humans would recognize as an edge, or line. The next layer figures out which lines are organized into simple shapes. The next layer determines which shapes are important and hands those off to the final layer, which makes a guess as to whether one of the shapes is a dog. Just as in our brains, each layer in the AI doesn’t care what the other layers do; it’s good at its one task. And just like in our brains, when you put all the layers together, you may get intelligence. Remember “deep learning” from the Introduction? Before 2015, most machine learning was done with a single, flat neural network. “Deep” just means the network has more than one layer between the input and output layers. There’s no magic number of layers in an AI “brain.” You decide how many layers to start with, based on the type of AI and the kind of learning. When you teach a machine from scratch, the only layer you specifically set up is the first input layer. The layers after that aren’t set up ahead of time to do anything specific. They all start out as generic collections of digital neurons.
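The pancake-stack idea can be sketched as a chain of tiny Python functions, each doing one simple job and handing its result to the next. This is purely illustrative: a one-row “image” and hand-written layers, not a real convolutional network, whose layers would learn their jobs rather than have them written by hand.

```python
# Toy sketch (illustrative only) of layered processing: each "layer" does
# one simple job and hands its result to the next, like the pancake stack.

image = [0, 0, 9, 9, 9, 0, 0]  # brightness values along one row of pixels

def brightness_layer(pixels):
    """Mark each pixel bright (1) or dark (0)."""
    return [1 if p > 5 else 0 for p in pixels]

def edge_layer(bits):
    """Mark the positions where brightness changes: the edges."""
    return [i for i in range(1, len(bits)) if bits[i] != bits[i - 1]]

def shape_layer(edges):
    """Guess a simple shape: two edges bound one bright region."""
    return "one bright band" if len(edges) == 2 else "something else"

# Each layer's output is the next layer's input.
print(shape_layer(edge_layer(brightness_layer(image))))  # prints "one bright band"
```

No layer knows or cares what the others do, yet composed together they turn raw pixel numbers into a description a person would recognize.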
A new AI has to learn what to do layer by layer. All this adds up to what gets loosely called an “algorithm.” Layers are as far as we’re going to go in terms of AI’s internal wiring. There are many wonderful books you can read to delve into the fascinating and beautiful construction of AI algorithms. Or you can ask your favorite AI to explain it to you, though I’d recommend a combination. But for practical purposes, the algorithm is just the starting point. The magic happens when you teach the algorithm to do something truly remarkable. The General Systems Theory of psychology attempts to explain human behavior by looking at the three main variables of human psychology: biological (hungry), psychological (decide to seek food), and social/behavioral (somebody feed me). If you’re a baby, you get the inputs your body needs (food) by controlling your outputs (crying in a tone that means hungry vs. uncomfortable from a wet diaper). You output information (crying) to your environment by planning actions to get what you want. In a system, this is called “feed-forward.” The actions you plan (time to cry) are based on a guess of the consequences of those actions (Dad feeds me). You run the plan (cry) and compare the actual consequences with what you thought they’d be (did I get fed or not?). This is called feedback. If you didn’t get what you wanted (still hungry), you adjust the plan (cry louder), which is using feedback (McConnell, 1989). The teaching of machines, machine learning, is all about infant computer programs going through the cycle of planning actions and guessing consequences (feed-forward), doing the actions, then comparing the actual result to the guessed result (feedback), adjusting if necessary to repeat the cycle (using feedback). We’re going to dig into an example of how AI is taught by people and deployed into the real world. 
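That cry-adjust-cry-again cycle fits in a few lines of code. A minimal sketch, with every number invented for illustration: the “plan” is a single volume level, and the feedback is simply whether the cry got a response.

```python
# Feed-forward: plan an action, guess the consequence, run the plan.
# Feedback: compare what actually happened to what you wanted, then
# adjust the plan and try again. All numbers here are made up.

def dad_responds(volume, needed_volume=7.0):
    # The environment: this household only hears cries at volume 7+.
    return volume >= needed_volume

volume = 1.0   # the starting plan: cry softly
attempts = 0
while not dad_responds(volume):  # feed-forward: run the plan, observe
    attempts += 1
    volume += 1.5                # feedback: still hungry, cry louder
```

After four adjustments the plan finally works. The loop is the whole feed-forward/feedback idea; machine learning is this loop run by a program instead of a baby.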
Before we get there, it’s helpful to understand the general approach to machine learning, along with some of the technical terms for the key parts and steps of the process. First, you need a topic or situation based in the real world. Artificial intelligence, like humans, needs to focus on one thing at a time while learning, so for our AI we choose a specific topic to provide loose boundaries. We call this topic the domain. The Google Brain team chose foreign languages as the domain when they were developing and testing their Transformer. In our example, the domain is “dogs.” Within the general topic, we go further and articulate a particular problem to solve. This problem is called the task. Our task is “Decide if a picture has a dog in it, or not.” Next is a definition of success. I can’t overstate how critical it is to decide on and define the successful outcome we want ahead of time. Your measure of success is called the metric. Recall that Google’s Transformer was first taught to pass a longstanding standardized test of English-to-German and English-to-French language translation. This was their metric, or measure of success. With AI, you’re teaching a machine to approximate or augment a cognitive process that only a human can do, so you or your organization MUST understand baseline human performance and articulate ahead of time what success looks like for the AI. The AI doesn’t have to “beat the human” like the chess-playing computers from the 1990s. It’s enough to set a standard where the AI helps a human accomplish the task faster. Our example metric is “Find more than 90% of the dog pictures.” This metric is the critical educational outcome that guides how you teach the machine. The next step is to procure the equivalent of a textbook for AI training: enough relevant data for the lesson. How much data is enough? Enormous, truly huge volumes of data are required to successfully teach AI. 
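Written down explicitly, the setup so far fits in a few lines. The domain, task, and metric below come straight from the running example; the metric_met function is just one plausible way to encode “find more than 90% of the dog pictures,” not anyone’s official implementation.

```python
# The lesson plan for our example AI, decided before any teaching.
lesson_plan = {
    "domain": "dogs",                                       # the topic
    "task": "decide if a picture has a dog in it, or not",  # the problem
    "metric": "find more than 90% of the dog pictures",     # success
}

def metric_met(dogs_found, dogs_total):
    # The predetermined definition of success, checked after testing.
    return dogs_found / dogs_total > 0.90

# Finding 55 of the 60 dog pictures is about 92%: metric met.
success = metric_met(dogs_found=55, dogs_total=60)
```

The point of writing it down like this is that the definition of success exists, unambiguously, before the first lesson ever runs.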
You need to start with every scrap of data relevant to your problem that you can beg, borrow, or steal (not really). It will likely still not be “enough.” This is why the most successful AI research, and the most successful AI products, come from huge companies that spend billions and decades collecting our data. We call this raw collection the data set. Our example data set is six thousand family photos, some with the dog, some without. Your AI will study the data you give it, reading or looking at it over and over. The sum of what it learns during this process depends on the volume and quality of data you provide. The data has to be described and characterized by humans so you know the answers ahead of time, just like an instructor’s answer key in a textbook. We call this labeling. For our example, three different veterinarians each looked at all six thousand pictures and labeled each “dog” or “no dog.” The final step in preparing to teach is design of the empty, untrained brain of the AI. What kind of brain? How many layers? How do the layers talk with each other? This is called the model architecture. We choose Residual Network (ResNet), since it’s a well-tested architecture for image recognition. You can treat it as a black box, so we won’t go into more detail. Now you teach! You organize a repeated series of lessons and quizzes where the AI does the feed-forward part of learning. It uses its untrained brain to look at a randomly selected half of the pictures. This half of the total data set is called the training data. The AI does the task of predicting the right answer (dog or no dog) and then takes a quiz where you check its prediction against the human labels. After each quiz, you use a computer program to give the AI feedback on what it got right and wrong. It uses the feedback to adjust how its brain re-reads the data and comes up with answers (a mathematical process called gradient descent). 
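The labeling step is easy to picture in code as a majority vote. The three “veterinarians” below are hypothetical stand-ins for the experts in the example; real labeling pipelines are messier, but the idea is the same: humans create the answer key before teaching starts.

```python
from collections import Counter

# Labeling: humans characterize the data ahead of time so the answer
# key exists before the first lesson. Three (hypothetical) vets each
# vote on a photo, and the majority vote becomes that photo's label.

def majority_label(votes):
    # e.g. ["dog", "dog", "no dog"] -> "dog"
    return Counter(votes).most_common(1)[0][0]

photo_votes = [
    ["dog", "dog", "no dog"],        # two of three vets saw a dog
    ["no dog", "no dog", "no dog"],  # unanimous: no dog
]
answer_key = [majority_label(votes) for votes in photo_votes]
```

Using three labelers instead of one, and taking the majority, is a common hedge against any single human’s mistakes creeping into the answer key.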
You repeat for potentially hundreds of cycles of training so the different layers in the AI brain learn to do specific tasks, much as the layers in a baby’s brain learn their job in the larger task of recognizing animals or getting someone to feed them. During the repeated cycles of training, the AI develops an equivalent to fluency from repetition, picking up hidden gems like the way a common feature such as the combination of dark nose and round eye shape ties together breeds as different as German shepherd, pug, and beagle. You stop the cycle of lessons and quizzes when the AI gets a good score a few times in a row, better than ninety percent correct on our metric, and it’s clear it isn’t improving anymore. This repeated good score is called convergence. If you want to impress someone when they are bragging about their AI, ask them, “How many training cycles before convergence?” Now for the final exam. The AI does the task just once on the other half of the data, which it has never seen before, called the testing data set. Remember, the data is labeled by experts, so you know the answers to the test. If the AI passes the test and hits your predetermined metric of correctly identifying ninety percent of the dog pictures, it gets a good grade, and you celebrate! This final exam is the proof that AI can learn enough from a specific case where it has access to the answers (training) and then successfully generalize to a case it’s never seen before where you know the answer but it does not (testing). Much like a new graduate, your AI now has theoretical knowledge but hasn’t been out in the real world where it really counts. The really hard part of this process is launching your newly trained AI out into the real world (deployed), but in a way that allows it to continue learning safely. 
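The stopping rule, a good score a few times in a row, is simple to state in code. The climbing quiz scores below are simulated purely to show the rule; in real training each score would come from checking the AI’s predictions against the labels.

```python
# Convergence: stop the lesson/quiz cycle once the quiz score beats
# the metric several quizzes in a row and clearly isn't improving.

def converged(scores, metric=90, streak=3):
    # True when the last `streak` scores (percent correct) all beat
    # the metric.
    return len(scores) >= streak and all(s > metric for s in scores[-streak:])

scores = []   # percent correct on each quiz
cycles = 0
while not converged(scores):
    cycles += 1
    # Stand-in for a real quiz score: climbs, then plateaus at 95.
    scores.append(min(95, 50 + 5 * cycles))
```

With this simulated curve, training stops after eleven cycles: the score first beats 90 on cycle nine and stays there for three quizzes in a row. That cycle count is exactly the number to ask your AI-bragging friend about.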
The task may be low stakes, like identifying birds from their songs, and relatively easy to deploy, or high stakes, like pointing out bone fractures on x-rays, and relatively hard to deploy. Either way, training is just the first step before figuring out how to get your AI from the classroom into the real world (deployed)....
Audiobook Introduction
11/26/2025
Introduction Like billions of people around the world, you may have suddenly become familiar with the following words straight out of Silicon Valley or a futuristic movie: Machine learning. GenAI. Large Language Model. Generative. Training. GPT. Explainability. Neural Network. Deep Learning. Hallucination. These words are synonymous with Artificial Intelligence (AI), the computer systems we can teach to do “thinking” work. Sometimes we teach machines to find patterns of information that we can’t, given the flood of data in our digital lives. Sometimes we teach machines to do a specific task in a way that augments our life, or supports the work that we already do. The words above are casually thrown around by everyone from tech bros to journalists to advertisers to the AI “assistants” that are starting to pop up on every app and website. But they can distract from what is really going on. You’ve picked up this book because “AI” seemingly comes up in every job interview, work meeting, or classroom discussion, shows up in every search you do online, invades your social media, and is splashed across every advertisement you see. And this all seems to have happened overnight. What changed? Everything. And nothing at all. Everything because we as a digital society reached a tipping point. Nothing at all because for twenty years AI has been with us, becoming more and more capable behind the scenes. Is this scary? Exciting? What exactly is AI, anyway? You are likely thinking: How is “the machine” already in my life without me being aware? How do I make the most of the most important innovation since we humans first wrote things down 5000 years ago and then, 575 years ago, figured out how to print copies? How do I protect myself, the people I care about, my education, and my job? The truth is that you—and me, our parents, our kids—have been “teaching the machine” for years. We are all simultaneously consumers and producers in an AI economy that has been around for decades. 
This book is titled You Teach the Machines because it is fundamentally that simple. AI depends on you. The machine learns from the data you create, just as an infant learns the basics of language from the words you speak. The machine learns to do things that matter to you when you tell it right from wrong, just as a toddler learns grammar when you correct them. Many corporations have much to gain from AI appearing to be Oz the Great and Powerful. Something magical, an otherworldly black box. It’s not. It’s just a machine, often with a cynical man behind the curtain. A machine taught with your data and your feedback. The more we all understand AI for what it is, the more we can maximize benefit and minimize harm. You’ve been teaching AI for as long as you’ve been using Google, Amazon, social media, and navigation apps, for as long as you’ve been going to the doctor and swiping your credit card. It’s time to take charge and put AI to work for you. You teach the machines. I started my tech career in 1996 during another period of rapid change. The web was brand new and data, for the first time, was considered an asset with a dollar value (Kerr, 1991). A few years on I started working in AI at Ask Jeeves, the first natural language internet search engine. Twenty-five years later I created a comprehensive data asset and AI program as the Chief Research Informatics Officer of a leading pediatric academic medical center. I left that job, one I loved, to write this book. To help all of us navigate the change of AI. If you, like most people, have a lot of questions and reservations, and even fears about AI, this book aims to demystify this groundbreaking technology and put your mind at ease. The chapters here will answer your questions about AI, including: How did AI seemingly show up everywhere overnight? What could change in my life because of AI? Can I trust AI? How do I use AI to make my life and my family’s life better? 
Cuneiform To ChatGPT Your questions are, for the most part, about change. Change from an old normal is always accompanied by uncertainty, and we as humans are hardwired to fear what we don’t know. Artificial intelligence is a relatively recent arrival in human history. We don’t yet share a widespread understanding of AI or a new normal of daily use. Our uncertainty and fear are completely natural and understandable. This is a revolutionary technology! Fundamentally, AI is a completely new way that we humans capture our knowledge. That hasn’t happened since the Sumerians invented writing in ancient Mesopotamia around 3200 BC and people no longer had to simply remember everything. Historians think the first writing, called cuneiform, was invented to give a customer a grocery receipt. Before cuneiform writing, the only way to capture information was for one person to remember what another person told them. That meant there was no written history. No written receipts. No written recipes. Imagine you figured out that leaving open jars of barley out in a rainstorm made beer, another Mesopotamian innovation from roughly the same time. You tell the recipe to your friend over a few beers, but after a few too many, you both forget what it was. Fast forward to a time when cuneiform writing has been invented, and you happen upon the barley-in-the-rain trick again. This time, you write it down, and it gets passed along to generations thereafter. From that point forward, it was possible to record human knowledge, leading to massive cultural changes and advancements. Writing made it possible for individual humans to record important knowledge and share it with a relatively few other humans (Finkel & Taylor, 2015). AI makes it possible for us to collect massive amounts of digital knowledge to share with others worldwide, like ChatGPT, which incredibly incorporates information scraped from Common Crawl, a public archive of billions of web pages. 
Artificial intelligence is also an entirely novel way to broadly share, disseminate, and use human knowledge. This hasn’t happened with such seismic consequences since serial entrepreneur Johannes Gutenberg figured out how to scale up book production with moveable type in Mainz, Germany in 1450. Gutenberg borrowed heavily to do the R&D and engineering required to invent the printing press. His first book? Copies of the twenty-eight-page learn-to-read Latin schoolbook Ars minor, the first part of an ancient text called Ars grammatica. He’s more famous for the next book he printed, the Bible, but it’s notable that he started with an educational book, whether he did it intentionally or otherwise. In those days, you had to know Latin to get what might today be called an office job. Before Gutenberg, getting your kid started on Ars grammatica meant paying a scribe much of a laborer’s yearly wage to hand-write a copy with pen and ink. That meant only better-off kids learned Latin and went to school. Whatever Johannes Gutenberg intended, printing relatively cheap copies of the equivalent of Dick and Jane for Latin was an early example of doing well by doing good. The cheaper it was to learn to read, the more books he could sell! Eventually, a diaspora of trained printers who stole Gutenberg’s technology set off an explosion of printing across Europe. A mere fifty years later, more than twelve and a half million books had been printed! Printing made it possible for a single human to record important knowledge, then share it with millions of others so they could do stuff with the knowledge. This invention, more than any other, launched Europe, and then the rest of the world, into the modern era. This shifting of the Earth is known as the Gutenberg Effect. It’s important to note that at the time, there was pushback against the printing press. 
Turns out the intellectual and economic classes felt more secure when knowledge was captured and made available at great expense by scribes copying out books by hand. Secure in their stations, the ruling class viewed the work of a scribe as morally superior to the ink-stained labor of setting type and cranking a press. Classist snobbery was also fueled by the fact that the labor of printing was taken up by the lower classes. But the expansion of literacy in these same craftsmen, indentured apprentices, and servants opened new markets for popular books very different from highbrow manuscripts (Houston, 2016). For example, Desiderius Erasmus’s widely read and sometimes-banned books advocated the then-radical idea that the church and monarchs should serve the people first (Erasmus, 1515; Erasmus, 1516). Newspapers, magazines, radio, television, and the internet all extended the innovation of the printed book. All were initially disparaged. If you’re of a similar vintage to mine, you may remember some professors prohibiting the use of the internet to do research for your term paper. Be thoughtful about similar criticisms by today’s intellectual elite as AI emerges and evolves. Know that some criticism of AI as somehow inauthentic may be defense of the established cultural clout of experts. But also don’t completely disregard these same experts, who are justifiably nervous. Some of their criticisms are valid, such as the potential for erosion of critical thinking and writing skills by overuse of language AI. How does AI relate to writing in ancient Mesopotamia and printing in medieval Germany? Historical, disruptive—and ultimately constructive—precedents of writing and printing help us understand change in our lives caused by the emergence of AI. For now, consider that writing, and then printing, made it possible for the expertise, knowledge, and thoughts of a single human to spread through literacy and education to many other humans. 
At its worst, this can lead to Adolf Hitler’s Mein Kampf. At its best, it can lead to Henry Gray’s groundbreaking textbook Gray’s Anatomy. Artificial intelligence can make it possible for human knowledge and expertise to spread even further, not just through other humans, but through machines we teach to augment our lives. Artificial intelligence makes it possible for us to capture the knowledge of many, many (all?) humans and share it so it can be used by many, many more (all?). But just as the printing press amplified whatever you printed with it, AI can amplify whatever we teach it. Sometimes for the worse, such as in 2023, when hackers who couldn’t write computer code themselves used AI to generate code that they then used to attack an email company. Sometimes for the better when, the same year, biomedical researchers who couldn’t write computer code themselves used the same AI to generate computer code at a prestigious hospital to get important data out of a database, greatly speeding up their important work. For better or worse, you teach the machines. We started with a discussion of writing and printing, historical precedents of innovation that sparked massive global change, very deliberately because AI burst into our world through the exact same door: language. Our Black Swan Moment On November 30, 2022, a startup company called OpenAI released ChatGPT, an AI language tool that quickly gained widespread notoriety. The public reaction and significant notoriety of this tool in our public consciousness was an outlier, impossible to predict beforehand. There was a huge impact from this notoriety: its release and the subsequent freak-out in almost every corner of our society changed our conversations and expectations in a very big way, whether or not we actually took the time to play around with ChatGPT. Since that moment, we’ve been furiously working after the fact to explain how this all happened. 
We want to go back and figure out how we could have predicted how ChatGPT has upended our lives if we’d only paid attention. This trifecta of an outlier event with huge impact we desperately try to explain afterward is characteristic of what’s become known as a black swan event. Black swan is a theory for understanding the outsized impact of rare events on human society. Events like the September 11 attacks and the subsequent public, political, and military response. Highly improbable but changed everything (Taleb, 2007). A black swan event is also a matter of perspective. A number of expert analysts were not surprised by the attacks on September 11; to them an attack was certain, even if they did not know exactly when or where it would happen. The general public, the military, and much of the Federal government not only did not anticipate the attacks but were completely unprepared to handle such an improbable but consequential event. My goal in writing this book is to help you be informed so you may better handle change we can anticipate, but especially the unlikely but consequential disruption AI is certain to cause. The more you know about AI, the more you use AI, the better off you will be in coping with unexpected events AI will bring in the future. Artificial intelligence depends on you. You teach the machines. Sam Altman, CEO of OpenAI, may or may not have predicted the public reaction to his launch-and-see-what-happens “creative disruption.” It’s a typical path in the technology industry, both useful (gather new information) and intellectually lazy (skip the deliberation). Sam did it with his one and only startup, a failed social network app called Loopt, before realizing investors (not entrepreneurs) made the big bucks; he then made enough money investing to convince himself he was a Master of the Universe. Lower stakes back then. 
He was successful enough as an investor to co-found OpenAI as a nonprofit artificial intelligence research organization that quickly evolved to have multiple for-profit subsidiaries. It’s complicated. After launching ChatGPT, Sam claimed the release was a benevolent attempt to help us all face the reality of the new power of AI. That may or may not have been a reason. More likely it was to grab market share with a product built using an algorithm developed at Google and a few billion dollars he needed to show a return on. Meanwhile people at Google, including Geoffrey Hinton, a pioneer of modern AI, were doing the more difficult “just because we can, should we?” deliberation, carefully considering the implications of technology certain to cause major disruption. Ultimately, it doesn’t matter what Sam claimed as his reason. The immediate impact on public discourse and industry was huge and will likely be viewed as historically significant. The black swan of AI bursting into our consciousness made us feel like the world shifted underneath us. Professionally, I found myself in the middle of a storm of uncertainty and fear. My colleagues at the Children’s Hospital of Philadelphia and I were stunned that ChatGPT had been put into the wild. We were hit by the black swan and found ourselves in the role of helping people who hadn’t seen it coming to process and understand what had happened. All of a sudden, much of my job became explaining the emergence of AI after the fact. Before ChatGPT, our society didn’t know that AI was around the corner. Now we feel like we don’t know what AI will bring, good or bad. The reality is that AI will almost certainly bring more black swan events in the future. But you also have more agency than you realize. You teach the machines. Why is it worth considering our reaction to AI in the black swan context? Step back from AI for a moment. 
The black swan way of thinking is useful because it’s about admitting we don’t know what we don’t know. The black swan theory says that randomness is more prevalent than we as humans are wired to accept. Many, if not most, of the truly important events in our lives or our society are unpredictable. So maybe—just maybe—we should balance the amount of time we try to predict and plan with more time spent preparing for change and uncertainty. Please don’t take this suggestion as a forecast for doom and gloom. We humans seem to be hardwired to see change and uncertainty as inherently bad, probably because we don’t live very long, and we have evolved to be almost entirely concerned with self-preservation, and to see anything new as a threat. The emergence of AI was jarring in the sensational way that it quickly went from tech company back office to dining room table conversation. We consider the idea of the black swan theory not so much because it explains the emergence of AI, but because it’s a good way to prepare for the uncertainty that lies ahead—the certain-to-happen but impossible-to-predict unintended outcomes of AI. The black swan theory tells us it’s up to us to work together to demand and take an active role in this change. So let’s go back to the moment when ChatGPT burst onto the scene. Why did it seem like such a big deal? ChatGPT delivered to everyday users a compelling language-based conversation experience on just about any topic. Note the use of the word “compelling” instead of “accurate” or “correct.” This is an important point that we’ll get back to. Over the next few months, social media, TV news, and traditional media exploded with examples of ChatGPT answering questions, writing poems, drafting letters, and summarizing information. For the first time, we seemed to be seeing a credible general-purpose artificial intelligence application. ChatGPT is one among many examples of a “chatbot,” a computer system taught to respond to text or voice. 
Chatbots had been mostly annoying and useless up until this point. Boomers, Gen Xers, and Millennials: Remember Clippy? Most recent chatbots weren’t much better. They were rarely welcome, mostly unhelpful interjections to already painful customer service experiences. If it seemed like every chatbot you interacted with before ChatGPT was narrow-minded and knew nothing about your specific problem, you’re right. Chatbots prior to the current generation were not much more than a fancy search interface to whatever narrow catalog of information their owner wanted you to go through before passing you on to a more expensive human. What was different this time? The release of ChatGPT gave anyone who signed up for a free account an interface to incredibly powerful new language-based AI technology. Millions upon millions of people did just that and had their first experience with AI in a message-based conversational format that seemed to respond somewhat credibly on any topic. ChatGPT differed from our experience with Clippy and its chatbot descendants in part due to the many billions of dollars invested in its underlying “brain,” a type of AI known as a “large language model” that we’ll learn more about later. Suffice it to say, ChatGPT could reply to you more authentically based on its having read hundreds of millions of publicly available texts multiple times. ChatGPT also used a new type of AI (more on this in just a bit) that was much better at figuring out the important words and phrases in your message. Prior chatbots were clumsier and not as insightful. People got super excited because ChatGPT seemed more responsive to what was important in their message. In a way, ChatGPT was the first chatbot that seemed like a good listener. And it could respond to questions on just about any topic because it had read just about everything on the internet. Did you know that ChatGPT was taught with web pages archived going all the way back to 2008? 
That includes everything from the words written by copywriters in online clothing catalogs to public websites to online news articles not behind a paywall. This was the closest we, the general public, had come to the artificial intelligence...
Georgie Kovacs
11/26/2025
Georgie Kovacs is a women's health expert who spent 20 years in high-level roles at major companies like Pfizer and IQVIA before founding her consulting practice and becoming a full-time patient advocate. After her personal four-year battle to get an endometriosis diagnosis, she founded , which uses a popular podcast to cut through the confusion and stigma surrounding women's health issues. In her consulting, she uses her deep industry knowledge to advise companies on ethical women's health strategies, ensuring products truly meet patient needs. Georgie is focused on empowering women to demand better care and become their own best health advocates in a complex system.
Editorial: OpenAI Fails Its Ask Jeeves Moment
10/21/2025
Oh Sam. Ask Jeeves established precedent for the search industry that monetizing porn is bad for people and for business. OpenAI and its Microsoft godfather seem intent on doing the opposite. Will Google and Amazon read the room* and ask What Would Jeeves Do? When I worked at Ask Jeeves in 2000 there was a debate. We were valued at $12B but nowhere near breaking even. Should Jeeves be a butler for porn? Eric, our internet architect, sat on the traffic control council for the internet (W3C) and reported that the bulk of the packets flying through this early system of tubes carried porn pixels. As one voice in my favorite group chat put it, "...any user experience optimized to the n-th degree will be porn." Barely ten years in, it was clear that porn drove the internet. So the question was, would Ask Jeeves set the precedent that porn drove search? The license for the character from the P.G. Wodehouse family was silent on porn, but in early conversations, the family made it known they didn’t want to see beloved Jeeves selling flesh. But. Sooooo muuuucchhhh moneeyyyyyy. The answer? A pot smoke-filled back room. A remarkably sexually diverse (for 2000) crew of human supervisors of the core NLP engine "secretly" took over a conference room to teach an offline version of Jeeves about the dark side of human sexuality. They called themselves the "Jeeviants." In true Bay Area form, the Jeeviants immediately decorated Jeeves' Walled Garden with inflatable sex dolls, wall displays of dildos, and a giant inflatable vagina. All supervised by a cardboard cutout of Jeeves dressed to the nines in S&M leather. I was allowed in as part of a conversation on how to approach continuous learning from user data, though we didn't have those words back then. The goal was a completely separate search site where you and your firewall knew porn was on the menu. Thankfully Porn Jeeves was shut down by leadership and the Wodehouses. 
Everyone recognized and accepted this was a bad short-term financial decision. The "usage-maxxing" and revenue projections were astronomical. But Jeeves had become a go-to safe space for school teachers and parents to teach children about the newfangled World Wide Web. A safe brand for decades-old businesses to dip brick-and-mortar toes into the eCommerce waters. So shutting it down was the right business decision. The twenty-something clown in me? Disappointed, because there were ah-mazing illustrations of Jeeves done up in leather chaps holding a riding crop, ready for go-live. And that was HILARIOUS! The father of three in me? So, so grateful. Google took off a couple years later and followed Jeeves' example that porn was a third rail. A hot stove. The search industry took its sweet time figuring out how to put porn in a back room, like the video stores at the time did with VHS tapes. Microsoft calls the shots at OpenAI now. Microsoft CEO Mr. Nadella's deft political moves hung Sam Altman from puppet strings, meaning it's Sam who announces Artificial Porn Intelligence (API?). Microsoft reportedly gets 75% of ChatGPT's profits until its $13 billion investment is recouped. After which their take drops to only 49%. Altman sells porn and godfather Nadella gets the envelopes full of money. But I'm not boycotting Word and Teams like I am ChatGPT. Smart. I had the first AI Talk with my teenage son about his education last Spring. ("Look, I know you're going to use it. I'm not trying to stop you. I just want you to make good decisions.") Looks like I have to have another one. /s Thanks Mr. Nadella /s. Can we get some feminist CEOs please? Jeff Pennington is author of You Teach the Machines: AI On Your Terms, available online where you get books. Please consider connecting with other humans by ordering to your local bookstore through .
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/38725350
info_outline
Editorial: OpenAI Fails Its Ask Jeeves Moment
10/21/2025
Editorial: OpenAI Fails Its Ask Jeeves Moment
*Not holding out hope for Meta. Zuckerberg's bunkers aren't going to pay for themselves. The rat's nest of Instagram's AI is already "engaging" our brainstems with as much soft-core porn as it can get away with. Meta's recent virtue-signaling of "Instagram for Teens" is 99.999% certain to be a setup for serving hard-core porn.
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/38725055
info_outline
Paul Berger Gives MJ Hope
10/16/2025
Paul Berger Gives MJ Hope
Paul Berger joins and shockingly convinces MJ that AI is not the enemy! His thoughtful and constructive approach to MJ's skepticism is based on a lifetime of experience and success in two wildly different fields. An excellent episode! If you find the episode helpful, please share with friends and family. Thanks for listening!
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/38610320
info_outline
Cincinnati Children's - Philip Hagedorn, MD
07/22/2025
Cincinnati Children's - Philip Hagedorn, MD
Chief Health Informatics Officer Phil Hagedorn shares a "this is happening right now" bedside view of artificial intelligence and clinical education. MJ asks Phil to advise aspiring medical and nursing students, as well as twenty-somethings new to the workforce. MJ asks Jeff the same question. Both have (some) answers! Thanks for listening!
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/37512515
info_outline
Book
06/30/2025
Book
Hi, here's how you can get You Teach The Machines...

The Library... visit a library listed below or ask your local library to consider ordering from the distributor (IngramSpark) so you and others can read it. eBook ISBN is 979-8-9988152-2-5, hardcover is 979-8-9988152-1-8, paperback is 979-8-9988152-0-1. You may also email me at and I'd be happy to donate a copy directly to your library.

Local Business... and pick up from your friendly bookseller. Bookshop.org shares profits with physical stores, like a co-op. Visit one of the bookstores listed below, or if you're going to the bookstore anyway, ask your bookseller to order from IngramSpark. Give them the ISBN, same as above. All support a connected community of real, intelligent people. I support in Philadelphia.

Online: IngramSpark in and Apple Books eBook and Amazon in for delivery or store pickup.

Libraries with You Teach the Machines:
California: Belvedere Tiburon and Mill Valley
Maine: Stewart/North Anson; Somesville Library Association, Southwest Harbor, Northeast Harbor, Jesup/Bar Harbor, Waterville, Madison, and by request at 60 libraries via the Minerva system
New York: Thousand Island Park and the North Country Library System
Pennsylvania: Bala Cynwyd and the Lower Merion Library System

Bookstores with You Teach the Machines:
California: (Downtown SF and Corte Madera)
Maine: Bar Harbor, (in stock at all ten locations)
Pennsylvania: Philadelphia (benefits Philly Public Library)

Thanks for trusting me with the time you spend reading! --Jeff Pennington, Philadelphia, PA, USA.

P.S. Please give it to me straight in a review on GoodReads or Amazon! The ranking AI likes reviews of any kind...

*ISBN is the unique identifier the industry uses to keep track of books. It's like a barcode. Which the book comes with in case the bookstore asks.

---------

Jeff Pennington has worked for more than twenty-five years in data, data science, and artificial intelligence.
Most recently, he served as Chief Research Informatics Officer for the Children’s Hospital of Philadelphia. He has authored or co-authored more than twenty peer-reviewed papers in the fields of biomedical informatics and data privacy. Jeff started his career at Ask Jeeves, the first major commercial natural language search engine, and led data and artificial intelligence for the Children’s Hospital of Philadelphia Research Institute, where he worked for seventeen years before writing his first book. He now helps leading healthcare systems get ready for AI. Jeff holds a bachelor’s degree in biology from Trinity College and a master’s degree in computer science from Drexel University.

You Teach the Machines: AI On Your Terms

AI on your terms. Not theirs. You Teach the Machines helps people of all backgrounds understand, live with, and use artificial intelligence. Told through real examples drawn from finance, cooking, technology, marketing, health, music, driving, and more, You Teach the Machines will help you maximize benefit and minimize harm from the AI-driven change already happening in your life. Find out what AI has to do with data privacy, the movie Trading Places, painter Bob Ross's Happy Little Clouds, the growth of the nuclear power industry, and your child or grandchild's classroom. Did you know that AI learns much like a baby? Understand the AI training cycle and how you participate in it: whether you realize it or not, you literally teach the machines. Did you know that AI can scale the expertise of specialized doctors? Discover how AI can enhance your health and provide rural residents with new access to medical care. Does this all feel new? Learn how we've been here before: from the invention of the written and printed word to ChatGPT, You Teach the Machines shows how writing and books have transformed the use of human knowledge, just as AI is doing now.
Historical anecdotes help to demystify powerful new artificial intelligence technology and put it in a context relevant to your life and work today. Completely new to AI? Helpful hands-on exercises walk you through what AI is good at—and where it is still a work-in-progress. Gain actionable insight into the pitfalls and side effects of a new industrial revolution in human knowledge work. Most importantly, You Teach the Machines lays out a path to improve your work and life with artificial intelligence. On your terms.
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/37220400
info_outline
Get You Teach The Machines: AI On Your Terms
06/17/2025
Get You Teach The Machines: AI On Your Terms
Hi, Jeff here! Here's how you can get You Teach The Machines in print or audio...

Best for humanity: ask your local library to order from the distributor (IngramSpark) so you and others can read it. Give them the ISBN*. Hardcover ISBN is 979-8-9988152-1-8, paperback ISBN is 979-8-9988152-0-1.
Best for your community: have it delivered to your local bookstore through Bookshop.org.
Best for my bank account: buy direct from IngramSpark.
Ok, fine... Easiest: on Amazon.

Thanks for trusting me with your purchase! With gratitude, --Jeff Pennington, Philadelphia, PA, USA.

P.S. Please give it to me straight in a review on GoodReads or Amazon! The ranking AI likes reviews even if they're bad... It also likes pictures of faces so...

*ISBN is the unique identifier the industry uses to keep track of books. It's like a barcode. Which my book comes with in case the bookstore asks :-)

I wrote You Teach the Machines to be an entertaining roadmap, manual, and sometimes antidote to AI-driven change. The book helps articulate the fear and optimism experienced by you, me, our kids, their teachers, and our parents.

You Teach the Machines: AI On Your Terms

AI on your terms. Not theirs. You Teach the Machines helps people of all backgrounds understand, live with, and use artificial intelligence. Told through real examples drawn from finance, cooking, technology, marketing, health, music, driving, and more, You Teach the Machines will help you maximize benefit and minimize harm from the AI-driven change already happening in your life. Find out what AI has to do with data privacy, the movie Trading Places, painter Bob Ross's Happy Little Clouds, the growth of the nuclear power industry, and your child or grandchild's classroom. Did you know that AI learns much like a baby? Understand the AI training cycle and how you participate in it: whether you realize it or not, you literally teach the machines. Did you know that AI can scale the expertise of specialized doctors?
Discover how AI can enhance your health and provide rural residents with new access to medical care. Does this all feel new? Learn how we've been here before: from the invention of the written and printed word to ChatGPT, You Teach the Machines shows how writing and books have transformed the use of human knowledge, just as AI is doing now. Historical anecdotes help to demystify powerful new artificial intelligence technology and put it in a context relevant to your life and work today. Completely new to AI? Helpful hands-on exercises walk you through what AI is good at—and where it is still a work-in-progress. Gain actionable insight into the pitfalls and side effects of a new industrial revolution in human knowledge work. Most importantly, You Teach the Machines lays out a path to improve your work and life with artificial intelligence. On your terms.
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/37045600
info_outline
Pablo Urdiales Antelo - Journalism at Swansea
06/05/2025
Pablo Urdiales Antelo - Journalism at Swansea
If you thought AI killed journalism, you haven't met Pablo! Thrilled to speak with Pablo Urdiales Antelo, full-time journalist and Class of 2024 graduate of the excellent Swansea University! Class of '25, '26, '27: this is a great discussion of Pablo's experience starting out in a profession AI is supposedly automating. We find space between doom and optimism, but if you're looking for easy answers with AI... let us know when you find them! Thanks for listening, check out Pablo's articles at , and get in touch at
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/36870515
info_outline
Jack - CS at CalPoly
05/30/2025
Jack - CS at CalPoly
Our first graduate in the workforce! Jack takes his fresh computer science degree to Silicon Valley and along the way shares how he lives and works with AI as a student, job candidate, and new employee. We discuss whether AI will put 20-somethings like Jack and MJ out of work, or whether just maybe 50-somethings like me will be the ones on the outs! MJ asks a great question about learning to think vs. learning to program and we get closer to some answers. Get in touch at youteachthemachines.com and thanks for listening!
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/36785850
info_outline
Serena - AI, Privacy and Public Health
04/27/2025
Serena - AI, Privacy and Public Health
The kids care about privacy! Hallelujah! Serena joins to discuss privacy and the use of health data in AI, as well as the use of your content in OneDrive, Gmail, and Google Drive to train AI. We try for a balanced view, but are mostly freaked out. Thanks for listening; please review, subscribe, and get in touch at youteachthemachines.com.
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/36328470
info_outline
Alexa - French, and Sarah - Econ at Colby
03/21/2025
Alexa - French, and Sarah - Econ at Colby
Does AI fall flat with French idioms? Why should you NOT prompt AI with important data? Three different answers to the question "If AI went away, would you be able to do what you learned with it?" Three seniors adapt their learning to an AI world whether they like it or not.
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/35813370
info_outline
Oscar - ME at Northeastern
03/11/2025
Oscar - ME at Northeastern
Oscar shares how he saw AI in major use at his second co-op, and his perspective on how it helps him code even though he's not a CS major. MJ asks good questions about what's worth learning if someday AI is going to automate it. Jeff asks both MJ and Oscar how AI helps them learn. Want to share your story or perspective on AI? Get in touch! Thanks for listening!
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/35609250
info_outline
Lucy - Psych at Colby
03/10/2025
Lucy - Psych at Colby
Lucy, a Colby College psych major, joins Maryjane and Jeff to share how AI teaches her to break down topics and write better papers. We discuss similarities between AI and the dawn of the world wide web thirty years ago. Jeff shares the grumpy old man perspective; Lucy and MJ nod along politely. Want to share your story? Get in touch. Thanks for listening!
/episode/index/show/4479687b-ddf4-48af-8397-6a2f83aa314a/id/35593160