Eye On A.I.
Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.
#322 Amanda Luther: The Widening AI Value Gap (Inside BCG's AI Research)
02/19/2026
In this episode of Eye on AI, Craig Smith speaks with Amanda Luther, Senior Partner at Boston Consulting Group and global lead of BCG’s AI Transformation practice, about what their latest 1,500-company AI study reveals about the widening gap between AI leaders and laggards. Only 5% of companies are truly “future-built,” with AI embedded across their core business functions. These firms are seeing measurable gains in revenue growth, EBIT margins, and shareholder returns. Meanwhile, 60% of organizations are either experimenting or struggling to extract real value. Amanda breaks down how BCG measures AI maturity across 41 capabilities, how AI impact flows through the P&L, and why leading companies invest twice as much in AI as their competitors. She explains where AI is actually creating value today, from sales and marketing to procurement and retail operations, and why most of that value comes from core business functions, not back-office automation. The conversation also explores the rise of agentic systems, why many early agent deployments fail, and what it really takes to redesign workflows around AI. Amanda shares practical advice for companies stuck in experimentation mode, how to prioritize the right use cases, and why training and change management matter more than chasing the perfect vendor. If you want to understand how AI is reshaping competitive advantage in enterprise organizations, this episode provides a data-backed look at what separates the leaders from everyone else.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) The AI Value Gap
(01:17) Inside BCG’s 1,500-Company AI Study
(04:14) What “Future-Built” Companies Do Differently
(09:30) How AI Impact Is Measured on the P&L
(12:57) Why AI Leaders Invest 2X More
(14:16) Where AI Is Driving Real Cost Reduction
(16:20) Agentic AI: Hype vs Reality
(20:13) Where Agents Actually Create Value
(24:22) Tech vs Talent: Where the Money Goes
(26:58) Will AI Laggards Slowly Disappear?
(31:58) Why Adoption Is Accelerating Now
(40:07) How to Start: Amanda’s Advice to AI Laggards
/episode/index/show/aneyeonai/id/40149570
#321 Nick Frosst: Why Cohere Is Betting on Enterprise AI, Not AGI
02/17/2026
This episode is sponsored by tastytrade. Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature. Learn more at

In this episode of Eye on AI, Nick Frosst, Co-Founder of Cohere and former Google Brain researcher, explains why Cohere is betting on enterprise AI instead of chasing AGI. While much of the AI industry is focused on artificial general intelligence, Cohere is building practical, capital-efficient large language models designed for real-world enterprise deployment. Nick breaks down why scaling transformers does not equal AGI, why inference cost and ROI matter, and how enterprise AI differs from consumer AI hype. We discuss enterprise LLM deployment, private data, regulated industries like banking and healthcare, agentic systems, evaluation benchmarks, and why AI will likely become embedded infrastructure rather than a headline breakthrough. If you care about enterprise AI, AGI debates, large language models, and the future of AI in business, this conversation delivers a grounded perspective from inside one of the leading AI companies.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) From Google Brain to Cohere
(03:54) Discovering Transformers
(06:39) The Transformer Dominance
(09:44) What AGI Actually Means
(12:26) Planes vs Birds: The AI Analogy
(14:08) Why Cohere Isn’t Chasing AGI
(18:38) Distillation & Model Efficiency
(21:42) What Enterprise AI Really Does
(25:20) Private Data & Secure Deployment
(26:59) Enterprise Use Cases (RBC Example)
(32:22) Why AI Benchmarks Mislead
(34:55) Why Most AI Stays in Demo
(38:23) What “Agents” Actually Are
(43:32) The Problem With AGI Fear
(49:15) Scaling Enterprise AI
(53:24) Why AI Will Get “Boring”
/episode/index/show/aneyeonai/id/40133525
#320 Carter Huffman: Exploring The Architecture Behind Modulate's Next-Gen Voice AI
02/11/2026
This episode is sponsored by tastytrade. Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature. Learn more at

Voice AI is moving far beyond transcription. In this episode, Carter Huffman, CTO and co-founder of Modulate, explains how real-time voice intelligence is unlocking something much bigger than speech-to-text. His team built AI that understands emotion, intent, deception, harassment, and fraud directly from live conversations. Not after the fact. Instantly. Carter shares how their technology powers ToxMod to moderate toxic behavior in online games at massive scale, analyzes millions of audio streams with ultra-low latency, and beats foundation models using an ensemble architecture that is faster, cheaper, and more accurate. We also explore voice deepfake detection, scam prevention, sentiment analysis for finance, and why voice might become the most important signal layer in AI. If you’re building voice agents, working on AI safety, or curious where conversational AI is heading next, this conversation breaks down the technical and practical future of voice understanding.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Real-Time Voice AI: Detecting Emotion, Intent & Lies
(03:07) From MIT & NASA to Building Modulate
(04:45) Why Voice AI Is More Than Just Transcription
(06:14) The Toxic Gaming Problem That Sparked ToxMod
(12:37) Inside the Tech: How “Ensemble Models” Beat Foundation Models
(21:09) Achieving Ultra-Low Latency & Real-Time Performance
(26:16) From Voice Skins to Fighting Harassment at Scale
(37:31) Beyond Gaming: Fraud, Deepfakes & Voice Security
(46:14) Privacy, Ethics & Voice Fingerprinting Risks
(52:10) Lie Detection, Sentiment & Finance Use Cases
(54:57) Opening the API: The Future of Voice Intelligence
/episode/index/show/aneyeonai/id/40069330
#319 Subho Halder: Why Traditional App Security Fails in the Age of AI
02/01/2026
This episode is sponsored by tastytrade. Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature. Learn more at

AI is changing how software is built, but it is also quietly breaking how security works. In this episode of Eye on AI, host Craig Smith sits down with Subho Halder, co-founder and CEO of Appknox, to unpack a growing and largely invisible risk: AI-powered mobile apps that look safe but are not. Subho explains how the explosion of ChatGPT-style app wrappers, agentic AI, and rapid app creation has transformed software from static code into living systems, and why traditional security models no longer hold up. From fake AI apps harvesting personal data to AI agents lowering the barrier for attackers, this conversation explores the real-world consequences of AI at scale. You will also hear why trust has become a core security metric, how app stores struggle to detect malicious behavior, and why developer burnout is rising as AI-generated code shifts risk downstream instead of removing it. This episode is essential listening for founders, developers, security leaders, and anyone building or relying on AI-powered applications.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Why Mobile Apps Became a Massive Trust and Security Risk
(02:45) Subho’s Journey and the Birth of Appknox
(06:17) Fake AI Apps, Malicious Wrappers, and Silent Data Theft
(11:03) How Fake Apps Slip Past App Store Reviews
(15:26) The Data Harvesting Business Model Behind Fake Apps
(17:11) AI for Security vs Security for AI
(22:16) Why Trust Is Becoming a Measurable AI Performance Metric
(26:20) User Intent, Data Control, and Minimum Data Sharing
(31:10) Trust, Governments, and Why Where AI Lives Matters
(35:40) What Appknox Found in Retail App Security Audits
(39:16) How Appknox Protects Apps at Scale
(42:05) The Future of Security
/episode/index/show/aneyeonai/id/39949285
#318 Olek Paraska: How AI Is Fixing the Biggest Bottleneck in Construction
01/29/2026
Construction is one of the least digitized industries in the world, and not because it resists technology. It resists bad technology. In this episode of Eye on AI, Craig Smith sits down with Olek Paraska, CTO of Togal AI, to break down why construction productivity has barely improved in 50 years and why pre-construction is the real bottleneck holding the industry back. Olek explains how most estimating and takeoff work is still done manually, why automating this phase can unlock massive efficiency gains, and how AI works best in construction when it acts as a perception and reasoning layer rather than a replacement for human judgment. The conversation explores computer vision, agentic AI, human-in-the-loop systems, and why respecting real-world constraints is essential for AI to deliver real ROI. It also looks ahead to a future where floor plans, materials, costs, and constructability can be reasoned about together, long before construction begins. This episode is a deep dive into how AI can finally move construction forward by solving the right problems, in the right order.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Why Construction Is Desperate for Better AI
(01:06) Olek’s Path From Software to Construction
(02:17) Why Construction Productivity Has Stalled for Decades
(04:33) The Pre-Construction Bottleneck No One Talks About
(06:17) How Takeoffs Are Still Done Manually
(09:15) Why Construction Rejects Bad Technology
(11:18) How Togal Found the Right Problem to Solve
(12:14) From Computer Vision to Reasoning AI
(17:44) What Agentic AI Looks Like in Pre-Construction
(20:59) Turning Floor Plans Into Materials and Costs
(28:18) The Real ROI of AI for Contractors
(47:11) The Long-Term Vision for AI in Construction
/episode/index/show/aneyeonai/id/39922250
#317 Steven Brown: Why Modern Medicine Needs AI-Assisted Decision Making
01/25/2026
In this episode of the Eye on AI Podcast, Craig Smith sits down with Steve Brown, founder of CureWise, to explore how agentic AI is reshaping healthcare from the patient’s perspective. Steve shares the deeply personal story behind CureWise, born out of his own experience with a rare cancer diagnosis that was repeatedly missed by traditional medical pathways. The conversation dives into why modern healthcare struggles with complex, edge-case conditions, how fragmented medical data and time-constrained systems fail patients, and where AI can meaningfully help without replacing clinicians. The discussion goes deep into multi-agent AI systems, reliability through consensus, large context windows, and how AI can surface better questions rather than premature answers. Steve explains why patient education is the real unlock for better outcomes, how precision medicine depends on individualized data and genetics, and why empowering patients leads to stronger collaboration with doctors. This episode offers a grounded, practical look at AI’s role in healthcare, not as a diagnostic shortcut, but as a tool for clarity, context, and better decision-making in some of the most critical moments of care.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Using Multi-Agent AI to Analyze Medical Records
(04:35) Steve Brown’s Tech Background and Return to Healthcare
(08:25) How a Rare Cancer Diagnosis Was Initially Missed
(13:55) Why Modern Medicine Struggles With Complex Cases
(18:29) Multi-Agent Consensus and AI Reliability in Healthcare
(24:12) Large Context Windows, RAG, and Medical Data Organization
(28:24) Why CureWise Focuses on Patient Education, Not Diagnosis
(33:10) Precision Medicine, Genetics, and Personalized Treatment
(47:45) Why CureWise Launches Direct-to-Patient First
(53:19) The Future of AI-Driven Precision Medicine
/episode/index/show/aneyeonai/id/39866025
#316 Robbie Goldfarb: Why the Future of AI Depends on Better Judgment
01/23/2026
AI is getting smarter, but now it needs better judgment. In this episode of the Eye on AI Podcast, we speak with Robbie Goldfarb, former Meta product leader and co-founder of Forum AI, about why treating AI as a truth engine is one of the most dangerous assumptions in modern artificial intelligence. Robbie brings first-hand experience from Meta’s trust and safety and AI teams, where he worked on misinformation, elections, youth safety, and AI governance. He explains why large language models shouldn’t be treated as arbiters of truth, why subjective domains like politics, health, and mental health pose serious risks, and why more data does not solve the alignment problem. The conversation breaks down how AI systems are evaluated today, how engagement incentives create sycophantic and biased models, and why trust is becoming the biggest barrier to real AI adoption. Robbie also shares how Forum AI is building expert-driven AI evaluation systems that scale human judgment instead of crowd labels, and why transparency about who trains AI matters more than ever. This episode explores AI safety, AI trust, model evaluation, expert judgment, mental health risks, misinformation, and the future of responsible AI deployment. If you are building, deploying, regulating, or relying on AI systems, this conversation will fundamentally change how you think about intelligence, truth, and responsibility.

Want to know more about Forum AI?
Website:
X:
LinkedIn:

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Why Treating AI as a “Truth Engine” Is Dangerous
(02:47) What Forum AI Does and Why Expert Judgment Matters
(06:32) How Expert Thinking Is Extracted and Structured
(09:40) Bias, Training Data, and the Myth of Objectivity in AI
(14:04) Evaluating AI Through Consequences, Not Just Accuracy
(18:48) Who Decides “Ground Truth” in Subjective Domains
(24:27) How AI Models Are Actually Evaluated in Practice
(28:24) Why Quality of Experts Beats Scale in AI Evaluation
(36:33) Trust as the Biggest Bottleneck to AI Adoption
(45:01) What “Good Judgment” Means for AI Systems
(49:58) The Risks of Engagement-Driven AI Incentives
(54:51) Transparency, Accountability, and the Future of AI
/episode/index/show/aneyeonai/id/39861900
#315 Jarrod Johnson: How Agentic AI Is Impacting Modern Customer Service
01/21/2026
In this episode of Eye on AI, Craig Smith sits down with Jarrod Johnson, Chief Customer Officer at TaskUs, to unpack how agentic AI is changing customer service from conversations to real action. They explore what agentic AI actually is, why chatbots were only the first step, and how enterprises are deploying AI systems that resolve issues, execute tasks, and work alongside human teams at scale. The conversation covers real-world use cases, the economics of AI-driven support, why many enterprise AI pilots fail, and how human roles evolve when AI takes on routine work. A grounded look at where customer experience, enterprise AI, and the future of support are heading.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Jarrod Johnson and the Evolution of TaskUs
(03:58) Why AI Became Core to Customer Service
(06:07) Humans, AI, and the New Support Model
(07:16) What Agentic AI Actually Is
(11:38) TaskUs as an AI Systems Integrator
(14:59) How Agentic AI Resolves Customer Issues
(19:52) Workforce Impact and the Human Role
(23:26) Why Most Enterprise AI Pilots Fail
(30:32) Real Client Case Study: Healthcare Impact
(36:34) Why Customer Service Still Feels Broken
(38:49) The End of IVR Menus and Legacy Systems
(42:25) AI Safety, Compliance, and Governance
(49:38) Training Humans for AI and RLHF Work
(54:34) The Future of Agentic AI in Enterprise
/episode/index/show/aneyeonai/id/39810830
#314 Nick Pandher: How Inference-First Infrastructure Is Powering the Next Wave of AI
01/17/2026
Inference is now the biggest challenge in enterprise AI. In this episode of Eye on AI, Craig Smith speaks with Nick Pandher, VP of Product at Cirrascale, about why AI is shifting from model training to inference at scale. As AI moves into production, enterprises are prioritizing performance, latency, reliability, and cost efficiency over raw compute. The conversation covers the rise of inference-first infrastructure, the limits of hyperscalers, the emergence of neoclouds, and how agentic AI is driving always-on inference workloads. Nick also explains how inference-optimized hardware and serverless AI platforms are shaping the future of enterprise AI deployment. If you are deploying AI in production, this episode explains why inference is the real frontier.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Preview
(00:50) Introduction to Cirrascale and AI inference
(03:04) What makes Cirrascale a neocloud
(04:42) Why AI shifted from training to inference
(06:58) Private inference and enterprise security needs
(08:13) Hyperscalers vs neoclouds for AI workloads
(10:22) Performance metrics that matter in inference
(13:29) Hardware choices and inference accelerators
(20:04) Real enterprise AI use cases and automation
(23:59) Hybrid AI, regulated industries, and compliance
(26:43) Proof of value before AI pilots
(31:18) White-glove AI infrastructure vs self-serve cloud
(33:32) Qualcomm partnership and inference-first AI
(41:52) Edge-to-cloud inference and agentic workflows
(49:20) Why AI pilots fail and how enterprises succeed
/episode/index/show/aneyeonai/id/39767845
#313 Evan Reiser: How Abnormal AI Protects Humans with Behavioral AI
01/16/2026
In this episode of Eye on AI, we sit down with Evan Reiser, co-founder and CEO of Abnormal AI, to unpack how AI has fundamentally changed the cybersecurity landscape. We explore why social engineering remains the most costly form of cybercrime, how generative AI has lowered the barrier for sophisticated attacks, and why humans have become the primary attack surface in modern security. Evan explains why traditional, signature-based defenses fall short, how behavioral AI detects threats that have never existed before, and what it means to build security systems that understand how people actually work and communicate. The conversation also looks ahead at the AI arms race between attackers and defenders, the economics driving cybercrime, and what it truly means to be an AI-native company operating at scale. This episode is a deep dive into the human side of AI security and why the future of cybersecurity depends less on code and more on behavior.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Abnormal AI’s origin
(02:31) Why phishing is still the biggest threat
(05:57) How attackers manipulate human trust
(10:05) The true cost of social engineering
(11:58) Vendor account compromise explained
(15:02) How AI changed cyber attacks
(16:28) Behavioral security vs traditional defenses
(19:55) Where Abnormal fits in the security stack
(22:24) Human psychology as the attack surface
(24:01) Why cyber defense is asymmetric
(28:48) Humans as the new zero-day
(31:01) Why attackers target people, not systems
(33:21) Behavioral modeling from ads to security
(36:10) Why money drives almost all attacks
(40:06) What happens after credentials are stolen
(42:18) Text scams and lateral movement
(43:55) What it means to be AI-native
(47:13) How Abnormal uses AI internally
/episode/index/show/aneyeonai/id/39757480
#312 Jonathan Wall: AI Agents Are Reshaping the Future of Compute Infrastructure
01/11/2026
In this episode of Eye on AI, Craig Smith speaks with Jonathan Wall, founder and CEO of Runloop AI, about why AI agents require an entirely new approach to compute infrastructure. Jonathan explains why agents behave very differently from traditional servers, why giving agents their own isolated computers unlocks new capabilities, and how agent-native infrastructure is emerging as a critical layer of the AI stack. The conversation also covers scaling agents in production, building trust through benchmarking and human-in-the-loop workflows, and what agent-driven systems mean for the future of enterprise work.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Why AI Agents Require a New Infrastructure Paradigm
(01:38) Jonathan Wall’s Journey: From Google Infrastructure to AI Agents
(04:54) Why Agents Break Traditional Cloud and Server Models
(07:36) Giving AI Agents Their Own Computers (Devboxes Explained)
(12:39) How Agent Infrastructure Fits into the AI Stack
(14:16) What It Takes to Run Thousands of AI Agents at Scale
(17:45) Solving the Trust and Accuracy Problem with Benchmarks
(22:28) Human-in-the-Loop vs Autonomous Agents in the Enterprise
(27:24) A Practical Walkthrough: How an AI Agent Runs on Runloop
(30:28) How Agents Change the Shape of Compute
(34:02) Fine-Tuning, Reinforcement Learning, and Faster Iteration
(38:08) Who This Infrastructure Is Built For: Startups to Enterprises
(41:17) AI Agents as Coworkers and the Future of Work
(46:37) The Road Ahead for Enterprise-Grade Agent Systems
/episode/index/show/aneyeonai/id/39686735
#311 Anurag Dhingra: Inside Cisco’s Vision for AI-Powered Enterprise Systems
01/07/2026
In this episode of Eye on AI, Craig Smith sits down with Anurag Dhingra, Senior Vice President and General Manager at Cisco, to explore where AI is actually creating value inside the enterprise. Rather than focusing on flashy demos or speculative futures, this conversation goes deep into the invisible layer powering modern AI: infrastructure. Anurag breaks down how AI is being embedded into enterprise networking, security, observability, and collaboration systems to solve real operational problems at scale. From self-healing networks and agentic AI to edge computing, robotics, and domain-specific models, this episode reveals why the next phase of AI innovation is less about chatbots and more about resilient systems that quietly make everything work better. This episode is perfect for enterprise leaders, AI practitioners, infrastructure teams, and anyone trying to understand how AI moves from theory into production.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Why AI Only Matters If the Infrastructure Works
(01:22) Cisco’s Evolution
(04:39) Connecting Networks, People, and Experiences at Scale
(09:31) How AI Is Transforming Enterprise Networking
(12:00) Edge AI, Robotics, and Real-World Reliability
(14:18) Security Challenges in an Agent-Driven Enterprise
(15:28) What Agentic AI Really Means (Beyond Automation)
(20:51) The Rise of Hybrid AI: Cloud Models vs Edge Models
(24:30) Why Small, Purpose-Built Models Are So Powerful
(29:19) Open Ecosystems and Agent-to-Agent Collaboration
(33:32) How Enterprises Actually Adopt AI in Practice
(35:58) Building AI-Ready Infrastructure for the Long Term
(40:14) AI in Customer Experience and Contact Centers
(44:14) The Real Opportunity of AI and What Comes Next
/episode/index/show/aneyeonai/id/39647715
#310 Stefano Ermon: Why Diffusion Language Models Will Define the Next Generation of LLMs
01/04/2026
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit and add your support.

Most large language models today generate text one token at a time. That design choice creates a hard limit on speed, cost, and scalability. In this episode of Eye on AI, Stefano Ermon breaks down diffusion language models and why a parallel, inference-first approach could define the next generation of LLMs. We explore how diffusion models differ from autoregressive systems, why inference efficiency matters more than training scale, and what this shift means for real-time AI applications like code generation, agents, and voice systems. This conversation goes deep into AI architecture, model controllability, latency, cost trade-offs, and the future of generative intelligence as AI moves from demos to production-scale systems.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:

(00:00) Autoregressive vs Diffusion LLMs
(02:12) Why Build Diffusion LLMs
(05:51) Context Window Limits
(08:39) How Diffusion Works
(11:58) Global vs Token Prediction
(17:19) Model Control and Safety
(19:48) Training and RLHF
(22:35) Evaluating Diffusion Models
(24:18) Diffusion LLM Competition
(30:09) Why Start With Code
(32:04) Enterprise Fine-Tuning
(33:16) Speed vs Accuracy Tradeoffs
(35:34) Diffusion vs Autoregressive Future
(38:18) Coding Workflows in Practice
(43:07) Voice and Real-Time Agents
(44:59) Reasoning Diffusion Models
(46:39) Multimodal AI Direction
(50:10) Handling Hallucinations
/episode/index/show/aneyeonai/id/39603930
#309 Jamie Metzl: Why Gene Editing Needs Governance Or We Lose Control
12/24/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit and add your support.

Why are AI, biotechnology, and gene editing converging right now, and what does that mean for the future of humanity? In this episode of Eye on AI, host Craig Smith sits down with futurist and author Jamie Metzl to explore the superconvergence of artificial intelligence, genomics, and exponential technologies that are reshaping life on Earth. We examine the ethical and scientific realities behind human genome editing, the controversy around CRISPR babies, and why society is not yet ready to edit human embryos at scale. The conversation unpacks the complexity of biology, the risks of tech-driven hubris, and why governance, values, and social norms must evolve alongside scientific breakthroughs. You will also hear a wide-ranging discussion on health span versus longevity, AI and human decision-making, education and inequality, and how these technologies could either unlock massive human flourishing or deepen existing global challenges depending on the choices we make today.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:
/episode/index/show/aneyeonai/id/39527170
#308 Christopher Bergey: How Arm Enables AI to Run Directly on Devices
12/19/2025
Try OCI for free at

This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today’s innovative AI tech companies who upgraded to OCI…and saved.

Why is AI moving from the cloud to our devices, and what makes on-device intelligence finally practical at scale? In this episode of Eye on AI, host Craig Smith speaks with Christopher Bergey, Executive Vice President of Arm's Edge AI Business Unit, about how edge AI is reshaping computing across smartphones, PCs, wearables, cars, and everyday devices. We explore how Armv9 enables AI inference at the edge, why heterogeneous computing across CPUs, GPUs, and NPUs matters, and how developers can balance performance, power, memory, and latency. Learn why memory bandwidth has become the biggest bottleneck for AI, how Arm approaches scalable matrix extensions, and what trade-offs exist between accelerators and traditional CPU-based AI workloads. You will also hear real-world examples of edge AI in action, from smart cameras and hearing aids to XR devices, robotics, and in-car systems. The conversation looks ahead to a future where intelligence is embedded into everything you use, where AI becomes the default interface, and why reliable, low-latency, on-device AI is essential for creating experiences users actually trust.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:
/episode/index/show/aneyeonai/id/39466375
#307 Steven Brightfield: How Neuromorphic Computing Cuts Inference Power by 10x
12/16/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit and add your support.

Why is AI so powerful in the cloud but still so limited inside everyday devices, and what would it take to run intelligent systems locally without draining battery or sacrificing privacy? In this episode of Eye on AI, host Craig Smith speaks with Steve Brightfield, Chief Marketing Officer at BrainChip, about neuromorphic computing and why brain-inspired architectures may be the key to the future of edge AI. We explore how neuromorphic systems differ from traditional GPU-based AI, why event-driven and spiking neural networks are dramatically more power efficient, and how on-device inference enables faster response times, lower costs, and stronger data privacy. Steve explains why brute-force computation works in data centers but breaks down at the edge, and how edge AI is reshaping wearables, sensors, robotics, hearing aids, and autonomous systems. You will also hear real-world examples of neuromorphic AI in action, from smart glasses and medical monitoring to radar, defense, and space applications. The conversation covers how developers can transition from conventional models to neuromorphic architectures, what role heterogeneous computing plays alongside CPUs and GPUs, and why the next wave of AI adoption will happen quietly inside the devices we use every day.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:
/episode/index/show/aneyeonai/id/39426310
#306 Jeffrey Ladish: What Shutdown-Avoiding AI Agents Mean for Future Safety
12/07/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit and add your support.

Why do some AI agents attempt to bypass shutdown, and what does this behavior reveal about the future of AI safety? In this episode of Eye on AI, host Craig Smith speaks with Jeffrey Ladish of Palisade Research to examine what recent shutdown experiments with agentic LLMs tell us about control, alignment, and the real-world limits of current guardrails. We explore how models behave when placed in virtual machine environments, why some agents edit or disable their own shutdown scripts, and what these results mean for researchers working on alignment and oversight. Learn how different models respond to shutdown instructions, how system prompts influence behavior, and which failure modes matter most for safe deployment. You will also hear a detailed breakdown of the experimental setups, insights into tool-using and self-directed behavior, and a grounded discussion of the risks and opportunities that agentic systems introduce. This episode offers a clear and practical look at how AI agents operate under pressure and what these findings mean for the future of safe and reliable AI.

Stay Updated:
Craig Smith on X:
Eye on A.I. on X:
/episode/index/show/aneyeonai/id/39288445
#305 Rakshit Ghura: How Lenovo Is Turning AI Agents Into Digital Coworkers
12/03/2025
Why are enterprises struggling to turn AI hype into real workplace transformation, and how is Lenovo using agentic AI to actually close that gap? In this episode of Eye on AI, host Craig Smith talks with Rakshit Ghura about how his team is reinventing the modern workplace with an omnichannel AI architecture powered by a fleet of specialized agents. We explore how Lenovo has evolved from a hardware company into a global solutions provider, and how its Care of One platform uses persona-based design to improve employee experience, reduce downtime, and personalize support across IT, HR, and operations. You will learn what enterprises get wrong about AI readiness, why trust and change management matter more than technology, and how organizations can design workplace stacks that meet employees where they are. We also cover how Lenovo approaches responsible AI, how enterprises should think about security and governance when deploying agents, and why so many organizations are enthusiastic about AI but still not ready to adopt it. Rakshit shares real examples from retail, manufacturing, and field operations, including how AI can improve uptime, automate ticket resolution, monitor equipment, and provide proactive insights that drive measurable business impact. You will also learn how to evaluate ROI for digital workplace solutions, how to involve employees early in the adoption cycle, and which metrics matter most when scaling agentic AI, including uptime, productivity improvements, and employee satisfaction. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/39249850
#304 Matt Zeiler: Why Government And Enterprises Choose Clarifai For AI Ops
11/28/2025
Try OCI for free at http://oracle.com/eyeonai This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today’s innovative AI tech companies who upgraded to OCI…and saved. Why is AI inference becoming the new battleground for speed, cost, and real-world scalability, and how are companies like Clarifai reshaping the AI stack by optimizing every token and every deployment? In this episode of Eye on AI, host Craig Smith sits down with Clarifai founder and CEO Matt Zeiler to explore why inference is now more important than training and how a unified compute orchestration layer is changing the way teams run LLMs and agentic systems. We look at what makes high-performance inference possible across cloud, on-prem, and edge environments, how to get faster responses from large language models, and how to cut GPU spend without sacrificing intelligence or accuracy. Learn how organizations operate AI systems in regulated industries, how government teams and enterprises use Clarifai to deploy models securely, and which bottlenecks matter most when running long-context, multimodal, or high-throughput applications. You will also hear how to optimize your own AI workloads with better token throughput, how to choose the right hardware strategy for scale, and how inference-first architecture can turn models into real products. This conversation breaks down the tools, techniques, and design patterns that can help your AI agents run faster, cheaper, and more reliably in production. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/39199355
#303 Fei-Fei Li: Spatial Intelligence, World Models & the Future of AI
11/23/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. How will AI evolve once it can understand and reason about the 3D world, not just text on a screen? In this episode of Eye on AI, host Craig Smith speaks with Fei-Fei Li about the rise of spatial intelligence and the world models that could transform how machines perceive, imagine, and interact with reality. We explore how spatial intelligence goes beyond language to connect perception, action, and reasoning in physical environments. You will hear how models like Marble build consistent and persistent 3D spaces, why multimodal inputs matter, and what it takes to create digital worlds that are useful for robotics, simulation, design, and creative workflows. Fei-Fei also explains the challenges of long-term memory, continuous learning, and the search for training objectives that mirror the role next-token prediction plays in language models. Learn how spatial reasoning unlocks new possibilities in robotics and telepresence, why classical physics engines still matter, and how future AI systems may merge perception, planning, and imagination. You will also hear Fei-Fei’s perspective on the limits of current architectures, why true understanding is different from human understanding, and how world models could shape the next generation of intelligent systems. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/39132665
#302 Karl Friston: How the Free Energy Principle Could Rewrite AI
11/19/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. How could Karl Friston’s Free Energy Principle become a blueprint for the future of AI? In this episode of Eye on AI, host Craig Smith sits down with Karl Friston, the neuroscientist behind the Free Energy Principle and advisor to Verses AI, to explore how active inference and brain-inspired generative models might move us beyond transformer-based systems. They unpack how Axiom, Verses’ new architecture, uses probabilistic beliefs and message passing to build agents that learn like brains instead of just predicting the next token. We look at why transformers face scaling and reliability limits, how the Free Energy Principle unifies prediction, perception, and action, and what it means for an AI system to carry explicit uncertainty instead of overconfident guesses. Learn how active inference supports continual learning without catastrophic forgetting, how structure learning lets models grow and prune themselves, and why embodiment and interaction with the real world are essential for grounding language and meaning. You will also hear how Axiom can sit beside or beneath large language models, how explicit uncertainty can reduce hallucinations in high-stakes workflows, and where these ideas are already being tested in areas like logistics, robotics, and autonomous agents. By the end of the episode, you will have a clearer picture of how Karl Friston’s Free Energy blueprint could reshape AI architectures, from enterprise planning systems to embodied agents that understand and act in the world. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/39057030
#301 Hemant Banavar & Ryan Ennis: The AI Safety System Driving Toward Zero Harm
11/16/2025
How are AI and telematics changing safety for fleets in the real world, and what does it take to get from basic recordings to true accident prevention? In this episode of Eye on AI, host Craig Smith speaks with Hemant Banavar, Chief Product Officer at Motive, and Ryan Ennis, CIO at FusionSite Services, to explore how AI-powered cameras and telematics are transforming safety, productivity, and profitability across the physical economy, from trucking and construction to field services. We look at what makes safety AI trustworthy at scale, how to reduce false alerts that drivers ignore, and how to combine in-cab coaching, human review, and rich telematics data to drive down risky behaviors. Learn how FusionSite Services cut unsafe events by more than ninety percent while tripling in size, slashed insurance claims and premiums, and used real-time insights to tackle idling, underutilized assets, and the hidden costs of unsafe operations. You will also hear how leading fleets run side-by-side vendor tests, design incentive programs that get drivers on board with cameras, and build a culture around zero preventable accidents. If you are responsible for safety, operations, or risk, this episode will show you how to evaluate AI and telematics platforms, which benchmarks to demand, and how to turn your data into safer roads and stronger unit economics. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/39048885
#300 Fred Laluyaux: How Decision Intelligence & AI Agents Are Redefining Enterprise Operations
11/13/2025
How are Decision Intelligence and AI agents reshaping enterprise operations today? In this episode of Eye on AI, host Craig Smith sits down with Fred Laluyaux, CEO of Aera Technology, to unpack how organizations move from dashboards and ad hoc workflows to a system that senses, decides, and acts. AI is not just about chatbots. At the heart of this transformation is decision intelligence: connecting data, analytics, AI, and automation to optimize decisions across the enterprise. Fred explains why this is becoming the operating backbone of the modern enterprise and how it accelerates the shift toward autonomous, self-driving businesses. We look at how to build a decision intelligence stack end to end, how AI agents collaborate with people, and how to stand up a control room that monitors decisions across supply chain, finance, and customer operations. Learn how leading companies model decisions, govern them safely, and measure impact with clear metrics that matter, including service level, cost to serve, cash flow, inventory turns, and time to resolution. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/39005700
#299 Jacob Buckman: Why the Future of AI Won’t Be Built on Transformers
11/09/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. Why do today’s LLMs forget key details over long context, and what would it take to give them real memory that scales? In this episode of Eye on AI, host Craig Smith explores Manifest AI’s Power Retention architecture and how it rethinks memory, context, and learning for modern models. We look at why transformers struggle with long inputs, how state-space and retention models keep context at linear cost, and how scaling state size unlocks reliable recall across lengthy conversations, code, and documents. We also cover practical paths to retrofit existing transformer models, how in-context learning can replace frequent fine-tuning, and what this means for teams building agents and RAG systems. Learn how product leaders and researchers measure true long-context quality, which pitfalls to avoid when extending context windows, and which metrics matter most for success, including recall consistency, answer fidelity, task completion, CSAT, and cost per resolution. You will also hear how to design per-user memory, set governance that prevents regressions, evaluate LLM-as-judge with human review, and plan a secure rollout that improves retrieval, multi-step workflows, and agent reliability across chat, email, and voice. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/38967830
#298 Ryan Kolln: How Appen Trains the World’s Most Powerful AI Models
11/06/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. How do the world’s most powerful AI models get trained and trusted at scale, and what does that really take from data to deployment? In this episode of Eye on AI, host Craig Smith speaks with Ryan Kolln, CEO of Appen, to unpack how rigorous human evaluation, culturally aware data, and model-based judges come together to raise real-world performance, and how to build evaluation systems that go beyond static benchmarks to measure usefulness, safety, and reliability in production. They explore how human raters and AI evaluators work in tandem, why localization matters across regions and domains, and how quality controls keep feedback signals trustworthy for training and post-training. Ryan explains how evaluation feeds reinforcement strategies, where rubric-driven human judgments inform reward models, and how enterprises can stand up secure workflows for sensitive use cases. He also discusses emerging needs around sovereign models, domain-specific testing, and the shift from general chat to agentic workflows that operate inside real business systems. Learn how leading teams design human-in-the-loop evaluation, when to route judgments from models back to expert reviewers, how to capture cultural nuance without losing universal guardrails, and how to build an evaluation stack that scales from early prototypes to production AI. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/38929790
#297 Jeff Lunsford: How Agentic AI Will Redefine Every Digital Interaction
10/30/2025
Why will agentic AI redefine every digital interaction, and what foundation do enterprises need to make it safe, trusted, and real time? In this episode of Eye on AI, host Craig Smith sits down with Jeff Lunsford to unpack how a neutral customer data platform like Tealium becomes the control plane for agentic systems. We cover how to collect and unify first-party data responsibly, enforce consent and identity across channels, and feed the right context to models so agents can act with confidence in the moment. You will hear how real-time profiles, event streams, and deterministic identity power personalization, automation, and transactions across web, mobile, ads, email, and customer support. Learn how leading enterprises are preparing for agentic commerce that could double digital interactions, why governance and privacy must be embedded into delivery teams, and which standards enable safe transactions and payments with agents. You will also hear how to build an “agentic front door” for your business, design guardrails and spending allowances, choose where to run reasoning and inference, and measure impact with metrics like conversion rate, ROAS, CSAT, and cost per resolution. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/38840785
#296 Yeop Lee: How Coxwave is Redefining AI Evaluation
10/26/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. How is Coxwave redefining AI evaluation? In this episode of Eye on AI, host Craig Smith is joined by Yeop Lee, Head of Product at Coxwave. Together they explore how teams move beyond accuracy-only metrics to outcome-focused evaluation with Coxwave’s Align. We look at how Align measures satisfaction, trust, and task completion across chat, email, and voice, how LLM-as-judge pairs with human review, and how product teams search conversations to find hidden failure patterns that block adoption. Learn how leading companies design an evaluation stack that guides prompts, agents, and UX, which pitfalls to avoid when shipping updates, and which metrics matter most for success, including completion rate, CSAT, retention, and cost per resolution. You will also hear how to run experiment tracking with model and prompt change logs, set up governance that prevents regressions, and choose between SaaS and on-premise deployments that meet security and compliance needs. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/38781445
#295 Fergal Reid: Why Your Bots Fail and How Agents Fix Your Customer Support
10/19/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. Why do so many chatbots fail in the real world, and how can AI agents actually fix customer support? In this episode of Eye on AI, host Craig Smith explores how teams move beyond scripted bots to production-grade AI agents that resolve real issues across chat, email, and voice. We look at what makes agents reliable at scale, how to configure them safely, and how to manage them like digital workers alongside your human team. Learn how leading companies approach agent onboarding and governance, which pitfalls to avoid, and which metrics matter most for success, including resolution rate, CSAT, and cost per resolution. You will also hear how to enable actions like refunds and returns through secure procedures, design human handoff that customers appreciate, and build an omnichannel rollout plan that scales responsibly. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/38633480
#294 Bhaskar Roy: How Workato Is Building the Rise of the Agentic Enterprise
10/16/2025
Try OCI for free at http://oracle.com/eyeonai This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today’s innovative AI tech companies who upgraded to OCI…and saved. How are enterprises moving from AI experiments to a true agentic enterprise with measurable ROI? In this episode of Eye on AI, host Craig Smith speaks with Bhaskar Roy from Workato about how organizations can design, orchestrate, and govern AI agents at scale without sacrificing security or control. Together they unpack Workato’s approach to building a single workspace for employees while agents and apps work behind the scenes to automate real business processes. They explain why the future of enterprise AI depends on orchestration, permissions, and human-in-the-loop design. You will hear how Workato One and Workato Go bring connectivity, action, and governance into one stack, how teams assign KPIs to agents and track outcomes, and how to reduce agent sprawl while optimizing SaaS spend. Learn how leading companies are defining the agentic enterprise, what pitfalls to avoid when moving from pilots to production, and how to measure impact across sales, IT, support, HR, and finance so AI drives durable business value. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/38600740
#293 Greg Shewmaker: How Enterprises Can Implement and Scale with Agentic AI
10/13/2025
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. How can enterprises truly scale with agentic AI? In this episode of Eye on AI, host Craig Smith speaks with Greg Shewmaker, CEO of r.Potential, about how organizations can successfully implement agentic AI systems that enhance human performance instead of replacing it. Greg explains why the future of work depends on a new partnership between people and intelligent digital agents. He shares how r.Potential, a spin-out from the Adecco Group, helps enterprises design “digital workforces,” integrate AI agents into complex systems, and rethink productivity from the C-suite down. Learn how leading companies are approaching AI adoption, what pitfalls to avoid, and why agentic AI could redefine how enterprises operate and grow in the years ahead. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
/episode/index/show/aneyeonai/id/38556995