
#316 Robbie Goldfarb: Why the Future of AI Depends on Better Judgment

Eye On A.I.

Release Date: 01/23/2026

#315 Jarrod Johnson: How Agentic AI Is Impacting Modern Customer Service

Eye On A.I.

In this episode of Eye on AI, Craig Smith sits down with Jarrod Johnson, Chief Customer Officer at TaskUs, to unpack how agentic AI is changing customer service from conversations to real action. They explore what agentic AI actually is, why chatbots were only the first step, and how enterprises are deploying AI systems that resolve issues, execute tasks, and work alongside human teams at scale. The conversation covers real-world use cases, the economics of AI-driven support, why many enterprise AI pilots fail, and how human roles evolve when AI takes on routine...

#314 Nick Pandher: How Inference-First Infrastructure Is Powering the Next Wave of AI

Eye On A.I.

Inference is now the biggest challenge in enterprise AI. In this episode of Eye on AI, Craig Smith speaks with Nick Pandher, VP of Product at Cirrascale, about why AI is shifting from model training to inference at scale. As AI moves into production, enterprises are prioritizing performance, latency, reliability, and cost efficiency over raw compute. The conversation covers the rise of inference-first infrastructure, the limits of hyperscalers, the emergence of neoclouds, and how agentic AI is driving always-on inference workloads. Nick also explains how inference-optimized hardware and...

#313 Evan Reiser: How Abnormal AI Protects Humans with Behavioral AI

Eye On A.I.

In this episode of Eye on AI, we sit down with Evan Reiser, co-founder and CEO of Abnormal AI, to unpack how AI has fundamentally changed the cybersecurity landscape. We explore why social engineering remains the most costly form of cybercrime, how generative AI has lowered the barrier for sophisticated attacks, and why humans have become the primary attack surface in modern security. Evan explains why traditional, signature-based defenses fall short, how behavioral AI detects threats that have never existed before, and what it means to build security systems that understand how people...

#312 Jonathan Wall: AI Agents Are Reshaping the Future of Compute Infrastructure

Eye On A.I.

In this episode of Eye on AI, Craig Smith speaks with Jonathan Wall, founder and CEO of Runloop AI, about why AI agents require an entirely new approach to compute infrastructure. Jonathan explains why agents behave very differently from traditional servers, why giving agents their own isolated computers unlocks new capabilities, and how agent-native infrastructure is emerging as a critical layer of the AI stack. The conversation also covers scaling agents in production, building trust through benchmarking and human-in-the-loop workflows, and what agent-driven systems mean for the...

#311 Anurag Dhingra: Inside Cisco’s Vision for AI-Powered Enterprise Systems

Eye On A.I.

In this episode of Eye on AI, Craig Smith sits down with Anurag Dhingra, Senior Vice President and General Manager at Cisco, to explore where AI is actually creating value inside the enterprise. Rather than focusing on flashy demos or speculative futures, this conversation goes deep into the invisible layer powering modern AI: infrastructure. Anurag breaks down how AI is being embedded into enterprise networking, security, observability, and collaboration systems to solve real operational problems at scale. From self-healing networks and agentic AI to edge computing, robotics, and...

#310 Stefano Ermon: Why Diffusion Language Models Will Define the Next Generation of LLMs

Eye On A.I.

This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit and add your support. Most large language models today generate text one token at a time. That design choice creates a hard limit on speed, cost, and scalability. In this episode of Eye on AI, Stefano Ermon breaks down diffusion language models and why a parallel, inference-first approach could define the next generation of LLMs. We explore how diffusion models differ from autoregressive systems, why inference efficiency matters more than training scale, and what this shift means for...

#309 Jamie Metzl: Why Gene Editing Needs Governance Or We Lose Control

Eye On A.I.

This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit and add your support. Why are AI, biotechnology, and gene editing converging right now, and what does that mean for the future of humanity? In this episode of Eye on AI, host Craig Smith sits down with futurist and author Jamie Metzl to explore the superconvergence of artificial intelligence, genomics, and exponential technologies that are reshaping life on Earth. We examine the ethical and scientific realities behind human genome editing, the controversy around CRISPR babies, and why...

#308 Christopher Bergey: How Arm Enables AI to Run Directly on Devices

Eye On A.I.

Try OCI for free at   This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload, where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today’s innovative AI tech companies who upgraded to OCI…and saved. Why is AI moving from the cloud to our devices, and what makes on-device intelligence finally practical at scale? In this episode of Eye on AI, host Craig Smith...

#307 Steven Brightfield: How Neuromorphic Computing Cuts Inference Power by 10x

Eye On A.I.

This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit and add your support. Why is AI so powerful in the cloud but still so limited inside everyday devices, and what would it take to run intelligent systems locally without draining battery or sacrificing privacy? In this episode of Eye on AI, host Craig Smith speaks with Steve Brightfield, Chief Marketing Officer at BrainChip, about neuromorphic computing and why brain-inspired architectures may be the key to the future of edge AI. We explore how neuromorphic systems differ from traditional...

info_outline
 

AI is getting smarter, but now it needs better judgment.

In this episode of the Eye on AI Podcast, we speak with Robbie Goldfarb, former Meta product leader and co-founder of Forum AI, about why treating AI as a truth engine is one of the most dangerous assumptions in modern artificial intelligence.

Robbie brings first-hand experience from Meta’s trust and safety and AI teams, where he worked on misinformation, elections, youth safety, and AI governance. He explains why large language models shouldn’t be treated as arbiters of truth, why subjective domains like politics, health, and mental health pose serious risks, and why more data does not solve the alignment problem.

The conversation breaks down how AI systems are evaluated today, how engagement incentives create sycophantic and biased models, and why trust is becoming the biggest barrier to real AI adoption. Robbie also shares how Forum AI is building expert-driven AI evaluation systems that scale human judgment instead of crowd labels, and why transparency about who trains AI matters more than ever.

This episode explores AI safety, AI trust, model evaluation, expert judgment, mental health risks, misinformation, and the future of responsible AI deployment.

If you are building, deploying, regulating, or relying on AI systems, this conversation will fundamentally change how you think about intelligence, truth, and responsibility.


Want to know more about Forum AI?
Website: https://www.byforum.com/
X: https://x.com/TheForumAI
LinkedIn: https://www.linkedin.com/company/byforum/

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI


(00:00) Why Treating AI as a “Truth Engine” Is Dangerous
(02:47) What Forum AI Does and Why Expert Judgment Matters
(06:32) How Expert Thinking Is Extracted and Structured
(09:40) Bias, Training Data, and the Myth of Objectivity in AI
(14:04) Evaluating AI Through Consequences, Not Just Accuracy
(18:48) Who Decides “Ground Truth” in Subjective Domains
(24:27) How AI Models Are Actually Evaluated in Practice
(28:24) Why Quality of Experts Beats Scale in AI Evaluation
(36:33) Trust as the Biggest Bottleneck to AI Adoption
(45:01) What “Good Judgment” Means for AI Systems
(49:58) The Risks of Engagement-Driven AI Incentives
(54:51) Transparency, Accountability, and the Future of AI