Eye On A.I.
AI is not just getting smarter. It is getting faster by learning how to optimize the hardware it runs on.
In this episode, Sharon Zhou, VP of AI at AMD and former Stanford AI researcher, explains how language models are beginning to write and optimize their own GPU kernel code. We explore what self-improving AI actually means, how reinforcement learning is used in post-training, and why kernel optimization could be one of the most overlooked scaling levers in modern AI.
Sharon breaks down how GPU efficiency impacts the cost of training and inference, why catastrophic forgetting remains a challenge in continual learning, and how verifiable rewards from hardware profiling can help models improve themselves. The conversation also dives into compute economics, synthetic data, RLHF, and why infrastructure may define the next phase of AI progress.
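The "verifiable rewards from hardware profiling" idea can be sketched in miniature: a proposed kernel rewrite earns a reward only if it produces the same results as the baseline, and the reward itself is the measured speedup. This is a toy Python illustration using CPU timing, not AMD's actual pipeline; the function names are hypothetical, and a real system would profile GPU kernels with hardware counters rather than `timeit`.

```python
import timeit

def kernel_naive(a, b):
    # Baseline "kernel": elementwise multiply with an explicit loop.
    out = []
    for i in range(len(a)):
        out.append(a[i] * b[i])
    return out

def kernel_candidate(a, b):
    # Candidate rewrite, as a model might propose: comprehension form.
    return [x * y for x, y in zip(a, b)]

def verifiable_reward(candidate, baseline, a, b, repeats=50):
    """Reward a candidate kernel only if it is correct; scale by speedup.

    Correctness is checked against the baseline's output, and the
    measured runtime ratio becomes the verifiable reward signal.
    """
    if candidate(a, b) != baseline(a, b):
        return 0.0  # wrong results earn no reward, however fast
    t_base = timeit.timeit(lambda: baseline(a, b), number=repeats)
    t_cand = timeit.timeit(lambda: candidate(a, b), number=repeats)
    return t_base / t_cand  # > 1.0 means the candidate is faster

a = list(range(10_000))
b = list(range(10_000))
reward = verifiable_reward(kernel_candidate, kernel_naive, a, b)
print(f"speedup reward: {reward:.2f}")
```

Because the reward is grounded in a check anyone can rerun (same outputs, measured runtime), it avoids the reward-hacking risk of learned reward models: a model cannot be rewarded for a fast kernel that computes the wrong answer.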
If you want to understand where AI scaling is really happening beyond bigger models and more data, this episode goes under the hood.
Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
(00:00) Preview and Intro
(00:25) Sharon Zhou’s Background and Transition to AMD
(02:00) What Is Self-Improving AI?
(04:16) What Is a GPU Kernel and Why It Matters
(07:01) Using AI Agents and Evolutionary Strategies to Write Kernels
(11:31) Just-In-Time Optimization and Continual Learning
(13:59) Self-Improving AI at the Infrastructure Layer
(16:15) Synthetic Data and Models Generating Their Own Training Data
(20:48) AMD’s AI Strategy: Research Meets Product
(23:22) Inside the NeurIPS Tutorial on AI-Generated Kernels
(30:59) Reinforcement Learning Beyond RLHF
(39:09) 10x Faster Kernels vs 10x More Compute
(41:50) Will Efficiency Reduce Chip Demand?
(42:18) Beyond Language Models: Diffusion, JEPA, and Robotics
(45:34) Educating the Next Generation of AI Builders