Tech Talks Daily
What happens when the real bottleneck in artificial intelligence is no longer training models, but actually running them at scale?
In this episode of Tech Talks Daily, I sit down with Satyam Srivastava from d-Matrix to explore a shift that is quietly reshaping the entire AI infrastructure landscape. While much of the early AI race focused on training ever-larger models, the next phase of AI adoption is increasingly defined by inference: the moment when trained models are deployed and used to generate real-world results millions of times a day.
Satyam brings a unique perspective shaped by years of experience in signal processing, machine learning, and hardware architecture, including time spent at NVIDIA and Intel working on graphics, media technologies, and AI systems. Now at d-Matrix, he is helping design next-generation computing architectures focused on one of the biggest challenges facing the AI industry today: efficiently running large language models without overwhelming data centers with unsustainable power and infrastructure demands.
During our conversation, we explored why the industry underestimated the infrastructure implications of inference at scale. While training large models grabs headlines, the real operational pressure often comes later when those models must serve millions of queries in real time. That shift places enormous strain on memory bandwidth, energy consumption, and data movement inside modern data centers.
Satyam explains how d-Matrix identified this challenge years before generative AI exploded into the mainstream. Instead of focusing on training hardware like many AI startups at the time, the company concentrated on inference efficiency. That decision is becoming increasingly relevant as organizations begin to realize that simply adding more GPUs to data centers is not a sustainable long-term strategy.
We also discuss the growing power constraints surrounding AI infrastructure, and why efficiency-driven design may be the only realistic path forward. With electricity supply, cooling capacity, and semiconductor availability all becoming limiting factors, the industry is being forced to rethink how AI systems are architected. Custom silicon, purpose-built accelerators, and heterogeneous computing environments are now emerging as key pieces of the puzzle.
The conversation also touches on the geopolitical and economic importance of AI semiconductor leadership, and on why the relationships between frontier AI labs, infrastructure providers, and chip designers are becoming increasingly strategic. As governments and companies compete to maintain technological leadership, the question of who controls the hardware powering AI may prove just as important as the models themselves.
Looking ahead, Satyam shares his perspective on how the role of engineers will evolve as AI infrastructure becomes more specialized and energy-aware. Foundational engineering skills remain essential, but the next generation of engineers will also need to think in terms of entire systems, combining software, hardware, and AI tools to build more efficient computing environments.
As AI continues to move from research labs into everyday products and services, are organizations prepared for the infrastructure shift that comes with an inference-driven future? And could efficiency, rather than raw computing power, become the defining metric of the next phase of the AI race?