Six Five Media
Six Five Media is a leading producer of professional video content, specifically crafted to elevate leading tech companies, their products, and their executives among enterprise customers and industry peers. As a joint venture between The Futurum Group and Moor Insights & Strategy, Six Five Media harnesses the expertise of top-ranked industry analysts and influential hosts to ensure its clients' messages resonate effectively in the market. Learn more at sixfivemedia.com.
Inside the Memory Tech Powering Today’s AI and HPC Workloads
01/30/2026
AI and HPC are outgrowing yesterday’s memory architectures, and the next performance breakthroughs won’t come from GPUs alone. How are memory and storage innovations reshaping how data centers scale for the AI era? From SC25, host , Global Technology Advisor at , is joined by 's , Vice President and General Manager, AI Solutions for Micron’s Cloud Memory Business Unit, for a conversation on the advanced memory technologies driving efficiency, bandwidth, and scalability for AI and HPC workloads. The discussion brings practical insights into the real-world implications of memory bottlenecks and the emerging architectures and interconnects shaping next-gen data center performance. What evolving strategies around composable infrastructure are helping data center architects plan for memory-intensive computing?

Key Takeaways Include:

🔹 Memory bottlenecks in AI and HPC: Why memory architectures are now the critical limiting factor in AI/HPC system performance and scaling.
🔹 Emerging solutions and trade-offs: How higher bandwidth, increased capacity, new module designs, and innovative interconnects are addressing performance needs, and the challenges and roadblocks system designers still face.
🔹 Modular, composable, and CXL-driven infrastructure: The practical benefits of modular and composable memory, and how technologies like CXL are enabling more dynamic, agile, and efficient memory use for modern workloads.
🔹 Real-world benchmarking & ecosystem collaboration: Insights from industry-standard benchmarks and testing with partners, revealing how latency and bandwidth behave under authentic AI/HPC conditions.

Learn more at . Watch the full video at , and be sure to subscribe, so you never miss an episode.
The View From Davos | IBM on Turning AI Spend Into Real Enterprise Value
01/27/2026
The AI ROI gap isn’t between ambition and technology. It’s between ambition and execution. From Davos, and are with ’s , Senior Vice President, Software and Chief Commercial Officer, to examine the disconnect many enterprises are now facing. AI investment is accelerating, expectations are high, yet measurable business impact remains frustratingly uneven. Rob shifts the lens away from technology gaps and toward application discipline. The biggest gains, he argues, are rarely headline-grabbing. They show up in operational workflows that reduce friction, compress cycle times, and quietly lower costs. He also points to resilience, not novelty, as the trait that increasingly defines successful enterprise AI, especially as organizations move from experimentation to systems they must rely on every day.

Key Takeaways Include:

🔷 AI ROI is driven by execution, not imagination: Enterprises rarely fail due to a lack of ideas. They struggle to operationalize AI because processes, incentives, and ownership models are not designed for change at scale.
🔷 Boring workflows deliver outsized returns: Repetitive, operational tasks across finance, HR, supply chain, and customer support often provide faster and more defensible ROI than high-visibility, experimental use cases.
🔷 Productivity gains matter more than novelty: Reducing costs and accelerating cycle times are the most reliable early indicators that AI is delivering real business value.
🔷 Culture determines whether AI scales: AI adoption requires top-down commitment and tolerance for disruption. Without leadership-driven change management, even strong technology investments underperform.
🔷 Resilience is returning as a strategic priority: As AI becomes embedded in core systems, reliability, security, and operational stability move from secondary concerns to competitive differentiators.
The View from Davos with Antonio Neri
01/27/2026
AI progress is real. Scaling it responsibly is the true stress test. From Davos, and sit down with President and CEO to examine how enterprises are navigating the next phase of AI. As momentum shifts from experimentation to deployment, the conversation centers on where AI should run, how risk is created by over-concentration, and why distributed architectures are becoming essential to resilience and competitiveness. Rather than framing sovereignty and global scale as opposing forces, Antonio outlines how enterprises and governments can reconcile both through hybrid and edge-centric AI strategies. Latency, regulation, energy constraints, and data gravity are no longer theoretical considerations. They are shaping real infrastructure decisions today, and the leaders who focus on orchestration, not ownership, are best positioned to turn AI investment into a durable advantage.

Key Takeaways Include:

🔷 AI is inherently hybrid: Enterprise AI increasingly spans edge, regional, and hyperscale environments, driven by data sensitivity, latency requirements, and cost realities.
🔷 Concentration creates risk: Over-reliance on centralized cloud and AI capacity exposes enterprises and regions to strategic, operational, and geopolitical vulnerabilities.
🔷 Sovereignty and competitiveness can coexist: Distributed architectures allow organizations to protect local data and control while still leveraging global innovation.
🔷 Inference is driving the shift: Growth in inferencing signals that AI models are moving into real workloads, making placement and orchestration critical.
🔷 Platforms matter more than stacks: Leaders succeeding with AI focus on unified platforms that manage complexity, not on owning every layer themselves.

Learn more about ’s collaboration with NVIDIA and how distributed AI architectures are taking shape.
Modern Private Cloud: Balancing Operational Agility with Data Sovereignty
01/27/2026
AI is forcing a hard reset on cloud strategy in Europe. Sovereignty, locality, and private cloud are no longer edge cases—they’re becoming core to how enterprises scale AI without new dependencies. In this conversation, , Vice President and Practice Lead, AI at The Futurum Group, is joined by , Chief Revenue Officer at , to examine why sovereign and hyper-local cloud models are gaining momentum across Europe. As AI workloads move into production, control, locality, and automation are becoming board-level infrastructure decisions rather than a policy checkbox. Regulatory pressure, geopolitical risk, and AI-driven workloads are forcing leaders to answer harder questions: where data lives, who controls it, and how systems scale without creating new dependencies. Drawing on VMware Cloud Foundation strategy and evoila’s hyper-local deployment model, our guests share how enterprises are modernizing private cloud environments without sacrificing agility. As AI inference, RAG, and regulated workloads move into production, they underscore that infrastructure decisions made today will directly shape resilience, compliance, and long-term flexibility.

Key Takeaways:

🔷 Sovereignty is about control, not just location: Enterprises want confidence they can keep operating, keep ownership of data, and avoid external “on/off switch” dependencies.
🔷 VCF is becoming the private cloud default stack: Standardized deployment and automation reduce the time, complexity, and “multi-year-project” drag that defined older private cloud builds.
🔷 Hyperlocal private cloud changes the provider equation: Local proximity plus workload mobility across compatible environments creates choice without sacrificing compliance-led architectures.
🔷 Ecosystem growth will likely come with consolidation: Demand is rising, but not every provider can meet the required operational bar.
🔷 Repatriation is a strategic reset: Economics, sovereignty, and private AI needs are converging—hybrid remains real, but private is becoming central.

Learn more at and .
The View from Davos with Qualcomm’s Nakul Duggal
01/26/2026
Edge AI, Physical Intelligence, and the Path to Scalable Automation

AI conversations at Davos often start in the data center. This one doesn’t stay there. From Davos, and sit down with , EVP and Group GM for Automotive, Industrial, Embedded IoT, and Robotics at Qualcomm, to unpack why the next phase of AI scale depends less on model size and more on where intelligence runs. The discussion moves beyond cloud-centric narratives to focus on Edge AI as a system-level requirement, not an architectural afterthought. Duggal explains why real-world AI, from autonomous vehicles to robotics and industrial automation, demands real-time execution, deterministic latency, safety guarantees, and extreme energy efficiency. These constraints fundamentally change how AI must be designed, deployed, and monetized. As AI becomes physical, the conversation highlights why robotics, industrial platforms, and intelligent machines are emerging as the next major growth vector. Qualcomm’s evolution from mobile-first innovation to automotive, industrial, and robotics leadership illustrates how edge-native design principles, mixed-precision compute, and ecosystem enablement are becoming decisive advantages for scaling AI outside the data center.

Key Takeaways Include:

🔷 Edge AI is execution, not experimentation: AI systems that interact with the physical world must operate in real time, with safety, latency, and reliability baked in. This shifts the AI conversation from training to deployment discipline.
🔷 Physical AI introduces non-negotiable constraints: Power, cost, thermal limits, and deterministic performance define what can scale. These constraints favor architectures designed for efficiency over brute-force compute.
🔷 Robotics is accelerating faster than expected: Advances in model distillation and physical AI are compressing timelines. Robotics is moving from research to deployment faster than most industries anticipated.
🔷 Industrial AI requires ecosystem enablement: Scaling AI at the edge depends on developers, partners, and platforms that can prototype, deploy, and iterate quickly across vertical-specific use cases.
🔷 The next AI wave is hybrid by necessity: Data centers and edge systems must work together. Trillions in AI-driven value depend on connecting intelligence across cloud, network, and device layers.
The View from Davos with MBZUAI’s Eric Xing
01/24/2026
As AI research pushes beyond pattern recognition, foundational questions are resurfacing about what progress should actually look like. Growth for growth’s sake alone is no longer a sufficient answer. From Davos, sits down with , President of , to challenge the assumption that bigger models alone will carry AI forward. The focus shifts to why real progress depends on systems that can reason, plan, and interact with the world, not just absorb more data. Xing also presses on the stakes for academia as AI becomes entangled with geopolitics, regulation, and national strategy, arguing that universities now sit on the front line, defending open research, reshaping education for an AI-native era, and deciding how innovation advances with intent rather than inertia.

Key Takeaways Include:

🔷 World models signal a shift in AI research priorities: Progress toward more general intelligence may require systems that model the world and can reason about it, rather than simply scaling language models.
🔷 Today’s AI remains fundamentally limited: Despite rapid advances, current systems still lack the planning, reasoning, and adaptability that define more general forms of intelligence.
🔷 Open science is under pressure: Rising geopolitical tensions around AI sovereignty and compute are reshaping how universities and research institutions operate globally.
🔷 Responsible acceleration is becoming a leadership challenge: Balancing innovation speed with societal impact now requires deliberate choices, not default momentum.
🔷 Higher education must adapt quickly: Universities face urgent decisions about what to stop teaching and what to prioritize in preparing students for an AI-native decade.

Learn more at .
Innovation to Product: How AWS Goes from Science to Service
01/23/2026
How does AWS turn groundbreaking scientific research into cutting-edge cloud services powering global businesses? From AWS re:Invent 2025, host is joined by 's , VP, Agentic AI, for a conversation on the innovation process at AWS. Their discussion provides insight into the “AWS innovation engine” and how AWS transforms research into market-ready services.

Key Takeaways Include:

🔹 Concept to Launch: Swami outlines the rapid journey of Amazon Quick Suite from a research idea to a customer-facing service.
🔹 Innovation Framework: Learn how AWS evaluates new research and identifies which scientific advances are primed to scale into impactful cloud products.
🔹 Customer-centric Development: Insights on balancing bold, long-term innovation against immediate needs and value delivery to AWS’s vast global customer base.
🔹 Real-world Impact: Swami reflects on the innovations he’s most proud of, offering his personal perspective on meaningful technological progress.

Learn more at . Watch the full video at sixfivemedia.com, and be sure to subscribe, so you never miss an episode.
The View from Davos with Wedbush’s Daniel Ives
01/23/2026
AI adoption is broadening, use cases are becoming more physical, and the path for technology-led growth is long. From Davos, catches up with , Managing Director and Global Head of Technology Research at , to take the pulse of a market moving into the next phase of the AI cycle. Set against geopolitics, policy debates, and global competition, the conversation looks past the headlines to what is actually changing. Ives explains why the AI narrative is moving toward enterprise software, modernization, and ROI, and why this moment feels less like a bubble and more like the early stages of a multi-year buildout. From hyperscalers and cybersecurity to physical AI and robotics, confidence is building that monetization is beginning to align with investment. Rather than dwelling on short-term valuation swings, focus should remain on structural drivers, enterprise data, collaboration between AI labs and businesses, and the infrastructure required to support what comes next.

Key Takeaways Include:

🔷 Enterprise AI is entering its monetization phase: The conversation points to software modernization, cybersecurity, and enterprise platforms as the next major drivers of AI value creation.
🔷 AI investment is part of a long build cycle: Ives frames the current moment as year three of an eight-to-ten-year transformation, not a late-cycle peak.
🔷 Physical AI is the next frontier: Robotics, autonomy, and real-world AI applications are emerging as the next multi-trillion-dollar opportunity.
🔷 Collaboration unlocks ROI: The combination of advanced models from AI labs and high-value enterprise data is where scalable impact begins.
🔷 Geopolitics and energy matter: Semiconductor supply chains, energy access, and national competitiveness are increasingly intertwined with AI leadership.

Learn more about ’ perspective on enterprise AI and technology markets. Subscribe to our so you never miss an episode.
The View from Davos with Check Point CEO Nadav Zafrir
01/23/2026
AI is moving fast. Security has to move first. From Davos, sits down with , CEO of , to examine why cybersecurity has become a foundational requirement for AI transformation, not a downstream consideration. As geopolitical tension, rapid AI adoption, and enterprise transformation collide, the conversation centers on a simple reality: AI is already operating in the physical world. Autonomous vehicles, healthcare systems, financial infrastructure, and agent-driven workflows are operating live. That shift dramatically expands the attack surface, compresses response windows, and raises the cost of failure. Zafrir outlines why securing AI requires rethinking security architecture from the ground up. Prevention-first strategies, ecosystem collaboration, and AI-powered defense are no longer optional. As enterprises race to remain relevant, security becomes the gating factor between ambition and safe, scalable execution.

Key Takeaways Include:

🔷 AI security is not a future problem: Agents are already driving cars, handling medical workflows, and automating operations, which makes AI security an immediate, real-world requirement.
🔷 Attackers move faster than defenders by default: Enterprises must re-evaluate their existing security posture, account for new AI-driven attack surfaces, and assume asymmetry in threat evolution.
🔷 Cybersecurity must be built into AI by design: Securing AI requires prevention-first architectures, not reactive detection layered on after deployment.
🔷 AI must defend AI: Enterprises increasingly need to use AI to secure their own systems, accelerating response and reducing noise at scale.
🔷 Ecosystems matter more than standalone tools: No single vendor or platform can solve AI security alone. Collaboration across customers, partners, and vendors is essential.

Learn more about how Check Point is securing AI at scale at . Watch the full video at , and be sure to subscribe to our so you never miss an episode.
The Main Scoop Episode 39: Fostering Learning Agility and Curiosity - How to Empower Today’s Developers
01/23/2026
How do enterprises responsibly modernize, optimizing developer experience (DevX) with the latest tooling and AI, while maintaining the security and reliability of mission-critical systems? On this episode of The Main Scoop, hosts , SVP & GM, Mainframe Software Division, Broadcom, and , CEO and Chief Analyst, Futurum, are joined by Group’s , VP and Engineering Practice Owner, and DevSecOps Executive Product Owner and Technology Lead, to explore how enterprises like TD Bank are empowering developers through modern AI tools, evolving culture, and practical modernization while meeting the security and governance demands of regulated environments. The conversation ties developer experience priorities such as reliability, modernization, and career growth directly to business impact, showing why technology and culture must evolve together in enterprise IT.

Key Takeaways Include:

🔹 Modernizing core systems responsibly: Developers are evolving mission-critical platforms—like the mainframe—while maintaining the security, reliability, and resilience enterprises depend on.
🔹 AI and modern tooling as DevX accelerators: Generative AI and contemporary development tools are enhancing productivity, improving technical depth, and helping teams modernize without sacrificing control.
🔹 Culture, curiosity, and career growth: Fostering learning agility, clear career paths, and a supportive culture is critical to retaining talent and enabling sustained innovation in hybrid enterprise environments.

Watch the full video at , and be sure to subscribe, so you never miss an episode.
The View from Davos with Cisco’s Jeetu Patel
01/22/2026
AI is advancing fast. The real friction is in scaling the systems behind it. From Davos, and sit down with , President and Chief Product Officer at , to talk about what changes when AI shifts from experimentation into production. The focus is not on whether AI works, but on whether networks, security, and data systems are ready to support AI operating at machine speed. Jeetu breaks down three constraints that now define enterprise AI scale: infrastructure limits around power, compute, and bandwidth; a growing trust gap as AI systems become non-deterministic; and a widening data gap as organizations exhaust publicly available training data and turn to machine and synthetic sources. Together, these pressures are reshaping how enterprises think about networking, security, and observability as foundational AI capabilities.

Key Takeaways Include:

🔷 AI scale is constrained by infrastructure, not imagination: Power availability, network bandwidth, and compute distribution now set the ceiling for what AI systems can realistically deliver.
🔷 Trust and security are prerequisites, not add-ons: As AI systems become non-deterministic, enterprises must secure both the network and the AI itself to enable adoption.
🔷 Data strategy is becoming a limiting factor: Enterprises are running out of usable public data, increasing the importance of machine data, observability, and correlation at scale.
🔷 Networking is shifting from scale-out to scale-across: Connecting AI clusters across locations is becoming essential as power and capacity fragment geographically.
🔷 Edge inferencing is no longer optional: Latency, autonomy, and operational needs are pushing more AI workloads closer to where data is created.

Learn more at . Subscribe to our so you never miss an episode.
The View from Davos with Celonis Chief Trust Officer Vanessa Candela
01/22/2026
AI ambition is easy to talk about, but making AI reliable inside real enterprises is much harder. From Davos, is with , Chief Trust Officer at , to talk about why trust, process intelligence, and clean enterprise data now sit at the center of AI progress. As generative AI spreads across regulated industries and public-sector environments, the risks of layering AI on top of broken processes are becoming impossible to ignore. Rather than treating AI as a standalone capability, Vanessa explains why enterprise AI only works when it understands how a business actually operates. Process intelligence gives AI context. Without it, models generate generic answers, amplify inefficiencies, and create compliance risk. With it, AI becomes faster, more accurate, and meaningfully useful inside real workflows.

Key Takeaways:

🔷 There is no AI without process intelligence: AI systems need context to deliver value. Without clean, well-understood business processes, AI outputs remain generic, unreliable, or outright wrong.
🔷 Trust starts with data and governance: Enterprise AI depends on accurate data, secure systems, and compliance-ready platforms, especially in regulated and public-sector environments.
🔷 Agentic AI amplifies what already exists: Autonomous agents do not fix broken workflows. They accelerate them. Organizations must address inefficiencies before scaling AI-driven automation.
🔷 Process optimization unlocks real ROI: Enterprises that audit and simplify processes before deploying AI see faster cycles, lower costs, and measurable business outcomes.

Learn more at . Subscribe to our so you never miss an episode.
The View from Davos with Ericsson’s Niklas Heuveldop and Åsa Tamsons
01/22/2026
The AI chatter in Davos is loud. The hard part is still quiet: getting the physical world connected fast enough for AI to matter. joins ’s and for a grounded look at what enterprise and agentic AI actually require to deliver ROI. In ports, airports, mines, factories, and other industrial environments, 5G and advanced connectivity are no longer optional. They are the control panel for real-time automation. The discussion moves quickly from ambition to execution, covering why large-scale industry transformation has lagged, what China’s momentum reveals about adoption at scale, and why enterprises keep circling back to the same demand: trusted, reliable connectivity that’s as simple to consume as the cloud. The goals are clear: stop optimizing for tools. Build the stack. Connect the devices. Make the data usable. Then move, fast.

Key Takeaways Include:

🔷 Enterprise AI needs real-time connectivity to leave the demo stage: Sensors, machines, and edge devices create the workload, and networks determine whether that workload can run reliably at business speed.
🔷 5G was built for “many device types,” not just smartphones: Industrial AI depends on connectivity that is smart, secure, and performance-tuned to device requirements.
🔷 Physical AI success is a stack problem, not a single-tool problem: Cloud, connectivity, compute, and model access have to work together, then teams can iterate on what creates value.
🔷 Adoption and culture decide ROI: Faster feedback loops and “learn fast” execution matter more than steering-committee perfection when the transformation window is shrinking.
🔷 Edge + wide-area is the real operating environment: Many use cases start inside factories and then extend beyond them, which raises the bar for consistent networking and applications across contexts.

Watch the full video at , and subscribe to our YouTube channel so you never miss an episode.
The View from Davos with Infosys’s Anand Swaminathan
01/21/2026
How are global business leaders balancing AI innovation with economic and geopolitical uncertainty in 2026? From Davos, host is joined by 's , EVP, for a conversation on Infosys’s perspective from Davos, highlighting cautious optimism in business, the maturing conversation around AI, and the company’s key role in enabling enterprise-scale AI outcomes. The discussion centers on AI in the enterprise, shifting technology spending, the impact of agentic and sovereign AI, workforce transformation, and the growing significance of power and energy constraints.

Key Takeaways Include:

🔹 AI Conversation Has Matured From “What Is AI?” to “Where’s the ROI?”: Cautious optimism prevails in Davos, and the focus is on how enterprises and governments can deploy AI at scale with the goal of saving money or generating additional revenue. ROI remains unresolved for many, making execution and absorption of AI the real differentiator.
🔹 Infosys’s Competitive Edge: Success in AI relies on domain expertise and operational rigor, not just technology. Deployment cycles are shrinking dramatically, benefiting customers but raising execution pressure. Infosys’s value lies in bringing customer context, industry knowledge, and operational rigor to AI deployments.
🔹 Shifting Spend: Traditional IT spending is declining, while AI-driven business outcomes such as supply chain resilience, faster product innovation cycles, and customer retention are driving new investments.
🔹 Enterprise Challenges and Opportunities: As agent ecosystems expand, security, resilience, and governance become critical. Infosys is actively working with cybersecurity partners to ensure responsible, sustainable AI deployment.
🔹 Sovereign AI Is Getting Real: There is growing concern over data residency, geopolitical risk, and technology autonomy. Countries want to ensure they are not “unplugged” from critical AI infrastructure, and telcos are becoming central players in sovereign AI strategies.
🔹 Social Responsibility and Inclusion Matter: AI has the potential to dramatically expand access to education, healthcare, and opportunity. Examples include medical AI agents delivering expert-level guidance in underserved regions. While the transition will be uneven, the long-term outcome is broad-based uplift, not concentration of advantage.

Watch the full video at , and be sure to subscribe so you never miss an episode.
The View from Davos with Workiva’s Mike Rost
01/21/2026
From Davos, Six Five Media examines how AI is reshaping enterprise software through the lens of trust, governance, and execution. is with , Chief Strategy Officer at , to explore what changes when AI moves from literacy to fluency inside regulated, zero-error environments. Rather than framing AI as a threat to SaaS, the discussion surfaces a more grounded reality: AI depends on trusted, validated data and disciplined systems. As speculation grows around AI “replacing” enterprise software, Workiva’s position is clear: platforms built on investor-grade data, governance, and deterministic workflows become more critical, not less, as AI adoption accelerates. They believe the focus should not be autonomy for autonomy’s sake, but productivity gains that organizations can trust at scale.

Key Takeaways Include:

🔷 AI fluency is replacing AI literacy: Enterprises are moving beyond experimentation toward embedding AI directly into core workflows and processes.
🔷 Trusted data is the real AI advantage: Investor-grade, validated data determines whether AI accelerates work or introduces unacceptable risk.
🔷 AI accelerates SaaS, it does not replace it: Regulated environments still require platforms that continuously adapt to compliance, reporting, and governance demands.
🔷 Governance defines the ceiling for AI adoption: In zero-error domains, speed without control creates more risk than value.
🔷 Productivity beats autonomy: Customers prioritize tools that augment judgment and reduce friction over fully autonomous systems.

Watch the full episode at
The View from Davos with IonQ’s CEO Niccolo de Masi
01/20/2026
Quantum computing isn’t sci-fi. It is a present-day security, infrastructure, and competitiveness issue. From Davos, sits down with , CEO of , to break down how quantum computing is moving out of academic labs and into the center of geopolitical and enterprise strategy. As fault-tolerant systems advance, timelines are compressing, pushing quantum security from an abstract risk into an immediate priority for governments, financial institutions, and global enterprises. Niccolo walks through how IonQ is building beyond compute into networking, sensing, and security, framing quantum as a platform rather than a feature. As nations and enterprises push toward sovereign systems, decisions around infrastructure control, power efficiency, and ecosystem openness are now shaping how, and how fast, quantum scales.

Key Takeaways Include:

🔷 Quantum security urgency is rising fast: As fault-tolerant quantum approaches reality, existing cryptographic infrastructure faces real exposure, pushing security upgrades higher on enterprise and government agendas.
🔷 Quantum is evolving into a platform: Computing alone is not enough. Networking, sensing, and security must operate together to deliver meaningful quantum advantage.
🔷 Sovereign systems are gaining momentum: Governments and large enterprises increasingly want control over where quantum systems run, how data is handled, and who owns the stack.
🔷 Energy efficiency changes the equation: Quantum systems offer meaningful advantages in power consumption compared to classical AI compute, showing promise for long-term infrastructure goals and reshaping planning.
🔷 Quantum value is already emerging: Optimization, materials science, drug discovery, and logistics are seeing early, practical benefits as quantum integrates with classical systems.
The View from Davos with Snowflake: From AI Ambition to Enterprise Impact
01/20/2026
From Davos, Switzerland, amidst the activity of the WEF, Patrick Moorhead and Daniel Newman are with Sridhar Ramaswamy, CEO of Snowflake, to examine why so many enterprise AI initiatives stall between strategy and execution. Nearly every organization has an AI roadmap, but far fewer have systems that run reliably in production. This conversation highlights where execution breaks down and why data architecture, performance, governance, and operational discipline now separate AI that scales from AI that stalls. As AI moves beyond assistive tools toward systems and agents that act on behalf of the business, trust, control, and data sovereignty shift from secondary concerns to core requirements. Key Takeaways Include: 🔷 Production exposes the real gaps: Most AI initiatives fail not at the model level, but when systems are expected to run reliably inside live enterprise environments. 🔷 Data architecture determines outcomes: Performance, accessibility, and integration of data now define what AI can realistically deliver at scale. 🔷 Governance becomes a frontline requirement: As AI systems take action on behalf of the business, trust, control, and accountability move from background concerns to operational necessities. 🔷 Agentic AI raises the bar: Systems that act autonomously require stronger foundations around data sovereignty, monitoring, and decision boundaries. 🔷 Execution discipline wins: Sustainable AI success comes from durable, enterprise-ready platforms, not from the number of pilots launched.
The View from Davos with Activate Consulting’s Michael J. Wolf
01/20/2026
AI optimism is high, but the real debate is whether this moment looks more like durable transformation or another overheated cycle. From the streets of Davos, is joined by Founder and CEO of , for a fast-paced conversation on why today’s AI buildout does not resemble the tech bubble of the early 2000s. Wolf explains why comparisons miss the mark, pointing to real demand, real revenue, and the real physical limits across compute, memory, networking, and energy that are shaping the next phase of AI growth. Rather than focusing on hype cycles, they look at the real pressure tests for AI and explore how AI is reshaping media, creativity, and content economics, and why human differentiation becomes more valuable, not less, as generative tools scale. Key Takeaways Include: 🔷 This is not a repeat of the 2000 tech bubble: Unlike speculative vendor financing cycles, today’s AI expansion is driven by real usage, real customers, and sustained infrastructure demand. 🔷 AI growth is constrained by physical limits: Compute, memory, networking, and especially energy availability are now the gating factors for scale, not model ambition. 🔷 Circular deals do not negate demand: Infrastructure investments will be utilized regardless of which AI platform dominates, reinforcing durability across the ecosystem. 🔷 Content abundance raises the value of authenticity: As AI-generated media explodes, originality, human creativity, and trusted voices become stronger differentiators. 🔷 Human relevance increases as automation scales: AI expands output, but trust, journalism, creativity, and leadership remain fundamentally human advantages.
The Data Problem: Building Infrastructure for the World’s Most Valuable Enterprise Asset
01/19/2026
Data isn’t scarce, but turning it into something usable, timely, and actionable at scale is still where most enterprises fall short. From Lenovo Tech World in Las Vegas, and are joined by , SVP, ISG Sales, ISO at , to focus on why enterprise data continues to fall short of its potential despite widespread AI investment. They break down where AI ambition is running ahead of infrastructure readiness and why legacy data architectures are struggling to keep pace. As data stretches across cloud, core, and edge environments, the idea that a single platform can handle every workload is breaking down, causing organizations to make tradeoffs as they move compute closer to where data is created, balancing responsiveness with governance, security, and cost. The message is clear: unlocking data value requires platforms built for distributed execution, not just centralized analytics. Key Takeaways Include: 🔷 Enterprise data remains underutilized: Most organizations collect massive amounts of data but lack the infrastructure to activate it effectively. 🔷 AI ambition often outpaces readiness: Infrastructure gaps, not algorithms, are slowing progress from pilots to production. 🔷 Single-environment strategies no longer scale: Data and AI workloads increasingly demand architectures that span cloud, core, and edge. 🔷 Execution determines advantage: Organizations that modernize data platforms early are better positioned to turn AI into sustained value. Watch the full video at , and be sure to subscribe to our so you never miss an episode.
Case Studies in Action: How Real-World Enterprises Are Deploying Hybrid AI Today
01/19/2026
Hybrid AI isn’t theoretical anymore. It’s running inside enterprises and delivering value. From Lenovo Tech World in Las Vegas, Six Five On The Road shifts the focus on Hybrid AI from promises to proof. and are joined by , Director of Strategic Product Management, ISG at Lenovo, to look at how large enterprises are actually moving AI from pilots into production. Rather than speculating about what AI could do, the session focuses on what enterprises are already putting into production. Robert points to where AI is delivering measurable value today, which use cases continue to scale, and where organizations still struggle when moving from pilots to live operations. As deployments spread across edge, data center, and cloud environments, architectural and operational choices emerge as the factors shaping performance, reliability, and time-to-value. Key Takeaways Include: 🔷 Production AI is already delivering value: Enterprises are seeing results where AI is tightly aligned to specific business outcomes, not broad experimentation. 🔷 Moving beyond pilots requires clarity: Organizations stall when objectives, ownership, and operating models are not clearly defined from the start. 🔷 Workload placement drives performance: Decisions across edge, data center, and cloud materially affect latency, cost, and reliability at scale. 🔷 Architecture choices compound over time: Early infrastructure decisions have an outsized impact on speed to value and long-term scalability. 🔷 Momentum is an organizational challenge: Teams that align technology, economics, and operations are better positioned to keep AI initiatives moving forward. Learn more at Watch the full video at and be sure to subscribe to our so you never miss an episode.
Scaling AI Infrastructure: Lessons from the Lenovo and Nscale Partnership
01/16/2026
As AI systems move out of pilot mode, infrastructure challenges become operational realities. and sit down with VP, CSP, ISG of and Senior Advisor at to break down what it actually takes to deploy and operate production-grade AI infrastructure. They focus on the gap between early experimentation and real-world execution, where systems grow denser, power demands rise, and deployment complexity accelerates. Drawing on lessons from the Lenovo–Nscale partnership, their discussion highlights how close collaboration between infrastructure providers and CSPs can reduce deployment risk, shorten timelines, and improve operational stability. As these advanced computing environments continue to scale, our guests underscore why architectures, processes, and partnerships must evolve continuously to keep pace with rising performance and efficiency demands. Key Takeaways Include: 🔹 Scaling AI infrastructure is harder than early pilots suggest: Moving from experimentation into production exposes gaps in power availability, cooling capacity, deployment processes, and operational maturity that pilots rarely reveal. 🔹 Execution discipline matters as much as platform choice: Successful deployments depend on coordinated delivery, repeatable processes, and operational rigor, not just hardware specifications. 🔹 Power and cooling are defining constraints at scale: High-density AI systems force organizations to rethink data center design, energy access, and thermal management strategies. 🔹 Platform partnerships reduce deployment risk: Collaboration between infrastructure providers and CSPs helps manage complexity across design, delivery, and ongoing operations. 🔹 Production AI requires continuous evolution: As systems become denser and more demanding, architectures and operating models must adapt to support long-term scalability and stability. Learn more at . Watch the full video at , and be sure to , so you never miss an episode.
Software-Defined Systems: The Architecture Decisions Shaping The Future Of How We Move, Live And Work
01/16/2026
What’s shaping the next generation of vehicles isn’t always visible from the driver’s seat. From CES 2026, sits down with , Director of Automotive Systems at , to explore how software-defined architectures are changing vehicle design. They dissect the systems that now decide how vehicles perform, scale, and stay safe, from semiconductors to real-time intelligence, and the constraints OEMs and Tier 1s face as software takes the lead. As vehicle lifecycles stretch and updates become continuous, the takeaway is straightforward. Platforms that can adapt to growing data, new AI workloads, and rising system complexity will define what scales next. Key Takeaways 🔷 Software-defined vehicles depend on strong architectural foundations: The most impactful automotive innovations now happen below the surface, where system architecture determines how safely, efficiently, and reliably vehicles can scale over time. 🔷 Edge intelligence is becoming essential: Real-time decision-making increasingly requires compute to move closer to where data is generated, reducing latency while enabling faster, more responsive systems. 🔷 Performance and efficiency must advance together: Automotive AI demands higher capability without sacrificing power efficiency, reliability, or functional safety, forcing tighter engineering tradeoffs across the system. 🔷 Architectural choices today shape longevity tomorrow: Decisions around zonal architectures, centralized compute, and sensor fusion directly influence how vehicles adapt to software updates, new workloads, and rising complexity over long lifecycles. Sponsored by Watch the full video at , and be sure to subscribe to our .
Infrastructure for the AI Era: From Experimentation to Execution
01/15/2026
AI doesn’t fail because of weak models. It fails when the infrastructure decisions underneath them weren’t built for scale. As AI investment accelerates, many organizations are discovering that ambition alone does not translate into results. The real challenge is infrastructure, and the growing gap between experimentation and execution is forcing IT leaders to rethink architecture, deployment models, and long-term strategy. In this episode of Six Five On The Road, hosts and are joined by , SVP, ISG Sales, ISO at , to examine why so many AI initiatives stall between pilots and production, and what it actually takes to build infrastructure that can support AI at scale. Key Takeaways Include: 🔹 From Pilots to Production: Many AI initiatives fail not because of models, but because the underlying infrastructure cannot support reliable, day-to-day execution. 🔹 Infrastructure as a Differentiator: Data flow, connectivity, and system architecture are becoming competitive advantages, not back-office concerns. 🔹 Hybrid by Necessity: AI workloads are increasingly distributed across cloud, data center, and edge, making architectural flexibility essential. 🔹 Avoiding Lock-In: Early infrastructure decisions can either enable long-term scale or introduce complexity that limits future options. 🔹 Execution Over Experimentation: Organizations that align infrastructure strategy with business outcomes are more likely to turn AI into a durable advantage. Learn more at . Watch the full video at , and be sure to subscribe to the Six Five Media so you never miss an episode.
Technology and Entertainment Collide at The Sphere in Las Vegas
01/15/2026
Entertainment and events at the Las Vegas Sphere test the limits of modern infrastructure. and speak with , Vice President and General Manager of ISG North America at , to examine what hosting Lenovo Tech World at The Sphere represents from a technology and infrastructure perspective, and what it reveals about the future of real-time, immersive systems. Rather than focusing on the visual spectacle of the venue, they focus on the operational and infrastructure requirements needed to deliver immersion consistently and without interruption, from real-time rendering at extreme resolution and intelligent content pipelines to latency sensitivity and the systems discipline required to support always-on live environments. They break down how the same demands appearing in entertainment are increasingly relevant across manufacturing, healthcare, digital twins, and scientific simulation, where performance, reliability, and responsiveness must coexist at scale. Key Takeaways Include: 🔹 Immersive experiences are redefining performance expectations: Scale, responsiveness, and consistency are becoming baseline requirements rather than differentiators. 🔹 Infrastructure, not content alone, determines feasibility: Real-time rendering, massive data movement, and intelligent workflows require predictable, high-throughput systems. 🔹 Operational reliability is foundational: In live environments, availability and stability matter as much as raw compute capability. 🔹 Challenges extend well beyond entertainment: Similar infrastructure demands are emerging in manufacturing, healthcare, digital twins, and scientific simulation. 🔹 Hybrid architectures support scale and control: Distributing workloads across edge, datacenter, and cloud environments allows performance to align with operational needs. Learn more at . Watch the full video at , and be sure to subscribe to our so you never miss an episode.
AI Inferencing at the Speed of Real Life
01/15/2026
Inferencing is where AI proves it can act, not just think. and sit down with , Vice President, Product Group, at , during Lenovo Tech World in Las Vegas, to focus on the moment AI leaves experimentation and enters live operations. As models mature, AI inference has become the layer that decides whether AI delivers real value or remains stuck in demos and pilots. Their discussion shifts from training to inferencing and into the realities of running AI at business speed, where latency, data movement, infrastructure readiness, and energy efficiency quickly surface as limiting factors. Bringing AI closer to where data is created is increasingly unavoidable, but inference across edge, data center, and cloud raises new demands for consistency, security, and governance. As inferencing becomes the dominant AI workload, organizations that build this capability early gain not just performance, but operational leverage. Key Takeaways: 🔷 Inferencing is where AI becomes operational: AI only creates impact when models can act reliably in real time, not just generate outputs in isolation. 🔷 Latency and data movement are the real bottlenecks: Once AI leaves centralized systems, response time and data flow define success more than model accuracy. 🔷 Edge inferencing is accelerating adoption: Bringing AI closer to data reduces delay but increases architectural and operational complexity. 🔷 Governance must scale with deployment: Security, consistency, and control become harder as inference spans environments. 🔷 Operational readiness determines winners: Enterprises that invest in inferencing infrastructure now are better positioned as AI becomes embedded everywhere. Watch the full video at and subscribe to our so you never miss an episode.
Cooling, Power, and Running AI in Production
01/15/2026
Once AI leaves the lab, power, heat, and efficiency decide what can actually run. From Tech World in Las Vegas, Six Five On The Road turns its attention to what changes when AI moves from experimentation into sustained production. As compute density rises and AI workloads become persistent, the conversation shifts toward the physical demands required to support them, including power delivery, thermal management, and operational efficiency. and are joined by Lenovo’s , Vice President, Product Group, ISG to examine how power availability, heat dissipation, and energy efficiency are now shaping enterprise AI deployment decisions. As higher-density systems push traditional air-cooled environments to their limits, organizations are rethinking how AI infrastructure is designed, what it costs to operate, and where it can realistically scale, with liquid cooling increasingly entering the conversation as a practical requirement rather than an edge case. Key Takeaways Include: 🔷 Production AI introduces physical constraints: Once AI runs continuously, power availability, heat, and efficiency shape what can be deployed and sustained. 🔷 Higher density changes system design: As AI workloads concentrate more compute in less space, traditional cooling approaches face growing limitations. 🔷 Energy efficiency impacts economics: Power and cooling are now major contributors to the total cost of ownership (TCO) for AI systems in production. 🔷 Liquid cooling is becoming a practical option: What was once limited to hyperscalers is increasingly relevant to enterprises planning for long-term AI growth. 🔷 Operational planning determines scalability: Organizations that account for power and thermal requirements early are better positioned to expand AI without disruption. Watch the full video at , and be sure to subscribe to our so you never miss an episode.
The New Backbone of Enterprise IT: Why is Hybrid AI the Next Compute Era?
01/15/2026
The AI era isn’t being built in one place, and enterprises are finally accepting that reality. From Lenovo Tech World in Las Vegas, Patrick Moorhead and Daniel Newman are with Flynn Maloy, Chief Marketing Officer for ISG at Lenovo, to examine why hybrid AI has become the operating model enterprises are embracing in practice. Their discussion moves past cloud ideology to focus on how organizations are distributing AI across on-prem, cloud, and edge environments as performance tradeoffs, governance requirements, and data gravity dictate where workloads can realistically run. Key Takeaways: 🔷 Hybrid AI is becoming the default, not the exception: Enterprises are learning that no single environment can meet every AI requirement across performance, cost, and governance. 🔷 Data location drives architecture decisions: The question of where data will live increasingly determines where AI workloads can run, reshaping infrastructure roadmaps and becoming as important as the models themselves. 🔷 On-prem, cloud, and edge must work together: Modern AI depends on coordination across environments, not isolated stacks. 🔷 Infrastructure choices today define flexibility tomorrow: Early architectural decisions shape how easily organizations can scale and adapt AI over time. Watch the full video at and be sure to subscribe to our and never miss an episode.
Operationalizing AI: How Inferencing Changes Enterprise AI Deployment at Scale
01/14/2026
Enterprise adoption is entering a more demanding phase, where intelligence is expected to perform consistently, not just demonstrate potential. and speak with , Vice President, Product Group, ISG at , about how inferencing connects infrastructure, architecture, and operations as enterprises scale intelligent systems beyond pilots. Their discussion looks at what it takes to run inference across distributed environments such as factories, retail locations, healthcare settings, and remote sites. As organizations encounter the limits of centralized strategies, factors like latency, data gravity, and operational complexity increasingly shape outcomes. Key Takeaways Include: 🔹 Inferencing marks the shift from experimentation to operations: Real-time execution, not model creation, determines whether enterprise initiatives deliver durable business value. 🔹 Centralized strategies are reaching practical limits: As data spreads across locations, latency, bandwidth, and coordination challenges expose weaknesses in cloud-only designs. 🔹 Operational complexity defines success at scale: Deployment discipline, monitoring, and lifecycle management are as critical as compute capability. 🔹 Distributed architectures are becoming foundational: Aligning inference closer to where data is generated improves responsiveness and reliability. 🔹 Execution separates leaders from laggards: Long-term advantage comes from embedding intelligence into daily operations, not from labeling projects as transformational. Learn more at . Watch the full video at , and be sure to subscribe to our so you never miss an episode.
How is Software Rewriting Automotive Engineering? A Conversation with Synopsys CEO Sassine Ghazi
01/14/2026
Six Five is at CES in Las Vegas, inside the Automotive Hall, where the shift is unmistakable. Cars are no longer engineered as machines. They’re being built as complex computing systems. and sit down with , CEO of , to unpack how automotive engineering is being reshaped as software, electronics, physics, and AI converge. For many automakers, the challenge is no longer packing in horsepower or sculpting sheet metal. It’s how to design, validate, and ship millions of lines of code safely, efficiently, and on schedule. The conversation looks at why modern vehicles increasingly behave like rolling computers and where things tend to break down as cars become more software-defined. Sassine explains why designing software, silicon, and physical systems in isolation no longer works, and how aligning these pieces early can improve timelines, cost, and safety. The lesson is straightforward. The future of automotive innovation will be written in software, and the winners will be the companies that learn to design it all together from the start. Key Takeaways: 🔷 Cars are becoming software-defined systems: Modern vehicles are increasingly governed by software, turning automotive engineering into a complex systems problem rather than a purely mechanical one. 🔷 Siloed design does not scale: Designing software, silicon, and physical systems separately creates downstream risk, delays, and cost overruns once vehicles move toward production. 🔷 AI is changing engineering workflows right now: Accelerated computing and AI-driven tools are already reshaping how vehicles are designed, tested, and validated through virtual prototyping and deeper ecosystem collaboration. 🔷 Virtual-first development is getting real: Automakers are shifting away from expensive physical prototypes toward software-driven simulation and validation, enabling faster iteration, fewer surprises, a clearer path from concept to production, and better overall outcomes. Learn more at . 
Watch the full video at , and be sure to subscribe to our .
HPC 2.0: How CIQ Is Simplifying Supercomputing for the AI Era – Take :05
01/14/2026
How are container-native workflows and security-focused strategies reshaping high-performance computing for AI workloads? From Supercomputing 2025, host , Global Technology Advisor at , is joined by ’s (Senior Director of Business Development) and (Senior HPC Engineer) for a conversation on how HPC 2.0 is simplifying supercomputing for the AI era. Guests share how CIQ is making high-performance computing more accessible through streamlined container-native workflows, enhanced security and compliance, and hybrid environments—all key elements in supporting the next wave of AI workloads. Key Takeaways Include: 🔹HPC 2.0 explained: Modernization of supercomputing infrastructure prioritizing simplicity, scalability, and adaptability for AI-driven workloads. 🔹Container-native workflows: How containerization accelerates deployment, improves reproducibility, and reduces administrative complexity in HPC environments. 🔹Security and compliance: CIQ’s strategies for addressing new security challenges and compliance demands as HPC environments grow more distributed and hybrid. 🔹Hybrid and cloud-native models: Why blending on-premises and cloud resources offers flexibility and optimal performance for evolving use cases. Learn more at . Watch the full video at sixfivemedia.com, and be sure to , so you never miss an episode.