Six Five Media
Six Five Media is a leading producer of professional video content, crafted to elevate top tech companies, their products, and their executives among enterprise customers and industry peers. As a joint venture between The Futurum Group and Moor Insights & Strategy, Six Five Media harnesses the expertise of top-ranked industry analysts and influential hosts to ensure its clients' messages resonate in the market. Learn more at sixfivemedia.com.
IBM Says Quantum Computing Is Closer Than You Think: The Race for Quantum Advantage
04/23/2026
IBM Says Quantum Computing Is Closer Than You Think: The Race for Quantum Advantage
Quantum is no longer theoretical. It’s entering the phase where it must prove real value. sits down with , Director of IBM Research, at Research in Yorktown to unpack what “quantum advantage” actually means and why the timeline is tightening.

Quantum is shifting from controlled experiments to real-world validation, where systems are no longer compared to simulations, but to actual data. This transition marks the beginning of quantum becoming part of the scientific method and, eventually, enterprise workflows. At the same time, the stakes are rising. From roadmap transparency to ecosystem development and post-quantum security, IBM is pushing to accelerate adoption while the window for preparation is already open.

Key Takeaways:
🔷 Quantum advantage is defined by real-world performance, not theoretical benchmarks
🔷 The shift from simulation to real data marks a turning point for quantum utility
🔷 Hybrid models combining quantum and classical systems will define early enterprise use
🔷 Post-quantum security is an immediate priority, not a future concern
🔷 Ecosystem development is critical to accelerating real-world quantum applications

The question of “advantage” is simple: when does quantum become cheaper, faster, or more accurate? It won’t happen overnight. It will be integrated step by step into real workflows.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40991990
Modernizing Manufacturing Without Disruption: How SMBs Move from Visibility to Autonomy
04/22/2026
Modernizing Manufacturing Without Disruption: How SMBs Move from Visibility to Autonomy
Manufacturing SMBs are under pressure to increase output, reduce downtime, and operate with fewer resources. The challenge isn’t access to technology, but rather turning existing data and systems into something that actually drives decisions. is joined by of to examine how AI is reshaping smart manufacturing and enabling SMBs to move beyond visibility into more autonomous operations.

The conversation focuses on execution. Many manufacturers already have data, but it’s trapped in silos and underutilized. The next step is operationalizing it: standardizing it at the edge, layering analytics, and applying AI to continuously improve performance. As Edmunds notes, the industry has moved from talking about Industry 4.0 to actively implementing it, with AI acting as the missing layer that enables real-time optimization and decision-making.

Key Takeaways Include:
🔹 AI is enabling the transition from smart to autonomous manufacturing. Manufacturers are moving beyond dashboards and visibility toward systems that can act, adapt, and improve without constant human intervention.
🔹 Data is abundant but underutilized across the plant floor. The primary challenge isn’t data collection, but integrating and operationalizing siloed data to drive efficiency and output.
🔹 Digital twins are becoming a core operational layer. Virtual models of equipment, processes, and workflows allow manufacturers to simulate, optimize, and scale without disrupting production.
🔹 Workforce constraints are accelerating automation adoption. With experienced workers nearing retirement, AI and automation are helping preserve institutional knowledge and maintain productivity.
🔹 Starting small with scalable infrastructure is critical. SMBs can begin with targeted use cases and build toward broader transformation without shutting down operations.
🔹 Efficiency and sustainability are converging. Reducing waste, energy consumption, and operational inefficiencies is increasingly tied directly to cost savings and competitiveness.

The core message: manufacturers that focus on operationalizing data and building scalable foundations will be better positioned to transition into more autonomous, resilient environments.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40982840
How Agentic AI Is Transforming Mainframe Workforce Training
04/22/2026
How Agentic AI Is Transforming Mainframe Workforce Training
AI is accelerating execution while tightening the margin for error. In mainframe environments, that shift is redefining the role of the practitioner: from operator to validator, from executor to outcome owner. On this episode of The Main Scoop, hosts and sit down with , CEO of , to explore how agentic AI is reshaping workforce expectations and why training is becoming a strategic control layer.

The conversation reframes AI not as a replacement for expertise, but as a force multiplier that increases the need for it. As AI becomes embedded in enterprise workflows, teams are no longer just completing tasks. They are interpreting outputs, validating decisions, and managing downstream consequences in real time. Darren highlights a growing disconnect: organizations assume AI will fill skills gaps, but in reality, it amplifies the risks of shallow knowledge. Especially in mainframe environments, where reliability is non-negotiable, this creates a new operational pressure: speed without sacrificing precision.

The discussion also examines how training itself is evolving. AI-enabled content creation, adaptive learning models, and faster delivery mechanisms are expanding access. But without structure, scale introduces inconsistency. Organizations that treat training as a continuous, applied discipline, not a one-time event, are better positioned to deploy AI responsibly and confidently.

Key Takeaways:
🔷 The mainframe role is shifting toward oversight, validation, and outcome ownership as AI becomes embedded in workflows
🔷 Foundational expertise remains critical to interpret, challenge, and guide AI-generated outputs
🔷 Assuming AI will compensate for skill gaps introduces operational risk in high-precision environments
🔷 Structured, organization-wide training programs are essential for consistent and responsible AI adoption
🔷 AI is improving how training is delivered, but human judgment remains central to how it is applied

AI is not reducing the need for expertise; it is raising the bar for it. Watch the full episode at sixfivemedia.com and subscribe for more conversations shaping enterprise technology.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40965155
Six Five Connected: How Dell Is Rebuilding the Enterprise PC for AI
04/10/2026
Six Five Connected: How Dell Is Rebuilding the Enterprise PC for AI
AI is no longer confined to the datacenter. It is reshaping how work gets done across every device in the enterprise. From Dell’s Client Solutions Interactive Lab in Austin, Six Five Connected host brings together leaders across Dell’s Commercial Client Solutions Group to explore how enterprise PCs and workstations are evolving for the AI era. Hear from , COO and Vice Chairman of Dell Technologies, , President of Dell’s Commercial Client Solutions Group, , Commercial Notebooks and Education Lead, , Commercial Workstations and Rugged Lead, and , Director of Industrial Design Engineering at Dell Technologies.

As workloads become more complex and distributed, devices are shifting from endpoints to active participants in the AI stack. outlines a broader strategy built around portfolio clarity, engineering execution, and workload-driven design, where commercial PCs and workstations are aligned to how work actually happens. Workstations, in particular, are taking on a larger role in AI development, simulation, and advanced creation workflows, acting as a bridge between local experimentation and datacenter-scale compute. The result is a more connected vision for enterprise computing, one where devices, workflows, and infrastructure operate together as a unified, AI-ready ecosystem.

Key Takeaways Include:
🔹 Why enterprise devices are becoming a more important layer in the AI stack
🔹 How Dell is redesigning its commercial PC and workstation portfolio for modern workloads
🔹 Where workstations fit between local experimentation and datacenter-scale compute
🔹 How engineering, thermals, modularity, and mobility are shaping next-generation systems
🔹 Why organizations need more flexible, workload-aligned infrastructure to support AI adoption
🔹 How Dell is connecting endpoint strategy to a broader enterprise AI vision

▶ Explore Dell’s Pro and Pro Precision portfolio for the AI era:
▶ Watch more Six Five Connected coverage at sixfivemedia.com and subscribe on YouTube for the full series:
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40612040
AI Beyond the Pilot: How Freshworks Is Delivering Real Outcomes in IT Service Operations
04/09/2026
AI Beyond the Pilot: How Freshworks Is Delivering Real Outcomes in IT Service Operations
AI is no longer stuck in pilot mode, but who’s actually driving real outcomes in IT service operations? sits down with , CEO and President of , to talk about what it takes to operationalize AI inside IT service operations environments and why some organizations are pulling ahead while others are still experimenting.

Freshworks has been focused on what Woodside calls the “agile enterprise”: companies that are growing quickly and need systems that can keep up. In that environment, AI isn’t replacing existing platforms; it’s accelerating them.

Conversation Highlights:
🔹 Freshworks is targeting fast-growing, agile enterprises that need flexible, AI-enabled systems
🔹 AI is already embedded in real buying behavior, with thousands of customers paying for AI capabilities
🔹 Enterprises are using AI to shift support from a cost center to a revenue driver
🔹 Established platforms still hold structural advantages in data, security, and workflow depth
🔹 The market is moving toward hybrid pricing models that reflect how AI is actually consumed

One of the clearest takeaways is that the “SaaS reset” narrative is overstated. While new entrants are experimenting with lightweight builds, enterprise environments still require secure, production-ready systems with deep integrations and governance. As Woodside puts it, “AI isn’t replacing the system, it’s making the system more valuable.” Watch the full conversation at and subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40793430
From Earnings to Optics: Marvell’s AI-Driven Infrastructure Playbook
04/06/2026
From Earnings to Optics: Marvell’s AI-Driven Infrastructure Playbook
The conversation around AI infrastructure has been dominated by compute, but that’s only part of the story. is back with , President and COO of , at the company’s Silicon Valley headquarters to unpack what is actually driving the next phase of scale in AI, and why connectivity is quickly becoming the limiting factor.

Marvell is coming off a record year, with growth driven by accelerating hyperscaler investment and a surge in demand tied directly to AI infrastructure buildouts. But it’s becoming clear that compute alone does not define performance. As Koopmans explains, the growth in compute is creating an even faster expansion in the need to move data between systems, clusters, and even across data centers.

That shift is pushing networking, and more specifically optics, into a central role. As bandwidth requirements continue to climb, traditional electrical connections are hitting physical limits, making optical technologies essential for scaling modern data centers. Together, they explore how this transition is unfolding as a mix of approaches, architectures, and technologies working together across different parts of the infrastructure stack.

Key Takeaways Include:
🔹 Growth in AI compute is driving an even faster increase in connectivity requirements across infrastructure.
🔹 Hyperscaler CAPEX trends continue to accelerate, reinforcing long-term demand for AI infrastructure buildouts.
🔹 Optical technologies are becoming essential as electrical interconnects reach physical bandwidth limitations.
🔹 Data center architectures are diverging, with multiple approaches emerging to support different AI workloads.
🔹 Marvell is focusing its strategy on high-speed connectivity and targeted portfolio expansion to capture this shift.

Watch the full episode at sixfivemedia.com and subscribe to our for more analyst insights from the front lines of enterprise technology.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40749315
Rethinking Data Security, Governance, and Resilience for the Agentic Era
04/03/2026
Rethinking Data Security, Governance, and Resilience for the Agentic Era
AI is scaling faster than enterprises can secure the data behind it. At RSAC 2026, and sit down with , CEO of , and , President of Security and AI, to examine how data is becoming the defining layer of AI adoption.

As enterprises push toward agentic workflows, unstructured data is expanding both opportunity and risk. It is no longer just an input to AI systems. It is becoming the layer where outcomes are shaped and where exposure is created. That shift is forcing a change in how security is approached. The conversation moves beyond perimeter defenses and into questions of visibility, permissions, governance, and recovery at the data level. What emerges is a clearer picture of what it takes to scale AI in production without introducing systemic risk.

Specific challenges include:
🔹 Unstructured data is increasingly shaping both AI outcomes and enterprise risk exposure
🔹 The attack surface is moving closer to the data layer as AI systems interact directly with it
🔹 AI adoption is advancing faster than trust, governance, and control frameworks can keep up
🔹 Data hygiene and context become more critical as AI agents operate across systems at scale
🔹 Security, compliance, and identity signals are converging into a more unified operating model
🔹 Platform-level approaches are emerging to connect governance, resilience, and recovery

Veeam outlines how its platform strategy is evolving toward a unified control layer designed to support faster innovation while maintaining control over data, access, and recovery. The implication is straightforward: AI does not scale safely unless the data behind it is trusted, governed, and recoverable. Watch the full conversation at sixfivemedia.com and subscribe to our YouTube channel so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40726965
How Autonomous IT Is Redefining Enterprise Operations
04/02/2026
How Autonomous IT Is Redefining Enterprise Operations
AI is pushing enterprise IT toward something new: systems that do not just detect issues, but act on them. At RSAC 2026 in San Francisco, and are with , CTO of , to break down how Autonomous IT is reshaping operations, security, and decision-making at scale.

As AI agents accelerate across enterprise environments, the challenge isn’t just visibility; it’s execution. Autonomous IT shifts decision-making to the endpoint, enabling real-time action across millions of devices while reducing operational complexity.

🔹 Autonomous IT enables real-time decision-making at the endpoint to reduce system complexity
🔹 Enterprise security risks are evolving, from external threats to internal AI-driven exposure
🔹 AI adoption is outpacing traditional governance and control models
🔹 Human roles are shifting from step-by-step control to high-level oversight
🔹 Foundational visibility across endpoints is critical to enabling trusted automation

Tanium’s approach focuses on turning real-time data into immediate, secure action. They’ve developed solutions like Guardian AI Spotlight, which allows organizations to identify how AI is being used across their environments and ensure it aligns with security policies without slowing innovation.

AI is forcing a shift: enterprises must move from reactive IT to systems that can act, adapt, and respond in real time, with humans guiding the system, not slowing it down. Watch the full conversation at and subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40715575
Resilience in the AI Era: Why Security, Data, and Recovery Must Converge
04/02/2026
Resilience in the AI Era: Why Security, Data, and Recovery Must Converge
AI is scaling faster than enterprise resilience models can keep up, and that gap is where risk multiplies. At RSAC 2026, hosts and sit down with leaders from cyber and AI resilience company . Chief Marketing Officer , and SVP, Global Partners and Channels, examine how cyber resilience is being redefined in the AI era.

As AI adoption accelerates, organizations are encountering a new class of risk driven by agentic workflows, expanding attack surfaces, and ungoverned data growth. Traditional security models built on layered tools and fragmented architectures are starting to fall short in this new environment. This shift highlights the need for resilience, not just protection, as the central operational priority.

In response, teams are adopting unified operating models like ResOps, where data protection, identity, security, and recovery converge into a continuous, automated system. Rather than relying on static backups or reactive security measures, enterprises must adopt real-time detection, response, and recovery strategies that align with the speed and scale of AI-driven systems.

Key Takeaways Include:
🔹 AI risk is now the primary barrier to deployment, as organizations struggle with compliance, data readiness, and governance gaps.
🔹 Fragmented security architectures are breaking under AI at scale, forcing a shift toward unified platforms and system-of-record approaches for resilience.
🔹 ResOps emerges as a new operating model, aligning teams, tools, and automation to enable continuous detection, protection, and recovery.
🔹 Partner ecosystems are evolving from integration layers to orchestration engines, enabling automated, policy-driven responses across security and data environments.
🔹 Resilience is becoming a board-level priority, driven by the direct impact of AI on revenue, risk exposure, and enterprise trust.
🔹 Security is moving toward convergence, where identity, data, cloud, and recovery systems operate as a single coordinated framework.
Watch the full conversation at and subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40712880
Reinventing the Commercial PC: Inside Dell’s New Pro Portfolio
03/31/2026
Reinventing the Commercial PC: Inside Dell’s New Pro Portfolio
Enterprise PCs are entering a new era as organizations balance rising performance demands from AI workloads, tighter IT budgets, and growing expectations for premium user experiences at work. Host sits down with , President of Dell’s Commercial Client Solutions Group, , Commercial Notebooks and Education Lead, and , Director of Industrial Design Engineering at Dell Technologies, for this installment of our series “The Next Generation of Dell PCs.” The group explores how Dell is reinventing its commercial PC portfolio for the next generation of enterprise computing.

Key insights include:
🔹 Why enterprise PCs are reaching a new inflection point
🔹 How Dell accelerated its roadmap and rebuilt the Dell Pro portfolio
🔹 How the Dell Pro lineup aligns systems with different enterprise workloads
🔹 The engineering innovations behind Dell’s largest design refresh in a decade
🔹 How modular architecture enables flexibility and faster innovation
🔹 What these improvements mean for productivity, durability, and IT management

From accelerated product roadmaps to modular architecture and design innovation, Dell’s new Dell Pro portfolio aims to deliver greater flexibility, stronger performance, and better long-term outcomes for both IT teams and end users. As enterprise computing evolves, devices are becoming more adaptable, more powerful, and more aligned with the needs of modern hybrid work. Explore the to see how Dell is redefining enterprise PCs for the AI era and modern hybrid work environments. Watch the full conversation at and subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40609715
Managing Intelligent Fleets: How HPE Is Redefining Compute Ops at Scale - Signal65
03/30/2026
Managing Intelligent Fleets: How HPE Is Redefining Compute Ops at Scale - Signal65
Enterprise compute has changed dramatically, but compute management is struggling to keep pace. and sit down with , Head of Product Management, Compute & Software at , to discuss the evolution of unified compute infrastructure and the management layer required to operate modern fleets.

As enterprises shift from single data centers to distributed fleets across colocation sites and the edge, complexity has increased exponentially. Network variance, fewer on-site personnel, tighter security requirements, and inconsistent environmental conditions demand a new operational model. This Signal65 conversation explores how HPE’s Compute Ops Management (COM) platform enables policy-driven, cloud-native fleet management at scale.

Key Takeaways:
🔹 Compute management is now fleet management: Enterprises are managing thousands of distributed systems, not single locations.
🔹 Cloud-native management scales elastically: COM eliminates the need for regional appliances and delivers centralized policy control.
🔹 Compute Copilot changes workflows: Natural language interaction replaces manual navigation, accelerating troubleshooting and compliance tasks.
🔹 Resilient edge systems are imperative: Stress testing of the DL145 Gen11 demonstrated consistent AI inference performance under extreme edge conditions.
🔹 Security cannot be an afterthought: Support for FIPS 140-3 Level 3 ensures hardened, enterprise-grade deployments.

Watch the full discussion and subscribe to our YouTube channel for more Signal65 infrastructure insights.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40675405
Powering the AI Workstation Era: Inside Dell’s New Pro Precision Portfolio
03/27/2026
Powering the AI Workstation Era: Inside Dell’s New Pro Precision Portfolio
AI development, engineering simulation, and advanced creative workflows are pushing more compute power closer to where professionals actually work. As part of our special series “The Next Generation of Dell PCs,” host sits down with , President of Dell’s Commercial Client Solutions Group, , Commercial Workstations and Rugged Lead, and Paul Doczy, Director of Industrial Design Engineering at Dell Technologies, to discuss how Dell is evolving the workstation category for the AI era.

The conversation explores how Dell’s Pro Precision portfolio is designed to support increasingly complex workloads across AI development, simulation, and professional creation, balancing scalability, mobility, and reliability for modern enterprise users.

Key Takeaways:
🔹 Where workstations fit within the modern AI development stack
🔹 How the Dell Pro Precision lineup scales from creator systems to AI development platforms
🔹 What industry-first expandability enables for tower workstation performance
🔹 How Dell balances mobility with workstation-class performance in modern devices
🔹 Where organizations will see the greatest impact from next-generation workstation platforms

As AI experimentation and simulation workloads continue to expand beyond the datacenter, workstations are becoming a critical bridge between local development environments and large-scale compute infrastructure. Explore the to see how workstation platforms are evolving for the next generation of professional computing. Watch the full conversation at and subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40609640
From Storage to Intelligence: Everpure on Redefining Data for the AI Era
03/25/2026
From Storage to Intelligence: Everpure on Redefining Data for the AI Era
AI is forcing enterprises to confront a hard reality: storing data is easy. Making it usable at scale is not. At NVIDIA GTC, and sit down with , Chief Technology & Growth Officer at , to unpack how the role of data infrastructure is changing as organizations move from experimentation to real AI deployment.

Everpure’s evolution from Pure Storage reflects a broader industry shift. Infrastructure is no longer just about storing data. It is about making that data usable, intelligent, and actionable across environments. That shift introduces new requirements. Speed alone is not enough. Enterprises need systems that are fast, reliable, and manageable, without creating fragmentation or forcing constant reinvestment as AI workloads evolve.

Everpure’s approach focuses on unifying infrastructure, management, and data intelligence into a single platform. This enables organizations to adapt to changing AI demands while maintaining performance, flexibility, and operational control. As AI becomes embedded across every enterprise function, the advantage will go to organizations that can activate their data, not just store it.

Key Takeaways Include:
🔹 AI is exposing data, not compute, as the primary enterprise bottleneck
🔹 Data platforms are evolving from storage to intelligence and management layers
🔹 Reliability and manageability are now as critical as performance
🔹 Enterprises need flexible infrastructure that can evolve with AI workloads
🔹 Unified platforms reduce complexity across hybrid and multi-environment deployments

Learn how platform is helping enterprises turn data into a competitive advantage for AI. Watch the full conversation at and subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40611540
Dell’s PC Strategy for the AI Era
03/25/2026
Dell’s PC Strategy for the AI Era
Enterprise computing is entering a new phase as AI workloads, hybrid work environments, and rising security requirements reshape what organizations expect from their devices. In this segment of our series “The Next Generation of Dell PCs,” hosts and sit down with , COO and Vice Chairman of Dell Technologies, and , President of Dell’s Commercial Client Solutions Group, to explore how Dell is evolving its commercial PC strategy for the AI era.

As innovation across silicon, AI workloads, and enterprise security accelerates, Dell is focusing on engineering-led design, an expanded commercial portfolio, and end-to-end workspace solutions that help organizations navigate the next generation of computing.

Key Insights:
🔹 Why the PC market is at a new inflection point
🔹 How Dell’s engineering-led design is shaping the next generation of commercial PCs
🔹 What the new Dell Pro and Dell Pro Precision portfolio brings to enterprise customers
🔹 How security, manageability, and hybrid work demands are redefining device expectations
🔹 Why Dell’s scale, supply chain leadership, and AI-enabled portfolio matter in today’s enterprise environment

As AI moves closer to the endpoint, devices are becoming more intelligent, more secure, and more critical to enterprise productivity. Want to see what AI-ready enterprise devices actually look like? Visit
Watch the full conversation at and subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40609540
Scaling AI at Inference: The Road to Agent-Driven ROI
03/24/2026
Scaling AI at Inference: The Road to Agent-Driven ROI
AI has moved beyond model training; inference is the new frontier. This Six Five Webcast features and , joined by , Co-founder & Chief Business Officer at , to explore how AI infrastructure is evolving from massive training clusters to production-grade inference systems built for agents, open-source models, and real ROI.

Nebius positions itself as an AI-specialized cloud, purpose-built to optimize inference workloads at scale. As AI shifts from research labs to product companies and enterprise agents, performance, cost efficiency, and system-level orchestration have become the defining battleground.

Key Takeaways:
🔹 The shift from training to inference: Why budgets, architectures, and customer priorities are changing.
🔹 The Nebius Token Factory: How full-stack optimization across hardware, software, and orchestration improves unit economics.
🔹 Open-source in the enterprise: Why flexibility, tunability, and cost control matter as much as frontier intelligence.
🔹 Agent-driven ROI: Why 2026 will demand measurable business outcomes, not just model benchmarks.
🔹 Performance beyond GPUs: How CPUs, workload orchestration, caching, quantization, and stack optimization combine to define success.

Nebius combines next-generation silicon access with a purpose-built cloud stack and white-glove technical support to help customers ship AI products that are fast, affordable, and compliant at scale. The next phase of AI won’t be defined by a model; it will be defined by who can run inference most efficiently. To learn more about how Nebius is scaling AI for real-world inference and agent-driven ROI, read about it here and explore the full solution:

Watch the full webcast at or subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40606895
The Main Scoop Ep. 41 | Preparing for Quantum Computing: This is Your Wake-up Call
03/24/2026
The Main Scoop Ep. 41 | Preparing for Quantum Computing: This is Your Wake-up Call
Quantum computing is not just a future opportunity. It is a present-day data protection challenge. On this episode of The Main Scoop, hosts , CEO and Chief Analyst at Futurum, and , SVP and GM of the Mainframe Software Division at , are joined by , Director and Individual Board Member at , to discuss Y2Q, post-quantum cryptography, and why organizations need to prepare now for a quantum-safe future.

While quantum computing brings meaningful innovation, it also creates new pressure on the systems that protect sensitive data. Encryption standards used today were not built for a world with quantum-level computing power, and organizations that wait too long to act may leave critical information exposed. Watch the full conversation as the group examines what Y2Q means for enterprise security, why post-quantum readiness matters now, and how businesses can begin assessing the cryptography embedded across their environments.

Key Takeaways:
🔹 Y2Q is the next major security challenge: Organizations need to prepare computing environments for a shift to quantum-safe algorithms before existing protections become inadequate.
🔹 Quantum creates both opportunity and risk: The same advances that make quantum computing powerful also introduce new vulnerabilities in cryptography and encryption.
🔹 Preparation starts with visibility: Enterprises must identify where encryption and cryptographic dependencies exist across systems, applications, and infrastructure.
🔹 Post-quantum readiness requires proactive planning: Waiting until quantum capabilities fully mature could leave sensitive data exposed to long-term security threats.
🔹 A transition strategy matters now: Businesses need a deliberate approach for moving toward quantum-safe cryptography without disrupting current operations.

The question is not whether quantum computing will affect enterprise security. It is whether organizations will be ready when it does. Subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40598515
info_outline
Fusion’s Future: Accelerating AI-Driven Superconductor Discovery
03/23/2026
Fusion’s Future: Accelerating AI-Driven Superconductor Discovery
AI isn’t just optimizing workflows; it’s now being used to accelerate scientific discovery. At NVIDIA GTC, and sit down with , VP of Marketing at , and , CEO of , to explore how AI infrastructure is enabling breakthroughs in superconductors, fusion energy, and advanced medical technologies. Quantum Formatics is using AI-driven simulation and high-throughput experimentation to develop next-gen superconducting materials, critical for scalable fusion energy and more accessible MRI systems. However, progress at this level requires more than models. It demands infrastructure capable of supporting diverse workloads, from simulation to inference, under strict performance, power, and security constraints. Through Lenovo’s infrastructure and its collaboration with Digital Realty’s DRIL environment, Quantum Formatics deployed a purpose-built, private AI cluster designed for flexibility, scalability, and cost predictability. This approach enables continuous experimentation without the constraints of variable cloud costs or infrastructure limitations. As AI advances, the organizations pushing the boundaries of science are the ones building systems that can support both complexity and scale. Key Takeaways 🔹 AI infrastructure is enabling breakthroughs in fusion energy and medical technology 🔹 Advanced workloads require hybrid environments combining CPU, GPU, and specialized compute 🔹 Power, cooling, and physical infrastructure are primary constraints at scale 🔹 Private AI environments provide cost predictability and reduce experimentation risk 🔹 Partnerships across infrastructure, data center, and AI ecosystems are critical to innovation 🔹 AI-driven simulation is accelerating materials science and deep tech discovery Subscribe to our for more insights from NVIDIA GTC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40595365
info_outline
From AI Momentum to Reality: HPE on Building the AI Factory
03/20/2026
From AI Momentum to Reality: HPE on Building the AI Factory
AI innovation is not slowing down, but enterprises are struggling to operationalize it. At NVIDIA GTC, sits down with , SVP and GM of HPC & AI Infrastructure Solutions at , to unpack what it actually takes to move from AI ambition to real-world deployment. As enterprises scale beyond experimentation, the challenge is shifting fast. Training, fine-tuning, and inference workloads are exposing cracks in infrastructure, forcing a rethink of how systems are designed, integrated, and operated. HPE’s answer is the “AI factory” model: tightly integrated infrastructure, software, and models built for repeatable, production-grade outcomes. Backed by decades of HPC expertise, from Cray supercomputing to liquid cooling, these systems are engineered for dense compute, converged workloads, and enterprise scale. At the same time, AI and simulation are merging, accelerating discovery while raising the bar for performance and flexibility. The focus is shifting from experimentation to execution, where time-to-value and uptime define success. Key Takeaways 🔹 AI deployment is significantly more complex than initial experimentation 🔹 The “AI factory” model is driving standardized, scalable infrastructure 🔹 HPC expertise, including liquid cooling, is foundational to modern AI systems 🔹 AI and simulation workloads are converging on shared infrastructure 🔹 Time to AI value and uptime are now primary enterprise metrics 🔹 Infrastructure strategy is becoming central to AI ROI Subscribe to our for more insights from NVIDIA GTC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40563075
info_outline
AI Gigafactories: From Design to First Token at Scale
03/20/2026
AI Gigafactories: From Design to First Token at Scale
The real limit to AI success is how fast you can deploy it. At NVIDIA GTC 2026 in San Jose, and sit down with of and of to unpack what it actually takes to move from AI system design to “first token” at gigafactory scale. As demand shifts from training models to powering real-world applications, infrastructure is becoming the critical path. The challenge is no longer designing systems; it’s deploying them at scale with the power, cooling, networking, and operational precision required to sustain performance. The group explores why many AI initiatives stall between planning and production, and how vertically integrated approaches, combining infrastructure design, deployment, and operations, are emerging as a competitive advantage. At this scale, every constraint matters, from land and energy to networking complexity and system reliability. As AI gigafactories take shape, the focus is shifting toward measurable outcomes: time-to-first-token, sustained performance, and the ability to deliver consistent value across enterprise and cloud environments. Key Takeaways: 🔹 Time-to-first-token is becoming a defining metric for AI ROI 🔹 Infrastructure bottlenecks span power, cooling, networking, and deployment timelines 🔹 Vertical integration is emerging as a key advantage in scaling AI infrastructure 🔹 Sustaining performance at scale is more difficult than achieving peak performance 🔹 Enterprise demand is driving AI adoption beyond model training 🔹 AI gigafactories are reshaping competitive dynamics across cloud and enterprise Subscribe to our for more insights from NVIDIA GTC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40561695
info_outline
Rethinking AI Infrastructure: Why Memory Now Drives Performance
03/20/2026
Rethinking AI Infrastructure: Why Memory Now Drives Performance
AI infrastructure is shifting fast, and compute is no longer the only performance driver. At NVIDIA GTC, hosts and sit down with , President of Samsung Semiconductors, and VP , to explore how memory, system design, and packaging are redefining AI performance. The conversation is focused on how AI architectures are evolving beyond GPU-centric models toward more specialized compute environments, where memory bandwidth, latency, and integration are becoming critical constraints. As organizations scale AI across training and inference, innovations like HBM4 and HBM4E, next-generation HBM roadmaps, and tighter system-level integration are reshaping how performance is achieved. Samsung’s perspective highlights how memory is moving from a supporting role to a central pillar of AI infrastructure design. Key Takeaways 🔹 AI infrastructure is shifting toward diverse, workload-specific architectures 🔹 Memory bandwidth and latency are emerging as primary performance bottlenecks 🔹 HBM4 and next-generation HBM innovations are enabling next-gen AI scalability 🔹 System-level integration across memory, logic, and packaging is becoming critical 🔹 Collaboration across the ecosystem is accelerating AI infrastructure evolution As AI systems scale, performance will be defined not just by compute, but by how effectively memory and system architecture are designed together. Subscribe to our for more insights from NVIDIA GTC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40559740
info_outline
The Inference Inflection: MiTAC on Building Flexible AI Infrastructure for Enterprise Scale
03/20/2026
The Inference Inflection: MiTAC on Building Flexible AI Infrastructure for Enterprise Scale
AI infrastructure is shifting from experimentation to production, and that shift is redefining how enterprise systems are built. At NVIDIA GTC 2026 in San Jose, sits down with , GM and VP of Sales & Business Development at , to explore what it takes to support AI at scale across training, inference, and emerging RAG workloads. As organizations move into production, infrastructure must remain flexible by design. Modular platforms like NVIDIA MGX are enabling standardized yet adaptable deployments, while orchestration and data movement are becoming critical to performance. AI systems are no longer just compute-bound; they’re constrained by how efficiently data moves and how effectively workloads are managed across complex environments. Through partnerships with Rafay and DDN, MiTAC is integrating orchestration and high-performance data pipelines directly into its infrastructure stack, helping enterprises simplify deployment and maximize system utilization. Key Takeaways 🔹 AI is reaching an inference inflection, shifting focus from experimentation to production 🔹 Flexible, modular infrastructure is required to support diverse AI workloads 🔹 Standardized platforms like NVIDIA MGX enable scalable, configurable deployments 🔹 Orchestration is critical for managing complex GPU and AI workloads at scale 🔹 Data movement and high-performance storage are central to AI system performance 🔹 Turnkey solutions are accelerating enterprise AI adoption and deployment speed For a closer look at how flexible infrastructure is enabling enterprise AI, visit Subscribe to our for more insights from NVIDIA GTC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40548470
info_outline
AI Inferencing Everywhere: Scaling Enterprise AI from Core to Edge
03/20/2026
AI Inferencing Everywhere: Scaling Enterprise AI from Core to Edge
AI is moving out of the lab and into real-world environments, and the challenge is no longer building models; it’s running them everywhere. At NVIDIA GTC, hosts and sit down with of Lenovo and of Akamai to explore how distributed infrastructure is enabling enterprise AI inferencing from the core to the edge. The conversation unpacks how enterprises are shifting from centralized AI architectures to highly distributed environments where performance, consistency, and security must hold across locations. Through Lenovo’s collaboration with Akamai, AI workloads are being deployed on infrastructure that spans data centers to edge locations, redefining how organizations think about cloud, latency, and execution. As new performance metrics like time-to-first-token gain importance, the discussion highlights how infrastructure decisions directly impact real-world AI outcomes. Key Takeaways 🔹 AI deployment is shifting from centralized models to distributed inferencing environments 🔹 Time-to-first-token is emerging as a critical performance metric for AI workloads 🔹 Unified infrastructure is key to maintaining consistency across core and edge 🔹 Lenovo and Akamai are redefining what “cloud” means in the AI era 🔹 Edge AI is enabling faster, more secure, real-time decision-making As AI scales, the ability to execute models consistently across distributed environments will define enterprise success. Watch the full conversation at and subscribe to our for more insights from NVIDIA GTC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40561205
info_outline
The Rise of the AI-Native Phone: From Assistants to Action
03/18/2026
The Rise of the AI-Native Phone: From Assistants to Action
At MWC Barcelona 2026, the conversation around AI phones moves beyond chatbots and voice assistants, toward AI-native devices capable of taking real action across apps and services. For Six Five In The Booth, host is with , Founder & CEO of , to explore what it will take for agentic, on-device AI to move from hype to everyday utility. As smartphones evolve into proactive, intelligent systems, trust, performance, and privacy become defining factors in adoption. Highlights include: 🔹 What is fundamentally different about this wave of AI-native phones 🔹 Why behavioral trust, not just model accuracy, is the real adoption barrier 🔹 Where on-device inference outperforms the cloud for agent tasks 🔹 How local vs. cloud AI will shape privacy and performance tradeoffs 🔹 Whether agentic AI becomes a durable moat or fast-following table stakes 🔹 How intelligence as the interface reshapes app ecosystems and monetization As AI becomes embedded into the device itself, the phone transitions from assistant to actor. Watch the full conversation at and subscribe to our for more insights from MWC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40517900
info_outline
AI Is Writing the Code Now: Cisco’s Vision for the Agent Era
03/17/2026
AI Is Writing the Code Now: Cisco’s Vision for the Agent Era
AI is moving from experimentation to enterprise-scale deployment, but the biggest constraint is no longer the model. It is the system. At NVIDIA GTC, speaks with , President and Chief Product Officer, about what it actually takes to operationalize AI across the enterprise. This is not a conversation about potential. It is about what is already breaking, what is accelerating, and what leaders are underestimating. As organizations move into production, infrastructure must handle heterogeneous compute, data pipelines must withstand continuous agent-driven workloads, and trust has to be built into every layer. At the same time, AI is shifting from a tool to a workforce layer, where agents operate alongside humans and reshape how software is built, decisions are made, and value is delivered. The advantage will not go to those experimenting faster, but to those building systems that can absorb rapid AI advancement and consistently deliver outcomes. Key Takeaways 🔹 AI is transitioning from pilots to production, with 2026 marking a major inflection point in enterprise adoption 🔹 Infrastructure is evolving into a heterogeneous model, where GPUs, CPUs, and accelerators must work together 🔹 Trust and security are emerging as the primary blockers to enterprise-scale AI deployment 🔹 AI-generated code is already reshaping software development, with large portions of future products expected to be built by AI 🔹 Agents are redefining work itself, shifting AI from a tool to an operational collaborator inside organizations Watch the full conversation at and subscribe to our for more insights from NVIDIA GTC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40517850
info_outline
Lenovo’s AI Acceleration: From Device Innovation to Global Impact
03/16/2026
Lenovo’s AI Acceleration: From Device Innovation to Global Impact
MWC Barcelona is buzzing about connectivity, edge compute, AI, and the road to 6G. and sit down with , EVP of Lenovo Group and President of the Intelligent Devices Group, to discuss how is translating AI momentum into tangible enterprise outcomes. Coming off record stock performance, Lenovo reported a historic milestone of over 25% market share for two consecutive quarters. At the same time, Motorola’s premium mobile growth is accelerating, and the infrastructure business is delivering high double-digit expansion. Taken together, these moves align with Lenovo’s stated “Smarter AI for All” strategy across edge-to-cloud architectures. Key Takeaways: 🔹 Hybrid AI is the future: Lenovo believes workloads will be intelligently split between on-device and cloud to balance privacy, power, and performance. 🔹 Ambient AI changes the interface: With the introduction of Qira, Lenovo is moving toward intelligence that is always present and capable of acting on a user’s behalf with consent. 🔹 AI-native differentiation matters: As hardware converges, intelligent orchestration across devices becomes the real competitive edge. 🔹 Innovation builds future IP: From foldables to experimental gaming handhelds, Lenovo’s R&D fuels long-term platform advantages. 🔹 Supply chain strategy is a differentiator: Through long-term supplier commitments, Lenovo has secured demand through 2026 and 2027 despite global constraints. With more than 30 innovations showcased at MWC, including the ruggedized ThinkPad X11, the refreshed Yoga 9i 2-in-1, and the Motorola Razr Fold, Lenovo is positioning AI as the connective layer across its entire portfolio. Watch the full conversation at or subscribe to our for more Six Five Media coverage from MWC Barcelona 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40504960
info_outline
Intel’s Telco Commitment: AI in the Network and the Path to 6G
03/04/2026
Intel’s Telco Commitment: AI in the Network and the Path to 6G
At Mobile World Congress 2026 in Barcelona, Intel is demonstrating that it’s all-in on telco. Six Five On The Road hosts and sit down with , EVP & GM of Intel’s Data Center Group, and , VP & GM of Network and Edge at Intel, to discuss ’s recommitment to the telco sector, the evolving role of AI in the network, and what “6G ready” really means. With inference and agentic workloads shifting AI conversations beyond GPUs, Intel is emphasizing the value of the CPU in delivering the right compute for the right workload across core, RAN, and edge. Conversation highlights include: 🔹 Why integrating NEX back into DCG signals Intel’s long-term telco commitment 🔹 How inference and agentic AI are increasing CPU affinity in telco environments 🔹 Why AI in the RAN is not CPU vs GPU, but right compute for right workload 🔹 Real-world momentum with Xeon 6 across core, RAN, and edge collaborations 🔹 What “6G ready” means without forcing a hardware reset 🔹 How open platforms, power efficiency, and software evolution define the path forward As operators look to monetize 5G while preparing for 6G, efficient scalability, power optimization, and open ecosystems are becoming foundational. Watch the full conversation at and subscribe to our for more insights from MWC 2026.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40318445
info_outline
Improving AI Inference with AMD EPYC Host CPUs | Signal65 Webcast
02/25/2026
Improving AI Inference with AMD EPYC Host CPUs | Signal65 Webcast
AI performance gains are increasingly determined by what happens before and after the GPU. In this Signal65 webcast, , , and are joined by , Corporate VP, Compute and Enterprise AI Products at , and , Senior Director, Compute and Enterprise AI Products at AMD, to explore how AMD EPYC processors are improving AI inference performance in enterprise environments. As AI workloads move from experimentation to production, the efficiency and scalability of the host platform become critical. This discussion breaks down how EPYC CPUs support AI acceleration, optimize data movement, and deliver measurable performance improvements in real-world deployments. Key Takeaways: 🔹 Inference is infrastructure-bound: AI performance is heavily influenced by host CPU architecture, not just accelerators. 🔹 Data movement is a bottleneck: Memory bandwidth, I/O, and interconnects significantly impact AI workload efficiency. 🔹 CPU + GPU synergy matters: Optimizing inference requires tight integration between EPYC CPUs and AI accelerators. 🔹 Enterprise AI requires balance: Power efficiency, core density, and scalability determine real-world deployment success. 🔹 Platform-level optimization wins: AI performance is achieved through system-level engineering, not component-level thinking. To understand how EPYC CPUs are shaping AI inference performance in enterprise data centers: Read the Signal65 research paper: Watch the full video at , and be sure to subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40231735
info_outline
The Rise of Companion Silicon: Rethinking AI Architecture from Edge to Cloud
02/23/2026
The Rise of Companion Silicon: Rethinking AI Architecture from Edge to Cloud
As AI systems scale from data centers to the edge, the architectural conversation is shifting. It’s no longer just about CPUs and GPUs. Companion silicon is becoming foundational to how intelligent systems are built, controlled, and secured. Hosts and are joined by CEO, , to explore why FPGA attach rates are rising and how small and mid-range, low-power devices are driving real system-level value across AI infrastructure. This discussion builds on Lattice’s recent momentum, reinforcing the company’s positioning as the “everywhere companion chip” from the edge to the cloud. Key Takeaways: 🔹 Rising FPGA Attach Rates in AI Systems: As AI deployments expand, more systems are incorporating companion silicon to handle control, security, connectivity, and real-time management alongside CPUs, GPUs, and accelerators. 🔹 System-Level Value in Small and Mid-Range FPGAs: While industry attention often centers on massive accelerators, much of the architectural value is being created in low-power, deterministic devices that enable flexibility and orchestration across communications, compute, industrial, and automotive markets. 🔹 The “Everywhere Companion Chip” Strategy: Lattice’s FPGAs are designed to operate from the edge to the cloud, supporting modern AI infrastructure by managing critical system functions that go beyond raw compute. 🔹 Platform and Software-Driven Deployment: Beyond silicon, Lattice emphasizes a platform and software-led approach that helps customers deploy faster, scale designs, and support long-lived systems in regulated and industrial environments. 🔹 Physical AI and the Next Phase of Infrastructure: As AI moves deeper into robotics, industrial automation, and communications, companion silicon is positioned to play a growing role in enabling real-time control and secure system management. Watch the full video at , and be sure to subscribe to our so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40203935
info_outline
Lenovo’s Hybrid AI Strategy for the Inference Era – Six Five Connected with Diana Blass
02/20/2026
Lenovo’s Hybrid AI Strategy for the Inference Era – Six Five Connected with Diana Blass
Models aren’t the bottleneck to AI adoption. Infrastructure is. At CES 2026, Lenovo made clear that the future of enterprise AI will be defined by inference at scale, not experimental models. In this episode of Six Five Connected, host explores how Lenovo is tackling the real barriers to AI activation, from power and cooling to hybrid deployment across edge, on-prem, and cloud environments. Featuring insights from , VP & Principal Analyst, Data Center at Moor Insights & Strategy; , SVP, ISG Sales at Lenovo; , CMO at Lenovo; and , EVP & President, ISG at Lenovo, along with perspective from , Chair & CEO of AMD; , CEO of Lenovo; and , CEO of NVIDIA, this episode breaks down what it takes to move from AI pilots to operational reality. Key Takeaways: 🔹 Inference is the next AI wave: As Flynn Maloy notes, the inferencing wave is just beginning, shifting AI from model training to real-time operational decision systems. 🔹 Most enterprises are stuck in pilot mode: Matthew Kimball explains that outdated infrastructure and operational complexity are keeping organizations from scaling. 🔹 Hybrid AI is the enterprise default: Vlad Rozanovich outlines why AI must span edge, on-prem, and cloud to meet enterprise demands. 🔹 Infrastructure complexity is the real challenge: Ashley Gorakhpurwalla emphasizes that compute, storage, networking, security, and permissions must evolve together. 🔹 Liquid cooling is no longer optional: Kimball highlights Lenovo’s long-standing leadership in liquid cooling as chips grow denser and hotter. 🔹 Operational simplicity matters: Lenovo’s XClarity platform is designed to abstract complexity and help IT teams scale efficiently. 🔹 Hyperscale and enterprise are converging: Yang Yuanqing, Lisa Su, and Jensen Huang underscore how solutions like the AI Cloud Gigafactory with NVIDIA and Helios with AMD bring rack-scale AI to both hyperscalers and enterprises. The winners in 2026 won’t be those with the flashiest models.
They’ll be the ones who turn inference into real operations. ▶️ Watch the full video at , and be sure to , so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/40178660
info_outline
Navigating Today’s Hybrid IT Landscape as a Small to Mid-Sized Business - The Main Scoop
02/18/2026
Navigating Today’s Hybrid IT Landscape as a Small to Mid-Sized Business - The Main Scoop
If you run the mainframe efficiently, it can support businesses of any size – whether that’s small, mid-sized, or global enterprise. On this episode of The Main Scoop, hosts , CEO and Chief Analyst at Futurum, and , SVP & GM, Mainframe Software Division at , sit down with , President and CEO of , to discuss how smaller businesses can keep critical systems running while managing software and hardware across today’s hybrid IT environments. The conversation highlights the real constraints that small and mid-sized businesses face, including limited budgets, tight technical resources, and the complexity of operating across multiple platforms. John outlines how organizations are keeping core workloads stable and modernizing with intent by running the mainframe efficiently, leaning on managed services, and making focused modernization decisions that move the business forward without stretching teams too thin. Key Takeaways Include: 🔹 Mainframe efficiency matters at every scale: When run properly, the mainframe remains a strong foundation for critical workloads, regardless of company size. 🔹 Hybrid IT increases complexity: Small and mid-sized businesses often operate across multiple platforms, increasing operational and maintenance demands. 🔹 Resource constraints drive strategy: Limited funding and talent require smarter approaches to modernization, not wholesale replacement. 🔹 Services and partnerships fill gaps: Managed services, staff augmentation, and hosted models help smaller organizations maintain stability while evolving their environments. Watch the full video at , and be sure to , so you never miss an episode.
/episode/index/show/26071023-0cb5-405e-825c-a5ab155a13e7/id/39985920