Medha Bankhwal and Michael Chui: Implementing AI Trust
Release Date: 04/10/2025
The Road to Accountable AI
Former Congressman and Pentagon official Brad Carson discusses his organization, Americans for Responsible Innovation (ARI), which seeks to bridge the gap between immediate AI harms and catastrophic safety risks, while bringing deep Capitol Hill expertise to the AI conversation. He argues that unlike previous innovations such as electricity or the automobile, AI has been deeply unpopular with the public from the start, creating a rare bipartisan alignment among those skeptical of its power and impacts. This creates openings for productive discussions about AI policy. Drawing on his...
The Road to Accountable AI
Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill insights based on his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers advice for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable...
The Road to Accountable AI
Ravit Dotan argues that the primary barrier to accountable AI is not a lack of ethical clarity, but organizational roadblocks. While companies often understand what they should do, the real challenge is organizational dynamics that prevent execution—AI ethics has been shunted into separate teams lacking power and resources, with incentive structures that discourage engineers from raising concerns. Drawing on work with organizational psychologists, she emphasizes that frameworks prescribe what systems companies should have but ignore how to navigate organizational realities. The key insight:...
The Road to Accountable AI
Kevin Werbach speaks with Trey Causey about the precarious state of the responsible AI (RAI) field. Causey argues that while the mission is critical, the current organizational structures for many RAI teams are struggling. He highlights a fundamental conflict between business objectives and governance intentions, compounded by the fact that RAI teams' successes (preventing harm) are often invisible, while their failures are highly visible. Causey makes the case that for RAI teams to be effective, they must possess deep technical competence to build solutions and gain credibility with...
The Road to Accountable AI
Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company’s Data and Technology Responsibility Principles. She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and other standards. The conversation explores how Mastercard balances...
The Road to Accountable AI
Cameron Kerry, Distinguished Visiting Fellow at the Brookings Institution and former Acting US Secretary of Commerce, joins Kevin Werbach to explore the evolving landscape of AI governance, privacy, and global coordination. Kerry emphasizes the need for agile and networked approaches to AI regulation that reflect the technology’s decentralized nature. He argues that effective oversight must be flexible enough to adapt to rapid innovation while grounded in clear baselines that can help organizations and governments learn together. Kerry revisits his long-standing push for...
The Road to Accountable AI
Carnegie Mellon business ethics professor Derek Leben joins Kevin Werbach to trace how AI ethics evolved from an early focus on embodied systems—industrial robots, drones, self-driving cars—to today’s post-ChatGPT landscape that demands concrete, defensible recommendations for companies. Leben explains why fairness is now central: firms must decide which features are relevant to a task (e.g., lending or hiring) and reject those that are irrelevant—even if they’re predictive. Drawing on philosophers such as John Rawls and Michael Sandel, he argues for objective judgments about a...
The Road to Accountable AI
Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts. Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly...
The Road to Accountable AI
Kevin Werbach interviews Dean Ball, Senior Fellow at the Foundation for American Innovation and one of the key shapers of the Trump Administration's approach to AI policy. Ball reflects on his career path from writing and blogging to shaping federal policy, including his role as Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary drafter of the Trump Administration's recent AI Action Plan. He explains how he has developed influence through a differentiated viewpoint: rejecting the notion that AI progress will...
The Road to Accountable AI
Kevin Werbach interviews David Hardoon, Global Head of AI Enablement at Standard Chartered Bank and former Chief Data Officer of the Monetary Authority of Singapore (MAS), about the evolving practice of responsible AI. Hardoon reflects on his perspective straddling both government and private-sector leadership roles, from designing the landmark FEAT principles at MAS to embedding AI enablement inside global financial institutions. Hardoon explains the importance of justifiability, a concept he sees as distinct from ethics or accountability. Organizations must not only justify their AI use to...
Kevin Werbach speaks with Medha Bankhwal and Michael Chui from QuantumBlack, the AI division of the global consulting firm McKinsey. They discuss how McKinsey's AI work has evolved from strategy consulting to hands-on implementation, with AI trust now embedded throughout their client engagements. Chui highlights what makes the current AI moment transformative, while Bankhwal shares insights from McKinsey's recent AI survey of over 760 organizations across 38 countries. As they explain, trust remains a major barrier to AI adoption, although there are geographic differences in AI governance maturity.
Medha Bankhwal, a graduate of Wharton's MBA program, is an Associate Partner and Co-founder of McKinsey’s AI Trust / Responsible AI practice. Prior to McKinsey, Medha was at Google and subsequently co-founded a digital learning not-for-profit startup. She co-leads “Trustworthy AI Futures,” a forum for AI safety discussions among policy and tech practitioners, as well as a community of ex-Googlers dedicated to AI safety.
Michael Chui is a senior fellow at QuantumBlack, AI by McKinsey. He leads research on the impact of disruptive technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as artificial intelligence, robotics and automation, the future of work, data & analytics, collaboration technologies, the Internet of Things, and biological technologies.
The State of AI: How Organizations are Rewiring to Capture Value (March 12, 2025)
Superagency in the workplace: Empowering people to unlock AI’s full potential (January 28, 2025)
Building AI Trust: The Key Role of Explainability (November 26, 2024)